# Source: openstackclient/tests/identity/v2_0/test_role.py
# (repo: larsks/python-openstackclient, license: Apache-2.0)

# Copyright 2013 Nebula Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import copy
import mock
from openstackclient.common import exceptions
from openstackclient.identity.v2_0 import role
from openstackclient.tests import fakes
from openstackclient.tests.identity.v2_0 import fakes as identity_fakes


class TestRole(identity_fakes.TestIdentityv2):
def setUp(self):
super(TestRole, self).setUp()
# Get a shortcut to the TenantManager Mock
self.projects_mock = self.app.client_manager.identity.tenants
self.projects_mock.reset_mock()
# Get a shortcut to the UserManager Mock
self.users_mock = self.app.client_manager.identity.users
self.users_mock.reset_mock()
# Get a shortcut to the RoleManager Mock
self.roles_mock = self.app.client_manager.identity.roles
self.roles_mock.reset_mock()


class TestRoleAdd(TestRole):
def setUp(self):
super(TestRoleAdd, self).setUp()
self.projects_mock.get.return_value = fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.PROJECT),
loaded=True,
)
self.users_mock.get.return_value = fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.USER),
loaded=True,
)
self.roles_mock.get.return_value = fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.ROLE),
loaded=True,
)
self.roles_mock.add_user_role.return_value = fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.ROLE),
loaded=True,
)
# Get the command object to test
self.cmd = role.AddRole(self.app, None)
def test_role_add(self):
arglist = [
'--project', identity_fakes.project_name,
'--user', identity_fakes.user_name,
identity_fakes.role_name,
]
verifylist = [
('project', identity_fakes.project_name),
('user', identity_fakes.user_name),
('role', identity_fakes.role_name),
]
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# DisplayCommandBase.take_action() returns two tuples
columns, data = self.cmd.take_action(parsed_args)
# RoleManager.add_user_role(user, role, tenant=None)
self.roles_mock.add_user_role.assert_called_with(
identity_fakes.user_id,
identity_fakes.role_id,
identity_fakes.project_id,
)
collist = ('id', 'name')
self.assertEqual(columns, collist)
datalist = (
identity_fakes.role_id,
identity_fakes.role_name,
)
self.assertEqual(data, datalist)


class TestRoleCreate(TestRole):
def setUp(self):
super(TestRoleCreate, self).setUp()
self.roles_mock.create.return_value = fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.ROLE),
loaded=True,
)
# Get the command object to test
self.cmd = role.CreateRole(self.app, None)
def test_role_create_no_options(self):
arglist = [
identity_fakes.role_name,
]
verifylist = [
('role_name', identity_fakes.role_name),
]
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# DisplayCommandBase.take_action() returns two tuples
columns, data = self.cmd.take_action(parsed_args)
# RoleManager.create(name)
self.roles_mock.create.assert_called_with(
identity_fakes.role_name,
)
collist = ('id', 'name')
self.assertEqual(columns, collist)
datalist = (
identity_fakes.role_id,
identity_fakes.role_name,
)
self.assertEqual(data, datalist)


class TestRoleDelete(TestRole):
def setUp(self):
super(TestRoleDelete, self).setUp()
self.roles_mock.get.return_value = fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.ROLE),
loaded=True,
)
self.roles_mock.delete.return_value = None
# Get the command object to test
self.cmd = role.DeleteRole(self.app, None)
def test_role_delete_no_options(self):
arglist = [
identity_fakes.role_name,
]
verifylist = [
('role', identity_fakes.role_name),
]
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# DisplayCommandBase.take_action() returns two tuples
self.cmd.take_action(parsed_args)
self.roles_mock.delete.assert_called_with(
identity_fakes.role_id,
)


class TestRoleList(TestRole):
def setUp(self):
super(TestRoleList, self).setUp()
self.roles_mock.list.return_value = [
fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.ROLE),
loaded=True,
),
]
# Get the command object to test
self.cmd = role.ListRole(self.app, None)
def test_role_list_no_options(self):
arglist = []
verifylist = []
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# DisplayCommandBase.take_action() returns two tuples
columns, data = self.cmd.take_action(parsed_args)
self.roles_mock.list.assert_called_with()
collist = ('ID', 'Name')
self.assertEqual(columns, collist)
datalist = ((
identity_fakes.role_id,
identity_fakes.role_name,
), )
self.assertEqual(tuple(data), datalist)


class TestUserRoleList(TestRole):
def setUp(self):
super(TestUserRoleList, self).setUp()
self.projects_mock.get.return_value = fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.PROJECT),
loaded=True,
)
self.users_mock.get.return_value = fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.USER),
loaded=True,
)
self.roles_mock.roles_for_user.return_value = [
fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.ROLE),
loaded=True,
),
]
# Get the command object to test
self.cmd = role.ListUserRole(self.app, None)
def test_user_role_list_no_options(self):
arglist = []
verifylist = []
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# This argument combination should raise a CommandError
self.assertRaises(
exceptions.CommandError,
self.cmd.take_action,
parsed_args,
)
def test_user_role_list_no_options_def_creds(self):
auth_ref = self.app.client_manager.auth_ref = mock.MagicMock()
auth_ref.project_id.return_value = identity_fakes.project_id
auth_ref.user_id.return_value = identity_fakes.user_id
arglist = []
verifylist = []
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# DisplayCommandBase.take_action() returns two tuples
columns, data = self.cmd.take_action(parsed_args)
self.roles_mock.roles_for_user.assert_called_with(
identity_fakes.user_id,
identity_fakes.project_id,
)
collist = ('ID', 'Name', 'Project', 'User')
self.assertEqual(columns, collist)
datalist = ((
identity_fakes.role_id,
identity_fakes.role_name,
identity_fakes.project_name,
identity_fakes.user_name,
), )
self.assertEqual(tuple(data), datalist)
def test_user_role_list_project(self):
self.projects_mock.get.return_value = fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.PROJECT_2),
loaded=True,
)
arglist = [
'--project', identity_fakes.PROJECT_2['name'],
]
verifylist = [
('project', identity_fakes.PROJECT_2['name']),
]
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# This argument combination should raise a CommandError
self.assertRaises(
exceptions.CommandError,
self.cmd.take_action,
parsed_args,
)
def test_user_role_list_project_def_creds(self):
auth_ref = self.app.client_manager.auth_ref = mock.MagicMock()
auth_ref.project_id.return_value = identity_fakes.project_id
auth_ref.user_id.return_value = identity_fakes.user_id
self.projects_mock.get.return_value = fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.PROJECT_2),
loaded=True,
)
arglist = [
'--project', identity_fakes.PROJECT_2['name'],
]
verifylist = [
('project', identity_fakes.PROJECT_2['name']),
]
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# DisplayCommandBase.take_action() returns two tuples
columns, data = self.cmd.take_action(parsed_args)
self.roles_mock.roles_for_user.assert_called_with(
identity_fakes.user_id,
identity_fakes.PROJECT_2['id'],
)
collist = ('ID', 'Name', 'Project', 'User')
self.assertEqual(columns, collist)
datalist = ((
identity_fakes.role_id,
identity_fakes.role_name,
identity_fakes.PROJECT_2['name'],
identity_fakes.user_name,
), )
self.assertEqual(tuple(data), datalist)


class TestRoleRemove(TestRole):
def setUp(self):
super(TestRoleRemove, self).setUp()
self.projects_mock.get.return_value = fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.PROJECT),
loaded=True,
)
self.users_mock.get.return_value = fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.USER),
loaded=True,
)
self.roles_mock.get.return_value = fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.ROLE),
loaded=True,
)
self.roles_mock.remove_user_role.return_value = None
# Get the command object to test
self.cmd = role.RemoveRole(self.app, None)
def test_role_remove(self):
arglist = [
'--project', identity_fakes.project_name,
'--user', identity_fakes.user_name,
identity_fakes.role_name,
]
verifylist = [
('role', identity_fakes.role_name),
('project', identity_fakes.project_name),
('user', identity_fakes.user_name),
]
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# DisplayCommandBase.take_action() returns two tuples
self.cmd.take_action(parsed_args)
# RoleManager.remove_user_role(user, role, tenant=None)
self.roles_mock.remove_user_role.assert_called_with(
identity_fakes.user_id,
identity_fakes.role_id,
identity_fakes.project_id,
)


class TestRoleShow(TestRole):
def setUp(self):
super(TestRoleShow, self).setUp()
self.roles_mock.get.return_value = fakes.FakeResource(
None,
copy.deepcopy(identity_fakes.ROLE),
loaded=True,
)
# Get the command object to test
self.cmd = role.ShowRole(self.app, None)
def test_role_show(self):
arglist = [
identity_fakes.role_name,
]
verifylist = [
('role', identity_fakes.role_name),
]
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# DisplayCommandBase.take_action() returns two tuples
columns, data = self.cmd.take_action(parsed_args)
# RoleManager.get(role)
self.roles_mock.get.assert_called_with(
identity_fakes.role_name,
)
collist = ('id', 'name')
self.assertEqual(columns, collist)
datalist = (
identity_fakes.role_id,
identity_fakes.role_name,
)
self.assertEqual(data, datalist)
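Every test above follows the same mock-and-assert pattern: the manager is replaced by a mock, the command runs, and the test asserts that the mock received the resolved IDs. A minimal self-contained sketch of that pattern using `unittest.mock` and a hypothetical manager (not the actual openstackclient classes):

```python
from unittest import mock

# Hypothetical stand-in for the RoleManager the tests above patch.
roles_mock = mock.Mock()
roles_mock.add_user_role.return_value = {'id': 'role-id', 'name': 'admin'}

def add_role(manager, user_id, role_id, project_id):
    # Mirrors the RoleManager.add_user_role(user, role, tenant)
    # argument order that test_role_add verifies.
    return manager.add_user_role(user_id, role_id, project_id)

result = add_role(roles_mock, 'user-id', 'role-id', 'project-id')
# Raises AssertionError if the call arguments differ.
roles_mock.add_user_role.assert_called_with('user-id', 'role-id', 'project-id')
```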

# Source: lib/regression_and_p_chart.py
# (repo: richardgorham1/ds-prep-capstone, license: MIT)

import math
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
from scipy.stats import t
from scipy.stats import norm
import lib.p_chart as pc
import lib.least_squares as ls


def regression_and_p_chart(x, y_0, y_1, y_2, suptitle = None,
subtitle_0 = None, subtitle_1 = None, subtitle_2 = None,
subtitle_3 = None, subtitle_4 = None, subtitle_5 = None,
xlabels_r = None, xlabels_p = None, ylabels_r = None,
ylabels_p = None, xmin = None, xmax = None,
xticks = None):
p_0 = y_0 / (y_0 + y_1 + y_2)
p_1 = y_1 / (y_0 + y_1 + y_2)
p_2 = y_2 / (y_0 + y_1 + y_2)
#Plot charts
plt.figure(figsize = (15, 12))
plt.subplots_adjust(hspace = 0.5)
if isinstance(suptitle, str):
        plt.suptitle('Regression and p Charts'
                     '\nfor Susceptibility / Resistance\n' + suptitle)
    else:
        plt.suptitle('Regression and p Charts'
                     '\nfor Susceptibility / Resistance\n')
plt.subplot(321)
    # An underscore at the start of the label text suppresses the label on the plot
plt.scatter(x, y_0, label = '_x')
plt.plot(x, ls.least_squares(x, y_0).get('upi'), 'r--', label = '_x')
plt.plot(x, ls.least_squares(x, y_0).get('l'), 'b--', label = '_x')
plt.plot(x, ls.least_squares(x, y_0).get('lpi'), 'r--', label = '_x')
plt.fill_between(x, ls.least_squares(x, y_0).get('uci')['Data Year'],\
ls.least_squares(x, y_0).get('lci')['Data Year'],\
alpha = 0.25, color = '#99caff')
plt.grid()
    if xmin is not None and xmax is not None:
        plt.xlim(xmin, xmax)
    if xticks is not None:
        plt.xticks(xticks)
if isinstance(subtitle_0, str):
plt.title(subtitle_0, y = 1.125)
else:
plt.title('Observations', y = 1.125)
if isinstance(ylabels_r, str):
plt.ylabel(ylabels_r)
else:
plt.ylabel('Obs.')
if isinstance(xlabels_r, str):
plt.xlabel(xlabels_r)
else:
plt.xlabel('Index')
    # The empty plots below are used to hold custom labels for the legend
plt.plot([], [], ' ', label = 'Slope: ' \
+ ls.least_squares(x, y_0).get('m').astype(str))
plt.plot([], [], ' ', label='R-sq: ' \
+ ls.least_squares(x, y_0).get('r_sq').astype(str))
if ls.least_squares(x, y_0).get('p') < 0.0001:
p_value = 'p value: <0.0001'
else:
p_value = 'p value: ' + ls.least_squares(x, y_0).get('p').astype(str)
plt.plot([], [], ' ', label = p_value)
plt.legend(bbox_to_anchor = (0, 1.05, 1, 0.2), \
loc = 2, ncol = 3, mode = "expand", borderaxespad = 2)
plt.subplot(322)
plt.plot(x, p_0,'bo--', label = '_x')
plt.step(x, pc.p_chart(y_0, p_0).get('ucl'), 'r--', \
label = 'Control Limits', where = 'mid')
plt.plot(x, pc.p_chart(y_0, p_0).get('cl'), 'g--', label = 'Center Line')
plt.step(x, pc.p_chart(y_0, p_0).get('lcl'), \
'r--', label = '_x', where = 'mid')
plt.grid()
    if xmin is not None and xmax is not None:
        plt.xlim(xmin, xmax)
    if xticks is not None:
        plt.xticks(xticks)
if isinstance(subtitle_1, str):
plt.title(subtitle_1, y = 1.125)
else:
plt.title('Proportions', y = 1.125)
if isinstance(ylabels_p, str):
plt.ylabel(ylabels_p)
else:
plt.ylabel('proportion')
if isinstance(xlabels_p, str):
plt.xlabel(xlabels_p)
else:
plt.xlabel('Index')
plt.plot([], [], ' ', label = 'Mean: ' \
+ pc.p_chart(y_0,p_0).get('p_bar').astype(str))
plt.legend(bbox_to_anchor = (0, 1.05, 1, 0.2), \
loc = 2, ncol = 3, mode = "expand", borderaxespad = 2)
plt.subplot(323)
plt.scatter(x, y_1, label = '_x')
plt.plot(x, ls.least_squares(x, y_1).get('upi'), 'r--', label = '_x')
plt.plot(x, ls.least_squares(x, y_1).get('l'), 'b--', label = '_x')
plt.plot(x, ls.least_squares(x, y_1).get('lpi'), 'r--', label = '_x')
plt.fill_between(x, ls.least_squares(x, y_1).get('uci')['Data Year'],\
ls.least_squares(x, y_1).get('lci')['Data Year'],\
alpha = 0.25, color = '#99caff')
plt.grid()
    if xmin is not None and xmax is not None:
        plt.xlim(xmin, xmax)
    if xticks is not None:
        plt.xticks(xticks)
if isinstance(subtitle_2, str):
plt.title(subtitle_2, y = 1.125)
else:
plt.title('Observations', y = 1.125)
if isinstance(ylabels_r, str):
plt.ylabel(ylabels_r)
else:
plt.ylabel('Obs.')
if isinstance(xlabels_r, str):
plt.xlabel(xlabels_r)
else:
plt.xlabel('Index')
    # The empty plots below are used to hold custom labels for the legend
plt.plot([], [], ' ', label = 'Slope: ' \
+ ls.least_squares(x, y_1).get('m').astype(str))
plt.plot([], [], ' ', label = 'R-sq: ' \
+ ls.least_squares(x, y_1).get('r_sq').astype(str))
    if ls.least_squares(x, y_1).get('p') < 0.0001:
        p_value = 'p value: <0.0001'
    else:
        p_value = 'p value: ' + ls.least_squares(x, y_1).get('p').astype(str)
plt.plot([], [], ' ', label = p_value)
plt.legend(bbox_to_anchor = (0, 1.05, 1, 0.2),
loc = 2, ncol = 3, mode = "expand", borderaxespad = 2)
plt.subplot(324)
plt.plot(x,p_1,'bo--',label='_x')
plt.step(x, pc.p_chart(y_1, p_1).get('ucl'), 'r--', \
label = 'Control Limits', where = 'mid')
plt.plot(x, pc.p_chart(y_1, p_1).get('cl'), 'g--', label = 'Center Line')
plt.step(x, pc.p_chart(y_1, p_1).get('lcl'), \
'r--', label = '_x', where = 'mid')
plt.grid()
    if xmin is not None and xmax is not None:
        plt.xlim(xmin, xmax)
    if xticks is not None:
        plt.xticks(xticks)
if isinstance(subtitle_3, str):
plt.title(subtitle_3, y = 1.125)
else:
plt.title('Proportions', y = 1.125)
if isinstance(ylabels_p, str):
plt.ylabel(ylabels_p)
else:
plt.ylabel('proportion')
if isinstance(xlabels_p, str):
plt.xlabel(xlabels_p)
else:
plt.xlabel('Index')
plt.plot([], [],' ', label = 'Mean: ' \
+ pc.p_chart(y_1, p_1).get('p_bar').astype(str))
plt.legend(bbox_to_anchor = (0, 1.05, 1, 0.2), \
loc = 2, ncol = 3, mode = "expand", borderaxespad = 2)
plt.subplot(325)
plt.scatter(x, y_2, label = '_x')
plt.plot(x, ls.least_squares(x, y_2).get('upi'), 'r--', label = '_x')
plt.plot(x, ls.least_squares(x, y_2).get('l'), 'b--', label = '_x')
plt.plot(x, ls.least_squares(x, y_2).get('lpi'), 'r--', label = '_x')
plt.fill_between(x, ls.least_squares(x, y_2).get('uci')['Data Year'],\
ls.least_squares(x, y_2).get('lci')['Data Year'],\
alpha = 0.25, color = '#99caff')
plt.grid()
    if xmin is not None and xmax is not None:
        plt.xlim(xmin, xmax)
    if xticks is not None:
        plt.xticks(xticks)
if isinstance(subtitle_4, str):
plt.title(subtitle_4, y = 1.125)
else:
plt.title('Observations', y = 1.125)
if isinstance(ylabels_r, str):
plt.ylabel(ylabels_r)
else:
plt.ylabel('Obs.')
if isinstance(xlabels_r, str):
plt.xlabel(xlabels_r)
else:
plt.xlabel('Index')
    # The empty plots below are used to hold custom labels for the legend
plt.plot([], [],' ', label = 'Slope: ' \
+ ls.least_squares(x, y_2).get('m').astype(str))
plt.plot([], [],' ', label = 'R-sq: ' \
+ ls.least_squares(x, y_2).get('r_sq').astype(str))
    if ls.least_squares(x, y_2).get('p') < 0.0001:
        p_value = 'p value: <0.0001'
    else:
        p_value = 'p value: ' + ls.least_squares(x, y_2).get('p').astype(str)
plt.plot([], [], ' ', label = p_value)
plt.legend(bbox_to_anchor = (0, 1.05, 1, 0.2), \
loc = 2, ncol = 3, mode = "expand", borderaxespad = 2)
plt.subplot(326)
plt.plot(x, p_2,'bo--', label = '_x')
plt.step(x, pc.p_chart(y_2, p_2).get('ucl'), 'r--', \
label = 'Control Limits', where = 'mid')
plt.plot(x, pc.p_chart(y_2, p_2).get('cl'), 'g--', label = 'Center Line')
plt.step(x, pc.p_chart(y_2, p_2).get('lcl'), \
'r--',label = '_x', where = 'mid')
plt.grid()
    if xmin is not None and xmax is not None:
        plt.xlim(xmin, xmax)
    if xticks is not None:
        plt.xticks(xticks)
if isinstance(subtitle_5, str):
plt.title(subtitle_5, y = 1.125)
else:
plt.title('Proportions', y = 1.125)
if isinstance(ylabels_p, str):
plt.ylabel(ylabels_p)
else:
plt.ylabel('proportion')
if isinstance(xlabels_p, str):
plt.xlabel(xlabels_p)
else:
plt.xlabel('Index')
plt.plot([], [],' ', label = 'Mean: ' \
+ pc.p_chart(y_2, p_2).get('p_bar').astype(str))
plt.legend(bbox_to_anchor = (0, 1.05, 1, 0.2), \
loc = 2, ncol = 3, mode = "expand", borderaxespad = 2)
plt.show()
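`pc.p_chart` is called above with the subgroup counts and proportions to obtain the `ucl`, `cl`, and `lcl` series; presumably these are the standard 3-sigma limits for a proportion chart, p_bar ± 3·sqrt(p_bar·(1 − p_bar)/n). A minimal sketch of that textbook computation (the real `lib.p_chart` API may differ):

```python
import math

def p_chart_limits(counts, totals):
    """3-sigma control limits for a p chart.

    counts: nonconforming items per subgroup; totals: subgroup sizes.
    Returns (p_bar, ucl, lcl); the lower limit is clipped at zero.
    """
    p_bar = sum(counts) / sum(totals)
    ucl, lcl = [], []
    for n in totals:
        sigma = math.sqrt(p_bar * (1.0 - p_bar) / n)
        ucl.append(p_bar + 3.0 * sigma)
        lcl.append(max(0.0, p_bar - 3.0 * sigma))
    return p_bar, ucl, lcl

p_bar, ucl, lcl = p_chart_limits([2, 4, 3], [100, 100, 100])
```

With equal subgroup sizes the limits are flat lines; varying sizes give the stepped limits that `plt.step` draws above.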

# Source: mayan/apps/document_signatures/tests/test_views.py
# (repo: prezi/mayan-edms, license: Apache-2.0)

from __future__ import absolute_import, unicode_literals

import logging
from django.core.files import File
from django_downloadview.test import assert_download_response
from django_gpg.models import Key
from documents.models import DocumentVersion
from documents.tests import (
GenericDocumentViewTestCase, TEST_DOCUMENT_PATH
)
from ..models import DetachedSignature, EmbeddedSignature
from ..permissions import (
permission_document_version_signature_delete,
permission_document_version_signature_download,
permission_document_version_signature_upload,
permission_document_version_signature_verify,
permission_document_version_signature_view
)
from .literals import (
TEST_SIGNATURE_FILE_PATH, TEST_SIGNED_DOCUMENT_PATH, TEST_KEY_FILE
)
TEST_UNSIGNED_DOCUMENT_COUNT = 4
TEST_SIGNED_DOCUMENT_COUNT = 2


class SignaturesViewTestCase(GenericDocumentViewTestCase):
def test_signature_list_view_no_permission(self):
with open(TEST_KEY_FILE, 'rb') as file_object:
Key.objects.create(key_data=file_object.read())
with open(TEST_DOCUMENT_PATH, 'rb') as file_object:
document = self.document_type.new_document(
file_object=file_object
)
with open(TEST_SIGNATURE_FILE_PATH, 'rb') as file_object:
DetachedSignature.objects.create(
document_version=document.latest_version,
signature_file=File(file_object)
)
self.login_user()
response = self.get(
'signatures:document_version_signature_list',
args=(document.latest_version.pk,)
)
self.assertEqual(response.status_code, 403)
def test_signature_list_view_with_access(self):
with open(TEST_KEY_FILE, 'rb') as file_object:
Key.objects.create(key_data=file_object.read())
with open(TEST_DOCUMENT_PATH, 'rb') as file_object:
document = self.document_type.new_document(
file_object=file_object
)
with open(TEST_SIGNATURE_FILE_PATH, 'rb') as file_object:
DetachedSignature.objects.create(
document_version=document.latest_version,
signature_file=File(file_object)
)
self.login_user()
self.grant_access(
obj=document,
permission=permission_document_version_signature_view
)
response = self.get(
'signatures:document_version_signature_list',
args=(document.latest_version.pk,)
)
self.assertEqual(response.status_code, 200)
self.assertEqual(response.context['object_list'].count(), 1)
def test_signature_detail_view_no_permission(self):
with open(TEST_KEY_FILE, 'rb') as file_object:
Key.objects.create(key_data=file_object.read())
with open(TEST_DOCUMENT_PATH, 'rb') as file_object:
document = self.document_type.new_document(
file_object=file_object
)
with open(TEST_SIGNATURE_FILE_PATH, 'rb') as file_object:
signature = DetachedSignature.objects.create(
document_version=document.latest_version,
signature_file=File(file_object)
)
self.login_user()
response = self.get(
'signatures:document_version_signature_details',
args=(signature.pk,)
)
self.assertEqual(response.status_code, 403)
def test_signature_detail_view_with_access(self):
with open(TEST_KEY_FILE, 'rb') as file_object:
Key.objects.create(key_data=file_object.read())
with open(TEST_DOCUMENT_PATH, 'rb') as file_object:
document = self.document_type.new_document(
file_object=file_object
)
with open(TEST_SIGNATURE_FILE_PATH, 'rb') as file_object:
signature = DetachedSignature.objects.create(
document_version=document.latest_version,
signature_file=File(file_object)
)
self.login_user()
self.grant_access(
obj=document,
permission=permission_document_version_signature_view
)
response = self.get(
'signatures:document_version_signature_details',
args=(signature.pk,)
)
self.assertContains(response, signature.signature_id, status_code=200)
def test_signature_upload_view_no_permission(self):
with open(TEST_DOCUMENT_PATH, 'rb') as file_object:
document = self.document_type.new_document(
file_object=file_object
)
self.login_user()
with open(TEST_SIGNATURE_FILE_PATH, 'rb') as file_object:
response = self.post(
'signatures:document_version_signature_upload',
args=(document.latest_version.pk,),
data={'signature_file': file_object}
)
self.assertEqual(response.status_code, 403)
self.assertEqual(DetachedSignature.objects.count(), 0)
def test_signature_upload_view_with_access(self):
with open(TEST_DOCUMENT_PATH, 'rb') as file_object:
document = self.document_type.new_document(
file_object=file_object
)
self.login_user()
self.grant_access(
obj=document,
permission=permission_document_version_signature_upload
)
with open(TEST_SIGNATURE_FILE_PATH, 'rb') as file_object:
response = self.post(
'signatures:document_version_signature_upload',
args=(document.latest_version.pk,),
data={'signature_file': file_object}
)
self.assertEqual(response.status_code, 302)
self.assertEqual(DetachedSignature.objects.count(), 1)
def test_signature_download_view_no_permission(self):
with open(TEST_DOCUMENT_PATH, 'rb') as file_object:
document = self.document_type.new_document(
file_object=file_object
)
with open(TEST_SIGNATURE_FILE_PATH, 'rb') as file_object:
signature = DetachedSignature.objects.create(
document_version=document.latest_version,
signature_file=File(file_object)
)
self.login_user()
response = self.get(
'signatures:document_version_signature_download',
args=(signature.pk,),
)
self.assertEqual(response.status_code, 403)
def test_signature_download_view_with_access(self):
with open(TEST_DOCUMENT_PATH, 'rb') as file_object:
document = self.document_type.new_document(
file_object=file_object
)
with open(TEST_SIGNATURE_FILE_PATH, 'rb') as file_object:
signature = DetachedSignature.objects.create(
document_version=document.latest_version,
signature_file=File(file_object)
)
self.login_user()
self.grant_access(
obj=document,
permission=permission_document_version_signature_download
)
self.expected_content_type = 'application/octet-stream; charset=utf-8'
response = self.get(
'signatures:document_version_signature_download',
args=(signature.pk,),
)
with signature.signature_file as file_object:
assert_download_response(
self, response=response, content=file_object.read(),
)
def test_signature_delete_view_no_permission(self):
with open(TEST_KEY_FILE, 'rb') as file_object:
Key.objects.create(key_data=file_object.read())
with open(TEST_DOCUMENT_PATH, 'rb') as file_object:
document = self.document_type.new_document(
file_object=file_object
)
with open(TEST_SIGNATURE_FILE_PATH, 'rb') as file_object:
signature = DetachedSignature.objects.create(
document_version=document.latest_version,
signature_file=File(file_object)
)
self.login_user()
self.grant_access(
obj=document,
permission=permission_document_version_signature_view
)
response = self.post(
'signatures:document_version_signature_delete',
args=(signature.pk,)
)
self.assertEqual(response.status_code, 403)
self.assertEqual(DetachedSignature.objects.count(), 1)
def test_signature_delete_view_with_access(self):
with open(TEST_KEY_FILE, 'rb') as file_object:
Key.objects.create(key_data=file_object.read())
with open(TEST_DOCUMENT_PATH, 'rb') as file_object:
document = self.document_type.new_document(
file_object=file_object
)
with open(TEST_SIGNATURE_FILE_PATH, 'rb') as file_object:
signature = DetachedSignature.objects.create(
document_version=document.latest_version,
signature_file=File(file_object)
)
self.login_user()
self.grant_access(
obj=document,
permission=permission_document_version_signature_delete
)
self.grant_access(
obj=document,
permission=permission_document_version_signature_view
)
response = self.post(
'signatures:document_version_signature_delete',
args=(signature.pk,), follow=True
)
self.assertContains(response, 'deleted', status_code=200)
self.assertEqual(DetachedSignature.objects.count(), 0)
def test_missing_signature_verify_view_no_permission(self):
# Silence converter logging
logging.getLogger('converter.backends').setLevel(logging.CRITICAL)
for document in self.document_type.documents.all():
document.delete(to_trash=False)
old_hooks = DocumentVersion._post_save_hooks
DocumentVersion._post_save_hooks = {}
for count in range(TEST_UNSIGNED_DOCUMENT_COUNT):
with open(TEST_DOCUMENT_PATH, 'rb') as file_object:
self.document_type.new_document(
file_object=file_object
)
for count in range(TEST_SIGNED_DOCUMENT_COUNT):
with open(TEST_SIGNED_DOCUMENT_PATH, 'rb') as file_object:
self.document_type.new_document(
file_object=file_object
)
self.assertEqual(
EmbeddedSignature.objects.unsigned_document_versions().count(),
TEST_UNSIGNED_DOCUMENT_COUNT + TEST_SIGNED_DOCUMENT_COUNT
)
DocumentVersion._post_save_hooks = old_hooks
self.login_user()
response = self.post(
'signatures:all_document_version_signature_verify', follow=True
)
self.assertEqual(response.status_code, 403)
self.assertEqual(
EmbeddedSignature.objects.unsigned_document_versions().count(),
TEST_UNSIGNED_DOCUMENT_COUNT + TEST_SIGNED_DOCUMENT_COUNT
)
def test_missing_signature_verify_view_with_permission(self):
# Silence converter logging
logging.getLogger('converter.backends').setLevel(logging.CRITICAL)
for document in self.document_type.documents.all():
document.delete(to_trash=False)
old_hooks = DocumentVersion._post_save_hooks
DocumentVersion._post_save_hooks = {}
for count in range(TEST_UNSIGNED_DOCUMENT_COUNT):
with open(TEST_DOCUMENT_PATH, 'rb') as file_object:
self.document_type.new_document(
file_object=file_object
)
for count in range(TEST_SIGNED_DOCUMENT_COUNT):
with open(TEST_SIGNED_DOCUMENT_PATH, 'rb') as file_object:
self.document_type.new_document(
file_object=file_object
)
self.assertEqual(
EmbeddedSignature.objects.unsigned_document_versions().count(),
TEST_UNSIGNED_DOCUMENT_COUNT + TEST_SIGNED_DOCUMENT_COUNT
)
DocumentVersion._post_save_hooks = old_hooks
self.login_user()
self.grant_permission(
permission=permission_document_version_signature_verify
)
response = self.post(
'signatures:all_document_version_signature_verify', follow=True
)
self.assertContains(response, 'queued', status_code=200)
self.assertEqual(
EmbeddedSignature.objects.unsigned_document_versions().count(),
TEST_UNSIGNED_DOCUMENT_COUNT
)
| 33.492147 | 78 | 0.646788 | 1,350 | 12,794 | 5.762963 | 0.082222 | 0.097686 | 0.047815 | 0.053985 | 0.898201 | 0.831362 | 0.823779 | 0.823779 | 0.811954 | 0.801928 | 0 | 0.004441 | 0.278334 | 12,794 | 381 | 79 | 33.580052 | 0.838189 | 0.003986 | 0 | 0.657439 | 0 | 0 | 0.056907 | 0.044192 | 0 | 0 | 0 | 0 | 0.076125 | 1 | 0.041522 | false | 0 | 0.034602 | 0 | 0.079585 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
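The record above closes a chunk of document-signature view tests that all repeat one access-control pattern: issue the request with no grant and expect HTTP 403, then grant the permission and expect success. A condensed standalone sketch of that pattern (the class and permission names here are hypothetical, not the project's real API):

```python
# Minimal stand-in for the grant/deny pattern exercised by the view tests.
class AccessControlledView:
    def __init__(self):
        self.granted = set()

    def grant_access(self, user, permission):
        self.granted.add((user, permission))

    def get(self, user, permission):
        # 403 without an explicit grant, 200 once access is granted.
        return 200 if (user, permission) in self.granted else 403


view = AccessControlledView()
print(view.get("alice", "signature_download"))  # 403 before the grant
view.grant_access("alice", "signature_download")
print(view.get("alice", "signature_download"))  # 200 after the grant
```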
1d3e4bf23bc3868180d94f20d2e290be0d9ca621 | 235 | py | Python | users/tests.py | RavidYael/lets-meet | 38c409dc3ae79e9683b2e22b0e3a51298aea606c | ["MIT"] | null | null | null | users/tests.py | RavidYael/lets-meet | 38c409dc3ae79e9683b2e22b0e3a51298aea606c | ["MIT"] | 21 | 2022-03-09T08:39:42.000Z | 2022-03-31T21:11:47.000Z | users/tests.py | RavidYael/lets-meet | 38c409dc3ae79e9683b2e22b0e3a51298aea606c | ["MIT"] | 5 | 2022-02-28T13:45:11.000Z | 2022-03-06T15:26:54.000Z | from .class_tests.users_tests import *  # noqa: F403 F401
from .class_tests.sign_up_tests import * # noqa: F403 F401
from .class_tests.login_tests import * # noqa: F403 F401
from .class_tests.logout_tests import * # noqa: F403 F401
| 47 | 59 | 0.761702 | 37 | 235 | 4.594595 | 0.324324 | 0.211765 | 0.329412 | 0.447059 | 0.788235 | 0.652941 | 0.652941 | 0.652941 | 0 | 0 | 0 | 0.120603 | 0.153191 | 235 | 4 | 60 | 58.75 | 0.733668 | 0.268085 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
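The `users/tests.py` file above aggregates test cases by wildcard-importing each `class_tests` module so the test runner finds them in one place. A minimal sketch of why that re-export works (the module and class names below are invented for illustration):

```python
import types

# `from module import *` copies every public name from the module into the
# importer's namespace; that is how TestCase classes defined elsewhere become
# visible to the test runner from a single aggregator file.
mod = types.ModuleType("class_tests_demo")  # stand-in for a class_tests module
exec("class SignUpTests:\n    pass", mod.__dict__)

# Programmatic equivalent of `from class_tests_demo import *`:
namespace = {k: v for k, v in mod.__dict__.items() if not k.startswith("_")}
print("SignUpTests" in namespace)  # True: the test class is now visible here
```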
1d4e9d29898602d77cbc96df50817ff99717c60b | 22 | py | Python | TurboStock/app/views.py | CLacaile/TurboStock | abc3a8e6260cecaa1e9bb93c682e6cb1de8b628d | ["MIT"] | null | null | null | TurboStock/app/views.py | CLacaile/TurboStock | abc3a8e6260cecaa1e9bb93c682e6cb1de8b628d | ["MIT"] | 3 | 2020-01-10T15:20:49.000Z | 2021-03-18T22:34:33.000Z | TurboStock/app/views.py | CLacaile/TurboStock | abc3a8e6260cecaa1e9bb93c682e6cb1de8b628d | ["MIT"] | null | null | null | from .views import *
| 7.333333 | 20 | 0.681818 | 3 | 22 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227273 | 22 | 2 | 21 | 11 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1d8f40e4546baf7cda589c624e0aec1a8fa5d838 | 33 | py | Python | hello.py | dk975024/test1 | 025e782b0c2a4fcd41f6ea6a0783d97da96eeeda | ["BSD-3-Clause"] | null | null | null | hello.py | dk975024/test1 | 025e782b0c2a4fcd41f6ea6a0783d97da96eeeda | ["BSD-3-Clause"] | null | null | null | hello.py | dk975024/test1 | 025e782b0c2a4fcd41f6ea6a0783d97da96eeeda | ["BSD-3-Clause"] | null | null | null | print("Hello, here is Jia WANG")
| 16.5 | 32 | 0.69697 | 6 | 33 | 3.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 33 | 1 | 33 | 33 | 0.821429 | 0 | 0 | 0 | 0 | 0 | 0.69697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d52ad2aa536abbaec796b27cf944314dcc364bcb | 3959 | py | Python | circular_economy/migrations/0001_circular_economy.py | CzechInvest/ciis | c6102598f564a717472e5e31e7eb894bba2c8104 | ["MIT"] | 1 | 2019-05-26T22:24:01.000Z | 2019-05-26T22:24:01.000Z | circular_economy/migrations/0001_circular_economy.py | CzechInvest/ciis | c6102598f564a717472e5e31e7eb894bba2c8104 | ["MIT"] | 6 | 2019-01-22T14:53:43.000Z | 2020-09-22T16:20:28.000Z | circular_economy/migrations/0001_circular_economy.py | CzechInvest/ciis | c6102598f564a717472e5e31e7eb894bba2c8104 | ["MIT"] | null | null | null | # Generated by Django 2.2.3 on 2019-10-29 09:13
from django.db import migrations, models
import django.db.models.deletion
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = [
('addresses', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='ContactPerson',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('first_name', models.CharField(help_text='First name', max_length=40)),
('last_name', models.CharField(help_text='Last name', max_length=40)),
('middle_name', models.CharField(blank=True, help_text='Last name', max_length=40)),
('role', models.CharField(help_text='Director, HR Manager, ...', max_length=256)),
('crm', models.URLField(blank=True, help_text='CRM link')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Keyword',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('keyword', models.TextField(max_length=32)),
],
),
migrations.CreateModel(
name='Pilot',
fields=[
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('name', models.CharField(help_text='Name', max_length=200)),
('activity', models.CharField(help_text='Activity name', max_length=200)),
('url', models.URLField()),
('characteristics', models.TextField()),
('project_description', models.TextField()),
('challange', models.TextField()),
('result', models.TextField()),
('address', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='addresses.Address')),
('contact_person', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='circular_economy.ContactPerson')),
('keywords', models.ManyToManyField(to='circular_economy.Keyword')),
],
),
migrations.CreateModel(
name='Municipality',
fields=[
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('name', models.CharField(help_text='Name', max_length=200)),
('activity', models.CharField(help_text='Activity name', max_length=200)),
('url', models.URLField()),
('characteristics', models.TextField()),
('project_description', models.TextField()),
('challange', models.TextField()),
('result', models.TextField()),
('address', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='addresses.Address')),
('contact_person', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='circular_economy.ContactPerson')),
('keywords', models.ManyToManyField(to='circular_economy.Keyword')),
],
),
migrations.CreateModel(
name='Company',
fields=[
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('name', models.CharField(help_text='Name', max_length=200)),
('url', models.URLField()),
('characteristics', models.TextField()),
('address', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='addresses.Address')),
('contact_person', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='circular_economy.ContactPerson')),
],
),
]
| 48.280488 | 136 | 0.577166 | 375 | 3,959 | 5.962667 | 0.234667 | 0.035778 | 0.067979 | 0.08229 | 0.765653 | 0.745081 | 0.745081 | 0.72093 | 0.72093 | 0.72093 | 0 | 0.016696 | 0.273807 | 3,959 | 81 | 137 | 48.876543 | 0.761043 | 0.011367 | 0 | 0.675676 | 1 | 0 | 0.165644 | 0.035276 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.040541 | 0 | 0.094595 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
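The `Pilot`, `Municipality`, and `Company` models in the migration above use `default=uuid.uuid4` for their primary keys; note that the callable itself is passed, not its result, so Django evaluates it once per new row. A minimal standalone sketch of that distinction:

```python
import uuid

# Passing the callable (uuid.uuid4) rather than a call (uuid.uuid4()) means
# the default is re-evaluated for every new object, so each row gets a
# distinct primary key instead of sharing one id computed at import time.
def make_default_id(default=uuid.uuid4):
    return default()


print(make_default_id() != make_default_id())  # True: ids are distinct
```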
d53ae7d0de8a5729f1cd65448f37f028171a7fad | 28855 | py | Python | pybind/nos/v7_1_0/cee_map/priority_table/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | ["Apache-2.0"] | null | null | null | pybind/nos/v7_1_0/cee_map/priority_table/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | ["Apache-2.0"] | null | null | null | pybind/nos/v7_1_0/cee_map/priority_table/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | ["Apache-2.0"] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
class priority_table(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-cee-map - based on the path /cee-map/priority-table. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: Configure Priority Table
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__map_cos0_pgid','__map_cos1_pgid','__map_cos2_pgid','__map_cos3_pgid','__map_cos4_pgid','__map_cos5_pgid','__map_cos6_pgid','__map_cos7_pgid',)
_yang_name = 'priority-table'
_rest_name = 'priority-table'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__map_cos3_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos3-pgid", rest_name="map-cos3-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
self.__map_cos2_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos2-pgid", rest_name="map-cos2-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
self.__map_cos7_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos7-pgid", rest_name="map-cos7-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
self.__map_cos6_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos6-pgid", rest_name="map-cos6-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
self.__map_cos5_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos5-pgid", rest_name="map-cos5-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
self.__map_cos4_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos4-pgid", rest_name="map-cos4-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
self.__map_cos1_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos1-pgid", rest_name="map-cos1-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
self.__map_cos0_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos0-pgid", rest_name="map-cos0-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'cee-map', u'priority-table']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'cee-map', u'priority-table']
def _get_map_cos0_pgid(self):
"""
Getter method for map_cos0_pgid, mapped from YANG variable /cee_map/priority_table/map_cos0_pgid (string)
"""
return self.__map_cos0_pgid
def _set_map_cos0_pgid(self, v, load=False):
"""
Setter method for map_cos0_pgid, mapped from YANG variable /cee_map/priority_table/map_cos0_pgid (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_map_cos0_pgid is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_map_cos0_pgid() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos0-pgid", rest_name="map-cos0-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """map_cos0_pgid must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos0-pgid", rest_name="map-cos0-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)""",
})
self.__map_cos0_pgid = t
if hasattr(self, '_set'):
self._set()
def _unset_map_cos0_pgid(self):
self.__map_cos0_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos0-pgid", rest_name="map-cos0-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
def _get_map_cos1_pgid(self):
"""
Getter method for map_cos1_pgid, mapped from YANG variable /cee_map/priority_table/map_cos1_pgid (string)
"""
return self.__map_cos1_pgid
def _set_map_cos1_pgid(self, v, load=False):
"""
Setter method for map_cos1_pgid, mapped from YANG variable /cee_map/priority_table/map_cos1_pgid (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_map_cos1_pgid is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_map_cos1_pgid() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos1-pgid", rest_name="map-cos1-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """map_cos1_pgid must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos1-pgid", rest_name="map-cos1-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)""",
})
self.__map_cos1_pgid = t
if hasattr(self, '_set'):
self._set()
def _unset_map_cos1_pgid(self):
self.__map_cos1_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos1-pgid", rest_name="map-cos1-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
def _get_map_cos2_pgid(self):
"""
Getter method for map_cos2_pgid, mapped from YANG variable /cee_map/priority_table/map_cos2_pgid (string)
"""
return self.__map_cos2_pgid
def _set_map_cos2_pgid(self, v, load=False):
"""
Setter method for map_cos2_pgid, mapped from YANG variable /cee_map/priority_table/map_cos2_pgid (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_map_cos2_pgid is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_map_cos2_pgid() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos2-pgid", rest_name="map-cos2-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """map_cos2_pgid must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos2-pgid", rest_name="map-cos2-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)""",
})
self.__map_cos2_pgid = t
if hasattr(self, '_set'):
self._set()
def _unset_map_cos2_pgid(self):
self.__map_cos2_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos2-pgid", rest_name="map-cos2-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
def _get_map_cos3_pgid(self):
"""
Getter method for map_cos3_pgid, mapped from YANG variable /cee_map/priority_table/map_cos3_pgid (string)
"""
return self.__map_cos3_pgid
def _set_map_cos3_pgid(self, v, load=False):
"""
Setter method for map_cos3_pgid, mapped from YANG variable /cee_map/priority_table/map_cos3_pgid (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_map_cos3_pgid is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_map_cos3_pgid() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos3-pgid", rest_name="map-cos3-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """map_cos3_pgid must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos3-pgid", rest_name="map-cos3-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)""",
})
self.__map_cos3_pgid = t
if hasattr(self, '_set'):
self._set()
def _unset_map_cos3_pgid(self):
self.__map_cos3_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos3-pgid", rest_name="map-cos3-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
def _get_map_cos4_pgid(self):
"""
Getter method for map_cos4_pgid, mapped from YANG variable /cee_map/priority_table/map_cos4_pgid (string)
"""
return self.__map_cos4_pgid
def _set_map_cos4_pgid(self, v, load=False):
"""
Setter method for map_cos4_pgid, mapped from YANG variable /cee_map/priority_table/map_cos4_pgid (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_map_cos4_pgid is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_map_cos4_pgid() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos4-pgid", rest_name="map-cos4-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """map_cos4_pgid must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos4-pgid", rest_name="map-cos4-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)""",
})
self.__map_cos4_pgid = t
if hasattr(self, '_set'):
self._set()
def _unset_map_cos4_pgid(self):
self.__map_cos4_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos4-pgid", rest_name="map-cos4-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
def _get_map_cos5_pgid(self):
"""
Getter method for map_cos5_pgid, mapped from YANG variable /cee_map/priority_table/map_cos5_pgid (string)
"""
return self.__map_cos5_pgid
def _set_map_cos5_pgid(self, v, load=False):
"""
Setter method for map_cos5_pgid, mapped from YANG variable /cee_map/priority_table/map_cos5_pgid (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_map_cos5_pgid is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_map_cos5_pgid() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos5-pgid", rest_name="map-cos5-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """map_cos5_pgid must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos5-pgid", rest_name="map-cos5-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)""",
})
self.__map_cos5_pgid = t
if hasattr(self, '_set'):
self._set()
def _unset_map_cos5_pgid(self):
self.__map_cos5_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos5-pgid", rest_name="map-cos5-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
def _get_map_cos6_pgid(self):
"""
Getter method for map_cos6_pgid, mapped from YANG variable /cee_map/priority_table/map_cos6_pgid (string)
"""
return self.__map_cos6_pgid
def _set_map_cos6_pgid(self, v, load=False):
"""
Setter method for map_cos6_pgid, mapped from YANG variable /cee_map/priority_table/map_cos6_pgid (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_map_cos6_pgid is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_map_cos6_pgid() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos6-pgid", rest_name="map-cos6-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """map_cos6_pgid must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos6-pgid", rest_name="map-cos6-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)""",
})
self.__map_cos6_pgid = t
if hasattr(self, '_set'):
self._set()
def _unset_map_cos6_pgid(self):
self.__map_cos6_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos6-pgid", rest_name="map-cos6-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
def _get_map_cos7_pgid(self):
"""
Getter method for map_cos7_pgid, mapped from YANG variable /cee_map/priority_table/map_cos7_pgid (string)
"""
return self.__map_cos7_pgid
def _set_map_cos7_pgid(self, v, load=False):
"""
Setter method for map_cos7_pgid, mapped from YANG variable /cee_map/priority_table/map_cos7_pgid (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_map_cos7_pgid is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_map_cos7_pgid() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos7-pgid", rest_name="map-cos7-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """map_cos7_pgid must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos7-pgid", rest_name="map-cos7-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)""",
})
self.__map_cos7_pgid = t
if hasattr(self, '_set'):
self._set()
def _unset_map_cos7_pgid(self):
self.__map_cos7_pgid = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[0-7]|15\\.[0-7]', 'length': [u'1..32']}), is_leaf=True, yang_name="map-cos7-pgid", rest_name="map-cos7-pgid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-drop-node-name': None}}, namespace='urn:brocade.com:mgmt:brocade-cee-map', defining_module='brocade-cee-map', yang_type='string', is_config=True)
map_cos0_pgid = __builtin__.property(_get_map_cos0_pgid, _set_map_cos0_pgid)
map_cos1_pgid = __builtin__.property(_get_map_cos1_pgid, _set_map_cos1_pgid)
map_cos2_pgid = __builtin__.property(_get_map_cos2_pgid, _set_map_cos2_pgid)
map_cos3_pgid = __builtin__.property(_get_map_cos3_pgid, _set_map_cos3_pgid)
map_cos4_pgid = __builtin__.property(_get_map_cos4_pgid, _set_map_cos4_pgid)
map_cos5_pgid = __builtin__.property(_get_map_cos5_pgid, _set_map_cos5_pgid)
map_cos6_pgid = __builtin__.property(_get_map_cos6_pgid, _set_map_cos6_pgid)
map_cos7_pgid = __builtin__.property(_get_map_cos7_pgid, _set_map_cos7_pgid)
_pyangbind_elements = {'map_cos0_pgid': map_cos0_pgid, 'map_cos1_pgid': map_cos1_pgid, 'map_cos2_pgid': map_cos2_pgid, 'map_cos3_pgid': map_cos3_pgid, 'map_cos4_pgid': map_cos4_pgid, 'map_cos5_pgid': map_cos5_pgid, 'map_cos6_pgid': map_cos6_pgid, 'map_cos7_pgid': map_cos7_pgid, }
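Every setter above wraps its value in the same `RestrictedClassType`, enforcing the YANG leaf restriction: the string must match the pattern `[0-7]|15\.[0-7]` (a priority-group id 0-7, or 15.0 through 15.7) and be 1 to 32 characters long. A minimal plain-Python sketch of that check (the full-string anchoring is an assumption about how pyangbind applies YANG patterns):

```python
import re

# YANG pattern from the generated restriction_dict, anchored as a
# full-string match: a single digit 0-7, or "15." followed by 0-7.
_PGID_RE = re.compile(r'[0-7]|15\.[0-7]')

def is_valid_pgid(value: str) -> bool:
    # length restriction '1..32' plus the pattern restriction
    return 1 <= len(value) <= 32 and _PGID_RE.fullmatch(value) is not None
```

Values such as `"3"` or `"15.7"` pass; `"8"`, `"15.8"`, and the empty string are rejected, which is exactly when the generated setter raises its `ValueError`.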
# File: preprocessor/all.py (repo: sneha1234/Power-Log-Analysis-Using-Hadoop, license: Apache-2.0)
import os
os.system("python preprocess.py")
os.system("python hivercode.py")
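The two `os.system` calls above ignore exit codes: if `preprocess.py` fails, `hivercode.py` still runs. A hedged sketch of the same pipeline using `subprocess.run`, which stops on the first failure (the script names are taken from the original and assumed to live in the working directory):

```python
import subprocess
import sys

def run_step(script: str) -> None:
    # sys.executable keeps each step on the same interpreter;
    # check=True raises CalledProcessError on a non-zero exit code
    subprocess.run([sys.executable, script], check=True)

def run_pipeline(steps=("preprocess.py", "hivercode.py")) -> None:
    # run each stage in order, aborting as soon as one fails
    for step in steps:
        run_step(step)
```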
# File: pyDiamondsBackground/__init__.py (repo: muma7490/PyDIAMONDS-Background, license: MIT)
from pyDiamondsBackground.Background import Background
from pyDiamondsBackground import models
from pyDiamondsBackground.strings import *
# File: nlstruct/dataloaders/__init__.py (repo: YoannT/nlstruct, license: BSD-3-Clause)
from nlstruct.dataloaders.brat import load_from_brat
from nlstruct.dataloaders.ncbi_disease import load_ncbi_disease
from nlstruct.dataloaders.medic import load_medic_synonyms, load_alt_medic_mapping
from nlstruct.dataloaders.bc5cdr import load_bc5cdr
from nlstruct.dataloaders.quaero import load_quaero
from nlstruct.dataloaders.n2c2_2019_task3 import load_n2c2_2019_task3
# File: test/charm_test.py (repo: exceptorr/charm-k8s-prometheus, license: Apache-2.0)
import json
import sys
import unittest
from unittest.mock import (
call,
create_autospec,
patch
)
from uuid import uuid4
sys.path.append('lib')
from ops.framework import (
EventBase,
)
from ops.model import (
ActiveStatus,
MaintenanceStatus,
)
sys.path.append('src')
from adapters import (
framework,
k8s,
)
import charm
import domain
# This test is disabled due to the:
# https://github.com/canonical/operator/issues/307
# https://github.com/canonical/operator/issues/309
# class OnConfigChangedHandlerTest(unittest.TestCase):
# # We are mocking the time module here so that we don't actually wait
# # 1 second per loop during test execution.
# @patch('charm.build_juju_unit_status', spec_set=True, autospec=True)
# @patch('charm.k8s', spec_set=True, autospec=True)
# @patch('charm.time', spec_set=True, autospec=True)
# @patch('charm.build_juju_pod_spec', spec_set=True, autospec=True)
# @patch('charm.set_juju_pod_spec', spec_set=True, autospec=True)
# def test__it_blocks_until_pod_is_ready(
# self,
# mock_pod_spec,
# mock_juju_pod_spec,
# mock_time,
# mock_k8s_mod,
# mock_build_juju_unit_status_func):
# # Setup
# mock_fw_adapter_cls = \
# create_autospec(framework.FrameworkAdapter, spec_set=True)
# mock_fw_adapter = mock_fw_adapter_cls.return_value
#
# mock_juju_unit_states = [
# MaintenanceStatus(str(uuid4())),
# MaintenanceStatus(str(uuid4())),
# ActiveStatus(str(uuid4())),
# ]
# mock_build_juju_unit_status_func.side_effect = mock_juju_unit_states
#
# mock_event_cls = create_autospec(EventBase, spec_set=True)
# mock_event = mock_event_cls.return_value
#
# harness = Harness(charm.Charm)
# harness.begin()
# harness.charm._stored.set_default(is_started=False)
# harness.charm.on.config_changed.emit()
#
# # # Exercise
# # charm.on_config_changed_handler(
# # mock_event, mock_fw_adapter, harness.charm._stored
# # )
# #
# # # Assert
# # assert mock_fw_adapter.set_unit_status.call_count == \
# # len(mock_juju_unit_states)
# # assert mock_fw_adapter.set_unit_status.call_args_list == [
# # call(status) for status in mock_juju_unit_states
# # ]
class BuildJujuUnitStatusTest(unittest.TestCase):
def test_returns_maintenance_status_if_pod_status_cannot_be_fetched(self):
# Setup
pod_status = k8s.PodStatus(status_dict=None)
# Exercise
juju_unit_status = charm.build_juju_unit_status(pod_status)
# Assertions
assert type(juju_unit_status) == MaintenanceStatus
assert juju_unit_status.message == "Waiting for pod to appear"
def test_returns_maintenance_status_if_pod_is_not_running(self):
# Setup
status_dict = {
'metadata': {
'annotations': {
'juju.io/unit': uuid4()
}
},
'status': {
'phase': 'Pending',
'conditions': [{
'type': 'ContainersReady',
'status': 'False'
}]
}
}
pod_status = k8s.PodStatus(status_dict=status_dict)
# Exercise
juju_unit_status = charm.build_juju_unit_status(pod_status)
# Assertions
assert type(juju_unit_status) == MaintenanceStatus
assert juju_unit_status.message == "Pod is starting"
def test_returns_maintenance_status_if_pod_is_not_ready(self):
# Setup
status_dict = {
'metadata': {
'annotations': {
'juju.io/unit': uuid4()
}
},
'status': {
'phase': 'Running',
'conditions': [{
'type': 'ContainersReady',
'status': 'False'
}]
}
}
pod_status = k8s.PodStatus(status_dict=status_dict)
# Exercise
juju_unit_status = charm.build_juju_unit_status(pod_status)
# Assertions
assert type(juju_unit_status) == MaintenanceStatus
assert juju_unit_status.message == "Pod is getting ready"
def test_returns_active_status_if_pod_is_ready(self):
# Setup
status_dict = {
'metadata': {
'annotations': {
'juju.io/unit': uuid4()
}
},
'status': {
'phase': 'Running',
'conditions': [{
'type': 'ContainersReady',
'status': 'True'
}]
}
}
pod_status = k8s.PodStatus(status_dict=status_dict)
# Exercise
juju_unit_status = charm.build_juju_unit_status(pod_status)
# Assertions
assert type(juju_unit_status) == ActiveStatus
class OnConfigChangedHandlerTest(unittest.TestCase):
@patch('charm.set_juju_pod_spec', spec_set=True, autospec=True)
@patch('charm.wait_for_pod_readiness', spec_set=True, autospec=True)
@patch('charm.ensure_config_is_reloaded', spec_set=True, autospec=True)
def test__it_pod_is_ready_and_config_is_updated(
self,
mock_ensure_config_is_reloaded,
mock_wait_for_pod_readiness_func,
mock_set_juju_pod_spec
):
# Setup
mock_fw_adapter_cls = \
create_autospec(framework.FrameworkAdapter, spec_set=True)
mock_fw = mock_fw_adapter_cls.return_value
mock_event_cls = create_autospec(EventBase)
mock_event = mock_event_cls.return_value
mock_state_cls = \
create_autospec(charm.StoredState, spec_set=True)
mock_state = mock_state_cls.return_value
mock_set_juju_pod_spec.return_value = False
charm.on_config_changed_handler(mock_event, mock_fw, mock_state)
assert mock_wait_for_pod_readiness_func.call_count == 0
assert mock_ensure_config_is_reloaded.call_count == 0
mock_set_juju_pod_spec.return_value = True
charm.on_config_changed_handler(mock_event, mock_fw, mock_state)
assert mock_wait_for_pod_readiness_func.call_count == 1
assert mock_ensure_config_is_reloaded.call_count == 1
class WaitForPodReadinessTest(unittest.TestCase):
# We are mocking the time module here so that we don't actually wait
    # 1 second per loop during test execution.
@patch('charm.build_juju_unit_status', spec_set=True, autospec=True)
@patch('charm.k8s', spec_set=True, autospec=True)
@patch('charm.time', spec_set=True, autospec=True)
@patch('charm.set_juju_pod_spec', spec_set=True, autospec=True)
def test__it_blocks_until_pod_is_ready(
self,
mock_pod_spec,
mock_time,
mock_k8s_mod,
mock_build_juju_unit_status_func):
# Setup
mock_fw_adapter_cls = \
create_autospec(framework.FrameworkAdapter, spec_set=True)
mock_fw_adapter = mock_fw_adapter_cls.return_value
mock_juju_unit_states = [
MaintenanceStatus(str(uuid4())),
MaintenanceStatus(str(uuid4())),
ActiveStatus(str(uuid4())),
]
mock_build_juju_unit_status_func.side_effect = mock_juju_unit_states
# Exercise
charm.wait_for_pod_readiness(mock_fw_adapter)
# Assert
assert mock_fw_adapter.set_unit_status.call_count == \
len(mock_juju_unit_states)
assert mock_fw_adapter.set_unit_status.call_args_list == [
call(status) for status in mock_juju_unit_states
]
class OnNewAlertManagerRelationHandler(unittest.TestCase):
@patch('charm.build_juju_pod_spec', spec_set=True, autospec=True)
def test__it_updates_the_juju_pod_spec_with_alerting_config(
self,
mock_build_juju_pod_spec_func):
# Setup
mock_fw_adapter_cls = \
create_autospec(framework.FrameworkAdapter,
spec_set=True)
mock_fw = mock_fw_adapter_cls.return_value
mock_fw.unit_is_leader.return_value = True
mock_event_cls = create_autospec(EventBase)
mock_event = mock_event_cls.return_value
mock_data = {str(uuid4()): str(uuid4())}
mock_event.data = dict(alerting_config=json.dumps(mock_data))
mock_prom_juju_pod_spec = create_autospec(domain.PrometheusJujuPodSpec)
mock_build_juju_pod_spec_func.return_value = mock_prom_juju_pod_spec
# Exercise
charm.on_new_alertmanager_relation_handler(mock_event, mock_fw)
# Assert
assert mock_build_juju_pod_spec_func.call_count == 1
assert mock_build_juju_pod_spec_func.call_args == \
call(app_name=mock_fw.get_app_name.return_value,
charm_config=mock_fw.get_config.return_value,
prom_image_meta=mock_fw.get_image_meta.return_value,
nginx_image_meta=mock_fw.get_image_meta.return_value,
alerting_config=mock_data)
assert mock_fw.set_pod_spec.call_count == 1
assert mock_fw.set_pod_spec.call_args == \
call(mock_prom_juju_pod_spec.to_dict())
assert mock_fw.set_unit_status.call_count == 1
args, kwargs = mock_fw.set_unit_status.call_args_list[0]
assert type(args[0]) == MaintenanceStatus
class OnStartHandlerTest(unittest.TestCase):
@patch('charm.build_juju_pod_spec', spec_set=True, autospec=True)
def test__it_updates_the_juju_pod_spec(self,
mock_build_juju_pod_spec_func):
# Setup
mock_fw_adapter_cls = \
create_autospec(framework.FrameworkAdapter,
spec_set=True)
mock_fw = mock_fw_adapter_cls.return_value
mock_fw.unit_is_leader.return_value = True
mock_event_cls = create_autospec(EventBase, spec_set=True)
mock_event = mock_event_cls.return_value
mock_prom_juju_pod_spec = create_autospec(domain.PrometheusJujuPodSpec)
mock_build_juju_pod_spec_func.return_value = mock_prom_juju_pod_spec
mock_state = create_autospec(charm.StoredState).return_value
# Exercise
charm.on_start_handler(mock_event, mock_fw, mock_state)
# Assert
assert mock_state.recently_started
assert mock_state.config_propagated
assert mock_build_juju_pod_spec_func.call_count == 1
assert mock_build_juju_pod_spec_func.call_args == \
call(app_name=mock_fw.get_app_name.return_value,
charm_config=mock_fw.get_config.return_value,
prom_image_meta=mock_fw.get_image_meta.return_value,
nginx_image_meta=mock_fw.get_image_meta.return_value,
alerting_config={})
assert mock_fw.set_pod_spec.call_count == 1
assert mock_fw.set_pod_spec.call_args == \
call(mock_prom_juju_pod_spec.to_dict())
assert mock_fw.set_unit_status.call_count == 1
args, kwargs = mock_fw.set_unit_status.call_args_list[0]
assert type(args[0]) == MaintenanceStatus
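`WaitForPodReadinessTest` above drives the polling loop by assigning a list to `side_effect`, so each call to the mock returns the next element. A self-contained sketch of that `unittest.mock` pattern (the names here are illustrative, not part of the charm code):

```python
from unittest.mock import Mock

# side_effect set to a list makes each call return the next element,
# letting us walk a polling loop through "not ready" states to "ready"
poll_status = Mock(side_effect=["pending", "starting", "ready"])

def wait_until_ready(poll, max_polls=10):
    # poll until the status reads "ready", bounded to avoid spinning forever
    for _ in range(max_polls):
        status = poll()
        if status == "ready":
            return status
    raise TimeoutError("pod never became ready")
```

Calling `wait_until_ready(poll_status)` returns `"ready"` after three polls, and `poll_status.call_count` confirms how many loop iterations ran — the same assertion style the test above applies to `set_unit_status`.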
# File: src/las_to_cesium/__init__.py (repo: jurajpalenik/las-to-cesium, license: MIT)
from las_to_cesium.main import main
# File: squandered/transaction/models.py (repo: kzborisov/squandered, license: MIT)
from django.contrib.auth import get_user_model
from django.db import models
from django.utils.timezone import localtime
from squandered.account.models import Profile, CustomUser
UserModel = get_user_model()
class ExpenseCategory(models.Model):
NAME_MAX_LENGTH = 256
name = models.CharField(
max_length=NAME_MAX_LENGTH,
unique=True,
)
created_at = models.DateTimeField(
auto_now_add=True,
)
user = models.ForeignKey(
UserModel,
on_delete=models.CASCADE,
)
def __str__(self):
return f'{self.name}'
class Meta:
verbose_name_plural = 'Expense Categories'
ordering = ('-created_at',)
class IncomeCategory(models.Model):
NAME_MAX_LENGTH = 256
name = models.CharField(
max_length=NAME_MAX_LENGTH,
unique=True,
)
created_at = models.DateTimeField(
auto_now_add=True,
)
user = models.ForeignKey(
UserModel,
on_delete=models.CASCADE,
)
def __str__(self):
return f'{self.name}'
class Meta:
verbose_name_plural = 'Income Categories'
ordering = ('-created_at',)
class Expense(models.Model):
NAME_MAX_LENGTH = 1024
AMOUNT_DECIMAL_PLACES = 2
AMOUNT_MAX_DIGITS = 14
name = models.CharField(
max_length=NAME_MAX_LENGTH,
validators=(), # Char only validator
)
amount = models.DecimalField(
max_digits=AMOUNT_MAX_DIGITS,
decimal_places=AMOUNT_DECIMAL_PLACES,
)
description = models.TextField(
blank=True,
null=True,
)
date = models.DateField(
default=localtime,
)
created_at = models.DateTimeField(
auto_now_add=True,
)
category = models.ForeignKey(
ExpenseCategory,
on_delete=models.CASCADE,
)
user = models.ForeignKey(
UserModel,
on_delete=models.CASCADE,
)
def __str__(self):
return f'{self.category} {self.date} {self.amount}'
class Meta:
ordering = ('-date',)
class Income(models.Model):
NAME_MAX_LENGTH = 1024
AMOUNT_DECIMAL_PLACES = 2
AMOUNT_MAX_DIGITS = 14
name = models.CharField(
max_length=NAME_MAX_LENGTH,
)
amount = models.DecimalField(
max_digits=AMOUNT_MAX_DIGITS,
decimal_places=AMOUNT_DECIMAL_PLACES,
)
description = models.TextField(
blank=True,
null=True,
)
date = models.DateField(
default=localtime,
)
created_at = models.DateTimeField(
auto_now_add=True,
)
category = models.ForeignKey(
IncomeCategory,
on_delete=models.CASCADE,
)
user = models.ForeignKey(
UserModel,
on_delete=models.CASCADE,
)
def __str__(self):
return f'{self.category} {self.date} {self.amount}'
class Meta:
ordering = ('-date',)
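On the `Expense` and `Income` models, `max_digits=14` with `decimal_places=2` caps amounts at twelve whole digits plus two fractional digits. A plain-Python sketch of that constraint, reimplementing the digit check for illustration (this is not Django's actual `DecimalValidator`):

```python
from decimal import Decimal

MAX_DIGITS = 14       # mirrors AMOUNT_MAX_DIGITS
DECIMAL_PLACES = 2    # mirrors AMOUNT_DECIMAL_PLACES

def fits_amount(value: str) -> bool:
    # Accept at most DECIMAL_PLACES fractional digits and at most
    # MAX_DIGITS - DECIMAL_PLACES whole digits.
    _, digits, exponent = Decimal(value).as_tuple()
    decimals = max(0, -exponent)
    whole = len(digits) + exponent if exponent > 0 else len(digits) - decimals
    return decimals <= DECIMAL_PLACES and whole <= MAX_DIGITS - DECIMAL_PLACES
```

So `"999999999999.99"` is the largest accepted amount, while `"1.234"` fails for having three decimal places.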
# File: blades_helper/__init__.py (repo: eriksalt/blades_helper_proj, license: MIT)
from .mission_generator_constants import MissionGeneratorConstants
from .mission import Mission
from .mission_generator import generate_missions, _initialize_mission, _configure_mission_by_type
from .data_gateway import DataGateway
# File: cottonformation/res/groundstation.py (repo: MacHu-GWU/cottonformation-project, license: BSD-2-Clause)
# -*- coding: utf-8 -*-
"""
This module declares the Property and Resource classes for AWS::GroundStation.
"""
import attr
import typing
from ..core.model import (
Property, Resource, Tag, GetAtt, TypeHint, TypeCheck,
)
from ..core.constant import AttrMeta
#--- Property declaration ---
@attr.s
class PropConfigS3RecordingConfig(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.S3RecordingConfig"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-s3recordingconfig.html
Property Document:
- ``p_BucketArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-s3recordingconfig.html#cfn-groundstation-config-s3recordingconfig-bucketarn
- ``p_Prefix``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-s3recordingconfig.html#cfn-groundstation-config-s3recordingconfig-prefix
- ``p_RoleArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-s3recordingconfig.html#cfn-groundstation-config-s3recordingconfig-rolearn
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.S3RecordingConfig"
p_BucketArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "BucketArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-s3recordingconfig.html#cfn-groundstation-config-s3recordingconfig-bucketarn"""
p_Prefix: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Prefix"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-s3recordingconfig.html#cfn-groundstation-config-s3recordingconfig-prefix"""
p_RoleArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "RoleArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-s3recordingconfig.html#cfn-groundstation-config-s3recordingconfig-rolearn"""
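Each field above is guarded by `attr.validators.optional(attr.validators.instance_of(...))`, which accepts `None` or an instance of the given type and rejects everything else. A stdlib-only sketch reproducing that idiom without the attrs dependency (the helper name is illustrative):

```python
def optional_instance_of(tp):
    # Build a validator that mirrors
    # attr.validators.optional(attr.validators.instance_of(tp)):
    # None passes through, instances of tp pass through, anything
    # else raises TypeError.
    def validate(value):
        if value is not None and not isinstance(value, tp):
            raise TypeError(
                f"expected {tp.__name__} or None, got {type(value).__name__}"
            )
        return value
    return validate
```

For example, `optional_instance_of(str)` accepts `None` and `"arn:aws:s3:::bucket"` but raises on an integer — the same behaviour `attrs` applies when a property such as `p_BucketArn` is assigned.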
@attr.s
class PropConfigUplinkEchoConfig(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.UplinkEchoConfig"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-uplinkechoconfig.html
Property Document:
- ``p_AntennaUplinkConfigArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-uplinkechoconfig.html#cfn-groundstation-config-uplinkechoconfig-antennauplinkconfigarn
- ``p_Enabled``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-uplinkechoconfig.html#cfn-groundstation-config-uplinkechoconfig-enabled
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.UplinkEchoConfig"
p_AntennaUplinkConfigArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "AntennaUplinkConfigArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-uplinkechoconfig.html#cfn-groundstation-config-uplinkechoconfig-antennauplinkconfigarn"""
p_Enabled: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "Enabled"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-uplinkechoconfig.html#cfn-groundstation-config-uplinkechoconfig-enabled"""
@attr.s
class PropConfigDataflowEndpointConfig(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.DataflowEndpointConfig"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-dataflowendpointconfig.html
Property Document:
- ``p_DataflowEndpointName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-dataflowendpointconfig.html#cfn-groundstation-config-dataflowendpointconfig-dataflowendpointname
- ``p_DataflowEndpointRegion``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-dataflowendpointconfig.html#cfn-groundstation-config-dataflowendpointconfig-dataflowendpointregion
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.DataflowEndpointConfig"
p_DataflowEndpointName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "DataflowEndpointName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-dataflowendpointconfig.html#cfn-groundstation-config-dataflowendpointconfig-dataflowendpointname"""
p_DataflowEndpointRegion: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "DataflowEndpointRegion"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-dataflowendpointconfig.html#cfn-groundstation-config-dataflowendpointconfig-dataflowendpointregion"""
@attr.s
class PropMissionProfileDataflowEdge(Property):
"""
AWS Object Type = "AWS::GroundStation::MissionProfile.DataflowEdge"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-missionprofile-dataflowedge.html
Property Document:
- ``p_Destination``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-missionprofile-dataflowedge.html#cfn-groundstation-missionprofile-dataflowedge-destination
- ``p_Source``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-missionprofile-dataflowedge.html#cfn-groundstation-missionprofile-dataflowedge-source
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::MissionProfile.DataflowEdge"
p_Destination: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Destination"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-missionprofile-dataflowedge.html#cfn-groundstation-missionprofile-dataflowedge-destination"""
p_Source: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Source"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-missionprofile-dataflowedge.html#cfn-groundstation-missionprofile-dataflowedge-source"""
@attr.s
class PropConfigDemodulationConfig(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.DemodulationConfig"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-demodulationconfig.html
Property Document:
- ``p_UnvalidatedJSON``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-demodulationconfig.html#cfn-groundstation-config-demodulationconfig-unvalidatedjson
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.DemodulationConfig"
p_UnvalidatedJSON: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "UnvalidatedJSON"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-demodulationconfig.html#cfn-groundstation-config-demodulationconfig-unvalidatedjson"""
@attr.s
class PropDataflowEndpointGroupSecurityDetails(Property):
"""
AWS Object Type = "AWS::GroundStation::DataflowEndpointGroup.SecurityDetails"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-securitydetails.html
Property Document:
- ``p_RoleArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-securitydetails.html#cfn-groundstation-dataflowendpointgroup-securitydetails-rolearn
- ``p_SecurityGroupIds``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-securitydetails.html#cfn-groundstation-dataflowendpointgroup-securitydetails-securitygroupids
- ``p_SubnetIds``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-securitydetails.html#cfn-groundstation-dataflowendpointgroup-securitydetails-subnetids
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::DataflowEndpointGroup.SecurityDetails"
p_RoleArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "RoleArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-securitydetails.html#cfn-groundstation-dataflowendpointgroup-securitydetails-rolearn"""
p_SecurityGroupIds: typing.List[TypeHint.intrinsic_str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "SecurityGroupIds"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-securitydetails.html#cfn-groundstation-dataflowendpointgroup-securitydetails-securitygroupids"""
p_SubnetIds: typing.List[TypeHint.intrinsic_str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "SubnetIds"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-securitydetails.html#cfn-groundstation-dataflowendpointgroup-securitydetails-subnetids"""
@attr.s
class PropDataflowEndpointGroupSocketAddress(Property):
"""
AWS Object Type = "AWS::GroundStation::DataflowEndpointGroup.SocketAddress"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-socketaddress.html
Property Document:
- ``p_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-socketaddress.html#cfn-groundstation-dataflowendpointgroup-socketaddress-name
- ``p_Port``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-socketaddress.html#cfn-groundstation-dataflowendpointgroup-socketaddress-port
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::DataflowEndpointGroup.SocketAddress"
p_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-socketaddress.html#cfn-groundstation-dataflowendpointgroup-socketaddress-name"""
p_Port: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "Port"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-socketaddress.html#cfn-groundstation-dataflowendpointgroup-socketaddress-port"""
@attr.s
class PropConfigFrequency(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.Frequency"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-frequency.html
Property Document:
- ``p_Units``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-frequency.html#cfn-groundstation-config-frequency-units
- ``p_Value``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-frequency.html#cfn-groundstation-config-frequency-value
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.Frequency"
p_Units: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Units"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-frequency.html#cfn-groundstation-config-frequency-units"""
p_Value: float = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(float)),
metadata={AttrMeta.PROPERTY_NAME: "Value"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-frequency.html#cfn-groundstation-config-frequency-value"""
@attr.s
class PropConfigFrequencyBandwidth(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.FrequencyBandwidth"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-frequencybandwidth.html
Property Document:
- ``p_Units``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-frequencybandwidth.html#cfn-groundstation-config-frequencybandwidth-units
- ``p_Value``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-frequencybandwidth.html#cfn-groundstation-config-frequencybandwidth-value
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.FrequencyBandwidth"
p_Units: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Units"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-frequencybandwidth.html#cfn-groundstation-config-frequencybandwidth-units"""
p_Value: float = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(float)),
metadata={AttrMeta.PROPERTY_NAME: "Value"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-frequencybandwidth.html#cfn-groundstation-config-frequencybandwidth-value"""
@attr.s
class PropConfigTrackingConfig(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.TrackingConfig"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-trackingconfig.html
Property Document:
- ``p_Autotrack``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-trackingconfig.html#cfn-groundstation-config-trackingconfig-autotrack
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.TrackingConfig"
p_Autotrack: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Autotrack"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-trackingconfig.html#cfn-groundstation-config-trackingconfig-autotrack"""
@attr.s
class PropConfigDecodeConfig(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.DecodeConfig"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-decodeconfig.html
Property Document:
- ``p_UnvalidatedJSON``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-decodeconfig.html#cfn-groundstation-config-decodeconfig-unvalidatedjson
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.DecodeConfig"
p_UnvalidatedJSON: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "UnvalidatedJSON"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-decodeconfig.html#cfn-groundstation-config-decodeconfig-unvalidatedjson"""
@attr.s
class PropConfigEirp(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.Eirp"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-eirp.html
Property Document:
- ``p_Units``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-eirp.html#cfn-groundstation-config-eirp-units
- ``p_Value``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-eirp.html#cfn-groundstation-config-eirp-value
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.Eirp"
p_Units: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Units"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-eirp.html#cfn-groundstation-config-eirp-units"""
p_Value: float = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(float)),
metadata={AttrMeta.PROPERTY_NAME: "Value"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-eirp.html#cfn-groundstation-config-eirp-value"""
@attr.s
class PropDataflowEndpointGroupDataflowEndpoint(Property):
"""
AWS Object Type = "AWS::GroundStation::DataflowEndpointGroup.DataflowEndpoint"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-dataflowendpoint.html
Property Document:
- ``p_Address``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-dataflowendpoint.html#cfn-groundstation-dataflowendpointgroup-dataflowendpoint-address
- ``p_Mtu``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-dataflowendpoint.html#cfn-groundstation-dataflowendpointgroup-dataflowendpoint-mtu
- ``p_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-dataflowendpoint.html#cfn-groundstation-dataflowendpointgroup-dataflowendpoint-name
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::DataflowEndpointGroup.DataflowEndpoint"
p_Address: typing.Union['PropDataflowEndpointGroupSocketAddress', dict] = attr.ib(
default=None,
converter=PropDataflowEndpointGroupSocketAddress.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropDataflowEndpointGroupSocketAddress)),
metadata={AttrMeta.PROPERTY_NAME: "Address"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-dataflowendpoint.html#cfn-groundstation-dataflowendpointgroup-dataflowendpoint-address"""
p_Mtu: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "Mtu"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-dataflowendpoint.html#cfn-groundstation-dataflowendpointgroup-dataflowendpoint-mtu"""
p_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-dataflowendpoint.html#cfn-groundstation-dataflowendpointgroup-dataflowendpoint-name"""
@attr.s
class PropConfigUplinkSpectrumConfig(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.UplinkSpectrumConfig"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-uplinkspectrumconfig.html
Property Document:
- ``p_CenterFrequency``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-uplinkspectrumconfig.html#cfn-groundstation-config-uplinkspectrumconfig-centerfrequency
- ``p_Polarization``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-uplinkspectrumconfig.html#cfn-groundstation-config-uplinkspectrumconfig-polarization
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.UplinkSpectrumConfig"
p_CenterFrequency: typing.Union['PropConfigFrequency', dict] = attr.ib(
default=None,
converter=PropConfigFrequency.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigFrequency)),
metadata={AttrMeta.PROPERTY_NAME: "CenterFrequency"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-uplinkspectrumconfig.html#cfn-groundstation-config-uplinkspectrumconfig-centerfrequency"""
p_Polarization: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Polarization"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-uplinkspectrumconfig.html#cfn-groundstation-config-uplinkspectrumconfig-polarization"""
@attr.s
class PropConfigSpectrumConfig(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.SpectrumConfig"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-spectrumconfig.html
Property Document:
- ``p_Bandwidth``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-spectrumconfig.html#cfn-groundstation-config-spectrumconfig-bandwidth
- ``p_CenterFrequency``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-spectrumconfig.html#cfn-groundstation-config-spectrumconfig-centerfrequency
- ``p_Polarization``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-spectrumconfig.html#cfn-groundstation-config-spectrumconfig-polarization
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.SpectrumConfig"
p_Bandwidth: typing.Union['PropConfigFrequencyBandwidth', dict] = attr.ib(
default=None,
converter=PropConfigFrequencyBandwidth.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigFrequencyBandwidth)),
metadata={AttrMeta.PROPERTY_NAME: "Bandwidth"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-spectrumconfig.html#cfn-groundstation-config-spectrumconfig-bandwidth"""
p_CenterFrequency: typing.Union['PropConfigFrequency', dict] = attr.ib(
default=None,
converter=PropConfigFrequency.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigFrequency)),
metadata={AttrMeta.PROPERTY_NAME: "CenterFrequency"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-spectrumconfig.html#cfn-groundstation-config-spectrumconfig-centerfrequency"""
p_Polarization: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Polarization"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-spectrumconfig.html#cfn-groundstation-config-spectrumconfig-polarization"""
@attr.s
class PropDataflowEndpointGroupEndpointDetails(Property):
"""
AWS Object Type = "AWS::GroundStation::DataflowEndpointGroup.EndpointDetails"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-endpointdetails.html
Property Document:
- ``p_Endpoint``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-endpointdetails.html#cfn-groundstation-dataflowendpointgroup-endpointdetails-endpoint
- ``p_SecurityDetails``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-endpointdetails.html#cfn-groundstation-dataflowendpointgroup-endpointdetails-securitydetails
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::DataflowEndpointGroup.EndpointDetails"
p_Endpoint: typing.Union['PropDataflowEndpointGroupDataflowEndpoint', dict] = attr.ib(
default=None,
converter=PropDataflowEndpointGroupDataflowEndpoint.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropDataflowEndpointGroupDataflowEndpoint)),
metadata={AttrMeta.PROPERTY_NAME: "Endpoint"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-endpointdetails.html#cfn-groundstation-dataflowendpointgroup-endpointdetails-endpoint"""
p_SecurityDetails: typing.Union['PropDataflowEndpointGroupSecurityDetails', dict] = attr.ib(
default=None,
converter=PropDataflowEndpointGroupSecurityDetails.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropDataflowEndpointGroupSecurityDetails)),
metadata={AttrMeta.PROPERTY_NAME: "SecurityDetails"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-dataflowendpointgroup-endpointdetails.html#cfn-groundstation-dataflowendpointgroup-endpointdetails-securitydetails"""
@attr.s
class PropConfigAntennaUplinkConfig(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.AntennaUplinkConfig"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennauplinkconfig.html
Property Document:
- ``p_SpectrumConfig``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennauplinkconfig.html#cfn-groundstation-config-antennauplinkconfig-spectrumconfig
- ``p_TargetEirp``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennauplinkconfig.html#cfn-groundstation-config-antennauplinkconfig-targeteirp
- ``p_TransmitDisabled``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennauplinkconfig.html#cfn-groundstation-config-antennauplinkconfig-transmitdisabled
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.AntennaUplinkConfig"
p_SpectrumConfig: typing.Union['PropConfigUplinkSpectrumConfig', dict] = attr.ib(
default=None,
converter=PropConfigUplinkSpectrumConfig.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigUplinkSpectrumConfig)),
metadata={AttrMeta.PROPERTY_NAME: "SpectrumConfig"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennauplinkconfig.html#cfn-groundstation-config-antennauplinkconfig-spectrumconfig"""
p_TargetEirp: typing.Union['PropConfigEirp', dict] = attr.ib(
default=None,
converter=PropConfigEirp.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigEirp)),
metadata={AttrMeta.PROPERTY_NAME: "TargetEirp"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennauplinkconfig.html#cfn-groundstation-config-antennauplinkconfig-targeteirp"""
p_TransmitDisabled: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "TransmitDisabled"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennauplinkconfig.html#cfn-groundstation-config-antennauplinkconfig-transmitdisabled"""
@attr.s
class PropConfigAntennaDownlinkConfig(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.AntennaDownlinkConfig"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennadownlinkconfig.html
Property Document:
- ``p_SpectrumConfig``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennadownlinkconfig.html#cfn-groundstation-config-antennadownlinkconfig-spectrumconfig
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.AntennaDownlinkConfig"
p_SpectrumConfig: typing.Union['PropConfigSpectrumConfig', dict] = attr.ib(
default=None,
converter=PropConfigSpectrumConfig.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigSpectrumConfig)),
metadata={AttrMeta.PROPERTY_NAME: "SpectrumConfig"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennadownlinkconfig.html#cfn-groundstation-config-antennadownlinkconfig-spectrumconfig"""
@attr.s
class PropConfigAntennaDownlinkDemodDecodeConfig(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.AntennaDownlinkDemodDecodeConfig"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennadownlinkdemoddecodeconfig.html
Property Document:
- ``p_DecodeConfig``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennadownlinkdemoddecodeconfig.html#cfn-groundstation-config-antennadownlinkdemoddecodeconfig-decodeconfig
- ``p_DemodulationConfig``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennadownlinkdemoddecodeconfig.html#cfn-groundstation-config-antennadownlinkdemoddecodeconfig-demodulationconfig
- ``p_SpectrumConfig``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennadownlinkdemoddecodeconfig.html#cfn-groundstation-config-antennadownlinkdemoddecodeconfig-spectrumconfig
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.AntennaDownlinkDemodDecodeConfig"
p_DecodeConfig: typing.Union['PropConfigDecodeConfig', dict] = attr.ib(
default=None,
converter=PropConfigDecodeConfig.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigDecodeConfig)),
metadata={AttrMeta.PROPERTY_NAME: "DecodeConfig"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennadownlinkdemoddecodeconfig.html#cfn-groundstation-config-antennadownlinkdemoddecodeconfig-decodeconfig"""
p_DemodulationConfig: typing.Union['PropConfigDemodulationConfig', dict] = attr.ib(
default=None,
converter=PropConfigDemodulationConfig.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigDemodulationConfig)),
metadata={AttrMeta.PROPERTY_NAME: "DemodulationConfig"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennadownlinkdemoddecodeconfig.html#cfn-groundstation-config-antennadownlinkdemoddecodeconfig-demodulationconfig"""
p_SpectrumConfig: typing.Union['PropConfigSpectrumConfig', dict] = attr.ib(
default=None,
converter=PropConfigSpectrumConfig.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigSpectrumConfig)),
metadata={AttrMeta.PROPERTY_NAME: "SpectrumConfig"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-antennadownlinkdemoddecodeconfig.html#cfn-groundstation-config-antennadownlinkdemoddecodeconfig-spectrumconfig"""
@attr.s
class PropConfigConfigData(Property):
"""
AWS Object Type = "AWS::GroundStation::Config.ConfigData"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html
Property Document:
- ``p_AntennaDownlinkConfig``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html#cfn-groundstation-config-configdata-antennadownlinkconfig
- ``p_AntennaDownlinkDemodDecodeConfig``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html#cfn-groundstation-config-configdata-antennadownlinkdemoddecodeconfig
- ``p_AntennaUplinkConfig``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html#cfn-groundstation-config-configdata-antennauplinkconfig
- ``p_DataflowEndpointConfig``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html#cfn-groundstation-config-configdata-dataflowendpointconfig
- ``p_S3RecordingConfig``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html#cfn-groundstation-config-configdata-s3recordingconfig
- ``p_TrackingConfig``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html#cfn-groundstation-config-configdata-trackingconfig
- ``p_UplinkEchoConfig``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html#cfn-groundstation-config-configdata-uplinkechoconfig
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config.ConfigData"
p_AntennaDownlinkConfig: typing.Union['PropConfigAntennaDownlinkConfig', dict] = attr.ib(
default=None,
converter=PropConfigAntennaDownlinkConfig.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigAntennaDownlinkConfig)),
metadata={AttrMeta.PROPERTY_NAME: "AntennaDownlinkConfig"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html#cfn-groundstation-config-configdata-antennadownlinkconfig"""
p_AntennaDownlinkDemodDecodeConfig: typing.Union['PropConfigAntennaDownlinkDemodDecodeConfig', dict] = attr.ib(
default=None,
converter=PropConfigAntennaDownlinkDemodDecodeConfig.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigAntennaDownlinkDemodDecodeConfig)),
metadata={AttrMeta.PROPERTY_NAME: "AntennaDownlinkDemodDecodeConfig"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html#cfn-groundstation-config-configdata-antennadownlinkdemoddecodeconfig"""
p_AntennaUplinkConfig: typing.Union['PropConfigAntennaUplinkConfig', dict] = attr.ib(
default=None,
converter=PropConfigAntennaUplinkConfig.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigAntennaUplinkConfig)),
metadata={AttrMeta.PROPERTY_NAME: "AntennaUplinkConfig"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html#cfn-groundstation-config-configdata-antennauplinkconfig"""
p_DataflowEndpointConfig: typing.Union['PropConfigDataflowEndpointConfig', dict] = attr.ib(
default=None,
converter=PropConfigDataflowEndpointConfig.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigDataflowEndpointConfig)),
metadata={AttrMeta.PROPERTY_NAME: "DataflowEndpointConfig"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html#cfn-groundstation-config-configdata-dataflowendpointconfig"""
p_S3RecordingConfig: typing.Union['PropConfigS3RecordingConfig', dict] = attr.ib(
default=None,
converter=PropConfigS3RecordingConfig.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigS3RecordingConfig)),
metadata={AttrMeta.PROPERTY_NAME: "S3RecordingConfig"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html#cfn-groundstation-config-configdata-s3recordingconfig"""
p_TrackingConfig: typing.Union['PropConfigTrackingConfig', dict] = attr.ib(
default=None,
converter=PropConfigTrackingConfig.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigTrackingConfig)),
metadata={AttrMeta.PROPERTY_NAME: "TrackingConfig"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html#cfn-groundstation-config-configdata-trackingconfig"""
p_UplinkEchoConfig: typing.Union['PropConfigUplinkEchoConfig', dict] = attr.ib(
default=None,
converter=PropConfigUplinkEchoConfig.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropConfigUplinkEchoConfig)),
metadata={AttrMeta.PROPERTY_NAME: "UplinkEchoConfig"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-groundstation-config-configdata.html#cfn-groundstation-config-configdata-uplinkechoconfig"""
# --- Resource declaration ---
@attr.s
class Config(Resource):
"""
AWS Object Type = "AWS::GroundStation::Config"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-config.html
Property Document:
- ``rp_ConfigData``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-config.html#cfn-groundstation-config-configdata
- ``rp_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-config.html#cfn-groundstation-config-name
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-config.html#cfn-groundstation-config-tags
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::Config"
rp_ConfigData: typing.Union['PropConfigConfigData', dict] = attr.ib(
default=None,
converter=PropConfigConfigData.from_dict,
validator=attr.validators.instance_of(PropConfigConfigData),
metadata={AttrMeta.PROPERTY_NAME: "ConfigData"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-config.html#cfn-groundstation-config-configdata"""
rp_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-config.html#cfn-groundstation-config-name"""
p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
default=None,
converter=Tag.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-config.html#cfn-groundstation-config-tags"""
@property
def rv_Type(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-config.html#aws-resource-groundstation-config-return-values"""
return GetAtt(resource=self, attr_name="Type")
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-config.html#aws-resource-groundstation-config-return-values"""
return GetAtt(resource=self, attr_name="Arn")
@property
def rv_Id(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-config.html#aws-resource-groundstation-config-return-values"""
return GetAtt(resource=self, attr_name="Id")
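Each `rv_*` property above wraps a CloudFormation attribute name in a `GetAtt` object. A minimal, self-contained sketch of that pattern (the `GetAtt` and `DemoConfig` classes here are hypothetical stand-ins for illustration, not the library's real `GetAtt`/`Resource`):

```python
# Sketch: each rv_* property returns a GetAtt object that serializes to a
# CloudFormation ``Fn::GetAtt`` intrinsic (names below are illustrative).
class GetAtt:
    def __init__(self, resource, attr_name):
        self.resource = resource
        self.attr_name = attr_name

    def serialize(self):
        # CloudFormation expects ["LogicalId", "AttributeName"]
        return {"Fn::GetAtt": [self.resource.logic_id, self.attr_name]}


class DemoConfig:
    def __init__(self, logic_id):
        self.logic_id = logic_id

    @property
    def rv_Arn(self):
        return GetAtt(resource=self, attr_name="Arn")


cfg = DemoConfig("MyGroundStationConfig")
print(cfg.rv_Arn.serialize())
# → {'Fn::GetAtt': ['MyGroundStationConfig', 'Arn']}
```

This keeps template references lazy: the intrinsic is only rendered when the template is serialized, so the resource's logical id can still change beforehand.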
@attr.s
class DataflowEndpointGroup(Resource):
"""
AWS Object Type = "AWS::GroundStation::DataflowEndpointGroup"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-dataflowendpointgroup.html
Property Document:
- ``rp_EndpointDetails``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-dataflowendpointgroup.html#cfn-groundstation-dataflowendpointgroup-endpointdetails
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-dataflowendpointgroup.html#cfn-groundstation-dataflowendpointgroup-tags
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::DataflowEndpointGroup"
rp_EndpointDetails: typing.List[typing.Union['PropDataflowEndpointGroupEndpointDetails', dict]] = attr.ib(
default=None,
converter=PropDataflowEndpointGroupEndpointDetails.from_list,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropDataflowEndpointGroupEndpointDetails), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "EndpointDetails"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-dataflowendpointgroup.html#cfn-groundstation-dataflowendpointgroup-endpointdetails"""
p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
default=None,
converter=Tag.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-dataflowendpointgroup.html#cfn-groundstation-dataflowendpointgroup-tags"""
@property
def rv_Id(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-dataflowendpointgroup.html#aws-resource-groundstation-dataflowendpointgroup-return-values"""
return GetAtt(resource=self, attr_name="Id")
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-dataflowendpointgroup.html#aws-resource-groundstation-dataflowendpointgroup-return-values"""
return GetAtt(resource=self, attr_name="Arn")
@attr.s
class MissionProfile(Resource):
"""
AWS Object Type = "AWS::GroundStation::MissionProfile"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html
Property Document:
- ``rp_DataflowEdges``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#cfn-groundstation-missionprofile-dataflowedges
- ``rp_MinimumViableContactDurationSeconds``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#cfn-groundstation-missionprofile-minimumviablecontactdurationseconds
- ``rp_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#cfn-groundstation-missionprofile-name
- ``rp_TrackingConfigArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#cfn-groundstation-missionprofile-trackingconfigarn
- ``p_ContactPostPassDurationSeconds``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#cfn-groundstation-missionprofile-contactpostpassdurationseconds
- ``p_ContactPrePassDurationSeconds``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#cfn-groundstation-missionprofile-contactprepassdurationseconds
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#cfn-groundstation-missionprofile-tags
"""
AWS_OBJECT_TYPE = "AWS::GroundStation::MissionProfile"
rp_DataflowEdges: typing.List[typing.Union['PropMissionProfileDataflowEdge', dict]] = attr.ib(
default=None,
converter=PropMissionProfileDataflowEdge.from_list,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropMissionProfileDataflowEdge), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "DataflowEdges"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#cfn-groundstation-missionprofile-dataflowedges"""
rp_MinimumViableContactDurationSeconds: int = attr.ib(
default=None,
validator=attr.validators.instance_of(int),
metadata={AttrMeta.PROPERTY_NAME: "MinimumViableContactDurationSeconds"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#cfn-groundstation-missionprofile-minimumviablecontactdurationseconds"""
rp_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#cfn-groundstation-missionprofile-name"""
rp_TrackingConfigArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "TrackingConfigArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#cfn-groundstation-missionprofile-trackingconfigarn"""
p_ContactPostPassDurationSeconds: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "ContactPostPassDurationSeconds"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#cfn-groundstation-missionprofile-contactpostpassdurationseconds"""
p_ContactPrePassDurationSeconds: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "ContactPrePassDurationSeconds"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#cfn-groundstation-missionprofile-contactprepassdurationseconds"""
p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
default=None,
converter=Tag.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#cfn-groundstation-missionprofile-tags"""
@property
def rv_Id(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#aws-resource-groundstation-missionprofile-return-values"""
return GetAtt(resource=self, attr_name="Id")
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#aws-resource-groundstation-missionprofile-return-values"""
return GetAtt(resource=self, attr_name="Arn")
@property
def rv_Region(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-groundstation-missionprofile.html#aws-resource-groundstation-missionprofile-return-values"""
return GetAtt(resource=self, attr_name="Region")
| 63.737888 | 244 | 0.778908 | 5,123 | 51,309 | 7.718524 | 0.027913 | 0.098983 | 0.04145 | 0.064058 | 0.8865 | 0.885464 | 0.864524 | 0.805801 | 0.805726 | 0.803551 | 0 | 0.000541 | 0.099144 | 51,309 | 804 | 245 | 63.817164 | 0.854939 | 0.358085 | 0 | 0.46778 | 0 | 0 | 0.114229 | 0.085221 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019093 | false | 0.009547 | 0.009547 | 0 | 0.298329 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7f2ba64c9ab9651acf1caf7619ae77a3c3e13afc | 31 | py | Python | livemark/plugins/blog/__init__.py | AyrtonB/livemark | f8c49d449ea6242c674cf345823468aaabea6e6b | [
"MIT"
] | 73 | 2021-06-07T13:28:36.000Z | 2022-03-26T05:37:59.000Z | livemark/plugins/blog/__init__.py | AyrtonB/livemark | f8c49d449ea6242c674cf345823468aaabea6e6b | [
"MIT"
] | 120 | 2021-06-04T12:51:01.000Z | 2022-03-21T11:11:36.000Z | livemark/plugins/blog/__init__.py | AyrtonB/livemark | f8c49d449ea6242c674cf345823468aaabea6e6b | [
"MIT"
] | 7 | 2021-09-22T11:38:26.000Z | 2022-03-26T05:35:58.000Z | from .plugin import BlogPlugin
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7f4481341f42853f354216de7d09f8b31f721cff | 10,579 | py | Python | hallo/test/modules/convert/test_convert_unit_remove_name.py | SpangleLabs/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 1 | 2022-01-27T13:25:01.000Z | 2022-01-27T13:25:01.000Z | hallo/test/modules/convert/test_convert_unit_remove_name.py | joshcoales/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 75 | 2015-09-26T18:07:18.000Z | 2022-01-04T07:15:11.000Z | hallo/test/modules/convert/test_convert_unit_remove_name.py | SpangleLabs/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 1 | 2021-04-10T12:02:47.000Z | 2021-04-10T12:02:47.000Z | import unittest
from hallo.events import EventMessage
from hallo.test.modules.convert.convert_function_test_base import ConvertFunctionTestBase
class ConvertUnitRemoveNameTest(ConvertFunctionTestBase, unittest.TestCase):
def test_invalid_type_specified(self):
names1a = len(self.test_unit1a.name_list)
self.function_dispatcher.dispatch(
EventMessage(
self.server,
None,
self.test_user,
"convert unit remove name type=new_type unit=unit1a del=added_name",
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert "invalid type specified" in data[0].text.lower()
assert len(self.test_unit1a.name_list) == names1a
def test_no_units_match_specified_unit_and_specified_del_name_1(self):
names1a = len(self.test_unit1a.name_list)
self.function_dispatcher.dispatch(
EventMessage(
self.server,
None,
self.test_user,
"convert unit remove name unit=unit1a remove=not_a_name",
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert "there are no units matching that description" in data[0].text.lower()
assert len(self.test_unit1a.name_list) == names1a
def test_no_units_match_specified_unit_and_specified_del_name_2(self):
self.test_unit1a.add_name("added_name")
names1a = len(self.test_unit1a.name_list)
self.function_dispatcher.dispatch(
EventMessage(
self.server,
None,
self.test_user,
"convert unit remove name unit=unit1aa remove=added_name",
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert "there are no units matching that description" in data[0].text.lower()
assert len(self.test_unit1a.name_list) == names1a
def test_no_units_match_specified_unit_and_specified_del_name_3(self):
names1a = len(self.test_unit1a.name_list)
self.function_dispatcher.dispatch(
EventMessage(
self.server,
None,
self.test_user,
"convert unit remove name unit=unit1aa remove=not_a_name",
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert "there are no units matching that description" in data[0].text.lower()
assert len(self.test_unit1a.name_list) == names1a
def test_no_units_match_specified_del_name(self):
self.function_dispatcher.dispatch(
EventMessage(
self.server,
None,
self.test_user,
"convert unit remove name remove=not_a_name",
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert "there are no units matching that description" in data[0].text.lower()
def test_no_units_match_specified_unit_and_del_name_1(self):
names1a = len(self.test_unit1a.name_list)
self.function_dispatcher.dispatch(
EventMessage(
self.server,
None,
self.test_user,
"convert unit remove name unit=unit1a not_a_name",
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert "there are no units matching that description" in data[0].text.lower()
assert len(self.test_unit1a.name_list) == names1a
def test_no_units_match_specified_unit_and_del_name_2(self):
self.test_unit1a.add_name("added_name")
names1a = len(self.test_unit1a.name_list)
self.function_dispatcher.dispatch(
EventMessage(
self.server,
None,
self.test_user,
"convert unit remove name unit=unit1aa added_name",
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert "there are no units matching that description" in data[0].text.lower()
assert len(self.test_unit1a.name_list) == names1a
def test_no_units_match_specified_unit_and_del_name_3(self):
names1a = len(self.test_unit1a.name_list)
self.function_dispatcher.dispatch(
EventMessage(
self.server,
None,
self.test_user,
"convert unit remove name unit=unit1aa not_a_name",
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert "there are no units matching that description" in data[0].text.lower()
assert len(self.test_unit1a.name_list) == names1a
def test_multiple_units_match_name(self):
names1b = len(self.test_unit1b.name_list)
names2b = len(self.test_unit2b.name_list)
self.function_dispatcher.dispatch(
EventMessage(
self.server,
None,
self.test_user,
"convert unit remove name del=same_name",
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert "it is ambiguous which unit you refer to" in data[0].text.lower()
assert len(self.test_unit1b.name_list) == names1b
assert len(self.test_unit2b.name_list) == names2b
def test_cant_remove_last_name(self):
assert len(self.test_unit1a.name_list) == 1
self.function_dispatcher.dispatch(
EventMessage(
self.server, None, self.test_user, "convert unit remove name del=unit1a"
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert "unit only has 1 name" in data[0].text.lower()
assert "cannot remove its last name" in data[0].text.lower()
assert len(self.test_unit1a.name_list) == 1
def test_remove_by_just_del_name_1(self):
names1b = len(self.test_unit1b.name_list)
fallback_name = self.test_unit1b.name_list[1]
self.function_dispatcher.dispatch(
EventMessage(
self.server, None, self.test_user, "convert unit remove name del=unit1b"
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert 'removed name "unit1b"' in data[0].text.lower()
assert 'from "{}" unit'.format(fallback_name) in data[0].text.lower()
assert len(self.test_unit1b.name_list) == names1b - 1
def test_remove_by_just_del_name_2(self):
names1b = len(self.test_unit1b.name_list)
fallback_name = self.test_unit1b.name_list[1]
self.function_dispatcher.dispatch(
EventMessage(
self.server, None, self.test_user, "convert unit remove name unit1b"
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert 'removed name "unit1b"' in data[0].text.lower()
assert 'from "{}" unit'.format(fallback_name) in data[0].text.lower()
assert len(self.test_unit1b.name_list) == names1b - 1
def test_remove_by_unit_and_del_name_1(self):
self.test_unit1b.add_name("added_name")
names1b = len(self.test_unit1b.name_list)
self.function_dispatcher.dispatch(
EventMessage(
self.server,
None,
self.test_user,
"convert unit remove name unit=unit1b del=added_name",
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert 'removed name "added_name"' in data[0].text.lower()
assert 'from "unit1b" unit' in data[0].text.lower()
assert len(self.test_unit1b.name_list) == names1b - 1
def test_remove_by_unit_and_del_name_2(self):
self.test_unit1b.add_name("added_name")
names1b = len(self.test_unit1b.name_list)
self.function_dispatcher.dispatch(
EventMessage(
self.server,
None,
self.test_user,
"convert unit remove name unit1b del=added_name",
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert 'removed name "added_name"' in data[0].text.lower()
assert 'from "unit1b" unit' in data[0].text.lower()
assert len(self.test_unit1b.name_list) == names1b - 1
def test_remove_by_unit_and_del_name_3(self):
self.test_unit1b.add_name("added_name")
names1b = len(self.test_unit1b.name_list)
self.function_dispatcher.dispatch(
EventMessage(
self.server,
None,
self.test_user,
"convert unit remove name unit=unit1b added_name",
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert 'removed name "added_name"' in data[0].text.lower()
assert 'from "unit1b" unit' in data[0].text.lower()
assert len(self.test_unit1b.name_list) == names1b - 1
def test_remove_by_unit_and_del_name_specifying_type(self):
self.test_unit1b.add_name("added_name")
self.test_unit2b.add_name("added_name")
names1b = len(self.test_unit1b.name_list)
names2b = len(self.test_unit2b.name_list)
self.function_dispatcher.dispatch(
EventMessage(
self.server,
None,
self.test_user,
"convert unit remove name unit=same_name del=added_name",
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert "it is ambiguous which unit you refer to" in data[0].text.lower()
assert len(self.test_unit1b.name_list) == names1b
assert len(self.test_unit2b.name_list) == names2b
self.function_dispatcher.dispatch(
EventMessage(
self.server,
None,
self.test_user,
"convert unit remove name type=test_type1 unit=same_name del=added_name",
)
)
data = self.server.get_send_data(1, self.test_user, EventMessage)
assert 'removed name "added_name"' in data[0].text.lower()
assert 'from "unit1b" unit' in data[0].text.lower()
assert len(self.test_unit1b.name_list) == names1b - 1
assert len(self.test_unit2b.name_list) == names2b
| 41.980159 | 89 | 0.620096 | 1,316 | 10,579 | 4.733283 | 0.06459 | 0.101461 | 0.063574 | 0.042382 | 0.935624 | 0.935624 | 0.935303 | 0.931129 | 0.908974 | 0.908974 | 0 | 0.021606 | 0.291237 | 10,579 | 251 | 90 | 42.14741 | 0.809149 | 0 | 0 | 0.678112 | 0 | 0 | 0.150109 | 0 | 0 | 0 | 0 | 0 | 0.188841 | 1 | 0.06867 | false | 0 | 0.012876 | 0 | 0.085837 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7f8593e8e7b82f163c7b8036ced45a7e0b451eb8 | 69 | py | Python | src/commercetools/importapi/__init__.py | lime-green/commercetools-python-sdk | 63b77f6e5abe43e2b3ebbf3cdbbe00c7cf80dca6 | [
"MIT"
] | 1 | 2021-04-07T20:01:30.000Z | 2021-04-07T20:01:30.000Z | src/commercetools/platform/__init__.py | lime-green/commercetools-python-sdk | 63b77f6e5abe43e2b3ebbf3cdbbe00c7cf80dca6 | [
"MIT"
] | null | null | null | src/commercetools/platform/__init__.py | lime-green/commercetools-python-sdk | 63b77f6e5abe43e2b3ebbf3cdbbe00c7cf80dca6 | [
"MIT"
] | null | null | null | # Generated file, please do not change!!!
from .client import Client
| 23 | 41 | 0.753623 | 10 | 69 | 5.2 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15942 | 69 | 2 | 42 | 34.5 | 0.896552 | 0.565217 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7f8d92a7557b5c60c61137a3f8300aee22ab0f91 | 1,625 | py | Python | Pardee_Theme/build/lib/Pardee_Package/style_dictionaries.py | PardeeNerdy/Pardee_Visualization_Package | 5dfa1a56adc45ef7c33e23ffeb8c9dec6b9a44ca | [
"MIT"
] | 2 | 2020-07-09T14:02:27.000Z | 2020-07-20T15:29:19.000Z | Pardee_Package/style_dictionaries.py | Coryv221/Pardee_Theme | 32ebf18a3a5f0e7d37ac7bf773bc26efb807a5b3 | [
"MIT"
] | null | null | null | Pardee_Package/style_dictionaries.py | Coryv221/Pardee_Theme | 32ebf18a3a5f0e7d37ac7bf773bc26efb807a5b3 | [
"MIT"
] | null | null | null | from Pardee_Package import color_palletes as pardee_colors
standard_style = {
'axes.facecolor': pardee_colors.off_white(),
'axes.edgecolor': '.8',
'axes.grid': True,
'axes.axisbelow': True,
'axes.labelcolor': '.15',
'figure.facecolor': 'white',
'grid.color': '.8',
'grid.linestyle': '-',
'text.color': '.15',
'xtick.color': '.15',
'ytick.color': '.15',
'xtick.direction': 'out',
'ytick.direction': 'out',
'lines.solid_capstyle': 'round',
'patch.edgecolor': 'w',
'patch.force_edgecolor': True,
'image.cmap': 'rocket',
'font.family': ['Trebuchet MS'],
'font.sans-serif': ['Trebuchet MS'],
'xtick.bottom': False,
'xtick.top': False,
'ytick.left': False,
'ytick.right': False,
'axes.spines.left': True,
'axes.spines.bottom': True,
'axes.spines.right': True,
'axes.spines.top': True
}
grayscale_style = {
'axes.facecolor': 'white',
'axes.edgecolor': '.8',
'axes.grid': True,
'axes.axisbelow': True,
'axes.labelcolor': '.15',
'figure.facecolor': 'white',
'grid.color': '.8',
'grid.linestyle': 'solid',
'text.color': '.15',
'xtick.color': '.15',
'ytick.color': '.15',
'xtick.direction': 'out',
'ytick.direction': 'out',
'lines.solid_capstyle': 'round',
'patch.edgecolor': 'w',
'patch.force_edgecolor': True,
'image.cmap': 'rocket',
'font.family': ['Times New Roman'],
'font.serif': ['Times New Roman'],
'xtick.bottom': False,
'xtick.top': False,
'ytick.left': False,
'ytick.right': False,
'axes.spines.left': True,
'axes.spines.bottom': True,
'axes.spines.right': True,
'axes.spines.top': True
}
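The two dictionaries are meant to configure the same settings with different values (e.g. for `matplotlib.rcParams.update(...)` or seaborn's `set_style(rc=...)`). A quick, self-contained way to diff two rc-style dictionaries like these; the dicts below are small illustrative subsets, not the full dictionaries above:

```python
# Diff two rc-style theme dictionaries to see which settings a theme swaps.
standard = {
    'axes.facecolor': '#f5f5f0',  # stand-in for pardee_colors.off_white()
    'grid.linestyle': '-',
    'axes.grid': True,
}
grayscale = {
    'axes.facecolor': 'white',
    'grid.linestyle': 'solid',
    'axes.grid': True,
}

# Keys present in both themes but carrying different values:
diffs = {k: (standard[k], grayscale[k]) for k in standard if standard[k] != grayscale[k]}
print(sorted(diffs))  # → ['axes.facecolor', 'grid.linestyle']
```

Keeping the key sets identical between themes makes them interchangeable drop-ins, which is why only the values (colors, linestyles, fonts) should differ.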
| 26.209677 | 59 | 0.616 | 198 | 1,625 | 5 | 0.267677 | 0.080808 | 0.084848 | 0.038384 | 0.79596 | 0.79596 | 0.79596 | 0.79596 | 0.79596 | 0.79596 | 0 | 0.014609 | 0.157538 | 1,625 | 61 | 60 | 26.639344 | 0.708546 | 0 | 0 | 0.779661 | 0 | 0 | 0.565857 | 0.026854 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.016949 | 0 | 0.016949 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f6c5978aade9368dd85c8bdc559e0884425c1182 | 44 | py | Python | menu/menu.py | cr33dog/pyxfce | ce3fa5e8c556e14a8127d67192484fe2f59b5595 | [
"BSD-3-Clause"
] | 4 | 2017-08-23T06:32:19.000Z | 2019-11-05T09:59:24.000Z | menu/menu.py | cr33dog/pyxfce | ce3fa5e8c556e14a8127d67192484fe2f59b5595 | [
"BSD-3-Clause"
] | null | null | null | menu/menu.py | cr33dog/pyxfce | ce3fa5e8c556e14a8127d67192484fe2f59b5595 | [
"BSD-3-Clause"
] | 2 | 2017-09-03T17:32:12.000Z | 2021-02-27T20:12:34.000Z | #!/usr/bin/env python
from _menu import *
| 8.8 | 21 | 0.681818 | 7 | 44 | 4.142857 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 44 | 4 | 22 | 11 | 0.805556 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f6c781ad361787ef2ab3d5226e8437261d1b71de | 34 | py | Python | discord/types/guild.py | Harukomaze/disnake | 541f5c9623a02be894cd1015dbb344070700cb87 | [
"MIT"
] | null | null | null | discord/types/guild.py | Harukomaze/disnake | 541f5c9623a02be894cd1015dbb344070700cb87 | [
"MIT"
] | null | null | null | discord/types/guild.py | Harukomaze/disnake | 541f5c9623a02be894cd1015dbb344070700cb87 | [
"MIT"
] | null | null | null | from disnake.types.guild import *
| 17 | 33 | 0.794118 | 5 | 34 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f6c7cec1c8cacd806a59f473fdb3f8f816a8543b | 8,334 | py | Python | imageatm/client/client.py | vishalbelsare/imageatm | 2da9d1902d375338e4363ebf95bcb8ac2bb47ac7 | [
"Apache-2.0"
] | 215 | 2019-03-18T10:51:53.000Z | 2022-01-28T20:04:32.000Z | imageatm/client/client.py | vishalbelsare/imageatm | 2da9d1902d375338e4363ebf95bcb8ac2bb47ac7 | [
"Apache-2.0"
] | 21 | 2019-03-19T09:13:20.000Z | 2020-05-02T10:18:02.000Z | imageatm/client/client.py | vishalbelsare/imageatm | 2da9d1902d375338e4363ebf95bcb8ac2bb47ac7 | [
"Apache-2.0"
] | 54 | 2019-03-18T16:39:37.000Z | 2022-03-31T10:16:48.000Z | import click
from imageatm.client import commands
from imageatm.client.config import Config
# creates config object that will be passed from cli to subcommands
pass_config = click.make_pass_decorator(Config, ensure=True)
# cli is the group parent, it gets run before any of the subcommands are run
@click.group()
@pass_config
def cli(config: Config):
pass
@cli.command()
@click.argument('config-file', type=click.Path())
@click.option('--image-dir', type=click.Path(), help='Directory with image files.')
@click.option('--samples-file', type=click.Path(), help='JSON file with samples.')
@click.option(
'--job-dir',
type=click.Path(),
help=('Directory with train, val, and test samples files and class_mapping file.'),
)
@click.option('--provider', help='Cloud provider, currently supported: [aws].')
@click.option('--instance-type', help='Cloud instance_type [aws].')
@click.option('--region', help='Cloud region [aws].')
@click.option('--vpc-id', help='Cloud VPC id [aws].')
@click.option('--bucket', help='Cloud bucket used for persistence [aws].')
@click.option('--tf-dir', help='Directory with Terraform configs [aws].')
@click.option('--train-cloud', is_flag=True, required=False, help='Run training in cloud [aws].')
@click.option('--destroy', is_flag=True, required=False, help='Destroys cloud.')
@click.option('--resize', is_flag=True, required=False, help='Resizes images in dataprep.')
@click.option('--batch-size', type=int, help='Batch size.', required=False)
@click.option(
'--epochs-train-dense',
type=int,
help='Number of epochs train only dense layer.',
required=False,
)
@click.option(
'--epochs-train-all', type=int, help='Number of epochs train all layers.', required=False
)
@click.option(
'--learning-rate-dense', type=float, help='Learning rate dense layers.', required=False
)
@click.option('--learning-rate-all', type=float, help='Learning rate all layers.', required=False)
@click.option(
'--base-model-name', help='Pretrained CNN to be used for transfer learning.', required=False
)
@click.option(
'--create-report', is_flag=True, required=False, help='Create evaluation report via jupyter notebook.'
)
@click.option(
'--kernel-name', required=False, help='Kernel name for the Jupyter notebook.'
)
@click.option(
'--export-html', is_flag=True, required=False, help='Export evaluation report to html.'
)
@click.option(
'--export-pdf', is_flag=True, required=False, help='Export evaluation report to pdf.'
)
@click.option(
'--cloud-tag', help='Tag under which all cloud resources are created.', required=False
)
@pass_config
def pipeline(config: Config, **kwargs):
"""Runs all components for which run=True in config file.
All activated (run=True) components from config file will be run in sequence. Options overwrite the config file.
The config file is the only way to define pipeline components.
Args:
config-file: Central configuration file.
"""
commands.pipeline(config, **kwargs)
@cli.command()
@click.option('--config-file', type=click.Path(), help='Central configuration file.')
@click.option('--image-dir', type=click.Path(), help='Directory with image files.')
@click.option('--samples-file', type=click.Path(), help='JSON file with samples.')
@click.option(
'--job-dir',
type=click.Path(),
help=('Directory with train, val, and test samples files and class_mapping file.'),
)
@click.option(
'--resize',
is_flag=True,
required=False,
help='Resizes images and stores them in _resized subfolder.',
)
@pass_config
def dataprep(config: Config, **kwargs):
"""Run data preparation and create job dir.
Creates a directory (job_dir) with the following files:
- train_samples.json
- val_samples.json
- test_samples.json
- class_mapping.json
"""
commands.dataprep(config, **kwargs)
@cli.command()
@click.option('--config-file', type=click.Path(), help='Central configuration file.')
@click.option('--image-dir', type=click.Path(), help='Directory with image files.')
@click.option(
'--job-dir',
type=click.Path(),
help=('Directory with train, val, and test samples files and class_mapping file.'),
)
@click.option('--provider', help='Cloud provider, currently supported: [aws].')
@click.option('--instance-type', help='Cloud instance_type [aws].')
@click.option('--region', help='Cloud region [aws].')
@click.option('--vpc-id', help='Cloud VPC id [aws].')
@click.option('--bucket', help='Cloud bucket used for persistence [aws].')
@click.option('--tf-dir', help='Directory with Terraform configs [aws].')
@click.option('--train-cloud', is_flag=True, required=False, help='Run training in cloud [aws].')
@click.option('--destroy', is_flag=True, required=False, help='Destroys cloud.')
@click.option('--batch-size', type=int, help='Batch size.', required=False)
@click.option(
'--epochs-train-dense',
type=int,
help='Number of epochs train only dense layer.',
required=False,
)
@click.option(
'--epochs-train-all', type=int, help='Number of epochs train all layers.', required=False
)
@click.option(
'--learning-rate-dense', type=float, help='Learning rate dense layers.', required=False
)
@click.option('--learning-rate-all', type=float, help='Learning rate all layers.', required=False)
@click.option(
'--base-model-name', help='Pretrained CNN to be used for transfer learning.', required=False
)
@click.option(
'--cloud-tag', help='Tag under which all cloud resources are created.', required=False
)
@pass_config
def train(config: Config, **kwargs):
"""Train a CNN.
Fine-tunes an ImageNet pre-trained CNN. The number of classes are derived from train_samples.json.
After each epoch the model will be evaluated on val_samples.json.
The best model (based on valuation accuracy) will be saved.
Args:
image_dir: Directory with image files.
job_dir: Directory with train_samples, val_samples, and class_mapping.json.
"""
commands.train(config, **kwargs)
@cli.command()
@click.option('--config-file', type=click.Path(), help='Central configuration file.')
@click.option('--image-dir', type=click.Path(), help='Directory with image files.')
@click.option(
'--job-dir', type=click.Path(), help=('Directory with test samples files and trained model.')
)
@click.option(
'--create-report', is_flag=True, required=False, help='Create evaluation report via jupyter notebook.'
)
@click.option(
    '--kernel-name', required=False, help='Kernel name for jupyter notebook.'
)
@click.option(
'--export-html', is_flag=True, required=False, help='Export evaluation report to html.'
)
@click.option(
'--export-pdf', is_flag=True, required=False, help='Export evaluation report to pdf.'
)
@pass_config
def evaluate(config: Config, **kwargs):
"""Evaluate a trained model.
Evaluation will be performed on test_samples.json.
Args:
image_dir: Directory with image files.
job_dir: Directory with test_samples.json and class_mapping.json.
"""
commands.evaluate(config, **kwargs)
@cli.command()
@click.option('--config-file', type=click.Path(), help='Central configuration file.')
@click.option(
'--job-dir', type=click.Path(), help=('Directory with test samples files and trained model.')
)
@click.option('--provider', help='Cloud provider, currently supported: [aws].')
@click.option('--instance-type', help='Cloud instance_type [aws].')
@click.option('--region', help='Cloud region [aws].')
@click.option('--vpc-id', help='Cloud VPC id [aws].')
@click.option('--bucket', help='Cloud bucket used for persistence [aws].')
@click.option('--tf-dir', help='Directory with Terraform configs [aws].')
@click.option('--train-cloud', is_flag=True, required=False, help='Run training in cloud [aws].')
@click.option('--destroy', is_flag=True, required=False, help='Destroys cloud.')
@click.option('--no-destroy', is_flag=True, required=False, help='Keeps cloud.')
@click.option(
'--cloud-tag', help='Tag under which all cloud resources are created.', required=False
)
@pass_config
def cloud(config: Config, **kwargs):
"""Launch/destroy a cloud compute instance.
Launch/destroy cloud instances with Terraform based on Terraform files in tf_dir.
"""
commands.cloud(config, **kwargs)
# Source: src/euler_python_package/euler_python/medium/p163.py (wilsonify/euler, MIT)
def problem163():
pass
# Source: tests/support/constants.py (calvdee/weather-network-7-day-forecast-scraper, Unlicense/MIT)
IMAGE_PATH = '/Users/calvindelima/code/weekly-rain-day-predictions/weather-network-7-day-forecast-scraper/tests/screenshot.png'
WN_FORECAST_DATA_PATH = '/Users/calvindelima/code/weekly-rain-day-predictions/weather-network-7-day-forecast-scraper/tests/wn_forecast_data.csv'
FORECAST_URL = 'https://www.theweathernetwork.com/ca/14-day-weather-trend/ontario/london'
#!/usr/bin/env python
# Source: pynobody/propagator.py (ookuyan/pynobody, Apache-2.0)
import numpy as np
def forward_euler(position, velocity):
pass
# -*- coding: utf-8 -*-
# Source: graphpype/utils_stats.py (EtienneCmb/graphpype, BSD-3-Clause)
import scipy.stats as stat
import numpy as np
import itertools as it
import time  # used by compute_nodewise_t_test_vect below
def info_CI(X,Y):
""" Compute binomial comparaison
"""
nX = len(X) * 1.
nY = len(Y) * 1.
pX = np.sum(X == 1)/nX
pY = np.sum(Y == 1)/nY
#print pX,pY,np.absolute(pX-pY)
SE = np.sqrt(pX * (1-pX)/nX + pY * (1-pY)/nY)
#print SE
#if (np.absolute(pX-pY) > norm) == True:
#print pX,pY,np.absolute(pX-pY),norm
return np.absolute(pX-pY),SE,np.sign(pX-pY)
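A minimal usage sketch for `info_CI` (the function body is restated inline so the snippet runs standalone; the example vectors are hypothetical):

```python
import numpy as np

def info_CI(X, Y):
    # proportion of ones per group and standard error of the difference
    nX = len(X) * 1.
    nY = len(Y) * 1.
    pX = np.sum(X == 1) / nX
    pY = np.sum(Y == 1) / nY
    SE = np.sqrt(pX * (1 - pX) / nX + pY * (1 - pY) / nY)
    return np.absolute(pX - pY), SE, np.sign(pX - pY)

X = np.array([1, 0, 1, 1, 0, 1, 1, 0])  # pX = 5/8
Y = np.array([0, 0, 1, 0, 0, 1, 0, 0])  # pY = 2/8
abs_diff, SE, sign_diff = info_CI(X, Y)
print(abs_diff, sign_diff)  # 0.375 1.0
```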
###################################################################################### pairwise/nodewise stats ###########################################################################################################
def return_signif_code(p_values,uncor_alpha = 0.05,fdr_alpha = 0.05,bon_alpha = 0.05):
#print p_values
#print uncor_alpha,fdr_alpha,bon_alpha
N = p_values.shape[0]
order = p_values.argsort()
#print order
sorted_p_values= p_values[order]
#print sorted_p_values
### by default, code = 1 (cor at 0.05)
signif_code = np.ones(shape = N)
################ uncor #############################
### code = 0 for all correlation below uncor_alpha
signif_code[p_values > uncor_alpha] = 0
#print p_values[p_values > uncor_alpha]
#print p_values < uncor_alpha
#print p_values == uncor_alpha
##print signif_code
################ FPcor #############################
if 1.0/N < uncor_alpha:
signif_code[p_values < 1.0/N] = 2
################ fdr ###############################
seq = np.arange(N,0,-1)
seq_fdr_p_values = fdr_alpha/seq
#print seq_fdr_p_values
signif_sorted = sorted_p_values < seq_fdr_p_values
signif_code[order[signif_sorted]] = 3
################# bonferroni #######################
signif_code[p_values < bon_alpha/N] = 4
#print signif_code
return signif_code
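The coding scheme above (0 = n.s., 1 = uncorrected, 2 = below 1/N, 3 = FDR, 4 = Bonferroni) can be checked on a toy p-value vector; this is a compact re-implementation of the same thresholding for illustration, not the library function itself:

```python
import numpy as np

def signif_code(p, uncor=0.05, fdr=0.05, bon=0.05):
    # 0 n.s., 1 uncorrected, 2 < 1/N, 3 FDR, 4 Bonferroni (same order as above)
    N = p.shape[0]
    code = np.ones(N)
    code[p > uncor] = 0
    if 1.0 / N < uncor:
        code[p < 1.0 / N] = 2
    order = p.argsort()
    ladder = fdr / np.arange(N, 0, -1)      # per-rank FDR thresholds
    code[order[p[order] < ladder]] = 3
    code[p < bon / N] = 4
    return code

p = np.array([0.001, 0.2, 0.04, 0.0001])
codes = signif_code(p)
print(codes)  # [4. 0. 1. 4.]
```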
def return_signif_code_Z(Z_values,conf_interval_binom_fdr = 0.05):
print(Z_values)
N = Z_values.shape[0]
order = Z_values.argsort()
#order = np_list_diff[:,2].argsort()
#print order
#sort_np_list_diff = np_list_diff[order[::-1]]
#print order
sorted_Z_values= Z_values[order[::-1]]
print(sorted_Z_values)
### by default, code = 1 (cor at 0.05)
signif_code = np.ones(shape = N)
################ uncor #############################
### code = 0 for all correlation below uncor_alpha
Z_uncor = stat.norm.ppf(1-conf_interval_binom_fdr/2)
print(Z_uncor)
signif_code[Z_values < Z_uncor] = 0
print(np.sum(signif_code != 0))
################ FPcor #############################
Z_FPcor = stat.norm.ppf(1-(1.0/(2*N)))
print(Z_FPcor)
signif_code[Z_values > Z_FPcor] = 2
print(np.sum(signif_code == 2))
################ fdr ###############################
seq = np.arange(N,0,-1)
seq_fdr_p_values = conf_interval_binom_fdr/seq
seq_Z_val = stat.norm.ppf(1-seq_fdr_p_values/2)
#print seq_fdr_p_values
signif_sorted = sorted_Z_values > seq_Z_val
signif_code[order[signif_sorted]] = 3
print(np.sum(signif_code == 3))
################# bonferroni #######################
Z_bon = stat.norm.ppf(1-conf_interval_binom_fdr/(2*N))
print(Z_bon)
signif_code[Z_values > Z_bon] = 4
print(np.sum(signif_code == 4))
return signif_code
################################################ pairwise stats ##########################################
def compute_pairwise_binom(X,Y,conf_interval_binom):
# number of nodes
N = X.shape[0]
# Perform binomial test at each edge
ADJ = np.zeros((N,N),dtype = 'int')
for i,j in it.combinations(list(range(N)), 2):
ADJ[i,j] = ADJ[j,i] = binom_CI_test(X[i,j,:],Y[i,j,:],conf_interval_binom)
return ADJ
def compute_pairwise_ttest_fdr(X,Y, cor_alpha, uncor_alpha, paired = True,old_order = True, keep_intracon = False):
# number of nodes
if old_order:
N = X.shape[0]
else:
N = X.shape[1]
if keep_intracon:
iter_indexes = it.combinations_with_replacement(list(range(N)), 2)
else:
iter_indexes = it.combinations(list(range(N)), 2)
if old_order:
print(X.shape)
list_diff = []
for i,j in iter_indexes:
X_nonan = X[i,j,np.logical_not(np.isnan(X[i,j,:]))]
Y_nonan = Y[i,j,np.logical_not(np.isnan(Y[i,j,:]))]
#print len(X_nonan),len(Y_nonan)
if len(X_nonan) < 2 or len(Y_nonan) < 2:
#if len(X_nonan) < 1 or len(Y_nonan) < 1:
#list_diff.append([i,j,1.0,0.0])
continue
if paired == True:
t_stat,p_val = stat.ttest_rel(X_nonan,Y_nonan)
if np.isnan(p_val):
print("Warning, unable to compute T-test: ")
print(t_stat,p_val,X_nonan,Y_nonan)
                ## not yet available (scipy version 0.18)
#t_stat,p_val = stat.ttest_rel(X[i,j,:],Y[i,j,:],nan_policy = 'omit')
else:
t_stat,p_val = stat.ttest_ind(X_nonan,Y_nonan)
#print t_stat,p_val
list_diff.append([i,j,p_val,np.sign(np.mean(X_nonan)-np.mean(Y_nonan)),t_stat])
else:
# number of nodes
assert X.shape[1] == X.shape[2] and Y.shape[1] == Y.shape[2], "Error, X {}{} and/or Y {}{} are not squared".format(X.shape[1],X.shape[2], Y.shape[1], Y.shape[2])
if paired:
assert X.shape[0] == Y.shape[0], "Error, X and Y are paired but do not have the same number od samples{}{}".format(X.shape[0],Y.shape[0])
#print X.shape
list_diff = []
for i,j in iter_indexes:
X_nonan = X[np.logical_not(np.isnan(X[:,i,j])),i,j]
Y_nonan = Y[np.logical_not(np.isnan(Y[:,i,j])),i,j]
#print len(X_nonan),len(Y_nonan)
if len(X_nonan) < 2 or len(Y_nonan) < 2:
#if len(X_nonan) < 1 or len(Y_nonan) < 1:
#list_diff.append([i,j,1.0,0.0])
continue
if paired == True:
t_stat,p_val = stat.ttest_rel(X_nonan,Y_nonan)
if np.isnan(p_val):
print("Warning, unable to compute T-test: ")
#print t_stat,p_val,X_nonan,Y_nonan
                ## not yet available (scipy version 0.18)
#t_stat,p_val = stat.ttest_rel(X[i,j,:],Y[i,j,:],nan_policy = 'omit')
else:
t_stat,p_val = stat.ttest_ind(X_nonan,Y_nonan)
#print t_stat,p_val
list_diff.append([i,j,p_val,np.sign(np.mean(X_nonan)-np.mean(Y_nonan)),t_stat])
#print list_diff
assert len(list_diff) != 0, "Error, list_diff is empty"
np_list_diff = np.array(list_diff)
#print np_list_diff
signif_code = return_signif_code(np_list_diff[:,2],uncor_alpha = uncor_alpha,fdr_alpha = cor_alpha, bon_alpha = cor_alpha)
#print np.sum(signif_code == 0.0),np.sum(signif_code == 1.0),np.sum(signif_code == 2.0),np.sum(signif_code == 3.0),np.sum(signif_code == 4.0)
np_list_diff[:,3] = np_list_diff[:,3] * signif_code
#print np.sum(np_list_diff[:,3] == 0.0)
#print np.sum(np_list_diff[:,3] == 1.0),np.sum(np_list_diff[:,3] == 2.0),np.sum(np_list_diff[:,3] == 3.0),np.sum(np_list_diff[:,3] == 4.0)
#print np.sum(np_list_diff[:,3] == -1.0),np.sum(np_list_diff[:,3] == -2.0),np.sum(np_list_diff[:,3] == -3.0),np.sum(np_list_diff[:,3] == -4.0)
signif_signed_adj_mat = np.zeros((N,N),dtype = 'int')
p_val_mat = np.zeros((N,N),dtype = 'float')
T_stat_mat = np.zeros((N,N),dtype = 'float')
signif_i = np.array(np_list_diff[:,0],dtype = int)
signif_j = np.array(np_list_diff[:,1],dtype = int)
signif_signed_adj_mat[signif_i,signif_j] = signif_signed_adj_mat[signif_j,signif_i] = np.array(np_list_diff[:,3],dtype = int)
p_val_mat[signif_i,signif_j] = p_val_mat[signif_j,signif_i] = np_list_diff[:,2]
T_stat_mat[signif_i,signif_j] = T_stat_mat[signif_j,signif_i] = np_list_diff[:,4]
#print T_stat_mat
return signif_signed_adj_mat, p_val_mat, T_stat_mat
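The per-edge NaN filtering plus paired t-test done above amounts to the following for a single edge (hypothetical 3-node data in the new-order layout `(subjects, N, N)`):

```python
import numpy as np
import scipy.stats as stat

rng = np.random.RandomState(0)
X = rng.randn(10, 3, 3)                   # 10 subjects, 3x3 connectivity matrices
Y = X + 0.5 + 0.1 * rng.randn(10, 3, 3)   # shifted condition -> paired difference
X[0, 0, 1] = np.nan                       # one missing edge value

i, j = 0, 1
keep = np.logical_not(np.isnan(X[:, i, j])) & np.logical_not(np.isnan(Y[:, i, j]))
t_stat, p_val = stat.ttest_rel(X[keep, i, j], Y[keep, i, j])
print(keep.sum(), p_val < 0.05)  # 9 True
```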
def compute_pairwise_oneway_ttest_fdr(X, cor_alpha, uncor_alpha, old_order = True):
if old_order:
# number of nodes
N = X.shape[0]
#print X.shape
list_diff = []
for i,j in it.combinations(list(range(N)), 2):
X_nonan = X[i,j,np.logical_not(np.isnan(X[i,j,:]))]
#print len(X_nonan),len(Y_nonan)
if len(X_nonan) < 2 :
#if len(X_nonan) < 1 or len(Y_nonan) < 1:
#list_diff.append([i,j,1.0,0.0])
continue
t_stat,p_val = stat.ttest_1samp(X_nonan,0.0)
if np.isnan(p_val):
print("Warning, unable to compute T-test: ")
print(t_stat,p_val,X_nonan)
list_diff.append([i,j,p_val,np.sign(np.mean(X_nonan)),t_stat])
else:
# number of nodes
        assert X.shape[1] == X.shape[2], "Error, X {}{} is not squared".format(X.shape[1], X.shape[2])
        N = X.shape[1]
#print X.shape
list_diff = []
for i,j in it.combinations(list(range(N)), 2):
X_nonan = X[np.logical_not(np.isnan(X[:,i,j])),i,j]
#print len(X_nonan),len(Y_nonan)
if len(X_nonan) < 2:
#if len(X_nonan) < 1 or len(Y_nonan) < 1:
#list_diff.append([i,j,1.0,0.0])
continue
t_stat,p_val = stat.ttest_1samp(X_nonan,0.0)
if np.isnan(p_val):
print("Warning, unable to compute T-test: ")
print(t_stat,p_val,X_nonan, end=' ')
list_diff.append([i,j,p_val,np.sign(np.mean(X_nonan)),t_stat])
print(list_diff)
assert len(list_diff) != 0, "Error, list_diff is empty"
np_list_diff = np.array(list_diff)
print(np_list_diff)
signif_code = return_signif_code(np_list_diff[:,2],uncor_alpha = uncor_alpha,fdr_alpha = cor_alpha, bon_alpha = cor_alpha)
print(np.sum(signif_code == 0.0),np.sum(signif_code == 1.0),np.sum(signif_code == 2.0),np.sum(signif_code == 3.0),np.sum(signif_code == 4.0))
np_list_diff[:,3] = np_list_diff[:,3] * signif_code
print(np.sum(np_list_diff[:,3] == 0.0))
print(np.sum(np_list_diff[:,3] == 1.0),np.sum(np_list_diff[:,3] == 2.0),np.sum(np_list_diff[:,3] == 3.0),np.sum(np_list_diff[:,3] == 4.0))
print(np.sum(np_list_diff[:,3] == -1.0),np.sum(np_list_diff[:,3] == -2.0),np.sum(np_list_diff[:,3] == -3.0),np.sum(np_list_diff[:,3] == -4.0))
signif_signed_adj_mat = np.zeros((N,N),dtype = 'int')
p_val_mat = np.zeros((N,N),dtype = 'float')
T_stat_mat = np.zeros((N,N),dtype = 'float')
signif_i = np.array(np_list_diff[:,0],dtype = int)
signif_j = np.array(np_list_diff[:,1],dtype = int)
signif_signed_adj_mat[signif_i,signif_j] = signif_signed_adj_mat[signif_j,signif_i] = np.array(np_list_diff[:,3],dtype = int)
p_val_mat[signif_i,signif_j] = p_val_mat[signif_j,signif_i] = np_list_diff[:,2]
T_stat_mat[signif_i,signif_j] = T_stat_mat[signif_j,signif_i] = np_list_diff[:,4]
print(T_stat_mat)
return signif_signed_adj_mat, p_val_mat, T_stat_mat
def compute_pairwise_mannwhitney_fdr(X,Y,t_test_thresh_fdr,uncor_alpha = 0.01):
# number of nodes
N = X.shape[0]
list_diff = []
for i,j in it.combinations(list(range(N)), 2):
#t_stat_zalewski = ttest2(X[i,j,:],Y[i,j,:])
u_stat,p_val = stat.mannwhitneyu(X[i,j,:],Y[i,j,:],use_continuity = False)
list_diff.append([i,j,p_val,np.sign(np.mean(X[i,j,:])-np.mean(Y[i,j,:]))])
#print list_diff
np_list_diff = np.array(list_diff)
signif_code = return_signif_code(np_list_diff[:,2],uncor_alpha = uncor_alpha,fdr_alpha = t_test_thresh_fdr, bon_alpha = 0.05)
print(np.sum(signif_code == 0.0),np.sum(signif_code == 1.0),np.sum(signif_code == 2.0),np.sum(signif_code == 3.0),np.sum(signif_code == 4.0))
np_list_diff[:,3] = np_list_diff[:,3] * signif_code
print(np.sum(np_list_diff[:,3] == 0.0))
print(np.sum(np_list_diff[:,3] == 1.0),np.sum(np_list_diff[:,3] == 2.0),np.sum(np_list_diff[:,3] == 3.0),np.sum(np_list_diff[:,3] == 4.0))
print(np.sum(np_list_diff[:,3] == -1.0),np.sum(np_list_diff[:,3] == -2.0),np.sum(np_list_diff[:,3] == -3.0),np.sum(np_list_diff[:,3] == -4.0))
signif_signed_adj_mat = np.zeros((N,N),dtype = 'int')
signif_i = np.array(np_list_diff[:,0],dtype = int)
signif_j = np.array(np_list_diff[:,1],dtype = int)
signif_sign = np.array(np_list_diff[:,3],dtype = int)
signif_signed_adj_mat[signif_i,signif_j] = signif_signed_adj_mat[signif_j,signif_i] = signif_sign
#print signif_signed_adj_mat
return signif_signed_adj_mat
def compute_pairwise_binom_fdr(X,Y,conf_interval_binom_fdr):
# number of nodes
N = X.shape[0]
# Perform binomial test at each edge
list_diff = []
for i,j in it.combinations(list(range(N)), 2):
abs_diff,SE,sign_diff = info_CI(X[i,j,:],Y[i,j,:])
list_diff.append([i,j,abs_diff/SE,sign_diff])
print(list_diff)
np_list_diff = np.array(list_diff)
signif_code = return_signif_code_Z(np_list_diff[:,2],conf_interval_binom_fdr)
print(np.sum(signif_code == 0.0),np.sum(signif_code == 1.0),np.sum(signif_code == 2.0),np.sum(signif_code == 3.0),np.sum(signif_code == 4.0))
np_list_diff[:,3] = np_list_diff[:,3] * signif_code
print(np.sum(np_list_diff[:,3] == 0.0))
print(np.sum(np_list_diff[:,3] == 1.0),np.sum(np_list_diff[:,3] == 2.0),np.sum(np_list_diff[:,3] == 3.0),np.sum(np_list_diff[:,3] == 4.0))
print(np.sum(np_list_diff[:,3] == -1.0),np.sum(np_list_diff[:,3] == -2.0),np.sum(np_list_diff[:,3] == -3.0),np.sum(np_list_diff[:,3] == -4.0))
signif_signed_adj_mat = np.zeros((N,N),dtype = 'int')
signif_i = np.array(np_list_diff[:,0],dtype = int)
signif_j = np.array(np_list_diff[:,1],dtype = int)
signif_sign = np.array(np_list_diff[:,3],dtype = int)
#print signif_i,signif_j
#print signif_signed_adj_mat[signif_i,signif_j]
#print signif_sign
signif_signed_adj_mat[signif_i,signif_j] = signif_signed_adj_mat[signif_j,signif_i] = signif_sign
#print signif_signed_adj_mat
return signif_signed_adj_mat
############### OneWay Anova (F-test)
def compute_oneway_anova_fwe(list_of_list_matrices,cor_alpha = 0.05, uncor_alpha = 0.001, keep_intracon = False):
for group_mat in list_of_list_matrices:
assert group_mat.shape[1] == group_mat.shape[2], "warning, matrices are not squared {} {}".format(group_mat.shape[1], group_mat.shape[2])
N = group_mat.shape[2]
list_diff = []
if keep_intracon:
iter_indexes = it.combinations_with_replacement(list(range(N)), 2)
else:
iter_indexes = it.combinations(list(range(N)), 2)
for i,j in iter_indexes:
#print i,j
list_val = [group_mat[:,i,j].tolist() for group_mat in list_of_list_matrices]
#print list_val
F_stat,p_val = stat.f_oneway(*list_val)
#print F_stat,p_val
list_diff.append([i,j,p_val,F_stat])
############### computing significance code ################
np_list_diff = np.array(list_diff)
print(np_list_diff)
signif_code = return_signif_code(np_list_diff[:,2],uncor_alpha = uncor_alpha, fdr_alpha = cor_alpha, bon_alpha = cor_alpha)
signif_code[np.isnan(np_list_diff[:,2])] = 0
print(np.sum(signif_code == 0.0),np.sum(signif_code == 1.0),np.sum(signif_code == 2.0),np.sum(signif_code == 3.0),np.sum(signif_code == 4.0))
################ converting to matrix
signif_adj_mat = np.zeros((N,N),dtype = 'int')
p_val_mat = np.zeros((N,N),dtype = 'float')
F_stat_mat = np.zeros((N,N),dtype = 'float')
signif_i = np.array(np_list_diff[:,0],dtype = int)
signif_j = np.array(np_list_diff[:,1],dtype = int)
signif_adj_mat[signif_i,signif_j] = signif_adj_mat[signif_j,signif_i] = signif_code
p_val_mat[signif_i,signif_j] = p_val_mat[signif_i,signif_j] = np_list_diff[:,2]
F_stat_mat[signif_i,signif_j] = F_stat_mat[signif_i,signif_j] = np_list_diff[:,3]
#print signif_adj_mat
#print p_val_mat
#print F_stat_mat
return signif_adj_mat, p_val_mat, F_stat_mat
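Per edge, the function above collects one value vector per group and calls `scipy.stats.f_oneway` on them; a standalone sketch with hypothetical group data:

```python
import numpy as np
import scipy.stats as stat

rng = np.random.RandomState(1)
# hypothetical per-group edge values; the third group is shifted
g1 = rng.randn(12)
g2 = rng.randn(12)
g3 = rng.randn(12) + 2.0

F_stat, p_val = stat.f_oneway(g1, g2, g3)
print(p_val < 0.01)  # True
```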
def compute_correl_behav(X, reg_interest,uncor_alpha = 0.001,cor_alpha = 0.05,old_order =False,keep_intracon = False):
import numpy as np
import itertools as it
import scipy.stats as stat
from graphpype.utils_stats import return_signif_code
if old_order:
N = X.shape[0]
else:
N = X.shape[1]
print(reg_interest)
print(reg_interest.dtype)
if keep_intracon:
iter_indexes = it.combinations_with_replacement(list(range(N)), 2)
else:
iter_indexes = it.combinations(list(range(N)), 2)
if not old_order:
# number of nodes
        assert X.shape[1] == X.shape[2], "Error, X {}{} is not squared".format(X.shape[1], X.shape[2])
assert X.shape[0] == reg_interest.shape[0], "Incompatible number of fields in dataframe and nb matrices"
list_diff = []
for i,j in iter_indexes:
keep_val = (~np.isnan(X[:,i,j])) & (~np.isnan(reg_interest))
print(keep_val)
r_stat,p_val = stat.pearsonr(X[keep_val,i,j],reg_interest[keep_val])
print(r_stat,p_val)
if np.isnan(p_val):
print("Warning, unable to compute T-test: ")
print(r_stat,p_val,X_nonan,Y_nonan)
            ## not yet available (scipy version 0.18)
#t_stat,p_val = stat.ttest_rel(X[i,j,:],Y[i,j,:],nan_policy = 'omit')
#print t_stat,p_val
list_diff.append([i,j,p_val,np.sign(r_stat),r_stat])
print(list_diff)
assert len(list_diff) != 0, "Error, list_diff is empty"
np_list_diff = np.array(list_diff)
print(np_list_diff)
signif_code = return_signif_code(np_list_diff[:,2],uncor_alpha = uncor_alpha,fdr_alpha = cor_alpha, bon_alpha = cor_alpha)
print(np.sum(signif_code == 0.0),np.sum(signif_code == 1.0),np.sum(signif_code == 2.0),np.sum(signif_code == 3.0),np.sum(signif_code == 4.0))
np_list_diff[:,3] = np_list_diff[:,3] * signif_code
print(np.sum(np_list_diff[:,3] == 0.0))
print(np.sum(np_list_diff[:,3] == 1.0),np.sum(np_list_diff[:,3] == 2.0),np.sum(np_list_diff[:,3] == 3.0),np.sum(np_list_diff[:,3] == 4.0))
print(np.sum(np_list_diff[:,3] == -1.0),np.sum(np_list_diff[:,3] == -2.0),np.sum(np_list_diff[:,3] == -3.0),np.sum(np_list_diff[:,3] == -4.0))
signif_signed_adj_mat = np.zeros((N,N),dtype = 'int')
p_val_mat = np.zeros((N,N),dtype = 'float')
r_stat_mat = np.zeros((N,N),dtype = 'float')
signif_i = np.array(np_list_diff[:,0],dtype = int)
signif_j = np.array(np_list_diff[:,1],dtype = int)
signif_signed_adj_mat[signif_i,signif_j] = signif_signed_adj_mat[signif_j,signif_i] = np.array(np_list_diff[:,3],dtype = int)
p_val_mat[signif_i,signif_j] = p_val_mat[signif_j,signif_i] = np_list_diff[:,2]
r_stat_mat[signif_i,signif_j] = r_stat_mat[signif_j,signif_i] = np_list_diff[:,4]
print(r_stat_mat)
return signif_signed_adj_mat, p_val_mat, r_stat_mat
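For a single edge, the correlation-with-behaviour step above reduces to masking joint NaNs and calling `scipy.stats.pearsonr`; a standalone sketch with hypothetical data:

```python
import numpy as np
import scipy.stats as stat

rng = np.random.RandomState(2)
edge = rng.randn(20)                              # hypothetical edge values across subjects
reg_interest = 0.8 * edge + 0.2 * rng.randn(20)   # correlated behavioural regressor
reg_interest[3] = np.nan                          # one missing score

keep_val = (~np.isnan(edge)) & (~np.isnan(reg_interest))
r_stat, p_val = stat.pearsonr(edge[keep_val], reg_interest[keep_val])
print(keep_val.sum(), r_stat > 0, p_val < 0.01)  # 19 True True
```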
############### nodewise stats #########################
def compute_nodewise_t_test_vect(d_stacked, nx, ny):
print(d_stacked.shape)
assert d_stacked.shape[1] == nx + ny
t1 = time.time()
t_val_vect = compute_nodewise_t_values(d_stacked[:,:nx],d_stacked[:,nx:nx+ny])
t2 = time.time()
print("computation took %f" %(t2-t1))
return t_val_vect
######################## correl ######################################
def compute_pairwise_correl_fdr(X,behav_score,correl_thresh_fdr):
from scipy.stats.stats import pearsonr
# number of nodes
N = X.shape[0]
list_diff = []
for i,j in it.combinations(list(range(N)), 2):
#t_stat_zalewski = ttest2(X[i,j,:],Y[i,j,:])
r_stat,p_val = pearsonr(X[i,j,:],behav_score)
#print i,j,p_val,r_stat
list_diff.append([i,j,p_val,np.sign(r_stat)])
#print list_diff
    np_list_diff = np.array(list_diff)
signif_code = return_signif_code(np_list_diff[:,2],uncor_alpha = 0.001,fdr_alpha = correl_thresh_fdr, bon_alpha = 0.05)
print(np.sum(signif_code == 0.0),np.sum(signif_code == 1.0),np.sum(signif_code == 2.0),np.sum(signif_code == 3.0),np.sum(signif_code == 4.0))
np_list_diff[:,3] = np_list_diff[:,3] * signif_code
print(np.sum(np_list_diff[:,3] == 0.0))
print(np.sum(np_list_diff[:,3] == 1.0),np.sum(np_list_diff[:,3] == 2.0),np.sum(np_list_diff[:,3] == 3.0),np.sum(np_list_diff[:,3] == 4.0))
print(np.sum(np_list_diff[:,3] == -1.0),np.sum(np_list_diff[:,3] == -2.0),np.sum(np_list_diff[:,3] == -3.0),np.sum(np_list_diff[:,3] == -4.0))
signif_signed_adj_mat = np.zeros((N,N),dtype = 'int')
signif_i = np.array(np_list_diff[:,0],dtype = int)
signif_j = np.array(np_list_diff[:,1],dtype = int)
signif_sign = np.array(np_list_diff[:,3],dtype = int)
print(signif_i,signif_j)
print(signif_signed_adj_mat[signif_i,signif_j])
#print signif_sign
signif_signed_adj_mat[signif_i,signif_j] = signif_signed_adj_mat[signif_j,signif_i] = signif_sign
print(signif_signed_adj_mat)
    return signif_signed_adj_mat
# Source: sqlitely/grammar/SQLiteLexer.py (suurjaak/SQLitely, MIT)
# encoding: utf-8
from __future__ import print_function
from antlr4 import *
from io import StringIO
import sys
def serializedATN():
with StringIO() as buf:
buf.write(u"\3\u608b\ua72a\u8133\ub9ed\u417c\u3be7\u7786\u5964\2")
buf.write(u"\u00a0\u05c1\b\1\4\2\t\2\4\3\t\3\4\4\t\4\4\5\t\5\4\6")
buf.write(u"\t\6\4\7\t\7\4\b\t\b\4\t\t\t\4\n\t\n\4\13\t\13\4\f\t")
buf.write(u"\f\4\r\t\r\4\16\t\16\4\17\t\17\4\20\t\20\4\21\t\21\4")
buf.write(u"\22\t\22\4\23\t\23\4\24\t\24\4\25\t\25\4\26\t\26\4\27")
buf.write(u"\t\27\4\30\t\30\4\31\t\31\4\32\t\32\4\33\t\33\4\34\t")
buf.write(u"\34\4\35\t\35\4\36\t\36\4\37\t\37\4 \t \4!\t!\4\"\t\"")
buf.write(u"\4#\t#\4$\t$\4%\t%\4&\t&\4\'\t\'\4(\t(\4)\t)\4*\t*\4")
buf.write(u"+\t+\4,\t,\4-\t-\4.\t.\4/\t/\4\60\t\60\4\61\t\61\4\62")
buf.write(u"\t\62\4\63\t\63\4\64\t\64\4\65\t\65\4\66\t\66\4\67\t")
buf.write(u"\67\48\t8\49\t9\4:\t:\4;\t;\4<\t<\4=\t=\4>\t>\4?\t?\4")
buf.write(u"@\t@\4A\tA\4B\tB\4C\tC\4D\tD\4E\tE\4F\tF\4G\tG\4H\tH")
buf.write(u"\4I\tI\4J\tJ\4K\tK\4L\tL\4M\tM\4N\tN\4O\tO\4P\tP\4Q\t")
buf.write(u"Q\4R\tR\4S\tS\4T\tT\4U\tU\4V\tV\4W\tW\4X\tX\4Y\tY\4Z")
buf.write(u"\tZ\4[\t[\4\\\t\\\4]\t]\4^\t^\4_\t_\4`\t`\4a\ta\4b\t")
buf.write(u"b\4c\tc\4d\td\4e\te\4f\tf\4g\tg\4h\th\4i\ti\4j\tj\4k")
buf.write(u"\tk\4l\tl\4m\tm\4n\tn\4o\to\4p\tp\4q\tq\4r\tr\4s\ts\4")
buf.write(u"t\tt\4u\tu\4v\tv\4w\tw\4x\tx\4y\ty\4z\tz\4{\t{\4|\t|")
buf.write(u"\4}\t}\4~\t~\4\177\t\177\4\u0080\t\u0080\4\u0081\t\u0081")
buf.write(u"\4\u0082\t\u0082\4\u0083\t\u0083\4\u0084\t\u0084\4\u0085")
buf.write(u"\t\u0085\4\u0086\t\u0086\4\u0087\t\u0087\4\u0088\t\u0088")
buf.write(u"\4\u0089\t\u0089\4\u008a\t\u008a\4\u008b\t\u008b\4\u008c")
buf.write(u"\t\u008c\4\u008d\t\u008d\4\u008e\t\u008e\4\u008f\t\u008f")
buf.write(u"\4\u0090\t\u0090\4\u0091\t\u0091\4\u0092\t\u0092\4\u0093")
buf.write(u"\t\u0093\4\u0094\t\u0094\4\u0095\t\u0095\4\u0096\t\u0096")
buf.write(u"\4\u0097\t\u0097\4\u0098\t\u0098\4\u0099\t\u0099\4\u009a")
buf.write(u"\t\u009a\4\u009b\t\u009b\4\u009c\t\u009c\4\u009d\t\u009d")
buf.write(u"\4\u009e\t\u009e\4\u009f\t\u009f\4\u00a0\t\u00a0\4\u00a1")
buf.write(u"\t\u00a1\4\u00a2\t\u00a2\4\u00a3\t\u00a3\4\u00a4\t\u00a4")
buf.write(u"\4\u00a5\t\u00a5\4\u00a6\t\u00a6\4\u00a7\t\u00a7\4\u00a8")
buf.write(u"\t\u00a8\4\u00a9\t\u00a9\4\u00aa\t\u00aa\4\u00ab\t\u00ab")
buf.write(u"\4\u00ac\t\u00ac\4\u00ad\t\u00ad\4\u00ae\t\u00ae\4\u00af")
buf.write(u"\t\u00af\4\u00b0\t\u00b0\4\u00b1\t\u00b1\4\u00b2\t\u00b2")
buf.write(u"\4\u00b3\t\u00b3\4\u00b4\t\u00b4\4\u00b5\t\u00b5\4\u00b6")
buf.write(u"\t\u00b6\4\u00b7\t\u00b7\4\u00b8\t\u00b8\4\u00b9\t\u00b9")
buf.write(u"\4\u00ba\t\u00ba\3\2\3\2\3\3\3\3\3\4\3\4\3\5\3\5\3\6")
buf.write(u"\3\6\3\7\3\7\3\b\3\b\3\t\3\t\3\n\3\n\3\13\3\13\3\f\3")
buf.write(u"\f\3\f\3\r\3\r\3\16\3\16\3\17\3\17\3\17\3\20\3\20\3\20")
buf.write(u"\3\21\3\21\3\22\3\22\3\23\3\23\3\24\3\24\3\24\3\25\3")
buf.write(u"\25\3\26\3\26\3\26\3\27\3\27\3\27\3\30\3\30\3\30\3\31")
buf.write(u"\3\31\3\31\3\32\3\32\3\32\3\32\3\32\3\32\3\33\3\33\3")
buf.write(u"\33\3\33\3\33\3\33\3\33\3\34\3\34\3\34\3\34\3\35\3\35")
buf.write(u"\3\35\3\35\3\35\3\35\3\36\3\36\3\36\3\36\3\37\3\37\3")
buf.write(u"\37\3\37\3\37\3\37\3 \3 \3 \3 \3 \3 \3 \3 \3!\3!\3!\3")
buf.write(u"!\3\"\3\"\3\"\3#\3#\3#\3#\3$\3$\3$\3$\3$\3$\3$\3%\3%")
buf.write(u"\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3&\3&\3&\3&\3&\3")
buf.write(u"&\3&\3\'\3\'\3\'\3\'\3\'\3\'\3(\3(\3(\3(\3(\3(\3(\3(")
buf.write(u"\3)\3)\3)\3*\3*\3*\3*\3*\3*\3*\3*\3+\3+\3+\3+\3+\3,\3")
buf.write(u",\3,\3,\3,\3-\3-\3-\3-\3-\3-\3.\3.\3.\3.\3.\3.\3.\3.")
buf.write(u"\3/\3/\3/\3/\3/\3/\3/\3\60\3\60\3\60\3\60\3\60\3\60\3")
buf.write(u"\60\3\61\3\61\3\61\3\61\3\61\3\61\3\61\3\61\3\61\3\62")
buf.write(u"\3\62\3\62\3\62\3\62\3\62\3\62\3\62\3\62\3\62\3\62\3")
buf.write(u"\63\3\63\3\63\3\63\3\63\3\63\3\63\3\64\3\64\3\64\3\64")
buf.write(u"\3\64\3\64\3\65\3\65\3\65\3\65\3\65\3\65\3\65\3\65\3")
buf.write(u"\65\3\65\3\65\3\65\3\65\3\66\3\66\3\66\3\66\3\66\3\66")
buf.write(u"\3\66\3\66\3\66\3\66\3\66\3\66\3\66\3\67\3\67\3\67\3")
buf.write(u"\67\3\67\3\67\3\67\3\67\3\67\3\67\3\67\3\67\3\67\3\67")
buf.write(u"\3\67\3\67\3\67\3\67\38\38\38\38\38\38\38\38\38\39\3")
buf.write(u"9\39\39\39\39\39\39\3:\3:\3:\3:\3:\3:\3:\3:\3:\3:\3:")
buf.write(u"\3;\3;\3;\3;\3;\3;\3;\3;\3;\3<\3<\3<\3<\3<\3<\3<\3=\3")
buf.write(u"=\3=\3=\3=\3>\3>\3>\3>\3>\3>\3>\3?\3?\3?\3?\3?\3?\3?")
buf.write(u"\3?\3?\3@\3@\3@\3@\3@\3A\3A\3A\3A\3A\3B\3B\3B\3B\3B\3")
buf.write(u"C\3C\3C\3C\3D\3D\3D\3D\3D\3D\3D\3E\3E\3E\3E\3E\3E\3E")
buf.write(u"\3F\3F\3F\3F\3F\3F\3F\3F\3F\3F\3G\3G\3G\3G\3G\3G\3G\3")
buf.write(u"H\3H\3H\3H\3H\3H\3H\3H\3I\3I\3I\3I\3I\3J\3J\3J\3J\3K")
buf.write(u"\3K\3K\3K\3K\3K\3K\3K\3L\3L\3L\3L\3L\3M\3M\3M\3M\3M\3")
buf.write(u"N\3N\3N\3N\3N\3O\3O\3O\3O\3O\3O\3P\3P\3P\3P\3P\3P\3P")
buf.write(u"\3Q\3Q\3Q\3R\3R\3R\3R\3R\3R\3R\3S\3S\3S\3S\3S\3S\3S\3")
buf.write(u"S\3S\3S\3T\3T\3T\3U\3U\3U\3U\3U\3U\3V\3V\3V\3V\3V\3V")
buf.write(u"\3V\3V\3W\3W\3W\3W\3W\3W\3W\3W\3W\3W\3X\3X\3X\3X\3X\3")
buf.write(u"X\3Y\3Y\3Y\3Y\3Y\3Y\3Y\3Z\3Z\3Z\3Z\3Z\3Z\3Z\3Z\3[\3[")
buf.write(u"\3[\3[\3[\3[\3[\3[\3[\3[\3\\\3\\\3\\\3\\\3\\\3]\3]\3")
buf.write(u"]\3^\3^\3^\3^\3^\3^\3^\3_\3_\3_\3_\3_\3`\3`\3`\3`\3a")
buf.write(u"\3a\3a\3a\3a\3b\3b\3b\3b\3b\3c\3c\3c\3c\3c\3c\3d\3d\3")
buf.write(u"d\3d\3d\3d\3e\3e\3e\3e\3e\3e\3e\3e\3f\3f\3f\3g\3g\3g")
buf.write(u"\3g\3h\3h\3h\3h\3h\3h\3h\3h\3i\3i\3i\3i\3i\3j\3j\3j\3")
buf.write(u"k\3k\3k\3k\3k\3k\3k\3l\3l\3l\3m\3m\3m\3n\3n\3n\3n\3n")
buf.write(u"\3n\3o\3o\3o\3o\3o\3o\3p\3p\3p\3p\3p\3q\3q\3q\3q\3q\3")
buf.write(u"q\3q\3r\3r\3r\3r\3r\3r\3r\3r\3s\3s\3s\3s\3s\3s\3t\3t")
buf.write(u"\3t\3t\3t\3t\3u\3u\3u\3u\3u\3u\3u\3u\3u\3u\3v\3v\3v\3")
buf.write(u"v\3v\3v\3v\3v\3v\3v\3v\3w\3w\3w\3w\3w\3w\3w\3x\3x\3x")
buf.write(u"\3x\3x\3x\3x\3x\3y\3y\3y\3y\3y\3y\3y\3y\3z\3z\3z\3z\3")
buf.write(u"z\3z\3z\3{\3{\3{\3{\3{\3{\3{\3{\3|\3|\3|\3|\3|\3|\3|")
buf.write(u"\3|\3|\3}\3}\3}\3}\3}\3}\3~\3~\3~\3~\3~\3~\3~\3~\3~\3")
buf.write(u"\177\3\177\3\177\3\177\3\u0080\3\u0080\3\u0080\3\u0080")
buf.write(u"\3\u0080\3\u0080\3\u0080\3\u0080\3\u0080\3\u0080\3\u0081")
buf.write(u"\3\u0081\3\u0081\3\u0081\3\u0081\3\u0081\3\u0081\3\u0082")
buf.write(u"\3\u0082\3\u0082\3\u0082\3\u0083\3\u0083\3\u0083\3\u0083")
buf.write(u"\3\u0083\3\u0083\3\u0084\3\u0084\3\u0084\3\u0084\3\u0084")
buf.write(u"\3\u0085\3\u0085\3\u0085\3\u0085\3\u0085\3\u0085\3\u0085")
buf.write(u"\3\u0085\3\u0085\3\u0085\3\u0086\3\u0086\3\u0086\3\u0086")
buf.write(u"\3\u0086\3\u0087\3\u0087\3\u0087\3\u0088\3\u0088\3\u0088")
buf.write(u"\3\u0088\3\u0088\3\u0088\3\u0088\3\u0088\3\u0088\3\u0088")
buf.write(u"\3\u0088\3\u0088\3\u0089\3\u0089\3\u0089\3\u0089\3\u0089")
buf.write(u"\3\u0089\3\u0089\3\u0089\3\u008a\3\u008a\3\u008a\3\u008a")
buf.write(u"\3\u008a\3\u008a\3\u008b\3\u008b\3\u008b\3\u008b\3\u008b")
buf.write(u"\3\u008b\3\u008b\3\u008c\3\u008c\3\u008c\3\u008c\3\u008c")
buf.write(u"\3\u008c\3\u008c\3\u008d\3\u008d\3\u008d\3\u008d\3\u008d")
buf.write(u"\3\u008d\3\u008e\3\u008e\3\u008e\3\u008e\3\u008e\3\u008e")
buf.write(u"\3\u008e\3\u008f\3\u008f\3\u008f\3\u008f\3\u008f\3\u008f")
buf.write(u"\3\u008f\3\u0090\3\u0090\3\u0090\3\u0090\3\u0090\3\u0091")
buf.write(u"\3\u0091\3\u0091\3\u0091\3\u0091\3\u0091\3\u0091\3\u0091")
buf.write(u"\3\u0092\3\u0092\3\u0092\3\u0092\3\u0092\3\u0093\3\u0093")
buf.write(u"\3\u0093\3\u0093\3\u0093\3\u0093\3\u0094\3\u0094\3\u0094")
buf.write(u"\3\u0094\3\u0094\3\u0095\3\u0095\3\u0095\3\u0095\3\u0095")
buf.write(u"\3\u0095\3\u0095\3\u0095\3\u0096\3\u0096\3\u0096\3\u0096")
buf.write(u"\3\u0096\3\u0096\3\u0097\3\u0097\3\u0097\3\u0097\7\u0097")
buf.write(u"\u04fa\n\u0097\f\u0097\16\u0097\u04fd\13\u0097\3\u0097")
buf.write(u"\3\u0097\3\u0097\3\u0097\3\u0097\7\u0097\u0504\n\u0097")
buf.write(u"\f\u0097\16\u0097\u0507\13\u0097\3\u0097\3\u0097\3\u0097")
buf.write(u"\7\u0097\u050c\n\u0097\f\u0097\16\u0097\u050f\13\u0097")
buf.write(u"\3\u0097\3\u0097\3\u0097\7\u0097\u0514\n\u0097\f\u0097")
buf.write(u"\16\u0097\u0517\13\u0097\5\u0097\u0519\n\u0097\3\u0098")
buf.write(u"\6\u0098\u051c\n\u0098\r\u0098\16\u0098\u051d\3\u0098")
buf.write(u"\3\u0098\7\u0098\u0522\n\u0098\f\u0098\16\u0098\u0525")
buf.write(u"\13\u0098\5\u0098\u0527\n\u0098\3\u0098\3\u0098\5\u0098")
buf.write(u"\u052b\n\u0098\3\u0098\6\u0098\u052e\n\u0098\r\u0098")
buf.write(u"\16\u0098\u052f\5\u0098\u0532\n\u0098\3\u0098\3\u0098")
buf.write(u"\6\u0098\u0536\n\u0098\r\u0098\16\u0098\u0537\3\u0098")
buf.write(u"\3\u0098\5\u0098\u053c\n\u0098\3\u0098\6\u0098\u053f")
buf.write(u"\n\u0098\r\u0098\16\u0098\u0540\5\u0098\u0543\n\u0098")
buf.write(u"\5\u0098\u0545\n\u0098\3\u0099\3\u0099\7\u0099\u0549")
buf.write(u"\n\u0099\f\u0099\16\u0099\u054c\13\u0099\3\u0099\3\u0099")
buf.write(u"\5\u0099\u0550\n\u0099\3\u009a\3\u009a\3\u009a\3\u009a")
buf.write(u"\7\u009a\u0556\n\u009a\f\u009a\16\u009a\u0559\13\u009a")
buf.write(u"\3\u009a\3\u009a\3\u009a\3\u009a\3\u009a\7\u009a\u0560")
buf.write(u"\n\u009a\f\u009a\16\u009a\u0563\13\u009a\3\u009a\5\u009a")
buf.write(u"\u0566\n\u009a\3\u009b\3\u009b\3\u009b\3\u009c\3\u009c")
buf.write(u"\3\u009c\3\u009c\7\u009c\u056f\n\u009c\f\u009c\16\u009c")
buf.write(u"\u0572\13\u009c\3\u009c\3\u009c\3\u009d\3\u009d\3\u009d")
buf.write(u"\3\u009d\7\u009d\u057a\n\u009d\f\u009d\16\u009d\u057d")
buf.write(u"\13\u009d\3\u009d\3\u009d\3\u009d\5\u009d\u0582\n\u009d")
buf.write(u"\3\u009d\3\u009d\3\u009e\3\u009e\3\u009e\3\u009e\3\u009f")
buf.write(u"\3\u009f\3\u00a0\3\u00a0\3\u00a1\3\u00a1\3\u00a2\3\u00a2")
buf.write(u"\3\u00a3\3\u00a3\3\u00a4\3\u00a4\3\u00a5\3\u00a5\3\u00a6")
buf.write(u"\3\u00a6\3\u00a7\3\u00a7\3\u00a8\3\u00a8\3\u00a9\3\u00a9")
buf.write(u"\3\u00aa\3\u00aa\3\u00ab\3\u00ab\3\u00ac\3\u00ac\3\u00ad")
buf.write(u"\3\u00ad\3\u00ae\3\u00ae\3\u00af\3\u00af\3\u00b0\3\u00b0")
buf.write(u"\3\u00b1\3\u00b1\3\u00b2\3\u00b2\3\u00b3\3\u00b3\3\u00b4")
buf.write(u"\3\u00b4\3\u00b5\3\u00b5\3\u00b6\3\u00b6\3\u00b7\3\u00b7")
buf.write(u"\3\u00b8\3\u00b8\3\u00b9\3\u00b9\3\u00ba\3\u00ba\3\u057b")
buf.write(u"\2\u00bb\3\3\5\4\7\5\t\6\13\7\r\b\17\t\21\n\23\13\25")
buf.write(u"\f\27\r\31\16\33\17\35\20\37\21!\22#\23%\24\'\25)\26")
buf.write(u"+\27-\30/\31\61\32\63\33\65\34\67\359\36;\37= ?!A\"C")
buf.write(u"#E$G%I&K\'M(O)Q*S+U,W-Y.[/]\60_\61a\62c\63e\64g\65i\66")
buf.write(u"k\67m8o9q:s;u<w=y>{?}@\177A\u0081B\u0083C\u0085D\u0087")
buf.write(u"E\u0089F\u008bG\u008dH\u008fI\u0091J\u0093K\u0095L\u0097")
buf.write(u"M\u0099N\u009bO\u009dP\u009fQ\u00a1R\u00a3S\u00a5T\u00a7")
buf.write(u"U\u00a9V\u00abW\u00adX\u00afY\u00b1Z\u00b3[\u00b5\\\u00b7")
buf.write(u"]\u00b9^\u00bb_\u00bd`\u00bfa\u00c1b\u00c3c\u00c5d\u00c7")
buf.write(u"e\u00c9f\u00cbg\u00cdh\u00cfi\u00d1j\u00d3k\u00d5l\u00d7")
buf.write(u"m\u00d9n\u00dbo\u00ddp\u00dfq\u00e1r\u00e3s\u00e5t\u00e7")
buf.write(u"u\u00e9v\u00ebw\u00edx\u00efy\u00f1z\u00f3{\u00f5|\u00f7")
buf.write(u"}\u00f9~\u00fb\177\u00fd\u0080\u00ff\u0081\u0101\u0082")
buf.write(u"\u0103\u0083\u0105\u0084\u0107\u0085\u0109\u0086\u010b")
buf.write(u"\u0087\u010d\u0088\u010f\u0089\u0111\u008a\u0113\u008b")
buf.write(u"\u0115\u008c\u0117\u008d\u0119\u008e\u011b\u008f\u011d")
buf.write(u"\u0090\u011f\u0091\u0121\u0092\u0123\u0093\u0125\u0094")
buf.write(u"\u0127\u0095\u0129\u0096\u012b\u0097\u012d\u0098\u012f")
buf.write(u"\u0099\u0131\u009a\u0133\u009b\u0135\u009c\u0137\u009d")
buf.write(u"\u0139\u009e\u013b\u009f\u013d\u00a0\u013f\2\u0141\2")
buf.write(u"\u0143\2\u0145\2\u0147\2\u0149\2\u014b\2\u014d\2\u014f")
buf.write(u"\2\u0151\2\u0153\2\u0155\2\u0157\2\u0159\2\u015b\2\u015d")
buf.write(u"\2\u015f\2\u0161\2\u0163\2\u0165\2\u0167\2\u0169\2\u016b")
buf.write(u"\2\u016d\2\u016f\2\u0171\2\u0173\2\3\2%\3\2$$\3\2bb\3")
buf.write(u"\2__\4\2--//\5\2&&<<BB\3\2))\4\2\f\f\17\17\5\2\13\r\17")
buf.write(u"\17\"\"\3\2\62;\4\2CCcc\4\2DDdd\4\2EEee\4\2FFff\4\2G")
buf.write(u"Ggg\4\2HHhh\4\2IIii\4\2JJjj\4\2KKkk\4\2LLll\4\2MMmm\4")
buf.write(u"\2NNnn\4\2OOoo\4\2PPpp\4\2QQqq\4\2RRrr\4\2SSss\4\2TT")
buf.write(u"tt\4\2UUuu\4\2VVvv\4\2WWww\4\2XXxx\4\2YYyy\4\2ZZzz\4")
buf.write(u"\2[[{{\4\2\\\\||\4\u0297\2C\2\\\2a\2a\2c\2|\2\u00ac\2")
buf.write(u"\u00ac\2\u00b7\2\u00b7\2\u00bc\2\u00bc\2\u00c2\2\u00d8")
buf.write(u"\2\u00da\2\u00f8\2\u00fa\2\u02c3\2\u02c8\2\u02d3\2\u02e2")
buf.write(u"\2\u02e6\2\u02ee\2\u02ee\2\u02f0\2\u02f0\2\u0347\2\u0347")
buf.write(u"\2\u0372\2\u0376\2\u0378\2\u0379\2\u037c\2\u037f\2\u0381")
buf.write(u"\2\u0381\2\u0388\2\u0388\2\u038a\2\u038c\2\u038e\2\u038e")
buf.write(u"\2\u0390\2\u03a3\2\u03a5\2\u03f7\2\u03f9\2\u0483\2\u048c")
buf.write(u"\2\u0531\2\u0533\2\u0558\2\u055b\2\u055b\2\u0563\2\u0589")
buf.write(u"\2\u05b2\2\u05bf\2\u05c1\2\u05c1\2\u05c3\2\u05c4\2\u05c6")
buf.write(u"\2\u05c7\2\u05c9\2\u05c9\2\u05d2\2\u05ec\2\u05f2\2\u05f4")
buf.write(u"\2\u0612\2\u061c\2\u0622\2\u0659\2\u065b\2\u0661\2\u0670")
buf.write(u"\2\u06d5\2\u06d7\2\u06de\2\u06e3\2\u06ea\2\u06ef\2\u06f1")
buf.write(u"\2\u06fc\2\u06fe\2\u0701\2\u0701\2\u0712\2\u0741\2\u074f")
buf.write(u"\2\u07b3\2\u07cc\2\u07ec\2\u07f6\2\u07f7\2\u07fc\2\u07fc")
buf.write(u"\2\u0802\2\u0819\2\u081c\2\u082e\2\u0842\2\u085a\2\u0862")
buf.write(u"\2\u086c\2\u08a2\2\u08b6\2\u08b8\2\u08bf\2\u08d6\2\u08e1")
buf.write(u"\2\u08e5\2\u08eb\2\u08f2\2\u093d\2\u093f\2\u094e\2\u0950")
buf.write(u"\2\u0952\2\u0957\2\u0965\2\u0973\2\u0985\2\u0987\2\u098e")
buf.write(u"\2\u0991\2\u0992\2\u0995\2\u09aa\2\u09ac\2\u09b2\2\u09b4")
buf.write(u"\2\u09b4\2\u09b8\2\u09bb\2\u09bf\2\u09c6\2\u09c9\2\u09ca")
buf.write(u"\2\u09cd\2\u09ce\2\u09d0\2\u09d0\2\u09d9\2\u09d9\2\u09de")
buf.write(u"\2\u09df\2\u09e1\2\u09e5\2\u09f2\2\u09f3\2\u09fe\2\u09fe")
buf.write(u"\2\u0a03\2\u0a05\2\u0a07\2\u0a0c\2\u0a11\2\u0a12\2\u0a15")
buf.write(u"\2\u0a2a\2\u0a2c\2\u0a32\2\u0a34\2\u0a35\2\u0a37\2\u0a38")
buf.write(u"\2\u0a3a\2\u0a3b\2\u0a40\2\u0a44\2\u0a49\2\u0a4a\2\u0a4d")
buf.write(u"\2\u0a4e\2\u0a53\2\u0a53\2\u0a5b\2\u0a5e\2\u0a60\2\u0a60")
buf.write(u"\2\u0a72\2\u0a77\2\u0a83\2\u0a85\2\u0a87\2\u0a8f\2\u0a91")
buf.write(u"\2\u0a93\2\u0a95\2\u0aaa\2\u0aac\2\u0ab2\2\u0ab4\2\u0ab5")
buf.write(u"\2\u0ab7\2\u0abb\2\u0abf\2\u0ac7\2\u0ac9\2\u0acb\2\u0acd")
buf.write(u"\2\u0ace\2\u0ad2\2\u0ad2\2\u0ae2\2\u0ae5\2\u0afb\2\u0afe")
buf.write(u"\2\u0b03\2\u0b05\2\u0b07\2\u0b0e\2\u0b11\2\u0b12\2\u0b15")
buf.write(u"\2\u0b2a\2\u0b2c\2\u0b32\2\u0b34\2\u0b35\2\u0b37\2\u0b3b")
buf.write(u"\2\u0b3f\2\u0b46\2\u0b49\2\u0b4a\2\u0b4d\2\u0b4e\2\u0b58")
buf.write(u"\2\u0b59\2\u0b5e\2\u0b5f\2\u0b61\2\u0b65\2\u0b73\2\u0b73")
buf.write(u"\2\u0b84\2\u0b85\2\u0b87\2\u0b8c\2\u0b90\2\u0b92\2\u0b94")
buf.write(u"\2\u0b97\2\u0b9b\2\u0b9c\2\u0b9e\2\u0b9e\2\u0ba0\2\u0ba1")
buf.write(u"\2\u0ba5\2\u0ba6\2\u0baa\2\u0bac\2\u0bb0\2\u0bbb\2\u0bc0")
buf.write(u"\2\u0bc4\2\u0bc8\2\u0bca\2\u0bcc\2\u0bce\2\u0bd2\2\u0bd2")
buf.write(u"\2\u0bd9\2\u0bd9\2\u0c02\2\u0c05\2\u0c07\2\u0c0e\2\u0c10")
buf.write(u"\2\u0c12\2\u0c14\2\u0c2a\2\u0c2c\2\u0c3b\2\u0c3f\2\u0c46")
buf.write(u"\2\u0c48\2\u0c4a\2\u0c4c\2\u0c4e\2\u0c57\2\u0c58\2\u0c5a")
buf.write(u"\2\u0c5c\2\u0c62\2\u0c65\2\u0c82\2\u0c85\2\u0c87\2\u0c8e")
buf.write(u"\2\u0c90\2\u0c92\2\u0c94\2\u0caa\2\u0cac\2\u0cb5\2\u0cb7")
buf.write(u"\2\u0cbb\2\u0cbf\2\u0cc6\2\u0cc8\2\u0cca\2\u0ccc\2\u0cce")
buf.write(u"\2\u0cd7\2\u0cd8\2\u0ce0\2\u0ce0\2\u0ce2\2\u0ce5\2\u0cf3")
buf.write(u"\2\u0cf4\2\u0d02\2\u0d05\2\u0d07\2\u0d0e\2\u0d10\2\u0d12")
buf.write(u"\2\u0d14\2\u0d3c\2\u0d3f\2\u0d46\2\u0d48\2\u0d4a\2\u0d4c")
buf.write(u"\2\u0d4e\2\u0d50\2\u0d50\2\u0d56\2\u0d59\2\u0d61\2\u0d65")
buf.write(u"\2\u0d7c\2\u0d81\2\u0d84\2\u0d85\2\u0d87\2\u0d98\2\u0d9c")
buf.write(u"\2\u0db3\2\u0db5\2\u0dbd\2\u0dbf\2\u0dbf\2\u0dc2\2\u0dc8")
buf.write(u"\2\u0dd1\2\u0dd6\2\u0dd8\2\u0dd8\2\u0dda\2\u0de1\2\u0df4")
buf.write(u"\2\u0df5\2\u0e03\2\u0e3c\2\u0e42\2\u0e48\2\u0e4f\2\u0e4f")
buf.write(u"\2\u0e83\2\u0e84\2\u0e86\2\u0e86\2\u0e89\2\u0e8a\2\u0e8c")
buf.write(u"\2\u0e8c\2\u0e8f\2\u0e8f\2\u0e96\2\u0e99\2\u0e9b\2\u0ea1")
buf.write(u"\2\u0ea3\2\u0ea5\2\u0ea7\2\u0ea7\2\u0ea9\2\u0ea9\2\u0eac")
buf.write(u"\2\u0ead\2\u0eaf\2\u0ebb\2\u0ebd\2\u0ebf\2\u0ec2\2\u0ec6")
buf.write(u"\2\u0ec8\2\u0ec8\2\u0ecf\2\u0ecf\2\u0ede\2\u0ee1\2\u0f02")
buf.write(u"\2\u0f02\2\u0f42\2\u0f49\2\u0f4b\2\u0f6e\2\u0f73\2\u0f83")
buf.write(u"\2\u0f8a\2\u0f99\2\u0f9b\2\u0fbe\2\u1002\2\u1038\2\u103a")
buf.write(u"\2\u103a\2\u103d\2\u1041\2\u1052\2\u1064\2\u1067\2\u106a")
buf.write(u"\2\u1070\2\u1088\2\u1090\2\u1090\2\u109e\2\u109f\2\u10a2")
buf.write(u"\2\u10c7\2\u10c9\2\u10c9\2\u10cf\2\u10cf\2\u10d2\2\u10fc")
buf.write(u"\2\u10fe\2\u124a\2\u124c\2\u124f\2\u1252\2\u1258\2\u125a")
buf.write(u"\2\u125a\2\u125c\2\u125f\2\u1262\2\u128a\2\u128c\2\u128f")
buf.write(u"\2\u1292\2\u12b2\2\u12b4\2\u12b7\2\u12ba\2\u12c0\2\u12c2")
buf.write(u"\2\u12c2\2\u12c4\2\u12c7\2\u12ca\2\u12d8\2\u12da\2\u1312")
buf.write(u"\2\u1314\2\u1317\2\u131a\2\u135c\2\u1361\2\u1361\2\u1382")
buf.write(u"\2\u1391\2\u13a2\2\u13f7\2\u13fa\2\u13ff\2\u1403\2\u166e")
buf.write(u"\2\u1671\2\u1681\2\u1683\2\u169c\2\u16a2\2\u16ec\2\u16f0")
buf.write(u"\2\u16fa\2\u1702\2\u170e\2\u1710\2\u1715\2\u1722\2\u1735")
buf.write(u"\2\u1742\2\u1755\2\u1762\2\u176e\2\u1770\2\u1772\2\u1774")
buf.write(u"\2\u1775\2\u1782\2\u17b5\2\u17b8\2\u17ca\2\u17d9\2\u17d9")
buf.write(u"\2\u17de\2\u17de\2\u1822\2\u1879\2\u1882\2\u18ac\2\u18b2")
buf.write(u"\2\u18f7\2\u1902\2\u1920\2\u1922\2\u192d\2\u1932\2\u193a")
buf.write(u"\2\u1952\2\u196f\2\u1972\2\u1976\2\u1982\2\u19ad\2\u19b2")
buf.write(u"\2\u19cb\2\u1a02\2\u1a1d\2\u1a22\2\u1a60\2\u1a63\2\u1a76")
buf.write(u"\2\u1aa9\2\u1aa9\2\u1b02\2\u1b35\2\u1b37\2\u1b45\2\u1b47")
buf.write(u"\2\u1b4d\2\u1b82\2\u1bab\2\u1bae\2\u1bb1\2\u1bbc\2\u1be7")
buf.write(u"\2\u1be9\2\u1bf3\2\u1c02\2\u1c37\2\u1c4f\2\u1c51\2\u1c5c")
buf.write(u"\2\u1c7f\2\u1c82\2\u1c8a\2\u1ceb\2\u1cee\2\u1cf0\2\u1cf5")
buf.write(u"\2\u1cf7\2\u1cf8\2\u1d02\2\u1dc1\2\u1de9\2\u1df6\2\u1e02")
buf.write(u"\2\u1f17\2\u1f1a\2\u1f1f\2\u1f22\2\u1f47\2\u1f4a\2\u1f4f")
buf.write(u"\2\u1f52\2\u1f59\2\u1f5b\2\u1f5b\2\u1f5d\2\u1f5d\2\u1f5f")
buf.write(u"\2\u1f5f\2\u1f61\2\u1f7f\2\u1f82\2\u1fb6\2\u1fb8\2\u1fbe")
buf.write(u"\2\u1fc0\2\u1fc0\2\u1fc4\2\u1fc6\2\u1fc8\2\u1fce\2\u1fd2")
buf.write(u"\2\u1fd5\2\u1fd8\2\u1fdd\2\u1fe2\2\u1fee\2\u1ff4\2\u1ff6")
buf.write(u"\2\u1ff8\2\u1ffe\2\u2073\2\u2073\2\u2081\2\u2081\2\u2092")
buf.write(u"\2\u209e\2\u2104\2\u2104\2\u2109\2\u2109\2\u210c\2\u2115")
buf.write(u"\2\u2117\2\u2117\2\u211b\2\u211f\2\u2126\2\u2126\2\u2128")
buf.write(u"\2\u2128\2\u212a\2\u212a\2\u212c\2\u212f\2\u2131\2\u213b")
buf.write(u"\2\u213e\2\u2141\2\u2147\2\u214b\2\u2150\2\u2150\2\u2162")
buf.write(u"\2\u218a\2\u24b8\2\u24eb\2\u2c02\2\u2c30\2\u2c32\2\u2c60")
buf.write(u"\2\u2c62\2\u2ce6\2\u2ced\2\u2cf0\2\u2cf4\2\u2cf5\2\u2d02")
buf.write(u"\2\u2d27\2\u2d29\2\u2d29\2\u2d2f\2\u2d2f\2\u2d32\2\u2d69")
buf.write(u"\2\u2d71\2\u2d71\2\u2d82\2\u2d98\2\u2da2\2\u2da8\2\u2daa")
buf.write(u"\2\u2db0\2\u2db2\2\u2db8\2\u2dba\2\u2dc0\2\u2dc2\2\u2dc8")
buf.write(u"\2\u2dca\2\u2dd0\2\u2dd2\2\u2dd8\2\u2dda\2\u2de0\2\u2de2")
buf.write(u"\2\u2e01\2\u2e31\2\u2e31\2\u3007\2\u3009\2\u3023\2\u302b")
buf.write(u"\2\u3033\2\u3037\2\u303a\2\u303e\2\u3043\2\u3098\2\u309f")
buf.write(u"\2\u30a1\2\u30a3\2\u30fc\2\u30fe\2\u3101\2\u3107\2\u3130")
buf.write(u"\2\u3133\2\u3190\2\u31a2\2\u31bc\2\u31f2\2\u3201\2\u3402")
buf.write(u"\2\u4db7\2\u4e02\2\u9fec\2\ua002\2\ua48e\2\ua4d2\2\ua4ff")
buf.write(u"\2\ua502\2\ua60e\2\ua612\2\ua621\2\ua62c\2\ua62d\2\ua642")
buf.write(u"\2\ua670\2\ua676\2\ua67d\2\ua681\2\ua6f1\2\ua719\2\ua721")
buf.write(u"\2\ua724\2\ua78a\2\ua78d\2\ua7b0\2\ua7b2\2\ua7b9\2\ua7f9")
buf.write(u"\2\ua803\2\ua805\2\ua807\2\ua809\2\ua80c\2\ua80e\2\ua829")
buf.write(u"\2\ua842\2\ua875\2\ua882\2\ua8c5\2\ua8c7\2\ua8c7\2\ua8f4")
buf.write(u"\2\ua8f9\2\ua8fd\2\ua8fd\2\ua8ff\2\ua8ff\2\ua90c\2\ua92c")
buf.write(u"\2\ua932\2\ua954\2\ua962\2\ua97e\2\ua982\2\ua9b4\2\ua9b6")
buf.write(u"\2\ua9c1\2\ua9d1\2\ua9d1\2\ua9e2\2\ua9e6\2\ua9e8\2\ua9f1")
buf.write(u"\2\ua9fc\2\uaa00\2\uaa02\2\uaa38\2\uaa42\2\uaa4f\2\uaa62")
buf.write(u"\2\uaa78\2\uaa7c\2\uaa7c\2\uaa80\2\uaac0\2\uaac2\2\uaac2")
buf.write(u"\2\uaac4\2\uaac4\2\uaadd\2\uaadf\2\uaae2\2\uaaf1\2\uaaf4")
buf.write(u"\2\uaaf7\2\uab03\2\uab08\2\uab0b\2\uab10\2\uab13\2\uab18")
buf.write(u"\2\uab22\2\uab28\2\uab2a\2\uab30\2\uab32\2\uab5c\2\uab5e")
buf.write(u"\2\uab67\2\uab72\2\uabec\2\uac02\2\ud7a5\2\ud7b2\2\ud7c8")
buf.write(u"\2\ud7cd\2\ud7fd\2\uf902\2\ufa6f\2\ufa72\2\ufadb\2\ufb02")
buf.write(u"\2\ufb08\2\ufb15\2\ufb19\2\ufb1f\2\ufb2a\2\ufb2c\2\ufb38")
buf.write(u"\2\ufb3a\2\ufb3e\2\ufb40\2\ufb40\2\ufb42\2\ufb43\2\ufb45")
buf.write(u"\2\ufb46\2\ufb48\2\ufbb3\2\ufbd5\2\ufd3f\2\ufd52\2\ufd91")
buf.write(u"\2\ufd94\2\ufdc9\2\ufdf2\2\ufdfd\2\ufe72\2\ufe76\2\ufe78")
buf.write(u"\2\ufefe\2\uff23\2\uff3c\2\uff43\2\uff5c\2\uff68\2\uffc0")
buf.write(u"\2\uffc4\2\uffc9\2\uffcc\2\uffd1\2\uffd4\2\uffd9\2\uffdc")
buf.write(u"\2\uffde\2\2\3\r\3\17\3(\3*\3<\3>\3?\3A\3O\3R\3_\3\u0082")
buf.write(u"\3\u00fc\3\u0142\3\u0176\3\u0282\3\u029e\3\u02a2\3\u02d2")
buf.write(u"\3\u0302\3\u0321\3\u032f\3\u034c\3\u0352\3\u037c\3\u0382")
buf.write(u"\3\u039f\3\u03a2\3\u03c5\3\u03ca\3\u03d1\3\u03d3\3\u03d7")
buf.write(u"\3\u0402\3\u049f\3\u04b2\3\u04d5\3\u04da\3\u04fd\3\u0502")
buf.write(u"\3\u0529\3\u0532\3\u0565\3\u0602\3\u0738\3\u0742\3\u0757")
buf.write(u"\3\u0762\3\u0769\3\u0802\3\u0807\3\u080a\3\u080a\3\u080c")
buf.write(u"\3\u0837\3\u0839\3\u083a\3\u083e\3\u083e\3\u0841\3\u0857")
buf.write(u"\3\u0862\3\u0878\3\u0882\3\u08a0\3\u08e2\3\u08f4\3\u08f6")
buf.write(u"\3\u08f7\3\u0902\3\u0917\3\u0922\3\u093b\3\u0982\3\u09b9")
buf.write(u"\3\u09c0\3\u09c1\3\u0a02\3\u0a05\3\u0a07\3\u0a08\3\u0a0e")
buf.write(u"\3\u0a15\3\u0a17\3\u0a19\3\u0a1b\3\u0a35\3\u0a62\3\u0a7e")
buf.write(u"\3\u0a82\3\u0a9e\3\u0ac2\3\u0ac9\3\u0acb\3\u0ae6\3\u0b02")
buf.write(u"\3\u0b37\3\u0b42\3\u0b57\3\u0b62\3\u0b74\3\u0b82\3\u0b93")
buf.write(u"\3\u0c02\3\u0c4a\3\u0c82\3\u0cb4\3\u0cc2\3\u0cf4\3\u1002")
buf.write(u"\3\u1047\3\u1084\3\u10ba\3\u10d2\3\u10ea\3\u1102\3\u1134")
buf.write(u"\3\u1152\3\u1174\3\u1178\3\u1178\3\u1182\3\u11c1\3\u11c3")
buf.write(u"\3\u11c6\3\u11dc\3\u11dc\3\u11de\3\u11de\3\u1202\3\u1213")
buf.write(u"\3\u1215\3\u1236\3\u1239\3\u1239\3\u1240\3\u1240\3\u1282")
buf.write(u"\3\u1288\3\u128a\3\u128a\3\u128c\3\u128f\3\u1291\3\u129f")
buf.write(u"\3\u12a1\3\u12aa\3\u12b2\3\u12ea\3\u1302\3\u1305\3\u1307")
buf.write(u"\3\u130e\3\u1311\3\u1312\3\u1315\3\u132a\3\u132c\3\u1332")
buf.write(u"\3\u1334\3\u1335\3\u1337\3\u133b\3\u133f\3\u1346\3\u1349")
buf.write(u"\3\u134a\3\u134d\3\u134e\3\u1352\3\u1352\3\u1359\3\u1359")
buf.write(u"\3\u135f\3\u1365\3\u1402\3\u1443\3\u1445\3\u1447\3\u1449")
buf.write(u"\3\u144c\3\u1482\3\u14c3\3\u14c6\3\u14c7\3\u14c9\3\u14c9")
buf.write(u"\3\u1582\3\u15b7\3\u15ba\3\u15c0\3\u15da\3\u15df\3\u1602")
buf.write(u"\3\u1640\3\u1642\3\u1642\3\u1646\3\u1646\3\u1682\3\u16b7")
buf.write(u"\3\u1702\3\u171b\3\u171f\3\u172c\3\u18a2\3\u18e1\3\u1901")
buf.write(u"\3\u1901\3\u1a02\3\u1a34\3\u1a37\3\u1a40\3\u1a52\3\u1a85")
buf.write(u"\3\u1a88\3\u1a99\3\u1ac2\3\u1afa\3\u1c02\3\u1c0a\3\u1c0c")
buf.write(u"\3\u1c38\3\u1c3a\3\u1c40\3\u1c42\3\u1c42\3\u1c74\3\u1c91")
buf.write(u"\3\u1c94\3\u1ca9\3\u1cab\3\u1cb8\3\u1d02\3\u1d08\3\u1d0a")
buf.write(u"\3\u1d0b\3\u1d0d\3\u1d38\3\u1d3c\3\u1d3c\3\u1d3e\3\u1d3f")
buf.write(u"\3\u1d41\3\u1d43\3\u1d45\3\u1d45\3\u1d48\3\u1d49\3\u2002")
buf.write(u"\3\u239b\3\u2402\3\u2470\3\u2482\3\u2545\3\u3002\3\u3430")
buf.write(u"\3\u4402\3\u4648\3\u6802\3\u6a3a\3\u6a42\3\u6a60\3\u6ad2")
buf.write(u"\3\u6aef\3\u6b02\3\u6b38\3\u6b42\3\u6b45\3\u6b65\3\u6b79")
buf.write(u"\3\u6b7f\3\u6b91\3\u6f02\3\u6f46\3\u6f52\3\u6f80\3\u6f95")
buf.write(u"\3\u6fa1\3\u6fe2\3\u6fe3\3\u7002\3\u87ee\3\u8802\3\u8af4")
buf.write(u"\3\ub002\3\ub120\3\ub172\3\ub2fd\3\ubc02\3\ubc6c\3\ubc72")
buf.write(u"\3\ubc7e\3\ubc82\3\ubc8a\3\ubc92\3\ubc9b\3\ubca0\3\ubca0")
buf.write(u"\3\ud402\3\ud456\3\ud458\3\ud49e\3\ud4a0\3\ud4a1\3\ud4a4")
buf.write(u"\3\ud4a4\3\ud4a7\3\ud4a8\3\ud4ab\3\ud4ae\3\ud4b0\3\ud4bb")
buf.write(u"\3\ud4bd\3\ud4bd\3\ud4bf\3\ud4c5\3\ud4c7\3\ud507\3\ud509")
buf.write(u"\3\ud50c\3\ud50f\3\ud516\3\ud518\3\ud51e\3\ud520\3\ud53b")
buf.write(u"\3\ud53d\3\ud540\3\ud542\3\ud546\3\ud548\3\ud548\3\ud54c")
buf.write(u"\3\ud552\3\ud554\3\ud6a7\3\ud6aa\3\ud6c2\3\ud6c4\3\ud6dc")
buf.write(u"\3\ud6de\3\ud6fc\3\ud6fe\3\ud716\3\ud718\3\ud736\3\ud738")
buf.write(u"\3\ud750\3\ud752\3\ud770\3\ud772\3\ud78a\3\ud78c\3\ud7aa")
buf.write(u"\3\ud7ac\3\ud7c4\3\ud7c6\3\ud7cd\3\ue002\3\ue008\3\ue00a")
buf.write(u"\3\ue01a\3\ue01d\3\ue023\3\ue025\3\ue026\3\ue028\3\ue02c")
buf.write(u"\3\ue802\3\ue8c6\3\ue902\3\ue945\3\ue949\3\ue949\3\uee02")
buf.write(u"\3\uee05\3\uee07\3\uee21\3\uee23\3\uee24\3\uee26\3\uee26")
buf.write(u"\3\uee29\3\uee29\3\uee2b\3\uee34\3\uee36\3\uee39\3\uee3b")
buf.write(u"\3\uee3b\3\uee3d\3\uee3d\3\uee44\3\uee44\3\uee49\3\uee49")
buf.write(u"\3\uee4b\3\uee4b\3\uee4d\3\uee4d\3\uee4f\3\uee51\3\uee53")
buf.write(u"\3\uee54\3\uee56\3\uee56\3\uee59\3\uee59\3\uee5b\3\uee5b")
buf.write(u"\3\uee5d\3\uee5d\3\uee5f\3\uee5f\3\uee61\3\uee61\3\uee63")
buf.write(u"\3\uee64\3\uee66\3\uee66\3\uee69\3\uee6c\3\uee6e\3\uee74")
buf.write(u"\3\uee76\3\uee79\3\uee7b\3\uee7e\3\uee80\3\uee80\3\uee82")
buf.write(u"\3\uee8b\3\uee8d\3\uee9d\3\ueea3\3\ueea5\3\ueea7\3\ueeab")
buf.write(u"\3\ueead\3\ueebd\3\uf132\3\uf14b\3\uf152\3\uf16b\3\uf172")
buf.write(u"\3\uf18b\3\2\4\ua6d8\4\ua702\4\ub736\4\ub742\4\ub81f")
buf.write(u"\4\ub822\4\ucea3\4\uceb2\4\uebe2\4\uf802\4\ufa1f\4\u02ba")
buf.write(u"\2\62\2;\2C\2\\\2a\2a\2c\2|\2\u00ac\2\u00ac\2\u00b7\2")
buf.write(u"\u00b7\2\u00bc\2\u00bc\2\u00c2\2\u00d8\2\u00da\2\u00f8")
buf.write(u"\2\u00fa\2\u02c3\2\u02c8\2\u02d3\2\u02e2\2\u02e6\2\u02ee")
buf.write(u"\2\u02ee\2\u02f0\2\u02f0\2\u0347\2\u0347\2\u0372\2\u0376")
buf.write(u"\2\u0378\2\u0379\2\u037c\2\u037f\2\u0381\2\u0381\2\u0388")
buf.write(u"\2\u0388\2\u038a\2\u038c\2\u038e\2\u038e\2\u0390\2\u03a3")
buf.write(u"\2\u03a5\2\u03f7\2\u03f9\2\u0483\2\u048c\2\u0531\2\u0533")
buf.write(u"\2\u0558\2\u055b\2\u055b\2\u0563\2\u0589\2\u05b2\2\u05bf")
buf.write(u"\2\u05c1\2\u05c1\2\u05c3\2\u05c4\2\u05c6\2\u05c7\2\u05c9")
buf.write(u"\2\u05c9\2\u05d2\2\u05ec\2\u05f2\2\u05f4\2\u0612\2\u061c")
buf.write(u"\2\u0622\2\u0659\2\u065b\2\u066b\2\u0670\2\u06d5\2\u06d7")
buf.write(u"\2\u06de\2\u06e3\2\u06ea\2\u06ef\2\u06fe\2\u0701\2\u0701")
buf.write(u"\2\u0712\2\u0741\2\u074f\2\u07b3\2\u07c2\2\u07ec\2\u07f6")
buf.write(u"\2\u07f7\2\u07fc\2\u07fc\2\u0802\2\u0819\2\u081c\2\u082e")
buf.write(u"\2\u0842\2\u085a\2\u0862\2\u086c\2\u08a2\2\u08b6\2\u08b8")
buf.write(u"\2\u08bf\2\u08d6\2\u08e1\2\u08e5\2\u08eb\2\u08f2\2\u093d")
buf.write(u"\2\u093f\2\u094e\2\u0950\2\u0952\2\u0957\2\u0965\2\u0968")
buf.write(u"\2\u0971\2\u0973\2\u0985\2\u0987\2\u098e\2\u0991\2\u0992")
buf.write(u"\2\u0995\2\u09aa\2\u09ac\2\u09b2\2\u09b4\2\u09b4\2\u09b8")
buf.write(u"\2\u09bb\2\u09bf\2\u09c6\2\u09c9\2\u09ca\2\u09cd\2\u09ce")
buf.write(u"\2\u09d0\2\u09d0\2\u09d9\2\u09d9\2\u09de\2\u09df\2\u09e1")
buf.write(u"\2\u09e5\2\u09e8\2\u09f3\2\u09fe\2\u09fe\2\u0a03\2\u0a05")
buf.write(u"\2\u0a07\2\u0a0c\2\u0a11\2\u0a12\2\u0a15\2\u0a2a\2\u0a2c")
buf.write(u"\2\u0a32\2\u0a34\2\u0a35\2\u0a37\2\u0a38\2\u0a3a\2\u0a3b")
buf.write(u"\2\u0a40\2\u0a44\2\u0a49\2\u0a4a\2\u0a4d\2\u0a4e\2\u0a53")
buf.write(u"\2\u0a53\2\u0a5b\2\u0a5e\2\u0a60\2\u0a60\2\u0a68\2\u0a77")
buf.write(u"\2\u0a83\2\u0a85\2\u0a87\2\u0a8f\2\u0a91\2\u0a93\2\u0a95")
buf.write(u"\2\u0aaa\2\u0aac\2\u0ab2\2\u0ab4\2\u0ab5\2\u0ab7\2\u0abb")
buf.write(u"\2\u0abf\2\u0ac7\2\u0ac9\2\u0acb\2\u0acd\2\u0ace\2\u0ad2")
buf.write(u"\2\u0ad2\2\u0ae2\2\u0ae5\2\u0ae8\2\u0af1\2\u0afb\2\u0afe")
buf.write(u"\2\u0b03\2\u0b05\2\u0b07\2\u0b0e\2\u0b11\2\u0b12\2\u0b15")
buf.write(u"\2\u0b2a\2\u0b2c\2\u0b32\2\u0b34\2\u0b35\2\u0b37\2\u0b3b")
buf.write(u"\2\u0b3f\2\u0b46\2\u0b49\2\u0b4a\2\u0b4d\2\u0b4e\2\u0b58")
buf.write(u"\2\u0b59\2\u0b5e\2\u0b5f\2\u0b61\2\u0b65\2\u0b68\2\u0b71")
buf.write(u"\2\u0b73\2\u0b73\2\u0b84\2\u0b85\2\u0b87\2\u0b8c\2\u0b90")
buf.write(u"\2\u0b92\2\u0b94\2\u0b97\2\u0b9b\2\u0b9c\2\u0b9e\2\u0b9e")
buf.write(u"\2\u0ba0\2\u0ba1\2\u0ba5\2\u0ba6\2\u0baa\2\u0bac\2\u0bb0")
buf.write(u"\2\u0bbb\2\u0bc0\2\u0bc4\2\u0bc8\2\u0bca\2\u0bcc\2\u0bce")
buf.write(u"\2\u0bd2\2\u0bd2\2\u0bd9\2\u0bd9\2\u0be8\2\u0bf1\2\u0c02")
buf.write(u"\2\u0c05\2\u0c07\2\u0c0e\2\u0c10\2\u0c12\2\u0c14\2\u0c2a")
buf.write(u"\2\u0c2c\2\u0c3b\2\u0c3f\2\u0c46\2\u0c48\2\u0c4a\2\u0c4c")
buf.write(u"\2\u0c4e\2\u0c57\2\u0c58\2\u0c5a\2\u0c5c\2\u0c62\2\u0c65")
buf.write(u"\2\u0c68\2\u0c71\2\u0c82\2\u0c85\2\u0c87\2\u0c8e\2\u0c90")
buf.write(u"\2\u0c92\2\u0c94\2\u0caa\2\u0cac\2\u0cb5\2\u0cb7\2\u0cbb")
buf.write(u"\2\u0cbf\2\u0cc6\2\u0cc8\2\u0cca\2\u0ccc\2\u0cce\2\u0cd7")
buf.write(u"\2\u0cd8\2\u0ce0\2\u0ce0\2\u0ce2\2\u0ce5\2\u0ce8\2\u0cf1")
buf.write(u"\2\u0cf3\2\u0cf4\2\u0d02\2\u0d05\2\u0d07\2\u0d0e\2\u0d10")
buf.write(u"\2\u0d12\2\u0d14\2\u0d3c\2\u0d3f\2\u0d46\2\u0d48\2\u0d4a")
buf.write(u"\2\u0d4c\2\u0d4e\2\u0d50\2\u0d50\2\u0d56\2\u0d59\2\u0d61")
buf.write(u"\2\u0d65\2\u0d68\2\u0d71\2\u0d7c\2\u0d81\2\u0d84\2\u0d85")
buf.write(u"\2\u0d87\2\u0d98\2\u0d9c\2\u0db3\2\u0db5\2\u0dbd\2\u0dbf")
buf.write(u"\2\u0dbf\2\u0dc2\2\u0dc8\2\u0dd1\2\u0dd6\2\u0dd8\2\u0dd8")
buf.write(u"\2\u0dda\2\u0de1\2\u0de8\2\u0df1\2\u0df4\2\u0df5\2\u0e03")
buf.write(u"\2\u0e3c\2\u0e42\2\u0e48\2\u0e4f\2\u0e4f\2\u0e52\2\u0e5b")
buf.write(u"\2\u0e83\2\u0e84\2\u0e86\2\u0e86\2\u0e89\2\u0e8a\2\u0e8c")
buf.write(u"\2\u0e8c\2\u0e8f\2\u0e8f\2\u0e96\2\u0e99\2\u0e9b\2\u0ea1")
buf.write(u"\2\u0ea3\2\u0ea5\2\u0ea7\2\u0ea7\2\u0ea9\2\u0ea9\2\u0eac")
buf.write(u"\2\u0ead\2\u0eaf\2\u0ebb\2\u0ebd\2\u0ebf\2\u0ec2\2\u0ec6")
buf.write(u"\2\u0ec8\2\u0ec8\2\u0ecf\2\u0ecf\2\u0ed2\2\u0edb\2\u0ede")
buf.write(u"\2\u0ee1\2\u0f02\2\u0f02\2\u0f22\2\u0f2b\2\u0f42\2\u0f49")
buf.write(u"\2\u0f4b\2\u0f6e\2\u0f73\2\u0f83\2\u0f8a\2\u0f99\2\u0f9b")
buf.write(u"\2\u0fbe\2\u1002\2\u1038\2\u103a\2\u103a\2\u103d\2\u104b")
buf.write(u"\2\u1052\2\u1064\2\u1067\2\u106a\2\u1070\2\u1088\2\u1090")
buf.write(u"\2\u1090\2\u1092\2\u109b\2\u109e\2\u109f\2\u10a2\2\u10c7")
buf.write(u"\2\u10c9\2\u10c9\2\u10cf\2\u10cf\2\u10d2\2\u10fc\2\u10fe")
buf.write(u"\2\u124a\2\u124c\2\u124f\2\u1252\2\u1258\2\u125a\2\u125a")
buf.write(u"\2\u125c\2\u125f\2\u1262\2\u128a\2\u128c\2\u128f\2\u1292")
buf.write(u"\2\u12b2\2\u12b4\2\u12b7\2\u12ba\2\u12c0\2\u12c2\2\u12c2")
buf.write(u"\2\u12c4\2\u12c7\2\u12ca\2\u12d8\2\u12da\2\u1312\2\u1314")
buf.write(u"\2\u1317\2\u131a\2\u135c\2\u1361\2\u1361\2\u1382\2\u1391")
buf.write(u"\2\u13a2\2\u13f7\2\u13fa\2\u13ff\2\u1403\2\u166e\2\u1671")
buf.write(u"\2\u1681\2\u1683\2\u169c\2\u16a2\2\u16ec\2\u16f0\2\u16fa")
buf.write(u"\2\u1702\2\u170e\2\u1710\2\u1715\2\u1722\2\u1735\2\u1742")
buf.write(u"\2\u1755\2\u1762\2\u176e\2\u1770\2\u1772\2\u1774\2\u1775")
buf.write(u"\2\u1782\2\u17b5\2\u17b8\2\u17ca\2\u17d9\2\u17d9\2\u17de")
buf.write(u"\2\u17de\2\u17e2\2\u17eb\2\u1812\2\u181b\2\u1822\2\u1879")
buf.write(u"\2\u1882\2\u18ac\2\u18b2\2\u18f7\2\u1902\2\u1920\2\u1922")
buf.write(u"\2\u192d\2\u1932\2\u193a\2\u1948\2\u196f\2\u1972\2\u1976")
buf.write(u"\2\u1982\2\u19ad\2\u19b2\2\u19cb\2\u19d2\2\u19db\2\u1a02")
buf.write(u"\2\u1a1d\2\u1a22\2\u1a60\2\u1a63\2\u1a76\2\u1a82\2\u1a8b")
buf.write(u"\2\u1a92\2\u1a9b\2\u1aa9\2\u1aa9\2\u1b02\2\u1b35\2\u1b37")
buf.write(u"\2\u1b45\2\u1b47\2\u1b4d\2\u1b52\2\u1b5b\2\u1b82\2\u1bab")
buf.write(u"\2\u1bae\2\u1be7\2\u1be9\2\u1bf3\2\u1c02\2\u1c37\2\u1c42")
buf.write(u"\2\u1c4b\2\u1c4f\2\u1c7f\2\u1c82\2\u1c8a\2\u1ceb\2\u1cee")
buf.write(u"\2\u1cf0\2\u1cf5\2\u1cf7\2\u1cf8\2\u1d02\2\u1dc1\2\u1de9")
buf.write(u"\2\u1df6\2\u1e02\2\u1f17\2\u1f1a\2\u1f1f\2\u1f22\2\u1f47")
buf.write(u"\2\u1f4a\2\u1f4f\2\u1f52\2\u1f59\2\u1f5b\2\u1f5b\2\u1f5d")
buf.write(u"\2\u1f5d\2\u1f5f\2\u1f5f\2\u1f61\2\u1f7f\2\u1f82\2\u1fb6")
buf.write(u"\2\u1fb8\2\u1fbe\2\u1fc0\2\u1fc0\2\u1fc4\2\u1fc6\2\u1fc8")
buf.write(u"\2\u1fce\2\u1fd2\2\u1fd5\2\u1fd8\2\u1fdd\2\u1fe2\2\u1fee")
buf.write(u"\2\u1ff4\2\u1ff6\2\u1ff8\2\u1ffe\2\u2073\2\u2073\2\u2081")
buf.write(u"\2\u2081\2\u2092\2\u209e\2\u2104\2\u2104\2\u2109\2\u2109")
buf.write(u"\2\u210c\2\u2115\2\u2117\2\u2117\2\u211b\2\u211f\2\u2126")
buf.write(u"\2\u2126\2\u2128\2\u2128\2\u212a\2\u212a\2\u212c\2\u212f")
buf.write(u"\2\u2131\2\u213b\2\u213e\2\u2141\2\u2147\2\u214b\2\u2150")
buf.write(u"\2\u2150\2\u2162\2\u218a\2\u24b8\2\u24eb\2\u2c02\2\u2c30")
buf.write(u"\2\u2c32\2\u2c60\2\u2c62\2\u2ce6\2\u2ced\2\u2cf0\2\u2cf4")
buf.write(u"\2\u2cf5\2\u2d02\2\u2d27\2\u2d29\2\u2d29\2\u2d2f\2\u2d2f")
buf.write(u"\2\u2d32\2\u2d69\2\u2d71\2\u2d71\2\u2d82\2\u2d98\2\u2da2")
buf.write(u"\2\u2da8\2\u2daa\2\u2db0\2\u2db2\2\u2db8\2\u2dba\2\u2dc0")
buf.write(u"\2\u2dc2\2\u2dc8\2\u2dca\2\u2dd0\2\u2dd2\2\u2dd8\2\u2dda")
buf.write(u"\2\u2de0\2\u2de2\2\u2e01\2\u2e31\2\u2e31\2\u3007\2\u3009")
buf.write(u"\2\u3023\2\u302b\2\u3033\2\u3037\2\u303a\2\u303e\2\u3043")
buf.write(u"\2\u3098\2\u309f\2\u30a1\2\u30a3\2\u30fc\2\u30fe\2\u3101")
buf.write(u"\2\u3107\2\u3130\2\u3133\2\u3190\2\u31a2\2\u31bc\2\u31f2")
buf.write(u"\2\u3201\2\u3402\2\u4db7\2\u4e02\2\u9fec\2\ua002\2\ua48e")
buf.write(u"\2\ua4d2\2\ua4ff\2\ua502\2\ua60e\2\ua612\2\ua62d\2\ua642")
buf.write(u"\2\ua670\2\ua676\2\ua67d\2\ua681\2\ua6f1\2\ua719\2\ua721")
buf.write(u"\2\ua724\2\ua78a\2\ua78d\2\ua7b0\2\ua7b2\2\ua7b9\2\ua7f9")
buf.write(u"\2\ua803\2\ua805\2\ua807\2\ua809\2\ua80c\2\ua80e\2\ua829")
buf.write(u"\2\ua842\2\ua875\2\ua882\2\ua8c5\2\ua8c7\2\ua8c7\2\ua8d2")
buf.write(u"\2\ua8db\2\ua8f4\2\ua8f9\2\ua8fd\2\ua8fd\2\ua8ff\2\ua8ff")
buf.write(u"\2\ua902\2\ua92c\2\ua932\2\ua954\2\ua962\2\ua97e\2\ua982")
buf.write(u"\2\ua9b4\2\ua9b6\2\ua9c1\2\ua9d1\2\ua9db\2\ua9e2\2\ua9e6")
buf.write(u"\2\ua9e8\2\uaa00\2\uaa02\2\uaa38\2\uaa42\2\uaa4f\2\uaa52")
buf.write(u"\2\uaa5b\2\uaa62\2\uaa78\2\uaa7c\2\uaa7c\2\uaa80\2\uaac0")
buf.write(u"\2\uaac2\2\uaac2\2\uaac4\2\uaac4\2\uaadd\2\uaadf\2\uaae2")
buf.write(u"\2\uaaf1\2\uaaf4\2\uaaf7\2\uab03\2\uab08\2\uab0b\2\uab10")
buf.write(u"\2\uab13\2\uab18\2\uab22\2\uab28\2\uab2a\2\uab30\2\uab32")
buf.write(u"\2\uab5c\2\uab5e\2\uab67\2\uab72\2\uabec\2\uabf2\2\uabfb")
buf.write(u"\2\uac02\2\ud7a5\2\ud7b2\2\ud7c8\2\ud7cd\2\ud7fd\2\uf902")
buf.write(u"\2\ufa6f\2\ufa72\2\ufadb\2\ufb02\2\ufb08\2\ufb15\2\ufb19")
buf.write(u"\2\ufb1f\2\ufb2a\2\ufb2c\2\ufb38\2\ufb3a\2\ufb3e\2\ufb40")
buf.write(u"\2\ufb40\2\ufb42\2\ufb43\2\ufb45\2\ufb46\2\ufb48\2\ufbb3")
buf.write(u"\2\ufbd5\2\ufd3f\2\ufd52\2\ufd91\2\ufd94\2\ufdc9\2\ufdf2")
buf.write(u"\2\ufdfd\2\ufe72\2\ufe76\2\ufe78\2\ufefe\2\uff12\2\uff1b")
buf.write(u"\2\uff23\2\uff3c\2\uff43\2\uff5c\2\uff68\2\uffc0\2\uffc4")
buf.write(u"\2\uffc9\2\uffcc\2\uffd1\2\uffd4\2\uffd9\2\uffdc\2\uffde")
buf.write(u"\2\2\3\r\3\17\3(\3*\3<\3>\3?\3A\3O\3R\3_\3\u0082\3\u00fc")
buf.write(u"\3\u0142\3\u0176\3\u0282\3\u029e\3\u02a2\3\u02d2\3\u0302")
buf.write(u"\3\u0321\3\u032f\3\u034c\3\u0352\3\u037c\3\u0382\3\u039f")
buf.write(u"\3\u03a2\3\u03c5\3\u03ca\3\u03d1\3\u03d3\3\u03d7\3\u0402")
buf.write(u"\3\u049f\3\u04a2\3\u04ab\3\u04b2\3\u04d5\3\u04da\3\u04fd")
buf.write(u"\3\u0502\3\u0529\3\u0532\3\u0565\3\u0602\3\u0738\3\u0742")
buf.write(u"\3\u0757\3\u0762\3\u0769\3\u0802\3\u0807\3\u080a\3\u080a")
buf.write(u"\3\u080c\3\u0837\3\u0839\3\u083a\3\u083e\3\u083e\3\u0841")
buf.write(u"\3\u0857\3\u0862\3\u0878\3\u0882\3\u08a0\3\u08e2\3\u08f4")
buf.write(u"\3\u08f6\3\u08f7\3\u0902\3\u0917\3\u0922\3\u093b\3\u0982")
buf.write(u"\3\u09b9\3\u09c0\3\u09c1\3\u0a02\3\u0a05\3\u0a07\3\u0a08")
buf.write(u"\3\u0a0e\3\u0a15\3\u0a17\3\u0a19\3\u0a1b\3\u0a35\3\u0a62")
buf.write(u"\3\u0a7e\3\u0a82\3\u0a9e\3\u0ac2\3\u0ac9\3\u0acb\3\u0ae6")
buf.write(u"\3\u0b02\3\u0b37\3\u0b42\3\u0b57\3\u0b62\3\u0b74\3\u0b82")
buf.write(u"\3\u0b93\3\u0c02\3\u0c4a\3\u0c82\3\u0cb4\3\u0cc2\3\u0cf4")
buf.write(u"\3\u1002\3\u1047\3\u1068\3\u1071\3\u1084\3\u10ba\3\u10d2")
buf.write(u"\3\u10ea\3\u10f2\3\u10fb\3\u1102\3\u1134\3\u1138\3\u1141")
buf.write(u"\3\u1152\3\u1174\3\u1178\3\u1178\3\u1182\3\u11c1\3\u11c3")
buf.write(u"\3\u11c6\3\u11d2\3\u11dc\3\u11de\3\u11de\3\u1202\3\u1213")
buf.write(u"\3\u1215\3\u1236\3\u1239\3\u1239\3\u1240\3\u1240\3\u1282")
buf.write(u"\3\u1288\3\u128a\3\u128a\3\u128c\3\u128f\3\u1291\3\u129f")
buf.write(u"\3\u12a1\3\u12aa\3\u12b2\3\u12ea\3\u12f2\3\u12fb\3\u1302")
buf.write(u"\3\u1305\3\u1307\3\u130e\3\u1311\3\u1312\3\u1315\3\u132a")
buf.write(u"\3\u132c\3\u1332\3\u1334\3\u1335\3\u1337\3\u133b\3\u133f")
buf.write(u"\3\u1346\3\u1349\3\u134a\3\u134d\3\u134e\3\u1352\3\u1352")
buf.write(u"\3\u1359\3\u1359\3\u135f\3\u1365\3\u1402\3\u1443\3\u1445")
buf.write(u"\3\u1447\3\u1449\3\u144c\3\u1452\3\u145b\3\u1482\3\u14c3")
buf.write(u"\3\u14c6\3\u14c7\3\u14c9\3\u14c9\3\u14d2\3\u14db\3\u1582")
buf.write(u"\3\u15b7\3\u15ba\3\u15c0\3\u15da\3\u15df\3\u1602\3\u1640")
buf.write(u"\3\u1642\3\u1642\3\u1646\3\u1646\3\u1652\3\u165b\3\u1682")
buf.write(u"\3\u16b7\3\u16c2\3\u16cb\3\u1702\3\u171b\3\u171f\3\u172c")
buf.write(u"\3\u1732\3\u173b\3\u18a2\3\u18eb\3\u1901\3\u1901\3\u1a02")
buf.write(u"\3\u1a34\3\u1a37\3\u1a40\3\u1a52\3\u1a85\3\u1a88\3\u1a99")
buf.write(u"\3\u1ac2\3\u1afa\3\u1c02\3\u1c0a\3\u1c0c\3\u1c38\3\u1c3a")
buf.write(u"\3\u1c40\3\u1c42\3\u1c42\3\u1c52\3\u1c5b\3\u1c74\3\u1c91")
buf.write(u"\3\u1c94\3\u1ca9\3\u1cab\3\u1cb8\3\u1d02\3\u1d08\3\u1d0a")
buf.write(u"\3\u1d0b\3\u1d0d\3\u1d38\3\u1d3c\3\u1d3c\3\u1d3e\3\u1d3f")
buf.write(u"\3\u1d41\3\u1d43\3\u1d45\3\u1d45\3\u1d48\3\u1d49\3\u1d52")
buf.write(u"\3\u1d5b\3\u2002\3\u239b\3\u2402\3\u2470\3\u2482\3\u2545")
buf.write(u"\3\u3002\3\u3430\3\u4402\3\u4648\3\u6802\3\u6a3a\3\u6a42")
buf.write(u"\3\u6a60\3\u6a62\3\u6a6b\3\u6ad2\3\u6aef\3\u6b02\3\u6b38")
buf.write(u"\3\u6b42\3\u6b45\3\u6b52\3\u6b5b\3\u6b65\3\u6b79\3\u6b7f")
buf.write(u"\3\u6b91\3\u6f02\3\u6f46\3\u6f52\3\u6f80\3\u6f95\3\u6fa1")
buf.write(u"\3\u6fe2\3\u6fe3\3\u7002\3\u87ee\3\u8802\3\u8af4\3\ub002")
buf.write(u"\3\ub120\3\ub172\3\ub2fd\3\ubc02\3\ubc6c\3\ubc72\3\ubc7e")
buf.write(u"\3\ubc82\3\ubc8a\3\ubc92\3\ubc9b\3\ubca0\3\ubca0\3\ud402")
buf.write(u"\3\ud456\3\ud458\3\ud49e\3\ud4a0\3\ud4a1\3\ud4a4\3\ud4a4")
buf.write(u"\3\ud4a7\3\ud4a8\3\ud4ab\3\ud4ae\3\ud4b0\3\ud4bb\3\ud4bd")
buf.write(u"\3\ud4bd\3\ud4bf\3\ud4c5\3\ud4c7\3\ud507\3\ud509\3\ud50c")
buf.write(u"\3\ud50f\3\ud516\3\ud518\3\ud51e\3\ud520\3\ud53b\3\ud53d")
buf.write(u"\3\ud540\3\ud542\3\ud546\3\ud548\3\ud548\3\ud54c\3\ud552")
buf.write(u"\3\ud554\3\ud6a7\3\ud6aa\3\ud6c2\3\ud6c4\3\ud6dc\3\ud6de")
buf.write(u"\3\ud6fc\3\ud6fe\3\ud716\3\ud718\3\ud736\3\ud738\3\ud750")
buf.write(u"\3\ud752\3\ud770\3\ud772\3\ud78a\3\ud78c\3\ud7aa\3\ud7ac")
buf.write(u"\3\ud7c4\3\ud7c6\3\ud7cd\3\ud7d0\3\ud801\3\ue002\3\ue008")
buf.write(u"\3\ue00a\3\ue01a\3\ue01d\3\ue023\3\ue025\3\ue026\3\ue028")
buf.write(u"\3\ue02c\3\ue802\3\ue8c6\3\ue902\3\ue945\3\ue949\3\ue949")
buf.write(u"\3\ue952\3\ue95b\3\uee02\3\uee05\3\uee07\3\uee21\3\uee23")
buf.write(u"\3\uee24\3\uee26\3\uee26\3\uee29\3\uee29\3\uee2b\3\uee34")
buf.write(u"\3\uee36\3\uee39\3\uee3b\3\uee3b\3\uee3d\3\uee3d\3\uee44")
buf.write(u"\3\uee44\3\uee49\3\uee49\3\uee4b\3\uee4b\3\uee4d\3\uee4d")
buf.write(u"\3\uee4f\3\uee51\3\uee53\3\uee54\3\uee56\3\uee56\3\uee59")
buf.write(u"\3\uee59\3\uee5b\3\uee5b\3\uee5d\3\uee5d\3\uee5f\3\uee5f")
buf.write(u"\3\uee61\3\uee61\3\uee63\3\uee64\3\uee66\3\uee66\3\uee69")
buf.write(u"\3\uee6c\3\uee6e\3\uee74\3\uee76\3\uee79\3\uee7b\3\uee7e")
buf.write(u"\3\uee80\3\uee80\3\uee82\3\uee8b\3\uee8d\3\uee9d\3\ueea3")
buf.write(u"\3\ueea5\3\ueea7\3\ueeab\3\ueead\3\ueebd\3\uf132\3\uf14b")
buf.write(u"\3\uf152\3\uf16b\3\uf172\3\uf18b\3\2\4\ua6d8\4\ua702")
buf.write(u"\4\ub736\4\ub742\4\ub81f\4\ub822\4\ucea3\4\uceb2\4\uebe2")
buf.write(u"\4\uf802\4\ufa1f\4\u05c3\2\3\3\2\2\2\2\5\3\2\2\2\2\7")
buf.write(u"\3\2\2\2\2\t\3\2\2\2\2\13\3\2\2\2\2\r\3\2\2\2\2\17\3")
buf.write(u"\2\2\2\2\21\3\2\2\2\2\23\3\2\2\2\2\25\3\2\2\2\2\27\3")
buf.write(u"\2\2\2\2\31\3\2\2\2\2\33\3\2\2\2\2\35\3\2\2\2\2\37\3")
buf.write(u"\2\2\2\2!\3\2\2\2\2#\3\2\2\2\2%\3\2\2\2\2\'\3\2\2\2\2")
buf.write(u")\3\2\2\2\2+\3\2\2\2\2-\3\2\2\2\2/\3\2\2\2\2\61\3\2\2")
buf.write(u"\2\2\63\3\2\2\2\2\65\3\2\2\2\2\67\3\2\2\2\29\3\2\2\2")
buf.write(u"\2;\3\2\2\2\2=\3\2\2\2\2?\3\2\2\2\2A\3\2\2\2\2C\3\2\2")
buf.write(u"\2\2E\3\2\2\2\2G\3\2\2\2\2I\3\2\2\2\2K\3\2\2\2\2M\3\2")
buf.write(u"\2\2\2O\3\2\2\2\2Q\3\2\2\2\2S\3\2\2\2\2U\3\2\2\2\2W\3")
buf.write(u"\2\2\2\2Y\3\2\2\2\2[\3\2\2\2\2]\3\2\2\2\2_\3\2\2\2\2")
buf.write(u"a\3\2\2\2\2c\3\2\2\2\2e\3\2\2\2\2g\3\2\2\2\2i\3\2\2\2")
buf.write(u"\2k\3\2\2\2\2m\3\2\2\2\2o\3\2\2\2\2q\3\2\2\2\2s\3\2\2")
buf.write(u"\2\2u\3\2\2\2\2w\3\2\2\2\2y\3\2\2\2\2{\3\2\2\2\2}\3\2")
buf.write(u"\2\2\2\177\3\2\2\2\2\u0081\3\2\2\2\2\u0083\3\2\2\2\2")
buf.write(u"\u0085\3\2\2\2\2\u0087\3\2\2\2\2\u0089\3\2\2\2\2\u008b")
buf.write(u"\3\2\2\2\2\u008d\3\2\2\2\2\u008f\3\2\2\2\2\u0091\3\2")
buf.write(u"\2\2\2\u0093\3\2\2\2\2\u0095\3\2\2\2\2\u0097\3\2\2\2")
buf.write(u"\2\u0099\3\2\2\2\2\u009b\3\2\2\2\2\u009d\3\2\2\2\2\u009f")
buf.write(u"\3\2\2\2\2\u00a1\3\2\2\2\2\u00a3\3\2\2\2\2\u00a5\3\2")
buf.write(u"\2\2\2\u00a7\3\2\2\2\2\u00a9\3\2\2\2\2\u00ab\3\2\2\2")
buf.write(u"\2\u00ad\3\2\2\2\2\u00af\3\2\2\2\2\u00b1\3\2\2\2\2\u00b3")
buf.write(u"\3\2\2\2\2\u00b5\3\2\2\2\2\u00b7\3\2\2\2\2\u00b9\3\2")
buf.write(u"\2\2\2\u00bb\3\2\2\2\2\u00bd\3\2\2\2\2\u00bf\3\2\2\2")
buf.write(u"\2\u00c1\3\2\2\2\2\u00c3\3\2\2\2\2\u00c5\3\2\2\2\2\u00c7")
buf.write(u"\3\2\2\2\2\u00c9\3\2\2\2\2\u00cb\3\2\2\2\2\u00cd\3\2")
buf.write(u"\2\2\2\u00cf\3\2\2\2\2\u00d1\3\2\2\2\2\u00d3\3\2\2\2")
buf.write(u"\2\u00d5\3\2\2\2\2\u00d7\3\2\2\2\2\u00d9\3\2\2\2\2\u00db")
buf.write(u"\3\2\2\2\2\u00dd\3\2\2\2\2\u00df\3\2\2\2\2\u00e1\3\2")
buf.write(u"\2\2\2\u00e3\3\2\2\2\2\u00e5\3\2\2\2\2\u00e7\3\2\2\2")
buf.write(u"\2\u00e9\3\2\2\2\2\u00eb\3\2\2\2\2\u00ed\3\2\2\2\2\u00ef")
buf.write(u"\3\2\2\2\2\u00f1\3\2\2\2\2\u00f3\3\2\2\2\2\u00f5\3\2")
buf.write(u"\2\2\2\u00f7\3\2\2\2\2\u00f9\3\2\2\2\2\u00fb\3\2\2\2")
buf.write(u"\2\u00fd\3\2\2\2\2\u00ff\3\2\2\2\2\u0101\3\2\2\2\2\u0103")
buf.write(u"\3\2\2\2\2\u0105\3\2\2\2\2\u0107\3\2\2\2\2\u0109\3\2")
buf.write(u"\2\2\2\u010b\3\2\2\2\2\u010d\3\2\2\2\2\u010f\3\2\2\2")
buf.write(u"\2\u0111\3\2\2\2\2\u0113\3\2\2\2\2\u0115\3\2\2\2\2\u0117")
buf.write(u"\3\2\2\2\2\u0119\3\2\2\2\2\u011b\3\2\2\2\2\u011d\3\2")
buf.write(u"\2\2\2\u011f\3\2\2\2\2\u0121\3\2\2\2\2\u0123\3\2\2\2")
buf.write(u"\2\u0125\3\2\2\2\2\u0127\3\2\2\2\2\u0129\3\2\2\2\2\u012b")
buf.write(u"\3\2\2\2\2\u012d\3\2\2\2\2\u012f\3\2\2\2\2\u0131\3\2")
buf.write(u"\2\2\2\u0133\3\2\2\2\2\u0135\3\2\2\2\2\u0137\3\2\2\2")
buf.write(u"\2\u0139\3\2\2\2\2\u013b\3\2\2\2\2\u013d\3\2\2\2\3\u0175")
buf.write(u"\3\2\2\2\5\u0177\3\2\2\2\7\u0179\3\2\2\2\t\u017b\3\2")
buf.write(u"\2\2\13\u017d\3\2\2\2\r\u017f\3\2\2\2\17\u0181\3\2\2")
buf.write(u"\2\21\u0183\3\2\2\2\23\u0185\3\2\2\2\25\u0187\3\2\2\2")
buf.write(u"\27\u0189\3\2\2\2\31\u018c\3\2\2\2\33\u018e\3\2\2\2\35")
buf.write(u"\u0190\3\2\2\2\37\u0193\3\2\2\2!\u0196\3\2\2\2#\u0198")
buf.write(u"\3\2\2\2%\u019a\3\2\2\2\'\u019c\3\2\2\2)\u019f\3\2\2")
buf.write(u"\2+\u01a1\3\2\2\2-\u01a4\3\2\2\2/\u01a7\3\2\2\2\61\u01aa")
buf.write(u"\3\2\2\2\63\u01ad\3\2\2\2\65\u01b3\3\2\2\2\67\u01ba\3")
buf.write(u"\2\2\29\u01be\3\2\2\2;\u01c4\3\2\2\2=\u01c8\3\2\2\2?")
buf.write(u"\u01ce\3\2\2\2A\u01d6\3\2\2\2C\u01da\3\2\2\2E\u01dd\3")
buf.write(u"\2\2\2G\u01e1\3\2\2\2I\u01e8\3\2\2\2K\u01f6\3\2\2\2M")
buf.write(u"\u01fd\3\2\2\2O\u0203\3\2\2\2Q\u020b\3\2\2\2S\u020e\3")
buf.write(u"\2\2\2U\u0216\3\2\2\2W\u021b\3\2\2\2Y\u0220\3\2\2\2[")
buf.write(u"\u0226\3\2\2\2]\u022e\3\2\2\2_\u0235\3\2\2\2a\u023c\3")
buf.write(u"\2\2\2c\u0245\3\2\2\2e\u0250\3\2\2\2g\u0257\3\2\2\2i")
buf.write(u"\u025d\3\2\2\2k\u026a\3\2\2\2m\u0277\3\2\2\2o\u0289\3")
buf.write(u"\2\2\2q\u0292\3\2\2\2s\u029a\3\2\2\2u\u02a5\3\2\2\2w")
buf.write(u"\u02ae\3\2\2\2y\u02b5\3\2\2\2{\u02ba\3\2\2\2}\u02c1\3")
buf.write(u"\2\2\2\177\u02ca\3\2\2\2\u0081\u02cf\3\2\2\2\u0083\u02d4")
buf.write(u"\3\2\2\2\u0085\u02d9\3\2\2\2\u0087\u02dd\3\2\2\2\u0089")
buf.write(u"\u02e4\3\2\2\2\u008b\u02eb\3\2\2\2\u008d\u02f5\3\2\2")
buf.write(u"\2\u008f\u02fc\3\2\2\2\u0091\u0304\3\2\2\2\u0093\u0309")
buf.write(u"\3\2\2\2\u0095\u030d\3\2\2\2\u0097\u0315\3\2\2\2\u0099")
buf.write(u"\u031a\3\2\2\2\u009b\u031f\3\2\2\2\u009d\u0324\3\2\2")
buf.write(u"\2\u009f\u032a\3\2\2\2\u00a1\u0331\3\2\2\2\u00a3\u0334")
buf.write(u"\3\2\2\2\u00a5\u033b\3\2\2\2\u00a7\u0345\3\2\2\2\u00a9")
buf.write(u"\u0348\3\2\2\2\u00ab\u034e\3\2\2\2\u00ad\u0356\3\2\2")
buf.write(u"\2\u00af\u0360\3\2\2\2\u00b1\u0366\3\2\2\2\u00b3\u036d")
buf.write(u"\3\2\2\2\u00b5\u0375\3\2\2\2\u00b7\u037f\3\2\2\2\u00b9")
buf.write(u"\u0384\3\2\2\2\u00bb\u0387\3\2\2\2\u00bd\u038e\3\2\2")
buf.write(u"\2\u00bf\u0393\3\2\2\2\u00c1\u0397\3\2\2\2\u00c3\u039c")
buf.write(u"\3\2\2\2\u00c5\u03a1\3\2\2\2\u00c7\u03a7\3\2\2\2\u00c9")
buf.write(u"\u03ad\3\2\2\2\u00cb\u03b5\3\2\2\2\u00cd\u03b8\3\2\2")
buf.write(u"\2\u00cf\u03bc\3\2\2\2\u00d1\u03c4\3\2\2\2\u00d3\u03c9")
buf.write(u"\3\2\2\2\u00d5\u03cc\3\2\2\2\u00d7\u03d3\3\2\2\2\u00d9")
buf.write(u"\u03d6\3\2\2\2\u00db\u03d9\3\2\2\2\u00dd\u03df\3\2\2")
buf.write(u"\2\u00df\u03e5\3\2\2\2\u00e1\u03ea\3\2\2\2\u00e3\u03f1")
buf.write(u"\3\2\2\2\u00e5\u03f9\3\2\2\2\u00e7\u03ff\3\2\2\2\u00e9")
buf.write(u"\u0405\3\2\2\2\u00eb\u040f\3\2\2\2\u00ed\u041a\3\2\2")
buf.write(u"\2\u00ef\u0421\3\2\2\2\u00f1\u0429\3\2\2\2\u00f3\u0431")
buf.write(u"\3\2\2\2\u00f5\u0438\3\2\2\2\u00f7\u0440\3\2\2\2\u00f9")
buf.write(u"\u0449\3\2\2\2\u00fb\u044f\3\2\2\2\u00fd\u0458\3\2\2")
buf.write(u"\2\u00ff\u045c\3\2\2\2\u0101\u0466\3\2\2\2\u0103\u046d")
buf.write(u"\3\2\2\2\u0105\u0471\3\2\2\2\u0107\u0477\3\2\2\2\u0109")
buf.write(u"\u047c\3\2\2\2\u010b\u0486\3\2\2\2\u010d\u048b\3\2\2")
buf.write(u"\2\u010f\u048e\3\2\2\2\u0111\u049a\3\2\2\2\u0113\u04a2")
buf.write(u"\3\2\2\2\u0115\u04a8\3\2\2\2\u0117\u04af\3\2\2\2\u0119")
buf.write(u"\u04b6\3\2\2\2\u011b\u04bc\3\2\2\2\u011d\u04c3\3\2\2")
buf.write(u"\2\u011f\u04ca\3\2\2\2\u0121\u04cf\3\2\2\2\u0123\u04d7")
buf.write(u"\3\2\2\2\u0125\u04dc\3\2\2\2\u0127\u04e2\3\2\2\2\u0129")
buf.write(u"\u04e7\3\2\2\2\u012b\u04ef\3\2\2\2\u012d\u0518\3\2\2")
buf.write(u"\2\u012f\u0544\3\2\2\2\u0131\u054f\3\2\2\2\u0133\u0565")
buf.write(u"\3\2\2\2\u0135\u0567\3\2\2\2\u0137\u056a\3\2\2\2\u0139")
buf.write(u"\u0575\3\2\2\2\u013b\u0585\3\2\2\2\u013d\u0589\3\2\2")
buf.write(u"\2\u013f\u058b\3\2\2\2\u0141\u058d\3\2\2\2\u0143\u058f")
buf.write(u"\3\2\2\2\u0145\u0591\3\2\2\2\u0147\u0593\3\2\2\2\u0149")
buf.write(u"\u0595\3\2\2\2\u014b\u0597\3\2\2\2\u014d\u0599\3\2\2")
buf.write(u"\2\u014f\u059b\3\2\2\2\u0151\u059d\3\2\2\2\u0153\u059f")
buf.write(u"\3\2\2\2\u0155\u05a1\3\2\2\2\u0157\u05a3\3\2\2\2\u0159")
buf.write(u"\u05a5\3\2\2\2\u015b\u05a7\3\2\2\2\u015d\u05a9\3\2\2")
buf.write(u"\2\u015f\u05ab\3\2\2\2\u0161\u05ad\3\2\2\2\u0163\u05af")
buf.write(u"\3\2\2\2\u0165\u05b1\3\2\2\2\u0167\u05b3\3\2\2\2\u0169")
buf.write(u"\u05b5\3\2\2\2\u016b\u05b7\3\2\2\2\u016d\u05b9\3\2\2")
buf.write(u"\2\u016f\u05bb\3\2\2\2\u0171\u05bd\3\2\2\2\u0173\u05bf")
buf.write(u"\3\2\2\2\u0175\u0176\7=\2\2\u0176\4\3\2\2\2\u0177\u0178")
buf.write(u"\7\60\2\2\u0178\6\3\2\2\2\u0179\u017a\7*\2\2\u017a\b")
buf.write(u"\3\2\2\2\u017b\u017c\7+\2\2\u017c\n\3\2\2\2\u017d\u017e")
buf.write(u"\7.\2\2\u017e\f\3\2\2\2\u017f\u0180\7?\2\2\u0180\16\3")
buf.write(u"\2\2\2\u0181\u0182\7,\2\2\u0182\20\3\2\2\2\u0183\u0184")
buf.write(u"\7-\2\2\u0184\22\3\2\2\2\u0185\u0186\7/\2\2\u0186\24")
buf.write(u"\3\2\2\2\u0187\u0188\7\u0080\2\2\u0188\26\3\2\2\2\u0189")
buf.write(u"\u018a\7~\2\2\u018a\u018b\7~\2\2\u018b\30\3\2\2\2\u018c")
buf.write(u"\u018d\7\61\2\2\u018d\32\3\2\2\2\u018e\u018f\7\'\2\2")
buf.write(u"\u018f\34\3\2\2\2\u0190\u0191\7>\2\2\u0191\u0192\7>\2")
buf.write(u"\2\u0192\36\3\2\2\2\u0193\u0194\7@\2\2\u0194\u0195\7")
buf.write(u"@\2\2\u0195 \3\2\2\2\u0196\u0197\7(\2\2\u0197\"\3\2\2")
buf.write(u"\2\u0198\u0199\7~\2\2\u0199$\3\2\2\2\u019a\u019b\7>\2")
buf.write(u"\2\u019b&\3\2\2\2\u019c\u019d\7>\2\2\u019d\u019e\7?\2")
buf.write(u"\2\u019e(\3\2\2\2\u019f\u01a0\7@\2\2\u01a0*\3\2\2\2\u01a1")
buf.write(u"\u01a2\7@\2\2\u01a2\u01a3\7?\2\2\u01a3,\3\2\2\2\u01a4")
buf.write(u"\u01a5\7?\2\2\u01a5\u01a6\7?\2\2\u01a6.\3\2\2\2\u01a7")
buf.write(u"\u01a8\7#\2\2\u01a8\u01a9\7?\2\2\u01a9\60\3\2\2\2\u01aa")
buf.write(u"\u01ab\7>\2\2\u01ab\u01ac\7@\2\2\u01ac\62\3\2\2\2\u01ad")
buf.write(u"\u01ae\5\u0141\u00a1\2\u01ae\u01af\5\u0143\u00a2\2\u01af")
buf.write(u"\u01b0\5\u015d\u00af\2\u01b0\u01b1\5\u0163\u00b2\2\u01b1")
buf.write(u"\u01b2\5\u0167\u00b4\2\u01b2\64\3\2\2\2\u01b3\u01b4\5")
buf.write(u"\u0141\u00a1\2\u01b4\u01b5\5\u0145\u00a3\2\u01b5\u01b6")
buf.write(u"\5\u0167\u00b4\2\u01b6\u01b7\5\u0151\u00a9\2\u01b7\u01b8")
buf.write(u"\5\u015d\u00af\2\u01b8\u01b9\5\u015b\u00ae\2\u01b9\66")
buf.write(u"\3\2\2\2\u01ba\u01bb\5\u0141\u00a1\2\u01bb\u01bc\5\u0147")
buf.write(u"\u00a4\2\u01bc\u01bd\5\u0147\u00a4\2\u01bd8\3\2\2\2\u01be")
buf.write(u"\u01bf\5\u0141\u00a1\2\u01bf\u01c0\5\u014b\u00a6\2\u01c0")
buf.write(u"\u01c1\5\u0167\u00b4\2\u01c1\u01c2\5\u0149\u00a5\2\u01c2")
buf.write(u"\u01c3\5\u0163\u00b2\2\u01c3:\3\2\2\2\u01c4\u01c5\5\u0141")
buf.write(u"\u00a1\2\u01c5\u01c6\5\u0157\u00ac\2\u01c6\u01c7\5\u0157")
buf.write(u"\u00ac\2\u01c7<\3\2\2\2\u01c8\u01c9\5\u0141\u00a1\2\u01c9")
buf.write(u"\u01ca\5\u0157\u00ac\2\u01ca\u01cb\5\u0167\u00b4\2\u01cb")
buf.write(u"\u01cc\5\u0149\u00a5\2\u01cc\u01cd\5\u0163\u00b2\2\u01cd")
buf.write(u">\3\2\2\2\u01ce\u01cf\5\u0141\u00a1\2\u01cf\u01d0\5\u015b")
buf.write(u"\u00ae\2\u01d0\u01d1\5\u0141\u00a1\2\u01d1\u01d2\5\u0157")
buf.write(u"\u00ac\2\u01d2\u01d3\5\u0171\u00b9\2\u01d3\u01d4\5\u0173")
buf.write(u"\u00ba\2\u01d4\u01d5\5\u0149\u00a5\2\u01d5@\3\2\2\2\u01d6")
buf.write(u"\u01d7\5\u0141\u00a1\2\u01d7\u01d8\5\u015b\u00ae\2\u01d8")
buf.write(u"\u01d9\5\u0147\u00a4\2\u01d9B\3\2\2\2\u01da\u01db\5\u0141")
buf.write(u"\u00a1\2\u01db\u01dc\5\u0165\u00b3\2\u01dcD\3\2\2\2\u01dd")
buf.write(u"\u01de\5\u0141\u00a1\2\u01de\u01df\5\u0165\u00b3\2\u01df")
buf.write(u"\u01e0\5\u0145\u00a3\2\u01e0F\3\2\2\2\u01e1\u01e2\5\u0141")
buf.write(u"\u00a1\2\u01e2\u01e3\5\u0167\u00b4\2\u01e3\u01e4\5\u0167")
buf.write(u"\u00b4\2\u01e4\u01e5\5\u0141\u00a1\2\u01e5\u01e6\5\u0145")
buf.write(u"\u00a3\2\u01e6\u01e7\5\u014f\u00a8\2\u01e7H\3\2\2\2\u01e8")
buf.write(u"\u01e9\5\u0141\u00a1\2\u01e9\u01ea\5\u0169\u00b5\2\u01ea")
buf.write(u"\u01eb\5\u0167\u00b4\2\u01eb\u01ec\5\u015d\u00af\2\u01ec")
buf.write(u"\u01ed\5\u0151\u00a9\2\u01ed\u01ee\5\u015b\u00ae\2\u01ee")
buf.write(u"\u01ef\5\u0145\u00a3\2\u01ef\u01f0\5\u0163\u00b2\2\u01f0")
buf.write(u"\u01f1\5\u0149\u00a5\2\u01f1\u01f2\5\u0159\u00ad\2\u01f2")
buf.write(u"\u01f3\5\u0149\u00a5\2\u01f3\u01f4\5\u015b\u00ae\2\u01f4")
buf.write(u"\u01f5\5\u0167\u00b4\2\u01f5J\3\2\2\2\u01f6\u01f7\5\u0143")
buf.write(u"\u00a2\2\u01f7\u01f8\5\u0149\u00a5\2\u01f8\u01f9\5\u014b")
buf.write(u"\u00a6\2\u01f9\u01fa\5\u015d\u00af\2\u01fa\u01fb\5\u0163")
buf.write(u"\u00b2\2\u01fb\u01fc\5\u0149\u00a5\2\u01fcL\3\2\2\2\u01fd")
buf.write(u"\u01fe\5\u0143\u00a2\2\u01fe\u01ff\5\u0149\u00a5\2\u01ff")
buf.write(u"\u0200\5\u014d\u00a7\2\u0200\u0201\5\u0151\u00a9\2\u0201")
buf.write(u"\u0202\5\u015b\u00ae\2\u0202N\3\2\2\2\u0203\u0204\5\u0143")
buf.write(u"\u00a2\2\u0204\u0205\5\u0149\u00a5\2\u0205\u0206\5\u0167")
buf.write(u"\u00b4\2\u0206\u0207\5\u016d\u00b7\2\u0207\u0208\5\u0149")
buf.write(u"\u00a5\2\u0208\u0209\5\u0149\u00a5\2\u0209\u020a\5\u015b")
buf.write(u"\u00ae\2\u020aP\3\2\2\2\u020b\u020c\5\u0143\u00a2\2\u020c")
buf.write(u"\u020d\5\u0171\u00b9\2\u020dR\3\2\2\2\u020e\u020f\5\u0145")
buf.write(u"\u00a3\2\u020f\u0210\5\u0141\u00a1\2\u0210\u0211\5\u0165")
buf.write(u"\u00b3\2\u0211\u0212\5\u0145\u00a3\2\u0212\u0213\5\u0141")
buf.write(u"\u00a1\2\u0213\u0214\5\u0147\u00a4\2\u0214\u0215\5\u0149")
buf.write(u"\u00a5\2\u0215T\3\2\2\2\u0216\u0217\5\u0145\u00a3\2\u0217")
buf.write(u"\u0218\5\u0141\u00a1\2\u0218\u0219\5\u0165\u00b3\2\u0219")
buf.write(u"\u021a\5\u0149\u00a5\2\u021aV\3\2\2\2\u021b\u021c\5\u0145")
buf.write(u"\u00a3\2\u021c\u021d\5\u0141\u00a1\2\u021d\u021e\5\u0165")
buf.write(u"\u00b3\2\u021e\u021f\5\u0167\u00b4\2\u021fX\3\2\2\2\u0220")
buf.write(u"\u0221\5\u0145\u00a3\2\u0221\u0222\5\u014f\u00a8\2\u0222")
buf.write(u"\u0223\5\u0149\u00a5\2\u0223\u0224\5\u0145\u00a3\2\u0224")
buf.write(u"\u0225\5\u0155\u00ab\2\u0225Z\3\2\2\2\u0226\u0227\5\u0145")
buf.write(u"\u00a3\2\u0227\u0228\5\u015d\u00af\2\u0228\u0229\5\u0157")
buf.write(u"\u00ac\2\u0229\u022a\5\u0157\u00ac\2\u022a\u022b\5\u0141")
buf.write(u"\u00a1\2\u022b\u022c\5\u0167\u00b4\2\u022c\u022d\5\u0149")
buf.write(u"\u00a5\2\u022d\\\3\2\2\2\u022e\u022f\5\u0145\u00a3\2")
buf.write(u"\u022f\u0230\5\u015d\u00af\2\u0230\u0231\5\u0157\u00ac")
buf.write(u"\2\u0231\u0232\5\u0169\u00b5\2\u0232\u0233\5\u0159\u00ad")
buf.write(u"\2\u0233\u0234\5\u015b\u00ae\2\u0234^\3\2\2\2\u0235\u0236")
buf.write(u"\5\u0145\u00a3\2\u0236\u0237\5\u015d\u00af\2\u0237\u0238")
buf.write(u"\5\u0159\u00ad\2\u0238\u0239\5\u0159\u00ad\2\u0239\u023a")
buf.write(u"\5\u0151\u00a9\2\u023a\u023b\5\u0167\u00b4\2\u023b`\3")
buf.write(u"\2\2\2\u023c\u023d\5\u0145\u00a3\2\u023d\u023e\5\u015d")
buf.write(u"\u00af\2\u023e\u023f\5\u015b\u00ae\2\u023f\u0240\5\u014b")
buf.write(u"\u00a6\2\u0240\u0241\5\u0157\u00ac\2\u0241\u0242\5\u0151")
buf.write(u"\u00a9\2\u0242\u0243\5\u0145\u00a3\2\u0243\u0244\5\u0167")
buf.write(u"\u00b4\2\u0244b\3\2\2\2\u0245\u0246\5\u0145\u00a3\2\u0246")
buf.write(u"\u0247\5\u015d\u00af\2\u0247\u0248\5\u015b\u00ae\2\u0248")
buf.write(u"\u0249\5\u0165\u00b3\2\u0249\u024a\5\u0167\u00b4\2\u024a")
buf.write(u"\u024b\5\u0163\u00b2\2\u024b\u024c\5\u0141\u00a1\2\u024c")
buf.write(u"\u024d\5\u0151\u00a9\2\u024d\u024e\5\u015b\u00ae\2\u024e")
buf.write(u"\u024f\5\u0167\u00b4\2\u024fd\3\2\2\2\u0250\u0251\5\u0145")
buf.write(u"\u00a3\2\u0251\u0252\5\u0163\u00b2\2\u0252\u0253\5\u0149")
buf.write(u"\u00a5\2\u0253\u0254\5\u0141\u00a1\2\u0254\u0255\5\u0167")
buf.write(u"\u00b4\2\u0255\u0256\5\u0149\u00a5\2\u0256f\3\2\2\2\u0257")
buf.write(u"\u0258\5\u0145\u00a3\2\u0258\u0259\5\u0163\u00b2\2\u0259")
buf.write(u"\u025a\5\u015d\u00af\2\u025a\u025b\5\u0165\u00b3\2\u025b")
buf.write(u"\u025c\5\u0165\u00b3\2\u025ch\3\2\2\2\u025d\u025e\5\u0145")
buf.write(u"\u00a3\2\u025e\u025f\5\u0169\u00b5\2\u025f\u0260\5\u0163")
buf.write(u"\u00b2\2\u0260\u0261\5\u0163\u00b2\2\u0261\u0262\5\u0149")
buf.write(u"\u00a5\2\u0262\u0263\5\u015b\u00ae\2\u0263\u0264\5\u0167")
buf.write(u"\u00b4\2\u0264\u0265\7a\2\2\u0265\u0266\5\u0147\u00a4")
buf.write(u"\2\u0266\u0267\5\u0141\u00a1\2\u0267\u0268\5\u0167\u00b4")
buf.write(u"\2\u0268\u0269\5\u0149\u00a5\2\u0269j\3\2\2\2\u026a\u026b")
buf.write(u"\5\u0145\u00a3\2\u026b\u026c\5\u0169\u00b5\2\u026c\u026d")
buf.write(u"\5\u0163\u00b2\2\u026d\u026e\5\u0163\u00b2\2\u026e\u026f")
buf.write(u"\5\u0149\u00a5\2\u026f\u0270\5\u015b\u00ae\2\u0270\u0271")
buf.write(u"\5\u0167\u00b4\2\u0271\u0272\7a\2\2\u0272\u0273\5\u0167")
buf.write(u"\u00b4\2\u0273\u0274\5\u0151\u00a9\2\u0274\u0275\5\u0159")
buf.write(u"\u00ad\2\u0275\u0276\5\u0149\u00a5\2\u0276l\3\2\2\2\u0277")
buf.write(u"\u0278\5\u0145\u00a3\2\u0278\u0279\5\u0169\u00b5\2\u0279")
buf.write(u"\u027a\5\u0163\u00b2\2\u027a\u027b\5\u0163\u00b2\2\u027b")
buf.write(u"\u027c\5\u0149\u00a5\2\u027c\u027d\5\u015b\u00ae\2\u027d")
buf.write(u"\u027e\5\u0167\u00b4\2\u027e\u027f\7a\2\2\u027f\u0280")
buf.write(u"\5\u0167\u00b4\2\u0280\u0281\5\u0151\u00a9\2\u0281\u0282")
buf.write(u"\5\u0159\u00ad\2\u0282\u0283\5\u0149\u00a5\2\u0283\u0284")
buf.write(u"\5\u0165\u00b3\2\u0284\u0285\5\u0167\u00b4\2\u0285\u0286")
buf.write(u"\5\u0141\u00a1\2\u0286\u0287\5\u0159\u00ad\2\u0287\u0288")
buf.write(u"\5\u015f\u00b0\2\u0288n\3\2\2\2\u0289\u028a\5\u0147\u00a4")
buf.write(u"\2\u028a\u028b\5\u0141\u00a1\2\u028b\u028c\5\u0167\u00b4")
buf.write(u"\2\u028c\u028d\5\u0141\u00a1\2\u028d\u028e\5\u0143\u00a2")
buf.write(u"\2\u028e\u028f\5\u0141\u00a1\2\u028f\u0290\5\u0165\u00b3")
buf.write(u"\2\u0290\u0291\5\u0149\u00a5\2\u0291p\3\2\2\2\u0292\u0293")
buf.write(u"\5\u0147\u00a4\2\u0293\u0294\5\u0149\u00a5\2\u0294\u0295")
buf.write(u"\5\u014b\u00a6\2\u0295\u0296\5\u0141\u00a1\2\u0296\u0297")
buf.write(u"\5\u0169\u00b5\2\u0297\u0298\5\u0157\u00ac\2\u0298\u0299")
buf.write(u"\5\u0167\u00b4\2\u0299r\3\2\2\2\u029a\u029b\5\u0147\u00a4")
buf.write(u"\2\u029b\u029c\5\u0149\u00a5\2\u029c\u029d\5\u014b\u00a6")
buf.write(u"\2\u029d\u029e\5\u0149\u00a5\2\u029e\u029f\5\u0163\u00b2")
buf.write(u"\2\u029f\u02a0\5\u0163\u00b2\2\u02a0\u02a1\5\u0141\u00a1")
buf.write(u"\2\u02a1\u02a2\5\u0143\u00a2\2\u02a2\u02a3\5\u0157\u00ac")
buf.write(u"\2\u02a3\u02a4\5\u0149\u00a5\2\u02a4t\3\2\2\2\u02a5\u02a6")
buf.write(u"\5\u0147\u00a4\2\u02a6\u02a7\5\u0149\u00a5\2\u02a7\u02a8")
buf.write(u"\5\u014b\u00a6\2\u02a8\u02a9\5\u0149\u00a5\2\u02a9\u02aa")
buf.write(u"\5\u0163\u00b2\2\u02aa\u02ab\5\u0163\u00b2\2\u02ab\u02ac")
buf.write(u"\5\u0149\u00a5\2\u02ac\u02ad\5\u0147\u00a4\2\u02adv\3")
buf.write(u"\2\2\2\u02ae\u02af\5\u0147\u00a4\2\u02af\u02b0\5\u0149")
buf.write(u"\u00a5\2\u02b0\u02b1\5\u0157\u00ac\2\u02b1\u02b2\5\u0149")
buf.write(u"\u00a5\2\u02b2\u02b3\5\u0167\u00b4\2\u02b3\u02b4\5\u0149")
buf.write(u"\u00a5\2\u02b4x\3\2\2\2\u02b5\u02b6\5\u0147\u00a4\2\u02b6")
buf.write(u"\u02b7\5\u0149\u00a5\2\u02b7\u02b8\5\u0165\u00b3\2\u02b8")
buf.write(u"\u02b9\5\u0145\u00a3\2\u02b9z\3\2\2\2\u02ba\u02bb\5\u0147")
buf.write(u"\u00a4\2\u02bb\u02bc\5\u0149\u00a5\2\u02bc\u02bd\5\u0167")
buf.write(u"\u00b4\2\u02bd\u02be\5\u0141\u00a1\2\u02be\u02bf\5\u0145")
buf.write(u"\u00a3\2\u02bf\u02c0\5\u014f\u00a8\2\u02c0|\3\2\2\2\u02c1")
buf.write(u"\u02c2\5\u0147\u00a4\2\u02c2\u02c3\5\u0151\u00a9\2\u02c3")
buf.write(u"\u02c4\5\u0165\u00b3\2\u02c4\u02c5\5\u0167\u00b4\2\u02c5")
buf.write(u"\u02c6\5\u0151\u00a9\2\u02c6\u02c7\5\u015b\u00ae\2\u02c7")
buf.write(u"\u02c8\5\u0145\u00a3\2\u02c8\u02c9\5\u0167\u00b4\2\u02c9")
buf.write(u"~\3\2\2\2\u02ca\u02cb\5\u0147\u00a4\2\u02cb\u02cc\5\u0163")
buf.write(u"\u00b2\2\u02cc\u02cd\5\u015d\u00af\2\u02cd\u02ce\5\u015f")
buf.write(u"\u00b0\2\u02ce\u0080\3\2\2\2\u02cf\u02d0\5\u0149\u00a5")
buf.write(u"\2\u02d0\u02d1\5\u0141\u00a1\2\u02d1\u02d2\5\u0145\u00a3")
buf.write(u"\2\u02d2\u02d3\5\u014f\u00a8\2\u02d3\u0082\3\2\2\2\u02d4")
buf.write(u"\u02d5\5\u0149\u00a5\2\u02d5\u02d6\5\u0157\u00ac\2\u02d6")
buf.write(u"\u02d7\5\u0165\u00b3\2\u02d7\u02d8\5\u0149\u00a5\2\u02d8")
buf.write(u"\u0084\3\2\2\2\u02d9\u02da\5\u0149\u00a5\2\u02da\u02db")
buf.write(u"\5\u015b\u00ae\2\u02db\u02dc\5\u0147\u00a4\2\u02dc\u0086")
buf.write(u"\3\2\2\2\u02dd\u02de\5\u0149\u00a5\2\u02de\u02df\5\u0165")
buf.write(u"\u00b3\2\u02df\u02e0\5\u0145\u00a3\2\u02e0\u02e1\5\u0141")
buf.write(u"\u00a1\2\u02e1\u02e2\5\u015f\u00b0\2\u02e2\u02e3\5\u0149")
buf.write(u"\u00a5\2\u02e3\u0088\3\2\2\2\u02e4\u02e5\5\u0149\u00a5")
buf.write(u"\2\u02e5\u02e6\5\u016f\u00b8\2\u02e6\u02e7\5\u0145\u00a3")
buf.write(u"\2\u02e7\u02e8\5\u0149\u00a5\2\u02e8\u02e9\5\u015f\u00b0")
buf.write(u"\2\u02e9\u02ea\5\u0167\u00b4\2\u02ea\u008a\3\2\2\2\u02eb")
buf.write(u"\u02ec\5\u0149\u00a5\2\u02ec\u02ed\5\u016f\u00b8\2\u02ed")
buf.write(u"\u02ee\5\u0145\u00a3\2\u02ee\u02ef\5\u0157\u00ac\2\u02ef")
buf.write(u"\u02f0\5\u0169\u00b5\2\u02f0\u02f1\5\u0165\u00b3\2\u02f1")
buf.write(u"\u02f2\5\u0151\u00a9\2\u02f2\u02f3\5\u016b\u00b6\2\u02f3")
buf.write(u"\u02f4\5\u0149\u00a5\2\u02f4\u008c\3\2\2\2\u02f5\u02f6")
buf.write(u"\5\u0149\u00a5\2\u02f6\u02f7\5\u016f\u00b8\2\u02f7\u02f8")
buf.write(u"\5\u0151\u00a9\2\u02f8\u02f9\5\u0165\u00b3\2\u02f9\u02fa")
buf.write(u"\5\u0167\u00b4\2\u02fa\u02fb\5\u0165\u00b3\2\u02fb\u008e")
buf.write(u"\3\2\2\2\u02fc\u02fd\5\u0149\u00a5\2\u02fd\u02fe\5\u016f")
buf.write(u"\u00b8\2\u02fe\u02ff\5\u015f\u00b0\2\u02ff\u0300\5\u0157")
buf.write(u"\u00ac\2\u0300\u0301\5\u0141\u00a1\2\u0301\u0302\5\u0151")
buf.write(u"\u00a9\2\u0302\u0303\5\u015b\u00ae\2\u0303\u0090\3\2")
buf.write(u"\2\2\u0304\u0305\5\u014b\u00a6\2\u0305\u0306\5\u0141")
buf.write(u"\u00a1\2\u0306\u0307\5\u0151\u00a9\2\u0307\u0308\5\u0157")
buf.write(u"\u00ac\2\u0308\u0092\3\2\2\2\u0309\u030a\5\u014b\u00a6")
buf.write(u"\2\u030a\u030b\5\u015d\u00af\2\u030b\u030c\5\u0163\u00b2")
buf.write(u"\2\u030c\u0094\3\2\2\2\u030d\u030e\5\u014b\u00a6\2\u030e")
buf.write(u"\u030f\5\u015d\u00af\2\u030f\u0310\5\u0163\u00b2\2\u0310")
buf.write(u"\u0311\5\u0149\u00a5\2\u0311\u0312\5\u0151\u00a9\2\u0312")
buf.write(u"\u0313\5\u014d\u00a7\2\u0313\u0314\5\u015b\u00ae\2\u0314")
buf.write(u"\u0096\3\2\2\2\u0315\u0316\5\u014b\u00a6\2\u0316\u0317")
buf.write(u"\5\u0163\u00b2\2\u0317\u0318\5\u015d\u00af\2\u0318\u0319")
buf.write(u"\5\u0159\u00ad\2\u0319\u0098\3\2\2\2\u031a\u031b\5\u014b")
buf.write(u"\u00a6\2\u031b\u031c\5\u0169\u00b5\2\u031c\u031d\5\u0157")
buf.write(u"\u00ac\2\u031d\u031e\5\u0157\u00ac\2\u031e\u009a\3\2")
buf.write(u"\2\2\u031f\u0320\5\u014d\u00a7\2\u0320\u0321\5\u0157")
buf.write(u"\u00ac\2\u0321\u0322\5\u015d\u00af\2\u0322\u0323\5\u0143")
buf.write(u"\u00a2\2\u0323\u009c\3\2\2\2\u0324\u0325\5\u014d\u00a7")
buf.write(u"\2\u0325\u0326\5\u0163\u00b2\2\u0326\u0327\5\u015d\u00af")
buf.write(u"\2\u0327\u0328\5\u0169\u00b5\2\u0328\u0329\5\u015f\u00b0")
buf.write(u"\2\u0329\u009e\3\2\2\2\u032a\u032b\5\u014f\u00a8\2\u032b")
buf.write(u"\u032c\5\u0141\u00a1\2\u032c\u032d\5\u016b\u00b6\2\u032d")
buf.write(u"\u032e\5\u0151\u00a9\2\u032e\u032f\5\u015b\u00ae\2\u032f")
buf.write(u"\u0330\5\u014d\u00a7\2\u0330\u00a0\3\2\2\2\u0331\u0332")
buf.write(u"\5\u0151\u00a9\2\u0332\u0333\5\u014b\u00a6\2\u0333\u00a2")
buf.write(u"\3\2\2\2\u0334\u0335\5\u0151\u00a9\2\u0335\u0336\5\u014d")
buf.write(u"\u00a7\2\u0336\u0337\5\u015b\u00ae\2\u0337\u0338\5\u015d")
buf.write(u"\u00af\2\u0338\u0339\5\u0163\u00b2\2\u0339\u033a\5\u0149")
buf.write(u"\u00a5\2\u033a\u00a4\3\2\2\2\u033b\u033c\5\u0151\u00a9")
buf.write(u"\2\u033c\u033d\5\u0159\u00ad\2\u033d\u033e\5\u0159\u00ad")
buf.write(u"\2\u033e\u033f\5\u0149\u00a5\2\u033f\u0340\5\u0147\u00a4")
buf.write(u"\2\u0340\u0341\5\u0151\u00a9\2\u0341\u0342\5\u0141\u00a1")
buf.write(u"\2\u0342\u0343\5\u0167\u00b4\2\u0343\u0344\5\u0149\u00a5")
buf.write(u"\2\u0344\u00a6\3\2\2\2\u0345\u0346\5\u0151\u00a9\2\u0346")
buf.write(u"\u0347\5\u015b\u00ae\2\u0347\u00a8\3\2\2\2\u0348\u0349")
buf.write(u"\5\u0151\u00a9\2\u0349\u034a\5\u015b\u00ae\2\u034a\u034b")
buf.write(u"\5\u0147\u00a4\2\u034b\u034c\5\u0149\u00a5\2\u034c\u034d")
buf.write(u"\5\u016f\u00b8\2\u034d\u00aa\3\2\2\2\u034e\u034f\5\u0151")
buf.write(u"\u00a9\2\u034f\u0350\5\u015b\u00ae\2\u0350\u0351\5\u0147")
buf.write(u"\u00a4\2\u0351\u0352\5\u0149\u00a5\2\u0352\u0353\5\u016f")
buf.write(u"\u00b8\2\u0353\u0354\5\u0149\u00a5\2\u0354\u0355\5\u0147")
buf.write(u"\u00a4\2\u0355\u00ac\3\2\2\2\u0356\u0357\5\u0151\u00a9")
buf.write(u"\2\u0357\u0358\5\u015b\u00ae\2\u0358\u0359\5\u0151\u00a9")
buf.write(u"\2\u0359\u035a\5\u0167\u00b4\2\u035a\u035b\5\u0151\u00a9")
buf.write(u"\2\u035b\u035c\5\u0141\u00a1\2\u035c\u035d\5\u0157\u00ac")
buf.write(u"\2\u035d\u035e\5\u0157\u00ac\2\u035e\u035f\5\u0171\u00b9")
buf.write(u"\2\u035f\u00ae\3\2\2\2\u0360\u0361\5\u0151\u00a9\2\u0361")
buf.write(u"\u0362\5\u015b\u00ae\2\u0362\u0363\5\u015b\u00ae\2\u0363")
buf.write(u"\u0364\5\u0149\u00a5\2\u0364\u0365\5\u0163\u00b2\2\u0365")
buf.write(u"\u00b0\3\2\2\2\u0366\u0367\5\u0151\u00a9\2\u0367\u0368")
buf.write(u"\5\u015b\u00ae\2\u0368\u0369\5\u0165\u00b3\2\u0369\u036a")
buf.write(u"\5\u0149\u00a5\2\u036a\u036b\5\u0163\u00b2\2\u036b\u036c")
buf.write(u"\5\u0167\u00b4\2\u036c\u00b2\3\2\2\2\u036d\u036e\5\u0151")
buf.write(u"\u00a9\2\u036e\u036f\5\u015b\u00ae\2\u036f\u0370\5\u0165")
buf.write(u"\u00b3\2\u0370\u0371\5\u0167\u00b4\2\u0371\u0372\5\u0149")
buf.write(u"\u00a5\2\u0372\u0373\5\u0141\u00a1\2\u0373\u0374\5\u0147")
buf.write(u"\u00a4\2\u0374\u00b4\3\2\2\2\u0375\u0376\5\u0151\u00a9")
buf.write(u"\2\u0376\u0377\5\u015b\u00ae\2\u0377\u0378\5\u0167\u00b4")
buf.write(u"\2\u0378\u0379\5\u0149\u00a5\2\u0379\u037a\5\u0163\u00b2")
buf.write(u"\2\u037a\u037b\5\u0165\u00b3\2\u037b\u037c\5\u0149\u00a5")
buf.write(u"\2\u037c\u037d\5\u0145\u00a3\2\u037d\u037e\5\u0167\u00b4")
buf.write(u"\2\u037e\u00b6\3\2\2\2\u037f\u0380\5\u0151\u00a9\2\u0380")
buf.write(u"\u0381\5\u015b\u00ae\2\u0381\u0382\5\u0167\u00b4\2\u0382")
buf.write(u"\u0383\5\u015d\u00af\2\u0383\u00b8\3\2\2\2\u0384\u0385")
buf.write(u"\5\u0151\u00a9\2\u0385\u0386\5\u0165\u00b3\2\u0386\u00ba")
buf.write(u"\3\2\2\2\u0387\u0388\5\u0151\u00a9\2\u0388\u0389\5\u0165")
buf.write(u"\u00b3\2\u0389\u038a\5\u015b\u00ae\2\u038a\u038b\5\u0169")
buf.write(u"\u00b5\2\u038b\u038c\5\u0157\u00ac\2\u038c\u038d\5\u0157")
buf.write(u"\u00ac\2\u038d\u00bc\3\2\2\2\u038e\u038f\5\u0153\u00aa")
buf.write(u"\2\u038f\u0390\5\u015d\u00af\2\u0390\u0391\5\u0151\u00a9")
buf.write(u"\2\u0391\u0392\5\u015b\u00ae\2\u0392\u00be\3\2\2\2\u0393")
buf.write(u"\u0394\5\u0155\u00ab\2\u0394\u0395\5\u0149\u00a5\2\u0395")
buf.write(u"\u0396\5\u0171\u00b9\2\u0396\u00c0\3\2\2\2\u0397\u0398")
buf.write(u"\5\u0157\u00ac\2\u0398\u0399\5\u0149\u00a5\2\u0399\u039a")
buf.write(u"\5\u014b\u00a6\2\u039a\u039b\5\u0167\u00b4\2\u039b\u00c2")
buf.write(u"\3\2\2\2\u039c\u039d\5\u0157\u00ac\2\u039d\u039e\5\u0151")
buf.write(u"\u00a9\2\u039e\u039f\5\u0155\u00ab\2\u039f\u03a0\5\u0149")
buf.write(u"\u00a5\2\u03a0\u00c4\3\2\2\2\u03a1\u03a2\5\u0157\u00ac")
buf.write(u"\2\u03a2\u03a3\5\u0151\u00a9\2\u03a3\u03a4\5\u0159\u00ad")
buf.write(u"\2\u03a4\u03a5\5\u0151\u00a9\2\u03a5\u03a6\5\u0167\u00b4")
buf.write(u"\2\u03a6\u00c6\3\2\2\2\u03a7\u03a8\5\u0159\u00ad\2\u03a8")
buf.write(u"\u03a9\5\u0141\u00a1\2\u03a9\u03aa\5\u0167\u00b4\2\u03aa")
buf.write(u"\u03ab\5\u0145\u00a3\2\u03ab\u03ac\5\u014f\u00a8\2\u03ac")
buf.write(u"\u00c8\3\2\2\2\u03ad\u03ae\5\u015b\u00ae\2\u03ae\u03af")
buf.write(u"\5\u0141\u00a1\2\u03af\u03b0\5\u0167\u00b4\2\u03b0\u03b1")
buf.write(u"\5\u0169\u00b5\2\u03b1\u03b2\5\u0163\u00b2\2\u03b2\u03b3")
buf.write(u"\5\u0141\u00a1\2\u03b3\u03b4\5\u0157\u00ac\2\u03b4\u00ca")
buf.write(u"\3\2\2\2\u03b5\u03b6\5\u015b\u00ae\2\u03b6\u03b7\5\u015d")
buf.write(u"\u00af\2\u03b7\u00cc\3\2\2\2\u03b8\u03b9\5\u015b\u00ae")
buf.write(u"\2\u03b9\u03ba\5\u015d\u00af\2\u03ba\u03bb\5\u0167\u00b4")
buf.write(u"\2\u03bb\u00ce\3\2\2\2\u03bc\u03bd\5\u015b\u00ae\2\u03bd")
buf.write(u"\u03be\5\u015d\u00af\2\u03be\u03bf\5\u0167\u00b4\2\u03bf")
buf.write(u"\u03c0\5\u015b\u00ae\2\u03c0\u03c1\5\u0169\u00b5\2\u03c1")
buf.write(u"\u03c2\5\u0157\u00ac\2\u03c2\u03c3\5\u0157\u00ac\2\u03c3")
buf.write(u"\u00d0\3\2\2\2\u03c4\u03c5\5\u015b\u00ae\2\u03c5\u03c6")
buf.write(u"\5\u0169\u00b5\2\u03c6\u03c7\5\u0157\u00ac\2\u03c7\u03c8")
buf.write(u"\5\u0157\u00ac\2\u03c8\u00d2\3\2\2\2\u03c9\u03ca\5\u015d")
buf.write(u"\u00af\2\u03ca\u03cb\5\u014b\u00a6\2\u03cb\u00d4\3\2")
buf.write(u"\2\2\u03cc\u03cd\5\u015d\u00af\2\u03cd\u03ce\5\u014b")
buf.write(u"\u00a6\2\u03ce\u03cf\5\u014b\u00a6\2\u03cf\u03d0\5\u0165")
buf.write(u"\u00b3\2\u03d0\u03d1\5\u0149\u00a5\2\u03d1\u03d2\5\u0167")
buf.write(u"\u00b4\2\u03d2\u00d6\3\2\2\2\u03d3\u03d4\5\u015d\u00af")
buf.write(u"\2\u03d4\u03d5\5\u015b\u00ae\2\u03d5\u00d8\3\2\2\2\u03d6")
buf.write(u"\u03d7\5\u015d\u00af\2\u03d7\u03d8\5\u0163\u00b2\2\u03d8")
buf.write(u"\u00da\3\2\2\2\u03d9\u03da\5\u015d\u00af\2\u03da\u03db")
buf.write(u"\5\u0163\u00b2\2\u03db\u03dc\5\u0147\u00a4\2\u03dc\u03dd")
buf.write(u"\5\u0149\u00a5\2\u03dd\u03de\5\u0163\u00b2\2\u03de\u00dc")
buf.write(u"\3\2\2\2\u03df\u03e0\5\u015d\u00af\2\u03e0\u03e1\5\u0169")
buf.write(u"\u00b5\2\u03e1\u03e2\5\u0167\u00b4\2\u03e2\u03e3\5\u0149")
buf.write(u"\u00a5\2\u03e3\u03e4\5\u0163\u00b2\2\u03e4\u00de\3\2")
buf.write(u"\2\2\u03e5\u03e6\5\u015f\u00b0\2\u03e6\u03e7\5\u0157")
buf.write(u"\u00ac\2\u03e7\u03e8\5\u0141\u00a1\2\u03e8\u03e9\5\u015b")
buf.write(u"\u00ae\2\u03e9\u00e0\3\2\2\2\u03ea\u03eb\5\u015f\u00b0")
buf.write(u"\2\u03eb\u03ec\5\u0163\u00b2\2\u03ec\u03ed\5\u0141\u00a1")
buf.write(u"\2\u03ed\u03ee\5\u014d\u00a7\2\u03ee\u03ef\5\u0159\u00ad")
buf.write(u"\2\u03ef\u03f0\5\u0141\u00a1\2\u03f0\u00e2\3\2\2\2\u03f1")
buf.write(u"\u03f2\5\u015f\u00b0\2\u03f2\u03f3\5\u0163\u00b2\2\u03f3")
buf.write(u"\u03f4\5\u0151\u00a9\2\u03f4\u03f5\5\u0159\u00ad\2\u03f5")
buf.write(u"\u03f6\5\u0141\u00a1\2\u03f6\u03f7\5\u0163\u00b2\2\u03f7")
buf.write(u"\u03f8\5\u0171\u00b9\2\u03f8\u00e4\3\2\2\2\u03f9\u03fa")
buf.write(u"\5\u0161\u00b1\2\u03fa\u03fb\5\u0169\u00b5\2\u03fb\u03fc")
buf.write(u"\5\u0149\u00a5\2\u03fc\u03fd\5\u0163\u00b2\2\u03fd\u03fe")
buf.write(u"\5\u0171\u00b9\2\u03fe\u00e6\3\2\2\2\u03ff\u0400\5\u0163")
buf.write(u"\u00b2\2\u0400\u0401\5\u0141\u00a1\2\u0401\u0402\5\u0151")
buf.write(u"\u00a9\2\u0402\u0403\5\u0165\u00b3\2\u0403\u0404\5\u0149")
buf.write(u"\u00a5\2\u0404\u00e8\3\2\2\2\u0405\u0406\5\u0163\u00b2")
buf.write(u"\2\u0406\u0407\5\u0149\u00a5\2\u0407\u0408\5\u0145\u00a3")
buf.write(u"\2\u0408\u0409\5\u0169\u00b5\2\u0409\u040a\5\u0163\u00b2")
buf.write(u"\2\u040a\u040b\5\u0165\u00b3\2\u040b\u040c\5\u0151\u00a9")
buf.write(u"\2\u040c\u040d\5\u016b\u00b6\2\u040d\u040e\5\u0149\u00a5")
buf.write(u"\2\u040e\u00ea\3\2\2\2\u040f\u0410\5\u0163\u00b2\2\u0410")
buf.write(u"\u0411\5\u0149\u00a5\2\u0411\u0412\5\u014b\u00a6\2\u0412")
buf.write(u"\u0413\5\u0149\u00a5\2\u0413\u0414\5\u0163\u00b2\2\u0414")
buf.write(u"\u0415\5\u0149\u00a5\2\u0415\u0416\5\u015b\u00ae\2\u0416")
buf.write(u"\u0417\5\u0145\u00a3\2\u0417\u0418\5\u0149\u00a5\2\u0418")
buf.write(u"\u0419\5\u0165\u00b3\2\u0419\u00ec\3\2\2\2\u041a\u041b")
buf.write(u"\5\u0163\u00b2\2\u041b\u041c\5\u0149\u00a5\2\u041c\u041d")
buf.write(u"\5\u014d\u00a7\2\u041d\u041e\5\u0149\u00a5\2\u041e\u041f")
buf.write(u"\5\u016f\u00b8\2\u041f\u0420\5\u015f\u00b0\2\u0420\u00ee")
buf.write(u"\3\2\2\2\u0421\u0422\5\u0163\u00b2\2\u0422\u0423\5\u0149")
buf.write(u"\u00a5\2\u0423\u0424\5\u0151\u00a9\2\u0424\u0425\5\u015b")
buf.write(u"\u00ae\2\u0425\u0426\5\u0147\u00a4\2\u0426\u0427\5\u0149")
buf.write(u"\u00a5\2\u0427\u0428\5\u016f\u00b8\2\u0428\u00f0\3\2")
buf.write(u"\2\2\u0429\u042a\5\u0163\u00b2\2\u042a\u042b\5\u0149")
buf.write(u"\u00a5\2\u042b\u042c\5\u0157\u00ac\2\u042c\u042d\5\u0149")
buf.write(u"\u00a5\2\u042d\u042e\5\u0141\u00a1\2\u042e\u042f\5\u0165")
buf.write(u"\u00b3\2\u042f\u0430\5\u0149\u00a5\2\u0430\u00f2\3\2")
buf.write(u"\2\2\u0431\u0432\5\u0163\u00b2\2\u0432\u0433\5\u0149")
buf.write(u"\u00a5\2\u0433\u0434\5\u015b\u00ae\2\u0434\u0435\5\u0141")
buf.write(u"\u00a1\2\u0435\u0436\5\u0159\u00ad\2\u0436\u0437\5\u0149")
buf.write(u"\u00a5\2\u0437\u00f4\3\2\2\2\u0438\u0439\5\u0163\u00b2")
buf.write(u"\2\u0439\u043a\5\u0149\u00a5\2\u043a\u043b\5\u015f\u00b0")
buf.write(u"\2\u043b\u043c\5\u0157\u00ac\2\u043c\u043d\5\u0141\u00a1")
buf.write(u"\2\u043d\u043e\5\u0145\u00a3\2\u043e\u043f\5\u0149\u00a5")
buf.write(u"\2\u043f\u00f6\3\2\2\2\u0440\u0441\5\u0163\u00b2\2\u0441")
buf.write(u"\u0442\5\u0149\u00a5\2\u0442\u0443\5\u0165\u00b3\2\u0443")
buf.write(u"\u0444\5\u0167\u00b4\2\u0444\u0445\5\u0163\u00b2\2\u0445")
buf.write(u"\u0446\5\u0151\u00a9\2\u0446\u0447\5\u0145\u00a3\2\u0447")
buf.write(u"\u0448\5\u0167\u00b4\2\u0448\u00f8\3\2\2\2\u0449\u044a")
buf.write(u"\5\u0163\u00b2\2\u044a\u044b\5\u0151\u00a9\2\u044b\u044c")
buf.write(u"\5\u014d\u00a7\2\u044c\u044d\5\u014f\u00a8\2\u044d\u044e")
buf.write(u"\5\u0167\u00b4\2\u044e\u00fa\3\2\2\2\u044f\u0450\5\u0163")
buf.write(u"\u00b2\2\u0450\u0451\5\u015d\u00af\2\u0451\u0452\5\u0157")
buf.write(u"\u00ac\2\u0452\u0453\5\u0157\u00ac\2\u0453\u0454\5\u0143")
buf.write(u"\u00a2\2\u0454\u0455\5\u0141\u00a1\2\u0455\u0456\5\u0145")
buf.write(u"\u00a3\2\u0456\u0457\5\u0155\u00ab\2\u0457\u00fc\3\2")
buf.write(u"\2\2\u0458\u0459\5\u0163\u00b2\2\u0459\u045a\5\u015d")
buf.write(u"\u00af\2\u045a\u045b\5\u016d\u00b7\2\u045b\u00fe\3\2")
buf.write(u"\2\2\u045c\u045d\5\u0165\u00b3\2\u045d\u045e\5\u0141")
buf.write(u"\u00a1\2\u045e\u045f\5\u016b\u00b6\2\u045f\u0460\5\u0149")
buf.write(u"\u00a5\2\u0460\u0461\5\u015f\u00b0\2\u0461\u0462\5\u015d")
buf.write(u"\u00af\2\u0462\u0463\5\u0151\u00a9\2\u0463\u0464\5\u015b")
buf.write(u"\u00ae\2\u0464\u0465\5\u0167\u00b4\2\u0465\u0100\3\2")
buf.write(u"\2\2\u0466\u0467\5\u0165\u00b3\2\u0467\u0468\5\u0149")
buf.write(u"\u00a5\2\u0468\u0469\5\u0157\u00ac\2\u0469\u046a\5\u0149")
buf.write(u"\u00a5\2\u046a\u046b\5\u0145\u00a3\2\u046b\u046c\5\u0167")
buf.write(u"\u00b4\2\u046c\u0102\3\2\2\2\u046d\u046e\5\u0165\u00b3")
buf.write(u"\2\u046e\u046f\5\u0149\u00a5\2\u046f\u0470\5\u0167\u00b4")
buf.write(u"\2\u0470\u0104\3\2\2\2\u0471\u0472\5\u0167\u00b4\2\u0472")
buf.write(u"\u0473\5\u0141\u00a1\2\u0473\u0474\5\u0143\u00a2\2\u0474")
buf.write(u"\u0475\5\u0157\u00ac\2\u0475\u0476\5\u0149\u00a5\2\u0476")
buf.write(u"\u0106\3\2\2\2\u0477\u0478\5\u0167\u00b4\2\u0478\u0479")
buf.write(u"\5\u0149\u00a5\2\u0479\u047a\5\u0159\u00ad\2\u047a\u047b")
buf.write(u"\5\u015f\u00b0\2\u047b\u0108\3\2\2\2\u047c\u047d\5\u0167")
buf.write(u"\u00b4\2\u047d\u047e\5\u0149\u00a5\2\u047e\u047f\5\u0159")
buf.write(u"\u00ad\2\u047f\u0480\5\u015f\u00b0\2\u0480\u0481\5\u015d")
buf.write(u"\u00af\2\u0481\u0482\5\u0163\u00b2\2\u0482\u0483\5\u0141")
buf.write(u"\u00a1\2\u0483\u0484\5\u0163\u00b2\2\u0484\u0485\5\u0171")
buf.write(u"\u00b9\2\u0485\u010a\3\2\2\2\u0486\u0487\5\u0167\u00b4")
buf.write(u"\2\u0487\u0488\5\u014f\u00a8\2\u0488\u0489\5\u0149\u00a5")
buf.write(u"\2\u0489\u048a\5\u015b\u00ae\2\u048a\u010c\3\2\2\2\u048b")
buf.write(u"\u048c\5\u0167\u00b4\2\u048c\u048d\5\u015d\u00af\2\u048d")
buf.write(u"\u010e\3\2\2\2\u048e\u048f\5\u0167\u00b4\2\u048f\u0490")
buf.write(u"\5\u0163\u00b2\2\u0490\u0491\5\u0141\u00a1\2\u0491\u0492")
buf.write(u"\5\u015b\u00ae\2\u0492\u0493\5\u0165\u00b3\2\u0493\u0494")
buf.write(u"\5\u0141\u00a1\2\u0494\u0495\5\u0145\u00a3\2\u0495\u0496")
buf.write(u"\5\u0167\u00b4\2\u0496\u0497\5\u0151\u00a9\2\u0497\u0498")
buf.write(u"\5\u015d\u00af\2\u0498\u0499\5\u015b\u00ae\2\u0499\u0110")
buf.write(u"\3\2\2\2\u049a\u049b\5\u0167\u00b4\2\u049b\u049c\5\u0163")
buf.write(u"\u00b2\2\u049c\u049d\5\u0151\u00a9\2\u049d\u049e\5\u014d")
buf.write(u"\u00a7\2\u049e\u049f\5\u014d\u00a7\2\u049f\u04a0\5\u0149")
buf.write(u"\u00a5\2\u04a0\u04a1\5\u0163\u00b2\2\u04a1\u0112\3\2")
buf.write(u"\2\2\u04a2\u04a3\5\u0169\u00b5\2\u04a3\u04a4\5\u015b")
buf.write(u"\u00ae\2\u04a4\u04a5\5\u0151\u00a9\2\u04a5\u04a6\5\u015d")
buf.write(u"\u00af\2\u04a6\u04a7\5\u015b\u00ae\2\u04a7\u0114\3\2")
buf.write(u"\2\2\u04a8\u04a9\5\u0169\u00b5\2\u04a9\u04aa\5\u015b")
buf.write(u"\u00ae\2\u04aa\u04ab\5\u0151\u00a9\2\u04ab\u04ac\5\u0161")
buf.write(u"\u00b1\2\u04ac\u04ad\5\u0169\u00b5\2\u04ad\u04ae\5\u0149")
buf.write(u"\u00a5\2\u04ae\u0116\3\2\2\2\u04af\u04b0\5\u0169\u00b5")
buf.write(u"\2\u04b0\u04b1\5\u015f\u00b0\2\u04b1\u04b2\5\u0147\u00a4")
buf.write(u"\2\u04b2\u04b3\5\u0141\u00a1\2\u04b3\u04b4\5\u0167\u00b4")
buf.write(u"\2\u04b4\u04b5\5\u0149\u00a5\2\u04b5\u0118\3\2\2\2\u04b6")
buf.write(u"\u04b7\5\u0169\u00b5\2\u04b7\u04b8\5\u0165\u00b3\2\u04b8")
buf.write(u"\u04b9\5\u0151\u00a9\2\u04b9\u04ba\5\u015b\u00ae\2\u04ba")
buf.write(u"\u04bb\5\u014d\u00a7\2\u04bb\u011a\3\2\2\2\u04bc\u04bd")
buf.write(u"\5\u016b\u00b6\2\u04bd\u04be\5\u0141\u00a1\2\u04be\u04bf")
buf.write(u"\5\u0145\u00a3\2\u04bf\u04c0\5\u0169\u00b5\2\u04c0\u04c1")
buf.write(u"\5\u0169\u00b5\2\u04c1\u04c2\5\u0159\u00ad\2\u04c2\u011c")
buf.write(u"\3\2\2\2\u04c3\u04c4\5\u016b\u00b6\2\u04c4\u04c5\5\u0141")
buf.write(u"\u00a1\2\u04c5\u04c6\5\u0157\u00ac\2\u04c6\u04c7\5\u0169")
buf.write(u"\u00b5\2\u04c7\u04c8\5\u0149\u00a5\2\u04c8\u04c9\5\u0165")
buf.write(u"\u00b3\2\u04c9\u011e\3\2\2\2\u04ca\u04cb\5\u016b\u00b6")
buf.write(u"\2\u04cb\u04cc\5\u0151\u00a9\2\u04cc\u04cd\5\u0149\u00a5")
buf.write(u"\2\u04cd\u04ce\5\u016d\u00b7\2\u04ce\u0120\3\2\2\2\u04cf")
buf.write(u"\u04d0\5\u016b\u00b6\2\u04d0\u04d1\5\u0151\u00a9\2\u04d1")
buf.write(u"\u04d2\5\u0163\u00b2\2\u04d2\u04d3\5\u0167\u00b4\2\u04d3")
buf.write(u"\u04d4\5\u0169\u00b5\2\u04d4\u04d5\5\u0141\u00a1\2\u04d5")
buf.write(u"\u04d6\5\u0157\u00ac\2\u04d6\u0122\3\2\2\2\u04d7\u04d8")
buf.write(u"\5\u016d\u00b7\2\u04d8\u04d9\5\u014f\u00a8\2\u04d9\u04da")
buf.write(u"\5\u0149\u00a5\2\u04da\u04db\5\u015b\u00ae\2\u04db\u0124")
buf.write(u"\3\2\2\2\u04dc\u04dd\5\u016d\u00b7\2\u04dd\u04de\5\u014f")
buf.write(u"\u00a8\2\u04de\u04df\5\u0149\u00a5\2\u04df\u04e0\5\u0163")
buf.write(u"\u00b2\2\u04e0\u04e1\5\u0149\u00a5\2\u04e1\u0126\3\2")
buf.write(u"\2\2\u04e2\u04e3\5\u016d\u00b7\2\u04e3\u04e4\5\u0151")
buf.write(u"\u00a9\2\u04e4\u04e5\5\u0167\u00b4\2\u04e5\u04e6\5\u014f")
buf.write(u"\u00a8\2\u04e6\u0128\3\2\2\2\u04e7\u04e8\5\u016d\u00b7")
buf.write(u"\2\u04e8\u04e9\5\u0151\u00a9\2\u04e9\u04ea\5\u0167\u00b4")
buf.write(u"\2\u04ea\u04eb\5\u014f\u00a8\2\u04eb\u04ec\5\u015d\u00af")
buf.write(u"\2\u04ec\u04ed\5\u0169\u00b5\2\u04ed\u04ee\5\u0167\u00b4")
buf.write(u"\2\u04ee\u012a\3\2\2\2\u04ef\u04f0\5\u0163\u00b2\2\u04f0")
buf.write(u"\u04f1\5\u015d\u00af\2\u04f1\u04f2\5\u016d\u00b7\2\u04f2")
buf.write(u"\u04f3\5\u0151\u00a9\2\u04f3\u04f4\5\u0147\u00a4\2\u04f4")
buf.write(u"\u012c\3\2\2\2\u04f5\u04fb\7$\2\2\u04f6\u04fa\n\2\2\2")
buf.write(u"\u04f7\u04f8\7$\2\2\u04f8\u04fa\7$\2\2\u04f9\u04f6\3")
buf.write(u"\2\2\2\u04f9\u04f7\3\2\2\2\u04fa\u04fd\3\2\2\2\u04fb")
buf.write(u"\u04f9\3\2\2\2\u04fb\u04fc\3\2\2\2\u04fc\u04fe\3\2\2")
buf.write(u"\2\u04fd\u04fb\3\2\2\2\u04fe\u0519\7$\2\2\u04ff\u0505")
buf.write(u"\7b\2\2\u0500\u0504\n\3\2\2\u0501\u0502\7b\2\2\u0502")
buf.write(u"\u0504\7b\2\2\u0503\u0500\3\2\2\2\u0503\u0501\3\2\2\2")
buf.write(u"\u0504\u0507\3\2\2\2\u0505\u0503\3\2\2\2\u0505\u0506")
buf.write(u"\3\2\2\2\u0506\u0508\3\2\2\2\u0507\u0505\3\2\2\2\u0508")
buf.write(u"\u0519\7b\2\2\u0509\u050d\7]\2\2\u050a\u050c\n\4\2\2")
buf.write(u"\u050b\u050a\3\2\2\2\u050c\u050f\3\2\2\2\u050d\u050b")
buf.write(u"\3\2\2\2\u050d\u050e\3\2\2\2\u050e\u0510\3\2\2\2\u050f")
buf.write(u"\u050d\3\2\2\2\u0510\u0519\7_\2\2\u0511\u0515\t%\2\2")
buf.write(u"\u0512\u0514\t&\2\2\u0513\u0512\3\2\2\2\u0514\u0517\3")
buf.write(u"\2\2\2\u0515\u0513\3\2\2\2\u0515\u0516\3\2\2\2\u0516")
buf.write(u"\u0519\3\2\2\2\u0517\u0515\3\2\2\2\u0518\u04f5\3\2\2")
buf.write(u"\2\u0518\u04ff\3\2\2\2\u0518\u0509\3\2\2\2\u0518\u0511")
buf.write(u"\3\2\2\2\u0519\u012e\3\2\2\2\u051a\u051c\5\u013f\u00a0")
buf.write(u"\2\u051b\u051a\3\2\2\2\u051c\u051d\3\2\2\2\u051d\u051b")
buf.write(u"\3\2\2\2\u051d\u051e\3\2\2\2\u051e\u0526\3\2\2\2\u051f")
buf.write(u"\u0523\7\60\2\2\u0520\u0522\5\u013f\u00a0\2\u0521\u0520")
buf.write(u"\3\2\2\2\u0522\u0525\3\2\2\2\u0523\u0521\3\2\2\2\u0523")
buf.write(u"\u0524\3\2\2\2\u0524\u0527\3\2\2\2\u0525\u0523\3\2\2")
buf.write(u"\2\u0526\u051f\3\2\2\2\u0526\u0527\3\2\2\2\u0527\u0531")
buf.write(u"\3\2\2\2\u0528\u052a\5\u0149\u00a5\2\u0529\u052b\t\5")
buf.write(u"\2\2\u052a\u0529\3\2\2\2\u052a\u052b\3\2\2\2\u052b\u052d")
buf.write(u"\3\2\2\2\u052c\u052e\5\u013f\u00a0\2\u052d\u052c\3\2")
buf.write(u"\2\2\u052e\u052f\3\2\2\2\u052f\u052d\3\2\2\2\u052f\u0530")
buf.write(u"\3\2\2\2\u0530\u0532\3\2\2\2\u0531\u0528\3\2\2\2\u0531")
buf.write(u"\u0532\3\2\2\2\u0532\u0545\3\2\2\2\u0533\u0535\7\60\2")
buf.write(u"\2\u0534\u0536\5\u013f\u00a0\2\u0535\u0534\3\2\2\2\u0536")
buf.write(u"\u0537\3\2\2\2\u0537\u0535\3\2\2\2\u0537\u0538\3\2\2")
buf.write(u"\2\u0538\u0542\3\2\2\2\u0539\u053b\5\u0149\u00a5\2\u053a")
buf.write(u"\u053c\t\5\2\2\u053b\u053a\3\2\2\2\u053b\u053c\3\2\2")
buf.write(u"\2\u053c\u053e\3\2\2\2\u053d\u053f\5\u013f\u00a0\2\u053e")
buf.write(u"\u053d\3\2\2\2\u053f\u0540\3\2\2\2\u0540\u053e\3\2\2")
buf.write(u"\2\u0540\u0541\3\2\2\2\u0541\u0543\3\2\2\2\u0542\u0539")
buf.write(u"\3\2\2\2\u0542\u0543\3\2\2\2\u0543\u0545\3\2\2\2\u0544")
buf.write(u"\u051b\3\2\2\2\u0544\u0533\3\2\2\2\u0545\u0130\3\2\2")
buf.write(u"\2\u0546\u054a\7A\2\2\u0547\u0549\5\u013f\u00a0\2\u0548")
buf.write(u"\u0547\3\2\2\2\u0549\u054c\3\2\2\2\u054a\u0548\3\2\2")
buf.write(u"\2\u054a\u054b\3\2\2\2\u054b\u0550\3\2\2\2\u054c\u054a")
buf.write(u"\3\2\2\2\u054d\u054e\t\6\2\2\u054e\u0550\5\u012d\u0097")
buf.write(u"\2\u054f\u0546\3\2\2\2\u054f\u054d\3\2\2\2\u0550\u0132")
buf.write(u"\3\2\2\2\u0551\u0557\7)\2\2\u0552\u0556\n\7\2\2\u0553")
buf.write(u"\u0554\7)\2\2\u0554\u0556\7)\2\2\u0555\u0552\3\2\2\2")
buf.write(u"\u0555\u0553\3\2\2\2\u0556\u0559\3\2\2\2\u0557\u0555")
buf.write(u"\3\2\2\2\u0557\u0558\3\2\2\2\u0558\u055a\3\2\2\2\u0559")
buf.write(u"\u0557\3\2\2\2\u055a\u0566\7)\2\2\u055b\u0561\7$\2\2")
buf.write(u"\u055c\u0560\n\2\2\2\u055d\u055e\7$\2\2\u055e\u0560\7")
buf.write(u"$\2\2\u055f\u055c\3\2\2\2\u055f\u055d\3\2\2\2\u0560\u0563")
buf.write(u"\3\2\2\2\u0561\u055f\3\2\2\2\u0561\u0562\3\2\2\2\u0562")
buf.write(u"\u0564\3\2\2\2\u0563\u0561\3\2\2\2\u0564\u0566\7$\2\2")
buf.write(u"\u0565\u0551\3\2\2\2\u0565\u055b\3\2\2\2\u0566\u0134")
buf.write(u"\3\2\2\2\u0567\u0568\5\u016f\u00b8\2\u0568\u0569\5\u0133")
buf.write(u"\u009a\2\u0569\u0136\3\2\2\2\u056a\u056b\7/\2\2\u056b")
buf.write(u"\u056c\7/\2\2\u056c\u0570\3\2\2\2\u056d\u056f\n\b\2\2")
buf.write(u"\u056e\u056d\3\2\2\2\u056f\u0572\3\2\2\2\u0570\u056e")
buf.write(u"\3\2\2\2\u0570\u0571\3\2\2\2\u0571\u0573\3\2\2\2\u0572")
buf.write(u"\u0570\3\2\2\2\u0573\u0574\b\u009c\2\2\u0574\u0138\3")
buf.write(u"\2\2\2\u0575\u0576\7\61\2\2\u0576\u0577\7,\2\2\u0577")
buf.write(u"\u057b\3\2\2\2\u0578\u057a\13\2\2\2\u0579\u0578\3\2\2")
buf.write(u"\2\u057a\u057d\3\2\2\2\u057b\u057c\3\2\2\2\u057b\u0579")
buf.write(u"\3\2\2\2\u057c\u0581\3\2\2\2\u057d\u057b\3\2\2\2\u057e")
buf.write(u"\u057f\7,\2\2\u057f\u0582\7\61\2\2\u0580\u0582\7\2\2")
buf.write(u"\3\u0581\u057e\3\2\2\2\u0581\u0580\3\2\2\2\u0582\u0583")
buf.write(u"\3\2\2\2\u0583\u0584\b\u009d\2\2\u0584\u013a\3\2\2\2")
buf.write(u"\u0585\u0586\t\t\2\2\u0586\u0587\3\2\2\2\u0587\u0588")
buf.write(u"\b\u009e\3\2\u0588\u013c\3\2\2\2\u0589\u058a\13\2\2\2")
buf.write(u"\u058a\u013e\3\2\2\2\u058b\u058c\t\n\2\2\u058c\u0140")
buf.write(u"\3\2\2\2\u058d\u058e\t\13\2\2\u058e\u0142\3\2\2\2\u058f")
buf.write(u"\u0590\t\f\2\2\u0590\u0144\3\2\2\2\u0591\u0592\t\r\2")
buf.write(u"\2\u0592\u0146\3\2\2\2\u0593\u0594\t\16\2\2\u0594\u0148")
buf.write(u"\3\2\2\2\u0595\u0596\t\17\2\2\u0596\u014a\3\2\2\2\u0597")
buf.write(u"\u0598\t\20\2\2\u0598\u014c\3\2\2\2\u0599\u059a\t\21")
buf.write(u"\2\2\u059a\u014e\3\2\2\2\u059b\u059c\t\22\2\2\u059c\u0150")
buf.write(u"\3\2\2\2\u059d\u059e\t\23\2\2\u059e\u0152\3\2\2\2\u059f")
buf.write(u"\u05a0\t\24\2\2\u05a0\u0154\3\2\2\2\u05a1\u05a2\t\25")
buf.write(u"\2\2\u05a2\u0156\3\2\2\2\u05a3\u05a4\t\26\2\2\u05a4\u0158")
buf.write(u"\3\2\2\2\u05a5\u05a6\t\27\2\2\u05a6\u015a\3\2\2\2\u05a7")
buf.write(u"\u05a8\t\30\2\2\u05a8\u015c\3\2\2\2\u05a9\u05aa\t\31")
buf.write(u"\2\2\u05aa\u015e\3\2\2\2\u05ab\u05ac\t\32\2\2\u05ac\u0160")
buf.write(u"\3\2\2\2\u05ad\u05ae\t\33\2\2\u05ae\u0162\3\2\2\2\u05af")
buf.write(u"\u05b0\t\34\2\2\u05b0\u0164\3\2\2\2\u05b1\u05b2\t\35")
buf.write(u"\2\2\u05b2\u0166\3\2\2\2\u05b3\u05b4\t\36\2\2\u05b4\u0168")
buf.write(u"\3\2\2\2\u05b5\u05b6\t\37\2\2\u05b6\u016a\3\2\2\2\u05b7")
buf.write(u"\u05b8\t \2\2\u05b8\u016c\3\2\2\2\u05b9\u05ba\t!\2\2")
buf.write(u"\u05ba\u016e\3\2\2\2\u05bb\u05bc\t\"\2\2\u05bc\u0170")
buf.write(u"\3\2\2\2\u05bd\u05be\t#\2\2\u05be\u0172\3\2\2\2\u05bf")
buf.write(u"\u05c0\t$\2\2\u05c0\u0174\3\2\2\2\37\2\u04f9\u04fb\u0503")
buf.write(u"\u0505\u050d\u0515\u0518\u051d\u0523\u0526\u052a\u052f")
buf.write(u"\u0531\u0537\u053b\u0540\u0542\u0544\u054a\u054f\u0555")
buf.write(u"\u0557\u055f\u0561\u0565\u0570\u057b\u0581\4\2\4\2\2")
buf.write(u"\3\2")
return buf.getvalue()
class SQLiteLexer(Lexer):
atn = ATNDeserializer().deserialize(serializedATN())
decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]
SCOL = 1
DOT = 2
OPEN_PAR = 3
CLOSE_PAR = 4
COMMA = 5
ASSIGN = 6
STAR = 7
PLUS = 8
MINUS = 9
TILDE = 10
PIPE2 = 11
DIV = 12
MOD = 13
LT2 = 14
GT2 = 15
AMP = 16
PIPE = 17
LT = 18
LT_EQ = 19
GT = 20
GT_EQ = 21
EQ = 22
NOT_EQ1 = 23
NOT_EQ2 = 24
K_ABORT = 25
K_ACTION = 26
K_ADD = 27
K_AFTER = 28
K_ALL = 29
K_ALTER = 30
K_ANALYZE = 31
K_AND = 32
K_AS = 33
K_ASC = 34
K_ATTACH = 35
K_AUTOINCREMENT = 36
K_BEFORE = 37
K_BEGIN = 38
K_BETWEEN = 39
K_BY = 40
K_CASCADE = 41
K_CASE = 42
K_CAST = 43
K_CHECK = 44
K_COLLATE = 45
K_COLUMN = 46
K_COMMIT = 47
K_CONFLICT = 48
K_CONSTRAINT = 49
K_CREATE = 50
K_CROSS = 51
K_CURRENT_DATE = 52
K_CURRENT_TIME = 53
K_CURRENT_TIMESTAMP = 54
K_DATABASE = 55
K_DEFAULT = 56
K_DEFERRABLE = 57
K_DEFERRED = 58
K_DELETE = 59
K_DESC = 60
K_DETACH = 61
K_DISTINCT = 62
K_DROP = 63
K_EACH = 64
K_ELSE = 65
K_END = 66
K_ESCAPE = 67
K_EXCEPT = 68
K_EXCLUSIVE = 69
K_EXISTS = 70
K_EXPLAIN = 71
K_FAIL = 72
K_FOR = 73
K_FOREIGN = 74
K_FROM = 75
K_FULL = 76
K_GLOB = 77
K_GROUP = 78
K_HAVING = 79
K_IF = 80
K_IGNORE = 81
K_IMMEDIATE = 82
K_IN = 83
K_INDEX = 84
K_INDEXED = 85
K_INITIALLY = 86
K_INNER = 87
K_INSERT = 88
K_INSTEAD = 89
K_INTERSECT = 90
K_INTO = 91
K_IS = 92
K_ISNULL = 93
K_JOIN = 94
K_KEY = 95
K_LEFT = 96
K_LIKE = 97
K_LIMIT = 98
K_MATCH = 99
K_NATURAL = 100
K_NO = 101
K_NOT = 102
K_NOTNULL = 103
K_NULL = 104
K_OF = 105
K_OFFSET = 106
K_ON = 107
K_OR = 108
K_ORDER = 109
K_OUTER = 110
K_PLAN = 111
K_PRAGMA = 112
K_PRIMARY = 113
K_QUERY = 114
K_RAISE = 115
K_RECURSIVE = 116
K_REFERENCES = 117
K_REGEXP = 118
K_REINDEX = 119
K_RELEASE = 120
K_RENAME = 121
K_REPLACE = 122
K_RESTRICT = 123
K_RIGHT = 124
K_ROLLBACK = 125
K_ROW = 126
K_SAVEPOINT = 127
K_SELECT = 128
K_SET = 129
K_TABLE = 130
K_TEMP = 131
K_TEMPORARY = 132
K_THEN = 133
K_TO = 134
K_TRANSACTION = 135
K_TRIGGER = 136
K_UNION = 137
K_UNIQUE = 138
K_UPDATE = 139
K_USING = 140
K_VACUUM = 141
K_VALUES = 142
K_VIEW = 143
K_VIRTUAL = 144
K_WHEN = 145
K_WHERE = 146
K_WITH = 147
K_WITHOUT = 148
C_ROWID = 149
IDENTIFIER = 150
NUMERIC_LITERAL = 151
BIND_PARAMETER = 152
STRING_LITERAL = 153
BLOB_LITERAL = 154
SINGLE_LINE_COMMENT = 155
MULTILINE_COMMENT = 156
SPACES = 157
UNEXPECTED_CHAR = 158
channelNames = [ u"DEFAULT_TOKEN_CHANNEL", u"HIDDEN" ]
modeNames = [ u"DEFAULT_MODE" ]
literalNames = [ u"<INVALID>",
u"';'", u"'.'", u"'('", u"')'", u"','", u"'='", u"'*'", u"'+'",
u"'-'", u"'~'", u"'||'", u"'/'", u"'%'", u"'<<'", u"'>>'", u"'&'",
u"'|'", u"'<'", u"'<='", u"'>'", u"'>='", u"'=='", u"'!='",
u"'<>'" ]
symbolicNames = [ u"<INVALID>",
u"SCOL", u"DOT", u"OPEN_PAR", u"CLOSE_PAR", u"COMMA", u"ASSIGN",
u"STAR", u"PLUS", u"MINUS", u"TILDE", u"PIPE2", u"DIV", u"MOD",
u"LT2", u"GT2", u"AMP", u"PIPE", u"LT", u"LT_EQ", u"GT", u"GT_EQ",
u"EQ", u"NOT_EQ1", u"NOT_EQ2", u"K_ABORT", u"K_ACTION", u"K_ADD",
u"K_AFTER", u"K_ALL", u"K_ALTER", u"K_ANALYZE", u"K_AND", u"K_AS",
u"K_ASC", u"K_ATTACH", u"K_AUTOINCREMENT", u"K_BEFORE", u"K_BEGIN",
u"K_BETWEEN", u"K_BY", u"K_CASCADE", u"K_CASE", u"K_CAST", u"K_CHECK",
u"K_COLLATE", u"K_COLUMN", u"K_COMMIT", u"K_CONFLICT", u"K_CONSTRAINT",
u"K_CREATE", u"K_CROSS", u"K_CURRENT_DATE", u"K_CURRENT_TIME",
u"K_CURRENT_TIMESTAMP", u"K_DATABASE", u"K_DEFAULT", u"K_DEFERRABLE",
u"K_DEFERRED", u"K_DELETE", u"K_DESC", u"K_DETACH", u"K_DISTINCT",
u"K_DROP", u"K_EACH", u"K_ELSE", u"K_END", u"K_ESCAPE", u"K_EXCEPT",
u"K_EXCLUSIVE", u"K_EXISTS", u"K_EXPLAIN", u"K_FAIL", u"K_FOR",
u"K_FOREIGN", u"K_FROM", u"K_FULL", u"K_GLOB", u"K_GROUP", u"K_HAVING",
u"K_IF", u"K_IGNORE", u"K_IMMEDIATE", u"K_IN", u"K_INDEX", u"K_INDEXED",
u"K_INITIALLY", u"K_INNER", u"K_INSERT", u"K_INSTEAD", u"K_INTERSECT",
u"K_INTO", u"K_IS", u"K_ISNULL", u"K_JOIN", u"K_KEY", u"K_LEFT",
u"K_LIKE", u"K_LIMIT", u"K_MATCH", u"K_NATURAL", u"K_NO", u"K_NOT",
u"K_NOTNULL", u"K_NULL", u"K_OF", u"K_OFFSET", u"K_ON", u"K_OR",
u"K_ORDER", u"K_OUTER", u"K_PLAN", u"K_PRAGMA", u"K_PRIMARY",
u"K_QUERY", u"K_RAISE", u"K_RECURSIVE", u"K_REFERENCES", u"K_REGEXP",
u"K_REINDEX", u"K_RELEASE", u"K_RENAME", u"K_REPLACE", u"K_RESTRICT",
u"K_RIGHT", u"K_ROLLBACK", u"K_ROW", u"K_SAVEPOINT", u"K_SELECT",
u"K_SET", u"K_TABLE", u"K_TEMP", u"K_TEMPORARY", u"K_THEN",
u"K_TO", u"K_TRANSACTION", u"K_TRIGGER", u"K_UNION", u"K_UNIQUE",
u"K_UPDATE", u"K_USING", u"K_VACUUM", u"K_VALUES", u"K_VIEW",
u"K_VIRTUAL", u"K_WHEN", u"K_WHERE", u"K_WITH", u"K_WITHOUT",
u"C_ROWID", u"IDENTIFIER", u"NUMERIC_LITERAL", u"BIND_PARAMETER",
u"STRING_LITERAL", u"BLOB_LITERAL", u"SINGLE_LINE_COMMENT",
u"MULTILINE_COMMENT", u"SPACES", u"UNEXPECTED_CHAR" ]
ruleNames = [ u"SCOL", u"DOT", u"OPEN_PAR", u"CLOSE_PAR", u"COMMA",
u"ASSIGN", u"STAR", u"PLUS", u"MINUS", u"TILDE", u"PIPE2",
u"DIV", u"MOD", u"LT2", u"GT2", u"AMP", u"PIPE", u"LT",
u"LT_EQ", u"GT", u"GT_EQ", u"EQ", u"NOT_EQ1", u"NOT_EQ2",
u"K_ABORT", u"K_ACTION", u"K_ADD", u"K_AFTER", u"K_ALL",
u"K_ALTER", u"K_ANALYZE", u"K_AND", u"K_AS", u"K_ASC",
u"K_ATTACH", u"K_AUTOINCREMENT", u"K_BEFORE", u"K_BEGIN",
u"K_BETWEEN", u"K_BY", u"K_CASCADE", u"K_CASE", u"K_CAST",
u"K_CHECK", u"K_COLLATE", u"K_COLUMN", u"K_COMMIT", u"K_CONFLICT",
u"K_CONSTRAINT", u"K_CREATE", u"K_CROSS", u"K_CURRENT_DATE",
u"K_CURRENT_TIME", u"K_CURRENT_TIMESTAMP", u"K_DATABASE",
u"K_DEFAULT", u"K_DEFERRABLE", u"K_DEFERRED", u"K_DELETE",
u"K_DESC", u"K_DETACH", u"K_DISTINCT", u"K_DROP", u"K_EACH",
u"K_ELSE", u"K_END", u"K_ESCAPE", u"K_EXCEPT", u"K_EXCLUSIVE",
u"K_EXISTS", u"K_EXPLAIN", u"K_FAIL", u"K_FOR", u"K_FOREIGN",
u"K_FROM", u"K_FULL", u"K_GLOB", u"K_GROUP", u"K_HAVING",
u"K_IF", u"K_IGNORE", u"K_IMMEDIATE", u"K_IN", u"K_INDEX",
u"K_INDEXED", u"K_INITIALLY", u"K_INNER", u"K_INSERT",
u"K_INSTEAD", u"K_INTERSECT", u"K_INTO", u"K_IS", u"K_ISNULL",
u"K_JOIN", u"K_KEY", u"K_LEFT", u"K_LIKE", u"K_LIMIT",
u"K_MATCH", u"K_NATURAL", u"K_NO", u"K_NOT", u"K_NOTNULL",
u"K_NULL", u"K_OF", u"K_OFFSET", u"K_ON", u"K_OR", u"K_ORDER",
u"K_OUTER", u"K_PLAN", u"K_PRAGMA", u"K_PRIMARY", u"K_QUERY",
u"K_RAISE", u"K_RECURSIVE", u"K_REFERENCES", u"K_REGEXP",
u"K_REINDEX", u"K_RELEASE", u"K_RENAME", u"K_REPLACE",
u"K_RESTRICT", u"K_RIGHT", u"K_ROLLBACK", u"K_ROW", u"K_SAVEPOINT",
u"K_SELECT", u"K_SET", u"K_TABLE", u"K_TEMP", u"K_TEMPORARY",
u"K_THEN", u"K_TO", u"K_TRANSACTION", u"K_TRIGGER", u"K_UNION",
u"K_UNIQUE", u"K_UPDATE", u"K_USING", u"K_VACUUM", u"K_VALUES",
u"K_VIEW", u"K_VIRTUAL", u"K_WHEN", u"K_WHERE", u"K_WITH",
u"K_WITHOUT", u"C_ROWID", u"IDENTIFIER", u"NUMERIC_LITERAL",
u"BIND_PARAMETER", u"STRING_LITERAL", u"BLOB_LITERAL",
u"SINGLE_LINE_COMMENT", u"MULTILINE_COMMENT", u"SPACES",
u"UNEXPECTED_CHAR", u"DIGIT", u"A", u"B", u"C", u"D",
u"E", u"F", u"G", u"H", u"I", u"J", u"K", u"L", u"M",
u"N", u"O", u"P", u"Q", u"R", u"S", u"T", u"U", u"V",
u"W", u"X", u"Y", u"Z" ]
grammarFileName = u"SQLite.g4"
def __init__(self, input=None, output=sys.stdout):
super(SQLiteLexer, self).__init__(input, output=output)
self.checkVersion("4.8")
self._interp = LexerATNSimulator(self, self.atn, self.decisionsToDFA, PredictionContextCache())
self._actions = None
self._predicates = None
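# --- Illustration, not part of the ANTLR-generated output above. ----------
# The serializedATN() buffer is simply a long unicode string: ATNDeserializer
# reads each character back as an integer code point to rebuild the lexer's
# state machine. A minimal, dependency-free sketch of that encoding (the
# `fragment` value is taken verbatim from the buffer above):

```python
if __name__ == "__main__":
    # A short slice of the serialized ATN: four small integers followed by
    # two state numbers written as \uXXXX escapes.
    fragment = u"\3\2\2\2\u03b5\u03b6"

    # The deserializer recovers integers via the characters' code points.
    codepoints = [ord(ch) for ch in fragment]

    assert codepoints == [3, 2, 2, 2, 0x03B5, 0x03B6]
```

Because the real ATN data is position-sensitive, the generated `buf.write`
calls must never be hand-edited; regenerate from `SQLite.g4` with the ANTLR
tool instead.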
| 69.825726 | 103 | 0.624891 | 21,793 | 100,968 | 2.873996 | 0.143854 | 0.045822 | 0.168984 | 0.035125 | 0.44435 | 0.416777 | 0.380375 | 0.263966 | 0.223013 | 0.215573 | 0 | 0.36815 | 0.132666 | 100,968 | 1,445 | 104 | 69.874048 | 0.347059 | 0.000525 | 0 | 0.021038 | 1 | 0.415849 | 0.665712 | 0.639411 | 0 | 0 | 0 | 0 | 0 | 1 | 0.001403 | false | 0 | 0.002805 | 0 | 0.12202 | 0.000701 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d61ef78f600952878c9ce823c51e213141e691e4 | 31 | py | Python | dst/utils/metrics/__init__.py | gooppe/deep-summarization-toolkit | e249b7c31c817fedbc3133a3799c23a0115091bd | [
"MIT"
] | 7 | 2019-05-30T18:19:42.000Z | 2020-03-25T06:52:40.000Z | dst/utils/metrics/__init__.py | gooppe/deep-summarization-toolkit | e249b7c31c817fedbc3133a3799c23a0115091bd | [
"MIT"
] | null | null | null | dst/utils/metrics/__init__.py | gooppe/deep-summarization-toolkit | e249b7c31c817fedbc3133a3799c23a0115091bd | [
"MIT"
] | 3 | 2019-05-26T18:45:15.000Z | 2020-03-24T20:20:46.000Z | from .rouge import RougeMetric
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c3a8a2e3c65130151fcc01becc1fbb89d961a4d1 | 27 | py | Python | achilles/terminal/pull/__init__.py | np-core/druid | ae483db67eeace4dabdb15e1401e9fc01e2c7d07 | [
"MIT"
] | 5 | 2019-09-04T21:02:36.000Z | 2021-11-05T01:14:56.000Z | achilles/terminal/pull/__init__.py | np-core/druid | ae483db67eeace4dabdb15e1401e9fc01e2c7d07 | [
"MIT"
] | 7 | 2019-03-14T05:42:11.000Z | 2019-03-15T01:49:20.000Z | achilles/terminal/pull/__init__.py | np-core/druid | ae483db67eeace4dabdb15e1401e9fc01e2c7d07 | [
"MIT"
] | 2 | 2021-03-01T06:53:35.000Z | 2021-11-05T01:18:43.000Z | from .commands import pull
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c3ccf96301b413812915c06bd005907b84920d6a | 124 | py | Python | admin.py | jbmyre/ip | 99178b3e25b91cdabdbb1e453ad753dd775f773a | [
"MIT"
] | 1 | 2016-11-02T20:32:34.000Z | 2016-11-02T20:32:34.000Z | admin.py | jbmyre/ip_manager | 99178b3e25b91cdabdbb1e453ad753dd775f773a | [
"MIT"
] | null | null | null | admin.py | jbmyre/ip_manager | 99178b3e25b91cdabdbb1e453ad753dd775f773a | [
"MIT"
] | null | null | null | from django.contrib import admin
from . import models
admin.site.register(models.Subnet)
admin.site.register(models.Host)
| 17.714286 | 34 | 0.806452 | 18 | 124 | 5.555556 | 0.555556 | 0.18 | 0.34 | 0.46 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 124 | 6 | 35 | 20.666667 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c3f25885627db5633d8d5b89111721a4902a8b07 | 275 | py | Python | src/day05_golf.py | guibou/AdventOfCode2020 | bfdfb0fb4c3198b7019b88f77e762a82665a024d | [
"BSD-3-Clause"
] | 1 | 2020-12-02T03:24:59.000Z | 2020-12-02T03:24:59.000Z | src/day05_golf.py | guibou/AdventOfCode2020 | bfdfb0fb4c3198b7019b88f77e762a82665a024d | [
"BSD-3-Clause"
] | null | null | null | src/day05_golf.py | guibou/AdventOfCode2020 | bfdfb0fb4c3198b7019b88f77e762a82665a024d | [
"BSD-3-Clause"
] | null | null | null | r=str.replace;print(max(map(lambda x:int(r(r(r(r(x,'F','0'),'B','1'),'R','1'),'L','0'),2),open("../content/day05"))))
s=set(map(lambda x:int(r(r(r(r(x,'F','0'),'B','1'),'R','1'),'L','0'),2),open("../content/day05")));print([i for i in s if (i+1) not in s and (i+2) in s][0])
| 91.666667 | 156 | 0.512727 | 67 | 275 | 2.104478 | 0.38806 | 0.085106 | 0.085106 | 0.184397 | 0.609929 | 0.609929 | 0.609929 | 0.609929 | 0.609929 | 0.609929 | 0 | 0.065891 | 0.061818 | 275 | 2 | 157 | 137.5 | 0.48062 | 0 | 0 | 0 | 0 | 0 | 0.174545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
7f163c52275d5e9e70e4e21bd9ec594990d25fc6 | 13,878 | py | Python | tests/barriers/test_change_status.py | felix781/market-access-python-frontend | 3b0e49feb4fdf0224816326938a46002aa4a2b1c | [
"MIT"
] | 1 | 2021-12-15T04:14:03.000Z | 2021-12-15T04:14:03.000Z | tests/barriers/test_change_status.py | felix781/market-access-python-frontend | 3b0e49feb4fdf0224816326938a46002aa4a2b1c | [
"MIT"
] | 19 | 2019-12-11T11:32:47.000Z | 2022-03-29T15:40:57.000Z | tests/barriers/test_change_status.py | felix781/market-access-python-frontend | 3b0e49feb4fdf0224816326938a46002aa4a2b1c | [
"MIT"
] | 2 | 2021-02-09T09:38:45.000Z | 2021-03-29T19:07:09.000Z | from http import HTTPStatus
from django.urls import reverse
from mock import patch
from core.tests import MarketAccessTestCase
class ChangeStatusTestCase(MarketAccessTestCase):
def test_existing_status_not_in_choices(self):
response = self.client.get(
reverse("barriers:change_status", kwargs={"barrier_id": self.barrier["id"]})
)
assert response.status_code == HTTPStatus.OK
assert "form" in response.context
form = response.context["form"]
status_choice_values = [choice[0] for choice in form.fields["status"].choices]
assert str(self.barrier["status"]["id"]) not in status_choice_values
@patch("utils.api.client.BarriersResource.set_status")
def test_no_status_gets_error(self, mock_set_status):
response = self.client.post(
reverse("barriers:change_status", kwargs={"barrier_id": self.barrier["id"]})
)
assert response.status_code == HTTPStatus.OK
assert "form" in response.context
form = response.context["form"]
assert form.is_valid() is False
assert "status" in form.errors
assert len(form.errors) == 1
assert mock_set_status.called is False
@patch("utils.api.client.BarriersResource.set_status")
def test_open_pending_errors(self, mock_set_status):
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={"status": "1"},
)
assert response.status_code == HTTPStatus.OK
assert "form" in response.context
form = response.context["form"]
assert form.is_valid() is False
assert "status" not in form.errors
assert "pending_summary" in form.errors
assert "pending_type" in form.errors
assert "pending_type_other" not in form.errors
assert len(form.errors) == 2
assert mock_set_status.called is False
@patch("utils.api.client.BarriersResource.set_status")
def test_open_pending_success(self, mock_set_status):
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={
"status": "1",
"pending_summary": "Test pending summary",
"pending_type": "FOR_GOVT",
},
)
assert response.status_code == HTTPStatus.FOUND
mock_set_status.assert_called_with(
barrier_id=self.barrier["id"],
status="1",
sub_status="FOR_GOVT",
status_summary="Test pending summary",
)
@patch("utils.api.client.BarriersResource.set_status")
def test_open_pending_errors_other(self, mock_set_status):
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={
"status": "1",
"pending_summary": "Test pending summary",
"pending_type": "OTHER",
},
)
assert response.status_code == HTTPStatus.OK
assert "form" in response.context
form = response.context["form"]
assert form.is_valid() is False
assert "status" not in form.errors
assert "pending_summary" not in form.errors
assert "pending_type" not in form.errors
assert "pending_type_other" in form.errors
assert len(form.errors) == 1
assert mock_set_status.called is False
@patch("utils.api.client.BarriersResource.set_status")
def test_open_pending_success_other(self, mock_set_status):
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={
"status": "1",
"pending_summary": "Test pending summary",
"pending_type": "OTHER",
"pending_type_other": "Other test",
},
)
assert response.status_code == HTTPStatus.FOUND
mock_set_status.assert_called_with(
barrier_id=self.barrier["id"],
status="1",
sub_status="OTHER",
sub_status_other="Other test",
status_summary="Test pending summary",
)
@patch("utils.api.client.BarriersResource.set_status")
def test_open_in_progress_errors(self, mock_set_status):
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={"status": "2"},
)
assert response.status_code == HTTPStatus.OK
assert "form" in response.context
form = response.context["form"]
assert form.is_valid() is False
assert "status" not in form.errors
assert "open_in_progress_summary" in form.errors
assert len(form.errors) == 1
assert mock_set_status.called is False
@patch("utils.api.client.BarriersResource.set_status")
def test_open_in_progress_success(self, mock_set_status):
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={
"status": "2",
"open_in_progress_summary": "Test summary",
},
)
assert response.status_code == HTTPStatus.FOUND
mock_set_status.assert_called_with(
barrier_id=self.barrier["id"],
status="2",
status_summary="Test summary",
)
@patch("utils.api.client.BarriersResource.set_status")
def test_partially_resolved_errors(self, mock_set_status):
self.barrier["status"]["id"] = 1
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={"status": "3"},
)
assert response.status_code == HTTPStatus.OK
assert "form" in response.context
form = response.context["form"]
assert form.is_valid() is False
assert "status" not in form.errors
assert "part_resolved_summary" in form.errors
assert "part_resolved_date" in form.errors
assert len(form.errors) == 2
assert mock_set_status.called is False
@patch("utils.api.client.BarriersResource.set_status")
def test_partially_resolved_future_date_error(self, mock_set_status):
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={
"status": "3",
"part_resolved_date_0": "5",
"part_resolved_date_1": "2050",
"part_resolved_summary": "Part resolved summary",
},
)
assert response.status_code == HTTPStatus.OK
assert "form" in response.context
form = response.context["form"]
assert form.is_valid() is False
assert "status" not in form.errors
assert "part_resolved_summary" not in form.errors
assert "part_resolved_date" in form.errors
assert len(form.errors) == 1
assert mock_set_status.called is False
@patch("utils.api.client.BarriersResource.set_status")
def test_partially_resolved_bad_date_error(self, mock_set_status):
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={
"status": "3",
"part_resolved_date_0": "5",
"part_resolved_date_1": "20xx",
"part_resolved_summary": "Part resolved summary",
},
)
assert response.status_code == HTTPStatus.OK
assert "form" in response.context
form = response.context["form"]
assert form.is_valid() is False
assert "status" not in form.errors
assert "part_resolved_summary" not in form.errors
assert "part_resolved_date" in form.errors
assert len(form.errors) == 1
assert mock_set_status.called is False
@patch("utils.api.client.BarriersResource.set_status")
def test_partially_resolved_success(self, mock_set_status):
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={
"status": "3",
"part_resolved_date_0": "12",
"part_resolved_date_1": "2015",
"part_resolved_summary": "Part resolved summary",
},
)
assert response.status_code == HTTPStatus.FOUND
mock_set_status.assert_called_with(
barrier_id=self.barrier["id"],
status="3",
status_date="2015-12-01",
status_summary="Part resolved summary",
)
@patch("utils.api.client.BarriersResource.set_status")
def test_fully_resolved_errors(self, mock_set_status):
self.barrier["status"]["id"] = 1
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={"status": "4"},
)
assert response.status_code == HTTPStatus.OK
assert "form" in response.context
form = response.context["form"]
assert form.is_valid() is False
assert "status" not in form.errors
assert "resolved_summary" in form.errors
assert "resolved_date" in form.errors
assert len(form.errors) == 2
assert mock_set_status.called is False
@patch("utils.api.client.BarriersResource.set_status")
def test_fully_resolved_future_date_error(self, mock_set_status):
self.barrier["status"]["id"] = 1
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={
"status": "4",
"resolved_date_0": "5",
"resolved_date_1": "2050",
"resolved_summary": "Test resolved summary",
},
)
assert response.status_code == HTTPStatus.OK
assert "form" in response.context
form = response.context["form"]
assert form.is_valid() is False
assert "status" not in form.errors
assert "resolved_summary" not in form.errors
assert "resolved_date" in form.errors
assert len(form.errors) == 1
assert mock_set_status.called is False
@patch("utils.api.client.BarriersResource.set_status")
def test_fully_resolved_bad_date_error(self, mock_set_status):
self.barrier["status"]["id"] = 1
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={
"status": "4",
"resolved_date_0": "5",
"resolved_date_1": "20xx",
"resolved_summary": "Test resolved summary",
},
)
assert response.status_code == HTTPStatus.OK
assert "form" in response.context
form = response.context["form"]
assert form.is_valid() is False
assert "status" not in form.errors
assert "resolved_summary" not in form.errors
assert "resolved_date" in form.errors
assert len(form.errors) == 1
assert mock_set_status.called is False
@patch("utils.api.client.BarriersResource.set_status")
def test_fully_resolved_success(self, mock_set_status):
self.barrier["status"]["id"] = 1
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={
"status": "4",
"resolved_date_0": "5",
"resolved_date_1": "2019",
"resolved_summary": "Test resolved summary",
},
)
assert response.status_code == HTTPStatus.FOUND
mock_set_status.assert_called_with(
barrier_id=self.barrier["id"],
status="4",
status_date="2019-05-01",
status_summary="Test resolved summary",
)
@patch("utils.api.client.BarriersResource.set_status")
def test_dormant_status_errors(self, mock_set_status):
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={"status": "5"},
)
assert response.status_code == HTTPStatus.OK
assert "form" in response.context
form = response.context["form"]
assert form.is_valid() is False
assert "status" not in form.errors
assert "dormant_summary" in form.errors
assert len(form.errors) == 1
assert mock_set_status.called is False
@patch("utils.api.client.BarriersResource.set_status")
def test_dormant_status_success(self, mock_set_status):
response = self.client.post(
reverse(
"barriers:change_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={"status": "5", "dormant_summary": "Test dormant summary"},
)
assert response.status_code == HTTPStatus.FOUND
mock_set_status.assert_called_with(
barrier_id=self.barrier["id"],
status="5",
status_summary="Test dormant summary",
)
# anime_downloader/commands/gui.py (Alsira/anime-downloader, Unlicense)
import click


@click.command()
def command():
    import anime_downloader.gui
# modules/tests/core/__init__.py (unimauro/eden, MIT)
from auth import *
from utils import *
# tests/test_feed_manager.py (exxamalte/python-aio-georss-client, Apache-2.0)
"""Test for the generic georss feed manager."""
import datetime

import aiohttp
import pytest
from asynctest import CoroutineMock, patch

from aio_georss_client.consts import UPDATE_OK_NO_DATA
from aio_georss_client.feed_manager import FeedManagerBase
from aio_georss_client.status_update import StatusUpdate
from tests import MockGeoRssFeed
from tests.utils import load_fixture

HOME_COORDINATES_1 = (-31.0, 151.0)
HOME_COORDINATES_2 = (-37.0, 150.0)


@pytest.mark.asyncio
async def test_feed_manager(aresponses, event_loop):
    """Test the feed manager."""
    aresponses.add(
        "test.url",
        "/testpath",
        "get",
        aresponses.Response(text=load_fixture("generic_feed_1.xml"), status=200),
    )

    async with aiohttp.ClientSession(loop=event_loop) as websession:
        feed = MockGeoRssFeed(
            websession, HOME_COORDINATES_1, "http://test.url/testpath"
        )

        # This will just record calls and keep track of external ids.
        generated_entity_external_ids = []
        updated_entity_external_ids = []
        removed_entity_external_ids = []

        async def _generate_entity(entity_external_id: str) -> None:
            """Generate new entity."""
            generated_entity_external_ids.append(entity_external_id)

        async def _update_entity(entity_external_id: str) -> None:
            """Update entity."""
            updated_entity_external_ids.append(entity_external_id)

        async def _remove_entity(entity_external_id: str) -> None:
            """Remove entity."""
            removed_entity_external_ids.append(entity_external_id)

        feed_manager = FeedManagerBase(
            feed, _generate_entity, _update_entity, _remove_entity
        )
        assert (
            repr(feed_manager) == "<FeedManagerBase("
            "feed=<MockGeoRssFeed(home="
            "(-31.0, 151.0), "
            "url=http://test.url/testpath, "
            "radius=None, categories=None)>)>"
        )
        await feed_manager.update()
        entries = feed_manager.feed_entries
        assert entries is not None
        assert len(entries) == 5
        assert feed_manager.last_update is not None
        assert feed_manager.last_timestamp == datetime.datetime(2018, 9, 23, 9, 10)
        assert len(generated_entity_external_ids) == 5
        assert len(updated_entity_external_ids) == 0
        assert len(removed_entity_external_ids) == 0

        feed_entry = entries.get("1234")
        assert feed_entry.title == "Title 1"
        assert feed_entry.external_id == "1234"
        assert feed_entry.coordinates == (-37.2345, 149.1234)
        assert round(abs(feed_entry.distance_to_home - 714.4), 1) == 0
        assert repr(feed_entry) == "<MockFeedEntry(id=1234)>"

        feed_entry = entries.get("2345")
        assert feed_entry.title == "Title 2"
        assert feed_entry.external_id == "2345"

        feed_entry = entries.get("Title 3")
        assert feed_entry.title == "Title 3"
        assert feed_entry.external_id == "Title 3"

        external_id = hash((-37.8901, 149.7890))
        feed_entry = entries.get(external_id)
        assert feed_entry.title is None
        assert feed_entry.external_id == external_id

        feed_entry = entries.get("5678")
        assert feed_entry.title == "Title 5"
        assert feed_entry.external_id == "5678"

        # Simulate an update with several changes.
        generated_entity_external_ids.clear()
        updated_entity_external_ids.clear()
        removed_entity_external_ids.clear()

        aresponses.add(
            "test.url",
            "/testpath",
            "get",
            aresponses.Response(text=load_fixture("generic_feed_4.xml"), status=200),
        )

        await feed_manager.update()
        entries = feed_manager.feed_entries
        assert entries is not None
        assert len(entries) == 3
        assert len(generated_entity_external_ids) == 1
        assert len(updated_entity_external_ids) == 2
        assert len(removed_entity_external_ids) == 3

        feed_entry = entries.get("1234")
        assert feed_entry.title == "Title 1 UPDATED"

        feed_entry = entries.get("2345")
        assert feed_entry.title == "Title 2"

        feed_entry = entries.get("6789")
        assert feed_entry.title == "Title 6"

        # Simulate an update with no data.
        generated_entity_external_ids.clear()
        updated_entity_external_ids.clear()
        removed_entity_external_ids.clear()

        with patch(
            "aio_georss_client.feed.GeoRssFeed._fetch", new_callable=CoroutineMock
        ) as mock_fetch:
            mock_fetch.return_value = (UPDATE_OK_NO_DATA, None)
            await feed_manager.update()
            entries = feed_manager.feed_entries

            assert len(entries) == 3
            assert len(generated_entity_external_ids) == 0
            assert len(updated_entity_external_ids) == 0
            assert len(removed_entity_external_ids) == 0

        # Simulate an update producing an error.
        generated_entity_external_ids.clear()
        updated_entity_external_ids.clear()
        removed_entity_external_ids.clear()

        aresponses.add(
            "test.url",
            "/testpath",
            "get",
            aresponses.Response(status=500),
        )

        await feed_manager.update()
        entries = feed_manager.feed_entries
        assert len(entries) == 0
        assert len(generated_entity_external_ids) == 0
        assert len(updated_entity_external_ids) == 0
        assert len(removed_entity_external_ids) == 3


@pytest.mark.asyncio
async def test_feed_manager_no_timestamp(aresponses, event_loop):
    """Test updating feed is ok."""
    aresponses.add(
        "test.url",
        "/testpath",
        "get",
        aresponses.Response(text=load_fixture("generic_feed_5.xml"), status=200),
    )

    async with aiohttp.ClientSession(loop=event_loop) as websession:
        feed = MockGeoRssFeed(
            websession, HOME_COORDINATES_1, "http://test.url/testpath"
        )

        # This will just record calls and keep track of external ids.
        generated_entity_external_ids = []
        updated_entity_external_ids = []
        removed_entity_external_ids = []

        async def _generate_entity(external_id: str) -> None:
            """Generate new entity."""
            generated_entity_external_ids.append(external_id)

        async def _update_entity(external_id: str) -> None:
            """Update entity."""
            updated_entity_external_ids.append(external_id)

        async def _remove_entity(external_id: str) -> None:
            """Remove entity."""
            removed_entity_external_ids.append(external_id)

        feed_manager = FeedManagerBase(
            feed, _generate_entity, _update_entity, _remove_entity
        )
        assert (
            repr(feed_manager) == "<FeedManagerBase("
            "feed=<MockGeoRssFeed(home="
            "(-31.0, 151.0), "
            "url=http://test.url/testpath, "
            "radius=None, categories=None)>)>"
        )
        await feed_manager.update()
        entries = feed_manager.feed_entries
        assert entries is not None
        assert len(entries) == 1
        assert feed_manager.last_timestamp is None


@pytest.mark.asyncio
async def test_feed_manager_with_status_callback(aresponses, event_loop):
    """Test the feed manager."""
    aresponses.add(
        "test.url",
        "/testpath",
        "get",
        aresponses.Response(text=load_fixture("generic_feed_1.xml"), status=200),
    )

    async with aiohttp.ClientSession(loop=event_loop) as websession:
        feed = MockGeoRssFeed(
            websession, HOME_COORDINATES_1, "http://test.url/testpath"
        )

        # This will just record calls and keep track of external ids.
        generated_entity_external_ids = []
        updated_entity_external_ids = []
        removed_entity_external_ids = []
        status_update = []

        async def _generate_entity(external_id: str) -> None:
            """Generate new entity."""
            generated_entity_external_ids.append(external_id)

        async def _update_entity(external_id: str) -> None:
            """Update entity."""
            updated_entity_external_ids.append(external_id)

        async def _remove_entity(external_id: str) -> None:
            """Remove entity."""
            removed_entity_external_ids.append(external_id)

        async def _status(status_details: StatusUpdate) -> None:
            """Capture status update details."""
            status_update.append(status_details)

        feed_manager = FeedManagerBase(
            feed, _generate_entity, _update_entity, _remove_entity, _status
        )
        assert (
            repr(feed_manager) == "<FeedManagerBase(feed=<"
            "MockGeoRssFeed(home=(-31.0, 151.0), "
            "url=http://test.url/testpath, "
            "radius=None, categories=None)>)>"
        )
        await feed_manager.update()
        entries = feed_manager.feed_entries
        assert entries is not None
        assert len(entries) == 5
        assert feed_manager.last_update is not None
        assert feed_manager.last_timestamp == datetime.datetime(2018, 9, 23, 9, 10)
        assert len(generated_entity_external_ids) == 5
        assert len(updated_entity_external_ids) == 0
        assert len(removed_entity_external_ids) == 0

        assert status_update[0].status == "OK"
        assert status_update[0].last_update is not None
        last_update_successful = status_update[0].last_update_successful
        assert status_update[0].last_update == last_update_successful
        assert status_update[0].last_timestamp == datetime.datetime(2018, 9, 23, 9, 10)
        assert status_update[0].total == 5
        assert status_update[0].created == 5
        assert status_update[0].updated == 0
        assert status_update[0].removed == 0
        assert (
            repr(status_update[0]) == f"<StatusUpdate("
            f"OK@{status_update[0].last_update})>"
        )

        # Simulate an update producing an error.
        generated_entity_external_ids.clear()
        updated_entity_external_ids.clear()
        removed_entity_external_ids.clear()
        status_update.clear()

        aresponses.add(
            "test.url",
            "/testpath",
            "get",
            aresponses.Response(status=500),
        )

        await feed_manager.update()
        entries = feed_manager.feed_entries
        assert len(entries) == 0
        assert len(generated_entity_external_ids) == 0
        assert len(updated_entity_external_ids) == 0
        assert len(removed_entity_external_ids) == 5

        assert status_update[0].status == "ERROR"
        assert status_update[0].last_update is not None
        assert status_update[0].last_update_successful is not None
        assert status_update[0].last_update_successful == last_update_successful
        assert status_update[0].total == 0
# psam/src/login.py (snwdaaa/PSAchivementManager, MIT)
# User info
onlineID = "kkj48188"
npssoCode = '4SDFJNJ6twByuLa3Bd7fPSqi4A7L1xxKaxhRP5tRnjvYEVDNhYxM4XxwaRej6BHi'
# mxdc/com/ca.py (michel4j/mxdc, BSD-3-Clause)
from gepics import *  # Use pyepics
# from .oepics import *  # use built-in epics interface
f65afb7425d58fcccea48f5d1a5dd2fb53425e5e | 168 | py | Python | cityyouthmatrix/apps/api/models.py | johnathaningle/CityYouthMatrix | b4ad244d92c97f3e20a923e18babf1ed1d278a87 | [
"MIT"
] | 1 | 2020-06-13T11:26:31.000Z | 2020-06-13T11:26:31.000Z | cityyouthmatrix/apps/api/models.py | jingle1000/CityYouthMatrix | b4ad244d92c97f3e20a923e18babf1ed1d278a87 | [
"MIT"
] | 2 | 2021-03-30T13:37:26.000Z | 2021-04-08T21:01:26.000Z | cityyouthmatrix/apps/api/models.py | johnathaningle/CityYouthMatrix | b4ad244d92c97f3e20a923e18babf1ed1d278a87 | [
"MIT"
] | null | null | null | from django.contrib.auth.models import AbstractUser
from django.utils.translation import gettext_lazy as _
from django.db import models
from django.db.models import Q
| 28 | 54 | 0.839286 | 26 | 168 | 5.346154 | 0.538462 | 0.28777 | 0.172662 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113095 | 168 | 5 | 55 | 33.6 | 0.932886 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
14461bbda815ddf288c90e8650a293f9db927857 | 196 | py | Python | ker/api/__init__.py | csvwolf/ker.py | ac86e1f01cdef0f3ae8b45022df20e8f4fcc9a76 | [
"MIT"
] | 5 | 2019-09-30T08:40:53.000Z | 2019-10-14T10:22:06.000Z | ker/api/__init__.py | csvwolf/ker.py | ac86e1f01cdef0f3ae8b45022df20e8f4fcc9a76 | [
"MIT"
] | null | null | null | ker/api/__init__.py | csvwolf/ker.py | ac86e1f01cdef0f3ae8b45022df20e8f4fcc9a76 | [
"MIT"
] | null | null | null | """
export apis to root
"""
from ker.api.list import *
from ker.api.account import *
from ker.api.dns import *
from ker.api.monitor import *
from ker.api.server import *
from ker.api.ssh import *
| 19.6 | 29 | 0.72449 | 34 | 196 | 4.176471 | 0.411765 | 0.295775 | 0.422535 | 0.56338 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153061 | 196 | 9 | 30 | 21.777778 | 0.855422 | 0.096939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
144cd6b86121304aeccdfbcbfd145d4319dedbf7 | 66 | py | Python | gputools/separable/__init__.py | gmazzamuto/gputools | 73a4dee76a119f94d8163781a85b691fd080d506 | [
"BSD-3-Clause"
] | 89 | 2015-08-28T14:17:33.000Z | 2022-01-20T16:19:34.000Z | gputools/separable/__init__.py | gmazzamuto/gputools | 73a4dee76a119f94d8163781a85b691fd080d506 | [
"BSD-3-Clause"
] | 24 | 2015-08-28T19:06:22.000Z | 2022-02-21T21:10:13.000Z | gputools/separable/__init__.py | gmazzamuto/gputools | 73a4dee76a119f94d8163781a85b691fd080d506 | [
"BSD-3-Clause"
] | 17 | 2015-08-28T18:56:43.000Z | 2021-09-15T23:15:36.000Z |
from .separable_approx import separable_approx, separable_series | 22 | 64 | 0.878788 | 8 | 66 | 6.875 | 0.625 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 66 | 3 | 64 | 22 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
21152fc28a6ca5d535b00e0f61e26e6bb25fcdca | 111 | py | Python | linkify_it/__init__.py | tsutsu3/linkify-it-py | d7e3b77bbac4b4a9d61dcd7a9c47d74acc927831 | [
"MIT"
] | 7 | 2020-12-13T19:42:39.000Z | 2021-12-31T17:35:26.000Z | linkify_it/__init__.py | tsutsu3/linkify-it-py | d7e3b77bbac4b4a9d61dcd7a9c47d74acc927831 | [
"MIT"
] | 23 | 2020-11-10T13:40:12.000Z | 2021-12-21T09:13:13.000Z | linkify_it/__init__.py | tsutsu3/linkify-it-py | d7e3b77bbac4b4a9d61dcd7a9c47d74acc927831 | [
"MIT"
] | 2 | 2021-11-14T20:35:11.000Z | 2021-12-13T13:39:07.000Z | from .main import LinkifyIt # noqa: F401p
from .main import SchemaError # noqa: F401p
__version__ = "1.0.2"
| 22.2 | 44 | 0.720721 | 16 | 111 | 4.75 | 0.6875 | 0.210526 | 0.368421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098901 | 0.18018 | 111 | 4 | 45 | 27.75 | 0.736264 | 0.207207 | 0 | 0 | 0 | 0 | 0.058824 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
21236b97bbfc3c05665a0b5d9998b2d8d125d063 | 16,676 | py | Python | velkozz_web_api/apps/finance_api/model_views_seralizers/market_index/market_indicies_views.py | velkoz-data-ingestion/velkozz_web_api | 519a6a90e5fdf5bab8ba2daf637768c5fd424a12 | [
"MIT"
] | null | null | null | velkozz_web_api/apps/finance_api/model_views_seralizers/market_index/market_indicies_views.py | velkoz-data-ingestion/velkozz_web_api | 519a6a90e5fdf5bab8ba2daf637768c5fd424a12 | [
"MIT"
] | null | null | null | velkozz_web_api/apps/finance_api/model_views_seralizers/market_index/market_indicies_views.py | velkoz-data-ingestion/velkozz_web_api | 519a6a90e5fdf5bab8ba2daf637768c5fd424a12 | [
"MIT"
] | null | null | null | # Importing Django Methods:
from django.shortcuts import render
from rest_framework import viewsets
from rest_framework.decorators import api_view
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated
from django.contrib.auth.models import Permission

# Importing Data Management Packages:
import json

# Importing the Market Index Serializers, Database Models and Base ModelViewSets:
from accounts.views import AbstractModelViewSet
from finance_api.models.market_indicies.market_indicies_models import *
from .market_indicies_serializers import *


# Market Index ModelViewSets:
class SPYIndexCompositionViewSet(AbstractModelViewSet):
    """The ViewSet providing the REST API routes for the SPYIndexComposition
    database table.
    """
    queryset = SPYIndexComposition.objects.all()
    serializer_class = SPYIndexSerializer

    def list(self, request):
        """The overwritten ViewSet method that contains the logic for processing
        GET requests to the SPY Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context['request'] = request

        # Querying all of the data from the database:
        queryset = SPYIndexComposition.objects.all()
        serializer = SPYIndexSerializer(queryset, many=True, context=context)

        return Response(serializer.data)

    def create(self, request):
        """The ViewSet method that contains the logic for processing POST
        requests to the SPY Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context['request'] = request

        queryset = SPYIndexComposition.objects.all()
        serializer = SPYIndexSerializer(queryset, many=True, context=context)

        # Extracting the json content from the request body:
        if request.body:

            # Loading all data from the request body:
            body_content = json.loads(request.body)

            # Creating or updating existing model instances via list comprehension:
            django_objs_lst = [
                SPYIndexComposition.objects.update_or_create(
                    cik=json["CIK"],
                    defaults={
                        "symbol": json["Symbol"],
                        "security_name": json["Security"],
                        'gics_sector': json["GICS Sector"],
                        'gics_sub_industry': json["GICS Sub-Industry"],
                        'headquarters_location': json["Headquarters Location"],
                        'date_added': json["Date first added"],
                        'founded': json["Founded"]}
                ) for json in body_content]

        return Response(serializer.data)


class DJIAIndexCompositionViewSet(AbstractModelViewSet):
    """The ViewSet providing the REST API routes for the DJIAIndexComposition
    database table.
    """
    queryset = DJIAIndexComposition.objects.all()
    serializer_class = DJIAIndexSerializer

    def list(self, request):
        """Method that contains the logic for processing GET requests to the
        DJIA Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context['request'] = request

        # Querying all of the data from the database:
        queryset = DJIAIndexComposition.objects.all()
        serializer = DJIAIndexSerializer(queryset, many=True, context=context)

        return Response(serializer.data)

    def create(self, request):
        """The ViewSet method that contains the logic for processing POST
        requests to the DJIA Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context['request'] = request

        queryset = DJIAIndexComposition.objects.all()
        serializer = DJIAIndexSerializer(queryset, many=True, context=context)

        # Extracting the json content from the request body:
        if request.body:

            # Loading all data from the request body:
            body_content = json.loads(request.body)

            # Creating or updating existing model instances via list comprehension:
            django_objs_lst = [
                DJIAIndexComposition.objects.update_or_create(
                    company=json["Company"],
                    defaults={
                        'exchange': json["Exchange"],
                        'symbol': json["Symbol"],
                        'industry': json["Industry"],
                        'date_added': json["Date added"],
                        'notes': json["Notes"],
                        'weighting': json[list(json)[-1]]
                    }
                ) for json in body_content]

        return Response(serializer.data)


class SPTSXIndexCompositionViewSet(AbstractModelViewSet):
    """The ViewSet providing the REST API routes for the SPTSXIndexComposition
    database table.
    """
    queryset = SPTSXIndexComposition.objects.all()
    serializer_class = SPTSXIndexSerializer

    def list(self, request):
        """The ViewSet method for processing GET requests to the SPTSX
        Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context["request"] = request

        queryset = SPTSXIndexComposition.objects.all()
        serializer = SPTSXIndexSerializer(queryset, many=True, context=context)

        return Response(serializer.data)

    def create(self, request):
        """The ViewSet method that contains the logic for processing POST
        requests to the SPTSX Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context['request'] = request

        queryset = SPTSXIndexComposition.objects.all()
        serializer = SPTSXIndexSerializer(queryset, many=True, context=context)

        # Extracting the json content from the request body:
        if request.body:

            # Loading all data from the request body:
            body_content = json.loads(request.body)

            # Creating or updating existing model instances via list comprehension:
            django_objs_lst = [
                SPTSXIndexComposition.objects.update_or_create(
                    symbol=json["Symbol"],
                    defaults={
                        'company': json["Company"],
                        'sector': json["Sector [5]"],
                        'industry': json["Industry [5]"]
                    }
                ) for json in body_content]

        return Response(serializer.data)
class FTSE100IndexCompositionViewSet(AbstractModelViewSet):
    """The ViewSet providing the REST API routes for the FTSE100IndexComposition
    database table.
    """
    queryset = FTSE100IndexComposition.objects.all()
    serializer_class = FTSE100IndexSerializer

    def list(self, request):
        """The ViewSet method for processing GET requests to the FTSE 100
        Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context["request"] = request
        queryset = FTSE100IndexComposition.objects.all()
        serializer = FTSE100IndexSerializer(queryset, many=True, context=context)
        return Response(serializer.data)

    def create(self, request):
        """The ViewSet method that contains the logic for processing POST
        requests to the FTSE 100 Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context['request'] = request
        queryset = FTSE100IndexComposition.objects.all()
        serializer = FTSE100IndexSerializer(queryset, many=True, context=context)
        # Extracting the JSON content from the request body:
        if request.body:
            # Loading all data from the request body:
            body_content = json.loads(request.body)
            # Creating or updating existing model instances, keyed on the
            # EPIC ticker symbol:
            django_objs_lst = [
                FTSE100IndexComposition.objects.update_or_create(
                    symbol=record["EPIC"],
                    defaults={
                        'company': record["Company"],
                        'industry': record["FTSE Industry Classification Benchmark sector[13]"]
                    }
                ) for record in body_content]
        return Response(serializer.data)
class SMICompositionViewSet(AbstractModelViewSet):
    """The ViewSet providing the REST API routes for the SMIComposition
    database table.
    """
    queryset = SMIComposition.objects.all()
    serializer_class = SMISerializer

    def list(self, request):
        """The ViewSet method for processing GET requests to the SMI
        Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context["request"] = request
        queryset = SMIComposition.objects.all()
        serializer = SMISerializer(queryset, many=True, context=context)
        return Response(serializer.data)

    def create(self, request):
        """The ViewSet method that contains the logic for processing POST
        requests to the SMI Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context['request'] = request
        queryset = SMIComposition.objects.all()
        serializer = SMISerializer(queryset, many=True, context=context)
        # Extracting the JSON content from the request body:
        if request.body:
            # Loading all data from the request body:
            body_content = json.loads(request.body)
            # Creating or updating existing model instances via list comprehension:
            django_objs_lst = [
                SMIComposition.objects.update_or_create(
                    symbol=record["Ticker"],
                    defaults={
                        'rank': record["Rank"],
                        'company': record["Name"],
                        'industry': record["Industry"],
                        'canton': record["Canton"],
                        'weighting': record["Weighting in\xa0%"]
                    }
                ) for record in body_content]
        return Response(serializer.data)
class SPICompositionViewSet(AbstractModelViewSet):
    """The ViewSet providing the REST API routes for the SPIComposition
    database table.
    """
    queryset = SPIComposition.objects.all()
    serializer_class = SPISerializer

    def list(self, request):
        """The ViewSet method for processing GET requests to the Swiss
        Performance Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context["request"] = request
        queryset = SPIComposition.objects.all()
        serializer = SPISerializer(queryset, many=True, context=context)
        return Response(serializer.data)

    def create(self, request):
        """The ViewSet method that contains the logic for processing POST
        requests to the Swiss Performance Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context['request'] = request
        queryset = SPIComposition.objects.all()
        serializer = SPISerializer(queryset, many=True, context=context)
        # Extracting the JSON content from the request body:
        if request.body:
            # Loading all data from the request body:
            body_content = json.loads(request.body)
            # Creating or updating existing model instances via list comprehension:
            django_objs_lst = [
                SPIComposition.objects.update_or_create(
                    symbol=record["Symbol"],
                    defaults={
                        'company': record["Company"],
                        'smi_family': record["SMI Family"],
                        'date_added': record["Listing"],
                        'notes': record["Remarks"]
                    }
                ) for record in body_content]
        return Response(serializer.data)
class NASDAQCompositionViewSet(AbstractModelViewSet):
    """The ViewSet providing the REST API routes for the NASDAQ
    database table.
    """
    queryset = NASDAQComposition.objects.all()
    serializer_class = NASDAQSerializer

    def list(self, request):
        """The ViewSet method for processing GET requests to the NASDAQ
        Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context["request"] = request
        queryset = NASDAQComposition.objects.all()
        serializer = NASDAQSerializer(queryset, many=True, context=context)
        return Response(serializer.data)

    def create(self, request):
        """The ViewSet method that contains the logic for processing POST
        requests to the NASDAQ Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context['request'] = request
        queryset = NASDAQComposition.objects.all()
        serializer = NASDAQSerializer(queryset, many=True, context=context)
        # Extracting the JSON content from the request body:
        if request.body:
            # Loading all data from the request body:
            body_content = json.loads(request.body)
            # Creating or updating existing model instances via list comprehension:
            django_objs_lst = [
                NASDAQComposition.objects.update_or_create(
                    symbol=record["Symbol"],
                    defaults={
                        'company': record["Company"],
                        'market_cap': record["Market Cap"],
                        'country': record["Country"],
                        'ipo_year': record["IPO Year"],
                        'sector': record["Sector"],
                        'industry': record["Industry"]
                    }
                ) for record in body_content]
        return Response(serializer.data)
class NYSECompositionViewSet(AbstractModelViewSet):
    """The ViewSet providing the REST API routes for the NYSE
    database table.
    """
    queryset = NYSEComposition.objects.all()
    serializer_class = NYSESerializer

    def list(self, request):
        """The ViewSet method for processing GET requests to the NYSE
        Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context["request"] = request
        queryset = NYSEComposition.objects.all()
        serializer = NYSESerializer(queryset, many=True, context=context)
        return Response(serializer.data)

    def create(self, request):
        """The ViewSet method that contains the logic for processing POST
        requests to the NYSE Index database table.
        """
        # Creating a context dict to be populated:
        context = {}
        context['request'] = request
        queryset = NYSEComposition.objects.all()
        serializer = NYSESerializer(queryset, many=True, context=context)
        # Extracting the JSON content from the request body:
        if request.body:
            # Loading all data from the request body:
            body_content = json.loads(request.body)
            # Creating or updating existing model instances via list comprehension:
            django_objs_lst = [
                NYSEComposition.objects.update_or_create(
                    symbol=record["Symbol"],
                    defaults={
                        'company': record["Company"],
                        'market_cap': record["Market Cap"],
                        'country': record["Country"],
                        'ipo_year': record["IPO Year"],
                        'sector': record["Sector"],
                        'industry': record["Industry"]
                    }
                ) for record in body_content]
        return Response(serializer.data)
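Each `create()` above applies the same upsert pattern: parse the JSON request body, key each row on its ticker symbol, and update the remaining fields. A minimal, dependency-free sketch of that pattern, with a plain dict standing in for Django's `update_or_create` and illustrative field names:

```python
import json


def upsert_index_rows(table: dict, request_body: bytes) -> list:
    """Upsert rows keyed on "Symbol", mirroring the ViewSet create() logic."""
    results = []  # (row, created) pairs, like update_or_create returns
    for record in json.loads(request_body):
        symbol = record["Symbol"]
        created = symbol not in table  # True only on first insert
        row = table.setdefault(symbol, {})
        row.update(company=record["Company"], sector=record["Sector"])
        results.append((row, created))
    return results


table = {}
body = json.dumps([
    {"Symbol": "RY", "Company": "Royal Bank of Canada", "Sector": "Financials"},
    {"Symbol": "RY", "Company": "Royal Bank of Canada", "Sector": "Banks"},
]).encode()
results = upsert_index_rows(table, body)
# The second row updates the first instead of creating a duplicate:
print(len(table), results[0][1], results[1][1])  # → 1 True False
```

This is why repeated POSTs of the same index snapshot are idempotent: the symbol acts as the lookup key, and everything in `defaults` is simply overwritten.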
| 36.893805 | 94 | 0.579815 | 1,543 | 16,676 | 6.214517 | 0.10499 | 0.04672 | 0.050057 | 0.043383 | 0.787986 | 0.747315 | 0.735426 | 0.73188 | 0.73188 | 0.72312 | 0 | 0.003289 | 0.343548 | 16,676 | 451 | 95 | 36.97561 | 0.872659 | 0.263792 | 0 | 0.702586 | 0 | 0 | 0.072328 | 0.001787 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.043103 | 0 | 0.284483 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
216a46c38b5c8a45caaede810cec43986028719f | 5,568 | py | Python | tests/test_plotting.py | harrisong/lifetimes | 12708827fdeb777bce8149ccee41ba19c5ab009b | [
"MIT"
] | 2 | 2017-10-19T10:28:20.000Z | 2019-12-17T03:23:45.000Z | tests/test_plotting.py | harrisong/lifetimes | 12708827fdeb777bce8149ccee41ba19c5ab009b | [
"MIT"
] | 1 | 2021-09-08T11:04:27.000Z | 2021-09-08T11:04:27.000Z | tests/test_plotting.py | isabella232/lifetimes | afb8456df7e9b5df0ea2312c718ae82afcb2f0f8 | [
"MIT"
] | 2 | 2019-12-17T03:23:47.000Z | 2021-09-08T09:32:44.000Z | import pytest
import matplotlib
matplotlib.use('AGG') # use a non-interactive backend
from matplotlib import pyplot as plt
from lifetimes import plotting
from lifetimes import BetaGeoFitter, ParetoNBDFitter, ModifiedBetaGeoFitter
from lifetimes.datasets import load_cdnow_summary, load_transaction_data
from lifetimes import utils
bgf = BetaGeoFitter()
cd_data = load_cdnow_summary()
bgf.fit(cd_data['frequency'], cd_data['recency'], cd_data['T'], iterative_fitting=1)
@pytest.mark.plottest
class TestPlotting:

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_period_transactions(self):
        plt.figure()
        plotting.plot_period_transactions(bgf)
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_period_transactions_parento(self):
        pnbd = ParetoNBDFitter()
        pnbd.fit(cd_data['frequency'], cd_data['recency'], cd_data['T'], iterative_fitting=1)
        plt.figure()
        plotting.plot_period_transactions(pnbd)
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_period_transactions_mbgf(self):
        mbgf = ModifiedBetaGeoFitter()
        mbgf.fit(cd_data['frequency'], cd_data['recency'], cd_data['T'], iterative_fitting=1)
        plt.figure()
        plotting.plot_period_transactions(mbgf)
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_period_transactions_max_frequency(self):
        plt.figure()
        plotting.plot_period_transactions(bgf, max_frequency=12)
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_period_transactions_labels(self):
        plt.figure()
        plotting.plot_period_transactions(bgf, label=['A', 'B'])
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_frequency_recency_matrix(self):
        plt.figure()
        plotting.plot_frequency_recency_matrix(bgf)
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_frequency_recency_matrix_max_recency(self):
        plt.figure()
        plotting.plot_frequency_recency_matrix(bgf, max_recency=100)
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_frequency_recency_matrix_max_frequency(self):
        plt.figure()
        plotting.plot_frequency_recency_matrix(bgf, max_frequency=100)
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_frequency_recency_matrix_max_frequency_max_recency(self):
        plt.figure()
        plotting.plot_frequency_recency_matrix(bgf, max_frequency=100, max_recency=100)
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_probability_alive_matrix(self):
        plt.figure()
        plotting.plot_probability_alive_matrix(bgf)
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_probability_alive_matrix_max_frequency(self):
        plt.figure()
        plotting.plot_probability_alive_matrix(bgf, max_frequency=100)
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_probability_alive_matrix_max_recency(self):
        plt.figure()
        plotting.plot_probability_alive_matrix(bgf, max_recency=100)
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_probability_alive_matrix_max_frequency_max_recency(self):
        plt.figure()
        plotting.plot_probability_alive_matrix(bgf, max_frequency=100, max_recency=100)
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_expected_repeat_purchases(self):
        plt.figure()
        plotting.plot_expected_repeat_purchases(bgf)
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_expected_repeat_purchases_with_label(self):
        plt.figure()
        plotting.plot_expected_repeat_purchases(bgf, label='test label')
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_customer_alive_history(self):
        plt.figure()
        transaction_data = load_transaction_data()
        # yes I know this is using the wrong data, but I'm testing plotting here.
        id = 35
        days_since_birth = 200
        sp_trans = transaction_data.loc[transaction_data['id'] == id]
        plotting.plot_history_alive(bgf, days_since_birth, sp_trans, 'date')
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_calibration_purchases_vs_holdout_purchases(self):
        transaction_data = load_transaction_data()
        summary = utils.calibration_and_holdout_data(transaction_data, 'id', 'date', '2014-09-01', '2014-12-31')
        bgf.fit(summary['frequency_cal'], summary['recency_cal'], summary['T_cal'])
        plt.figure()
        plotting.plot_calibration_purchases_vs_holdout_purchases(bgf, summary)
        return plt.gcf()

    @pytest.mark.mpl_image_compare(tolerance=30)
    def test_plot_calibration_purchases_vs_holdout_purchases_time_since_last_purchase(self):
        transaction_data = load_transaction_data()
        summary = utils.calibration_and_holdout_data(transaction_data, 'id', 'date', '2014-09-01', '2014-12-31')
        bgf.fit(summary['frequency_cal'], summary['recency_cal'], summary['T_cal'])
        plt.figure()
        plotting.plot_calibration_purchases_vs_holdout_purchases(bgf, summary, kind='time_since_last_purchase')
        return plt.gcf()
| 38.136986 | 112 | 0.724138 | 719 | 5,568 | 5.262865 | 0.146036 | 0.050211 | 0.061839 | 0.085624 | 0.802061 | 0.793076 | 0.793076 | 0.789905 | 0.754228 | 0.706395 | 0 | 0.022256 | 0.176904 | 5,568 | 145 | 113 | 38.4 | 0.803404 | 0.018139 | 0 | 0.521368 | 0 | 0 | 0.037701 | 0.004392 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.059829 | 0 | 0.376068 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dcbea5908540d97be2f03cafd3338e4c50df7af0 | 16,147 | py | Python | agents/college_admission_jury_test.py | jackblandin/ml-fairness-gym | dce1feaacf2588e0a2d6187e896796241a25ed81 | [
"Apache-2.0"
] | null | null | null | agents/college_admission_jury_test.py | jackblandin/ml-fairness-gym | dce1feaacf2588e0a2d6187e896796241a25ed81 | [
"Apache-2.0"
] | null | null | null | agents/college_admission_jury_test.py | jackblandin/ml-fairness-gym | dce1feaacf2588e0a2d6187e896796241a25ed81 | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
# Copyright 2022 The ML Fairness Gym Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import absltest
import core
import params
import test_util
from agents import college_admission_jury
from environments import college_admission
import numpy as np
class FixedJuryTest(absltest.TestCase):
  def test_fixed_agent_simulation_runs_successfully(self):
    env = college_admission.CollegeAdmissionsEnv()
    agent = college_admission_jury.FixedJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        threshold=0.7)
    test_util.run_test_simulation(env=env, agent=agent, stackelberg=True)

  def test_agent_raises_episode_done_error(self):
    env = college_admission.CollegeAdmissionsEnv()
    agent = college_admission_jury.FixedJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        threshold=0.7)
    with self.assertRaises(core.EpisodeDoneError):
      agent.act(
          observation={
              'threshold': np.array(0.5),
              'epsilon_prob': np.array(0)
          },
          done=True)

  def test_agent_raises_invalid_observation_error(self):
    env = college_admission.CollegeAdmissionsEnv()
    agent = college_admission_jury.FixedJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        threshold=0.7)
    with self.assertRaises(core.InvalidObservationError):
      agent.act(observation={0: 'Invalid Observation'}, done=False)

  def test_agent_produces_zero_no_epsilon_greedy(self):
    env = college_admission.CollegeAdmissionsEnv()
    agent = college_admission_jury.FixedJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        threshold=0.7,
        epsilon_greedy=False)
    epsilon_probs = [agent.initial_action()['epsilon_prob'] for _ in range(10)]
    self.assertEqual(epsilon_probs, [0] * 10)

  def test_agent_produces_different_epsilon_with_epsilon_greedy(self):
    env = college_admission.CollegeAdmissionsEnv()
    agent = college_admission_jury.FixedJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        threshold=0.7,
        epsilon_greedy=True)
    obs, _, done, _ = env.step(agent.initial_action())
    epsilon_probs = [float(agent.initial_action()['epsilon_prob'])]
    epsilon_probs.extend(
        [float(agent.act(obs, done)['epsilon_prob']) for _ in range(10)])
    self.assertGreater(len(set(epsilon_probs)), 1)

  def test_epsilon_prob_decays_as_expected(self):
    env = college_admission.CollegeAdmissionsEnv()
    agent = college_admission_jury.FixedJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        threshold=0.7,
        epsilon_greedy=True,
        initial_epsilon_prob=0.3,
        decay_steps=5,
        epsilon_prob_decay_rate=0.001)
    obs, _, done, _ = env.step(agent.initial_action())
    epsilon_probs = [float(agent.initial_action()['epsilon_prob'])]
    epsilon_probs.extend(
        [float(agent.act(obs, done)['epsilon_prob']) for _ in range(2)])
    self.assertTrue(
        np.all(np.isclose(epsilon_probs, [0.3, 0.0753, 0.0189], atol=1e-2)))
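The expected values in `test_epsilon_prob_decays_as_expected` are consistent with a standard exponential-decay schedule, `eps_t = eps_0 * rate**(t / decay_steps)`. This formula is inferred from the asserted numbers, not taken from the agent's source:

```python
def decayed_epsilon(initial_prob: float, decay_rate: float,
                    decay_steps: int, step: int) -> float:
    """Exponential epsilon decay: initial * rate ** (step / decay_steps)."""
    return initial_prob * decay_rate ** (step / decay_steps)


# Reproduce the three values the test asserts (0.3, ~0.0753, ~0.0189):
probs = [decayed_epsilon(0.3, 0.001, 5, t) for t in range(3)]
print([round(p, 4) for p in probs])  # → [0.3, 0.0754, 0.0189]
```

With `decay_steps=5` and `rate=0.001`, each step multiplies epsilon by `0.001**(1/5) ≈ 0.251`, which matches the roughly four-fold drop per step in the test.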
class NaiveJuryTest(absltest.TestCase):
  def test_jury_successfully_initializes(self):
    env = college_admission.CollegeAdmissionsEnv()
    agent = college_admission_jury.NaiveJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        threshold=0.7)
    self.assertEqual(agent.initial_action()['threshold'], 0.7)
    self.assertEqual(agent.initial_action()['epsilon_prob'], 0)

  def test_simple_classifier_simulation_runs_successfully(self):
    env = college_admission.CollegeAdmissionsEnv()
    agent = college_admission_jury.NaiveJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        threshold=0.7)
    test_util.run_test_simulation(env=env, agent=agent, stackelberg=True)

  def test_get_default_features_returns_same_features(self):
    """Checks that the feature selection fn works as expected."""
    observations = {
        'test_scores_y': [0.2, 0.3, 0.4, 0.5, 0.6],
        'selected_ground_truth': [1, 0, 2, 1, 2],
        'selected_applicants': [1, 1, 0, 1, 0]
    }
    env = college_admission.CollegeAdmissionsEnv()
    agent = college_admission_jury.NaiveJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        threshold=0.7)
    features = agent._get_default_features(observations)
    self.assertListEqual(features, [0.2, 0.3, 0.5])

  def test_label_fn_returns_correct_labels(self):
    """Checks that the label function works as expected."""
    observations = {
        'test_scores_y': [0.2, 0.3, 0.4, 0.5, 0.6],
        'selected_ground_truth': [1, 0, 2, 1, 2],
        'selected_applicants': [1, 1, 0, 1, 0]
    }
    env = college_admission.CollegeAdmissionsEnv()
    agent = college_admission_jury.NaiveJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        threshold=0.7)
    labels = agent._label_fn(observations)
    self.assertListEqual(labels, [1, 0, 1])

  def test_agent_returns_same_threshold_till_burnin_and_then_change(self):
    """Tests that agent returns same threshold till burnin without freezing."""
    env = college_admission.CollegeAdmissionsEnv()
    agent = college_admission_jury.NaiveJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        threshold=0.3,
        burnin=4,
        freeze_classifier_after_burnin=False)
    test_util.run_test_simulation(
        env=env, agent=agent, num_steps=10, stackelberg=True)
    actions = [float(action['threshold']) for _, action in env.history]
    self.assertEqual(set(actions[:4]), {0.3})
    self.assertGreater(len(set(actions)), 4)

  def test_agent_returns_same_threshold_till_burnin_learns_and_freezes(self):
    """Tests that agent returns same threshold till burnin and freezes after."""
    env = college_admission.CollegeAdmissionsEnv()
    agent = college_admission_jury.NaiveJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        threshold=0.3,
        burnin=4,
        freeze_classifier_after_burnin=True)
    test_util.run_test_simulation(
        env=env, agent=agent, num_steps=10, stackelberg=True)
    actions = [float(action['threshold']) for _, action in env.history]
    self.assertEqual(set(actions[:4]), {0.3})
    self.assertLen(set(actions), 3)

  def test_agent_returns_correct_threshold(self):
    env = college_admission.CollegeAdmissionsEnv(
        user_params={
            'gaming':
                False,
            'subsidize':
                False,
            'noise_params':
                params.BoundedGaussian(max=0.3, min=0, sigma=0, mu=0.1),
            'feature_params':
                params.GMM(mix_weight=[0.5, 0.5], mu=[0.5, 0.5],
                           sigma=[0.1, 0.1])
        })
    agent = college_admission_jury.NaiveJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        threshold=0,
        burnin=9,
        freeze_classifier_after_burnin=True)
    test_util.run_test_simulation(
        env=env, agent=agent, num_steps=10, stackelberg=True)
    learned_threshold = env.history[-1].action['threshold']
    self.assertTrue(np.isclose(learned_threshold, 0.55, atol=1e-2))
class RobustJuryTest(absltest.TestCase):
  def test_robust_classifier_simulation_runs_successfully(self):
    env = college_admission.CollegeAdmissionsEnv()
    agent = college_admission_jury.RobustJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        group_cost=env.initial_params.group_cost,
        burnin=10)
    test_util.run_test_simulation(env=env, agent=agent, stackelberg=True)

  def test_correct_max_score_change_calculated_no_subsidy(self):
    """Tests that the max gaming steps gives output as expected."""
    env = college_admission.CollegeAdmissionsEnv(
        user_params={
            'group_cost': {
                0: 2,
                1: 4
            },
            'subsidize': False,
            'subsidy_beta': 0.6,
            'gaming_control': np.inf
        })
    agent = college_admission_jury.RobustJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        group_cost=env.initial_params.group_cost,
        subsidize=env.initial_params.subsidize,
        subsidy_beta=env.initial_params.subsidy_beta,
        gaming_control=env.initial_params.gaming_control)
    obs, _, _, _ = env.step(agent.initial_action())
    max_change = agent._get_max_allowed_score_change(obs)
    self.assertEqual(max_change, [0.5, 0.25])

  def test_correct_max_score_change_calculated_with_subsidy(self):
    """Tests that the max gaming steps gives output as expected."""
    env = college_admission.CollegeAdmissionsEnv(
        user_params={
            'group_cost': {
                0: 2,
                1: 4
            },
            'subsidize': True,
            'subsidy_beta': 0.8,
            'gaming_control': np.inf
        })
    agent = college_admission_jury.RobustJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        group_cost=env.initial_params.group_cost,
        subsidize=env.initial_params.subsidize,
        subsidy_beta=env.initial_params.subsidy_beta,
        gaming_control=env.initial_params.gaming_control)
    obs, _, _, _ = env.step(agent.initial_action())
    max_change = agent._get_max_allowed_score_change(obs)
    self.assertEqual(max_change, [0.5, 0.3125])
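Across the two `_get_max_allowed_score_change` tests, the expected values follow a simple pattern: each group's maximum score change is the reciprocal of its gaming cost, and the subsidy scales the disadvantaged group's cost down by `subsidy_beta` (so with costs `{0: 2, 1: 4}` and `subsidy_beta=0.8`, group 1 moves from `1/4 = 0.25` to `1/(4 * 0.8) = 0.3125`). A sketch of that relationship, inferred from the asserted values rather than the jury's source:

```python
def max_allowed_score_change(group_cost: dict, subsidize: bool = False,
                             subsidy_beta: float = 1.0) -> list:
    """Per-group max manipulation: 1 / cost, with the subsidy discounting
    the gaming cost of the disadvantaged group (group 1)."""
    changes = []
    for group in sorted(group_cost):
        cost = group_cost[group]
        if subsidize and group == 1:
            cost *= subsidy_beta  # subsidy makes gaming cheaper for group 1
        changes.append(1.0 / cost)
    return changes


print(max_allowed_score_change({0: 2, 1: 4}))             # → [0.5, 0.25]
print(max_allowed_score_change({0: 2, 1: 4}, True, 0.8))  # → [0.5, 0.3125]
```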
  def test_correct_robust_threshold_returned(self):
    env = college_admission.CollegeAdmissionsEnv()
    agent = college_admission_jury.RobustJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        group_cost=env.initial_params.group_cost)
    agent._features = [0.1, 0.2, 0.4, 0.4, 0.5, 0.6, 0.7, 0.8]
    agent._labels = [0, 0, 1, 0, 0, 1, 1, 1]
    agent._train_model()
    self.assertEqual(agent._threshold, 0.6)
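The `_train_model` expectation above is consistent with picking, from the observed feature values, the cutoff that maximizes training accuracy: with the features and labels in the test, `0.6` is the unique accuracy-maximizing threshold (7 of 8 correct). A self-contained sketch of that selection rule — an illustration of why `0.6` is the expected value, not the jury's actual training code:

```python
def best_threshold(features, labels):
    """Return the candidate threshold (a feature value) with the highest
    accuracy when predicting label 1 for features >= threshold."""
    def accuracy(t):
        return sum((f >= t) == (y == 1) for f, y in zip(features, labels))
    return max(sorted(set(features)), key=accuracy)


features = [0.1, 0.2, 0.4, 0.4, 0.5, 0.6, 0.7, 0.8]
labels = [0, 0, 1, 0, 0, 1, 1, 1]
print(best_threshold(features, labels))  # → 0.6
```

At `0.6` the only error is the admitted-in-hindsight applicant at `0.4`; every other candidate cutoff misclassifies at least two applicants.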
  def test_features_manipulated_to_maximum_limit_with_no_control(self):
    env = college_admission.CollegeAdmissionsEnv(user_params={
        'num_applicants': 5,
        'gaming_control': np.inf,
        'group_cost': {
            0: 2,
            1: 4
        }
    })
    agent = college_admission_jury.RobustJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        group_cost=env.initial_params.group_cost)
    observations = {
        'test_scores_y': np.asarray([0.2, 0.3, 0.4, 0.5, 0.4]),
        'selected_applicants': np.asarray([0, 1, 0, 1, 1]),
        'selected_ground_truth': np.asarray([2, 0, 2, 1, 1]),
        'applicant_groups': np.asarray([0, 1, 1, 0, 1])
    }
    agent.act(observations, done=False)
    self.assertTrue(
        np.all(
            np.isclose(
                agent._get_maximum_manipulated_features(observations),
                [0.55, 1.0, 0.65],
                atol=1e-4)))
    self.assertEqual(agent._features,
                     agent._get_maximum_manipulated_features(observations))

  def test_features_manipulated_to_maximum_limit_with_gaming_control(self):
    env = college_admission.CollegeAdmissionsEnv(user_params={
        'num_applicants': 5,
        'gaming_control': 0.3,
        'group_cost': {
            0: 2,
            1: 4,
        }
    })
    agent = college_admission_jury.RobustJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        group_cost=env.initial_params.group_cost,
        gaming_control=env.initial_params.gaming_control)
    observations = {
        'test_scores_y': np.asarray([0.2, 0.3, 0.4, 0.5, 0.4]),
        'selected_applicants': np.asarray([0, 1, 0, 1, 1]),
        'selected_ground_truth': np.asarray([2, 0, 2, 1, 1]),
        'applicant_groups': np.asarray([0, 1, 1, 0, 1])
    }
    self.assertTrue(
        np.all(
            np.isclose(
                agent._get_maximum_manipulated_features(observations),
                [0.55, 0.8, 0.65],
                atol=1e-4)))

  def test_features_manipulated_to_maximum_limit_with_control_epsilon_greedy(
      self):
    env = college_admission.CollegeAdmissionsEnv(user_params={
        'num_applicants': 5,
        'gaming_control': 0.3,
        'group_cost': {
            0: 2,
            1: 4,
        }
    })
    agent = college_admission_jury.RobustJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        group_cost=env.initial_params.group_cost,
        gaming_control=env.initial_params.gaming_control,
        epsilon_greedy=True,
        initial_epsilon_prob=0.2)
    observations = {
        'test_scores_y': np.asarray([0.2, 0.3, 0.4, 0.5, 0.4]),
        'selected_applicants': np.asarray([0, 1, 0, 1, 1]),
        'selected_ground_truth': np.asarray([2, 0, 2, 1, 1]),
        'applicant_groups': np.asarray([0, 1, 1, 0, 1])
    }
    self.assertTrue(
        np.all(
            np.isclose(
                agent._get_maximum_manipulated_features(observations),
                [0.5, 0.8, 0.6],
                atol=1e-4)))

  def test_features_manipulated_to_maximum_limit_no_control_epsilon_greedy(
      self):
    env = college_admission.CollegeAdmissionsEnv(user_params={
        'num_applicants': 5,
        'gaming_control': np.inf,
        'group_cost': {
            0: 2,
            1: 4,
        }
    })
    agent = college_admission_jury.RobustJury(
        action_space=env.action_space,
        observation_space=env.observation_space,
        reward_fn=(lambda x: 0),
        group_cost=env.initial_params.group_cost,
        gaming_control=env.initial_params.gaming_control,
        epsilon_greedy=True,
        initial_epsilon_prob=0.2)
    observations = {
        'test_scores_y': np.asarray([0.2, 0.3, 0.4, 0.5, 0.4]),
        'selected_applicants': np.asarray([0, 1, 0, 1, 1]),
        'selected_ground_truth': np.asarray([2, 0, 2, 1, 1]),
        'applicant_groups': np.asarray([0, 1, 1, 0, 1])
    }
    self.assertTrue(
        np.all(
            np.isclose(
                agent._get_maximum_manipulated_features(observations),
                [0.5, 0.9, 0.6],
                atol=1e-4)))

  def test_assertion_raised_when_burnin_less_than_2(self):
    env = college_admission.CollegeAdmissionsEnv()
    with self.assertRaises(ValueError):
      college_admission_jury.RobustJury(
          action_space=env.action_space,
          observation_space=env.observation_space,
          reward_fn=(lambda x: 0),
          group_cost=env.initial_params.group_cost,
          burnin=1)
if __name__ == '__main__':
  absltest.main()
| 37.903756 | 80 | 0.663095 | 2,033 | 16,147 | 4.979833 | 0.123955 | 0.072699 | 0.045437 | 0.084749 | 0.776669 | 0.76452 | 0.756618 | 0.73706 | 0.71207 | 0.702588 | 0 | 0.03217 | 0.22803 | 16,147 | 425 | 81 | 37.992941 | 0.780024 | 0.058339 | 0 | 0.706199 | 0 | 0 | 0.054152 | 0.008311 | 0 | 0 | 0 | 0 | 0.06469 | 1 | 0.059299 | false | 0 | 0.026954 | 0 | 0.09434 | 0.002695 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dcdc0a77ab20b81e39002d23af0fcfb71c9814be | 47 | py | Python | scripts/portal/crane_MR.py | G00dBye/YYMS | 1de816fc842b6598d5b4b7896b6ab0ee8f7cdcfb | [
"MIT"
] | 54 | 2019-04-16T23:24:48.000Z | 2021-12-18T11:41:50.000Z | scripts/portal/crane_MR.py | G00dBye/YYMS | 1de816fc842b6598d5b4b7896b6ab0ee8f7cdcfb | [
"MIT"
] | 3 | 2019-05-19T15:19:41.000Z | 2020-04-27T16:29:16.000Z | scripts/portal/crane_MR.py | G00dBye/YYMS | 1de816fc842b6598d5b4b7896b6ab0ee8f7cdcfb | [
"MIT"
] | 49 | 2020-11-25T23:29:16.000Z | 2022-03-26T16:20:24.000Z | # 200090300
sm.warp(250000100, 0)
sm.dispose()
| 11.75 | 21 | 0.723404 | 7 | 47 | 4.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.452381 | 0.106383 | 47 | 3 | 22 | 15.666667 | 0.357143 | 0.191489 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0d007138410a42aab2fd7c09366a75ebcb5eaacf | 10,779 | py | Python | congregation/dag/nodes/internal/unary.py | CCD-HRI/congregation | a552856b03a64a4295792184107c4e529ca3f4ae | [
"MIT"
] | 3 | 2020-10-05T16:30:15.000Z | 2021-01-22T13:38:02.000Z | congregation/dag/nodes/internal/unary.py | CCD-HRI/congregation | a552856b03a64a4295792184107c4e529ca3f4ae | [
"MIT"
] | null | null | null | congregation/dag/nodes/internal/unary.py | CCD-HRI/congregation | a552856b03a64a4295792184107c4e529ca3f4ae | [
"MIT"
] | 1 | 2021-02-19T12:40:57.000Z | 2021-02-19T12:40:57.000Z | from congregation.dag.nodes import UnaryOpNode
from congregation.dag.nodes.node import OpNode
from congregation.dag.nodes import *
from congregation.datasets import Relation
from congregation.datasets import Column
from congregation.utils import *
import copy
class Store(UnaryOpNode):
    def __init__(self, out_rel: Relation, parent: [OpNode, None]):
        super(Store, self).__init__("store", out_rel, parent)

    def is_reversible(self):
        return True

    def update_out_rel_cols(self):
        temp_cols = copy.deepcopy(self.get_in_rel().columns)
        self.out_rel.columns = temp_cols
        self.out_rel.update_columns()


class Read(UnaryOpNode):
    def __init__(self, out_rel: Relation, parent: [OpNode, None]):
        super(Read, self).__init__("read", out_rel, parent)

    def is_reversible(self):
        return True

    def requires_mpc(self):
        return False


class Persist(UnaryOpNode):
    def __init__(self, out_rel: Relation, parent: OpNode):
        super(Persist, self).__init__("persist", out_rel, parent)

    def is_reversible(self):
        return True


class Send(UnaryOpNode):
    def __init__(self, out_rel: Relation, parent: OpNode):
        super(Send, self).__init__("send", out_rel, parent)

    def is_reversible(self):
        return True

    def requires_mpc(self):
        return False


class Index(UnaryOpNode):
    def __init__(self, out_rel: Relation, parent: OpNode, idx_col_name: str):
        super(Index, self).__init__("index", out_rel, parent)
        self.idx_col_name = idx_col_name

    def is_reversible(self):
        return True


class Shuffle(UnaryOpNode):
    def __init__(self, out_rel: Relation, parent: OpNode):
        super(Shuffle, self).__init__("shuffle", out_rel, parent)

    def is_reversible(self):
        return True


class Open(UnaryOpNode):
    def __init__(self, out_rel: Relation, parent: [OpNode, None]):
        """Initialize Open object."""
        super(Open, self).__init__("open", out_rel, parent)

    def is_reversible(self):
        return True

    def requires_mpc(self):
        return True


class Close(UnaryOpNode):
    def __init__(self, out_rel: Relation, parent: [OpNode, None], holding_party: list):
        super(Close, self).__init__("close", out_rel, parent)
        # parties who hold this data in plaintext
        self.holding_party = self._resolve_holding_party(holding_party)

    @staticmethod
    def _resolve_holding_party(holding_party):
        if len(holding_party) > 1 or len(holding_party[0]) > 1:
            raise Exception(f"Holding party for Close() node should be singular: {holding_party}")
        return holding_party[0].pop()

    def is_reversible(self):
        return True

    def requires_mpc(self):
        return True


class AggregateSumCountCol(UnaryOpNode):
    def __init__(self, out_rel: Relation, parent: OpNode, group_cols: [list, None], agg_col: Column):
        super(AggregateSumCountCol, self).__init__("aggregate_sum_count_col", out_rel, parent)
        self.group_cols = group_cols if group_cols else []
        self.agg_col = agg_col
        self.count_col = self.gen_count_col()

    def gen_count_col(self):
        return self.update_count_col()

    def update_count_col(self):
        if self.group_cols:
            min_trust = min_trust_with_from_cols(self.group_cols)
            min_pt = min_pt_set_from_cols(self.group_cols)
        else:
            # count col will just be the number of rows, which
            # all parties storing this data already know
            min_trust = max_set(self.out_rel.stored_with)
            min_pt = max_set(self.out_rel.stored_with)
        return Column(
            self.get_in_rel().name, "__COUNT__", len(self.group_cols) + 1,
            "INTEGER", min_trust, min_pt
        )

    @staticmethod
    def from_existing_agg(node: AggregateMean):
        out_rel = copy.deepcopy(node.out_rel)
        parent = copy.deepcopy(node.parent)
        group_cols = copy.deepcopy(node.group_cols)
        agg_col = copy.deepcopy(node.agg_col)
        out_node = AggregateSumCountCol(out_rel, parent, group_cols, agg_col)
        out_node.update_out_rel_cols()
        return out_node

    def update_op_specific_cols(self):
        temp_cols = copy.deepcopy(self.get_in_rel().columns)
        self.group_cols = [temp_cols[group_col.idx] for group_col in self.group_cols]
        self.agg_col = temp_cols[self.agg_col.idx]
        self.count_col = self.update_count_col()

    def update_out_rel_cols(self):
        self.update_op_specific_cols()
        self.out_rel.columns = \
            copy.deepcopy(self.group_cols) + \
            [copy.deepcopy(self.agg_col), copy.deepcopy(self.count_col)]
        self.out_rel.update_columns()


class AggregateSumSquaresAndCount(UnaryOpNode):
    def __init__(self, out_rel: Relation, parent: OpNode, group_cols: [list, None], agg_col: Column):
        super(AggregateSumSquaresAndCount, self).__init__("aggregate_sum_squares_and_count", out_rel, parent)
        self.group_cols = group_cols if group_cols else []
        self.agg_col = agg_col
        self.squares_col = self.gen_squares_col()
        self.count_col = self.gen_count_col()

    def gen_squares_col(self):
        return self.update_squares_col()

    def update_squares_col(self):
        trust_set = copy.deepcopy(self.agg_col.trust_with)
        pt_set = copy.deepcopy(self.agg_col.plaintext)
        typ = copy.copy(self.agg_col.type_str)
        return Column(
            self.get_in_rel().name,
            "__SQUARES__",
            len(self.group_cols) + 1,
            typ,
            trust_set,
            pt_set
        )

    def gen_count_col(self):
        return self.update_count_col()

    def update_count_col(self):
        if self.group_cols:
            min_trust = min_trust_with_from_cols(self.group_cols)
            min_pt = min_pt_set_from_cols(self.group_cols)
        else:
            # count col will just be the number of rows, which
            # all parties storing this data already know
            min_trust = max_set(self.out_rel.stored_with)
min_pt = max_set(self.out_rel.stored_with)
return Column(
self.get_in_rel().name,
"__COUNT__",
len(self.group_cols) + 2,
"INTEGER",
min_trust,
min_pt
)
@staticmethod
def from_existing_agg(node: [AggregateStdDev, AggregateVariance]):
out_rel = copy.deepcopy(node.out_rel)
out_rel.rename(f"{copy.copy(node.out_rel.name)}_local_squares_and_count")
parent = copy.deepcopy(node.parent)
group_cols = copy.deepcopy(node.group_cols)
agg_col = copy.deepcopy(node.agg_col)
out_node = AggregateSumSquaresAndCount(out_rel, parent, group_cols, agg_col)
out_node.update_out_rel_cols()
return out_node
def update_op_specific_cols(self):
temp_cols = copy.deepcopy(self.get_in_rel().columns)
self.group_cols = [temp_cols[group_col.idx] for group_col in self.group_cols]
self.agg_col = temp_cols[self.agg_col.idx]
self.squares_col = self.update_squares_col()
self.count_col = self.update_count_col()
def update_out_rel_cols(self):
self.update_op_specific_cols()
self.out_rel.columns = \
copy.deepcopy(self.group_cols) + \
[
copy.deepcopy(self.agg_col),
copy.deepcopy(self.squares_col),
copy.deepcopy(self.count_col)
]
self.out_rel.update_columns()
class AggregateStdDevLocalSqrt(UnaryOpNode):
def __init__(self, out_rel: Relation, parent: OpNode):
super(AggregateStdDevLocalSqrt, self).__init__("aggregate_std_dev_local_sqrt", out_rel, parent)
def requires_mpc(self):
return False
@staticmethod
def from_existing_agg(node: [AggregateStdDev, AggregateVariance]):
out_rel = copy.deepcopy(node.out_rel)
out_rel.rename(f"{copy.copy(node.out_rel.name)}_local_sqrt")
parent = copy.deepcopy(node.parent)
out_node = AggregateStdDevLocalSqrt(out_rel, parent)
return out_node
def update_out_rel_cols(self):
temp_cols = copy.deepcopy(self.get_in_rel().columns)
# just filter out column that gets generated by parent AggregateStdDev op
self.out_rel.columns = [c for c in temp_cols if c.name != "__MEAN_SQUARES__"]
self.out_rel.update_columns()
class AggregateVarianceLocalDiff(UnaryOpNode):
def __init__(self, out_rel: Relation, parent: OpNode):
super(AggregateVarianceLocalDiff, self).__init__("aggregate_variance_local_diff", out_rel, parent)
def requires_mpc(self):
return False
@staticmethod
def from_existing_agg(node: AggregateVariance):
out_rel = copy.deepcopy(node.out_rel)
out_rel.rename(f"{copy.copy(node.out_rel.name)}_local_diff")
parent = copy.deepcopy(node.parent)
out_node = AggregateVarianceLocalDiff(out_rel, parent)
return out_node
def update_out_rel_cols(self):
temp_cols = copy.deepcopy(self.get_in_rel().columns)
# just filter out column that gets generated by parent AggregateVariance op
self.out_rel.columns = [c for c in temp_cols if c.name != "__MEAN_SQUARES__"]
self.out_rel.update_columns()
class AllStatsLocalSqrt(UnaryOpNode):
def __init__(self, out_rel: Relation, parent: OpNode):
super(AllStatsLocalSqrt, self).__init__("all_stats_local_sqrt", out_rel, parent)
def requires_mpc(self):
return False
@staticmethod
def from_existing_all_stats(node: AllStats):
out_rel = copy.deepcopy(node.out_rel)
out_rel.rename(f"{copy.copy(node.out_rel.name)}_local_sqrt")
parent = copy.deepcopy(node.parent)
out_node = AllStatsLocalSqrt(out_rel, parent)
return out_node
def update_out_rel_cols(self):
temp_cols = copy.deepcopy(self.get_in_rel().columns)
# filter out column that gets generated by parent AllStats op
self.out_rel.columns = [c for c in temp_cols if c.name != "__MEAN_SQUARES__"]
self.out_rel.update_columns()
class ColSum(UnaryOpNode):
def __init__(self, out_rel: Relation, parent: OpNode):
super(ColSum, self).__init__("col_sum", out_rel, parent)
def update_out_rel_cols(self):
temp_cols = copy.deepcopy(self.get_in_rel().columns)
self.out_rel.columns = temp_cols
self.out_rel.update_columns()
@staticmethod
def from_num_rows(node: NumRows):
out_rel = copy.deepcopy(node.out_rel)
out_rel.rename(f"{copy.copy(node.out_rel.name)}_local_sum")
parent = copy.deepcopy(node.parent)
return ColSum(out_rel, parent)
| 32.762918 | 109 | 0.670656 | 1,426 | 10,779 | 4.704769 | 0.09467 | 0.074229 | 0.047697 | 0.045908 | 0.778954 | 0.750484 | 0.739753 | 0.721121 | 0.715159 | 0.708004 | 0 | 0.000847 | 0.232953 | 10,779 | 328 | 110 | 32.862805 | 0.810595 | 0.042212 | 0 | 0.633188 | 0 | 0 | 0.053637 | 0.031814 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222707 | false | 0 | 0.026201 | 0.078603 | 0.432314 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0d3edd2f2ceebf72da29c04d2d7f35342a496940 | 103 | py | Python | src/langs/python/wheel-part.py | boyanio/wasm-wheel | a35d8f3b6f9af607e210f3d2f8f8150f9c1796e8 | [
"MIT"
] | 40 | 2017-11-30T11:12:21.000Z | 2021-02-09T23:56:40.000Z | src/langs/python/wheel-part.py | boyanio/wasm-wheel | a35d8f3b6f9af607e210f3d2f8f8150f9c1796e8 | [
"MIT"
] | 16 | 2017-12-11T13:59:54.000Z | 2022-02-11T12:09:39.000Z | src/langs/python/wheel-part.py | boyanio/wasm-wheel | a35d8f3b6f9af607e210f3d2f8f8150f9c1796e8 | [
"MIT"
] | 7 | 2018-01-19T08:14:15.000Z | 2021-02-03T20:33:04.000Z | import random
def name():
return "Python"
def feelingLucky():
return random.randrange(1, 101, 1)
| 12.875 | 36 | 0.699029 | 14 | 103 | 5.142857 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 0.174757 | 103 | 7 | 37 | 14.714286 | 0.788235 | 0 | 0 | 0 | 0 | 0 | 0.058252 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | true | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
0d4cf4e1f3c86d2ee02fa7339cc1c69429a22a9f | 43 | py | Python | my_classes/.history/ModulesPackages_PackageNamespaces/example3b/module2_20210726190952.py | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | [
"Unlicense"
] | null | null | null | my_classes/.history/ModulesPackages_PackageNamespaces/example3b/module2_20210726190952.py | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | [
"Unlicense"
] | null | null | null | my_classes/.history/ModulesPackages_PackageNamespaces/example3b/module2_20210726190952.py | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | [
"Unlicense"
] | null | null | null | import module1
print('Running module2.py')
b4983b88d28bcc71493b83503f227cf45dc33360 | 152 | py | Python | cbprocharts/procharter.py | ricCap/coinbasepro-charts | e1c0c561a37ce0cdf2a1efabb2eb625e7766b38d | [
"MIT"
] | null | null | null | cbprocharts/procharter.py | ricCap/coinbasepro-charts | e1c0c561a37ce0cdf2a1efabb2eb625e7766b38d | [
"MIT"
] | 2 | 2021-02-25T20:15:28.000Z | 2021-02-25T20:17:37.000Z | cbprocharts/procharter.py | ricCap/coinbasepro-charts | e1c0c561a37ce0cdf2a1efabb2eb625e7766b38d | [
"MIT"
] | null | null | null | import cbpro
class ProCharter(cbpro.AuthenticatedClient):
""" A wrapper around cbpro AuthenticatedClient that adds charting capabilities.
"""
| 21.714286 | 83 | 0.763158 | 15 | 152 | 7.733333 | 0.8 | 0.413793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164474 | 152 | 6 | 84 | 25.333333 | 0.913386 | 0.493421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b4c1e92077a9b0f3b0f4867058d0d79e75757db5 | 379 | py | Python | src/mlte/report/__init__.py | turingcompl33t/mlte | 2a0b299502dd9ea2830f5082e492bb000ceffc3d | [
"MIT"
] | 2 | 2022-03-09T15:44:32.000Z | 2022-03-18T16:00:56.000Z | src/mlte/report/__init__.py | turingcompl33t/mlte | 2a0b299502dd9ea2830f5082e492bb000ceffc3d | [
"MIT"
] | 6 | 2022-03-12T11:24:00.000Z | 2022-03-18T17:58:38.000Z | src/mlte/report/__init__.py | turingcompl33t/mlte | 2a0b299502dd9ea2830f5082e492bb000ceffc3d | [
"MIT"
] | null | null | null | from .report import (
Dataset,
User,
UseCase,
Limitation,
Metadata,
ModelDetails,
ModelSpecification,
Considerations,
Report,
)
from .render import render
__all__ = [
"Dataset",
"User",
"UseCase",
"Limitation",
"Metadata",
"ModelDetails",
"ModelSpecification",
"Considerations",
"Report",
"render",
]
| 14.037037 | 26 | 0.588391 | 27 | 379 | 8.111111 | 0.481481 | 0.100457 | 0.164384 | 0.255708 | 0.785388 | 0.785388 | 0.785388 | 0.785388 | 0.785388 | 0 | 0 | 0 | 0.290237 | 379 | 26 | 27 | 14.576923 | 0.814126 | 0 | 0 | 0 | 0 | 0 | 0.242744 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b4e474bc7391df733af773175255ec7ebf5c5870 | 142 | py | Python | python/pyramid-array/solution.py | hiljusti/codewars-solutions | 1a423e8cb0fbcac94738f6e51dc333f057b0a731 | [
"WTFPL"
] | 2 | 2020-02-22T08:47:51.000Z | 2021-05-21T22:21:55.000Z | python/pyramid-array/solution.py | hiljusti/codewars-solutions | 1a423e8cb0fbcac94738f6e51dc333f057b0a731 | [
"WTFPL"
] | null | null | null | python/pyramid-array/solution.py | hiljusti/codewars-solutions | 1a423e8cb0fbcac94738f6e51dc333f057b0a731 | [
"WTFPL"
] | 1 | 2021-11-09T17:22:10.000Z | 2021-11-09T17:22:10.000Z | # https://www.codewars.com/kata/515f51d438015969f7000013
def pyramid(n): return [] if n <= 0 else [[1]] + [[1] + a for a in pyramid(n - 1)]
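A quick sanity check of the recursive construction (the one-liner is restated here so the snippet runs standalone; the sample calls are illustrative, not part of the kata's own test suite):

```python
def pyramid(n):
    # Same one-liner as above: prepend a row [1], then widen every deeper row by one.
    return [] if n <= 0 else [[1]] + [[1] + a for a in pyramid(n - 1)]

assert pyramid(0) == []
assert pyramid(1) == [[1]]
assert pyramid(3) == [[1], [1, 1], [1, 1, 1]]
```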
| 28.4 | 82 | 0.633803 | 23 | 142 | 3.913043 | 0.73913 | 0.177778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211864 | 0.169014 | 142 | 4 | 83 | 35.5 | 0.550847 | 0.380282 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | false | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
2edfa4c07ced1131df6dcfb93cbe1d89e1e69ce8 | 8,501 | py | Python | lib/pyDHE/DHE_groups.py | ahmedyasserays/pyDHE | bf3b643e43538eca52ffb84f44d63d58245aced7 | [
"BSD-2-Clause"
] | 62 | 2017-08-11T01:38:46.000Z | 2022-01-28T02:22:56.000Z | lib/pyDHE/DHE_groups.py | ahmedyasserays/pyDHE | bf3b643e43538eca52ffb84f44d63d58245aced7 | [
"BSD-2-Clause"
] | 6 | 2017-09-18T19:51:55.000Z | 2018-06-08T08:13:58.000Z | lib/pyDHE/DHE_groups.py | ahmedyasserays/pyDHE | bf3b643e43538eca52ffb84f44d63d58245aced7 | [
"BSD-2-Clause"
] | 13 | 2017-09-18T13:26:59.000Z | 2021-11-25T15:13:58.000Z | # -*- coding: utf-8 -*-
# ===================================================================
#
# Copyright (c) 2017, DeadPix3l <skylerr.curtis@gmail.com>
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in
# the documentation and/or other materials provided with the
# distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
# ===================================================================
# NOTE: the numbers in this file are gigantic! Good practice is to keep lines
# shorter than 79 characters. This file cannot and will not follow this
# convention. Sorry.
"""
Diffie Hellman Groups
This is a supplementary file to DHE.py to simply hold the dictionary
of RFC 3526 groups. For Diffie-Hellman to be secure, it requires the
parameters (p,g) to have special properties and be of a relatively large size.
These numbers were specifically chosen by NIST or other professionals to be
cryptographically secure. DO NOT MODIFY.
Each group has a different bit size, and as the group number increases, so do
the security and the time it takes to calculate a secret. Group 5 is specified
in RFC 3526, but is omitted here because its small size is no longer secure
with modern technology. A minimum of 2048 bits should be used at all times.
The groups are as follows:
--------------------------
14: 2048 bits
15: 3072 bits
16: 4096 bits
17: 6144 bits
18: 8192 bits
"""
groups = {
14: (2, 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFFFFFFFFFF),
15: (2, 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200CBBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFCE0FD108E4B82D120A93AD2CAFFFFFFFFFFFFFFFF),
16: (2, 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200CBBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934063199FFFFFFFFFFFFFFFF),
17: (2, 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200CBBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C93402849236C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BDF8FF9406AD9E530EE5DB382F413001AEB06A53ED9027D831179727B0865A8918DA3EDBEBCF9B14ED44CE6CBACED4BB1BDB7F1447E6CC254B332051512BD7AF426FB8F401378CD2BF5983CA01C64B92ECF032EA15D1721D03F482D7CE6E74FEF6D55E702F46980C82B5A84031900B1C9E59E7C97FBEC7E8F323A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AACC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE32806A1D58BB7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55CDA56C9EC2EF29632387FE8D76E3C0468043E8F663F4860EE12BF2D5B0B7474D6E694F91E6DCC4024FFFFFFFFFFFFFFFF),
    18: (2, 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200CBBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C93402849236C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BDF8FF9406AD9E530EE5DB382F413001AEB06A53ED9027D831179727B0865A8918DA3EDBEBCF9B14ED44CE6CBACED4BB1BDB7F1447E6CC254B332051512BD7AF426FB8F401378CD2BF5983CA01C64B92ECF032EA15D1721D03F482D7CE6E74FEF6D55E702F46980C82B5A84031900B1C9E59E7C97FBEC7E8F323A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AACC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE32806A1D58BB7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55CDA56C9EC2EF29632387FE8D76E3C0468043E8F663F4860EE12BF2D5B0B7474D6E694F91E6DBE115974A3926F12FEE5E438777CB6A932DF8CD8BEC4D073B931BA3BC832B68D9DD300741FA7BF8AFC47ED2576F6936BA424663AAB639C5AE4F5683423B4742BF1C978238F16CBE39D652DE3FDB8BEFC848AD922222E04A4037C0713EB57A81A23F0C73473FC646CEA306B4BCBC8862F8385DDFA9D4B7FA2C087E879683303ED5BDD3A062B3CF5B3A278A66D2A13F83F44F82DDF310EE074AB6A364597E899A0255DC164F31CC50846851DF9AB48195DED7EA1B1D510BD7EE74D73FAF36BC31ECFA268359046F4EB879F924009438B481C6CD7889A002ED5EE382BC9190DA6FC026E479558E4475677E9AA9E3050E2765694DFC81F56E880B96E7160C980DD98EDD3DFFFFFFFFFFFFFFFFF)
}
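These (g, p) pairs plug directly into a textbook Diffie-Hellman exchange. A minimal sketch follows; the helper name is made up, and the tiny prime is for illustration only and is NOT secure. Real code would pass e.g. `groups[14]` instead:

```python
import secrets

def dh_shared_secret(g, p):
    """Toy Diffie-Hellman exchange over (g, p); returns both parties' secrets."""
    a = secrets.randbelow(p - 3) + 2   # Alice's private exponent in [2, p-2]
    b = secrets.randbelow(p - 3) + 2   # Bob's private exponent
    A = pow(g, a, p)                   # Alice sends A = g^a mod p
    B = pow(g, b, p)                   # Bob sends B = g^b mod p
    # Each side combines the other's public value with its own private exponent.
    return pow(B, a, p), pow(A, b, p)

alice_key, bob_key = dh_shared_secret(2, 1019)  # 1019 is a toy 10-bit prime
assert alice_key == bob_key
```

Both computations yield g^(ab) mod p, so the assertion always holds; with a 2048-bit group the same code works unchanged, just slower.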
| 126.880597 | 2,063 | 0.910363 | 382 | 8,501 | 20.259162 | 0.544503 | 0.004652 | 0.004393 | 0.005944 | 0.023776 | 0.017573 | 0.017573 | 0.017573 | 0.017573 | 0.017573 | 0 | 0.452206 | 0.053641 | 8,501 | 66 | 2,064 | 128.80303 | 0.509758 | 0.28679 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.979409 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2c014674fbeb4ba62fbdfafe529ea4958b105359 | 783 | py | Python | main.py | FlexLV/Projekts-Lidma-nas- | ace380d17407e88baa62595d591669945933dac6 | [
"MIT"
] | null | null | null | main.py | FlexLV/Projekts-Lidma-nas- | ace380d17407e88baa62595d591669945933dac6 | [
"MIT"
] | null | null | null | main.py | FlexLV/Projekts-Lidma-nas- | ace380d17407e88baa62595d591669945933dac6 | [
"MIT"
] | 1 | 2021-05-21T15:55:04.000Z | 2021-05-21T15:55:04.000Z | from flask import Flask, render_template
app = Flask('app')
@app.route('/')
def main():
return render_template("Main.html")
@app.route('/Covid_19')
def Covid_19():
return render_template("Covid_19.html")
@app.route('/ParLidmasinam')
def Par_lidmasinu():
return render_template("ParLidmasinu.html")
@app.route('/Lidojumi')
def Lidojumi():
return render_template("Lidojumi.html")
@app.route('/Register')
def Register():
return render_template("Register.html")
@app.route('/ForgotPass')
def ForgotPass():
return render_template("ForgotPass.html")
@app.route('/Izlidojumi')
def Izlidojumi():
return render_template("Izlidojumi.html")
@app.route('/Reservation')
def Reservation():
return render_template("Reservation.html")
app.run(host='0.0.0.0', port=8080) | 20.605263 | 45 | 0.724138 | 101 | 783 | 5.485149 | 0.267327 | 0.227437 | 0.288809 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019886 | 0.100894 | 783 | 38 | 46 | 20.605263 | 0.767045 | 0 | 0 | 0 | 0 | 0 | 0.251276 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.296296 | false | 0.111111 | 0.037037 | 0.296296 | 0.62963 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
2c1c23835283e7f4645ee2d0e9eb6e85897eb0c9 | 99 | py | Python | Python/CursoEmVideo/Mundo 2/Desafios/Desafio47.py | carlos09v/Mini-Projects_Exercises | 0d457b5c5c83811fdefa1dd8f80ce436f8dac744 | [
"MIT"
] | 1 | 2021-08-23T13:04:55.000Z | 2021-08-23T13:04:55.000Z | Python/CursoEmVideo/Mundo 2/Desafios/Desafio47.py | carlos09v/Mini-Projects_Exercises | 0d457b5c5c83811fdefa1dd8f80ce436f8dac744 | [
"MIT"
] | null | null | null | Python/CursoEmVideo/Mundo 2/Desafios/Desafio47.py | carlos09v/Mini-Projects_Exercises | 0d457b5c5c83811fdefa1dd8f80ce436f8dac744 | [
"MIT"
] | null | null | null | print('Todos os \033[35mPARES\33[m de \033[33m0 a 50!\033[m')
for c in range(2,51,2):
print(c)
| 24.75 | 61 | 0.636364 | 23 | 99 | 2.73913 | 0.73913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.26506 | 0.161616 | 99 | 3 | 62 | 33 | 0.493976 | 0 | 0 | 0 | 0 | 0.333333 | 0.525253 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
2588fa58d573597dc86061dd36cee20baafc0a67 | 258 | py | Python | pytest_testbook/hooks.py | ldiary/pytest-testbook | b47a86784588bb6fb28cf5b8e034f833f2e393b1 | [
"MIT"
] | 3 | 2016-05-28T20:30:46.000Z | 2017-01-06T15:19:46.000Z | pytest_testbook/hooks.py | ldiary/pytest-testbook | b47a86784588bb6fb28cf5b8e034f833f2e393b1 | [
"MIT"
] | null | null | null | pytest_testbook/hooks.py | ldiary/pytest-testbook | b47a86784588bb6fb28cf5b8e034f833f2e393b1 | [
"MIT"
] | 1 | 2019-01-06T18:38:48.000Z | 2019-01-06T18:38:48.000Z | """Pytest-Testbook pytest hooks."""
def pytest_testbook_kernel_setup(scenario):
"""Will be called after the Jupyter kernel is started."""
def pytest_testbook_kernel_teardown(scenario):
"""Will be called before the Jupyter kernel is shut down."""
| 25.8 | 64 | 0.744186 | 35 | 258 | 5.314286 | 0.542857 | 0.225806 | 0.182796 | 0.247312 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147287 | 258 | 9 | 65 | 28.666667 | 0.845455 | 0.527132 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | false | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
25a7de77c1ebe1a661f53add042f84c0e14081d0 | 59 | py | Python | lib/ua/agents/msie.py | hdknr/ua | bc41f5b46fb99d0d576c7542c2184b39679f4ebf | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | lib/ua/agents/msie.py | hdknr/ua | bc41f5b46fb99d0d576c7542c2184b39679f4ebf | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | lib/ua/agents/msie.py | hdknr/ua | bc41f5b46fb99d0d576c7542c2184b39679f4ebf | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | from . import BaseAgent
class Agent(BaseAgent):
pass
| 9.833333 | 23 | 0.711864 | 7 | 59 | 6 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.220339 | 59 | 5 | 24 | 11.8 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
25a94418c5bb9fda7b2eebb9a9eb8da506847ea6 | 170 | py | Python | Chapter08/07_dc_motor_on.py | PacktPublishing/MicroPython-Cookbook | ffd6aa15c303459570a89ba31b5bc734f05cb387 | [
"MIT"
] | 16 | 2019-07-01T16:24:22.000Z | 2022-03-03T06:54:57.000Z | Chapter08/07_dc_motor_on.py | ccwu0918/MicroPython-Cookbook | ffd6aa15c303459570a89ba31b5bc734f05cb387 | [
"MIT"
] | null | null | null | Chapter08/07_dc_motor_on.py | ccwu0918/MicroPython-Cookbook | ffd6aa15c303459570a89ba31b5bc734f05cb387 | [
"MIT"
] | 19 | 2019-04-17T08:30:12.000Z | 2022-01-14T03:05:37.000Z | from adafruit_crickit import crickit
import time
while True:
crickit.dc_motor_1.throttle = 1
time.sleep(1)
crickit.dc_motor_1.throttle = 0
time.sleep(1)
| 18.888889 | 36 | 0.729412 | 27 | 170 | 4.407407 | 0.481481 | 0.218487 | 0.235294 | 0.252101 | 0.386555 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043796 | 0.194118 | 170 | 8 | 37 | 21.25 | 0.824818 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.285714 | 0 | 0.285714 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
25b083a470f3795fafa74c75418d4c776575a35c | 42 | py | Python | databird/blueprints/driver/databird-driver-+package.driver_name+/tests/test_tests.py | jonas-hagen/databird | cfb358e74da62bb9d7ea0e6c7ac984671472120b | [
"MIT"
] | 1 | 2021-11-05T00:12:00.000Z | 2021-11-05T00:12:00.000Z | databird/blueprints/driver/databird-driver-+package.driver_name+/tests/test_tests.py | jonas-hagen/databird | cfb358e74da62bb9d7ea0e6c7ac984671472120b | [
"MIT"
] | null | null | null | databird/blueprints/driver/databird-driver-+package.driver_name+/tests/test_tests.py | jonas-hagen/databird | cfb358e74da62bb9d7ea0e6c7ac984671472120b | [
"MIT"
] | null | null | null | def test_tests_are_run():
assert True
| 14 | 25 | 0.738095 | 7 | 42 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 42 | 2 | 26 | 21 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
25e7d67e2f3e4ece7a46177fca8b3e52a01e44a1 | 139 | py | Python | jpkgchanger/__init__.py | intermedia-net/jpkgchanger | 0e64a34253a02214cdc99b24bf93ec791fa70498 | [
"MIT"
] | 5 | 2021-08-20T10:47:45.000Z | 2021-08-20T10:51:01.000Z | jpkgchanger/__init__.py | intermedia-net/jpkgchanger | 0e64a34253a02214cdc99b24bf93ec791fa70498 | [
"MIT"
] | null | null | null | jpkgchanger/__init__.py | intermedia-net/jpkgchanger | 0e64a34253a02214cdc99b24bf93ec791fa70498 | [
"MIT"
] | null | null | null | """Change java project package name!"""
__version__ = "0.0.1"
from .jpkgchanger import main
from .jpkgchanger import change_package_name
| 19.857143 | 44 | 0.769784 | 19 | 139 | 5.315789 | 0.631579 | 0.217822 | 0.415842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024793 | 0.129496 | 139 | 6 | 45 | 23.166667 | 0.809917 | 0.23741 | 0 | 0 | 0 | 0 | 0.05 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
25ee8fb224714dc5d66ee6e6d1e97b76c862d8bd | 8,523 | py | Python | python/dazl/_gen/com/daml/ledger/api/v1/command_completion_service_pb2_grpc.py | digital-asset/dazl-client | 5d54edaea26d7704cc8d73e5945b37ed2806265b | [
"Apache-2.0"
] | 8 | 2019-09-08T09:41:03.000Z | 2022-02-19T12:54:30.000Z | python/dazl/_gen/com/daml/ledger/api/v1/command_completion_service_pb2_grpc.py | digital-asset/dazl-client | 5d54edaea26d7704cc8d73e5945b37ed2806265b | [
"Apache-2.0"
] | 55 | 2019-05-30T23:00:31.000Z | 2022-01-24T01:51:32.000Z | python/dazl/_gen/com/daml/ledger/api/v1/command_completion_service_pb2_grpc.py | digital-asset/dazl-client | 5d54edaea26d7704cc8d73e5945b37ed2806265b | [
"Apache-2.0"
] | 9 | 2019-06-30T18:15:27.000Z | 2021-12-03T10:15:27.000Z | # Copyright (c) 2017-2021 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
# fmt: off
# isort: skip_file
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from . import command_completion_service_pb2 as com_dot_daml_dot_ledger_dot_api_dot_v1_dot_command__completion__service__pb2
class CommandCompletionServiceStub(object):
"""Allows clients to observe the status of their submissions.
Commands may be submitted via the Command Submission Service.
The on-ledger effects of their submissions are disclosed by the Transaction Service.
Commands may fail in 2 distinct manners:
1. Failure communicated synchronously in the gRPC error of the submission.
2. Failure communicated asynchronously in a Completion, see ``completion.proto``.
Note that not only successfully submitted commands MAY produce a completion event. For example, the participant MAY
choose to produce a completion event for a rejection of a duplicate command.
Clients that do not receive a successful completion about their submission MUST NOT assume that it was successful.
Clients SHOULD subscribe to the CompletionStream before starting to submit commands to prevent race conditions.
"""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.CompletionStream = channel.unary_stream(
'/com.daml.ledger.api.v1.CommandCompletionService/CompletionStream',
request_serializer=com_dot_daml_dot_ledger_dot_api_dot_v1_dot_command__completion__service__pb2.CompletionStreamRequest.SerializeToString,
response_deserializer=com_dot_daml_dot_ledger_dot_api_dot_v1_dot_command__completion__service__pb2.CompletionStreamResponse.FromString,
)
self.CompletionEnd = channel.unary_unary(
'/com.daml.ledger.api.v1.CommandCompletionService/CompletionEnd',
request_serializer=com_dot_daml_dot_ledger_dot_api_dot_v1_dot_command__completion__service__pb2.CompletionEndRequest.SerializeToString,
response_deserializer=com_dot_daml_dot_ledger_dot_api_dot_v1_dot_command__completion__service__pb2.CompletionEndResponse.FromString,
)
class CommandCompletionServiceServicer(object):
"""Allows clients to observe the status of their submissions.
Commands may be submitted via the Command Submission Service.
The on-ledger effects of their submissions are disclosed by the Transaction Service.
Commands may fail in 2 distinct manners:
1. Failure communicated synchronously in the gRPC error of the submission.
2. Failure communicated asynchronously in a Completion, see ``completion.proto``.
Note that not only successfully submitted commands MAY produce a completion event. For example, the participant MAY
choose to produce a completion event for a rejection of a duplicate command.
Clients that do not receive a successful completion about their submission MUST NOT assume that it was successful.
Clients SHOULD subscribe to the CompletionStream before starting to submit commands to prevent race conditions.
"""
def CompletionStream(self, request, context):
"""Subscribe to command completion events.
Errors:
- ``UNAUTHENTICATED``: if the request does not include a valid access token
- ``PERMISSION_DENIED``: if the claims in the token are insufficient to perform a given operation
- ``NOT_FOUND``: if the request does not include a valid ledger id or if the ledger has been pruned before ``begin``
- ``INVALID_ARGUMENT``: if the payload is malformed or is missing required fields
- ``OUT_OF_RANGE``: if the absolute offset is not before the end of the ledger
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def CompletionEnd(self, request, context):
"""Returns the offset after the latest completion.
Errors:
- ``UNAUTHENTICATED``: if the request does not include a valid access token
- ``PERMISSION_DENIED``: if the claims in the token are insufficient to perform a given operation
- ``NOT_FOUND``: if the request does not include a valid ledger id
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_CommandCompletionServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'CompletionStream': grpc.unary_stream_rpc_method_handler(
servicer.CompletionStream,
request_deserializer=com_dot_daml_dot_ledger_dot_api_dot_v1_dot_command__completion__service__pb2.CompletionStreamRequest.FromString,
response_serializer=com_dot_daml_dot_ledger_dot_api_dot_v1_dot_command__completion__service__pb2.CompletionStreamResponse.SerializeToString,
),
'CompletionEnd': grpc.unary_unary_rpc_method_handler(
servicer.CompletionEnd,
request_deserializer=com_dot_daml_dot_ledger_dot_api_dot_v1_dot_command__completion__service__pb2.CompletionEndRequest.FromString,
response_serializer=com_dot_daml_dot_ledger_dot_api_dot_v1_dot_command__completion__service__pb2.CompletionEndResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'com.daml.ledger.api.v1.CommandCompletionService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
# This class is part of an EXPERIMENTAL API.
class CommandCompletionService(object):
"""Allows clients to observe the status of their submissions.
Commands may be submitted via the Command Submission Service.
The on-ledger effects of their submissions are disclosed by the Transaction Service.
Commands may fail in 2 distinct manners:
1. Failure communicated synchronously in the gRPC error of the submission.
2. Failure communicated asynchronously in a Completion, see ``completion.proto``.
Note that not only successfully submitted commands MAY produce a completion event. For example, the participant MAY
choose to produce a completion event for a rejection of a duplicate command.
Clients that do not receive a successful completion about their submission MUST NOT assume that it was successful.
Clients SHOULD subscribe to the CompletionStream before starting to submit commands to prevent race conditions.
"""
@staticmethod
def CompletionStream(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_stream(request, target, '/com.daml.ledger.api.v1.CommandCompletionService/CompletionStream',
com_dot_daml_dot_ledger_dot_api_dot_v1_dot_command__completion__service__pb2.CompletionStreamRequest.SerializeToString,
com_dot_daml_dot_ledger_dot_api_dot_v1_dot_command__completion__service__pb2.CompletionStreamResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def CompletionEnd(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/com.daml.ledger.api.v1.CommandCompletionService/CompletionEnd',
com_dot_daml_dot_ledger_dot_api_dot_v1_dot_command__completion__service__pb2.CompletionEndRequest.SerializeToString,
com_dot_daml_dot_ledger_dot_api_dot_v1_dot_command__completion__service__pb2.CompletionEndResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
| 53.943038 | 160 | 0.743283 | 1,015 | 8,523 | 5.967488 | 0.209852 | 0.0421 | 0.055473 | 0.062407 | 0.792141 | 0.792141 | 0.785207 | 0.761763 | 0.743602 | 0.743602 | 0 | 0.007512 | 0.203449 | 8,523 | 157 | 161 | 54.286624 | 0.884666 | 0.431069 | 0 | 0.444444 | 1 | 0 | 0.092503 | 0.06598 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.027778 | 0.027778 | 0.180556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
25f5763ac09ea42797d19f67c25db6d9e824c901 | 129 | py | Python | python-sdk/nuscenes/eval/tracking/render.py | tanjiangyuan/Classification_nuScence | b94c4b0b6257fc1c048a676e3fd9e71183108d53 | [
"Apache-2.0"
] | null | null | null | python-sdk/nuscenes/eval/tracking/render.py | tanjiangyuan/Classification_nuScence | b94c4b0b6257fc1c048a676e3fd9e71183108d53 | [
"Apache-2.0"
] | null | null | null | python-sdk/nuscenes/eval/tracking/render.py | tanjiangyuan/Classification_nuScence | b94c4b0b6257fc1c048a676e3fd9e71183108d53 | [
"Apache-2.0"
] | null | null | null | version https://git-lfs.github.com/spec/v1
oid sha256:ed79645690789316884aed85663afe72461467846ba7eca6822bd4843888a140
size 6528
| 32.25 | 75 | 0.883721 | 13 | 129 | 8.769231 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.455285 | 0.046512 | 129 | 3 | 76 | 43 | 0.471545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
25fdd1af5870b3528a30f737fb8936138f7f2942 | 110 | py | Python | way/python/first_steps/pcc/hello_pcc_world.py | only-romano/junkyard | b60a25b2643f429cdafee438d20f9966178d6f36 | [
"MIT"
] | null | null | null | way/python/first_steps/pcc/hello_pcc_world.py | only-romano/junkyard | b60a25b2643f429cdafee438d20f9966178d6f36 | [
"MIT"
] | null | null | null | way/python/first_steps/pcc/hello_pcc_world.py | only-romano/junkyard | b60a25b2643f429cdafee438d20f9966178d6f36 | [
"MIT"
] | null | null | null | #! Hello python crash course world programm
message = "Hello Python Crash Course World !"
print(message)
| 22 | 46 | 0.736364 | 14 | 110 | 5.785714 | 0.571429 | 0.271605 | 0.395062 | 0.54321 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190909 | 110 | 4 | 47 | 27.5 | 0.910112 | 0.381818 | 0 | 0 | 0 | 0 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d32d3eea70a664a9392d1574c40e3e7c239f6301 | 9,897 | py | Python | tests/core/networking/test_networking_actions.py | LaudateCorpus1/chaostoolkit-oci | 36da01a47dd1b0881ec21cb70775fde5011b38ed | [
"Apache-2.0"
] | 15 | 2018-11-20T15:36:52.000Z | 2021-12-16T21:46:56.000Z | tests/core/networking/test_networking_actions.py | LaudateCorpus1/chaostoolkit-oci | 36da01a47dd1b0881ec21cb70775fde5011b38ed | [
"Apache-2.0"
] | 21 | 2018-11-26T19:11:52.000Z | 2021-12-15T19:38:37.000Z | tests/core/networking/test_networking_actions.py | LaudateCorpus1/chaostoolkit-oci | 36da01a47dd1b0881ec21cb70775fde5011b38ed | [
"Apache-2.0"
] | 8 | 2018-11-20T15:37:09.000Z | 2021-07-28T20:27:19.000Z | # coding: utf-8
# Copyright 2020, Oracle Corporation and/or its affiliates.
import pytest
from unittest.mock import MagicMock, patch
from chaoslib.exceptions import ActivityFailed
from chaosoci.core.networking.actions import (delete_route_table_by_id, delete_route_table_by_filters,
delete_nat_gateway_by_id, delete_nat_gateway_by_filters,
delete_internet_gateway_by_id, delete_internet_gateway_by_filters,
delete_service_gateway_by_id, delete_service_gateway_by_filters)
from chaosoci.core.networking.common import get_nat_gateway
from chaosoci.util.constants import FILTER_ERR
# FILTER_ERR = 'Some of the chosen filters were not found, we cannot continue.'
@patch('chaosoci.core.networking.actions.oci_client', autospec=True)
def test_delete_route_table_by_id(oci_client):
network_client = MagicMock()
oci_client.return_value = network_client
rt_id = "ocid1.routetable.oc1.phx.aawnm2cdxq3naniep5dsiixtchqjuypcx7l7"
rt_ids = [rt_id, ""]
for id in rt_ids:
if id == rt_id:
delete_route_table_by_id(id)
network_client.delete_route_table.assert_called_with(rt_id=id)
else:
with pytest.raises(ActivityFailed) as f:
delete_route_table_by_id(id)
assert 'A route table id is required.'
@patch('chaosoci.core.networking.actions.filter_route_tables', autospec=True)
@patch('chaosoci.core.networking.actions.get_route_tables', autospec=True)
@patch('chaosoci.core.networking.actions.oci_client', autospec=True)
def test_delete_route_table_by_filters(oci_client, get_route_tables,
filter_route_tables):
network_client = MagicMock()
oci_client.return_value = network_client
c_id = "ocid1.compartment.oc1..oadsocmof6r6ksovxmda44ikwxje7xxu"
vcn_id = "ocid1.vcn.oc1.phx.amaaaaaapwxjxiqavc6zohqv4whr6y65qwwjcexhex"
c_ids = [c_id, None]
vcn_ids = [vcn_id, None]
filters = [[{'display_name': 'random_name', 'region': 'uk-london-1'}],
None]
for c in c_ids:
for v in vcn_ids:
for f in filters:
if c is None or v is None:
with pytest.raises(ActivityFailed) as c_failed:
delete_route_table_by_filters(c, v, f)
assert 'A compartment id or vcn id is required.'
elif f is None:
with pytest.raises(ActivityFailed) as f_failed:
delete_route_table_by_filters(c, v, f)
assert FILTER_ERR
else:
with pytest.raises(ActivityFailed) as rt_failed:
delete_route_table_by_filters(c, v, f)
network_client.delete_route_table.assert_called_with(
filter_route_tables(route_tables=get_route_tables(
oci_client, c, v), filters=f)[0].id)
@patch('chaosoci.core.networking.actions.oci_client', autospec=True)
def test_delete_nat_gateway_by_id(oci_client):
network_client = MagicMock()
oci_client.return_value = network_client
nw_id = "ocid1.routetable.oc1.phx.aawnm2cdxq3naniep5dsiixtchqjuypcx7l7"
nw_ids = [nw_id, ""]
for id in nw_ids:
if id == nw_id:
delete_nat_gateway_by_id(id)
network_client.delete_nat_gateway.assert_called_with(nw_id=id)
else:
with pytest.raises(ActivityFailed) as f:
                delete_nat_gateway_by_id(id)
            assert 'A nat gateway id is required.'
@patch('chaosoci.core.networking.actions.filter_nat_gateway', autospec=True)
@patch('chaosoci.core.networking.actions.get_nat_gateway', autospec=True)
@patch('chaosoci.core.networking.actions.oci_client', autospec=True)
def test_delete_nat_gateway_by_filters(oci_client, get_nat_gateway,
filter_nat_gateway):
network_client = MagicMock()
oci_client.return_value = network_client
c_id = "ocid1.compartment.oc1..oadsocmof6r6ksovxmda44ikwxje7xxu"
vcn_id = "ocid1.vcn.oc1.phx.amaaaaaapwxjxiqavc6zohqv4whr6y65qwwjcexhex"
c_ids = [c_id, None]
vcn_ids = [vcn_id, None]
filters = [[{'display_name': 'random_name', 'region': 'uk-london-1'}],
None]
for c in c_ids:
for v in vcn_ids:
for f in filters:
if c is None or v is None:
with pytest.raises(ActivityFailed) as c_failed:
delete_nat_gateway_by_filters(c, v, f)
assert 'A compartment id or vcn id is required.'
elif f is None:
with pytest.raises(ActivityFailed) as f_failed:
delete_nat_gateway_by_filters(c, v, f)
assert FILTER_ERR
else:
with pytest.raises(ActivityFailed) as rt_failed:
delete_nat_gateway_by_filters(c, v, f)
network_client.delete_nat_gateway.assert_called_with(
filter_nat_gateway(route_tables=get_nat_gateway(
oci_client, c, v), filters=f)[0].id)
@patch('chaosoci.core.networking.actions.oci_client', autospec=True)
def test_delete_internet_gateway_by_id(oci_client):
network_client = MagicMock()
oci_client.return_value = network_client
ig_id = "ocid1.routetable.oc1.phx.aawnm2cdxq3naniep5dsiixtchqjuypcx7l7"
ig_ids = [ig_id, ""]
for id in ig_ids:
if id == ig_id:
delete_internet_gateway_by_id(id)
network_client.delete_internet_gateway.assert_called_with(ig_id=id)
else:
with pytest.raises(ActivityFailed) as f:
delete_internet_gateway_by_id(id)
            assert 'An internet gateway id is required.'
@patch('chaosoci.core.networking.actions.filter_internet_gateway', autospec=True)
@patch('chaosoci.core.networking.actions.get_internet_gateway', autospec=True)
@patch('chaosoci.core.networking.actions.oci_client', autospec=True)
def test_delete_internet_gateway_by_filters(oci_client, get_route_tables,
filter_internet_gateway):
network_client = MagicMock()
oci_client.return_value = network_client
c_id = "ocid1.compartment.oc1..oadsocmof6r6ksovxmda44ikwxje7xxu"
vcn_id = "ocid1.vcn.oc1.phx.amaaaaaapwxjxiqavc6zohqv4whr6y65qwwjcexhex"
c_ids = [c_id, None]
vcn_ids = [vcn_id, None]
filters = [[{'display_name': 'random_name', 'region': 'uk-london-1'}],
None]
for c in c_ids:
for v in vcn_ids:
for f in filters:
if c is None or v is None:
with pytest.raises(ActivityFailed) as c_failed:
delete_internet_gateway_by_filters(c, v, f)
assert 'A compartment id or vcn id is required.'
elif f is None:
with pytest.raises(ActivityFailed) as f_failed:
delete_internet_gateway_by_filters(c, v, f)
assert FILTER_ERR
else:
with pytest.raises(ActivityFailed) as rt_failed:
delete_internet_gateway_by_filters(c, v, f)
network_client.delete_internet_gateway.assert_called_with(
filter_internet_gateway(route_tables=get_route_tables(
oci_client, c, v), filters=f)[0].id)
@patch('chaosoci.core.networking.actions.oci_client', autospec=True)
def test_delete_service_gateway_by_id(oci_client):
network_client = MagicMock()
oci_client.return_value = network_client
sg_id = "ocid1.routetable.oc1.phx.aawnm2cdxq3naniep5dsiixtchqjuypcx7l7"
sg_ids = [sg_id, ""]
for id in sg_ids:
if id == sg_id:
delete_service_gateway_by_id(id)
network_client.delete_service_gateway.assert_called_with(sg_id=id)
else:
with pytest.raises(ActivityFailed) as f:
delete_service_gateway_by_id(id)
            assert 'A service gateway id is required.'
@patch('chaosoci.core.networking.actions.filter_service_gateway', autospec=True)
@patch('chaosoci.core.networking.actions.get_service_gateway', autospec=True)
@patch('chaosoci.core.networking.actions.oci_client', autospec=True)
def test_delete_service_gateway_by_filters(oci_client, get_service_gateway,
filter_service_gateway):
network_client = MagicMock()
oci_client.return_value = network_client
c_id = "ocid1.compartment.oc1..oadsocmof6r6ksovxmda44ikwxje7xxu"
vcn_id = "ocid1.vcn.oc1.phx.amaaaaaapwxjxiqavc6zohqv4whr6y65qwwjcexhex"
c_ids = [c_id, None]
vcn_ids = [vcn_id, None]
filters = [[{'display_name': 'random_name', 'region': 'uk-london-1'}],
None]
for c in c_ids:
for v in vcn_ids:
for f in filters:
if c is None or v is None:
with pytest.raises(ActivityFailed) as c_failed:
delete_service_gateway_by_filters(c, v, f)
assert 'A compartment id or vcn id is required.'
elif f is None:
with pytest.raises(ActivityFailed) as f_failed:
delete_service_gateway_by_filters(c, v, f)
assert FILTER_ERR
else:
with pytest.raises(ActivityFailed) as rt_failed:
delete_service_gateway_by_filters(c, v, f)
network_client.delete_service_gateway.assert_called_with(
filter_service_gateway(get_service_gateway(
oci_client, c, v), filters=f)[0].id)
| 46.464789 | 112 | 0.636152 | 1,209 | 9,897 | 4.894127 | 0.08354 | 0.042589 | 0.066926 | 0.083319 | 0.897076 | 0.857698 | 0.800575 | 0.793983 | 0.727058 | 0.648978 | 0 | 0.013701 | 0.284632 | 9,897 | 212 | 113 | 46.683962 | 0.822034 | 0.015055 | 0 | 0.686486 | 0 | 0 | 0.194581 | 0.150246 | 0 | 0 | 0 | 0 | 0.108108 | 1 | 0.043243 | false | 0 | 0.032432 | 0 | 0.075676 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d378ef154b09b7a7451e091caa233a2e12958e7f | 32 | py | Python | __init__.py | wrow/PushbulletLogging | 34229ca79e6b94607368be639e8167a4b4ee673a | [
"MIT"
] | 1 | 2017-12-06T18:30:37.000Z | 2017-12-06T18:30:37.000Z | __init__.py | wrow/PushbulletLogging | 34229ca79e6b94607368be639e8167a4b4ee673a | [
"MIT"
] | null | null | null | __init__.py | wrow/PushbulletLogging | 34229ca79e6b94607368be639e8167a4b4ee673a | [
"MIT"
] | null | null | null | from PushbulletLogging import *
| 16 | 31 | 0.84375 | 3 | 32 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d38ae365b90d064a19cacc4b94d6bc341bad54db | 72 | py | Python | adlib/tests/__init__.py | xyvivian/adlib | 79a93baa8aa542080bbf55734168eb89317df83c | [
"MIT"
] | null | null | null | adlib/tests/__init__.py | xyvivian/adlib | 79a93baa8aa542080bbf55734168eb89317df83c | [
"MIT"
] | null | null | null | adlib/tests/__init__.py | xyvivian/adlib | 79a93baa8aa542080bbf55734168eb89317df83c | [
"MIT"
] | null | null | null | from . import adversaries
from . import datasets
from . import learners
| 18 | 25 | 0.791667 | 9 | 72 | 6.333333 | 0.555556 | 0.526316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 72 | 3 | 26 | 24 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6c937143cbf5dd8ef9e55846778fc7317263dd04 | 36 | py | Python | solcast/utils/__init__.py | jmccorkindale/solcast-py | 0b572b2878a6045b05cf05709d6655697d95ba91 | [
"MIT"
] | 1 | 2021-03-06T13:53:35.000Z | 2021-03-06T13:53:35.000Z | solcast/utils/__init__.py | jmccorkindale/solcast-py | 0b572b2878a6045b05cf05709d6655697d95ba91 | [
"MIT"
] | null | null | null | solcast/utils/__init__.py | jmccorkindale/solcast-py | 0b572b2878a6045b05cf05709d6655697d95ba91 | [
"MIT"
] | 1 | 2021-03-06T13:54:59.000Z | 2021-03-06T13:54:59.000Z | from .dict_no_none import DictNoNone | 36 | 36 | 0.888889 | 6 | 36 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 36 | 1 | 36 | 36 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6cadbba2b6a70753d3c40670e512dd7c4779feb1 | 1,937 | py | Python | culebratester_client/models/__init__.py | dtmilano/CulebraTester2-client | 21979a851943c9a30c3b5f31eed21c1b1d4894dd | [
"Apache-2.0"
] | 7 | 2020-02-07T14:37:09.000Z | 2022-03-11T09:54:47.000Z | culebratester_client/models/__init__.py | dtmilano/CulebraTester2-client | 21979a851943c9a30c3b5f31eed21c1b1d4894dd | [
"Apache-2.0"
] | null | null | null | culebratester_client/models/__init__.py | dtmilano/CulebraTester2-client | 21979a851943c9a30c3b5f31eed21c1b1d4894dd | [
"Apache-2.0"
] | 1 | 2021-09-11T03:18:37.000Z | 2021-09-11T03:18:37.000Z | # coding: utf-8
# flake8: noqa
"""
CulebraTester
## Snaky Android Test --- If you want to be able to try out the API using the **Execute** or **TRY** button from this page - an android device should be connected using `adb` - the server should have been started using `./culebratester2 start-server` then you will be able to invoke the API and see the responses. # noqa: E501
OpenAPI spec version: 2.0.20
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
# import models into model package
from culebratester_client.models.culebra_info import CulebraInfo
from culebratester_client.models.current_package_name import CurrentPackageName
from culebratester_client.models.display_height import DisplayHeight
from culebratester_client.models.display_real_size import DisplayRealSize
from culebratester_client.models.display_rotation import DisplayRotation
from culebratester_client.models.display_rotation_enum import DisplayRotationEnum
from culebratester_client.models.display_size_dp import DisplaySizeDp
from culebratester_client.models.display_width import DisplayWidth
from culebratester_client.models.help import Help
from culebratester_client.models.inline_response200 import InlineResponse200
from culebratester_client.models.last_traversed_text import LastTraversedText
from culebratester_client.models.object_ref import ObjectRef
from culebratester_client.models.point import Point
from culebratester_client.models.product_name import ProductName
from culebratester_client.models.selector import Selector
from culebratester_client.models.status_response import StatusResponse
from culebratester_client.models.swipe_body import SwipeBody
from culebratester_client.models.text import Text
from culebratester_client.models.window_hierarchy import WindowHierarchy
from culebratester_client.models.window_hierarchy_child import WindowHierarchyChild
| 52.351351 | 333 | 0.852865 | 250 | 1,937 | 6.42 | 0.448 | 0.211838 | 0.286604 | 0.361371 | 0.199377 | 0.109657 | 0 | 0 | 0 | 0 | 0 | 0.009185 | 0.100671 | 1,937 | 36 | 334 | 53.805556 | 0.91217 | 0.258647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6ce13764fdf18cf1d416d94d71e7ec832c6888a3 | 35 | py | Python | walmart_api_client/__init__.py | EitherSoft/python-walmart-api-client | 70b169977130c49f52b958e90c13007f365c9eed | [
"MIT"
] | 5 | 2016-12-09T11:32:16.000Z | 2017-07-15T17:22:42.000Z | walmart_api_client/__init__.py | kronas/python-walmart-api-client | 70b169977130c49f52b958e90c13007f365c9eed | [
"MIT"
] | null | null | null | walmart_api_client/__init__.py | kronas/python-walmart-api-client | 70b169977130c49f52b958e90c13007f365c9eed | [
"MIT"
] | 1 | 2017-07-06T06:30:35.000Z | 2017-07-06T06:30:35.000Z | from .core import WalmartApiClient
| 17.5 | 34 | 0.857143 | 4 | 35 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6ce5962772a5bc615a0abf3ee8b9784fd860fe86 | 58 | py | Python | classifier_keras/transformer/__init__.py | leefsir/Text_Classification | 1d7ccd483374fd5a19b064433cb6e5d51e2039dc | [
"Apache-2.0"
] | null | null | null | classifier_keras/transformer/__init__.py | leefsir/Text_Classification | 1d7ccd483374fd5a19b064433cb6e5d51e2039dc | [
"Apache-2.0"
] | null | null | null | classifier_keras/transformer/__init__.py | leefsir/Text_Classification | 1d7ccd483374fd5a19b064433cb6e5d51e2039dc | [
"Apache-2.0"
] | null | null | null | # _*_coding:utf-8_*_
# author leewfeng
# 2020/12/5 15:41 | 19.333333 | 20 | 0.672414 | 10 | 58 | 3.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.244898 | 0.155172 | 58 | 3 | 21 | 19.333333 | 0.469388 | 0.862069 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9f5788861adf8607e29e610f994e80d33529686e | 371 | py | Python | src/dist.py | papaljuka/Barnes-Hut | 7fb52a7962d37676c70881657eb09db5497e2c9e | [
"MIT"
] | null | null | null | src/dist.py | papaljuka/Barnes-Hut | 7fb52a7962d37676c70881657eb09db5497e2c9e | [
"MIT"
] | null | null | null | src/dist.py | papaljuka/Barnes-Hut | 7fb52a7962d37676c70881657eb09db5497e2c9e | [
"MIT"
] | null | null | null | import numpy as np
class Point():
def __init__(self, x, y, z):
self.x = x
self.y = y
def distx(self, other):
return np.abs(self.x - other.x)
def disty(self, other):
return np.abs(self.y - other.y)
def dist(self, other):
x = distx(self, other)
y = disty(self, other)
return np.sqrt(x**2 + y**2)
| 19.526316 | 39 | 0.525606 | 59 | 371 | 3.237288 | 0.338983 | 0.235602 | 0.235602 | 0.267016 | 0.39267 | 0.251309 | 0 | 0 | 0 | 0 | 0 | 0.008065 | 0.331536 | 371 | 18 | 40 | 20.611111 | 0.762097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.307692 | false | 0 | 0.076923 | 0.153846 | 0.692308 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
9f7b53959c61f30eb4a6a9df0180ce70e6648231 | 157 | py | Python | tiktalik/loadbalancer/__init__.py | tiktalik-cloud/tiktalik-python | 7bbaf394a2259b1427f63f03da7d164305c1af16 | [
"MIT"
] | 1 | 2017-02-01T19:26:34.000Z | 2017-02-01T19:26:34.000Z | tiktalik/loadbalancer/__init__.py | tiktalik-cloud/tiktalik-python | 7bbaf394a2259b1427f63f03da7d164305c1af16 | [
"MIT"
] | 2 | 2016-10-07T16:06:34.000Z | 2021-01-15T20:06:06.000Z | tiktalik/loadbalancer/__init__.py | tiktalik-cloud/tiktalik-python | 7bbaf394a2259b1427f63f03da7d164305c1af16 | [
"MIT"
] | 6 | 2016-07-01T16:06:31.000Z | 2020-01-03T13:33:13.000Z | """Module tiktalik.loadbalancer"""
from .connection import LoadBalancerConnection
from .objects import LoadBalancer, LoadBalancerBackend, LoadBalancerAction
| 39.25 | 74 | 0.853503 | 13 | 157 | 10.307692 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076433 | 157 | 3 | 75 | 52.333333 | 0.924138 | 0.178344 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9f9153213e967995a31e769229f6ee525e24f49a | 6,957 | py | Python | bender_service/bender/tests/helpers.py | Dreem-Organization/bender-api | 9ddc817f130b853127a1925b2a9dced2662f66fc | [
"MIT"
] | null | null | null | bender_service/bender/tests/helpers.py | Dreem-Organization/bender-api | 9ddc817f130b853127a1925b2a9dced2662f66fc | [
"MIT"
] | 2 | 2021-03-19T22:20:06.000Z | 2021-06-10T21:17:12.000Z | bender_service/bender/tests/helpers.py | Dreem-Organization/bender-api | 9ddc817f130b853127a1925b2a9dced2662f66fc | [
"MIT"
] | null | null | null | from rest_framework.test import APITestCase
from django.contrib.auth import get_user_model
from bender.models import Experiment, Algo, Trial, Parameter
from django.db import transaction
import random
import uuid
User = get_user_model()
class BenderTestCase(APITestCase):
@transaction.atomic
def setUp(self):
self.user1 = User.objects.create_user(
username="Toto1",
password="123456",
email="toto1@gmail.com"
)
for _ in range(2):
experiment = Experiment.objects.create(
name="This is my experiment {}".format(uuid.uuid4()),
metrics={"metric_name": "lol", "type": "gain"},
owner=self.user1,
)
for _ in range(3):
algo = Algo.objects.create(
experiment=experiment,
owner=self.user1,
name="algo {}".format(uuid.uuid4())
)
Parameter.objects.create(
name="alpha",
algo=algo,
category=Parameter.UNIFORM,
search_space={"high": 5, "low": -3},
)
Parameter.objects.create(
name="beta",
algo=algo,
category=Parameter.UNIFORM,
search_space={"high": 5, "low": -3},
)
Parameter.objects.create(
name="gamma",
algo=algo,
category=Parameter.UNIFORM,
search_space={"high": 5, "low": -3},
)
for _ in range(4):
Trial.objects.create(
experiment=experiment,
algo=algo,
owner=self.user1,
parameters={"alpha": random.random(), "beta": random.random(),
"gamma": random.random()},
comment="izi trial",
results={"lol": random.random()}
)
self.user2 = User.objects.create_user(
username="Toto2",
password="123456",
email="toto2@gmail.com",
)
for _ in range(3):
        experiment = Experiment.objects.create(
            name="This is my experiment {}".format(uuid.uuid4()),
            metrics=[{'metric_name': 'lol', "type": "loss"},
                     {'metric_name': 'lal', "type": "reward"}],
            owner=self.user2,
        )
        for _ in range(3):
            algo = Algo.objects.create(
                experiment=experiment,
                owner=self.user2,
                name="algo {}".format(uuid.uuid4())
            )
            Parameter.objects.create(
                name="alpha",
                algo=algo,
                category=Parameter.UNIFORM,
                search_space={"high": 5, "low": -3},
            )
            Parameter.objects.create(
                name="beta",
                algo=algo,
                category=Parameter.UNIFORM,
                search_space={"high": 5, "low": -3},
            )
            Parameter.objects.create(
                name="gamma",
                algo=algo,
                category=Parameter.UNIFORM,
                search_space={"high": 5, "low": -3},
            )
            for _ in range(3):
                Trial.objects.create(
                    experiment=experiment,
                    algo=algo,
                    owner=self.user2,
                    parameters={"alpha": random.random(), "beta": random.random(),
                                "gamma": random.random()},
                    comment="izi trial",
                    results={"lol": random.random(), "lal": "a"}
                )
        experiment = Experiment.objects.create(
            name="This is my experiment 5",
            metrics=[{"metric_name": 'lole', "type": "loss"}],
            owner=self.user1,
        )
        experiment.shared_with.add(self.user2)
        for _ in range(2):
            algo = Algo.objects.create(
                experiment=experiment,
                owner=self.user1,
                name="mon algo {}".format(uuid.uuid4()),
            )
            Parameter.objects.create(
                name="alpha",
                algo=algo,
                category=Parameter.UNIFORM,
                search_space={"high": 5, "low": -3},
            )
            Parameter.objects.create(
                name="beta",
                algo=algo,
                category=Parameter.UNIFORM,
                search_space={"high": 5, "low": -3},
            )
            Parameter.objects.create(
                name="gamma",
                algo=algo,
                category=Parameter.UNIFORM,
                search_space={"high": 5, "low": -3},
            )
            for _ in range(4):
                Trial.objects.create(
                    experiment=experiment,
                    owner=self.user1,
                    algo=algo,
                    parameters={"alpha": random.random(), "beta": random.random(),
                                "gamma": random.random()},
                    comment="izi trial",
                    results={"lole": random.random()}
                )
        for _ in range(2):
            algo = Algo.objects.create(
                experiment=experiment,
                owner=self.user2,
                name="mon algo {}".format(uuid.uuid4()),
            )
            Parameter.objects.create(
                name="alpha",
                algo=algo,
                category=Parameter.UNIFORM,
                search_space={"high": 5, "low": -3},
            )
            Parameter.objects.create(
                name="beta",
                algo=algo,
                category=Parameter.UNIFORM,
                search_space={"high": 5, "low": -3},
            )
            Parameter.objects.create(
                name="gamma",
                algo=algo,
                category=Parameter.UNIFORM,
                search_space={"high": 5, "low": -3},
            )
            for _ in range(5):
                Trial.objects.create(
                    experiment=experiment,
                    algo=algo,
                    owner=self.user2,
                    parameters={"alpha": random.random(), "beta": random.random(),
                                "gamma": random.random()},
                    comment="izi trial",
                    results={"lole": random.random()}
                )
        self.user_admin = User.objects.create_superuser(
            username="admin",
            password="123456",
            email="admin@gmail.com"
        )
| 37.005319 | 86 | 0.424752 | 558 | 6,957 | 5.231183 | 0.148746 | 0.115793 | 0.087359 | 0.106886 | 0.811237 | 0.772867 | 0.772867 | 0.768071 | 0.768071 | 0.743405 | 0 | 0.020446 | 0.458675 | 6,957 | 187 | 87 | 37.203209 | 0.754647 | 0 | 0 | 0.7 | 0 | 0 | 0.075607 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.005556 | false | 0.016667 | 0.033333 | 0 | 0.044444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4ca9d627692efc733f8f99902fdc2154c9bb51c2 | 188 | py | Python | top/__init__.py | cloudorz/top | 847f784d47a746106ec937edfa1d23d509c533a2 | [
"BSD-3-Clause"
] | 2 | 2017-02-05T16:32:14.000Z | 2017-03-01T07:23:14.000Z | top/__init__.py | cloudorz/top | 847f784d47a746106ec937edfa1d23d509c533a2 | [
"BSD-3-Clause"
] | null | null | null | top/__init__.py | cloudorz/top | 847f784d47a746106ec937edfa1d23d509c533a2 | [
"BSD-3-Clause"
] | null | null | null | # coding: utf-8
from .topclient import TopClient
from .request import Request
from .user import *
from .appstore import *
from .shipping import *
from .item import *
from .trade import *
| 18.8 | 32 | 0.75 | 26 | 188 | 5.423077 | 0.461538 | 0.283688 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00641 | 0.170213 | 188 | 9 | 33 | 20.888889 | 0.897436 | 0.069149 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4cb8fb60a1383132dd90fafad3eed4ee5680be97 | 34 | py | Python | NimSum/__init__.py | jkcw/nim-sum | 0d221035697faa5fd9a54e360680949fcaa19c8e | [
"MIT"
] | 4 | 2021-11-28T09:44:33.000Z | 2021-12-09T17:40:36.000Z | NimSum/__init__.py | jkcw/nim-sum-calculator | 0d221035697faa5fd9a54e360680949fcaa19c8e | [
"MIT"
] | null | null | null | NimSum/__init__.py | jkcw/nim-sum-calculator | 0d221035697faa5fd9a54e360680949fcaa19c8e | [
"MIT"
] | null | null | null | from NimSum.NimSum import NimSum
| 17 | 33 | 0.823529 | 5 | 34 | 5.6 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 34 | 1 | 34 | 34 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4cc46e7300b7f29187a8d60903bdb40d7e0f13d4 | 164 | py | Python | jiamtrader/gateway/binance/__init__.py | zxc1342802/leijmtrader | f24d5593d8708e48f2a9180d9469a6c2af93a08d | [
"MIT"
] | null | null | null | jiamtrader/gateway/binance/__init__.py | zxc1342802/leijmtrader | f24d5593d8708e48f2a9180d9469a6c2af93a08d | [
"MIT"
] | null | null | null | jiamtrader/gateway/binance/__init__.py | zxc1342802/leijmtrader | f24d5593d8708e48f2a9180d9469a6c2af93a08d | [
"MIT"
] | null | null | null | from .binance_usdt_gateway import BinanceUsdtGateway
from .binance_spot_gateway import BinanceSpotGateway
from .binance_inverse_gateway import BinanceInverseGateway | 54.666667 | 58 | 0.914634 | 18 | 164 | 8 | 0.555556 | 0.229167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067073 | 164 | 3 | 58 | 54.666667 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4cc9646efce101372d1c3b114bb1bc9f8b2f77f1 | 217 | py | Python | zvt/domain/fundamental/__init__.py | Evergreen2020/zvt | 446a2512d716a38a12164b6d4468a6c9de01b986 | [
"MIT"
] | 2 | 2020-09-04T03:24:03.000Z | 2020-11-27T20:57:55.000Z | zvt/domain/fundamental/__init__.py | Evergreen2020/zvt | 446a2512d716a38a12164b6d4468a6c9de01b986 | [
"MIT"
] | 2 | 2019-12-20T13:12:30.000Z | 2020-01-03T06:24:30.000Z | zvt/domain/fundamental/__init__.py | Evergreen2020/zvt | 446a2512d716a38a12164b6d4468a6c9de01b986 | [
"MIT"
] | 1 | 2021-01-24T15:44:53.000Z | 2021-01-24T15:44:53.000Z | # -*- coding: utf-8 -*-
from zvt.domain.fundamental.dividend_financing import *
from zvt.domain.fundamental.finance import *
from zvt.domain.fundamental.trading import *
from zvt.domain.fundamental.valuation import *
| 36.166667 | 55 | 0.788018 | 28 | 217 | 6.071429 | 0.464286 | 0.164706 | 0.305882 | 0.564706 | 0.529412 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005102 | 0.096774 | 217 | 5 | 56 | 43.4 | 0.862245 | 0.096774 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e21f8b80d1373cae40e35deb14b757b3d23ae7cd | 85 | py | Python | studentManagementSystem/studentManagementSystem/__init__.py | fanlianguo/systemStudent | 9e5d7c2f1084208cb73d6f9481a37e7a0950e710 | [
"MIT"
] | null | null | null | studentManagementSystem/studentManagementSystem/__init__.py | fanlianguo/systemStudent | 9e5d7c2f1084208cb73d6f9481a37e7a0950e710 | [
"MIT"
] | null | null | null | studentManagementSystem/studentManagementSystem/__init__.py | fanlianguo/systemStudent | 9e5d7c2f1084208cb73d6f9481a37e7a0950e710 | [
"MIT"
] | null | null | null | # install database driver
from pymysql import install_as_MySQLdb
install_as_MySQLdb() | 28.333333 | 38 | 0.870588 | 12 | 85 | 5.833333 | 0.666667 | 0.257143 | 0.457143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094118 | 85 | 3 | 39 | 28.333333 | 0.909091 | 0.270588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e23a2e5d8e2418b644c1c700360b854404808508 | 52 | py | Python | jarviscli/packages/clear.py | lakshay3697/Pydex | 348add631838280f904ddb6869d511216b165dc4 | [
"MIT"
] | 1 | 2018-02-24T12:41:24.000Z | 2018-02-24T12:41:24.000Z | jarviscli/packages/clear.py | lakshay3697/Pydex | 348add631838280f904ddb6869d511216b165dc4 | [
"MIT"
] | 21 | 2018-02-24T13:39:05.000Z | 2018-02-27T20:03:12.000Z | jarviscli/packages/clear.py | remnestal/Jarvis | cd26682e8f2c89585a04566a60abaa88aa20b8f6 | [
"MIT"
] | null | null | null | import os
def clear_scr():
    os.system("clear")
| 8.666667 | 22 | 0.634615 | 8 | 52 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211538 | 52 | 5 | 23 | 10.4 | 0.780488 | 0 | 0 | 0 | 0 | 0 | 0.096154 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e26f48cda1881f4384a779356f34ebab671ec965 | 546 | py | Python | mindhome_alpha/erpnext/patches/v5_0/rename_taxes_and_charges_master.py | Mindhome/field_service | 3aea428815147903eb9af1d0c1b4b9fc7faed057 | [
"MIT"
] | 1 | 2021-04-29T14:55:29.000Z | 2021-04-29T14:55:29.000Z | mindhome_alpha/erpnext/patches/v5_0/rename_taxes_and_charges_master.py | Mindhome/field_service | 3aea428815147903eb9af1d0c1b4b9fc7faed057 | [
"MIT"
] | null | null | null | mindhome_alpha/erpnext/patches/v5_0/rename_taxes_and_charges_master.py | Mindhome/field_service | 3aea428815147903eb9af1d0c1b4b9fc7faed057 | [
"MIT"
] | 1 | 2021-04-29T14:39:01.000Z | 2021-04-29T14:39:01.000Z | from __future__ import unicode_literals
import frappe
def execute():
    if frappe.db.table_exists("Sales Taxes and Charges Master"):
        frappe.rename_doc("DocType", "Sales Taxes and Charges Master",
            "Sales Taxes and Charges Template")
        frappe.delete_doc("DocType", "Sales Taxes and Charges Master")

    if frappe.db.table_exists("Purchase Taxes and Charges Master"):
        frappe.rename_doc("DocType", "Purchase Taxes and Charges Master",
            "Purchase Taxes and Charges Template")
        frappe.delete_doc("DocType", "Purchase Taxes and Charges Master")
| 36.4 | 67 | 0.765568 | 75 | 546 | 5.426667 | 0.306667 | 0.157248 | 0.29484 | 0.309582 | 0.837838 | 0.702703 | 0.702703 | 0.432432 | 0 | 0 | 0 | 0 | 0.1337 | 546 | 14 | 68 | 39 | 0.860465 | 0 | 0 | 0 | 0 | 0 | 0.520147 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | true | 0 | 0.181818 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e275a66c04c34cfbb4a7774d16836e3c136dc34e | 249 | py | Python | backend/app/core/auth.py | hollyfoxx/ace2-gui | e0f72cafdd524e0cd66549a9315697aa21ae46fa | [
"Apache-2.0"
] | 1 | 2021-07-16T10:34:22.000Z | 2021-07-16T10:34:22.000Z | backend/app/core/auth.py | hollyfoxx/ace2-gui | e0f72cafdd524e0cd66549a9315697aa21ae46fa | [
"Apache-2.0"
] | null | null | null | backend/app/core/auth.py | hollyfoxx/ace2-gui | e0f72cafdd524e0cd66549a9315697aa21ae46fa | [
"Apache-2.0"
] | null | null | null | from passlib.hash import bcrypt_sha256
def hash_password(password: str) -> str:
return bcrypt_sha256.hash(password)
def verify_password(password: str, hashed_password: str) -> bool:
return bcrypt_sha256.verify(password, hashed_password)
| 24.9 | 65 | 0.779116 | 33 | 249 | 5.666667 | 0.393939 | 0.192513 | 0.203209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041667 | 0.13253 | 249 | 9 | 66 | 27.666667 | 0.824074 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 1 | 0.2 | 0.4 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
e2c9d72243fc2f421522a2ced477efe81afadda2 | 183 | py | Python | CA117/Lab_6/main_words_41.py | PRITI1999/OneLineWonders | 91a7368e0796e5a3b5839c9165f9fbe5460879f5 | [
"MIT"
] | 6 | 2016-02-04T00:15:20.000Z | 2019-10-13T13:53:16.000Z | CA117/Lab_6/main_words_41.py | PRITI1999/OneLineWonders | 91a7368e0796e5a3b5839c9165f9fbe5460879f5 | [
"MIT"
] | 2 | 2016-03-14T04:01:36.000Z | 2019-10-16T12:45:34.000Z | CA117/Lab_6/main_words_41.py | PRITI1999/OneLineWonders | 91a7368e0796e5a3b5839c9165f9fbe5460879f5 | [
"MIT"
] | 10 | 2016-02-09T14:38:32.000Z | 2021-05-25T08:16:26.000Z | (lambda W:[print("%s : %d"%(w,W.count(w)))for w in sorted(set(W))if len(w)>3 and W.count(w)>=3])([w.strip("!&'(),-.:;?_").lower()for l in __import__('sys').stdin for w in l.split()])
| 91.5 | 182 | 0.568306 | 38 | 183 | 2.605263 | 0.552632 | 0.121212 | 0.141414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012048 | 0.092896 | 183 | 1 | 183 | 183 | 0.584337 | 0 | 0 | 0 | 0 | 0 | 0.120219 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
391d337bd332b98cfbe53455750fe4b0cd76180d | 42 | py | Python | pybble/pebblejs/clock.py | clach04/pybble | 736614664d50c07855160730e9b0479b8cc89527 | [
"Apache-2.0"
] | 72 | 2016-08-21T00:59:47.000Z | 2021-07-17T17:49:05.000Z | pybble/pebblejs/clock.py | clach04/pybble | 736614664d50c07855160730e9b0479b8cc89527 | [
"Apache-2.0"
] | 2 | 2016-08-21T15:39:28.000Z | 2016-08-24T00:09:07.000Z | pybble/pebblejs/clock.py | clach04/pybble | 736614664d50c07855160730e9b0479b8cc89527 | [
"Apache-2.0"
] | 7 | 2016-08-21T05:47:05.000Z | 2019-08-21T17:02:17.000Z | def Clock():
return require('clock')
| 10.5 | 27 | 0.619048 | 5 | 42 | 5.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 42 | 3 | 28 | 14 | 0.787879 | 0 | 0 | 0 | 0 | 0 | 0.121951 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
394c2521227c426544bbd51cb106c5397ebe4e0f | 6,679 | py | Python | RTND/py/myplot.py | hkujy/BTNDP | 06a1555c4a32b91d315649a0abf7378a86aa59f8 | [
"MIT"
] | null | null | null | RTND/py/myplot.py | hkujy/BTNDP | 06a1555c4a32b91d315649a0abf7378a86aa59f8 | [
"MIT"
] | 7 | 2019-12-09T18:51:54.000Z | 2020-07-08T20:34:12.000Z | RTND/py/myplot.py | hkujy/BTNDP | 06a1555c4a32b91d315649a0abf7378a86aa59f8 | [
"MIT"
] | null | null | null | """
plot graphs
"""
import pandas as pd
import mypara
import matplotlib.pyplot as plt
import global_para_class as gpc


def count_changes(vals):
    """
    input a list of values and check the trend
    """
    counter = 0
    num = len(vals)
    for i in range(1, num-1):
        if (vals[i]-vals[i-1])*(vals[i]-vals[i+1])<0:
            counter = counter + 1
    return counter


def plt_od_cost(mp:mypara.ParaClass(),cases,gl:gpc.GloParaClass):
    # all_od = []
    for w in range(0, mp.num_od):
        od = []
        x = []
        for s in cases:
            if len(s.od)==0:
                continue
            map_od =[sw for sw in s.od if sw.id == w][0]
            od.append(map_od.demand*map_od.mincost)
            x.append(s.id)
        plt.figure("OD "+str(w))
        if gl.exp_id==1:
            x_fre = []
            num = int((gl.fre_up - gl.fre_lb)/gl.incre)
            for i in range(0, num):
                x_fre.append(gl.fre_lb + i * gl.incre)
            plt.plot(x_fre, od)
            axes=plt.gca()
            plt.xlabel("Frequency of Line 2", fontsize=10,fontname='Times New Roman')
            plt.ylabel("TTC",fontsize=10,fontname='Times New Roman')
            xtick = axes.get_xticks()
            ytick = axes.get_yticks()
            axes.set_xticklabels(xtick, fontsize=10,fontname='Times New Roman')
            axes.set_yticklabels(ytick, fontsize=10,fontname='Times New Roman')
            plt.savefig(mp.output_folder+"\\Exp_"+str(gl.exp_id)+"_OD_"+str(w)+".png",bbox_inches='tight',dpi=600)
            if gpc.is_show_fig:
                plt.show(block=False)
                plt.pause(2)
            plt.close()
            filename = mp.output_folder+"\\Exp_TTC_"+str(gl.exp_id)+"OD_"+str(w)+".txt"
            with open(filename,"w+") as f:
                for oc in od:
                    print("{0}".format(oc), file=f)
        else:
            plt.plot(x, od)
            axes=plt.gca()
            plt.xlabel("Frequency of Line 2", fontsize=10,fontname='Times New Roman')
            plt.ylabel("TTC",fontsize=10,fontname='Times New Roman')
            xtick = axes.get_xticks()
            ytick = axes.get_yticks()
            axes.set_xticklabels(xtick, fontsize=10,fontname='Times New Roman')
            axes.set_yticklabels(ytick, fontsize=10,fontname='Times New Roman')
            plt.savefig(mp.output_folder+"\\Exp_TTC_"+str(gl.exp_id)+"_OD_"+str(w)+".png",bbox_inches='tight',dpi=600)
            if gpc.is_show_fig:
                plt.show(block=False)
                plt.pause(2)
            plt.close()
            filename = mp.output_folder+"\\Exp_"+str(gl.exp_id)+"OD_"+str(w)+".txt"
            with open(filename,"w+") as f:
                for oc in od:
                    print("{0}".format(oc), file=f)


def main(mp:mypara.ParaClass(), cases,gl:gpc.GloParaClass):
    plt_od_cost(mp, cases,gl)
    ttc = []
    fair = []
    with open(mp.output_folder+"\\objects.txt", "w") as f:
        print("id,tc,fair",file=f)
        for c in cases:
            print("{0},{1},{2}".format(c.id,c.ttc,c.fair),file=f)
            ttc.append(c.ttc)
            fair.append(c.fair)
    if gl.exp_id ==1:
        plt.figure("ttc")
        x_fre = []
        num = int((gl.fre_up - gl.fre_lb)/gl.incre)
        for i in range(0, num):
            x_fre.append(gl.fre_lb + i * gl.incre)
        plt.plot(x_fre,ttc)
        plt.xlabel("Frequency of Line 2",fontsize=10, fontname='Times New Roman')
        plt.ylabel('TTC',fontsize=10,fontname='Times New Roman')
        axes=plt.gca()
        xtick = axes.get_xticks()
        ytick = axes.get_yticks()
        axes.set_xticklabels(xtick, fontsize=10,fontname='Times New Roman')
        axes.set_yticklabels(ytick, fontsize=10,fontname='Times New Roman')
        xmajorFormatter = plt.FormatStrFormatter('%.1f')
        ymajorFormatter = plt.FormatStrFormatter('%.1f')
        axes.xaxis.set_major_formatter(xmajorFormatter)
        axes.yaxis.set_major_formatter(ymajorFormatter)
        plt.savefig(mp.output_folder+"\\Exp_"+str(gl.exp_id)+"_ttc.png",bbox_inches='tight',dpi=600)
        if gpc.is_show_fig:
            plt.show(block=False)
            plt.pause(2)
        plt.close()
        plt.plot(x_fre,fair)
        plt.xlabel("Frequency of Line 2",fontsize=10, fontname='Times New Roman')
        plt.ylabel('Fair',fontsize=10,fontname='Times New Roman')
        axes=plt.gca()
        xtick = axes.get_xticks()
        ytick = axes.get_yticks()
        axes.xaxis.set_major_formatter(xmajorFormatter)
        axes.yaxis.set_major_formatter(ymajorFormatter)
        axes.set_xticklabels(xtick, fontsize=10,fontname='Times New Roman')
        axes.set_yticklabels(ytick, fontsize=10,fontname='Times New Roman')
        xmajorFormatter = plt.FormatStrFormatter('%.1f')
        plt.gca().set_yticklabels(['{0:.3f}'.format(x) for x in ytick], fontsize=10,fontname='Times New Roman')
        ymajorFormatter = plt.FormatStrFormatter('%.3f')
        plt.savefig(mp.output_folder+"\\Exp_"+str(gl.exp_id)+"_fair.png",bbox_inches='tight',dpi=600)
        if gpc.is_show_fig:
            plt.show(block=False)
            plt.pause(2)
        plt.close()
    else:
        plt.plot(ttc)
        plt.xlabel("Case Index",fontsize=10, fontname='Times New Roman')
        plt.ylabel('TTC',fontsize=10,fontname='Times New Roman')
        axes=plt.gca()
        xtick = axes.get_xticks()
        ytick = axes.get_yticks()
        axes.set_xticklabels(xtick, fontsize=10,fontname='Times New Roman')
        axes.set_yticklabels(ytick, fontsize=10,fontname='Times New Roman')
        plt.savefig(mp.output_folder+"\\Exp_"+str(gl.exp_id)+"_ttc.png",bbox_inches='tight',dpi=600)
        if gpc.is_show_fig:
            plt.show(block=False)
            plt.pause(2)
        plt.close()
        plt.plot(fair)
        plt.xlabel("Case Index",fontsize=10, fontname='Times New Roman')
        plt.ylabel('Fair',fontsize=10,fontname='Times New Roman')
        axes=plt.gca()
        xtick = axes.get_xticks()
        ytick = axes.get_yticks()
        axes.set_xticklabels(xtick, fontsize=10,fontname='Times New Roman')
        axes.set_yticklabels(ytick, fontsize=10,fontname='Times New Roman')
        xmajorFormatter = plt.FormatStrFormatter('%.1f')
        ymajorFormatter = plt.FormatStrFormatter('%.1f')
        axes.xaxis.set_major_formatter(xmajorFormatter)
        axes.yaxis.set_major_formatter(ymajorFormatter)
        plt.savefig(mp.output_folder+"\\Exp_"+str(gl.exp_id)+"_fare.png",bbox_inches='tight',dpi=600)
        if gpc.is_show_fig:
            plt.show(block=False)
            plt.pause(2)
        plt.close()
pass | 38.831395 | 118 | 0.585866 | 913 | 6,679 | 4.155531 | 0.142388 | 0.065894 | 0.118608 | 0.151555 | 0.82815 | 0.81708 | 0.81708 | 0.787032 | 0.787032 | 0.787032 | 0 | 0.02149 | 0.268453 | 6,679 | 172 | 119 | 38.831395 | 0.755014 | 0.010181 | 0 | 0.668966 | 0 | 0 | 0.109862 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02069 | false | 0.006897 | 0.027586 | 0 | 0.055172 | 0.027586 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1a3e542396420699fc7d93bfb3e45abd36276781 | 28 | py | Python | directory/hello/world.py | jacksorjacksor/introduction-to-html-and-css | b60e381ee3670f83118c80496e1fd9b4341042d3 | [
"MIT"
] | null | null | null | directory/hello/world.py | jacksorjacksor/introduction-to-html-and-css | b60e381ee3670f83118c80496e1fd9b4341042d3 | [
"MIT"
] | null | null | null | directory/hello/world.py | jacksorjacksor/introduction-to-html-and-css | b60e381ee3670f83118c80496e1fd9b4341042d3 | [
"MIT"
] | null | null | null | print("HELLO EVERYONE!!!!")
| 14 | 27 | 0.642857 | 3 | 28 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 28 | 1 | 28 | 28 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
1aa5ac109a883369207a81588fbd3ab44f3252da | 42 | py | Python | analyser_sourcecode/lang_python/src/test/resources/py/class.py | archguard/archguard | 181fbc09411b80bd5ed1d205d57ddb66ecd7e676 | [
"MIT"
] | 6 | 2022-03-31T01:16:03.000Z | 2022-03-31T06:08:07.000Z | analyser_sourcecode/lang_python/src/test/resources/py/class.py | archguard/scanner | c7723b67ef980af88c078da28c9d2927147daf7a | [
"MIT"
] | null | null | null | analyser_sourcecode/lang_python/src/test/resources/py/class.py | archguard/scanner | c7723b67ef980af88c078da28c9d2927147daf7a | [
"MIT"
] | null | null | null | class foo: pass
class baz(foo):
    pass
| 8.4 | 15 | 0.642857 | 7 | 42 | 3.857143 | 0.571429 | 0.518519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.261905 | 42 | 4 | 16 | 10.5 | 0.870968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.666667 | 0 | 0 | 0.666667 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
1aac8b0efdc579dfb41d0d0b57a3f3b080db1347 | 142 | py | Python | linora/param_search/XGBClassifier/__init__.py | Hourout/linora | 4269516c9227a18bd1a65e1c6a59e73c74e874d0 | [
"Apache-2.0"
] | 10 | 2018-11-22T03:30:39.000Z | 2020-08-20T04:39:35.000Z | linora/param_search/XGBClassifier/__init__.py | Hourout/linora | 4269516c9227a18bd1a65e1c6a59e73c74e874d0 | [
"Apache-2.0"
] | null | null | null | linora/param_search/XGBClassifier/__init__.py | Hourout/linora | 4269516c9227a18bd1a65e1c6a59e73c74e874d0 | [
"Apache-2.0"
] | 3 | 2019-04-09T12:17:34.000Z | 2020-08-20T04:33:31.000Z | from linora.param_search.XGBClassifier._RandomSearch import RandomSearch
from linora.param_search.XGBClassifier._GridSearch import GridSearch
| 47.333333 | 72 | 0.901408 | 16 | 142 | 7.75 | 0.5 | 0.16129 | 0.241935 | 0.33871 | 0.548387 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056338 | 142 | 2 | 73 | 71 | 0.925373 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
1ab9363c029472c53b118c43166896e326d38782 | 37 | py | Python | main.py | junhuanchen/BPI-Bit-Python | c03ed50f47210288a14c05c66ace0686374110b3 | [
"MIT"
] | 28 | 2019-06-01T12:23:09.000Z | 2022-01-30T15:27:51.000Z | main.py | BPI-STEAM/micropython-samples | c03ed50f47210288a14c05c66ace0686374110b3 | [
"MIT"
] | 2 | 2019-07-29T12:54:58.000Z | 2021-11-27T08:44:57.000Z | main.py | BPI-STEAM/micropython-samples | c03ed50f47210288a14c05c66ace0686374110b3 | [
"MIT"
] | 15 | 2019-09-13T12:22:39.000Z | 2022-01-05T08:08:45.000Z | import microbit
print(dir(microbit)) | 12.333333 | 20 | 0.810811 | 5 | 37 | 6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 3 | 20 | 12.333333 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
46d68a31ed0c93465cd4560a8a0fd746d6aeff81 | 15,149 | py | Python | tests/renderers/httpdomain/test_render_parameters.py | nayarsystems/openapi | 2294de30c37f0876a11d13d003c19a67910af1e0 | [
"BSD-2-Clause"
] | 1 | 2020-07-15T15:00:38.000Z | 2020-07-15T15:00:38.000Z | tests/renderers/httpdomain/test_render_parameters.py | dpgaspar/openapi | 227ef532eef5e8d21003ce0a77afc687c92944c3 | [
"BSD-2-Clause"
] | null | null | null | tests/renderers/httpdomain/test_render_parameters.py | dpgaspar/openapi | 227ef532eef5e8d21003ce0a77afc687c92944c3 | [
"BSD-2-Clause"
] | null | null | null | """OpenAPI spec renderer: render_parameters."""
import itertools
import textwrap
import pytest
from sphinxcontrib.openapi import renderers


def textify(generator):
    return "\n".join(generator)


def test_render_parameters_no_items(testrenderer):
    """No parameter definitions are rendered."""

    markup = textify(testrenderer.render_parameters([]))
    assert markup == ""


def test_render_parameters_one_item(testrenderer):
    """One usual parameter definition is rendered."""

    markup = textify(
        testrenderer.render_parameters(
            [
                {
                    "name": "evidenceId",
                    "in": "path",
                    "required": True,
                    "description": "A unique evidence identifier to query.",
                    "schema": {"type": "string"},
                }
            ]
        )
    )
    assert markup == textwrap.dedent(
        """\
        :param evidenceId:
           A unique evidence identifier to query.
        :paramtype evidenceId: string, required
        """.rstrip()
    )


def test_render_parameters_many_items(testrenderer):
    """Many parameter definitions are rendered."""

    markup = textify(
        testrenderer.render_parameters(
            [
                {
                    "name": "evidenceId",
                    "in": "path",
                    "required": True,
                    "description": "A unique evidence identifier to query.",
                    "schema": {"type": "string"},
                },
                {
                    "name": "details",
                    "in": "query",
                    "description": "If true, information w/ details is returned.",
                    "schema": {"type": "boolean"},
                },
                {
                    "name": "Api-Version",
                    "in": "header",
                    "default": "1",
                    "description": "API version to use for the request.",
                    "schema": {"type": "integer"},
                },
            ]
        )
    )
    assert markup == textwrap.dedent(
        """\
        :reqheader Api-Version:
           API version to use for the request.
        :reqheadertype Api-Version: integer
        :param evidenceId:
           A unique evidence identifier to query.
        :paramtype evidenceId: string, required
        :queryparam details:
           If true, information w/ details is returned.
        :queryparamtype details: boolean
        """.rstrip()
    )


@pytest.mark.parametrize("permutation_seq", itertools.permutations(range(3)))
def test_render_parameters_many_items_ordered(testrenderer, permutation_seq):
    """Many parameter definitions are rendered and properly ordered."""

    parameters = [
        {
            "name": "evidenceId",
            "in": "path",
            "required": True,
            "description": "A unique evidence identifier to query.",
            "schema": {"type": "string"},
        },
        {
            "name": "details",
            "in": "query",
            "description": "If true, information w/ details is returned.",
            "schema": {"type": "boolean"},
        },
        {
            "name": "Api-Version",
            "in": "header",
            "required": False,
            "default": "1",
            "description": "API version to use for the request.",
            "schema": {"type": "integer"},
        },
    ]

    markup = textify(
        testrenderer.render_parameters(
            # Since the test receives a permutation sequence as input,
            # we need to ensure that parameters are shuffled according
            # to that sequence, because this is the essence of the test.
            [parameters[seq] for seq in permutation_seq]
        )
    )
    assert markup == textwrap.dedent(
        """\
        :reqheader Api-Version:
           API version to use for the request.
        :reqheadertype Api-Version: integer
        :param evidenceId:
           A unique evidence identifier to query.
        :paramtype evidenceId: string, required
        :queryparam details:
           If true, information w/ details is returned.
        :queryparamtype details: boolean
        """.rstrip()
    )


def test_render_parameters_many_items_stable_order(testrenderer):
    """Many parameter definitions are rendered w/ preserved order."""

    markup = textify(
        testrenderer.render_parameters(
            [
                {
                    "name": "kind",
                    "in": "path",
                    "required": True,
                    "description": "An evidence kind.",
                    "schema": {"type": "string"},
                },
                {
                    "name": "Api-Version",
                    "in": "header",
                    "default": "1",
                    "description": "API version to use for the request.",
                    "schema": {"type": "integer"},
                },
                {
                    "name": "details",
                    "in": "query",
                    "description": "If true, information w/ details is returned.",
                    "schema": {"type": "boolean"},
                },
                {
                    "name": "evidenceId",
                    "in": "path",
                    "required": True,
                    "description": "A unique evidence identifier to query.",
                    "schema": {"type": "string"},
                },
                {
                    "name": "related",
                    "in": "query",
                    "description": "If true, links to related evidences are returned.",
                    "schema": {"type": "boolean"},
                },
                {
                    "name": "Accept",
                    "in": "header",
                    "default": "application/json",
                    "description": "A desired Content-Type of HTTP response.",
                    "schema": {"type": "string"},
                },
            ]
        )
    )
    assert markup == textwrap.dedent(
        """\
        :reqheader Api-Version:
           API version to use for the request.
        :reqheadertype Api-Version: integer
        :reqheader Accept:
           A desired Content-Type of HTTP response.
        :reqheadertype Accept: string
        :param kind:
           An evidence kind.
        :paramtype kind: string, required
        :param evidenceId:
           A unique evidence identifier to query.
        :paramtype evidenceId: string, required
        :queryparam details:
           If true, information w/ details is returned.
        :queryparamtype details: boolean
        :queryparam related:
           If true, links to related evidences are returned.
        :queryparamtype related: boolean
        """.rstrip()
    )


def test_render_parameters_custom_order(fakestate):
    """Many parameter definitions are rendered w/ preserved order."""

    testrenderer = renderers.HttpdomainRenderer(
        fakestate, {"request-parameters-order": ["query", "path", "header"]}
    )
    markup = textify(
        testrenderer.render_parameters(
            [
                {
                    "name": "kind",
                    "in": "path",
                    "required": True,
                    "description": "An evidence kind.",
                    "schema": {"type": "string"},
                },
                {
                    "name": "Api-Version",
                    "in": "header",
                    "default": "1",
                    "description": "API version to use for the request.",
                    "schema": {"type": "integer"},
                },
                {
                    "name": "details",
                    "in": "query",
                    "description": "If true, information w/ details is returned.",
                    "schema": {"type": "boolean"},
                },
                {
                    "name": "evidenceId",
                    "in": "path",
                    "required": True,
                    "description": "A unique evidence identifier to query.",
                    "schema": {"type": "string"},
                },
                {
                    "name": "related",
                    "in": "query",
                    "description": "If true, links to related evidences are returned.",
                    "schema": {"type": "boolean"},
                },
                {
                    "name": "Accept",
                    "in": "header",
                    "default": "application/json",
                    "description": "A desired Content-Type of HTTP response.",
                    "schema": {"type": "string"},
                },
            ]
        )
    )
    assert markup == textwrap.dedent(
        """\
        :queryparam details:
           If true, information w/ details is returned.
        :queryparamtype details: boolean
        :queryparam related:
           If true, links to related evidences are returned.
        :queryparamtype related: boolean
        :param kind:
           An evidence kind.
        :paramtype kind: string, required
        :param evidenceId:
           A unique evidence identifier to query.
        :paramtype evidenceId: string, required
        :reqheader Api-Version:
           API version to use for the request.
        :reqheadertype Api-Version: integer
        :reqheader Accept:
           A desired Content-Type of HTTP response.
        :reqheadertype Accept: string
        """.rstrip()
    )


def test_render_parameters_custom_order_partial(fakestate):
    """Many parameter definitions are rendered w/ preserved order."""

    testrenderer = renderers.HttpdomainRenderer(
        fakestate, {"request-parameters-order": ["query", "path"]}
    )
    markup = textify(
        testrenderer.render_parameters(
            [
                {
                    "name": "kind",
                    "in": "path",
                    "required": True,
                    "description": "An evidence kind.",
                    "schema": {"type": "string"},
                },
                {
                    "name": "Api-Version",
                    "in": "header",
                    "default": "1",
                    "description": "API version to use for the request.",
                    "schema": {"type": "integer"},
                },
                {
                    "name": "details",
                    "in": "query",
                    "description": "If true, information w/ details is returned.",
                    "schema": {"type": "boolean"},
                },
                {
                    "name": "evidenceId",
                    "in": "path",
                    "required": True,
                    "description": "A unique evidence identifier to query.",
                    "schema": {"type": "string"},
                },
                {
                    "name": "related",
                    "in": "query",
                    "description": "If true, links to related evidences are returned.",
                    "schema": {"type": "boolean"},
                },
                {
                    "name": "Accept",
                    "in": "header",
                    "default": "application/json",
                    "description": "A desired Content-Type of HTTP response.",
                    "schema": {"type": "string"},
                },
            ]
        )
    )
    assert markup == textwrap.dedent(
        """\
        :queryparam details:
           If true, information w/ details is returned.
        :queryparamtype details: boolean
        :queryparam related:
           If true, links to related evidences are returned.
        :queryparamtype related: boolean
        :param kind:
           An evidence kind.
        :paramtype kind: string, required
        :param evidenceId:
           A unique evidence identifier to query.
        :paramtype evidenceId: string, required
        :reqheader Api-Version:
           API version to use for the request.
        :reqheadertype Api-Version: integer
        :reqheader Accept:
           A desired Content-Type of HTTP response.
        :reqheadertype Accept: string
        """.rstrip()
    )
def test_render_parameters_case_insensitive(fakestate):
"""Many parameter definitions are rendered w/ preserved order."""
testrenderer = renderers.HttpdomainRenderer(
fakestate, {"request-parameters-order": ["QUERY", "pAth", "Header"]}
)
markup = textify(
testrenderer.render_parameters(
[
{
"name": "kind",
"in": "PATH",
"required": True,
"description": "An evidence kind.",
"schema": {"type": "string"},
},
{
"name": "Api-Version",
"in": "header",
"default": "1",
"description": "API version to use for the request.",
"schema": {"type": "integer"},
},
{
"name": "details",
"in": "query",
"description": "If true, information w/ details is returned.",
"schema": {"type": "boolean"},
},
{
"name": "evidenceId",
"in": "Path",
"required": True,
"description": "A unique evidence identifier to query.",
"schema": {"type": "string"},
},
{
"name": "related",
"in": "qUery",
"description": "If true, links to related evidences are returned.",
"schema": {"type": "boolean"},
},
{
"name": "Accept",
"in": "headeR",
"default": "application/json",
"description": "A desired Content-Type of HTTP response.",
"schema": {"type": "string"},
},
]
)
)
assert markup == textwrap.dedent(
"""\
:queryparam details:
If true, information w/ details is returned.
:queryparamtype details: boolean
:queryparam related:
If true, links to related evidences are returned.
:queryparamtype related: boolean
:param kind:
An evidence kind.
:paramtype kind: string, required
:param evidenceId:
A unique evidence identifier to query.
:paramtype evidenceId: string, required
:reqheader Api-Version:
API version to use for the request.
:reqheadertype Api-Version: integer
:reqheader Accept:
A desired Content-Type of HTTP response.
:reqheadertype Accept: string
""".rstrip()
)
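Both tests above pin down the same ordering rule: parameters are grouped by their `in` location following the configured `request-parameters-order`, matched case-insensitively, with unlisted locations (here the headers) rendered last. A standalone sketch of that rule — not the renderer's actual implementation, just the contract the tests assert:

```python
def order_parameters(params, order):
    """Stable-sort parameters by the configured 'in' order (sketch only)."""
    order = [location.lower() for location in order]

    def key(param):
        location = param["in"].lower()
        # Unlisted locations sort after every configured one.
        return order.index(location) if location in order else len(order)

    return sorted(params, key=key)


params = [
    {"name": "kind", "in": "PATH"},
    {"name": "Api-Version", "in": "header"},
    {"name": "details", "in": "query"},
]
print([p["name"] for p in order_parameters(params, ["QUERY", "pAth"])])
# -> ['details', 'kind', 'Api-Version']
```

Because `sorted` is stable, parameters sharing a location keep their original relative order, which is why `kind` precedes `evidenceId` in the expected markup.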
# File: bq_loader/__main__.py (repo: naustica/bq_loader, license: MIT)
from PyInquirer import prompt
from bq_loader import create_table_from_local, create_table_from_bucket, upload_files_to_bucket
introduction_question = [
{
'type': 'list',
'name': 'method',
'message': 'Please choose a method',
'choices': ['create_table_from_local',
'create_table_from_bucket',
'upload_files_to_bucket']
}
]
questions_create_table_from_local = [
{
'type': 'input',
'name': 'table_id',
'message': 'Please enter a table id'
},
{
'type': 'input',
'name': 'project_id',
'message': 'Please enter a project id'
},
{
'type': 'input',
'name': 'dataset_id',
'message': 'Please enter a dataset id'
},
{
'type': 'input',
'name': 'file_path',
'message': 'Please enter a file path'
},
{
'type': 'input',
'name': 'schema_file_path',
'message': 'Please enter a schema file'
},
{
'type': 'list',
'name': 'source_format',
'message': 'Please choose a source format',
'choices': ['jsonl', 'avro', 'csv', 'mro', 'orc', 'parquet']
},
{
'type': 'list',
'name': 'write_disposition',
'message': 'Please choose an action that should occur if the destination table already exists',
'choices': ['WRITE_TRUNCATE', 'WRITE_APPEND', 'WRITE_EMPTY']
},
{
'type': 'input',
'name': 'table_description',
'message': 'Please enter a table description'
},
{
'type': 'confirm',
'name': 'ignore_unknown_values',
'message': 'Ignore unknown values?',
},
]
questions_create_table_from_bucket = [
{
'type': 'input',
'name': 'uri',
'message': 'Please enter a Google Bucket URI'
},
{
'type': 'input',
'name': 'table_id',
'message': 'Please enter a table id'
},
{
'type': 'input',
'name': 'project_id',
'message': 'Please enter a project id'
},
{
'type': 'input',
'name': 'dataset_id',
'message': 'Please enter a dataset id'
},
{
'type': 'input',
'name': 'schema_file_path',
'message': 'Please enter a schema file'
},
{
'type': 'list',
'name': 'source_format',
'message': 'Please choose a source format',
'choices': ['jsonl', 'avro', 'csv', 'mro', 'orc', 'parquet']
},
{
'type': 'list',
'name': 'write_disposition',
'message': 'Please choose an action that should occur if the destination table already exists',
'choices': ['WRITE_TRUNCATE', 'WRITE_APPEND', 'WRITE_EMPTY']
},
{
'type': 'input',
'name': 'table_description',
'message': 'Please enter a table description'
},
{
'type': 'confirm',
'name': 'ignore_unknown_values',
'message': 'Ignore unknown values?',
},
]
questions_upload_files_to_bucket = [
{
'type': 'input',
'name': 'bucket_name',
'message': 'Please enter a bucket name'
},
{
'type': 'input',
'name': 'file_path',
'message': 'Please enter a file path'
},
{
'type': 'input',
'name': 'gcb_dir',
'message': 'Please enter the name of the directory which should be created'
}
]
def main():
answer = prompt(introduction_question)
if answer['method'] == 'create_table_from_local':
answers = prompt(questions_create_table_from_local)
create_table_from_local(table_id=answers['table_id'],
project_id=answers['project_id'],
dataset_id=answers['dataset_id'],
file_path=answers['file_path'],
schema_file_path=answers['schema_file_path'],
source_format=answers['source_format'],
write_disposition=answers['write_disposition'],
table_description=answers['table_description'],
ignore_unknown_values=answers['ignore_unknown_values'])
if answer['method'] == 'create_table_from_bucket':
answers = prompt(questions_create_table_from_bucket)
create_table_from_bucket(uri=answers['uri'],
table_id=answers['table_id'],
project_id=answers['project_id'],
dataset_id=answers['dataset_id'],
schema_file_path=answers['schema_file_path'],
source_format=answers['source_format'],
write_disposition=answers['write_disposition'],
table_description=answers['table_description'],
ignore_unknown_values=answers['ignore_unknown_values'])
if answer['method'] == 'upload_files_to_bucket':
answers = prompt(questions_upload_files_to_bucket)
upload_files_to_bucket(bucket_name=answers['bucket_name'],
file_path=answers['file_path'],
gcb_dir=answers['gcb_dir'])
if __name__ == '__main__':
main()
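The three branches in `main()` share the same shape — prompt for a question set, then forward the answers as keyword arguments — so they can be collapsed into a dispatch table. A hedged sketch with a stand-in for PyInquirer's `prompt()`; the table, stub, and handler below are illustrative, not part of bq_loader:

```python
def dispatch(method, handlers, ask):
    """Ask the chosen method's questions, then call its handler with the answers."""
    questions, func = handlers[method]
    return func(**ask(questions))


# Stand-in for PyInquirer's prompt(): answers every question with a fixed string.
fake_prompt = lambda questions: {q['name']: 'example' for q in questions}

handlers = {
    'upload_files_to_bucket': (
        [{'name': 'bucket_name'}, {'name': 'file_path'}, {'name': 'gcb_dir'}],
        lambda bucket_name, file_path, gcb_dir: f"{bucket_name}/{gcb_dir}",
    ),
}
print(dispatch('upload_files_to_bucket', handlers, fake_prompt))  # -> example/example
```

Adding a new method then means adding one table entry instead of another `if` block.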
# File: tensorflow/utils/layers.py (repo: cadurosar/GAT, license: MIT)
import numpy as np
import tensorflow as tf
conv1d = tf.layers.conv1d
# attention (Velickovic et al.)
def attn_head(seq, out_sz, bias_mat, activation, in_drop=0.0, coef_drop=0.0, residual=False, use_bias=True):
with tf.name_scope('my_attn'):
if in_drop != 0.0:
seq = tf.nn.dropout(seq, 1.0 - in_drop)
seq_fts = tf.layers.conv1d(seq, out_sz, 1, use_bias=False)
# simplest self-attention possible
f_1 = tf.layers.conv1d(seq_fts, 1, 1)
f_2 = tf.layers.conv1d(seq_fts, 1, 1)
logits = f_1 + tf.transpose(f_2, [0, 2, 1])
coefs = tf.nn.softmax(tf.nn.leaky_relu(logits) + bias_mat)
if coef_drop != 0.0:
coefs = tf.nn.dropout(coefs, 1.0 - coef_drop)
if in_drop != 0.0:
seq_fts = tf.nn.dropout(seq_fts, 1.0 - in_drop)
vals = tf.matmul(coefs, seq_fts)
        ret = tf.contrib.layers.bias_add(vals) if use_bias else vals

        # residual connection
        if residual:
            if seq.shape[-1] != ret.shape[-1]:
                ret = ret + conv1d(seq, ret.shape[-1], 1)  # activation
            else:
                ret = ret + seq

        return activation(ret)  # activation
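For reference, the attention computed above can be written densely in numpy: the per-node scores `f_1` and `f_2` are broadcast into pairwise logits, passed through LeakyReLU, masked to the graph's edges (the role of `bias_mat`), and row-softmaxed. A sketch assuming a single graph and a dense adjacency matrix; the helper name and argument split are illustrative:

```python
import numpy as np

def gat_attention_dense(X, A, W, a_src, a_dst, alpha=0.2):
    """Dense numpy version of attn_head's coefficient computation (sketch)."""
    H = X @ W                                    # seq_fts: (N, F')
    f1 = H @ a_src                               # f_1: per-node "source" score
    f2 = H @ a_dst                               # f_2: per-node "target" score
    logits = f1[:, None] + f2[None, :]           # f_1 + f_2^T via broadcasting
    logits = np.where(logits > 0, logits, alpha * logits)  # LeakyReLU
    logits = np.where(A > 0, logits, -1e9)       # bias_mat: mask non-edges
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    coefs = e / e.sum(axis=1, keepdims=True)     # row-wise softmax
    return coefs @ H                             # vals = coefs @ seq_fts
```

The `-1e9` mask plays the same role as the additive `bias_mat` in the TF code: non-edges get effectively zero attention after the softmax.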
# Experimental sparse attention head (for running on datasets such as Pubmed)
# N.B. Because of limitations of current TF implementation, will work _only_ if batch_size = 1!
def sp_attn_head(seq, out_sz, adj_mat, activation, nb_nodes, in_drop=0.0, coef_drop=0.0, residual=False,
nnz=None, use_bias=True, intra_drop=None, intra_activation=None, scheme_norm=False,
scheme_init_std=None):
with tf.name_scope('sp_attn'):
if in_drop != 0.0:
seq = tf.nn.dropout(seq, 1.0 - in_drop)
seq_fts = tf.layers.conv1d(seq, out_sz, 1, use_bias=False)
# simplest self-attention possible
f_1 = tf.layers.conv1d(seq_fts, 1, 1)
f_2 = tf.layers.conv1d(seq_fts, 1, 1)
f_1 = tf.reshape(f_1, (nb_nodes, 1))
f_2 = tf.reshape(f_2, (nb_nodes, 1))
f_1 = adj_mat*f_1
f_2 = adj_mat * tf.transpose(f_2, [1,0])
logits = tf.sparse_add(f_1, f_2)
lrelu = tf.SparseTensor(indices=logits.indices,
values=tf.nn.leaky_relu(logits.values),
dense_shape=logits.dense_shape)
coefs = tf.sparse_softmax(lrelu)
if coef_drop != 0.0:
coefs = tf.SparseTensor(indices=coefs.indices,
values=tf.nn.dropout(coefs.values, 1.0 - coef_drop),
dense_shape=coefs.dense_shape)
if in_drop != 0.0:
seq_fts = tf.nn.dropout(seq_fts, 1.0 - in_drop)
# As tf.sparse_tensor_dense_matmul expects its arguments to have rank-2,
# here we make an assumption that our input is of batch size 1, and reshape appropriately.
# The method will fail in all other cases!
coefs = tf.sparse_reshape(coefs, [nb_nodes, nb_nodes])
seq_fts = tf.squeeze(seq_fts)
vals = tf.sparse_tensor_dense_matmul(coefs, seq_fts)
vals = tf.expand_dims(vals, axis=0)
vals.set_shape([1, nb_nodes, out_sz])
        ret = tf.contrib.layers.bias_add(vals) if use_bias else vals
# residual connection
if residual:
if seq.shape[-1] != ret.shape[-1]:
ret = ret + conv1d(seq, ret.shape[-1], 1) # activation
else:
                ret = ret + seq
return activation(ret) # activation
# neural contraction (Vialatte et al.)
def sp_cttn_head(seq, out_sz, adj_mat, activation, nb_nodes, in_drop=0.0, coef_drop=0.0, residual=False,
nnz=None, use_bias=True, intra_drop=None, intra_activation=None, scheme_norm=tf.sparse_softmax,
scheme_init_std=None):
if intra_drop is None:
intra_drop = in_drop
with tf.name_scope('sp_contraction'):
if in_drop != 0.0:
seq = tf.nn.dropout(seq, 1.0 - in_drop)
# right operand SXW
seq_fts = tf.layers.conv1d(seq, out_sz, 1, use_bias=False)
if not(intra_activation is None):
seq_fts = intra_activation(seq_fts)
if intra_drop != 0.0:
seq_fts = tf.nn.dropout(seq_fts, 1.0 - intra_drop)
# left operand SXW
initializer = None if scheme_init_std is None else tf.truncated_normal_initializer(0.0,scheme_init_std)
scheme_kernel = tf.get_variable('scheme_kernel', (nnz,),
initializer=initializer,
trainable=True)
scheme = tf.SparseTensor(indices = adj_mat.indices,
values = scheme_kernel,
dense_shape = adj_mat.dense_shape)
scheme = tf.sparse_add(scheme, adj_mat)
if not(scheme_norm is None):
scheme = scheme_norm(scheme)
if coef_drop != 0.0:
scheme = tf.SparseTensor(indices=scheme.indices,
values=tf.nn.dropout(scheme.values, 1.0 - coef_drop),
dense_shape=scheme.dense_shape)
scheme = tf.sparse_reshape(scheme, [nb_nodes, nb_nodes])
seq_fts = tf.squeeze(seq_fts)
vals = tf.sparse_tensor_dense_matmul(scheme, seq_fts)
vals = tf.expand_dims(vals, axis=0)
vals.set_shape([1, nb_nodes, out_sz])
# bias
        ret = tf.contrib.layers.bias_add(vals) if use_bias else vals
# residual connection
if residual:
if seq.shape[-1] != ret.shape[-1]:
ret = ret + conv1d(seq, ret.shape[-1], 1)
else:
                ret = ret + seq
# activation
return activation(ret)
# original graph convolution (Kipf et al.)
def sp_gcn_head(seq, out_sz, adj_mat, activation, nb_nodes, in_drop=0.0, coef_drop=0.0, residual=False,
nnz=None, use_bias=True, intra_drop=None, intra_activation=None, scheme_norm=None,
scheme_init_std=None):
if intra_drop is None:
intra_drop = in_drop
with tf.name_scope('sp_gcn'):
if in_drop != 0.0:
seq = tf.nn.dropout(seq, 1.0 - in_drop)
# right operand SXW
seq_fts = tf.layers.conv1d(seq, out_sz, 1, use_bias=False)
if not(intra_activation is None):
seq_fts = intra_activation(seq_fts)
if intra_drop != 0.0:
seq_fts = tf.nn.dropout(seq_fts, 1.0 - intra_drop)
# left operand SXW
scheme = adj_mat
if not(scheme_norm is None):
scheme = scheme_norm(scheme)
if coef_drop != 0.0:
scheme = tf.SparseTensor(indices=scheme.indices,
values=tf.nn.dropout(scheme.values, 1.0 - coef_drop),
dense_shape=scheme.dense_shape)
scheme = tf.sparse_reshape(scheme, [nb_nodes, nb_nodes])
seq_fts = tf.squeeze(seq_fts)
vals = tf.sparse_tensor_dense_matmul(scheme, seq_fts)
vals = tf.expand_dims(vals, axis=0)
vals.set_shape([1, nb_nodes, out_sz])
# bias
        ret = tf.contrib.layers.bias_add(vals) if use_bias else vals
# residual connection
if residual:
if seq.shape[-1] != ret.shape[-1]:
ret = ret + conv1d(seq, ret.shape[-1], 1)
else:
                ret = ret + seq
# activation
return activation(ret)
# topology adaptative graph convolution (Du et al.)
def sp_tagcn_head(seq, out_sz, adj_mat, activation, nb_nodes, in_drop=0.0, coef_drop=0.0, residual=False,
nnz=None, use_bias=True, intra_drop=None, intra_activation=None, scheme_norm=None,
scheme_init_std=None, K=2): #K is the polynomial order
if intra_drop is None:
intra_drop = in_drop
with tf.name_scope('sp_gcn'):
if in_drop != 0.0:
seq = tf.nn.dropout(seq, 1.0 - in_drop)
# preprocessing S
scheme = adj_mat
if not(scheme_norm is None):
scheme = scheme_norm(scheme)
if coef_drop != 0.0:
scheme = tf.SparseTensor(indices=scheme.indices,
values=tf.nn.dropout(scheme.values, 1.0 - coef_drop),
dense_shape=scheme.dense_shape)
scheme = tf.sparse_reshape(scheme, [nb_nodes, nb_nodes])
vals = None
assert K > 0
for k in range(1, K+1):
# right operand SXW
seq_fts = tf.layers.conv1d(seq, out_sz, 1, use_bias=False)
if not(intra_activation is None):
seq_fts = intra_activation(seq_fts)
if intra_drop != 0.0:
seq_fts = tf.nn.dropout(seq_fts, 1.0 - intra_drop)
seq_fts = tf.squeeze(seq_fts)
# left operand SXW
for ik in range(k):
seq_fts = tf.sparse_tensor_dense_matmul(scheme, seq_fts)
# sum
if vals is None:
vals = seq_fts
else:
vals = vals + seq_fts
# shape
vals = tf.expand_dims(vals, axis=0)
vals.set_shape([1, nb_nodes, out_sz])
# bias
        ret = tf.contrib.layers.bias_add(vals) if use_bias else vals
# residual connection
if residual:
if seq.shape[-1] != ret.shape[-1]:
ret = ret + conv1d(seq, ret.shape[-1], 1)
else:
                ret = ret + seq
# activation
return activation(ret)
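The loop above implements a K-order polynomial filter: the k-th term applies the (normalised) scheme k times to its own linear transform of the input, i.e. the output is the sum over k of S^k X W_k. A dense numpy sketch, assuming one graph and a dense S (the helper name is illustrative):

```python
import numpy as np

def tagcn_dense(X, S, Ws):
    """Sum over k of S^k @ X @ W_k, mirroring the TF loop above (sketch)."""
    out = np.zeros((X.shape[0], Ws[0].shape[1]))
    for k, W in enumerate(Ws, start=1):
        # matrix_power(S, k) plays the role of the inner ik-loop.
        out += np.linalg.matrix_power(S, k) @ (X @ W)
    return out
```

With a single weight matrix (K=1) this reduces to the plain graph convolution `S @ X @ W` computed by `sp_gcn_head`.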
# regular fully connected layer without using the graph
def mlp_head(seq, out_sz, adj_mat=None, activation=tf.nn.relu, nb_nodes=None, in_drop=0.0, coef_drop=0.0, residual=False,
nnz=None, use_bias=True, intra_drop=None, intra_activation=None, scheme_norm=None,
scheme_init_std=None):
if intra_drop is None:
intra_drop = in_drop
with tf.name_scope('mlp'):
if in_drop != 0.0:
seq = tf.nn.dropout(seq, 1.0 - in_drop)
# operation XW
seq_fts = tf.layers.conv1d(seq, out_sz, 1, use_bias=False)
if not(intra_activation is None):
seq_fts = intra_activation(seq_fts)
#if intra_drop != 0.0:
# seq_fts = tf.nn.dropout(seq_fts, 1.0 - intra_drop)
# bias
        ret = tf.contrib.layers.bias_add(seq_fts) if use_bias else seq_fts
# residual connection
if residual:
if seq.shape[-1] != ret.shape[-1]:
ret = ret + conv1d(seq, ret.shape[-1], 1)
else:
                ret = ret + seq
# activation
return activation(ret)
# File: MacGybot/commands/usercalendar.py (repo: MathieuBrillard/le_tavernier, license: MIT)
# date time and background tasks
import asyncio
import datetime as dt
# manage generated files
import os
# errors and calendar generation
from commands.calendrier.gen_cal import gen_cal
from commands.calendrier.errors import IncorrectFormat
# discord libs
import hikari
import lightbulb
from lightbulb import commands
# The options the command will have.
@lightbulb.option("format", "The format of calendar you want to get.", str)
# Convert the function into a command
@lightbulb.command("usercalendar", "Get a calendar.")
# Define the types of command that this function will implement
@lightbulb.implements(commands.PrefixCommand, commands.SlashCommand)
async def usercalendar(ctx: lightbulb.context.Context) -> None:
format = ctx.options._options["format"] # get the argument
try:
## generate the name of the file ##
user_id = str(ctx.author.id)
file_name = user_id + "_" + str(dt.datetime.now().date()) + "_0.png"
path = os.getcwd()
gen_cal(format, file_name, user_id)
except IncorrectFormat as e: # handle arguments errors
await ctx.respond(IncorrectFormat.__str__(e))
return
## dropdown creation ##
select_menu = (
ctx.bot.rest.build_action_row()
.add_select_menu("format_selection")
.set_placeholder("Pick another format here.")
)
select_menu_disabled = (
ctx.bot.rest.build_action_row()
.add_select_menu("format_selection_disabled")
.set_placeholder("Pick another format here.")
).set_is_disabled(True)
opts = ("day", "week", "month")
for opt in opts:
select_menu.add_option(
opt.capitalize(),
opt,
).add_to_menu()
select_menu_disabled.add_option(
opt.capitalize(),
opt,
).add_to_menu()
## generate embed response ##
embed = (
hikari.Embed(
title=f"{format.capitalize()}ly Calendar",
description="",
colour=hikari.Colour(0x563275),
# Doing it like this is important.
timestamp=dt.datetime.now().astimezone(),
)
.set_image(hikari.files.File(f"{path}\\commands\\calendrier\\generated\\{file_name}"))
.set_author(name="Information")
.set_footer(
text=f"Requested by {ctx.member.display_name}",
icon=ctx.member.avatar_url,
)
)
## send response ##
resp = await ctx.respond(embed=embed, component=select_menu.add_to_container())
msg = await resp.message()
## handle events ##
try:
event = await ctx.bot.wait_for(
hikari.InteractionCreateEvent,
timeout = 10, #TODO: set it to 60
predicate = lambda e:
isinstance(e.interaction, hikari.ComponentInteraction)
and e.interaction.user.id == ctx.author.id
and e.interaction.message.id == msg.id
and e.interaction.component_type == hikari.ComponentType.SELECT_MENU
)
except asyncio.TimeoutError:
await msg.edit(component=select_menu_disabled.add_to_container())
else:
await ctx.respond(response_type=5, content="Loading ...")
await msg.edit(component=select_menu_disabled.add_to_container())
if event.interaction.values[0] == "month":
await handle_dropdown(ctx, format="month", old_msg=msg)
elif event.interaction.values[0] == "week":
await handle_dropdown(ctx, format="week", old_msg=msg)
elif event.interaction.values[0] == "day":
await handle_dropdown(ctx, format="day", old_msg=msg)
async def handle_dropdown(ctx: lightbulb.context.Context, format: str, old_msg: hikari.Message) -> None:
## generate the name of the file ##
user_id = str(ctx.author.id)
    file_name = user_id + "_" + str(dt.datetime.now().date()) + "_1.png"
path = os.getcwd()
gen_cal(format, file_name, user_id)
## dropdown creation ##
select_menu = (
ctx.bot.rest.build_action_row()
.add_select_menu("format_selection_2")
.set_placeholder("Pick another format here.")
)
select_menu_disabled = (
ctx.bot.rest.build_action_row()
.add_select_menu("format_selection_disabled_2")
.set_placeholder("Pick another format here.")
).set_is_disabled(True)
opts = ("day", "week", "month")
for opt in opts:
select_menu.add_option(
opt.capitalize(),
opt,
).add_to_menu()
select_menu_disabled.add_option(
opt.capitalize(),
opt,
).add_to_menu()
## generate embed response ##
embed = (
hikari.Embed(
title=f"{format.capitalize()}ly Calendar",
description="",
colour=hikari.Colour(0x563275),
# Doing it like this is important.
timestamp=dt.datetime.now().astimezone(),
)
.set_image(hikari.files.File(f"{path}\\commands\\calendrier\\generated\\{file_name}"))
.set_author(name="Information")
.set_footer(
text=f"Requested by {ctx.member.display_name}",
icon=ctx.member.avatar_url,
)
)
## send response ##
old_msg = await old_msg.edit(embed=embed, replace_attachments=True, component=select_menu.add_to_container())
await ctx.delete_last_response()
## handle events ##
try:
event = await ctx.bot.wait_for(
hikari.InteractionCreateEvent,
timeout = 10, #TODO: set it to 60
predicate = lambda e:
isinstance(e.interaction, hikari.ComponentInteraction)
and e.interaction.user.id == ctx.author.id
and e.interaction.message.id == old_msg.id
and e.interaction.component_type == hikari.ComponentType.SELECT_MENU
)
except asyncio.TimeoutError:
await old_msg.edit(component=select_menu_disabled.add_to_container())
else:
if event.interaction.values[0] == "month":
await handle_dropdown(ctx, format="month", old_msg=old_msg)
elif event.interaction.values[0] == "week":
await handle_dropdown(ctx, format="week", old_msg=old_msg)
elif event.interaction.values[0] == "day":
await handle_dropdown(ctx, format="day", old_msg=old_msg)
def load(bot: lightbulb.BotApp) -> None:
bot.command(usercalendar)
def unload(bot: lightbulb.BotApp) -> None:
bot.remove_command(bot.get_slash_command("usercalendar"))
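Both `usercalendar` and `handle_dropdown` wrap the dropdown in the same wait-with-timeout shape; stripped of the hikari specifics, it is the standard `asyncio.wait_for` pattern. A self-contained sketch (not MacGybot code — the names below are illustrative):

```python
import asyncio

async def wait_for_interaction(awaitable_factory, timeout, on_timeout):
    """Await an event; run a fallback (e.g. disable the menu) on timeout."""
    try:
        return await asyncio.wait_for(awaitable_factory(), timeout)
    except asyncio.TimeoutError:
        return on_timeout()

async def demo():
    never = asyncio.Event()  # stands in for a dropdown nobody clicks
    return await wait_for_interaction(
        lambda: never.wait(), timeout=0.01, on_timeout=lambda: "menu disabled")

print(asyncio.run(demo()))  # -> menu disabled
```

In the commands above, `bot.wait_for(..., timeout=10)` raises the same `asyncio.TimeoutError`, and the fallback is editing the message with the disabled select menu.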
# File: memberportal/api_member_tools/serializers.py (repo: kodaxx/MemberMatters, license: MIT)
from rest_framework_simplejwt.serializers import TokenObtainPairSerializer
from rest_framework import serializers
from access.models import DoorLog, InterlockLog
#!/bin/python3
# File: designpdfviwer.py (repo: xi6th/Python_Algorithm, license: MIT)
import string
#
# Complete the 'designerPdfViewer' function below.
#
# The function is expected to return an INTEGER.
# The function accepts following parameters:
# 1. INTEGER_ARRAY h
# 2. STRING word
#
def designerPdfViewer(h, word):
letters = list(string.ascii_lowercase)
words = []
height = []
length_of_word = len(word)
for w in word:
relative_position = letters.index(w)
words.append(relative_position)
for i in words:
index_item = h[i]
height.append(index_item)
    # Highlight area = word length (each letter is 1mm wide) x tallest letter
    maximum_letter = max(height)
    total_height = length_of_word * maximum_letter
    return total_height
h = [1,3,1,3,1,4,1,3,2,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,7]
word = "abcy"
print(designerPdfViewer(h, word))
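Under the assumption of all-lowercase input, the same computation collapses to a one-liner using `ord()` instead of `list.index()`:

```python
def designer_pdf_viewer_short(h, word):
    # Highlight area = word length (1mm per letter) x tallest letter height.
    return len(word) * max(h[ord(c) - ord('a')] for c in word)

h = [1,3,1,3,1,4,1,3,2,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,7]
print(designer_pdf_viewer_short(h, "abcy"))  # -> 20
```

`ord(c) - ord('a')` maps each lowercase letter straight to its index in `h`, avoiding the intermediate `letters`, `words`, and `height` lists.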
# File: torchtoolbox/nn/__init__.py (repo: daBawse167/torch-toolbox, license: BSD-3-Clause)
# -*- coding: utf-8 -*-
# Author: pistonyang@gmail.com
from .loss import *
from .sequential import *
from .norm import *
from .activation import *
try:
from .parallel import *
except ImportError:
pass
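The try/except around the `parallel` import is the usual optional-dependency guard: the package stays importable even when an extra is missing. A self-contained sketch of the pattern (the module name below is made up, not a real torchtoolbox dependency):

```python
try:
    import some_optional_backend  # hypothetical extra package
    HAS_BACKEND = True
except ImportError:
    some_optional_backend = None
    HAS_BACKEND = False

print(HAS_BACKEND)  # False unless such a package happens to be installed
```

Callers can then check the flag (or test the module for `None`) before using the optional functionality.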
# File: tests/test_experiment/test_core_metrics/test_classification.py (repo: mv1388/AIToolbox, license: MIT)
import unittest
import numpy as np
from aitoolbox.experiment.core_metrics.classification import AccuracyMetric
class TestAccuracyMetric(unittest.TestCase):
def test_basic(self):
acc_1 = AccuracyMetric([1.] * 100, [1.] * 100)
self.assertEqual(acc_1.get_metric(), 1.)
self.assertEqual(acc_1.get_metric_dict(), {'Accuracy': 1.})
acc_2 = AccuracyMetric([1., 0.], [1., 1.])
self.assertEqual(acc_2.get_metric(), 0.5)
self.assertEqual(acc_2.get_metric_dict(), {'Accuracy': 0.5})
def test_2d_prediction(self):
acc_1 = AccuracyMetric([1.] * 100, [[0, 1]] * 100)
self.assertEqual(acc_1.get_metric(), 1.)
self.assertEqual(acc_1.get_metric_dict(), {'Accuracy': 1.})
acc_2 = AccuracyMetric([1.], [[0, 1]])
self.assertEqual(acc_2.get_metric(), 1.)
self.assertEqual(acc_2.get_metric_dict(), {'Accuracy': 1.})
acc_3 = AccuracyMetric([1.], [[1, 0]])
self.assertEqual(acc_3.get_metric(), 0.)
self.assertEqual(acc_3.get_metric_dict(), {'Accuracy': 0.})
acc_4 = AccuracyMetric([0, 1], [[0, 1], [0, 1]])
self.assertEqual(acc_4.get_metric(), 0.5)
self.assertEqual(acc_4.get_metric_dict(), {'Accuracy': 0.5})
acc_5 = AccuracyMetric([0, 1], [[1, 0], [0, 1]])
self.assertEqual(acc_5.get_metric(), 1.)
self.assertEqual(acc_5.get_metric_dict(), {'Accuracy': 1.})
def test_2d_vector_non_argmax(self):
y_pred = np.array([0, 1, 0, 0, 1, 1]).reshape((-1, 1))
y_true = np.array([1, 0, 1, 0, 1, 1]).reshape((-1, 1))
acc_1 = AccuracyMetric(y_true, y_pred, positive_class_thresh=None)
self.assertEqual(acc_1.get_metric(), 0.5)
self.assertEqual(acc_1.get_metric_dict(), {'Accuracy': 0.5})
y_pred = np.array([0, 1, 0, 2, 3, 1]).reshape((-1, 1))
y_true = np.array([1, 0, 1, 2, 3, 1]).reshape((-1, 1))
acc_2 = AccuracyMetric(y_true, y_pred, positive_class_thresh=None)
self.assertEqual(acc_2.get_metric(), 0.5)
self.assertEqual(acc_2.get_metric_dict(), {'Accuracy': 0.5})
y_pred = np.array([0, 1, 0, 2, 3, 1])
y_true = np.array([1, 0, 1, 2, 3, 1])
acc_3 = AccuracyMetric(y_true, y_pred, positive_class_thresh=None)
self.assertEqual(acc_3.get_metric(), 0.5)
self.assertEqual(acc_3.get_metric_dict(), {'Accuracy': 0.5})
def test_threshold_binary_classification(self):
y_pred = np.array([0.1, 0.6, 0.3, 0.2, 1.0, 0.9]).reshape((-1, 1))
y_true = np.array([1, 0, 1, 0, 1, 1]).reshape((-1, 1))
acc_1 = AccuracyMetric(y_true, y_pred, positive_class_thresh=0.5)
self.assertEqual(acc_1.get_metric(), 0.5)
self.assertEqual(acc_1.get_metric_dict(), {'Accuracy': 0.5})
y_pred = np.array([0.1, 0.6, 0.3, 0.2, 1.0, 0.9])
y_true = np.array([1, 0, 1, 0, 1, 1])
acc_2 = AccuracyMetric(y_true, y_pred, positive_class_thresh=0.5)
self.assertEqual(acc_2.get_metric(), 0.5)
self.assertEqual(acc_2.get_metric_dict(), {'Accuracy': 0.5})
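Taken together, these tests pin down how predictions are normalised before comparison: 2-D rows of class scores are argmax-ed, and with a `positive_class_thresh` a flat probability vector is binarised. A plain numpy sketch of that contract — not aitoolbox's actual implementation:

```python
import numpy as np

def normalize_predictions(y_pred, positive_class_thresh=0.5):
    """Turn raw predictions into hard labels, per the behaviour tested above."""
    y_pred = np.asarray(y_pred)
    if y_pred.ndim == 2 and y_pred.shape[1] > 1:
        return y_pred.argmax(axis=1)          # rows of class scores
    y_pred = y_pred.ravel()
    if positive_class_thresh is not None:
        return (y_pred >= positive_class_thresh).astype(int)  # binarise
    return y_pred                              # already hard labels

def accuracy(y_true, y_pred, positive_class_thresh=0.5):
    y_true = np.asarray(y_true).ravel()
    y_hat = normalize_predictions(y_pred, positive_class_thresh)
    return float(np.mean(y_hat == y_true))

print(accuracy([1, 0, 1, 0, 1, 1], [0.1, 0.6, 0.3, 0.2, 1.0, 0.9]))  # -> 0.5
```

Passing `positive_class_thresh=None` skips binarisation, which is what `test_2d_vector_non_argmax` relies on for multi-class hard labels.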
# File: celery_janitor/exceptions.py (repo: comandrei/celery-janitor, license: MIT)
class MissingDependency(Exception):
pass
class BackendNotSupportedException(Exception):
pass
# File: backend/test/queries/test_flagged_committee_contributions.py (repo: Healthcare-NOW/poppwatch, license: MIT)
from app.queries import fetch_flagged_committee_contributions
from test.factories import CommitteeContributionFactory
def test_no_flagged_contributions(candidate_setup):
result = fetch_flagged_committee_contributions(candidate_setup.candidate)
assert len(result) == 0
def test_contributions_below_200_ignored(candidate_setup):
CommitteeContributionFactory(
donor_committee=candidate_setup.bad_pac,
recipient_committee=candidate_setup.committee_1,
amount=199.99,
)
result = fetch_flagged_committee_contributions(candidate_setup.candidate)
assert len(result) == 0
def test_contributions_of_type_24a_ignored(candidate_setup):
CommitteeContributionFactory(
donor_committee=candidate_setup.bad_pac,
recipient_committee=candidate_setup.committee_1,
transaction_type="24A",
amount=200.1,
)
result = fetch_flagged_committee_contributions(candidate_setup.candidate)
assert len(result) == 0
def test_contributions_across_committees_flagged(candidate_setup):
CommitteeContributionFactory(
donor_committee=candidate_setup.bad_pac,
recipient_committee=candidate_setup.committee_1,
amount=100,
)
CommitteeContributionFactory(
donor_committee=candidate_setup.bad_pac,
candidate=candidate_setup.candidate,
amount=100,
)
CommitteeContributionFactory(
donor_committee=candidate_setup.bad_pac,
recipient_committee=candidate_setup.committee_2,
amount=100,
)
result = fetch_flagged_committee_contributions(candidate_setup.candidate)
assert len(result) == 1
(committee, flagged_employer, amount) = result[0]
assert amount == 300
assert flagged_employer == candidate_setup.bad_employer
assert committee == candidate_setup.bad_pac
| 34.54717 | 77 | 0.766248 | 193 | 1,831 | 6.88601 | 0.207254 | 0.210685 | 0.173062 | 0.117381 | 0.7231 | 0.701279 | 0.701279 | 0.701279 | 0.701279 | 0.645598 | 0 | 0.024342 | 0.169853 | 1,831 | 52 | 78 | 35.211538 | 0.85 | 0 | 0 | 0.522727 | 0 | 0 | 0.001638 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 1 | 0.090909 | false | 0 | 0.045455 | 0 | 0.136364 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
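The test record above implies an aggregation rule: sum a donor committee's contributions across a candidate's committees, skip transactions of type 24A, and flag only donors whose total clears a floor around $200 (199.99 is ignored, 300 is flagged; whether the boundary is strict is not pinned down by these tests). A minimal sketch of that rule, independent of the record's actual `fetch_flagged_committee_contributions` query:

```python
from collections import defaultdict

def flagged_totals(contributions, min_total=200.0):
    # Hypothetical re-implementation of the rule the tests imply:
    # skip type-24A transactions, sum amounts per donor,
    # and keep only totals strictly above the floor.
    totals = defaultdict(float)
    for c in contributions:
        if c.get("transaction_type") == "24A":
            continue
        totals[c["donor"]] += c["amount"]
    return {donor: amt for donor, amt in totals.items() if amt > min_total}

rows = [
    {"donor": "bad_pac", "amount": 100},
    {"donor": "bad_pac", "amount": 100},
    {"donor": "bad_pac", "amount": 100},
    {"donor": "other_pac", "amount": 199.99},
    {"donor": "bad_pac", "amount": 500, "transaction_type": "24A"},
]
print(flagged_totals(rows))  # {'bad_pac': 300.0}
```

This mirrors the last test case: three $100 contributions from the same donor aggregate to a single flagged total of 300, while the sub-threshold and 24A rows drop out.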
80aa30f94a0cd56e4a72079efcfc1eadff751e48 | 165 | py | Python | tests/test_utils.py | HealthByRo/fdadb | e020a902ca20cebd5999bc2dbc530375ab0922fb | [
"MIT"
] | 1 | 2020-06-11T04:44:22.000Z | 2020-06-11T04:44:22.000Z | tests/test_utils.py | HealthByRo/fdadb | e020a902ca20cebd5999bc2dbc530375ab0922fb | [
"MIT"
] | 8 | 2018-11-26T09:22:14.000Z | 2019-10-23T13:17:44.000Z | tests/test_utils.py | HealthByRo/fdadb | e020a902ca20cebd5999bc2dbc530375ab0922fb | [
"MIT"
] | null | null | null | from django.contrib.auth import get_user_model
from rest_framework.test import APITestCase
UserModel = get_user_model()
class BaseTestCase(APITestCase):
pass
| 18.333333 | 46 | 0.818182 | 22 | 165 | 5.909091 | 0.727273 | 0.107692 | 0.184615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127273 | 165 | 8 | 47 | 20.625 | 0.902778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |