hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9aad78ebd9487f88e4eaa3d2967893534725787e | 46,918 | py | Python | tests/unit/test_connection.py | trir262/oneview-python | 4636af9bc04cf651da779ccf9b5b20705683a56c | [
"Apache-2.0"
] | null | null | null | tests/unit/test_connection.py | trir262/oneview-python | 4636af9bc04cf651da779ccf9b5b20705683a56c | [
"Apache-2.0"
] | null | null | null | tests/unit/test_connection.py | trir262/oneview-python | 4636af9bc04cf651da779ccf9b5b20705683a56c | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
###
# (C) Copyright [2020] Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###
import json
import ssl
import unittest
import mmap
import os
import shutil
import os.path
from mock import patch, call, Mock, ANY
from http.client import HTTPSConnection, BadStatusLine, HTTPException
from hpeOneView.connection import connection
from hpeOneView.exceptions import HPEOneViewException
class ConnectionTest(unittest.TestCase):
def setUp(self):
self.host = '127.0.0.1'
self.connection = connection(self.host, 800)
self.accept_language_header = {
'Accept-Language': 'en_US'
}
self.default_headers = {
'X-API-Version': 800,
'Accept': 'application/json',
'Content-Type': 'application/json'
}
self.default_headers_with_etag_validation_off = {
'X-API-Version': 800,
'Accept': 'application/json',
'Content-Type': 'application/json',
'If-Match': '*'
}
self.merged_headers = {
'X-API-Version': 800,
'Accept': 'application/json',
'Content-Type': 'application/json',
'Accept-Language': 'en_US'
}
self.request_body = {"request body": "content"}
self.response_body = {"response body": "content",
"message": "An error occurred."}
self.dumped_request_body = json.dumps(self.request_body.copy())
self.expected_response_body = self.response_body.copy()
def __make_http_response(self, status):
mock_response = Mock(status=status)
mock_response.read.return_value = json.dumps(self.response_body).encode('utf-8')
if status == 200 or status == 202:
mock_response.getheader.return_value = '/task/uri'
return mock_response
def __create_fake_mapped_file(self):
mock_mapped_file = Mock()
mock_mapped_file.tell.side_effect = [0, 1048576, 2097152, 2621440] # 0, 1MB, 2MB 2.5MB
mock_mapped_file.size.return_value = 2621440 # 2.5MB
mock_mapped_file.read.side_effect = ['data chunck 1', 'data chunck 2', 'data chunck 3']
return mock_mapped_file
def __prepare_connection_to_post_multipart(self, response_status=200):
fake_connection = Mock()
fake_connection.getresponse.return_value.read.return_value = json.dumps(self.response_body).encode('utf-8')
fake_connection.getresponse.return_value.status = response_status
self.connection.get_connection = Mock()
self.connection.get_connection.return_value = fake_connection
self.connection._open = Mock()
self.connection._headers['auth'] = 'LTIxNjUzMjc0OTUzzHoF7eEkZLEUWVA-fuOZP4VGA3U8e67E'
encode_multipart = "multipart/form-data; boundary=----------ThIs_Is_tHe_bouNdaRY_$"
self.connection.encode_multipart_formdata = Mock()
self.connection.encode_multipart_formdata.return_value = encode_multipart
def test_default_headers(self):
self.assertEqual(self.default_headers, self.connection._headers)
def test_default_headers_when_etag_validation_is_disabled(self):
self.connection.disable_etag_validation()
self.assertEqual(self.default_headers_with_etag_validation_off, self.connection._headers)
def test_default_headers_when_etag_validation_is_enabled(self):
self.connection.enable_etag_validation()
self.assertEqual(self.default_headers, self.connection._headers)
def test_default_headers_when_etag_validation_is_disabled_and_enabled(self):
self.connection.disable_etag_validation()
self.connection.enable_etag_validation()
self.assertEqual(self.default_headers, self.connection._headers)
def test_default_headers_when_etag_validation_is_enabled_and_disabled(self):
self.connection.enable_etag_validation()
self.connection.disable_etag_validation()
self.assertEqual(self.default_headers_with_etag_validation_off, self.connection._headers)
def test_headers_with_api_version_800(self):
self.connection = connection(self.host, 800)
expected_headers = self.default_headers.copy()
expected_headers['X-API-Version'] = 800
self.assertEqual(expected_headers, self.connection._headers)
@patch.object(connection, 'get')
def test_headers_with_default_api_version_800(self, mock_get):
self.connection = connection(self.host)
self.connection._apiVersion = None
mock_get.side_effect = [{'minimumVersion': 400, 'currentVersion': 1800}]
expected_version = self.connection.get_default_api_version()
self.assertEqual(expected_version, 1800)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_post_when_status_is_202_and_task_contains_taskState(self, mock_response, mock_request):
mock_request.return_value = {}
fake_task = {"taskState": "Completed"}
response = Mock(status=202)
response.read.return_value = json.dumps(fake_task).encode('utf-8')
response.getheader.return_value = ''
mock_response.return_value = response
task, body = self.connection.post('/path', self.request_body)
self.assertEqual(task, fake_task)
self.assertEqual(body, fake_task)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_post_when_status_is_202_and_response_is_not_a_task(self, mock_response, mock_request):
mock_request.return_value = {}
response = Mock(status=202)
response.read.return_value = json.dumps(self.response_body).encode('utf-8')
response.getheader.return_value = ''
mock_response.return_value = response
task, body = self.connection.post('/path', self.request_body)
self.assertEqual(task, None)
self.assertEqual(body, self.response_body)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_post_should_do_rest_call_when_status_ok(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=200)
self.connection.post('/path', self.request_body)
mock_request.assert_called_once_with('POST', '/path', self.dumped_request_body, self.default_headers)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_post_should_do_rest_calls_when_status_accepted(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=202)
self.connection.post('/path', self.request_body)
expected_calls = [call('POST', '/path', self.dumped_request_body, self.default_headers),
call('GET', '/task/uri', '', self.default_headers)]
self.assertEqual(expected_calls, mock_request.call_args_list)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_post_should_send_merged_headers_when_headers_provided(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=202)
self.connection.post('/path', self.request_body, custom_headers=self.accept_language_header)
expected_calls = [call('POST', ANY, ANY, self.merged_headers), ANY]
self.assertEqual(expected_calls, mock_request.call_args_list)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_post_should_return_body_when_status_ok(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=200)
result = self.connection.post('/path', self.response_body, custom_headers=self.accept_language_header)
expected_result = (None, self.expected_response_body)
self.assertEqual(expected_result, result)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_post_should_return_tuple_when_status_accepted(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=202)
result = self.connection.post('/path', self.response_body, custom_headers=self.accept_language_header)
expected_result = (self.expected_response_body, self.expected_response_body)
self.assertEqual(result, expected_result)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_post_should_raise_exception_when_status_internal_error(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=400)
try:
self.connection.post('/path', self.request_body)
except HPEOneViewException as e:
self.assertEqual(e.oneview_response, self.expected_response_body)
else:
self.fail()
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_post_should_raise_exception_when_status_not_found(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=404)
try:
self.connection.post('/path', self.request_body)
except HPEOneViewException as e:
self.assertEqual(e.oneview_response, self.expected_response_body)
else:
self.fail()
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_put_should_do_rest_call_when_status_ok(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=200)
self.connection.put('/path', self.request_body)
mock_request.assert_called_once_with('PUT', '/path', self.dumped_request_body, self.default_headers)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_put_should_do_rest_calls_when_status_accepted(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=202)
self.connection.put('/path', self.request_body)
expected_calls = [call('PUT', '/path', self.dumped_request_body, self.default_headers),
call('GET', '/task/uri', '', self.default_headers)]
self.assertEqual(expected_calls, mock_request.call_args_list)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_put_should_send_merged_headers_when_headers_provided(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=202)
self.connection.put('/path', self.request_body, custom_headers=self.accept_language_header)
expected_calls = [call('PUT', ANY, ANY, self.merged_headers), ANY]
self.assertEqual(expected_calls, mock_request.call_args_list)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_put_should_return_body_when_status_ok(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=200)
result = self.connection.put('/path', self.response_body, custom_headers=self.accept_language_header)
expected_result = (None, self.expected_response_body)
self.assertEqual(result, expected_result)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_put_should_return_tuple_when_status_accepted(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=202)
result = self.connection.put('/path', self.response_body, custom_headers=self.accept_language_header)
expected_result = (self.expected_response_body, self.expected_response_body)
self.assertEqual(result, expected_result)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_put_should_raise_exception_when_status_internal_error(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=400)
try:
self.connection.put('/path', self.request_body)
except HPEOneViewException as e:
self.assertEqual(e.oneview_response, self.expected_response_body)
else:
self.fail()
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_put_should_raise_exception_when_status_not_found(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=404)
try:
self.connection.put('/path', self.request_body)
except HPEOneViewException as e:
self.assertEqual(e.oneview_response, self.expected_response_body)
else:
self.fail()
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_patch_should_do_rest_call_when_status_ok(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=200)
self.connection.patch('/path', self.request_body)
mock_request.assert_called_once_with('PATCH', '/path', self.dumped_request_body, self.default_headers)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_patch_should_do_rest_calls_when_status_accepted(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=202)
self.connection.patch('/path', self.request_body)
expected_calls = [call('PATCH', '/path', self.dumped_request_body, self.default_headers),
call('GET', '/task/uri', '', self.default_headers)]
self.assertEqual(expected_calls, mock_request.call_args_list)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_patch_should_send_merged_headers_when_headers_provided(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=202)
self.connection.patch('/path', self.request_body, custom_headers=self.accept_language_header)
expected_calls = [call('PATCH', ANY, ANY, self.merged_headers), ANY]
self.assertEqual(expected_calls, mock_request.call_args_list)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_patch_should_return_body_when_status_ok(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=200)
result = self.connection.patch('/path', self.response_body, custom_headers=self.accept_language_header)
expected_result = (None, self.expected_response_body)
self.assertEqual(result, expected_result)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_patch_should_return_tuple_when_status_accepted(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=202)
result = self.connection.patch('/path', self.response_body, custom_headers=self.accept_language_header)
expected_result = (self.expected_response_body, self.expected_response_body)
self.assertEqual(result, expected_result)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_patch_should_raise_exception_when_status_internal_error(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=400)
try:
self.connection.patch('/path', self.request_body)
except HPEOneViewException as e:
self.assertEqual(e.oneview_response, self.expected_response_body)
else:
self.fail()
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_patch_should_raise_exception_when_status_not_found(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=404)
try:
self.connection.patch('/path', self.request_body)
except HPEOneViewException as e:
self.assertEqual(e.oneview_response, self.expected_response_body)
else:
self.fail()
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_delete_should_do_rest_calls_when_status_ok(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=200)
self.connection.delete('/path')
mock_request.assert_called_once_with('DELETE', '/path', json.dumps({}), self.default_headers)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_delete_should_do_rest_calls_when_status_accepted(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=202)
self.connection.delete('/path')
expected_calls = [call('DELETE', '/path', json.dumps({}), self.default_headers),
call('GET', '/task/uri', '', self.default_headers)]
self.assertEqual(expected_calls, mock_request.call_args_list)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_delete_should_send_merged_headers_when_headers_provided(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=202)
self.connection.delete('/path', custom_headers=self.accept_language_header)
expected_calls = [call('DELETE', ANY, ANY, self.merged_headers), ANY]
self.assertEqual(expected_calls, mock_request.call_args_list)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_delete_should_return_body_when_status_ok(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=200)
result = self.connection.delete('/path', custom_headers=self.accept_language_header)
expected_result = (None, self.expected_response_body)
self.assertEqual(result, expected_result)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_delete_should_return_tuple_when_status_accepted(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=202)
result = self.connection.delete('/path', custom_headers=self.accept_language_header)
expected_result = (self.expected_response_body, self.expected_response_body)
self.assertEqual(result, expected_result)
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_delete_should_raise_exception_when_status_internal_error(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=400)
try:
self.connection.delete('/path')
except HPEOneViewException as e:
self.assertEqual(e.oneview_response, self.expected_response_body)
else:
self.fail()
@patch.object(HTTPSConnection, 'request')
@patch.object(HTTPSConnection, 'getresponse')
def test_delete_should_raise_exception_when_status_not_found(self, mock_response, mock_request):
mock_request.return_value = {}
mock_response.return_value = self.__make_http_response(status=404)
try:
self.connection.delete('/path', self.request_body)
except HPEOneViewException as e:
self.assertEqual(e.oneview_response, self.expected_response_body)
else:
self.fail()
@patch.object(connection, 'do_http')
def test_task_in_response_body_without_202_status(self, mock_do_http):
# create the return values
mockedResponse = type('mockResponse', (), {'status': 200})()
mockedTaskBody = {'category': 'tasks'}
# set-up the mock
mock_do_http.return_value = (mockedResponse, mockedTaskBody)
# call the method we are testing
(testTask, testBody) = self.connection._connection__do_rest_call('PUT', '/rest/test', '{ "body": "test" }',
None)
# verify the result
self.assertEqual(mockedTaskBody, testTask)
self.assertEqual(mockedTaskBody, testBody)
@patch.object(connection, 'do_http')
def test_do_rest_call_with_304_status(self, mock_do_http):
mockedResponse = type('mockResponse', (), {'status': 304})()
mock_do_http.return_value = (mockedResponse, '{ "body": "test" }')
(testTask, testBody) = self.connection._connection__do_rest_call('PUT',
'/rest/test',
'{ "body": "test" }',
None)
self.assertIsNone(testTask)
self.assertEqual(testBody, {"body": "test"})
@patch.object(connection, 'do_http')
def test_do_rest_call_with_304_status_and_invalid_json(self, mock_do_http):
mockedResponse = type('mockResponse', (), {'status': 304})()
mock_do_http.return_value = (mockedResponse, 111)
(testTask, testBody) = self.connection._connection__do_rest_call('PUT',
'/rest/test',
111,
None)
self.assertIsNone(testTask)
self.assertEqual(testBody, 111)
@patch('time.sleep')
@patch.object(connection, 'get_connection')
def test_download_to_stream_when_status_ok(self, mock_get_conn, mock_sleep):
mock_conn = Mock()
# First attempt: Error, second attempt: successful connection
mock_get_conn.side_effect = [BadStatusLine(0), mock_conn]
mock_response = mock_conn.getresponse.return_value
# Stops at the fourth read call
mock_response.read.side_effect = ['111', '222', '333', None]
mock_response.status = 200
mock_stream = Mock()
result = self.connection.download_to_stream(mock_stream, '/rest/download.zip',
custom_headers={'custom': 'custom'})
self.assertTrue(result)
mock_stream.write.assert_has_calls([call('111'), call('222'), call('333')])
@patch.object(connection, 'get_connection')
def test_download_to_stream_handling_of_status_302(self, mock_get_conn):
# Mocking two responses as the first response would be redirect status 302 with header
# having location to download. Second response would have 200
mock_redirect_resp = Mock(status=302)
mock_redirect_resp.getheader.return_value = "/redirect/download.zip"
mock_resp = Mock(status=200)
mock_resp.read.side_effect = ["Something", None]
mock_conn = Mock()
mock_get_conn.return_value = mock_conn
mock_conn.getresponse.side_effect = [mock_redirect_resp, mock_resp]
mock_stream = Mock()
result = self.connection.download_to_stream(mock_stream, '/rest/download.zip')
mock_redirect_resp.getheader.assert_has_calls([call('Location')])
mock_stream.write.assert_has_calls([call("Something")])
self.assertTrue(result)
@patch('time.sleep')
@patch.object(connection, 'get_connection')
def test_download_to_stream_when_error_status_with_response_body(self, mock_get_conn, mock_sleep):
mock_conn = Mock()
mock_get_conn.return_value = mock_conn
mock_response = mock_conn.getresponse.return_value
mock_response.read.return_value = json.dumps('error message').encode('utf-8')
mock_response.status = 500
mock_stream = Mock()
try:
self.connection.download_to_stream(mock_stream, '/rest/download.zip')
except HPEOneViewException as e:
self.assertEqual(e.msg, 'error message')
else:
self.fail()
@patch('time.sleep')
@patch.object(connection, 'get_connection')
def test_download_to_stream_when_error_status_with_decode_error(self, mock_get_conn, mock_sleep):
mock_conn = Mock()
mock_get_conn.return_value = mock_conn
mock_response = mock_conn.getresponse.return_value
mock_response.read.return_value = json.dumps('error message').encode('utf-8')
mock_response.read.decode.side_effect = UnicodeDecodeError('sn33af', b"", 42, 43, 'ths239sn')
mock_response.status = 500
mock_stream = Mock()
try:
self.connection.download_to_stream(mock_stream, '/rest/download.zip')
except HPEOneViewException as e:
self.assertEqual(e.msg, 'error message')
else:
self.fail()
@patch('time.sleep')
@patch.object(connection, 'get_connection')
def test_download_to_stream_when_error_status_with_empty_body(self, mock_get_conn, mock_sleep):
mock_conn = Mock()
mock_get_conn.return_value = mock_conn
mock_response = mock_conn.getresponse.return_value
mock_response.read.return_value = json.dumps('').encode('utf-8')
mock_response.status = 500
mock_stream = Mock()
try:
self.connection.download_to_stream(mock_stream, '/rest/download.zip')
except HPEOneViewException as e:
self.assertEqual(e.msg, 'Error 500')
else:
self.fail()
@patch.object(connection, 'get_connection')
def test_download_to_stream_with_timeout_error(self, mock_get_connection):
mock_conn = mock_get_connection.return_value = Mock()
mock_response = Mock()
mock_conn.getresponse.side_effect = [HTTPException('timed out'), mock_response]
mock_stream = Mock()
with self.assertRaises(HPEOneViewException) as context:
resp, body = self.connection.download_to_stream(mock_stream, '/rest/download.zip')
self.assertTrue('timed out' in context.exception.msg)
@patch.object(mmap, 'mmap')
@patch.object(shutil, 'copyfileobj')
@patch.object(os.path, 'getsize')
@patch.object(os, 'remove')
def test_post_multipart_should_put_request(self, mock_rm, mock_path_size, mock_copy, mock_mmap):
self.__prepare_connection_to_post_multipart()
mock_mmap.return_value = self.__create_fake_mapped_file()
self.connection.post_multipart(uri='/rest/resources/',
fields=None,
files="/a/path/filename.zip",
baseName="archive.zip")
internal_conn = self.connection.get_connection.return_value
internal_conn.putrequest.assert_called_once_with('POST', '/rest/resources/')
@patch.object(mmap, 'mmap')
@patch.object(shutil, 'copyfileobj')
@patch.object(os.path, 'getsize')
@patch.object(os, 'remove')
def test_post_multipart_should_put_headers(self, mock_rm, mock_path_size, mock_copy, mock_mmap):
self.__prepare_connection_to_post_multipart()
mock_mmap.return_value = self.__create_fake_mapped_file()
mock_path_size.return_value = 2621440 # 2.5 MB
self.connection.post_multipart(uri='/rest/resources/',
fields=None,
files="/a/path/filename.zip",
baseName="archive.zip")
expected_putheader_calls = [
call('uploadfilename', 'archive.zip'),
call('auth', 'LTIxNjUzMjc0OTUzzHoF7eEkZLEUWVA-fuOZP4VGA3U8e67E'),
call('Content-Type', 'multipart/form-data; boundary=----------ThIs_Is_tHe_bouNdaRY_$'),
call('Content-Length', 2621440),
call('X-API-Version', 800)]
internal_conn = self.connection.get_connection.return_value
internal_conn.putheader.assert_has_calls(expected_putheader_calls)
@patch.object(mmap, 'mmap')
@patch.object(shutil, 'copyfileobj')
@patch.object(os.path, 'getsize')
@patch.object(os, 'remove')
def test_post_multipart_should_read_file_in_chunks_of_1mb(self, mock_rm, mock_path_size, mock_copy, mock_mmap):
self.__prepare_connection_to_post_multipart()
mock_mmap.return_value = self.__create_fake_mapped_file()
self.connection.post_multipart(uri='/rest/resources/',
fields=None,
files="/a/path/filename.zip",
baseName="archive.zip")
expected_mmap_read_calls = [
call(1048576),
call(1048576),
call(1048576)]
mock_mmap.return_value.read.assert_has_calls(expected_mmap_read_calls)
@patch.object(mmap, 'mmap')
@patch.object(shutil, 'copyfileobj')
@patch.object(os.path, 'getsize')
@patch.object(os, 'remove')
def test_post_multipart_should_send_file_in_chuncks_of_1mb(self, mock_rm, mock_path_size, mock_copy, mock_mmap):
self.__prepare_connection_to_post_multipart()
mock_mmap.return_value = self.__create_fake_mapped_file()
self.connection.post_multipart(uri='/rest/resources/',
fields=None,
files="/a/path/filename.zip",
baseName="archive.zip")
expected_conn_send_calls = [
call('data chunck 1'),
call('data chunck 2'),
call('data chunck 3')]
internal_conn = self.connection.get_connection.return_value
internal_conn.send.assert_has_calls(expected_conn_send_calls)
@patch.object(mmap, 'mmap')
@patch.object(shutil, 'copyfileobj')
@patch.object(os.path, 'getsize')
@patch.object(os, 'remove')
def test_post_multipart_should_remove_temp_encoded_file(self, mock_rm, mock_path_size, mock_copy, mock_mmap):
self.__prepare_connection_to_post_multipart()
mock_mmap.return_value = self.__create_fake_mapped_file()
self.connection.post_multipart(uri='/rest/resources/',
fields=None,
files="/a/path/filename.zip",
baseName="archive.zip")
mock_rm.assert_called_once_with('/a/path/filename.zip.b64')
@patch.object(mmap, 'mmap')
@patch.object(shutil, 'copyfileobj')
@patch.object(os.path, 'getsize')
@patch.object(os, 'remove')
def test_post_multipart_should_raise_exception_when_response_status_400(self, mock_rm, mock_path_size, mock_copy,
mock_mmap):
self.__prepare_connection_to_post_multipart(response_status=400)
mock_mmap.return_value = self.__create_fake_mapped_file()
try:
self.connection.post_multipart(uri='/rest/resources/',
fields=None,
files="/a/path/filename.zip",
baseName="archive.zip")
except HPEOneViewException as e:
self.assertEqual(e.msg, "An error occurred.")
else:
self.fail()
@patch.object(mmap, 'mmap')
@patch.object(shutil, 'copyfileobj')
@patch.object(os.path, 'getsize')
@patch.object(os, 'remove')
def test_post_multipart_should_return_response_and_body_when_response_status_200(self, mock_rm, mock_path_size,
mock_copy, mock_mmap):
self.__prepare_connection_to_post_multipart()
mock_mmap.return_value = self.__create_fake_mapped_file()
response, body = self.connection.post_multipart(uri='/rest/resources/',
fields=None,
files="/a/path/filename.zip",
baseName="archive.zip")
self.assertEqual(body, self.expected_response_body)
self.assertEqual(response.status, 200)
@patch.object(mmap, 'mmap')
@patch.object(shutil, 'copyfileobj')
@patch.object(os.path, 'getsize')
@patch.object(os, 'remove')
@patch.object(json, 'loads')
def test_post_multipart_should_handle_json_load_exception(self, mock_json_loads, mock_rm, mock_path_size, mock_copy,
mock_mmap):
self.__prepare_connection_to_post_multipart()
mock_mmap.return_value = self.__create_fake_mapped_file()
mock_json_loads.side_effect = ValueError("Invalid JSON")
response, body = self.connection.post_multipart(uri='/rest/resources/',
fields=None,
files="/a/path/filename.zip",
baseName="archive.zip")
self.assertTrue(body)
self.assertEqual(response.status, 200)
@patch.object(connection, 'post_multipart')
def test_post_multipart_with_response_handling_when_status_202_without_task(self, mock_post_multipart):
mock_response = Mock(status=202)
mock_response.getheader.return_value = None
mock_post_multipart.return_value = mock_response, "content"
task, body = self.connection.post_multipart_with_response_handling("uri", "filepath", "basename")
self.assertFalse(task)
self.assertEqual(body, "content")
@patch.object(connection, 'post_multipart')
@patch.object(connection, 'get')
def test_post_multipart_with_response_handling_when_status_202_with_task(self, mock_get, mock_post_multipart):
fake_task = {"category": "tasks"}
mock_response = Mock(status=202)
mock_response.getheader.return_value = "/rest/tasks/taskid"
mock_post_multipart.return_value = mock_response, "content"
mock_get.return_value = fake_task
task, body = self.connection.post_multipart_with_response_handling("uri", "filepath", "basename")
self.assertEqual(task, fake_task)
self.assertEqual(body, "content")
@patch.object(connection, 'post_multipart')
def test_post_multipart_with_response_handling_when_status_200_and_body_is_task(self, mock_post_multipart):
fake_task = {"category": "tasks"}
mock_post_multipart.return_value = Mock(status=200), fake_task
task, body = self.connection.post_multipart_with_response_handling("uri", "filepath", "basename")
self.assertEqual(task, fake_task)
self.assertEqual(body, fake_task)
@patch.object(connection, 'post_multipart')
def test_post_multipart_with_response_handling_when_status_200_and_body_is_not_task(self, mock_post_multipart):
mock_post_multipart.return_value = Mock(status=200), "content"
task, body = self.connection.post_multipart_with_response_handling("uri", "filepath", "basename")
self.assertFalse(task)
self.assertEqual(body, "content")
@patch.object(connection, 'get_connection')
def test_do_http_with_invalid_unicode(self, mock_get_connection):
mock_conn = mock_get_connection.return_value = Mock()
mock_conn.getresponse.return_value = Mock()
mock_conn.getresponse.return_value.read.side_effect = UnicodeDecodeError("utf8", b"response", 0, 4, "reason")
_, body = self.connection.do_http('POST', '/rest/test', 'body')
self.assertEqual(body, '')
mock_conn.request.assert_called_once_with('POST', '/rest/test', 'body',
{'Content-Type': 'application/json',
'X-API-Version': 800, 'Accept': 'application/json'})
mock_conn.close.assert_called_once()
@patch.object(connection, 'get_connection')
def test_do_http_with_invalid_json_return(self, mock_get_connection):
mock_conn = mock_get_connection.return_value = Mock()
mock_conn.getresponse.return_value = Mock()
mock_conn.getresponse.return_value.read.return_value = b"response data"
resp, body = self.connection.do_http('POST', '/rest/test', 'body')
self.assertEqual(body, 'response data')
mock_conn.request.assert_called_once_with('POST', '/rest/test', 'body',
{'Content-Type': 'application/json',
'X-API-Version': 800, 'Accept': 'application/json'})
mock_conn.close.assert_called_once()
@patch.object(connection, 'get_connection')
def test_do_http_with_bad_status_line(self, mock_get_connection):
mock_conn = mock_get_connection.return_value = Mock()
# First attempt: Error, second attempt: successful response
mock_response = Mock()
mock_conn.getresponse.side_effect = [BadStatusLine(0), mock_response]
# Stops at the fourth read call
mock_response.read.return_value = b"response data"
mock_response.status = 200
with patch('time.sleep'):
resp, body = self.connection.do_http('POST', '/rest/test', 'body')
self.assertEqual(body, 'response data')
mock_conn.request.assert_called_with('POST', '/rest/test', 'body',
{'Content-Type': 'application/json',
'X-API-Version': 800,
'Accept': 'application/json'})
mock_conn.close.assert_has_calls([call(), call()])
@patch.object(connection, 'get_connection')
def test_do_http_with_timeout_error(self, mock_get_connection):
mock_conn = mock_get_connection.return_value = Mock()
mock_response = Mock()
mock_conn.getresponse.side_effect = [HTTPException('timed out'), mock_response]
with self.assertRaises(HPEOneViewException) as context:
resp, body = self.connection.do_http('POST', '/rest/test', 'body')
self.assertTrue('timed out' in context.exception.msg)
@patch.object(connection, 'get')
def test_get_by_uri(self, mock_get):
uri = "/rest/uri"
self.connection.get_by_uri(uri)
mock_get.assert_called_once_with(uri)
def test_make_url(self):
url = self.connection.make_url('/test/path')
self.assertEqual(url, url)
@patch.object(connection, 'get')
@patch.object(connection, 'post')
def test_login(self, mock_post, mock_get):
mock_get.side_effect = [{'minimumVersion': 800, 'currentVersion': 1000}]
mock_post.return_value = {'cat': 'task'}, {'sessionID': '123'}
self.connection.login({})
self.assertEqual(self.connection.get_session_id(), '123')
self.assertEqual(self.connection.get_session(), True)
@patch.object(connection, 'get')
def test_login_catches_exceptions_as_hpeOneView(self, mock_get):
mock_get.side_effect = [Exception('test')]
with self.assertRaises(HPEOneViewException):
self.connection.login({})
@patch.object(connection, 'get')
@patch.object(connection, 'post')
def test_login_with_exception_in_post(self, mock_post, mock_get):
mock_get.side_effect = [{'minimumVersion': 800, 'currentVersion': 1000}]
mock_post.side_effect = HPEOneViewException("Failed")
self.assertRaises(HPEOneViewException, self.connection.login, {})
@patch.object(connection, 'get')
@patch.object(connection, 'put')
def test_login_sessionID(self, mock_put, mock_get):
mock_get.side_effect = [{'minimumVersion': 800, 'currentVersion': 1000}]
mock_put.return_value = {'cat': 'task'}, {'sessionID': '123'}
self.connection.login({"sessionID": "123"})
self.assertEqual(self.connection.get_session_id(), '123')
self.assertEqual(self.connection.get_session(), True)
@patch.object(connection, 'get')
@patch.object(connection, 'put')
def test_login_username_password_sessionID(self, mock_put, mock_get):
mock_get.side_effect = [{'minimumVersion': 800, 'currentVersion': 1000}]
mock_put.return_value = {'cat': 'task'}, {'sessionID': '123'}
self.connection.login({"userName": "administrator", "password": "", "sessionID": "123"})
self.assertEqual(self.connection.get_session_id(), '123')
self.assertEqual(self.connection.get_session(), True)
@patch.object(connection, 'get')
@patch.object(connection, 'put')
def test_login_with_exception_in_put(self, mock_put, mock_get):
mock_get.side_effect = [{'minimumVersion': 800, 'currentVersion': 400}]
mock_put.side_effect = HPEOneViewException("Failed")
self.assertRaises(HPEOneViewException, self.connection.login, {"sessionID": "123"})
@patch.object(connection, 'get')
@patch.object(connection, 'put')
def test_login_with_exception_in_put_username_password_sessionID(self, mock_put, mock_get):
mock_get.side_effect = [{'minimumVersion': 800, 'currentVersion': 400}]
mock_put.side_effect = HPEOneViewException("Failed")
self.assertRaises(HPEOneViewException, self.connection.login, {"userName": "administrator",
"password": "", "sessionID": "123"})
@patch.object(connection, 'get')
def test_validate_version_exceeding_minimum(self, mock_get):
self.connection._apiVersion = 800
mock_get.side_effect = [{'minimumVersion': 400, 'currentVersion': 400}]
self.assertRaises(HPEOneViewException, self.connection.validateVersion)
@patch.object(connection, 'get')
def test_validate_version_exceeding_current(self, mock_get):
self.connection._apiVersion = 400
mock_get.side_effect = [{'minimumVersion': 800, 'currentVersion': 400}]
self.assertRaises(HPEOneViewException, self.connection.validateVersion)
@patch.object(shutil, 'copyfileobj')
@patch.object(connection, '_open')
def test_encode_multipart_formdata(self, mock_open, mock_copyfileobj):
mock_in = Mock()
mock_out = Mock()
mock_open.side_effect = [mock_in, mock_out]
self.connection.encode_multipart_formdata('', "/a/path/filename.zip", 'filename.zip')
mock_open.assert_has_calls([call('/a/path/filename.zip', 'rb'),
call('/a/path/filename.zip.b64', 'wb')])
mock_out.write.assert_has_calls(
[call(bytearray(b'------------ThIs_Is_tHe_bouNdaRY_$\r\n')),
call(bytearray(
b'Content-Disposition: form-data; name="file"; filename="filename.zip"\r\n')),
call(bytearray(b'Content-Type: application/octet-stream\r\n')),
call(bytearray(b'\r\n')),
call(bytearray(b'\r\n')),
call(bytearray(b'------------ThIs_Is_tHe_bouNdaRY_$--\r\n')),
call(bytearray(b'\r\n'))])
mock_in.close.assert_called_once()
mock_out.close.assert_called_once()
def test_get_connection_ssl_trust_all(self):
conn = self.connection.get_connection()
self.assertEqual(conn.host, '127.0.0.1')
self.assertEqual(conn.port, 443)
self.assertEqual(conn._context.protocol, ssl.PROTOCOL_TLSv1_2)
def test_get_connection_ssl_trust_all_with_proxy(self):
self.connection.set_proxy('10.0.0.1', 3128)
conn = self.connection.get_connection()
self.assertEqual(conn.host, '10.0.0.1')
self.assertEqual(conn.port, 3128)
self.assertEqual(conn._context.protocol, ssl.PROTOCOL_TLSv1_2)
@patch.object(ssl.SSLContext, 'load_verify_locations')
def test_get_connection_trusted_ssl_bundle_with_proxy(self, mock_lvl):
self.connection.set_proxy('10.0.0.1', 3128)
self.connection.set_trusted_ssl_bundle('/test')
conn = self.connection.get_connection()
self.assertEqual(conn.host, '10.0.0.1')
self.assertEqual(conn.port, 3128)
self.assertEqual(conn._context.protocol, ssl.PROTOCOL_TLSv1_2)
@patch.object(ssl.SSLContext, 'load_verify_locations')
def test_get_connection_trusted_ssl_bundle(self, mock_lvl):
self.connection.set_trusted_ssl_bundle('/test')
conn = self.connection.get_connection()
self.assertEqual(conn.host, '127.0.0.1')
self.assertEqual(conn.port, 443)
self.assertEqual(conn._context.protocol, ssl.PROTOCOL_TLSv1_2)
if __name__ == '__main__':
unittest.main()
| 43.889616 | 120 | 0.671427 | 5,383 | 46,918 | 5.514211 | 0.065205 | 0.048917 | 0.052555 | 0.02712 | 0.851127 | 0.828488 | 0.807364 | 0.783546 | 0.765017 | 0.744904 | 0 | 0.015142 | 0.220193 | 46,918 | 1,068 | 121 | 43.930712 | 0.796157 | 0.022337 | 0 | 0.662864 | 0 | 0.001267 | 0.097674 | 0.009229 | 0 | 0 | 0 | 0 | 0.144487 | 1 | 0.103929 | false | 0.00507 | 0.013942 | 0 | 0.121673 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9ac0858db080ee521a93e453e909a3341da4d7af | 33 | py | Python | models/ops/depthconv/functions/__init__.py | aksh1501/DepthAware_CNN_edit | 3f6a859ef3f3b7bba1201dc087c860a22d1cd258 | [
"MIT"
] | null | null | null | models/ops/depthconv/functions/__init__.py | aksh1501/DepthAware_CNN_edit | 3f6a859ef3f3b7bba1201dc087c860a22d1cd258 | [
"MIT"
] | null | null | null | models/ops/depthconv/functions/__init__.py | aksh1501/DepthAware_CNN_edit | 3f6a859ef3f3b7bba1201dc087c860a22d1cd258 | [
"MIT"
] | null | null | null | from .depthconv import depth_conv | 33 | 33 | 0.878788 | 5 | 33 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 33 | 1 | 33 | 33 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9ad569d64c7bfa36518a7bd5bad6bfcea1e5fef2 | 93 | py | Python | Test.py | csy1993/PythonLeetcode | 98fd9b1639626459fbf81bf94727775d39248dde | [
"Apache-2.0"
] | null | null | null | Test.py | csy1993/PythonLeetcode | 98fd9b1639626459fbf81bf94727775d39248dde | [
"Apache-2.0"
] | null | null | null | Test.py | csy1993/PythonLeetcode | 98fd9b1639626459fbf81bf94727775d39248dde | [
"Apache-2.0"
] | null | null | null | """
* @File: Test.py
* @Author: CSY - 25809
* @Date: 2019/8/27 - 19:11
* @Project: Test
"""
| 13.285714 | 26 | 0.537634 | 14 | 93 | 3.571429 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.216216 | 0.204301 | 93 | 6 | 27 | 15.5 | 0.459459 | 0.903226 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b1360c5575518d5b9e61c15fa171a87b9d334abe | 81 | py | Python | tests/conftest.py | ketgo/nameko-mongoengine | 94b4a41dfa845cec8b48f874c0b64658a1c4bef6 | [
"Apache-2.0"
] | 2 | 2019-12-06T17:51:44.000Z | 2020-02-20T22:38:38.000Z | tests/conftest.py | ketgo/nameko-mongoengine | 94b4a41dfa845cec8b48f874c0b64658a1c4bef6 | [
"Apache-2.0"
] | 1 | 2022-03-07T02:32:45.000Z | 2022-03-07T06:45:43.000Z | tests/conftest.py | ketgo/nameko-mongoengine | 94b4a41dfa845cec8b48f874c0b64658a1c4bef6 | [
"Apache-2.0"
] | null | null | null | # Nameko requires eventlet monkey patch
import eventlet
eventlet.monkey_patch()
| 16.2 | 39 | 0.82716 | 10 | 81 | 6.6 | 0.6 | 0.424242 | 0.575758 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123457 | 81 | 4 | 40 | 20.25 | 0.929577 | 0.45679 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b1389bfb3d0cad3b7d6d35484fe3d0d5d3308972 | 39 | py | Python | hbussi/sunnify.py | hannahbus/hbussi | 31fbbf433a4511191850bd0221ec279a775e0cf0 | [
"MIT"
] | 1 | 2021-05-20T08:01:40.000Z | 2021-05-20T08:01:40.000Z | hbussi/sunnify.py | hannahbus/hbussi | 31fbbf433a4511191850bd0221ec279a775e0cf0 | [
"MIT"
] | null | null | null | hbussi/sunnify.py | hannahbus/hbussi | 31fbbf433a4511191850bd0221ec279a775e0cf0 | [
"MIT"
] | null | null | null | def sunshine():
return('whatever') | 19.5 | 22 | 0.641026 | 4 | 39 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 39 | 2 | 22 | 19.5 | 0.78125 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
b14aa378e6f860dc5390afde3148152e61315014 | 63,331 | py | Python | autotest/smoother_tests.py | jbellino-usgs/pyemu | abcff190f517ac068298e5bbefea7026046d4830 | [
"BSD-3-Clause"
] | null | null | null | autotest/smoother_tests.py | jbellino-usgs/pyemu | abcff190f517ac068298e5bbefea7026046d4830 | [
"BSD-3-Clause"
] | null | null | null | autotest/smoother_tests.py | jbellino-usgs/pyemu | abcff190f517ac068298e5bbefea7026046d4830 | [
"BSD-3-Clause"
] | null | null | null | import os
if not os.path.exists("temp"):
os.mkdir("temp")
def henry_setup():
import os
import pyemu
pst = pyemu.Pst(os.path.join("smoother","henry_pc","pest.pst"))
par = pst.parameter_data
par.loc[:,"parlbnd"] = 20.0
par.loc[:,"parubnd"] = 2000.0
par.loc["mult1","parlbnd"] = 0.9
par.loc["mult1","parubnd"] = 1.1
# obs = pst.observation_data
# head_groups = obs.groupby(obs.apply(lambda x: x.obgnme=="head" and x.weight>0.0, axis=1)).groups[True]
# obs.loc[head_groups,"weight"] = 1.0
# conc_groups = obs.groupby(obs.apply(lambda x: x.obgnme=="conc" and x.weight>0.0, axis=1)).groups[True]
# obs.loc[conc_groups,"weight"] = 0.5
pst.pestpp_options["sweep_parameter_csv_file"] = "sweep_in.csv"
pst.write(pst.filename.replace("pest.pst","henry.pst"))
def henry():
import os
import pyemu
os.chdir(os.path.join("smoother", "henry_pc"))
csv_files = [f for f in os.listdir('.') if f.endswith(".csv")]
[os.remove(csv_file) for csv_file in csv_files]
pst = pyemu.Pst(os.path.join("henry.pst"))
es = pyemu.EnsembleSmoother(pst, num_slaves=15,verbose="ies.log")
es.initialize(210, init_lambda=1.0)
for i in range(10):
es.update(lambda_mults=[0.2,5.0],run_subset=45)
os.chdir(os.path.join("..", ".."))
def henry_plot():
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
import pandas as pd
from pyemu import Pst
d = os.path.join("smoother","henry_pc")
pst = Pst(os.path.join(d,"henry.pst"))
plt_dir = os.path.join(d,"plot")
if not os.path.exists(plt_dir):
os.mkdir(plt_dir)
par_files = [os.path.join(d,f) for f in os.listdir(d) if "parensemble." in f
and ".png" not in f]
par_dfs = [pd.read_csv(par_file,index_col=0).apply(np.log10) for par_file in par_files]
par_names = ["mult1"]
mx = (pst.parameter_data.loc[:,"parubnd"] * 1.1).apply(np.log10)
mn = (pst.parameter_data.loc[:,"parlbnd"] * 0.9).apply(np.log10)
obj_df = pd.read_csv(os.path.join(d,"henry.pst.iobj.csv"),index_col=0)
real_cols = [col for col in obj_df.columns if col.startswith("0")]
obj_df.loc[:,real_cols] = obj_df.loc[:,real_cols].apply(np.log10)
obj_df.loc[:,"mean"] = obj_df.loc[:,"mean"].apply(np.log10)
obj_df.loc[:, "std"] = obj_df.loc[:, "std"].apply(np.log10)
fig = plt.figure(figsize=(20, 10))
ax = plt.subplot(111)
axt = plt.twinx()
obj_df.loc[:, real_cols].plot(ax=ax, lw=0.5, color="0.5", alpha=0.5, legend=False)
ax.plot(obj_df.index, obj_df.loc[:, "mean"], 'b', lw=2.5,marker='.',markersize=5)
#ax.fill_between(obj_df.index, obj_df.loc[:, "mean"] - (1.96 * obj_df.loc[:, "std"]),
# obj_df.loc[:, "mean"] + (1.96 * obj_df.loc[:, "std"]),
# facecolor="b", edgecolor="none", alpha=0.25)
axt.plot(obj_df.index,obj_df.loc[:,"lambda"],"k",dashes=(2,1),lw=2.5)
ax.set_ylabel("log$_10$ phi")
axt.set_ylabel("lambda")
ax.set_title("total runs:{0}".format(obj_df.total_runs.max()))
plt.savefig(os.path.join(plt_dir,"iobj.pdf"))
plt.close()
with PdfPages(os.path.join(plt_dir,"parensemble.pdf")) as pdf:
for par_file,par_df in zip(par_files,par_dfs):
print(par_file)
fig = plt.figure(figsize=(20,10))
plt.figtext(0.5,0.975,par_file,ha="center")
axes = [plt.subplot(1,1,i+1) for i in range(len(par_names))]
for par_name,ax in zip(par_names,axes):
mean = par_df.loc[:,par_name].mean()
std = par_df.loc[:,par_name].std()
par_df.loc[:,par_name].hist(ax=ax,edgecolor="none",
alpha=0.5,grid=False)
ax.set_yticklabels([])
ax.set_title("{0}, {1:6.2f}".\
format(par_name,10.0**mean))
ax.set_xlim(mn[par_name],mx[par_name])
ylim = ax.get_ylim()
if "mult1" in par_name:
val = np.log10(1.0)
else:
val = np.log10(200.0)
ticks = ["{0:2.1f}".format(x) for x in 10.0**ax.get_xticks()]
ax.set_xticklabels(ticks,rotation=90)
ax.plot([val,val],ylim,"k-",lw=2.0)
ax.plot([mean,mean],ylim,"b-",lw=1.5)
ax.plot([mean+(2.0*std),mean+(2.0*std)],ylim,"b--",lw=1.5)
ax.plot([mean-(2.0*std),mean-(2.0*std)],ylim,"b--",lw=1.5)
pdf.savefig()
plt.close()
obs_files = [os.path.join(d,f) for f in os.listdir(d) if "obsensemble." in f
and ".png" not in f]
obs_dfs = [pd.read_csv(obs_file) for obs_file in obs_files]
#print(obs_files)
#mx = max([obs_df.obs.max() for obs_df in obs_dfs])
#mn = min([obs_df.obs.min() for obs_df in obs_dfs])
#print(mn,mx)
obs_names = pst.nnz_obs_names
obs_names.extend(["pd_one","pd_ten","pd_half"])
print(len(obs_names))
#print(obs_files)
obs_dfs = [obs_df.loc[:,obs_names] for obs_df in obs_dfs]
mx = {obs_name:max([obs_df.loc[:,obs_name].max() for obs_df in obs_dfs]) for obs_name in obs_names}
mn = {obs_name:min([obs_df.loc[:,obs_name].min() for obs_df in obs_dfs]) for obs_name in obs_names}
with PdfPages(os.path.join(plt_dir,"obsensemble.pdf")) as pdf:
for obs_file,obs_df in zip(obs_files,obs_dfs):
fig = plt.figure(figsize=(30,20))
plt.figtext(0.5,0.975,obs_file,ha="center")
print(obs_file)
axes = [plt.subplot(8,5,i+1) for i in range(len(obs_names))]
for ax,obs_name in zip(axes,obs_names):
mean = obs_df.loc[:,obs_name].mean()
std = obs_df.loc[:,obs_name].std()
obs_df.loc[:,obs_name].hist(ax=ax,edgecolor="none",
alpha=0.5,grid=False)
ax.set_yticklabels([])
#print(ax.get_xlim(),mn[obs_name],mx[obs_name])
ax.set_title("{0}, {1:6.2f}:{2:6.2f}".format(obs_name,mean,std))
ax.set_xlim(mn[obs_name],mx[obs_name])
#ax.set_xlim(0.0,20.0)
ylim = ax.get_ylim()
oval = pst.observation_data.loc[obs_name,"obsval"]
ax.plot([oval,oval],ylim,"k-",lw=2)
ax.plot([mean,mean],ylim,"b-",lw=1.5)
ax.plot([mean+(2.0*std),mean+(2.0*std)],ylim,"b--",lw=1.5)
ax.plot([mean-(2.0*std),mean-(2.0*std)],ylim,"b--",lw=1.5)
ax.set_xticklabels([])
pdf.savefig()
plt.close()
def freyberg_check_phi_calc():
import os
import pandas as pd
import pyemu
import shutil
os.chdir(os.path.join("smoother","freyberg"))
xy = pd.read_csv("freyberg.xy")
csv_files = [f for f in os.listdir('.') if f.endswith(".csv")]
[os.remove(csv_file) for csv_file in csv_files]
pst = pyemu.Pst(os.path.join("freyberg.pst"))
dia_parcov = pyemu.Cov.from_parameter_data(pst,sigma_range=6.0)
nothk_names = [pname for pname in pst.adj_par_names if "hk" not in pname]
parcov_nothk = dia_parcov.get(row_names=nothk_names)
gs = pyemu.utils.geostats.read_struct_file(os.path.join("template","structure.dat"))
print(gs.variograms[0].a,gs.variograms[0].contribution)
#gs.variograms[0].a *= 10.0
#gs.variograms[0].contribution *= 10.0
gs.nugget = 0.0
print(gs.variograms[0].a,gs.variograms[0].contribution)
full_parcov = gs.covariance_matrix(xy.x,xy.y,xy.name)
parcov = parcov_nothk.extend(full_parcov)
#print(parcov.to_pearson().x[-1,:])
pst.observation_data.loc[:,"weight"] /= 10.0
#pst.write("temp.pst")
obscov = pyemu.Cov.from_observation_data(pst)
es = pyemu.EnsembleSmoother(pst,parcov=parcov,obscov=obscov,num_slaves=1,
verbose=True)
es.initialize(num_reals=3)
print(es.parensemble.loc[:,"hkr00c07"])
pst.parameter_data.loc[:,"parval1"] = es.parensemble.iloc[0,:]
pst.observation_data.loc[pst.nnz_obs_names,"obsval"] = es.obsensemble_0.loc[0,pst.nnz_obs_names]
pst.control_data.noptmax = 0
if os.path.exists("temp"):
shutil.rmtree("temp")
shutil.copytree("template","temp")
pst.write(os.path.join("temp","temp.pst"))
os.chdir("temp")
os.system("pestpp temp.pst")
os.chdir("..")
p = pyemu.Pst(os.path.join("temp","temp.pst"))
print(p.phi)
os.chdir(os.path.join("..",".."))
def freyberg():
import os
import pandas as pd
import pyemu
os.chdir(os.path.join("smoother","freyberg"))
if not os.path.exists("freyberg.xy"):
import flopy
ml = flopy.modflow.Modflow.load("freyberg.nam",model_ws="template",
load_only=[])
xy = pd.DataFrame([(x,y) for x,y in zip(ml.sr.xcentergrid.flatten(),ml.sr.ycentergrid.flatten())],
columns=['x','y'])
names = []
for i in range(ml.nrow):
for j in range(ml.ncol ):
names.append("hkr{0:02d}c{1:02d}".format(i,j))
xy.loc[:,"name"] = names
xy.to_csv("freyberg.xy")
else:
xy = pd.read_csv("freyberg.xy")
csv_files = [f for f in os.listdir('.') if f.endswith(".csv")]
[os.remove(csv_file) for csv_file in csv_files]
pst = pyemu.Pst(os.path.join("freyberg.pst"))
dia_parcov = pyemu.Cov.from_parameter_data(pst,sigma_range=6.0)
nothk_names = [pname for pname in pst.adj_par_names if "hk" not in pname]
parcov_nothk = dia_parcov.get(row_names=nothk_names)
gs = pyemu.utils.geostats.read_struct_file(os.path.join("template","structure.dat"))
print(gs.variograms[0].a,gs.variograms[0].contribution)
#gs.variograms[0].a *= 10.0
#gs.variograms[0].contribution *= 10.0
gs.nugget = 0.0
print(gs.variograms[0].a,gs.variograms[0].contribution)
full_parcov = gs.covariance_matrix(xy.x,xy.y,xy.name)
parcov = parcov_nothk.extend(full_parcov)
#print(parcov.to_pearson().x[-1,:])
parcov.to_binary("freyberg_prior.jcb")
parcov.to_ascii("freyberg_prior.cov")
return
pst.observation_data.loc[:,"weight"] /= 10.0
pst.write("temp.pst")
obscov = pyemu.Cov.from_obsweights(os.path.join("temp.pst"))
es = pyemu.EnsembleSmoother(pst,parcov=parcov,obscov=obscov,num_slaves=20,
verbose=True)
#gs.variograms[0].a=10000
#gs.variograms[0].contribution=0.01
#gs.variograms[0].anisotropy = 10.0
# pp_df = pyemu.utils.gw_utils.pp_file_to_dataframe("points1.dat")
# parcov_hk = gs.covariance_matrix(pp_df.x,pp_df.y,pp_df.name)
# parcov_full = parcov_hk.extend(parcov_rch)
es.initialize(100,init_lambda=100.0,enforce_bounds="reset")
for i in range(10):
es.update(lambda_mults=[0.01,0.2,5.0,100.0],run_subset=20)
os.chdir(os.path.join("..",".."))
def freyberg_condor():
import os
import pandas as pd
import pyemu
os.chdir(os.path.join("smoother","freyberg"))
if not os.path.exists("freyberg.xy"):
import flopy
ml = flopy.modflow.Modflow.load("freyberg.nam",model_ws="template",
load_only=[])
xy = pd.DataFrame([(x,y) for x,y in zip(ml.sr.xcentergrid.flatten(),ml.sr.ycentergrid.flatten())],
columns=['x','y'])
names = []
for i in range(ml.nrow):
for j in range(ml.ncol ):
names.append("hkr{0:02d}c{1:02d}".format(i,j))
xy.loc[:,"name"] = names
xy.to_csv("freyberg.xy")
else:
xy = pd.read_csv("freyberg.xy")
csv_files = [f for f in os.listdir('.') if f.endswith(".csv")]
[os.remove(csv_file) for csv_file in csv_files]
pst = pyemu.Pst(os.path.join("freyberg.pst"))
dia_parcov = pyemu.Cov.from_parameter_data(pst,sigma_range=6.0)
nothk_names = [pname for pname in pst.adj_par_names if "hk" not in pname]
parcov_nothk = dia_parcov.get(row_names=nothk_names)
gs = pyemu.utils.geostats.read_struct_file(os.path.join("template","structure.dat"))
print(gs.variograms[0].a,gs.variograms[0].contribution)
#gs.variograms[0].a *= 10.0
#gs.variograms[0].contribution *= 10.0
gs.nugget = 0.0
print(gs.variograms[0].a,gs.variograms[0].contribution)
full_parcov = gs.covariance_matrix(xy.x,xy.y,xy.name)
parcov = parcov_nothk.extend(full_parcov)
#print(parcov.to_pearson().x[-1,:])
pst.observation_data.loc[:,"weight"] /= 10.0
pst.write("temp.pst")
obscov = pyemu.Cov.from_obsweights(os.path.join("temp.pst"))
es = pyemu.EnsembleSmoother(pst,parcov=parcov,obscov=obscov,num_slaves=20,
verbose=True,submit_file="freyberg.sub")
#gs.variograms[0].a=10000
#gs.variograms[0].contribution=0.01
#gs.variograms[0].anisotropy = 10.0
# pp_df = pyemu.utils.gw_utils.pp_file_to_dataframe("points1.dat")
# parcov_hk = gs.covariance_matrix(pp_df.x,pp_df.y,pp_df.name)
# parcov_full = parcov_hk.extend(parcov_rch)
es.initialize(300,init_lambda=10000.0,enforce_bounds="reset")
for i in range(10):
es.update(lambda_mults=[0.2,5.0],run_subset=40)
os.chdir(os.path.join("..",".."))
def freyberg_pars_to_array(par_df):
import numpy as np
#print(par_df.index)
real_col = par_df.columns[0]
hk_names = par_df.index.map(lambda x:x.startswith("hk"))
hk_df = par_df.loc[hk_names,:]
hk_df.loc[:,"row"] = hk_df.index.map(lambda x: int(x[3:5]))
hk_df.loc[:,"column"] = hk_df.index.map(lambda x: int(x[-2:]))
nrow,ncol = hk_df.row.max() + 1, hk_df.column.max() + 1
arr = np.zeros((nrow,ncol)) - 999.0
for r,c,v in zip(hk_df.row,hk_df.column,hk_df.loc[:,real_col]):
arr[r-1,c-1] = v
arr = np.ma.masked_where(arr==-999.,arr)
return arr
def freyberg_plot_par_seq():
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
import pandas as pd
from pyemu import Pst
d = os.path.join("smoother","freyberg")
pst = Pst(os.path.join(d,"freyberg.pst"))
plt_dir = os.path.join(d,"plot")
if not os.path.exists(plt_dir):
os.mkdir(plt_dir)
par_files = [os.path.join(d,f) for f in os.listdir(d) if "parensemble." in f
and ".png" not in f]
par_dfs = [pd.read_csv(par_file,index_col=0).apply(np.log10) for par_file in par_files]
#par_names = list(par_dfs[0].columns)
par_names = ["rch_1","rch_2"]
mx = (pst.parameter_data.loc[:,"parubnd"] * 1.1).apply(np.log10)
mn = (pst.parameter_data.loc[:,"parlbnd"] * 0.9).apply(np.log10)
f_count = 0
for par_file,par_df in zip(par_files,par_dfs):
#print(par_file)
fig = plt.figure(figsize=(4.5,3.5))
plt.figtext(0.5,0.95,"iteration {0}".format(f_count),ha="center")
axes = [plt.subplot(3,4,i+1) for i in range(12)]
arrs = []
for ireal in range(12):
arrs.append(freyberg_pars_to_array(par_df.iloc[[ireal],:].T))
amx = max([arr.max() for arr in arrs])
amn = max([arr.min() for arr in arrs])
for ireal,arr in enumerate(arrs):
axes[ireal].imshow(arr,vmax=amx,vmin=amn,interpolation="nearest")
axes[ireal].set_xticklabels([])
axes[ireal].set_yticklabels([])
plt.savefig(os.path.join(plt_dir,"par_{0:03d}.png".format(f_count)))
f_count += 1
plt.close()
bdir = os.getcwd()
os.chdir(plt_dir)
#os.system("ffmpeg -r 1 -i par_%03d.png -vcodec libx264 -pix_fmt yuv420p freyberg_pars.mp4")
os.system("ffmpeg -r 2 -i par_%03d.png -loop 0 -final_delay 100 -y freyberg_pars.gif")
os.chdir(bdir)
def freyberg_plot_obs_seq():
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
import pandas as pd
from pyemu import Pst
d = os.path.join("smoother","freyberg")
pst = Pst(os.path.join(d,"freyberg.pst"))
plt_dir = os.path.join(d,"plot")
if not os.path.exists(plt_dir):
os.mkdir(plt_dir)
obs_files = [os.path.join(d,f) for f in os.listdir(d) if "obsensemble." in f
and ".png" not in f]
obs_dfs = [pd.read_csv(obs_file) for obs_file in obs_files]
obs_names = pst.nnz_obs_names
obs_names.extend(pst.pestpp_options["forecasts"].split(',')[:-1])
print(obs_names)
print(len(obs_names))
#print(obs_files)
obs_dfs = [obs_df.loc[:,obs_names] for obs_df in obs_dfs]
mx = {obs_name:max([obs_df.loc[:,obs_name].max() for obs_df in obs_dfs]) for obs_name in obs_names}
mn = {obs_name:min([obs_df.loc[:,obs_name].min() for obs_df in obs_dfs]) for obs_name in obs_names}
f_count = 0
for obs_df in obs_dfs[1:]:
fig = plt.figure(figsize=(4.5,3.5))
plt.figtext(0.5,0.95,"iteration {0}".format(f_count),ha="center",fontsize=8)
#print(obs_file)
axes = [plt.subplot(3,4,i+1) for i in range(len(obs_names))]
for ax,obs_name in zip(axes,obs_names):
mean = obs_df.loc[:,obs_name].mean()
std = obs_df.loc[:,obs_name].std()
obs_df.loc[:,obs_name].hist(ax=ax,edgecolor="none",
alpha=0.25,grid=False)
ax.set_yticklabels([])
#print(ax.get_xlim(),mn[obs_name],mx[obs_name])
ax.set_title(obs_name,fontsize=6)
ttl = ax.title
ttl.set_position([.5, 1.00])
ax.set_xlim(mn[obs_name],mx[obs_name])
#ax.set_xlim(0.0,20.0)
ylim = ax.get_ylim()
oval = pst.observation_data.loc[obs_name,"obsval"]
ax.plot([oval,oval],ylim,"k--",lw=0.5)
#ax.plot([mean,mean],ylim,"b-",lw=0.5)
#ax.plot([mean+(2.0*std),mean+(2.0*std)],ylim,"b--",lw=0.5)
#ax.plot([mean-(2.0*std),mean-(2.0*std)],ylim,"b--",lw=0.5)
ax.set_xticks([])
ax.set_yticks([])
plt.savefig(os.path.join(plt_dir,"obs_{0:03d}.png".format(f_count)))
f_count += 1
plt.close()
bdir = os.getcwd()
os.chdir(plt_dir)
#os.system("ffmpeg -r 1 -i obs_%03d.png -vcodec libx264 -pix_fmt yuv420p freyberg_obs.mp4")
os.system("ffmpeg -r 2 -i obs_%03d.png -loop 0 -final_delay 100 -y freyberg_obs.gif")
os.chdir(bdir)
def freyberg_plot_iobj():
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
import pandas as pd
from pyemu import Pst
d = os.path.join("smoother","freyberg")
pst = Pst(os.path.join(d,"freyberg.pst"))
plt_dir = os.path.join(d,"plot")
if not os.path.exists(plt_dir):
os.mkdir(plt_dir)
obj_df = pd.read_csv(os.path.join(d, "freyberg.pst.iobj.csv"), index_col=0)
real_cols = [col for col in obj_df.columns if col.startswith("0")]
obj_df.loc[:, real_cols] = obj_df.loc[:, real_cols].apply(np.log10)
obj_df.loc[:, "mean"] = obj_df.loc[:, "mean"].apply(np.log10)
obj_df.loc[:, "std"] = obj_df.loc[:, "std"].apply(np.log10)
fig = plt.figure(figsize=(10, 5))
ax = plt.subplot(111)
obj_df.index = obj_df.total_runs
obj_df.loc[:, real_cols].plot(ax=ax, lw=0.5, color="0.5", alpha=0.5, legend=False)
ax.plot(obj_df.index, obj_df.loc[:, "mean"], 'b', lw=2.5, marker='.', markersize=5)
# ax.fill_between(obj_df.index, obj_df.loc[:, "mean"] - (1.96 * obj_df.loc[:, "std"]),
# obj_df.loc[:, "mean"] + (1.96 * obj_df.loc[:, "std"]),
# facecolor="b", edgecolor="none", alpha=0.25)
#axt = plt.twinx()
#axt.plot(obj_df.index, obj_df.loc[:, "lambda"], "k", dashes=(2, 1), lw=2.5)
pobj_df = pd.read_csv(os.path.join(d,"pest_master","freyberg.iobj"),index_col=0)
#print(pobj_df.total_phi)
#print(pobj_df.model_runs_completed)
ax.plot(pobj_df.model_runs_completed.values,pobj_df.total_phi.apply(np.log10).values,"m",lw=2.5)
#pobj_reg_df = pd.read_csv(os.path.join(d,"pest_master_reg","freyberg_reg.iobj"),index_col=0)
#ax.plot(pobj_reg_df.model_runs_completed.values,pobj_reg_df.measurement_phi.apply(np.log10).values,"m",lw=2.5)
ax.set_ylabel("log$_{10}$ $\phi$")
#axt.set_ylabel("lambda")
ax.set_xlabel("total runs")
ax.grid()
#ax.set_title("EnsembleSmoother $\phi$ summary; {0} realizations in ensemble".\
# format(obj_df.shape[1]-7))
#ax.set_xticks(obj_df.index.values)
#ax.set_xticklabels(["{0}".format(tr) for tr in obj_df.total_runs])
plt.savefig(os.path.join(plt_dir, "iobj.png"))
plt.close()
def freyberg_plot():
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
import pandas as pd
from pyemu import Pst
d = os.path.join("smoother","freyberg")
pst = Pst(os.path.join(d,"freyberg.pst"))
plt_dir = os.path.join(d,"plot")
if not os.path.exists(plt_dir):
os.mkdir(plt_dir)
obj_df = pd.read_csv(os.path.join(d, "freyberg.pst.iobj.csv"), index_col=0)
real_cols = [col for col in obj_df.columns if col.startswith("0")]
obj_df.loc[:, real_cols] = obj_df.loc[:, real_cols].apply(np.log10)
obj_df.loc[:, "mean"] = obj_df.loc[:, "mean"].apply(np.log10)
obj_df.loc[:, "std"] = obj_df.loc[:, "std"].apply(np.log10)
fig = plt.figure(figsize=(20, 10))
ax = plt.subplot(111)
obj_df.loc[:, real_cols].plot(ax=ax, lw=0.5, color="0.5", alpha=0.5, legend=False)
ax.plot(obj_df.index, obj_df.loc[:, "mean"], 'b', lw=2.5, marker='.', markersize=5)
ax.set_xticks(obj_df.index.values)
ax.set_xticklabels(["{0}".format(tr) for tr in obj_df.total_runs])
# ax.fill_between(obj_df.index, obj_df.loc[:, "mean"] - (1.96 * obj_df.loc[:, "std"]),
# obj_df.loc[:, "mean"] + (1.96 * obj_df.loc[:, "std"]),
# facecolor="b", edgecolor="none", alpha=0.25)
axt = plt.twinx()
axt.plot(obj_df.index, obj_df.loc[:, "lambda"], "k", dashes=(2, 1), lw=2.5)
ax.set_ylabel("log$_10$ $\phi$")
axt.set_ylabel("lambda")
ax.set_xlabel("total runs")
ax.set_title("EnsembleSmoother $\phi$ summary; {0} realizations in ensemble".\
format(obj_df.shape[1]-7))
plt.savefig(os.path.join(plt_dir, "iobj.pdf"))
plt.close()
par_files = [os.path.join(d,f) for f in os.listdir(d) if "parensemble." in f
and ".png" not in f]
par_dfs = [pd.read_csv(par_file,index_col=0).apply(np.log10) for par_file in par_files]
#par_names = list(par_dfs[0].columns)
par_names = ["rch_1","rch_2"]
mx = (pst.parameter_data.loc[:,"parubnd"] * 1.1).apply(np.log10)
mn = (pst.parameter_data.loc[:,"parlbnd"] * 0.9).apply(np.log10)
with PdfPages(os.path.join(plt_dir,"parensemble.pdf")) as pdf:
for par_file,par_df in zip(par_files,par_dfs):
#print(par_file)
fig = plt.figure(figsize=(20,10))
plt.figtext(0.5,0.975,par_file,ha="center")
axes = [plt.subplot(2,6,i+1) for i in range(12)]
arrs = []
for ireal in range(10):
arrs.append(freyberg_pars_to_array(par_df.iloc[[ireal],:].T))
amx = max([arr.max() for arr in arrs])
amn = max([arr.min() for arr in arrs])
for ireal,arr in enumerate(arrs):
axes[ireal].imshow(arr,vmax=amx,vmin=amn,interpolation="nearest")
for par_name,ax in zip(par_names,axes[-2:]):
mean = par_df.loc[:,par_name].mean()
std = par_df.loc[:,par_name].std()
par_df.loc[:,par_name].hist(ax=ax,edgecolor="none",
alpha=0.25,grid=False)
ax.set_yticklabels([])
ax.set_title("{0}, {1:6.2f}".\
format(par_name,10.0**mean))
ax.set_xlim(mn[par_name],mx[par_name])
ylim = ax.get_ylim()
if "stage" in par_name:
val = np.log10(1.5)
else:
val = np.log10(2.5)
ticks = ["{0:2.1f}".format(x) for x in 10.0**ax.get_xticks()]
ax.set_xticklabels(ticks,rotation=90)
ax.plot([val,val],ylim,"k-",lw=2.0)
ax.plot([mean,mean],ylim,"b-",lw=1.5)
ax.plot([mean+(2.0*std),mean+(2.0*std)],ylim,"b--",lw=1.5)
ax.plot([mean-(2.0*std),mean-(2.0*std)],ylim,"b--",lw=1.5)
pdf.savefig()
plt.close()
obs_files = [os.path.join(d,f) for f in os.listdir(d) if "obsensemble." in f
and ".png" not in f]
obs_dfs = [pd.read_csv(obs_file) for obs_file in obs_files]
obs_names = pst.nnz_obs_names
obs_names.extend(pst.pestpp_options["forecasts"].split(',')[:-1])
print(obs_names)
print(len(obs_names))
#print(obs_files)
obs_dfs = [obs_df.loc[:,obs_names] for obs_df in obs_dfs]
mx = {obs_name:max([obs_df.loc[:,obs_name].max() for obs_df in obs_dfs]) for obs_name in obs_names}
mn = {obs_name:min([obs_df.loc[:,obs_name].min() for obs_df in obs_dfs]) for obs_name in obs_names}
with PdfPages(os.path.join(plt_dir,"obsensemble.pdf")) as pdf:
for obs_file,obs_df in zip(obs_files,obs_dfs):
fig = plt.figure(figsize=(30,40))
plt.figtext(0.5,0.975,obs_file,ha="center")
print(obs_file)
axes = [plt.subplot(3,4,i+1) for i in range(len(obs_names))]
for ax,obs_name in zip(axes,obs_names):
mean = obs_df.loc[:,obs_name].mean()
std = obs_df.loc[:,obs_name].std()
obs_df.loc[:,obs_name].hist(ax=ax,edgecolor="none",
alpha=0.25,grid=False)
ax.set_yticklabels([])
#print(ax.get_xlim(),mn[obs_name],mx[obs_name])
ax.set_title("{0}, {1:6.2f}:{2:6.2f}".format(obs_name,mean,std))
ax.set_xlim(mn[obs_name],mx[obs_name])
#ax.set_xlim(0.0,20.0)
ylim = ax.get_ylim()
oval = pst.observation_data.loc[obs_name,"obsval"]
ax.plot([oval,oval],ylim,"k-",lw=2)
ax.plot([mean,mean],ylim,"b-",lw=1.5)
ax.plot([mean+(2.0*std),mean+(2.0*std)],ylim,"b--",lw=1.5)
ax.plot([mean-(2.0*std),mean-(2.0*std)],ylim,"b--",lw=1.5)
pdf.savefig()
plt.close()
def chenoliver_setup():
import pyemu
os.chdir(os.path.join("smoother","chenoliver"))
in_file = os.path.join("par.dat")
tpl_file = in_file+".tpl"
out_file = os.path.join("obs.dat")
ins_file = out_file+".ins"
pst = pyemu.pst_utils.pst_from_io_files(tpl_file,in_file,ins_file,out_file)
par = pst.parameter_data
par.loc[:,"partrans"] = "none"
par.loc[:,"parval1"] = -2.0
par.loc[:,"parubnd"] = 20.0
par.loc[:,"parlbnd"] = -20.0
obs = pst.observation_data
obs.loc[:,"obsval"] = 48.0
obs.loc[:,"weight"] = 1.0
pst.model_command = ["python chenoliver.py"]
pst.control_data.noptmax = 0
pst.pestpp_options["sweep_parameter_csv_file"] = os.path.join("sweep_in.csv")
pst.write(os.path.join("chenoliver.pst"))
os.chdir(os.path.join("..",".."))
def chenoliver_func_plot(ax=None):
def func(par):
return ((7.0/12.0) * par**3) - ((7.0/2.0) * par**2) + (8.0 * par)
import numpy as np
import matplotlib.pyplot as plt
par = np.arange(-5.0,10.0,0.1)
obs = func(par)
if ax is None:
fig = plt.figure(figsize=(10,5))
ax = plt.subplot(111)
ax.plot(par,obs,"0.5",dashes=(3,2),lw=4.0)
ax.scatter(-2.0,func(-2.0),marker='^',s=175,color="b",label="prior mean",zorder=4)
ax.scatter(5.9,func(5.9),marker='*',s=175,color="m",label="posterior mean",zorder=4)
ax.set_xlabel("parameter value")
ax.set_ylabel("observation value")
ax.grid()
plt.savefig(os.path.join("smoother","chenoliver","function.png"))
#plt.show()
def chenoliver_plot_sidebyside():
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle as rect
import pandas as pd
d = os.path.join("smoother","chenoliver")
bins = 20
plt_dir = os.path.join(d,"plot")
if not os.path.exists(plt_dir):
os.mkdir(plt_dir)
obs_files = [os.path.join(d,f) for f in os.listdir(d) if "obsensemble." in f
and ".png" not in f]
obs_dfs = [pd.read_csv(obs_file) for obs_file in obs_files]
#print(obs_files)
omx = max([obs_df.obs.max() for obs_df in obs_dfs])
omn = min([obs_df.obs.min() for obs_df in obs_dfs])
par_files = [os.path.join(d,f) for f in os.listdir(d) if "parensemble." in f
and ".png" not in f]
par_dfs = [pd.read_csv(par_file) for par_file in par_files]
#mx = max([par_df.par.max() for par_df in par_dfs])
#mn = min([par_df.par.min() for par_df in par_dfs])
pmx = 7
pmn = -5
figsize = (10,3)
fcount = 1
for pdf, odf in zip(par_dfs,obs_dfs[1:]):
fig = plt.figure(figsize=figsize)
plt.figtext(0.5,0.95,"iteration {0}".format(fcount),ha="center",fontsize=8)
#axp = plt.subplot(1,3,1)
#axo = plt.subplot(1,3,2)
#axf = plt.subplot(1,3,3)
axp = plt.axes((0.05,0.075,0.25,0.825))
axo = plt.axes((0.375,0.075,0.25,0.825))
axf = plt.axes((0.7,0.075,0.25,0.825))
chenoliver_func_plot(axf)
pdf.par.hist(ax=axp,bins=bins,edgecolor="none",grid=False)
odf.obs.hist(ax=axo,bins=bins,edgecolor="none",grid=False)
axf.scatter(pdf.par.values,odf.obs.values,marker='.',color="c",s=100)
ylim = axf.get_ylim()
r = rect((0.0,ylim[0]),4,ylim[1]-ylim[0],facecolor='0.5',alpha=0.25)
axf.add_patch(r)
axp.set_yticks([])
axo.set_yticks([])
ylim = axp.get_ylim()
axp.plot([5.9,5.9],ylim,"k--")
r = rect((0.0,ylim[0]),4,ylim[1]-ylim[0],facecolor='0.5',alpha=0.25)
axp.add_patch(r)
ylim = axo.get_ylim()
axo.plot([48,48],ylim,"k--")
axp.set_xlim(pmn,pmx)
axo.set_xlim(omn,omx)
axp.set_title("parameter",fontsize=6)
axo.set_title("observation",fontsize=6)
axf.set_ylabel("")
axf.set_xlabel("")
axf.set_title("par vs obs",fontsize=6)
plt.savefig(os.path.join(plt_dir,"sbs_{0:03d}.png".format(fcount)))
#plt.tight_layout()
plt.close(fig)
fcount += 1
#if fcount > 15:
# break
bdir = os.getcwd()
os.chdir(plt_dir)
#os.system("ffmpeg -r 6 -i sbs_%03d.png -vcodec libx264 -pix_fmt yuv420p chenoliver.mp4")
os.system("ffmpeg -r 2 -i sbs_%03d.png -loop 0 -final_delay 100 -y chenoliver.gif")
os.chdir(bdir)
def chenoliver_obj_plot():
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
d = os.path.join("smoother","chenoliver")
plt_dir = os.path.join(d,"plot")
if not os.path.exists(plt_dir):
os.mkdir(plt_dir)
obj_df = pd.read_csv(os.path.join(d,"chenoliver.pst.iobj.csv"),index_col=0)
real_cols = [col for col in obj_df.columns if col.startswith("0")]
obj_df.loc[:,real_cols] = obj_df.loc[:,real_cols].apply(np.log10)
obj_df.loc[:,"mean"] = obj_df.loc[:,"mean"].apply(np.log10)
obj_df.loc[:, "std"] = obj_df.loc[:, "std"].apply(np.log10)
real_cols = [col for col in obj_df.columns if col.startswith("0")]
#obj_df.loc[:, real_cols] = obj_df.loc[:, real_cols].apply(np.log10)
#obj_df.loc[:, "mean"] = obj_df.loc[:, "mean"].apply(np.log10)
#obj_df.loc[:, "std"] = obj_df.loc[:, "std"].apply(np.log10)
fig = plt.figure(figsize=(10, 5))
ax = plt.subplot(111)
obj_df.loc[:, real_cols].plot(ax=ax, lw=0.5, color="0.5", alpha=0.5, legend=False)
ax.plot(obj_df.index, obj_df.loc[:, "mean"], 'b', lw=1.5, marker='.', markersize=5,label="ensemble mean")
ax.set_ylabel("log$_{10}$ $\phi$")
ax.set_xlabel("iteration")
pobj_df = pd.read_csv(os.path.join(d,"pest","chenoliver.iobj"),index_col=0)
ax.plot(pobj_df.index,pobj_df.total_phi.apply(np.log10),"m",lw=2.5,label="pest++")
#ax.legend(loc="upper left")
ax.grid()
plt.savefig(os.path.join(plt_dir, "iobj.png"))
plt.close()
def chenoliver_plot():
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
import pandas as pd
d = os.path.join("smoother","chenoliver")
bins = 20
plt_dir = os.path.join(d,"plot")
if not os.path.exists(plt_dir):
os.mkdir(plt_dir)
obs_files = [os.path.join(d,f) for f in os.listdir(d) if "obsensemble." in f
and ".png" not in f]
obs_dfs = [pd.read_csv(obs_file) for obs_file in obs_files]
#print(obs_files)
mx = max([obs_df.obs.max() for obs_df in obs_dfs])
mn = min([obs_df.obs.min() for obs_df in obs_dfs])
#print(mn,mx)
with PdfPages(os.path.join(plt_dir,"obsensemble.pdf")) as pdf:
for obs_file,obs_df in zip(obs_files,obs_dfs):
#fig = plt.figure(figsize=(10,10))
ax = plt.subplot(111)
obs_df.loc[:,["obs"]].hist(ax=ax,bins=bins,edgecolor="none")
ax.set_xlim(mn,mx)
ax.set_title("{0}".format(obs_file))
#plt.savefig(os.path.join(plt_dir,os.path.split(obs_file)[-1]+".png"))
#plt.close("all")
pdf.savefig()
plt.close()
par_files = [os.path.join(d,f) for f in os.listdir(d) if "parensemble." in f
and ".png" not in f]
par_dfs = [pd.read_csv(par_file) for par_file in par_files]
#mx = max([par_df.par.max() for par_df in par_dfs])
#mn = min([par_df.par.min() for par_df in par_dfs])
mx = 7
mn = -5
with PdfPages(os.path.join(plt_dir,"parensemble.pdf")) as pdf:
for par_file in par_files:
par_df = pd.read_csv(par_file)
fig = plt.figure(figsize=(10,10))
ax = plt.subplot(111)
par_df.loc[:,["par"]].hist(ax=ax,bins=bins,edgecolor="none")
#ax.set_xlim(-10,10)
ax.set_xlim(mn,mx)
ax.set_xticks(np.arange(mn,mx+0.25,0.25))
ax.set_xticklabels(["{0:2.2f}".format(x) for x in np.arange(mn,mx+0.25,0.25)], rotation=90)
ax.set_title("{0}".format(par_file))
#plt.savefig(os.path.join(plt_dir,os.path.split(par_file)[-1]+".png"))
#plt.close("all")
pdf.savefig()
plt.close()
def chenoliver():
import os
import numpy as np
import pyemu
os.chdir(os.path.join("smoother","chenoliver"))
csv_files = [f for f in os.listdir('.') if f.endswith(".csv") and "bak" not in f]
[os.remove(csv_file) for csv_file in csv_files]
parcov = pyemu.Cov(x=np.ones((1,1)),names=["par"],isdiagonal=True)
pst = pyemu.Pst("chenoliver.pst")
#obscov = pyemu.Cov(x=np.ones((1,1))*16.0,names=["obs"],isdiagonal=True)
obscov = pyemu.Cov(x=np.ones((1,1)),names=["obs"],isdiagonal=True)
num_reals = 100
es = pyemu.EnsembleSmoother(pst,parcov=parcov,obscov=obscov,
num_slaves=15,verbose=True)
es.initialize(num_reals=num_reals,enforce_bounds=None,init_lambda=10.0)
for it in range(25):
es.update(use_approx=False)
os.chdir(os.path.join("..",".."))
def chenoliver_existing():
import os
import numpy as np
import pyemu
os.chdir(os.path.join("smoother","chenoliver"))
csv_files = [f for f in os.listdir('.') if f.endswith(".csv") and "bak" not in f]
[os.remove(csv_file) for csv_file in csv_files]
parcov = pyemu.Cov(x=np.ones((1,1)),names=["par"],isdiagonal=True)
pst = pyemu.Pst("chenoliver.pst")
obscov = pyemu.Cov(x=np.ones((1,1))*16.0,names=["obs"],isdiagonal=True)
#obscov = pyemu.Cov(x=np.ones((1,1))*16.0,names=["obs"],isdiagonal=True)
num_reals = 100
es = pyemu.EnsembleSmoother(pst,parcov=parcov,obscov=obscov,
num_slaves=10,verbose=True)
es.initialize(num_reals=num_reals,enforce_bounds=None)
obs1 = es.obsensemble.copy()
es.parensemble_0.to_csv("paren.csv")
es.obsensemble_0.to_csv("obsen.csv")
#es = pyemu.EnsembleSmoother(pst,parcov=parcov,obscov=obscov,
# num_slaves=1,verbose=True)
es.initialize(parensemble="paren.csv",obsensemble="obsen.csv")
obs2 = es.obsensemble.copy()
print(obs1.shape,obs2.shape)
print(obs1)
print(obs2)
assert (obs1 - obs2).loc[:,"obs"].sum() == 0.0
for it in range(40):
es.update(lambda_mults=[0.1,1.0,10.0],use_approx=False,run_subset=30)
os.chdir(os.path.join("..",".."))
def chenoliver_condor():
import os
import numpy as np
import pyemu
os.chdir(os.path.join("smoother","chenoliver"))
csv_files = [f for f in os.listdir('.') if f.endswith(".csv") and "bak" not in f]
[os.remove(csv_file) for csv_file in csv_files]
parcov = pyemu.Cov(x=np.ones((1,1)),names=["par"],isdiagonal=True)
pst = pyemu.Pst("chenoliver.pst")
obscov = pyemu.Cov(x=np.ones((1,1))*16.0,names=["obs"],isdiagonal=True)
num_reals = 100
es = pyemu.EnsembleSmoother(pst,parcov=parcov,obscov=obscov,
num_slaves=10,verbose=True,
submit_file="chenoliver.sub")
es.initialize(num_reals=num_reals,enforce_bounds=None)
for it in range(40):
es.update(lambda_mults=[1.0],use_approx=True)
os.chdir(os.path.join("..",".."))
def tenpar_test():
import os
import numpy as np
import pandas as pd
import flopy
import pyemu
os.chdir(os.path.join("smoother", "10par_xsec"))
#bak_obj = pd.read_csv("iobj.bak",skipinitialspace=True)
#bak_obj_act = pd.read_csv("iobj.actual.bak")
bak_upgrade = pd.read_csv("upgrade_1.bak")
csv_files = [f for f in os.listdir('.') if f.endswith(".csv")]
[os.remove(csv_file) for csv_file in csv_files]
pst = pyemu.Pst("10par_xsec.pst")
par = pst.parameter_data
par.loc["stage", "partrans"] = "fixed"
v = pyemu.utils.ExpVario(contribution=0.25, a=60.0)
gs = pyemu.utils.GeoStruct(variograms=[v], transform="log")
par = pst.parameter_data
k_names = par.loc[par.parnme.apply(lambda x: x.startswith('k')), "parnme"]
sr = flopy.utils.SpatialReference(delc=[10], delr=np.zeros((10)) + 10.0)
cov = gs.covariance_matrix(sr.xcentergrid[0, :], sr.ycentergrid[0, :], k_names)
obs = pst.observation_data
obs.loc["h01_09", "weight"] = 100.0
obs.loc["h01_09", 'obgnme'] = "lt_test"
obs.loc["h01_09", 'obsval'] = 2.0
es = pyemu.EnsembleSmoother(pst, parcov=cov,
num_slaves=10, port=4005, verbose=True,
drop_bad_reals=14000.)
lz = es.get_localizer().to_dataframe()
# the k pars upgrad of h01_04 and h01_06 are localized
upgrad_pars = [pname for pname in lz.columns if "_" in pname and \
int(pname.split('_')[1]) > 4]
lz.loc["h01_04", upgrad_pars] = 0.0
upgrad_pars = [pname for pname in lz.columns if '_' in pname and \
int(pname.split('_')[1]) > 6]
lz.loc["h01_06", upgrad_pars] = 0.0
lz = pyemu.Matrix.from_dataframe(lz).T
es.initialize(parensemble="10par_xsec.pe.bak",obsensemble="10par_xsec.oe.bak",
restart_obsensemble="10par_xsec.oe.restart.bak",init_lambda=10000.0)
# just for force full upgrade testing for
es.iter_num = 2
es.update(lambda_mults=[.1, 1000.0],calc_only=True,use_approx=False,localizer=lz)
#obj = pd.read_csv("10par_xsec.pst.iobj.csv")
#obj_act = pd.read_csv("10par_xsec.pst.iobj.actual.csv")
upgrade = pd.read_csv("10par_xsec.pst.upgrade_1.0003.csv")
os.chdir(os.path.join("..", ".."))
# for b,n in zip([bak_obj,bak_obj_act,bak_upgrade],[obj,obj_act,upgrade]):
# print(b,n)
# d = b - n
# print(d.max(),d.min())
d = (bak_upgrade - upgrade).apply(np.abs)
assert d.max().max() < 1.0e-6
def tenpar_fixed():
import os
import numpy as np
import flopy
import pyemu
os.chdir(os.path.join("smoother","10par_xsec"))
csv_files = [f for f in os.listdir('.') if f.endswith(".csv")]
[os.remove(csv_file) for csv_file in csv_files]
pst = pyemu.Pst("10par_xsec.pst")
par = pst.parameter_data
par.loc["stage","partrans"] = "fixed"
v = pyemu.utils.ExpVario(contribution=0.25,a=60.0)
gs = pyemu.utils.GeoStruct(variograms=[v],transform="log")
par = pst.parameter_data
k_names = par.loc[par.parnme.apply(lambda x: x.startswith('k')),"parnme"]
sr = flopy.utils.SpatialReference(delc=[10],delr=np.zeros((10))+10.0)
cov = gs.covariance_matrix(sr.xcentergrid[0,:],sr.ycentergrid[0,:],k_names)
es = pyemu.EnsembleSmoother(pst,parcov=cov,
num_slaves=10,port=4005,verbose=True,
drop_bad_reals=14000.)
lz = es.get_localizer().to_dataframe()
#the k pars upgrad of h01_04 and h01_06 are localized
upgrad_pars = [pname for pname in lz.columns if "_" in pname and\
int(pname.split('_')[1]) > 4]
lz.loc["h01_04",upgrad_pars] = 0.0
upgrad_pars = [pname for pname in lz.columns if '_' in pname and \
int(pname.split('_')[1]) > 6]
lz.loc["h01_06", upgrad_pars] = 0.0
lz = pyemu.Matrix.from_dataframe(lz).T
print(lz)
es.initialize(num_reals=100,init_lambda=10000.0)
for it in range(1):
#es.update(lambda_mults=[0.1,1.0,10.0],localizer=lz,run_subset=20)
#es.update(lambda_mults=[0.1,1.0,10.0],run_subset=30)
es.update(lambda_mults=[.1,1000.0])
os.chdir(os.path.join("..",".."))
def tenpar():
import os
import numpy as np
import flopy
import pyemu
os.chdir(os.path.join("smoother","10par_xsec"))
csv_files = [f for f in os.listdir('.') if f.endswith(".csv")]
[os.remove(csv_file) for csv_file in csv_files]
pst = pyemu.Pst("10par_xsec.pst")
dia_parcov = pyemu.Cov.from_parameter_data(pst,sigma_range=6.0)
v = pyemu.utils.ExpVario(contribution=0.25,a=60.0)
gs = pyemu.utils.GeoStruct(variograms=[v],transform="log")
par = pst.parameter_data
k_names = par.loc[par.parnme.apply(lambda x: x.startswith('k')),"parnme"]
sr = flopy.utils.SpatialReference(delc=[10],delr=np.zeros((10))+10.0)
full_cov = gs.covariance_matrix(sr.xcentergrid[0,:],sr.ycentergrid[0,:],k_names)
dia_parcov.drop(list(k_names),axis=1)
cov = dia_parcov.extend(full_cov)
es = pyemu.EnsembleSmoother("10par_xsec.pst",parcov=cov,
num_slaves=10,port=4005,verbose=True,
drop_bad_reals=14000.)
lz = es.get_localizer().to_dataframe()
#the k pars upgrad of h01_04 and h01_06 are localized
upgrad_pars = [pname for pname in lz.columns if "_" in pname and\
int(pname.split('_')[1]) > 4]
lz.loc["h01_04",upgrad_pars] = 0.0
upgrad_pars = [pname for pname in lz.columns if '_' in pname and \
int(pname.split('_')[1]) > 6]
lz.loc["h01_06", upgrad_pars] = 0.0
lz = pyemu.Matrix.from_dataframe(lz).T
print(lz)
es.initialize(num_reals=100,init_lambda=10000.0)
for it in range(1):
#es.update(lambda_mults=[0.1,1.0,10.0],localizer=lz,run_subset=20)
#es.update(lambda_mults=[0.1,1.0,10.0],run_subset=30)
es.update(lambda_mults=[.1,1000.0])
os.chdir(os.path.join("..",".."))
def tenpar_opt():
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import flopy
import pyemu
os.chdir(os.path.join("smoother","10par_xsec"))
csv_files = [f for f in os.listdir('.') if f.endswith(".csv")]
[os.remove(csv_file) for csv_file in csv_files]
pst = pyemu.Pst("10par_xsec.pst")
dia_parcov = pyemu.Cov.from_parameter_data(pst,sigma_range=6.0)
v = pyemu.utils.ExpVario(contribution=0.25,a=60.0)
gs = pyemu.utils.GeoStruct(variograms=[v],transform="log")
par = pst.parameter_data
k_names = par.loc[par.parnme.apply(lambda x: x.startswith('k')),"parnme"]
sr = flopy.utils.SpatialReference(delc=[10],delr=np.zeros((10))+10.0)
full_cov = gs.covariance_matrix(sr.xcentergrid[0,:],sr.ycentergrid[0,:],k_names)
dia_parcov.drop(list(k_names),axis=1)
cov = dia_parcov.extend(full_cov)
obs = pst.observation_data
# obs.loc["h01_02","weight"] = 10.0
# obs.loc["h01_02","obgnme"] = "lt_test"
# obs.loc["h01_02", "obsval"] = 1.0
obs.loc["h01_09","weight"] = 100.0
obs.loc["h01_09",'obgnme'] = "lt_test"
obs.loc["h01_09", 'obsval'] = 2.0
print(obs)
#return()
pst.write("10par_xsec_opt.pst")
pst.write(os.path.join("template","10par_xsec_opt.pst"))
es = pyemu.EnsembleSmoother("10par_xsec_opt.pst",parcov=cov,
num_slaves=10,port=4005,verbose=True)
lz = es.get_localizer().to_dataframe()
#the k pars upgrad of h01_04 and h01_06 are localized
upgrad_pars = [pname for pname in lz.columns if "_" in pname and\
int(pname.split('_')[1]) > 4]
lz.loc["h01_04",upgrad_pars] = 0.0
upgrad_pars = [pname for pname in lz.columns if '_' in pname and \
int(pname.split('_')[1]) > 6]
lz.loc["h01_06", upgrad_pars] = 0.0
lz = pyemu.Matrix.from_dataframe(lz).T
print(lz)
mc = pyemu.MonteCarlo(pst=pst,parcov=cov)
mc.draw(300,obs=True)
es.initialize(parensemble=mc.parensemble,obsensemble=mc.obsensemble,init_lambda=10000.0)
niter=20
for it in range(niter):
#es.update(lambda_mults=[0.1,1.0,10.0],localizer=lz,run_subset=20)
#es.update(lambda_mults=[0.1,1.0,10.0],run_subset=30)
es.update(lambda_mults=[.1,1.0,10.0],run_subset=30)
oe_ieq = pd.read_csv("10par_xsec_opt.pst.obsensemble.{0:04d}.csv".format(niter))
#obs.loc["h01_09","weight"] = 0.0
es = pyemu.EnsembleSmoother("10par_xsec.pst", parcov=cov,
num_slaves=10, port=4005, verbose=True)
lz = es.get_localizer().to_dataframe()
# the k pars upgrad of h01_04 and h01_06 are localized
upgrad_pars = [pname for pname in lz.columns if "_" in pname and \
int(pname.split('_')[1]) > 4]
lz.loc["h01_04", upgrad_pars] = 0.0
upgrad_pars = [pname for pname in lz.columns if '_' in pname and \
int(pname.split('_')[1]) > 6]
lz.loc["h01_06", upgrad_pars] = 0.0
lz = pyemu.Matrix.from_dataframe(lz).T
print(lz)
es.initialize(parensemble=mc.parensemble,obsensemble=mc.obsensemble, init_lambda=10000.0)
for it in range(niter):
# es.update(lambda_mults=[0.1,1.0,10.0],localizer=lz,run_subset=20)
# es.update(lambda_mults=[0.1,1.0,10.0],run_subset=30)
es.update(lambda_mults=[.1, 1.0,10.0], run_subset=30)
oe_base = pd.read_csv("10par_xsec.pst.obsensemble.{0:04d}.csv".format(niter))
for oname in obs.obsnme:
ax = plt.subplot(111)
oe_base.loc[:,oname].hist(bins=20, ax=ax, color="0.5", alpha=0.54)
oe_ieq.loc[:,oname].hist(bins=20,ax=ax,color="b",alpha=0.5)
ax.set_xlim(oe_ieq.loc[:,oname].min()*0.75,oe_ieq.loc[:,oname].max() * 1.25)
plt.savefig(oname+".png")
plt.close("all")
#oe_base.to_csv("base.csv")
#oe_ieq.to_csv("ieq.csv")
os.chdir(os.path.join("..",".."))
def plot_10par_opt_traj():
import numpy as np
import pandas as pd
from matplotlib.backends.backend_pdf import PdfPages
import matplotlib.pyplot as plt
import pyemu
d = os.path.join("smoother","10par_xsec")
case1 = "10par_xsec.pst"
case2 = "10par_xsec_opt.pst"
files = os.listdir(d)
case1_oes = [f for f in files if case1 in f and "obsensemble" in f]
case2_oes = [f for f in files if case2 in f and "obsensemble" in f]
case1_oes = [pd.read_csv(os.path.join(d,f)) for f in case1_oes]
case2_oes = [pd.read_csv(os.path.join(d,f)) for f in case2_oes]
case1_pes = [f for f in files if case1 in f and "parensemble" in f]
case2_pes = [f for f in files if case2 in f and "parensemble" in f]
case1_pes = [pd.read_csv(os.path.join(d, f)) for f in case1_pes]
case2_pes = [pd.read_csv(os.path.join(d, f)) for f in case2_pes]
print(case1_oes)
print(case2_oes)
pst = pyemu.Pst(os.path.join(d,"10par_xsec.pst"))
with PdfPages("traj.pdf") as pdf:
for oname in pst.observation_data.obsnme:
#dfs1 = [c.loc[:,[oname]] for c in case1_oes]
df1 = pd.concat([c.loc[:,[oname]] for c in case1_oes],axis=1)
df2 = pd.concat([c.loc[:, [oname]] for c in case2_oes], axis=1)
df1.columns = np.arange(df1.shape[1])
df2.columns = np.arange(df2.shape[1])
fig = plt.figure(figsize=(10,5))
ax = plt.subplot(111)
[ax.plot(df1.columns,df1.loc[i,:],color='0.5',lw=0.2) for i in df1.index]
[ax.plot(df2.columns, df2.loc[i, :], color='b', lw=0.2) for i in df2.index]
ax.set_title(oname)
pdf.savefig()
plt.close(fig)
for pname in pst.parameter_data.parnme:
#dfs1 = [c.loc[:,[oname]] for c in case1_oes]
df1 = pd.concat([c.loc[:,[pname]] for c in case1_pes],axis=1)
df2 = pd.concat([c.loc[:, [pname]] for c in case2_pes], axis=1)
df1.columns = np.arange(df1.shape[1])
df2.columns = np.arange(df2.shape[1])
fig = plt.figure(figsize=(10,5))
ax = plt.subplot(111)
[ax.plot(df1.columns,df1.loc[i,:],color='0.5',lw=0.2) for i in df1.index]
[ax.plot(df2.columns, df2.loc[i, :], color='b', lw=0.2) for i in df2.index]
ax.set_title(pname)
pdf.savefig()
plt.close(fig)
def tenpar_restart():
import os
import numpy as np
import flopy
import pyemu
os.chdir(os.path.join("smoother","10par_xsec"))
pst = pyemu.Pst("10par_xsec.pst")
dia_parcov = pyemu.Cov.from_parameter_data(pst,sigma_range=6.0)
v = pyemu.utils.ExpVario(contribution=0.25,a=60.0)
gs = pyemu.utils.GeoStruct(variograms=[v],transform="log")
par = pst.parameter_data
k_names = par.loc[par.parnme.apply(lambda x: x.startswith('k')),"parnme"]
sr = flopy.utils.SpatialReference(delc=[10],delr=np.zeros((10))+10.0)
full_cov = gs.covariance_matrix(sr.xcentergrid[0,:],sr.ycentergrid[0,:],k_names)
dia_parcov.drop(list(k_names),axis=1)
cov = dia_parcov.extend(full_cov)
es = pyemu.EnsembleSmoother("10par_xsec.pst",parcov=cov,
num_slaves=10,port=4005,verbose=True)
lz = es.get_localizer().to_dataframe()
#the k pars upgrad of h01_04 and h01_06 are localized
upgrad_pars = [pname for pname in lz.columns if "_" in pname and\
int(pname.split('_')[1]) > 4]
lz.loc["h01_04",upgrad_pars] = 0.0
upgrad_pars = [pname for pname in lz.columns if '_' in pname and \
int(pname.split('_')[1]) > 6]
lz.loc["h01_06", upgrad_pars] = 0.0
lz = pyemu.Matrix.from_dataframe(lz).T
print(lz)
es.initialize(parensemble="par_start.csv",obsensemble="obs_start.csv",
restart_obsensemble="obs_restart.csv",init_lambda=10000.0)
for it in range(1):
#es.update(lambda_mults=[0.1,1.0,10.0],localizer=lz,run_subset=20)
es.update(lambda_mults=[0.1,1.0,10.0],run_subset=30)
os.chdir(os.path.join("..",".."))
def tenpar_failed_runs():
import os
import numpy as np
import pyemu
os.chdir(os.path.join("smoother","10par_xsec"))
#csv_files = [f for f in os.listdir('.') if f.endswith(".csv")]
#[os.remove(csv_file) for csv_file in csv_files]
pst = pyemu.Pst("10par_xsec.pst")
dia_parcov = pyemu.Cov.from_parameter_data(pst,sigma_range=6.0)
v = pyemu.utils.ExpVario(contribution=0.25,a=60.0)
gs = pyemu.utils.GeoStruct(variograms=[v],transform="log")
par = pst.parameter_data
k_names = par.loc[par.parnme.apply(lambda x: x.startswith('k')),"parnme"]
sr = pyemu.utils.SpatialReference(delc=[10],delr=np.zeros((10))+10.0)
full_cov = gs.covariance_matrix(sr.xcentergrid[0,:],sr.ycentergrid[0,:],k_names)
dia_parcov.drop(list(k_names),axis=1)
cov = dia_parcov.extend(full_cov)
es = pyemu.EnsembleSmoother("10par_xsec.pst",parcov=cov,
num_slaves=2,
verbose=True)
lz = es.get_localizer().to_dataframe()
#the k pars upgrad of h01_04 and h01_06 are localized
upgrad_pars = [pname for pname in lz.columns if "_" in pname and\
int(pname.split('_')[1]) > 4]
lz.loc["h01_04",upgrad_pars] = 0.0
upgrad_pars = [pname for pname in lz.columns if '_' in pname and \
int(pname.split('_')[1]) > 6]
lz.loc["h01_06", upgrad_pars] = 0.0
lz = pyemu.Matrix.from_dataframe(lz).T
print(lz)
#es.initialize(num_reals=10,init_lambda=10000.0)
es.initialize(parensemble="par_start.csv",obsensemble="obs_start.csv")
for it in range(10):
#es.update(lambda_mults=[0.1,1.0,10.0],localizer=lz,run_subset=20)
#es.update(lambda_mults=[0.1,1.0,10.0],run_subset=7)
es.update(use_approx=False,lambda_mults=[0.1,1.0,10.0])
os.chdir(os.path.join("..",".."))
def tenpar_plot():
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
import pandas as pd
from pyemu import Pst
d = os.path.join("smoother","10par_xsec")
pst = Pst(os.path.join(d,"10par_xsec.pst"))
plt_dir = os.path.join(d,"plot")
if not os.path.exists(plt_dir):
os.mkdir(plt_dir)
par_files = [os.path.join(d,f) for f in os.listdir(d) if "parensemble." in f
and ".png" not in f]
par_dfs = [pd.read_csv(par_file,index_col=0) for par_file in par_files]
par_names = list(par_dfs[0].columns)
#mx = (pst.parameter_data.loc[:,"parubnd"] * 1.1)
#mn = (pst.parameter_data.loc[:,"parlbnd"] * 0.9)
mx = max([pdf.max().max() for pdf in par_dfs])
num_reals_plot = 12
plot_rows = 2
plot_cols = 6
assert plot_rows * plot_cols == num_reals_plot
figsize = (20,10)
with PdfPages(os.path.join(plt_dir,"parensemble_reals.pdf")) as pdf:
for par_file,par_df in zip(par_files,par_dfs):
#print(par_file)
fig = plt.figure(figsize=figsize)
plt.figtext(0.5,0.975,par_file,ha="center")
axes = [plt.subplot(plot_rows,plot_cols,i+1) for i in range(num_reals_plot)]
for ireal in range(num_reals_plot):
real_df = par_df.iloc[ireal,:]
#print(real_df)
real_df.plot(kind="bar",ax=axes[ireal])
axes[ireal].set_ylim(0,mx.max())
pdf.savefig()
plt.close()
obj_df = pd.read_csv(os.path.join(d,"10par_xsec.pst.iobj.csv"),index_col=0)
real_cols = [col for col in obj_df.columns if col.startswith("0")]
obj_df.loc[:,real_cols] = obj_df.loc[:,real_cols].apply(np.log10)
obj_df.loc[:,"mean"] = obj_df.loc[:,"mean"].apply(np.log10)
obj_df.loc[:, "std"] = obj_df.loc[:, "std"].apply(np.log10)
fig = plt.figure(figsize=(20, 10))
ax = plt.subplot(111)
axt = plt.twinx()
obj_df.loc[:, real_cols].plot(ax=ax, lw=0.5, color="0.5", alpha=0.5, legend=False)
ax.plot(obj_df.index, obj_df.loc[:, "mean"], 'b', lw=2.5,marker='.',markersize=5)
#ax.fill_between(obj_df.index, obj_df.loc[:, "mean"] - (1.96 * obj_df.loc[:, "std"]),
# obj_df.loc[:, "mean"] + (1.96 * obj_df.loc[:, "std"]),
# facecolor="b", edgecolor="none", alpha=0.25)
axt.plot(obj_df.index,obj_df.loc[:,"lambda"],"k",dashes=(2,1),lw=2.5)
ax.set_ylabel("log$_10$ phi")
axt.set_ylabel("lambda")
ax.set_title("total runs:{0}".format(obj_df.total_runs.max()))
plt.savefig(os.path.join(plt_dir,"iobj.pdf"))
plt.close()
mx = (pst.parameter_data.loc[:,"parubnd"] * 1.1)
mn = (pst.parameter_data.loc[:,"parlbnd"] * 0.9)
with PdfPages(os.path.join(plt_dir,"parensemble.pdf")) as pdf:
for par_file,par_df in zip(par_files,par_dfs):
print(par_file)
fig = plt.figure(figsize=(20,10))
plt.figtext(0.5,0.975,par_file,ha="center")
axes = [plt.subplot(2,6,i+1) for i in range(len(par_names))]
for par_name,ax in zip(par_names,axes):
mean = par_df.loc[:,par_name].mean()
std = par_df.loc[:,par_name].std()
par_df.loc[:,par_name].hist(ax=ax,edgecolor="none",
alpha=0.5,grid=False)
ax.set_yticklabels([])
ax.set_title("{0}, {1:6.2f}".\
format(par_name,10.0**mean))
ax.set_xlim(mn[par_name],mx[par_name])
ylim = ax.get_ylim()
if "stage" in par_name:
val = 1.5
else:
val = 2.5
ticks = ["{0:2.1f}".format(x) for x in ax.get_xticks()]
ax.set_xticklabels(ticks,rotation=90)
ax.plot([val,val],ylim,"k-",lw=2.0)
ax.plot([mean,mean],ylim,"b-",lw=1.5)
ax.plot([mean+(2.0*std),mean+(2.0*std)],ylim,"b--",lw=1.5)
ax.plot([mean-(2.0*std),mean-(2.0*std)],ylim,"b--",lw=1.5)
pdf.savefig()
plt.close()
obs_files = [os.path.join(d,f) for f in os.listdir(d) if "obsensemble." in f
and ".png" not in f]
obs_dfs = [pd.read_csv(obs_file) for obs_file in obs_files]
#print(obs_files)
#mx = max([obs_df.obs.max() for obs_df in obs_dfs])
#mn = min([obs_df.obs.min() for obs_df in obs_dfs])
#print(mn,mx)
obs_names = ["h01_04","h01_06","h01_08","h02_08"]
#print(obs_files)
obs_dfs = [obs_df.loc[:,obs_names] for obs_df in obs_dfs]
mx = {obs_name:max([obs_df.loc[:,obs_name].max() for obs_df in obs_dfs]) for obs_name in obs_names}
mn = {obs_name:min([obs_df.loc[:,obs_name].min() for obs_df in obs_dfs]) for obs_name in obs_names}
with PdfPages(os.path.join(plt_dir,"obsensemble.pdf")) as pdf:
for obs_file,obs_df in zip(obs_files,obs_dfs):
fig = plt.figure(figsize=(10,10))
plt.figtext(0.5,0.975,obs_file,ha="center")
print(obs_file)
axes = [plt.subplot(2,2,i+1) for i in range(len(obs_names))]
for ax,obs_name in zip(axes,obs_names):
mean = obs_df.loc[:,obs_name].mean()
std = obs_df.loc[:,obs_name].std()
obs_df.loc[:,obs_name].hist(ax=ax,edgecolor="none",
alpha=0.5,grid=False)
ax.set_yticklabels([])
#print(ax.get_xlim(),mn[obs_name],mx[obs_name])
ax.set_title("{0}, {1:6.2f}:{2:6.2f}".format(obs_name,mean,std))
#ax.set_xlim(mn[obs_name],mx[obs_name])
ax.set_xlim(0.0,20.0)
ylim = ax.get_ylim()
oval = pst.observation_data.loc[obs_name,"obsval"]
ax.plot([oval,oval],ylim,"k-",lw=2)
ax.plot([mean,mean],ylim,"b-",lw=1.5)
ax.plot([mean+(2.0*std),mean+(2.0*std)],ylim,"b--",lw=1.5)
ax.plot([mean-(2.0*std),mean-(2.0*std)],ylim,"b--",lw=1.5)
pdf.savefig()
plt.close()
def setup_lorenz():
import os
import shutil
import pandas as pd
import pyemu
state_file = "lorenz.dat"
d = os.path.join("smoother", "lorenz","template")
dt = 1.0
prev = [1.0,1.0,1.05,dt]
if os.path.exists(d):
shutil.rmtree(d)
#os.mkdir(d)
os.makedirs(d)
df = pd.DataFrame({"variable":['x','y','z','dt']},index=['x','y','z','dt'])
df.loc[:,"prev"] = prev
df.loc[:,"new"] = prev
df.to_csv(os.path.join(d,state_file),sep=' ',index=False)
df.loc[:,"prev"] = df.variable.apply(lambda x: "~ {0} ~".format(x))
with open(os.path.join(d,state_file+".tpl"),'w') as f:
f.write("ptf ~\n")
df.to_csv(f,sep=' ',index=False)
with open(os.path.join(d,state_file+".ins"),'w') as f:
f.write("pif ~\nl1\n")
for v in df.variable:
f.write("l1 w !prev_{0}! !{0}!\n".format(v))
with open(os.path.join(d,"forward_run.py"),'w') as f:
f.write("import os\nimport numpy as np\nimport pandas as pd\n")
f.write("sigma,rho,beta = 10.0,28.0,2.66667\n")
f.write("df = pd.read_csv('{0}',delim_whitespace=True,index_col=0)\n".format(state_file))
f.write("x,y,z,dt = df.loc[:,'prev'].values\n")
f.write("df.loc['x','new'] = sigma * (y - x)\n")
f.write("df.loc['y','new'] = (rho * x) - y - (x * z)\n")
f.write("df.loc['z','new'] = (x * y) - (beta * z)\n")
f.write("df.loc[:,'new'] *= dt\n")
f.write("df.to_csv('{0}',sep=' ')\n".format(state_file))
#with open(os.path.join(d,"par.tpl"),'w') as f:
# f.write("ptf ~\n")
# f.write("dum ~ dum ~\n")
base_dir = os.getcwd()
os.chdir(d)
pst = pyemu.Pst.from_io_files(*pyemu.helpers.parse_dir_for_io_files('.'))
os.chdir(base_dir)
pst.parameter_data.loc[:,"parval1"] = prev
pst.parameter_data.loc['y',"parlbnd"] = -40.0
pst.parameter_data.loc['y', "parubnd"] = 40.0
pst.parameter_data.loc['x', "parlbnd"] = -40.0
pst.parameter_data.loc['x', "parubnd"] = 40.0
pst.parameter_data.loc['z', "parlbnd"] = 0.0
pst.parameter_data.loc['z', "parubnd"] = 50.0
pst.parameter_data.loc[:,"partrans"] = "none"
pst.parameter_data.loc['dt','partrans'] = 'fixed'
pst.observation_data.loc[:,"weight"] = 0.0
pst.observation_data.loc[['x','y','z'],'weight'] = 1.0
pst.model_command = "python forward_run.py"
pst.pestpp_options["lambda_scale_fac"] = 1.0
pst.pestpp_options["upgrade_augment"] = "false"
pst.control_data.noptmax = 10
pst.write(os.path.join(d,"lorenz.pst"))
print(pst.parameter_data)
pyemu.helpers.run("pestpp lorenz.pst",cwd=d)
if __name__ == "__main__":
#setup_lorenz()
#henry_setup()
#henry()
#henry_plot()
#freyberg()
#freyberg_plot()
#freyberg_plot_iobj()
#freyberg_plot_par_seq()
#freyberg_plot_obs_seq()
#chenoliver_func_plot()
#chenoliver_plot_sidebyside()
#chenoliver_obj_plot()
#chenoliver_setup()
#chenoliver_condor()
#chenoliver()
#chenoliver_existing()
#chenoliver_plot()
#chenoliver_func_plot()
#chenoliver_plot_sidebyside()
#chenoliver_obj_plot()
#tenpar_fixed()
#tenpar()
tenpar_test()
#tenpar_opt()
#plot_10par_opt_traj()
#tenpar_restart()
#tenpar_plot()
#tenpar_failed_runs()
#freyberg()
#freyberg_check_phi_calc()
#freyberg_condor()
#freyberg_plot()
#freyberg_plot_iobj()
#freyberg_plotuse_iobj()
#freyberg_plot_par_seq()
#freyberg_plot_obs_seq()
| 40.032238 | 115 | 0.601127 | 10,178 | 63,331 | 3.588131 | 0.050403 | 0.022673 | 0.033406 | 0.013855 | 0.84655 | 0.823604 | 0.800438 | 0.766731 | 0.747508 | 0.727136 | 0 | 0.041697 | 0.225214 | 63,331 | 1,581 | 116 | 40.057559 | 0.702576 | 0.118441 | 0 | 0.698523 | 0 | 0.004344 | 0.088197 | 0.0071 | 0 | 0 | 0 | 0 | 0.002606 | 1 | 0.025195 | false | 0 | 0.100782 | 0.000869 | 0.128584 | 0.026933 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b15ad7c8867d0b7d2b1e32a9bfa6e737732cebcc | 225 | py | Python | tests/unit/conftest.py | Egnod/sitri | ca974cce9041bea8296284b0ca67d970a6e072cf | [
"MIT"
] | 11 | 2020-12-16T07:00:29.000Z | 2021-05-25T16:24:50.000Z | tests/unit/conftest.py | Egnod/sitri | ca974cce9041bea8296284b0ca67d970a6e072cf | [
"MIT"
] | 6 | 2019-10-08T22:55:21.000Z | 2019-10-11T19:29:53.000Z | tests/unit/conftest.py | Egnod/sitri | ca974cce9041bea8296284b0ca67d970a6e072cf | [
"MIT"
] | 2 | 2019-10-10T12:09:50.000Z | 2019-10-10T23:52:38.000Z | import pytest
from sitri import Sitri
from sitri.providers.contrib.system import SystemConfigProvider
@pytest.fixture(scope="module")
def test_sitri():
return Sitri(config_provider=SystemConfigProvider(prefix="test"))
| 22.5 | 69 | 0.808889 | 27 | 225 | 6.666667 | 0.62963 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097778 | 225 | 9 | 70 | 25 | 0.8867 | 0 | 0 | 0 | 0 | 0 | 0.044444 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0 | 0.5 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
b16142150dd443a20d5c47f8a101609b68b27c43 | 161 | py | Python | scripts/generate_multiple.py | j1nma/Automaton-Off-Lattice | 55a73ffbd75251d822c037c7f048c4299cda46c1 | [
"MIT"
] | null | null | null | scripts/generate_multiple.py | j1nma/Automaton-Off-Lattice | 55a73ffbd75251d822c037c7f048c4299cda46c1 | [
"MIT"
] | null | null | null | scripts/generate_multiple.py | j1nma/Automaton-Off-Lattice | 55a73ffbd75251d822c037c7f048c4299cda46c1 | [
"MIT"
] | 1 | 2020-04-19T02:11:09.000Z | 2020-04-19T02:11:09.000Z | from functions import generate_multiple_files
numbers = [40,100,4000,10000]
i = 0
for x in numbers:
i += 1
generate_multiple_files(numbers[i-1], 20.0)
| 17.888889 | 47 | 0.714286 | 27 | 161 | 4.111111 | 0.666667 | 0.288288 | 0.378378 | 0.504505 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 0.180124 | 161 | 8 | 48 | 20.125 | 0.689394 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b1646421e1bafc53e901f1d8608a804dcc96e96d | 6,452 | py | Python | src/tests/test_get_redemption_request.py | andela/andela-societies-backend | b8382f308449a08e5c7bda46c6deabe597cc2e25 | [
"MIT"
] | 1 | 2018-09-13T16:33:20.000Z | 2018-09-13T16:33:20.000Z | src/tests/test_get_redemption_request.py | jonathankamau/andela-societies-backend | b8382f308449a08e5c7bda46c6deabe597cc2e25 | [
"MIT"
] | 6 | 2019-03-11T17:50:27.000Z | 2019-08-26T11:00:40.000Z | src/tests/test_get_redemption_request.py | jonathankamau/andela-societies-backend | b8382f308449a08e5c7bda46c6deabe597cc2e25 | [
"MIT"
] | 9 | 2019-01-09T12:23:12.000Z | 2021-05-28T04:58:31.000Z | """Test suite for Point Redemption Module."""
import json
import uuid
from .points_redemption_base_test_case_setup import PointRedemptionBaseTestCase
class GetRedemptionRequest(PointRedemptionBaseTestCase):
def test_get_all_redemption_requests(self):
"""Test retrieval of Redemption Requests."""
response = self.client.get("api/v1/societies/redeem",
headers=self.society_president,
content_type='application/json')
message = "fetched successfully"
response_details = json.loads(response.data)
self.assertIn(message, response_details["message"])
self.assertEqual(response.status_code, 200)
def test_get_all_redemption_requests_by_cio(self):
"""Test retrieval of Redemption Requests."""
response = self.client.get("api/v1/societies/redeem?paginate=false",
headers=self.cio,
content_type='application/json')
message = "fetched successfully"
response_details = json.loads(response.data)
self.assertIn(message, response_details["message"])
self.assertEqual(response.status_code, 200)
def test_get_existing_redemption_requests_by_id(self):
"""Test retrieval of Redemption Requests."""
response = self.client.get(
f"api/v1/societies/redeem/{self.redemp_req.uuid}",
headers=self.society_president,
content_type='application/json')
message = "fetched successfully"
response_details = json.loads(response.data)
self.assertIn(message, response_details["message"])
self.assertEqual(response.status_code, 200)
def test_get_existing_redemption_requests_by_name(self):
"""Test retrieval of Redemption Requests."""
response = self.client.get(
f"api/v1/societies/redeem?name={self.redemp_req.name}",
headers=self.society_president,
content_type='application/json')
message = "fetched successfully"
response_details = json.loads(response.data)
self.assertIn(message, response_details["message"])
self.assertEqual(response.status_code, 200)
def test_get_existing_redemption_requests_by_society(self):
"""Test retrieval of Redemption Requests."""
self.test_user.society.save()
response = self.client.get(
f"api/v1/societies/redeem?society={self.test_user.society.name}",
headers=self.society_president,
content_type='application/json')
message = "fetched successfully"
response_details = json.loads(response.data)
self.assertIn(message, response_details["message"])
self.assertEqual(response.status_code, 200)
def test_get_existing_redemption_requests_by_status(self):
"""Test retrieval of Redemption Requests."""
response = self.client.get(
f"api/v1/societies/redeem?status={self.redemp_req.status}",
headers=self.society_president,
content_type='application/json')
message = "fetched successfully"
response_details = json.loads(response.data)
self.assertIn(message, response_details["message"])
self.assertEqual(response.status_code, 200)
def test_get_existing_redemption_requests_by_center(self):
"""Test retrieval of Redemption Requests."""
response = self.client.get(
f"api/v1/societies/redeem?center={self.redemp_req.center.name}",
headers=self.society_president,
content_type='application/json')
message = "fetched successfully"
response_details = json.loads(response.data)
self.assertIn(message, response_details["message"])
self.assertEqual(response.status_code, 200)
def test_get_non_existing_redemption_requests_by_id(self):
"""Test retrieval of Redemption Requests."""
response = self.client.get(
f"api/v1/societies/redeem/{str(uuid.uuid4())}",
headers=self.society_president,
content_type='application/json')
message = "does not exist"
response_details = json.loads(response.data)
self.assertIn(message, response_details["message"])
self.assertEqual(response.status_code, 404)
def test_get_non_existing_redemption_requests_by_name(self):
"""Test retrieval of Redemption Requests."""
response = self.client.get(
f"api/v1/societies/redeem?name={str(uuid.uuid4())}",
headers=self.society_president,
content_type='application/json')
message = "Resources were not found."
response_details = json.loads(response.data)
self.assertIn(message, response_details["message"])
self.assertEqual(response.status_code, 404)
def test_get_non_existing_redemption_requests_by_society(self):
"""Test retrieval of Redemption Requests."""
response = self.client.get(
f"api/v1/societies/redeem?society={str(uuid.uuid4())}",
headers=self.society_president,
content_type='application/json')
message = f'not found'
response_details = json.loads(response.data)
self.assertTrue(response_details["message"].find(message))
self.assertEqual(response.status_code, 400)
def test_get_non_existing_redemption_requests_by_status(self):
"""Test retrieval of Redemption Requests."""
response = self.client.get(
f"api/v1/societies/redeem?status={str(uuid.uuid4())}",
headers=self.society_president,
content_type='application/json')
message = "Resources were not found."
response_details = json.loads(response.data)
self.assertIn(message, response_details["message"])
self.assertEqual(response.status_code, 404)
def test_get_non_existing_redemption_requests_by_center(self):
"""Test retrieval of Redemption Requests."""
response = self.client.get(
f"api/v1/societies/redeem?center={str(uuid.uuid4())}",
headers=self.society_president,
content_type='application/json')
message = "not found"
response_details = json.loads(response.data)
self.assertIn(message, response_details["message"])
self.assertEqual(response.status_code, 400)
| 40.074534 | 79 | 0.66522 | 705 | 6,452 | 5.88227 | 0.103546 | 0.104172 | 0.028937 | 0.05498 | 0.911743 | 0.911743 | 0.899204 | 0.888112 | 0.884254 | 0.872679 | 0 | 0.010649 | 0.228611 | 6,452 | 160 | 80 | 40.325 | 0.822584 | 0.07858 | 0 | 0.693694 | 0 | 0 | 0.182684 | 0.097976 | 0 | 0 | 0 | 0 | 0.216216 | 1 | 0.108108 | false | 0 | 0.027027 | 0 | 0.144144 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b186f1b58985529834d28c5e2b50f32009b6c36e | 1,970 | py | Python | sarna/core/roles.py | rsrdesarrollo/sarna | 0c1f44e06a932520b70e505585a5469b77f6302e | [
"MIT"
] | 25 | 2019-03-11T22:42:52.000Z | 2022-03-15T09:49:15.000Z | sarna/core/roles.py | hackingmess/sarna | 0c1f44e06a932520b70e505585a5469b77f6302e | [
"MIT"
] | 14 | 2019-01-08T08:35:51.000Z | 2022-03-11T23:30:28.000Z | sarna/core/roles.py | hackingmess/sarna | 0c1f44e06a932520b70e505585a5469b77f6302e | [
"MIT"
] | 12 | 2019-07-26T05:38:32.000Z | 2022-03-29T09:54:49.000Z | from functools import wraps
from flask_login import login_required
from werkzeug.exceptions import abort
from sarna.model.enums import UserType
valid_auditors = {UserType.manager, UserType.trusted_auditor, UserType.auditor}
valid_trusted = {UserType.manager, UserType.trusted_auditor}
valid_managers = {UserType.manager}
valid_admins = {UserType.admin}
def admin_required(func):
from sarna.core.auth import current_user
needs_accounts = valid_admins
setattr(func, 'needs_accounts', needs_accounts)
@wraps(func)
@login_required
def decorated_view(*args, **kwargs):
if current_user.user_type not in needs_accounts:
abort(403)
else:
return func(*args, **kwargs)
return decorated_view
def manager_required(func):
from sarna.core.auth import current_user
needs_accounts = valid_managers
setattr(func, 'needs_accounts', needs_accounts)
@wraps(func)
@login_required
def decorated_view(*args, **kwargs):
if current_user.user_type not in needs_accounts:
abort(403)
else:
return func(*args, **kwargs)
return decorated_view
def trusted_required(func):
from sarna.core.auth import current_user
needs_accounts = valid_trusted
setattr(func, 'needs_accounts', needs_accounts)
@wraps(func)
@login_required
def decorated_view(*args, **kwargs):
if current_user.user_type not in needs_accounts:
abort(403)
else:
return func(*args, **kwargs)
return decorated_view
def auditor_required(func):
from sarna.core.auth import current_user
needs_accounts = valid_auditors
setattr(func, 'needs_accounts', needs_accounts)
@wraps(func)
@login_required
def decorated_view(*args, **kwargs):
if current_user.user_type not in needs_accounts:
abort(403)
else:
return func(*args, **kwargs)
return decorated_view
| 24.936709 | 79 | 0.694416 | 241 | 1,970 | 5.443983 | 0.174274 | 0.158537 | 0.04878 | 0.064024 | 0.791921 | 0.735518 | 0.735518 | 0.735518 | 0.735518 | 0.735518 | 0 | 0.007848 | 0.223858 | 1,970 | 78 | 80 | 25.25641 | 0.850229 | 0 | 0 | 0.714286 | 0 | 0 | 0.028426 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4934a572b7f8cbb4408575a9c61f5cbb5aef7bc6 | 16,168 | py | Python | lifelist/api/tests/test_api.py | andela-mnzomo/life-list | 28a7fa9d16e2b322e4a1bce269dbe7331e783534 | [
"Unlicense"
] | 3 | 2017-08-17T07:12:03.000Z | 2017-10-18T11:13:44.000Z | lifelist/api/tests/test_api.py | andela-mnzomo/life-list | 28a7fa9d16e2b322e4a1bce269dbe7331e783534 | [
"Unlicense"
] | 1 | 2018-05-30T14:38:52.000Z | 2018-05-30T14:38:52.000Z | lifelist/api/tests/test_api.py | andela-mnzomo/life-list | 28a7fa9d16e2b322e4a1bce269dbe7331e783534 | [
"Unlicense"
] | null | null | null | from django.core.urlresolvers import reverse
from rest_framework import status
from rest_framework.test import APIRequestFactory, APITestCase
from django.contrib.auth.models import User
from api.models import Bucketlist, Item
class TestBase(APITestCase):
""" Base configurations for the tests """
# Get authentication token
def get_token(self):
""" Returns authentication token """
url = reverse("api-login")
self.user = {"username": "testuser",
"password": "testpassword"}
response = self.client.post(url, data=self.user)
token = str(response.data.get("token"))
return token
def setUp(self):
# Add test user
url = reverse("user-list")
self.user = {"username": "testuser",
"email": "testuser@email.com",
"password": "testpassword"}
response = self.client.post(url, data=self.user)
self.test_user_id = str(response.data["id"])
# Add first test bucket list
url = reverse("bucketlist-list")
self.bucketlist = {"title": "The List of Awesome",
"description": "Awesome things!",
"created_by": self.test_user_id}
self.client.credentials(HTTP_AUTHORIZATION="Token " + self.get_token())
response = self.client.post(url, data=self.bucketlist)
self.first_bucketlist_id = str(response.data["id"])
# Add second test bucket list
self.bucketlist = {"title": "Knowledge Goals",
"description": "Things to learn",
"created_by": self.test_user_id}
response = self.client.post(url, data=self.bucketlist)
self.second_bucketlist_id = str(response.data["id"])
# Add first test bucket list item
url = "/api/v1/bucketlists/" + self.first_bucketlist_id + "/items/"
self.item = {"title": "Swim with dolphins",
"description": "Swim with dolphins in Watamu"}
response = self.client.post(url, data=self.item)
self.first_item_id = str(response.data["id"])
# Add first second bucket list item
url = "/api/v1/bucketlists/" + self.second_bucketlist_id + "/items/"
self.item = {"title": "Visit all continents",
"description": "Within 5 years"}
response = self.client.post(url, data=self.item)
self.second_item_id = str(response.data["id"])
class TestAuth(TestBase):
""" Test user registration and login """
def test_registration(self):
""" Test user registration """
url = reverse("user-list")
self.user = {"username": "testuser2",
"email": "testuser2@email.com",
"password": "testpassword"}
response = self.client.post(url, data=self.user)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertEqual(User.objects.count(), 2)
self.assertTrue("testuser2" in response.data["username"])
self.assertTrue("testuser2@email.com" in response.data["email"])
def test_login(self):
""" Test user login """
url = reverse("api-login")
self.user = {"username": "testuser",
"password": "testpassword"}
response = self.client.post(url, data=self.user)
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_invalid_credentials(self):
""" Test that users cannot login with invalid credentials """
# Invalid username
url = reverse("api-login")
self.user = {"username": "invalid",
"password": "testpassword"}
response = self.client.post(url, data=self.user)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
# Invalid password
url = reverse("api-login")
self.user = {"username": "testuser",
"password": "invalid"}
response = self.client.post(url, data=self.user)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
class TestBucketlists(TestBase):
""" Test operations on bucketlists """
def test_no_token_bucketlist(self):
"""
Test that user cannot add a bucket list without
an authentication token
"""
url = reverse("bucketlist-list")
self.bucketlist = {"title": "The List of Awesome",
"description": "Awesome things I want to do",
"created_by": self.test_user_id}
self.client.credentials()
response = self.client.post(url, data=self.bucketlist)
self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
self.assertTrue("Authentication credentials were not provided"
in response.data["detail"])
def test_invalid_token_bucketlist(self):
"""
Test that user cannot add a bucket list with
an invalid token
"""
url = reverse("bucketlist-list")
self.bucketlist = {"title": "The List of Awesome",
"description": "Awesome things I want to do",
"created_by": self.test_user_id}
invalid_token = "1234"
self.client.credentials(HTTP_AUTHORIZATION="Token " + invalid_token)
response = self.client.post(url, data=self.bucketlist)
self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
self.assertTrue("Invalid token" in response.data["detail"])
def test_add_bucketlist(self):
""" Test that user can add a bucket list """
url = reverse("bucketlist-list")
self.bucketlist = {"title": "Adventure!",
"description": "Adventurous stuff",
"created_by": self.test_user_id}
response = self.client.post(url, data=self.bucketlist)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertEqual(Bucketlist.objects.count(), 3)
self.assertTrue("Adventure!" in response.data["title"])
self.assertTrue("Adventurous stuff" in response.data["description"])
def test_delete_bucketlist(self):
""" Test deletion of bucket lists """
url = "/api/v1/bucketlists/" + self.first_bucketlist_id + "/"
response = self.client.delete(url)
# Only one bucket list remains
self.assertEqual(Bucketlist.objects.count(), 1)
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
def test_edit_bucketlist(self):
""" Test editing of bucket lists """
self.bucketlist = {"title": "Mission Multilinguist",
"description": "Languages to learn"}
url = "/api/v1/bucketlists/" + self.first_bucketlist_id + "/"
response = self.client.put(url, data=self.bucketlist)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertTrue("Mission Multilinguist" in response.data["title"])
self.assertTrue("Languages to learn" in response.data["description"])
def test_get_bucketlists(self):
""" Test that all bucket lists are displayed """
url = reverse("bucketlist-list")
response = self.client.get(url)
self.assertEqual(response.status_code, status.HTTP_200_OK)
bucketlist1 = response.data[0]
bucketlist2 = response.data[1]
# Both bucket lists are displayed
self.assertEqual(bucketlist1.get("title"), "The List of Awesome")
self.assertEqual(bucketlist2.get("title"), "Knowledge Goals")
def test_get_bucketlist(self):
""" Test that specified bucket list is displayed """
# Get first bucket list
url = "/api/v1/bucketlists/" + self.first_bucketlist_id + "/"
response = self.client.get(url)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data.get("title"), "The List of Awesome")
# Get second bucket list
url = "/api/v1/bucketlists/" + self.second_bucketlist_id + "/"
response = self.client.get(url)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data.get("title"), "Knowledge Goals")
def test_get_nonexistent_bucketlist(self):
"""
Test that specifying a bucket list with invalid id
will throw an error
"""
url = "/api/v1/bucketlists/1234/"
response = self.client.get(url)
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
self.assertTrue("Not found" in response.data["detail"])
def test_unauthorized_access_bucketlist(self):
"""
Test that users cannot edit or delete another user's bucket lists
"""
# Register a new user
url = reverse("user-list")
self.user = {"username": "testuser2",
"email": "testuser2@email.com",
"password": "testpassword"}
self.client.post(url, data=self.user)
# Log new user in and obtain their token
url = reverse("api-login")
self.user = {"username": "testuser2",
"password": "testpassword"}
response = self.client.post(url, data=self.user)
token = str(response.data.get("token"))
# Cannot edit bucket list
self.bucketlist = {"title": "Mission Multilinguist",
"description": "Languages to learn"}
url = "/api/v1/bucketlists/" + self.first_bucketlist_id + "/"
self.client.credentials(HTTP_AUTHORIZATION="Token " + token)
response = self.client.put(url, data=self.bucketlist)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertTrue("You do not have permission to perform this action"
in response.data["detail"])
# Cannot delete bucket list
url = "/api/v1/bucketlists/" + self.first_bucketlist_id + "/"
response = self.client.delete(url)
# Number of bucket lists remains the same
self.assertEqual(Bucketlist.objects.count(), 2)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
class TestItems(TestBase):
""" Test operations on bucket list items"""
def test_no_token_item(self):
"""
Test that user cannot add a bucket list item without
an authentication token
"""
url = "/api/v1/bucketlists/" + self.first_bucketlist_id + "/items/"
self.item = {"title": "Learn Japanese",
"description": "To fluency!",
"item_bucketlist_id": self.first_bucketlist_id}
self.client.credentials()
response = self.client.post(url, data=self.item)
self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
self.assertTrue("Authentication credentials were not provided"
in response.data["detail"])
def test_invalid_token_item(self):
"""
Test that user cannot add a bucket list item with
an invalid token
"""
url = "/api/v1/bucketlists/" + self.first_bucketlist_id + "/items/"
self.item = {"title": "Learn Japanese",
"description": "To fluency!",
"item_bucketlist_id": self.first_bucketlist_id}
invalid_token = "1234"
self.client.credentials(HTTP_AUTHORIZATION="Token " + invalid_token)
response = self.client.post(url, data=self.item)
self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
self.assertTrue("Invalid token" in response.data["detail"])
def test_add_item(self):
""" Test that user can add a bucket list item"""
url = "/api/v1/bucketlists/" + self.first_bucketlist_id + "/items/"
self.item = {"title": "Learn Japanese",
"description": "To fluency!"}
response = self.client.post(url, data=self.item)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertEqual(Item.objects.count(), 3)
self.assertTrue("Learn Japanese" in response.data["title"])
self.assertTrue("To fluency!" in response.data["description"])
def test_delete_item(self):
""" Test deletion of bucket list items """
url = ("/api/v1/bucketlists/" + self.first_bucketlist_id +
"/items/" + self.first_item_id + "/")
response = self.client.delete(url)
self.assertEqual(Item.objects.count(), 1)
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
def test_edit_item(self):
""" Test editing of bucket list items """
self.bucketlist = {"title": "Learn Spanish",
"description": "To fluency!"}
url = ("/api/v1/bucketlists/" + self.first_bucketlist_id +
"/items/" + self.first_item_id + "/")
response = self.client.put(url, data=self.bucketlist)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertTrue("Learn Spanish" in response.data["title"])
self.assertTrue("To fluency!" in response.data["description"])
def test_get_item(self):
""" Test that specified bucket list item is displayed """
# Get first bucket list item
url = ("/api/v1/bucketlists/" + self.first_bucketlist_id +
"/items/" + self.first_item_id + "/")
response = self.client.get(url)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data.get("title"), "Swim with dolphins")
# Get second bucket list item
url = ("/api/v1/bucketlists/" + self.second_bucketlist_id +
"/items/" + self.second_item_id + "/")
response = self.client.get(url)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data.get("title"), "Visit all continents")
def test_get_nonexistent_bucketlist(self):
"""
Test that specifying a bucket list with invalid id
will throw an error
"""
url = ("/api/v1/bucketlists/" + self.first_bucketlist_id +
"/items/1234/")
response = self.client.get(url)
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
self.assertTrue("Not found" in response.data["detail"])
def test_unauthorized_access_item(self):
"""
Test that users cannot edit or delete another user's bucket list items
"""
# Register a new user
url = reverse("user-list")
self.user = {"username": "testuser2",
"email": "testuser2@email.com",
"password": "testpassword"}
self.client.post(url, data=self.user)
# Log new user in and obtain their token
url = reverse("api-login")
self.user = {"username": "testuser2",
"password": "testpassword"}
response = self.client.post(url, data=self.user)
token = str(response.data.get("token"))
# Cannot edit bucket list item
self.item = {"title": "Learn Japanese",
"description": "To fluency!"}
url = ("/api/v1/bucketlists/" + self.second_bucketlist_id +
"/items/" + self.second_item_id + "/")
self.client.credentials(HTTP_AUTHORIZATION="Token " + token)
response = self.client.put(url, data=self.item)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
self.assertTrue("You do not have permission to perform this action"
in response.data["detail"])
# Cannot delete bucket list item
url = ("/api/v1/bucketlists/" + self.second_bucketlist_id +
"/items/" + self.second_item_id + "/")
response = self.client.delete(url)
# Number of bucket list items remains the same
self.assertEqual(Item.objects.count(), 2)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
| 44.786704 | 79 | 0.615475 | 1,825 | 16,168 | 5.331507 | 0.096438 | 0.043165 | 0.061048 | 0.074512 | 0.842754 | 0.8 | 0.775128 | 0.747174 | 0.732991 | 0.697739 | 0 | 0.011274 | 0.264844 | 16,168 | 360 | 80 | 44.911111 | 0.807336 | 0.113805 | 0 | 0.686992 | 0 | 0 | 0.184467 | 0.00179 | 0 | 0 | 0 | 0 | 0.227642 | 1 | 0.089431 | false | 0.04065 | 0.020325 | 0 | 0.130081 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
496e764d0d7f0bbc002a0dba85baa37dbd909e4d | 21 | py | Python | comma/__init__.py | zbanks/comma | 75f77d659a47a777b6790b2e47114a0355bbf0cc | [
"MIT"
] | 1 | 2020-06-15T02:22:14.000Z | 2020-06-15T02:22:14.000Z | comma/__init__.py | zbanks/comma | 75f77d659a47a777b6790b2e47114a0355bbf0cc | [
"MIT"
] | null | null | null | comma/__init__.py | zbanks/comma | 75f77d659a47a777b6790b2e47114a0355bbf0cc | [
"MIT"
] | null | null | null | from .comma import *
| 10.5 | 20 | 0.714286 | 3 | 21 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
49880fbc6c6fb0176a5db60138e603e1cd701057 | 173 | py | Python | checks/check.py | signalfx/alert-assessor | 9376793c9e9c1ebf3f98e5e0b5df2e695cb495aa | [
"Apache-2.0"
] | 3 | 2020-05-07T17:58:41.000Z | 2021-06-10T19:58:46.000Z | checks/check.py | signalfx/alert-assessor | 9376793c9e9c1ebf3f98e5e0b5df2e695cb495aa | [
"Apache-2.0"
] | 3 | 2019-09-13T16:07:52.000Z | 2020-06-24T20:14:23.000Z | checks/check.py | signalfx/alert-assessor | 9376793c9e9c1ebf3f98e5e0b5df2e695cb495aa | [
"Apache-2.0"
] | 3 | 2019-08-29T09:14:08.000Z | 2021-12-20T09:29:04.000Z | import re
class Check:
def __init__(self):
pass
class RuleCheck:
RE_USES_PARAMETER_VARS = re.compile("\{\{\S*\}\}")
def __init__(self):
pass
| 13.307692 | 54 | 0.589595 | 21 | 173 | 4.333333 | 0.666667 | 0.153846 | 0.241758 | 0.32967 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.271676 | 173 | 12 | 55 | 14.416667 | 0.722222 | 0 | 0 | 0.5 | 0 | 0 | 0.063584 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.125 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
b8fa07b1f35fe7db1f7c55005fa49e658e44e8cc | 1,760 | py | Python | app/src/main/python/cve-2020-0796-scanner.py | lionche/KillNet | e5ec7a744c74fecc4bf480cb2474e387cba23d54 | [
"MIT"
] | 1 | 2021-12-01T03:22:55.000Z | 2021-12-01T03:22:55.000Z | app/src/main/python/cve-2020-0796-scanner.py | lionche/KillNet | e5ec7a744c74fecc4bf480cb2474e387cba23d54 | [
"MIT"
] | null | null | null | app/src/main/python/cve-2020-0796-scanner.py | lionche/KillNet | e5ec7a744c74fecc4bf480cb2474e387cba23d54 | [
"MIT"
] | null | null | null | import struct
import socket
def scannerIp(stringIp):
sock = socket.socket(socket.AF_INET)
sock.settimeout(3)
sock.connect((stringIp, 445))
packet = b'\x00\x00\x00\xc0\xfeSMB@\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00$\x00\x08\x00\x01\x00\x00\x00\x7f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00x\x00\x00\x00\x02\x00\x00\x00\x02\x02\x10\x02"\x02$\x02\x00\x03\x02\x03\x10\x03\x11\x03\x00\x00\x00\x00\x01\x00&\x00\x00\x00\x00\x00\x01\x00 \x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\n\x00\x00\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00'
sock.send(packet)
length = sock.recv(4)
# print(length)
nb, = struct.unpack(">I", length)
# print(nb)
result = sock.recv(nb)
# if not result[68:70] == b"\x11\x03":
#
# print("Not vulnerable")
# if not result[70:72] == b"\x02\x00":
# print("Not vulnerable")
# print("Vulnerable")
# if result[68:70] == b"\x11\x03":
# exit("vulnerable")
# if result[70:72] == b"\x02\x00":
# exit("vulnerable")
# exit("Not Vulnerable")
# print(result[68:70])
# print(result[70:72])
# ifVulnerable = False
if result[68:72] != b"\x11\x03\x02\x00":
print("Not Vulnerable")
ifVulnerable = False
return False
else:
print("vulnerable")
ifVulnerable = True
return True
return False
exit()
# scannerIp() | 41.904762 | 768 | 0.639773 | 310 | 1,760 | 3.629032 | 0.158065 | 0.741333 | 0.992 | 1.173333 | 0.505778 | 0.455111 | 0.394667 | 0.394667 | 0.384 | 0.36 | 0 | 0.285135 | 0.159091 | 1,760 | 42 | 769 | 41.904762 | 0.475 | 0.217614 | 0 | 0.095238 | 0 | 0.047619 | 0.583824 | 0.552206 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.095238 | 0 | 0.285714 | 0.095238 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7707094cf29645fbcec0a5b4291ed680144ef744 | 53 | py | Python | example/diff_imports/import_from_module.py | DKorytkin/pytest-never-sleep | e655fbff4d51b8a7e41a56e584dae55013f7160f | [
"MIT"
] | null | null | null | example/diff_imports/import_from_module.py | DKorytkin/pytest-never-sleep | e655fbff4d51b8a7e41a56e584dae55013f7160f | [
"MIT"
] | 2 | 2021-05-19T07:55:13.000Z | 2021-05-21T09:49:05.000Z | example/diff_imports/import_from_module.py | DKorytkin/pytest-never-sleep | e655fbff4d51b8a7e41a56e584dae55013f7160f | [
"MIT"
] | null | null | null | import time
def do_some_stuff():
time.sleep(1)
| 8.833333 | 20 | 0.679245 | 9 | 53 | 3.777778 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02381 | 0.207547 | 53 | 5 | 21 | 10.6 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
77112182a8c37f277bd2bb6200addd0506822e1c | 11,720 | py | Python | constructor_io/modules/browse.py | Constructor-io/constructorio-python | ff4f068816f51914893c6c40093f5a0503cbf1a3 | [
"MIT"
] | null | null | null | constructor_io/modules/browse.py | Constructor-io/constructorio-python | ff4f068816f51914893c6c40093f5a0503cbf1a3 | [
"MIT"
] | 8 | 2015-10-05T03:28:37.000Z | 2021-11-17T17:23:41.000Z | constructor_io/modules/browse.py | Constructor-io/constructorio-python | ff4f068816f51914893c6c40093f5a0503cbf1a3 | [
"MIT"
] | 4 | 2016-05-12T06:16:19.000Z | 2018-02-21T18:49:20.000Z | '''Browse Module'''
from time import time
from urllib.parse import quote, urlencode
import requests as r
from constructor_io.helpers.exception import ConstructorException
from constructor_io.helpers.utils import (clean_params, create_auth_header,
create_request_headers,
create_shared_query_params,
throw_http_exception_from_response)
def _create_browse_url(prefix, parameters, user_parameters, options, omit_timestamp = False):
# pylint: disable=too-many-branches
'''Create URL from supplied filter name, filter value, and parameters'''
query_params = create_shared_query_params(options, parameters, user_parameters)
if parameters:
if parameters.get('item_ids'):
query_params['ids'] = parameters.get('item_ids')
if not omit_timestamp:
query_params['_dt'] = int(time()*1000.0)
query_params = clean_params(query_params)
query_string = urlencode(query_params, doseq=True)
return f'{options.get("service_url")}/{prefix}?{query_string}' # pylint: disable=line-too-long
class Browse:
'''Browse Class'''
def __init__(self, options):
self.__options = options or {}
def get_browse_results(self, filter_name, filter_value, parameters=None, user_parameters=None):
'''
Retrieve browse results from API
:param str filter_name: Filter name to display results from
:param str filter_value: Filter value to display results from
:param dict parameters: Additional parameters to refine result set
:param int parameters.page: The page number of the results
:param int parameters.results_per_page: The number of results per page to return
:param dict parameters.filters: Filters used to refine results
:param str parameters.sort_by: The sort method for results
:param str parameters.sort_order: The sort order for results
:param str parameters.section: Section name for results
:param dict parameters.fmt_options: The format options used to refine result groups
:param list parameters.hidden_fields: Hidden metadata fields to return
:param dict user_parameters: Parameters relevant to the user request
:param int user_parameters.session_id: Session ID, utilized to personalize results
:param str user_parameters.client_id: Client ID, utilized to personalize results
:param str user_parameters.user_id: User ID, utilized to personalize results
:param str user_parameters.segments: User segments
:param dict user_parameters.test_cells: User test cells
:param str user_parameters.user_ip: Origin user IP, from client
:param str user_parameters.user_agent: Origin user agent, from client
:return: dict
'''
if not filter_name or not isinstance(filter_name, str):
raise ConstructorException('filter_name is a required parameter of type string')
if not filter_value or not isinstance(filter_value, str):
raise ConstructorException('filter_value is a required parameter of type string')
if not parameters:
parameters = {}
if not user_parameters:
user_parameters = {}
url_prefix = f'browse/{quote(filter_name)}/{quote(filter_value)}'
request_url = _create_browse_url(
url_prefix,
parameters,
user_parameters,
self.__options
)
requests = self.__options.get('requests') or r
response = requests.get(
request_url,
auth=create_auth_header(self.__options),
headers=create_request_headers(self.__options, user_parameters)
)
if not response.ok:
throw_http_exception_from_response(response)
json = response.json()
json_response = json.get('response')
if json_response:
if json_response.get('results') or json_response.get('results') == []:
result_id = json.get('result_id')
if result_id:
for result in json_response.get('results'):
result['result_id'] = result_id
return json
raise ConstructorException('get_browse_results response data is malformed')
def get_browse_results_for_item_ids(self, item_ids, parameters=None, user_parameters=None):
'''
Retrieve browse results from API using item ID's
:param list item_ids: Item ID's of results to get results for
:param dict parameters: Additional parameters to refine result set
:param int parameters.page: The page number of the results
:param int parameters.results_per_page: The number of results per page to return
:param dict parameters.filters: Filters used to refine results
:param str parameters.sort_by: The sort method for results
:param str parameters.sort_order: The sort order for results
:param str parameters.section: Section name for results
:param dict parameters.fmt_options: The format options used to refine result groups
:param list parameters.hidden_fields: Hidden metadata fields to return
:param dict user_parameters: Parameters relevant to the user request
:param int user_parameters.session_id: Session ID, utilized to personalize results
:param str user_parameters.client_id: Client ID, utilized to personalize results
:param str user_parameters.user_id: User ID, utilized to personalize results
:param str user_parameters.segments: User segments
:param dict user_parameters.test_cells: User test cells
:param str user_parameters.user_ip: Origin user IP, from client
:param str user_parameters.user_agent: Origin user agent, from client
:return: dict
'''
if not item_ids or not isinstance(item_ids, list):
raise ConstructorException('item_ids is a required parameter of type list')
if not parameters:
parameters = {}
if not user_parameters:
user_parameters = {}
url_prefix = 'browse/items'
request_url = _create_browse_url(
url_prefix,
{ **parameters, 'item_ids': item_ids},
user_parameters,
self.__options
)
requests = self.__options.get('requests') or r
response = requests.get(
request_url,
auth=create_auth_header(self.__options),
headers=create_request_headers(self.__options, user_parameters)
)
if not response.ok:
throw_http_exception_from_response(response)
json = response.json()
json_response = json.get('response')
if json_response:
if json_response.get('results') or json_response.get('results') == []:
result_id = json.get('result_id')
if result_id:
for result in json_response.get('results'):
result['result_id'] = result_id
return json
raise ConstructorException('get_browse_results_for_item_ids response data is malformed')
def get_browse_groups(self, parameters=None, user_parameters=None):
'''
Retrieve groups from API
:param dict parameters: Additional parameters to refine result set
:param dict parameters.filters: Filters used to refine results
:param dict parameters.fmt_options: The format options used to refine result groups
:param int parameters.fmt_options.groups_max_depth: The maximum depth of the hierarchy group structure # pylint: disable=line-too-long
:param dict user_parameters: Parameters relevant to the user request
:param int user_parameters.session_id: Session ID, utilized to personalize results
:param str user_parameters.client_id: Client ID, utilized to personalize results
:param str user_parameters.user_id: User ID, utilized to personalize results
:param str user_parameters.segments: User segments
:param dict user_parameters.test_cells: User test cells
:param str user_parameters.user_ip: Origin user IP, from client
:param str user_parameters.user_agent: Origin user agent, from client
:return: dict
'''
if not parameters:
parameters = {}
if not user_parameters:
user_parameters = {}
url_prefix = 'browse/groups'
request_url = _create_browse_url(
url_prefix,
parameters,
user_parameters,
self.__options,
True
)
requests = self.__options.get('requests') or r
response = requests.get(
request_url,
auth=create_auth_header(self.__options),
headers=create_request_headers(self.__options, user_parameters)
)
if not response.ok:
throw_http_exception_from_response(response)
json = response.json()
json_response = json.get('response')
if json_response:
if json_response.get('groups') or json_response.get('groups') == []:
return json
raise ConstructorException('get_browse_groups response data is malformed')
def get_browse_facets(self, parameters=None, user_parameters=None):
'''
Retrieve facets from API
:param dict parameters: Additional parameters to refine result set
:param dict parameters.page: The page number of the results
:param dict parameters.results_per_page: The number of results per page to return
:param dict parameters.fmt_options: The format options used to refine result groups
:param int parameters.fmt_options.show_hidden_facets: Include facets configured as hidden
:param int parameters.fmt_options.show_protected_facets: Include facets configured as protected # pylint: disable=line-too-long
:param dict user_parameters: Parameters relevant to the user request
:param int user_parameters.session_id: Session ID, utilized to personalize results
:param str user_parameters.client_id: Client ID, utilized to personalize results
:param str user_parameters.user_id: User ID, utilized to personalize results
:param str user_parameters.segments: User segments
:param dict user_parameters.test_cells: User test cells
:param str user_parameters.user_ip: Origin user IP, from client
:param str user_parameters.user_agent: Origin user agent, from client
:return: dict
'''
if not parameters:
parameters = {}
if not user_parameters:
user_parameters = {}
url_prefix = 'browse/facets'
request_url = _create_browse_url(
url_prefix,
parameters,
user_parameters,
self.__options,
True
)
requests = self.__options.get('requests') or r
response = requests.get(
request_url,
auth=create_auth_header(self.__options),
headers=create_request_headers(self.__options, user_parameters)
)
if not response.ok:
throw_http_exception_from_response(response)
json = response.json()
json_response = json.get('response')
if json_response:
if json_response.get('facets') or json_response.get('facets') == []:
return json
raise ConstructorException('get_browse_facets response data is malformed')
| 41.122807 | 142 | 0.664164 | 1,406 | 11,720 | 5.319346 | 0.100285 | 0.101083 | 0.03209 | 0.058831 | 0.814949 | 0.789009 | 0.765343 | 0.739537 | 0.733654 | 0.719882 | 0 | 0.000584 | 0.270051 | 11,720 | 284 | 143 | 41.267606 | 0.873641 | 0.409898 | 0 | 0.64539 | 0 | 0 | 0.10716 | 0.021049 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042553 | false | 0 | 0.035461 | 0 | 0.120567 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
771255520ef922971e08ab1f3a6fea3938b0c54c | 2,605 | py | Python | tests/testMapping.py | shimpe/pyvectortween | aff071180474739060ec2d3102c39c8e73510988 | [
"MIT"
] | 6 | 2017-05-17T23:34:41.000Z | 2022-03-15T17:12:19.000Z | tests/testMapping.py | shimpe/pyvectortween | aff071180474739060ec2d3102c39c8e73510988 | [
"MIT"
] | null | null | null | tests/testMapping.py | shimpe/pyvectortween | aff071180474739060ec2d3102c39c8e73510988 | [
"MIT"
] | null | null | null | from vectortween.Mapping import Mapping
def test_linlin():
clip = True
noclip = False
table = (
# inputs, expected output
([0, 0, 0, 0, 0], 0),
([0, 0, 0, 0, 0, clip], 0),
([1, 0, 0, 0, 0], None),
([1, 0, 0, 0, 0, noclip], None),
([1, 0, 2, 0, 100], 50),
([-1, 0, -2, 0, 100], 50),
([-2, 0, -2, 0, 100], 100),
([6, 5, 10, 50, 100], 60),
([2, 0, 1, 0, 100, noclip], 200),
([2, 0, 1, 0, 100, clip], 100),
([-2, 0, -1, 0, -100, noclip], -200),
([2, 0, 1, 0, -100, clip], -100),
([2, 1, 10, 1, 100], 12),
([2, 10, 1, 1, 100], 89),
([2, 10, 1, 100, 1], 12),
([2, 1, 10, 100, 1], 89),
([-2, -1, -10, 1, 100], 12),
([-2, -10, -1, 1, 100], 89),
([-2, -10, -1, 100, 1], 12),
([-2, -1, -10, 100, 1], 89),
([2, 1, 10, -1, -100], -12),
([2, 10, 1, -1, -100], -89),
([2, 10, 1, -100, -1], -12),
([2, 1, 10, -100, -1], -89),
)
for test in table:
assert Mapping.linlin(*test[0]) == test[1]
def test_linexp():
clip = True
noclip = False
table = (
# inputs, expected output
([0, 0, 0, 0, 0], None),
([0, 0, 0, 0, 0, clip], None),
([1, 0, 0, 0, 0], None),
([1, 0, 0, 0, 0, noclip], None),
([1, 0, 2, 0, 100], None),
([1, 1, 10, 1, 100], 1),
([2, 1, 10, 1, 100], 1.6681005372000588),
([8, 1, 10, 1, 100], 35.938136638046274),
([8, 1, 10, -1, -100], -35.938136638046274),
([11, 1, 10, -1, -100], -100),
([11, 1, 10, -1, -100, noclip], -166.81005372000593),
([-2, -1, -10, -1, -100], -1.6681005372000588),
([-2, -10, -1, -1, -100], -59.94842503189409),
([-2, -10, -1, -100, -1], -1.6681005372000592),
([-2, -1, -10, -100, -1], -59.948425031894104),
([2, 1, 10, 1, 100], 1.6681005372000588),
([2, 10, 1, 1, 100], 59.94842503189409),
([2, 10, 1, 100, 1], 1.6681005372000592),
([2, 1, 10, 100, 1], 59.948425031894104),
([-2, -1, -10, 1, 100], 1.6681005372000588),
([-2, -10, -1, 1, 100], 59.94842503189409),
([-2, -10, -1, 100, 1], 1.6681005372000592),
([-2, -1, -10, 100, 1], 59.948425031894104),
([2, 1, 10, -1, -100], -1.6681005372000588),
([2, 10, 1, -1, -100], -59.94842503189409),
([2, 10, 1, -100, -1], -1.6681005372000592),
([2, 1, 10, -100, -1], -59.948425031894104),
)
for test in table:
assert Mapping.linexp(*test[0]) == test[1]
| 33.831169 | 61 | 0.413436 | 384 | 2,605 | 2.799479 | 0.111979 | 0.055814 | 0.064186 | 0.059535 | 0.864186 | 0.836279 | 0.76 | 0.690233 | 0.690233 | 0.690233 | 0 | 0.44 | 0.328215 | 2,605 | 76 | 62 | 34.276316 | 0.174286 | 0.018042 | 0 | 0.212121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 1 | 0.030303 | false | 0 | 0.015152 | 0 | 0.045455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
77433b7438319111901c902898b32a94e99e6bc5 | 1,523 | py | Python | tests/test_metric_optimization_tf.py | junpenglao/pysaliency | 2b243a26086669bf089391a8cc9cd5d80a718188 | [
"MIT"
] | 118 | 2015-12-29T20:52:24.000Z | 2022-03-14T20:57:30.000Z | tests/test_metric_optimization_tf.py | junpenglao/pysaliency | 2b243a26086669bf089391a8cc9cd5d80a718188 | [
"MIT"
] | 20 | 2016-10-13T09:25:56.000Z | 2021-12-01T03:06:55.000Z | tests/test_metric_optimization_tf.py | junpenglao/pysaliency | 2b243a26086669bf089391a8cc9cd5d80a718188 | [
"MIT"
] | 35 | 2015-12-23T09:11:24.000Z | 2022-02-27T03:44:17.000Z | import numpy as np
import pytest
# from pysaliency.metric_optimization_tf import maximize_expected_sim
@pytest.mark.skip("tensorflow <2.0 not available for new python versions, need to upgrade to tensorflow 2 in pysaliency")
def test_maximize_expected_sim_decay_1overk():
density = np.ones((20, 20))
density[6:17, 8:12] = 20
density[2:4, 18:18] = 30
density /= density.sum()
log_density = np.log(density)
saliency_map, score = maximize_expected_sim(
log_density=log_density,
kernel_size=1,
train_samples_per_epoch=1000,
val_samples=1000,
max_iter=100
)
np.testing.assert_allclose(score, -0.8202789932489393, rtol=5e-7) # need bigger tolerance to handle differences between CPU and GPU
@pytest.mark.skip("tensorflow <2.0 not available for new python versions, need to upgrade to tensorflow 2 in pysaliency")
def test_maximize_expected_sim_decay_on_plateau():
density = np.ones((20, 20))
density[6:17, 8:12] = 20
density[2:4, 18:18] = 30
density /= density.sum()
log_density = np.log(density)
saliency_map, score = maximize_expected_sim(
log_density=log_density,
kernel_size=1,
train_samples_per_epoch=1000,
val_samples=1000,
max_iter=100,
backlook=1,
min_iter=10,
learning_rate_decay_scheme='validation_loss',
)
np.testing.assert_allclose(score, -0.8203513294458387, rtol=5e-7) # need bigger tolerance to handle differences between CPU and GPU
| 32.404255 | 136 | 0.701248 | 219 | 1,523 | 4.666667 | 0.388128 | 0.078278 | 0.092955 | 0.046967 | 0.816047 | 0.816047 | 0.759296 | 0.759296 | 0.759296 | 0.759296 | 0 | 0.092639 | 0.206172 | 1,523 | 46 | 137 | 33.108696 | 0.752688 | 0.128037 | 0 | 0.628571 | 0 | 0 | 0.162387 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 1 | 0.057143 | false | 0 | 0.057143 | 0 | 0.114286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
774d01fb4eaf23896e03b57bf10213cdaf92039a | 5,839 | py | Python | tests/ci/unit_tests/release_flow/test_npm_ci.py | Food-X-Technologies/foodx_devops_tools | 57d1bf1304d9c9a386eaffa427f9eb36c410c350 | [
"MIT"
] | 3 | 2021-06-23T20:53:43.000Z | 2022-01-26T14:19:43.000Z | tests/ci/unit_tests/release_flow/test_npm_ci.py | Food-X-Technologies/foodx_devops_tools | 57d1bf1304d9c9a386eaffa427f9eb36c410c350 | [
"MIT"
] | 33 | 2021-08-09T15:44:51.000Z | 2022-03-03T18:28:02.000Z | tests/ci/unit_tests/release_flow/test_npm_ci.py | Food-X-Technologies/foodx_devops_tools | 57d1bf1304d9c9a386eaffa427f9eb36c410c350 | [
"MIT"
] | 1 | 2021-06-23T20:53:52.000Z | 2021-06-23T20:53:52.000Z | # Copyright (c) 2021 Food-X Technologies
#
# This file is part of foodx_devops_tools.
#
# You should have received a copy of the MIT License along with
# foodx_devops_tools. If not, see <https://opensource.org/licenses/MIT>.
from foodx_devops_tools.release_flow_entry import release_flow
from tests.ci.support.click_runner import click_runner # noqa: F401
class TestNpmSubcommand:
MOCK_PACKAGE_JSON_CONTENT = """{
"author": "FoodX",
"description": "some package",
"keywords": [
"foodx"
],
"license": "SEE LICENSE IN LICENSE",
"name": "@foodx/some-package",
"version": "0.0.0+local"
}
"""
def test_npm_id_nonrelease(self, click_runner, mocker):
mock_input = [
"npm",
"id",
"package.json",
"refs/tags/3.14.159-alpha.13",
"abc123def",
]
mocker.patch(
"foodx_devops_tools.release_flow._simple_ci_release_id.acquire_post_data",
return_value=("3.1.4", "26"),
)
result = click_runner.invoke(release_flow, mock_input)
assert result.exit_code == 0
assert result.output == "3.1.4-post.26.abc123d"
def test_npm_id_release(self, click_runner, mocker):
mock_input = [
"npm",
"id",
"package.json",
"refs/tags/3.14.159",
"abc123def",
]
mocker.patch(
"foodx_devops_tools.release_flow._simple_ci_release_id.acquire_post_data",
return_value=("3.1.4", "26"),
)
result = click_runner.invoke(release_flow, mock_input)
assert result.exit_code == 0
assert result.output == "3.14.159"
def test_npm_package_nonrelease(self, click_runner, mocker):
mock_input = [
"npm",
"package",
"package.json",
"refs/tags/3.14.159-alpha.13",
"abc123def",
]
mocker.patch(
"foodx_devops_tools.release_flow._simple_ci_release_id.acquire_post_data",
return_value=("3.1.4", "26"),
)
with click_runner.isolated_filesystem():
with open("package.json", mode="w") as f:
f.write(self.MOCK_PACKAGE_JSON_CONTENT)
result = click_runner.invoke(release_flow, mock_input)
assert result.exit_code == 0
assert result.output == "foodx-some-package-3.1.4-post.26.abc123d.tgz"
def test_npm_package_release(self, click_runner, mocker):
mock_input = [
"npm",
"package",
"package.json",
"refs/tags/3.14.159",
"abc123def",
]
mocker.patch(
"foodx_devops_tools.release_flow._simple_ci_release_id.acquire_post_data",
return_value=("3.1.4", "26"),
)
with click_runner.isolated_filesystem():
with open("package.json", mode="w") as f:
f.write(self.MOCK_PACKAGE_JSON_CONTENT)
result = click_runner.invoke(release_flow, mock_input)
assert result.exit_code == 0
assert result.output == "foodx-some-package-3.14.159.tgz"
def test_main_branch(self, click_runner, mocker):
mock_arguments = [
"npm",
"package",
"package.json",
"refs/heads/main",
"123abc",
]
mocker.patch(
"foodx_devops_tools.release_flow.npm_ci.apply_package_release_id",
return_value="@some-group/this-package",
)
mocker.patch(
"foodx_devops_tools.release_flow._simple_ci_release_id.acquire_post_data",
return_value=("3.1.4", "26"),
)
with click_runner.isolated_filesystem():
with open("package.json", mode="w") as f:
f.write(self.MOCK_PACKAGE_JSON_CONTENT)
result = click_runner.invoke(release_flow, mock_arguments)
assert result.exit_code == 0
assert (
result.output == "some-group-this-package-3.1.4-post.26.123abc.tgz"
)
def test_release_tag(self, click_runner, mocker):
mock_arguments = [
"npm",
"package",
"package.json",
"refs/tags/3.14.159",
"123abc",
]
mocker.patch(
"foodx_devops_tools.release_flow.npm_ci.apply_package_release_id",
return_value="@some-group/this-package",
)
mocker.patch(
"foodx_devops_tools.release_flow._simple_ci_release_id.acquire_post_data",
return_value=("3.1.4", "26"),
)
with click_runner.isolated_filesystem():
with open("package.json", mode="w") as f:
f.write(self.MOCK_PACKAGE_JSON_CONTENT)
result = click_runner.invoke(release_flow, mock_arguments)
assert result.exit_code == 0
assert result.output == "some-group-this-package-3.14.159.tgz"
def test_dryrun_tag(self, click_runner, mocker):
mock_arguments = [
"npm",
"package",
"package.json",
"refs/tags/3.14.159-dryrun45",
"123abc",
]
mocker.patch(
"foodx_devops_tools.release_flow.npm_ci.apply_package_release_id",
return_value="@some-group/this-package",
)
mocker.patch(
"foodx_devops_tools.release_flow._simple_ci_release_id.acquire_post_data",
return_value=("3.1.4", "26"),
)
with click_runner.isolated_filesystem():
with open("package.json", mode="w") as f:
f.write(self.MOCK_PACKAGE_JSON_CONTENT)
result = click_runner.invoke(release_flow, mock_arguments)
assert result.exit_code == 0
assert result.output == "some-group-this-package-3.14.159-dryrun45.tgz"
| 32.620112 | 86 | 0.584004 | 695 | 5,839 | 4.628777 | 0.159712 | 0.071806 | 0.064657 | 0.078645 | 0.83525 | 0.826857 | 0.815045 | 0.807274 | 0.796705 | 0.796705 | 0 | 0.040146 | 0.296112 | 5,839 | 178 | 87 | 32.803371 | 0.742579 | 0.038191 | 0 | 0.668919 | 0 | 0.006757 | 0.292974 | 0.193652 | 0 | 0 | 0 | 0 | 0.094595 | 1 | 0.047297 | false | 0 | 0.013514 | 0 | 0.074324 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6272e7e485e16c1f10471bf63c8e9bc275f7a9f7 | 84,151 | py | Python | TP_model/fit_and_forecast/generate_posterior.py | djmorris7/covid19-forecasting-aus | 789bd40637738292b7a77103cbae636c177c2479 | [
"MIT"
] | 1 | 2021-10-12T10:25:31.000Z | 2021-10-12T10:25:31.000Z | TP_model/fit_and_forecast/generate_posterior.py | djmorris7/covid19-forecasting-aus | 789bd40637738292b7a77103cbae636c177c2479 | [
"MIT"
] | null | null | null | TP_model/fit_and_forecast/generate_posterior.py | djmorris7/covid19-forecasting-aus | 789bd40637738292b7a77103cbae636c177c2479 | [
"MIT"
] | null | null | null | ######### imports #########
from ast import arg
from datetime import timedelta
import sys
sys.path.insert(0, "TP_model")
sys.path.insert(0, "TP_model/fit_and_forecast")
from Reff_constants import *
from Reff_functions import *
import glob
import os
from sys import argv
import arviz as az
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import matplotlib
from math import ceil
import pickle
from cmdstanpy import CmdStanModel
matplotlib.use("Agg")
from params import (
truncation_days,
start_date,
third_start_date,
alpha_start_date,
omicron_start_date,
omicron_only_date,
omicron_dominance_date,
pop_sizes,
num_forecast_days,
get_all_p_detect_old,
get_all_p_detect,
)
def process_vax_data_array(
data_date,
third_states,
third_end_date,
variant="Delta",
print_latest_date_in_ts=False,
):
"""
Processes the vaccination data to an array for either the Omicron or Delta strain.
"""
# Load in vaccination data by state and date
vaccination_by_state = pd.read_csv(
"data/vaccine_effect_timeseries_" + data_date.strftime("%Y-%m-%d") + ".csv",
parse_dates=["date"],
)
# there are a couple NA's early on in the time series but is likely due to slightly
# different start dates
vaccination_by_state.fillna(1, inplace=True)
vaccination_by_state = vaccination_by_state.loc[
vaccination_by_state["variant"] == variant
]
vaccination_by_state = vaccination_by_state[["state", "date", "effect"]]
if print_latest_date_in_ts:
# display the latest available date in the NSW data (will be the same date between states)
print(
"Latest date in vaccine data is {}".format(
vaccination_by_state[vaccination_by_state.state == "NSW"].date.values[-1]
)
)
# Get only the dates we need + 1 (this serves as the initial value)
vaccination_by_state = vaccination_by_state[
(
vaccination_by_state.date
>= pd.to_datetime(third_start_date) - timedelta(days=1)
)
& (vaccination_by_state.date <= third_end_date)
]
vaccination_by_state = vaccination_by_state[
vaccination_by_state["state"].isin(third_states)
] # Isolate fitting states
vaccination_by_state = vaccination_by_state.pivot(
index="state", columns="date", values="effect"
) # Convert to matrix form
# If we are missing recent vaccination data, fill it in with the most recent available data.
latest_vacc_data = vaccination_by_state.columns[-1]
if latest_vacc_data < pd.to_datetime(third_end_date):
vaccination_by_state = pd.concat(
[vaccination_by_state]
+ [
pd.Series(vaccination_by_state[latest_vacc_data], name=day)
for day in pd.date_range(start=latest_vacc_data, end=third_end_date)
],
axis=1,
)
# Convert to simple array only useful to pass to stan (index 1 onwards)
vaccination_by_state_array = vaccination_by_state.iloc[:, 1:].to_numpy()
return vaccination_by_state_array
def get_data_for_posterior(data_date):
"""
Read in the various datastreams and combine the samples into a dictionary that we then
dump to a pickle file.
"""
print("Performing inference on state level Reff")
data_date = pd.to_datetime(data_date) # Define data date
print("Data date is {}".format(data_date.strftime("%d%b%Y")))
fit_date = pd.to_datetime(data_date - timedelta(days=truncation_days))
print("Last date in fitting {}".format(fit_date.strftime("%d%b%Y")))
# * Note: 2020-09-09 won't work (for some reason)
# read in microdistancing survey data
surveys = pd.DataFrame()
path = "data/md/Barometer wave*.csv"
for file in glob.glob(path):
surveys = surveys.append(pd.read_csv(file, parse_dates=["date"]))
surveys = surveys.sort_values(by="date")
print("Latest Microdistancing survey is {}".format(surveys.date.values[-1]))
surveys["state"] = surveys["state"].map(states_initials).fillna(surveys["state"])
surveys["proportion"] = surveys["count"] / surveys.respondents
surveys.date = pd.to_datetime(surveys.date)
always = surveys.loc[surveys.response == "Always"].set_index(["state", "date"])
always = always.unstack(["state"])
# If you get an error here saying 'cannot create a new series when the index is not unique',
# then you have a duplicated md file.
idx = pd.date_range("2020-03-01", pd.to_datetime("today"))
always = always.reindex(idx, fill_value=np.nan)
always.index.name = "date"
# fill back to earlier and between weeks.
# Assume survey on day x applies for all days up to x - 6
always = always.fillna(method="bfill")
# assume values continue forward if survey hasn't completed
always = always.fillna(method="ffill")
always = always.stack(["state"])
# Zero out before first survey 20th March
always = always.reset_index().set_index("date")
always.loc[:"2020-03-20", "count"] = 0
always.loc[:"2020-03-20", "respondents"] = 0
always.loc[:"2020-03-20", "proportion"] = 0
always = always.reset_index().set_index(["state", "date"])
survey_X = pd.pivot_table(
data=always, index="date", columns="state", values="proportion"
)
survey_counts_base = (
pd.pivot_table(data=always, index="date", columns="state", values="count")
.drop(["Australia", "Other"], axis=1)
.astype(int)
)
survey_respond_base = (
pd.pivot_table(data=always, index="date", columns="state", values="respondents")
.drop(["Australia", "Other"], axis=1)
.astype(int)
)
# read in and process mask wearing data
mask_wearing = pd.DataFrame()
path = "data/face_coverings/face_covering_*_.csv"
for file in glob.glob(path):
mask_wearing = mask_wearing.append(pd.read_csv(file, parse_dates=["date"]))
mask_wearing = mask_wearing.sort_values(by="date")
print("Latest Mask wearing survey is {}".format(mask_wearing.date.values[-1]))
mask_wearing["state"] = (
mask_wearing["state"].map(states_initials).fillna(mask_wearing["state"])
)
mask_wearing["proportion"] = mask_wearing["count"] / mask_wearing.respondents
mask_wearing.date = pd.to_datetime(mask_wearing.date)
mask_wearing_always = mask_wearing.loc[
mask_wearing.face_covering == "Always"
].set_index(["state", "date"])
mask_wearing_always = mask_wearing_always.unstack(["state"])
idx = pd.date_range("2020-03-01", pd.to_datetime("today"))
mask_wearing_always = mask_wearing_always.reindex(idx, fill_value=np.nan)
mask_wearing_always.index.name = "date"
# fill back to earlier and between weeks.
# Assume survey on day x applies for all days up to x - 6
mask_wearing_always = mask_wearing_always.fillna(method="bfill")
# assume values continue forward if survey hasn't completed
mask_wearing_always = mask_wearing_always.fillna(method="ffill")
mask_wearing_always = mask_wearing_always.stack(["state"])
# Zero out before first survey 20th March
mask_wearing_always = mask_wearing_always.reset_index().set_index("date")
mask_wearing_always.loc[:"2020-03-20", "count"] = 0
mask_wearing_always.loc[:"2020-03-20", "respondents"] = 0
mask_wearing_always.loc[:"2020-03-20", "proportion"] = 0
mask_wearing_X = pd.pivot_table(
data=mask_wearing_always, index="date", columns="state", values="proportion"
)
mask_wearing_counts_base = pd.pivot_table(
data=mask_wearing_always, index="date", columns="state", values="count"
).astype(int)
mask_wearing_respond_base = pd.pivot_table(
data=mask_wearing_always, index="date", columns="state", values="respondents"
).astype(int)
df_Reff = pd.read_csv(
"results/EpyReff/Reff_delta" + data_date.strftime("%Y-%m-%d") + "tau_4.csv",
parse_dates=["INFECTION_DATES"],
)
df_Reff["date"] = df_Reff.INFECTION_DATES
df_Reff["state"] = df_Reff.STATE
df_Reff_omicron = pd.read_csv(
"results/EpyReff/Reff_omicron" + data_date.strftime("%Y-%m-%d") + "tau_4.csv",
parse_dates=["INFECTION_DATES"],
)
df_Reff_omicron["date"] = df_Reff_omicron.INFECTION_DATES
df_Reff_omicron["state"] = df_Reff_omicron.STATE
# relabel some of the columns to avoid replication in the merged dataframe
col_names_replace = {
"mean": "mean_omicron",
"lower": "lower_omicron",
"upper": "upper_omicron",
"top": "top_omicron",
"bottom": "bottom_omicron",
"std": "std_omicron",
}
df_Reff_omicron.rename(col_names_replace, axis=1, inplace=True)
# read in NNDSS/linelist data
# If this errors it may be missing a leading zero on the date.
df_state = read_in_cases(
case_file_date=data_date.strftime("%d%b%Y"),
apply_delay_at_read=True,
apply_inc_at_read=True,
)
# save the case file for convenience
df_state.to_csv("results/cases_" + data_date.strftime("%Y-%m-%d") + ".csv")
df_Reff = df_Reff.merge(
df_state,
how="left",
left_on=["state", "date"],
right_on=["STATE", "date_inferred"],
) # how = left to use Reff days, NNDSS missing dates
# merge in the omicron stuff
df_Reff = df_Reff.merge(
df_Reff_omicron,
how="left",
left_on=["state", "date"],
right_on=["state", "date"],
)
df_Reff["rho_moving"] = df_Reff.groupby(["state"])["rho"].transform(
lambda x: x.rolling(7, 1).mean()
) # minimum number of 1
# some days have no cases, so need to fillna
df_Reff["rho_moving"] = df_Reff.rho_moving.fillna(method="bfill")
# counts are already aligned with infection date by subtracting a random incubation period
df_Reff["local"] = df_Reff.local.fillna(0)
df_Reff["imported"] = df_Reff.imported.fillna(0)
######### Read in Google mobility results #########
sys.path.insert(0, "../")
df_google = read_in_google(moving=True, moving_window=7)
# df_google = read_in_google(moving=False)
df = df_google.merge(df_Reff[[
"date",
"state",
"mean",
"lower",
"upper",
"top",
"bottom",
"std",
"mean_omicron",
"lower_omicron",
"upper_omicron",
"top_omicron",
"bottom_omicron",
"std_omicron",
"rho",
"rho_moving",
"local",
"imported",
]],
on=["date", "state"],
how="inner",
)
######### Create useable dataset #########
# ACT and NT not in original estimates, need to extrapolated sorting keeps consistent
# with sort in data_by_state
# Note that as we now consider the third wave for ACT, we include it in the third
# wave fitting only!
states_to_fit_all_waves = sorted(
["NSW", "VIC", "QLD", "SA", "WA", "TAS", "ACT", "NT"]
)
first_states = sorted(["NSW", "VIC", "QLD", "SA", "WA", "TAS"])
fit_post_March = True
ban = "2020-03-20"
first_end_date = "2020-03-31"
# data for the first wave
first_date_range = {
"NSW": pd.date_range(start="2020-03-01", end=first_end_date).values,
"QLD": pd.date_range(start="2020-03-01", end=first_end_date).values,
"SA": pd.date_range(start="2020-03-01", end=first_end_date).values,
"TAS": pd.date_range(start="2020-03-01", end=first_end_date).values,
"VIC": pd.date_range(start="2020-03-01", end=first_end_date).values,
"WA": pd.date_range(start="2020-03-01", end=first_end_date).values,
}
# Second wave inputs
sec_states = sorted([
"NSW",
# "VIC",
])
sec_start_date = "2020-06-01"
sec_end_date = "2021-01-19"
# choose dates for each state for sec wave
sec_date_range = {
"NSW": pd.date_range(start="2020-06-01", end="2021-01-19").values,
# "VIC": pd.date_range(start="2020-06-01", end="2020-10-28").values,
}
# Third wave inputs
third_states = sorted([
"NSW",
"VIC",
"ACT",
"QLD",
"SA",
"TAS",
# "NT",
"WA",
])
# Subtract the truncation days to avoid right truncation as we consider infection dates
# and not symptom onset dates
third_end_date = data_date - pd.Timedelta(days=truncation_days)
# choose dates for each state for third wave
# Note that as we now consider the third wave for ACT, we include it in
# the third wave fitting only!
third_date_range = {
"ACT": pd.date_range(start="2021-08-15", end=third_end_date).values,
"NSW": pd.date_range(start="2021-06-25", end=third_end_date).values,
# "NT": pd.date_range(start="2021-12-20", end=third_end_date).values,
"QLD": pd.date_range(start="2021-07-30", end=third_end_date).values,
"SA": pd.date_range(start="2021-12-10", end=third_end_date).values,
"TAS": pd.date_range(start="2021-12-20", end=third_end_date).values,
"VIC": pd.date_range(start="2021-07-10", end=third_end_date).values,
"WA": pd.date_range(start="2022-01-01", end=third_end_date).values,
}
fit_mask = df.state.isin(first_states)
if fit_post_March:
fit_mask = (fit_mask) & (df.date >= start_date)
fit_mask = (fit_mask) & (df.date <= first_end_date)
second_wave_mask = df.state.isin(sec_states)
second_wave_mask = (second_wave_mask) & (df.date >= sec_start_date)
second_wave_mask = (second_wave_mask) & (df.date <= sec_end_date)
# Add third wave stuff here
third_wave_mask = df.state.isin(third_states)
third_wave_mask = (third_wave_mask) & (df.date >= third_start_date)
third_wave_mask = (third_wave_mask) & (df.date <= third_end_date)
predictors = mov_values.copy()
# predictors.extend(['driving_7days','transit_7days','walking_7days','pc'])
# remove residential to see if it improves fit
# predictors.remove("residential_7days")
df["post_policy"] = (df.date >= ban).astype(int)
dfX = df.loc[fit_mask].sort_values("date")
df2X = df.loc[second_wave_mask].sort_values("date")
df3X = df.loc[third_wave_mask].sort_values("date")
dfX["is_first_wave"] = 0
for state in first_states:
dfX.loc[dfX.state == state, "is_first_wave"] = (
dfX.loc[dfX.state == state]
.date.isin(first_date_range[state])
.astype(int)
.values
)
df2X["is_sec_wave"] = 0
for state in sec_states:
df2X.loc[df2X.state == state, "is_sec_wave"] = (
df2X.loc[df2X.state == state]
.date.isin(sec_date_range[state])
.astype(int)
.values
)
# used to index what dates are featured in omicron AND third wave
omicron_date_range = pd.date_range(start=omicron_start_date, end=third_end_date)
df3X["is_third_wave"] = 0
for state in third_states:
df3X.loc[df3X.state == state, "is_third_wave"] = (
df3X.loc[df3X.state == state]
.date.isin(third_date_range[state])
.astype(int)
.values
)
# condition on being in third wave AND omicron
df3X.loc[df3X.state == state, "is_omicron_wave"] = (
(
df3X.loc[df3X.state == state].date.isin(omicron_date_range)
* df3X.loc[df3X.state == state].date.isin(third_date_range[state])
)
.astype(int)
.values
)
data_by_state = {}
sec_data_by_state = {}
third_data_by_state = {}
for value in ["mean", "std", "local", "imported"]:
data_by_state[value] = pd.pivot(
dfX[["state", value, "date"]],
index="date",
columns="state",
values=value,
).sort_index(axis="columns")
# account for dates pre pre second wave
if df2X.loc[df2X.state == sec_states[0]].shape[0] == 0:
print("making empty")
sec_data_by_state[value] = pd.DataFrame(columns=sec_states).astype(float)
else:
sec_data_by_state[value] = pd.pivot(
df2X[["state", value, "date"]],
index="date",
columns="state",
values=value,
).sort_index(axis="columns")
# account for dates pre pre third wave
if df3X.loc[df3X.state == third_states[0]].shape[0] == 0:
print("making empty")
third_data_by_state[value] = pd.DataFrame(columns=third_states).astype(
float
)
else:
third_data_by_state[value] = pd.pivot(
df3X[["state", value, "date"]],
index="date",
columns="state",
values=value,
).sort_index(axis="columns")
# now add in the summary stats for Omicron Reff
for value in ["mean_omicron", "std_omicron"]:
if df3X.loc[df3X.state == third_states[0]].shape[0] == 0:
print("making empty")
third_data_by_state[value] = pd.DataFrame(columns=third_states).astype(
float
)
else:
third_data_by_state[value] = pd.pivot(
df3X[["state", value, "date"]],
index="date",
columns="state",
values=value,
).sort_index(axis="columns")
# FIRST PHASE
mobility_by_state = []
mobility_std_by_state = []
count_by_state = []
respond_by_state = []
mask_wearing_count_by_state = []
mask_wearing_respond_by_state = []
include_in_first_wave = []
# filtering survey responses to dates before this wave fitting
survey_respond = survey_respond_base.loc[: dfX.date.values[-1]]
survey_counts = survey_counts_base.loc[: dfX.date.values[-1]]
mask_wearing_respond = mask_wearing_respond_base.loc[: dfX.date.values[-1]]
mask_wearing_counts = mask_wearing_counts_base.loc[: dfX.date.values[-1]]
for state in first_states:
mobility_by_state.append(dfX.loc[dfX.state == state, predictors].values / 100)
mobility_std_by_state.append(
dfX.loc[dfX.state == state, [val + "_std" for val in predictors]].values / 100
)
count_by_state.append(survey_counts.loc[start_date:first_end_date, state].values)
respond_by_state.append(survey_respond.loc[start_date:first_end_date, state].values)
mask_wearing_count_by_state.append(
mask_wearing_counts.loc[start_date:first_end_date, state].values
)
mask_wearing_respond_by_state.append(
mask_wearing_respond.loc[start_date:first_end_date, state].values
)
include_in_first_wave.append(
dfX.loc[dfX.state == state, "is_first_wave"].values
)
# SECOND PHASE
sec_mobility_by_state = []
sec_mobility_std_by_state = []
sec_count_by_state = []
sec_respond_by_state = []
sec_mask_wearing_count_by_state = []
sec_mask_wearing_respond_by_state = []
include_in_sec_wave = []
# filtering survey responses to dates before this wave fitting
survey_respond = survey_respond_base.loc[: df2X.date.values[-1]]
survey_counts = survey_counts_base.loc[: df2X.date.values[-1]]
mask_wearing_respond = mask_wearing_respond_base.loc[: df2X.date.values[-1]]
mask_wearing_counts = mask_wearing_counts_base.loc[: df2X.date.values[-1]]
for state in sec_states:
sec_mobility_by_state.append(
df2X.loc[df2X.state == state, predictors].values / 100
)
sec_mobility_std_by_state.append(
df2X.loc[df2X.state == state, [val + "_std" for val in predictors]].values / 100
)
sec_count_by_state.append(
survey_counts.loc[sec_start_date:sec_end_date, state].values
)
sec_respond_by_state.append(
survey_respond.loc[sec_start_date:sec_end_date, state].values
)
sec_mask_wearing_count_by_state.append(
mask_wearing_counts.loc[sec_start_date:sec_end_date, state].values
)
sec_mask_wearing_respond_by_state.append(
mask_wearing_respond.loc[sec_start_date:sec_end_date, state].values
)
include_in_sec_wave.append(df2X.loc[df2X.state == state, "is_sec_wave"].values)
# THIRD WAVE
third_mobility_by_state = []
third_mobility_std_by_state = []
third_count_by_state = []
third_respond_by_state = []
third_mask_wearing_count_by_state = []
third_mask_wearing_respond_by_state = []
include_in_third_wave = []
include_in_omicron_wave = []
# filtering survey responses to dates before this wave fitting
survey_respond = survey_respond_base.loc[: df3X.date.values[-1]]
survey_counts = survey_counts_base.loc[: df3X.date.values[-1]]
mask_wearing_respond = mask_wearing_respond_base.loc[: df3X.date.values[-1]]
mask_wearing_counts = mask_wearing_counts_base.loc[: df3X.date.values[-1]]
for state in third_states:
third_mobility_by_state.append(
df3X.loc[df3X.state == state, predictors].values / 100
)
third_mobility_std_by_state.append(
df3X.loc[df3X.state == state, [val + "_std" for val in predictors]].values / 100
)
third_count_by_state.append(
survey_counts.loc[third_start_date:third_end_date, state].values
)
third_respond_by_state.append(
survey_respond.loc[third_start_date:third_end_date, state].values
)
third_mask_wearing_count_by_state.append(
mask_wearing_counts.loc[third_start_date:third_end_date, state].values
)
third_mask_wearing_respond_by_state.append(
mask_wearing_respond.loc[third_start_date:third_end_date, state].values
)
include_in_third_wave.append(
df3X.loc[df3X.state == state, "is_third_wave"].values
)
include_in_omicron_wave.append(
df3X.loc[df3X.state == state, "is_omicron_wave"].values
)
# policy boolean flag for after travel ban in each wave
policy = dfX.loc[
dfX.state == first_states[0], "post_policy"
] # this is the post ban policy
policy_sec_wave = [1] * df2X.loc[df2X.state == sec_states[0]].shape[0]
policy_third_wave = [1] * df3X.loc[df3X.state == third_states[0]].shape[0]
# read in the vaccination data
delta_vaccination_by_state_array = process_vax_data_array(
data_date=data_date,
third_states=third_states,
third_end_date=third_end_date,
variant="Delta",
print_latest_date_in_ts=True,
)
omicron_vaccination_by_state_array = process_vax_data_array(
data_date=data_date,
third_states=third_states,
third_end_date=third_end_date,
variant="Omicron",
)
# Make state by state arrays
state_index = {state: i + 1 for i, state in enumerate(states_to_fit_all_waves)}
# dates to apply alpha in the second wave (this won't allow for VIC to be added as
# the date_ranges are different)
apply_alpha_sec_wave = (
sec_date_range["NSW"] >= pd.to_datetime(alpha_start_date)
).astype(int)
omicron_start_day = (
pd.to_datetime(omicron_start_date) - pd.to_datetime(third_start_date)
).days
omicron_only_day = (
pd.to_datetime(omicron_only_date) - pd.to_datetime(third_start_date)
).days
heterogeneity_start_day = (
pd.to_datetime("2021-08-20") - pd.to_datetime(third_start_date)
).days
# number of days we fit the average VE over
tau_vax_block_size = 3
# get pop size array
pop_size_array = []
for s in states_to_fit_all_waves:
pop_size_array.append(pop_sizes[s])
p_detect = get_all_p_detect_old(
states=third_states,
end_date=third_end_date,
num_days=df3X.loc[df3X.state == "NSW"].shape[0],
)
df_p_detect = pd.DataFrame(p_detect, columns=third_states)
df_p_detect["date"] = third_date_range["NSW"]
df_p_detect.to_csv("results/CA_" + data_date.strftime("%Y-%m-%d") + ".csv")
# p_detect = get_all_p_detect(
# end_date=third_end_date,
# num_days=df3X.loc[df3X.state == "NSW"].shape[0],
# )
# input data block for stan model
input_data = {
"j_total": len(states_to_fit_all_waves),
"N_first": dfX.loc[dfX.state == first_states[0]].shape[0],
"K": len(predictors),
"j_first": len(first_states),
"Reff": data_by_state["mean"].values,
"mob": mobility_by_state,
"mob_std": mobility_std_by_state,
"sigma2": data_by_state["std"].values ** 2,
"policy": policy.values,
"local": data_by_state["local"].values,
"imported": data_by_state["imported"].values,
"N_sec": df2X.loc[df2X.state == sec_states[0]].shape[0],
"j_sec": len(sec_states),
"Reff_sec": sec_data_by_state["mean"].values,
"mob_sec": sec_mobility_by_state,
"mob_sec_std": sec_mobility_std_by_state,
"sigma2_sec": sec_data_by_state["std"].values ** 2,
"policy_sec": policy_sec_wave,
"local_sec": sec_data_by_state["local"].values,
"imported_sec": sec_data_by_state["imported"].values,
"apply_alpha_sec": apply_alpha_sec_wave,
"N_third": df3X.loc[df3X.state == "NSW"].shape[0],
"j_third": len(third_states),
"Reff_third": third_data_by_state["mean"].values,
"Reff_omicron": third_data_by_state["mean_omicron"].values,
"mob_third": third_mobility_by_state,
"mob_third_std": third_mobility_std_by_state,
"sigma2_third": third_data_by_state["std"].values ** 2,
"sigma2_omicron": third_data_by_state["std_omicron"].values ** 2,
"policy_third": policy_third_wave,
"local_third": third_data_by_state["local"].values,
"imported_third": third_data_by_state["imported"].values,
"count_md": count_by_state,
"respond_md": respond_by_state,
"count_md_sec": sec_count_by_state,
"respond_md_sec": sec_respond_by_state,
"count_md_third": third_count_by_state,
"respond_md_third": third_respond_by_state,
"count_masks": mask_wearing_count_by_state,
"respond_masks": mask_wearing_respond_by_state,
"count_masks_sec": sec_mask_wearing_count_by_state,
"respond_masks_sec": sec_mask_wearing_respond_by_state,
"count_masks_third": third_mask_wearing_count_by_state,
"respond_masks_third": third_mask_wearing_respond_by_state,
"map_to_state_index_first": [state_index[state] for state in first_states],
"map_to_state_index_sec": [state_index[state] for state in sec_states],
"map_to_state_index_third": [state_index[state] for state in third_states],
"total_N_p_sec": sum([sum(x) for x in include_in_sec_wave]).item(),
"total_N_p_third": sum([sum(x) for x in include_in_third_wave]).item(),
"include_in_first": include_in_first_wave,
"include_in_sec": include_in_sec_wave,
"include_in_third": include_in_third_wave,
"pos_starts_sec": np.cumsum([sum(x) for x in include_in_sec_wave]).astype(int).tolist(),
"pos_starts_third": np.cumsum(
[sum(x) for x in include_in_third_wave]
).astype(int).tolist(),
"ve_delta_data": delta_vaccination_by_state_array,
"ve_omicron_data": omicron_vaccination_by_state_array,
"omicron_start_day": omicron_start_day,
"omicron_only_day": omicron_only_day,
"include_in_omicron": include_in_omicron_wave,
"total_N_p_third_omicron": int(sum([sum(x) for x in include_in_omicron_wave]).item()),
"pos_starts_third_omicron": np.cumsum(
[sum(x) for x in include_in_omicron_wave]
).astype(int).tolist(),
'tau_vax_block_size': tau_vax_block_size,
'total_N_p_third_blocks': int(
sum([int(ceil(sum(x)/tau_vax_block_size)) for x in include_in_third_wave])
),
'pos_starts_third_blocks': np.cumsum(
[int(ceil(sum(x)/tau_vax_block_size)) for x in include_in_third_wave]
).astype(int),
'total_N_p_third_omicron_blocks': int(
sum([int(ceil(sum(x)/tau_vax_block_size)) for x in include_in_omicron_wave])
),
'pos_starts_third_omicron_blocks': np.cumsum(
[int(ceil(sum(x)/tau_vax_block_size)) for x in include_in_omicron_wave]
).astype(int),
"pop_size_array": pop_size_array,
"heterogeneity_start_day": heterogeneity_start_day,
"p_detect": p_detect,
}
# dump the dictionary to a json file
with open("results/stan_input_data.pkl", "wb") as f:
pickle.dump(input_data, f)
return None
def run_stan(
data_date,
num_chains=4,
num_samples=1000,
num_warmup_samples=500,
max_treedepth=12,
):
"""
Read the input_data.json in and run the stan model.
"""
data_date = pd.to_datetime(data_date)
# read in the input data as a dictionary
with open("results/stan_input_data.pkl", "rb") as f:
input_data = pickle.load(f)
# make results and figs dir
figs_dir = (
"figs/stan_fit/stan_fit_"
+ data_date.strftime("%Y-%m-%d")
+ "/"
)
results_dir = (
"results/"
+ data_date.strftime("%Y-%m-%d")
+ "/"
)
os.makedirs(figs_dir, exist_ok=True)
os.makedirs(results_dir, exist_ok=True)
# path to the stan model
# basic model with a switchover between Reffs
# rho_model_gamma = "TP_model/fit_and_forecast/stan_models/TP_switchover.stan"
# mixture model with basic susceptible depletion
# rho_model_gamma = "TP_model/fit_and_forecast/stan_models/TP_gamma_mix.stan"
# model that has a switchover but incorporates a waning in infection acquired immunity
rho_model_gamma = "TP_model/fit_and_forecast/stan_models/TP_switchover_waning_infection.stan"
# model that incorporates a waning in infection acquired immunity but is coded as a mixture
# rho_model_gamma = "TP_model/fit_and_forecast/stan_models/TP_gamma_mix_waning_infection.stan"
# model that has a switchover but incorporates a waning in infection acquired immunity
# rho_model_gamma = "TP_model/fit_and_forecast/stan_models/TP_switchover_waning_infection_single_md.stan"
# compile the stan model
model = CmdStanModel(stan_file=rho_model_gamma)
# obtain a posterior sample from the model conditioned on the data
fit = model.sample(
chains=num_chains,
iter_warmup=num_warmup_samples,
iter_sampling=num_samples,
data=input_data,
max_treedepth=max_treedepth,
refresh=10
)
# display convergence diagnostics for the current run
print("===========")
print(fit.diagnose())
print("===========")
# save output file to
fit.save_csvfiles(dir=results_dir)
df_fit = fit.draws_pd()
df_fit.to_csv(
results_dir
+ "posterior_sample_"
+ data_date.strftime("%Y-%m-%d")
+ ".csv"
)
# output a set of diagnostics
filename = (
figs_dir
+ "fit_summary_all_parameters"
+ data_date.strftime("%Y-%m-%d")
+ ".csv"
)
# save a summary file for all parameters; this involves ESS and ESS/s as well as summary stats
fit_summary = fit.summary()
fit_summary.to_csv(filename)
# now save a small summary to easily view key parameters
pars_of_interest = ["bet[" + str(i + 1) + "]" for i in range(5)]
pars_of_interest = pars_of_interest + ["R_Li[" + str(i + 1) + "]" for i in range(8)]
pars_of_interest = pars_of_interest + [
"R_I",
"R_L",
"theta_md",
"theta_masks",
"sig",
"voc_effect_alpha",
"voc_effect_delta",
"voc_effect_omicron",
]
pars_of_interest = pars_of_interest + [
col for col in df_fit if "phi" in col and "simplex" not in col
]
# save a summary for ease of viewing
# output a set of diagnostics
filename = (
figs_dir
+ "fit_summary_main_parameters"
+ data_date.strftime("%Y-%m-%d")
+ ".csv"
)
fit_summary.loc[pars_of_interest].to_csv(filename)
return None
def plot_and_save_posterior_samples(data_date):
"""
Runs the full suite of plotting.
"""
data_date = pd.to_datetime(data_date) # Define data date
figs_dir = (
"figs/stan_fit/stan_fit_"
+ data_date.strftime("%Y-%m-%d")
+ "/"
)
# read in the posterior sample
samples_mov_gamma = pd.read_csv(
"results/"
+ data_date.strftime("%Y-%m-%d")
+ "/posterior_sample_"
+ data_date.strftime("%Y-%m-%d")
+ ".csv"
)
# * Note: 2020-09-09 won't work (for some reason)
######### Read in microdistancing (md) surveys #########
surveys = pd.DataFrame()
path = "data/md/Barometer wave*.csv"
for file in glob.glob(path):
surveys = surveys.append(pd.read_csv(file, parse_dates=["date"]))
surveys = surveys.sort_values(by="date")
print("Latest Microdistancing survey is {}".format(surveys.date.values[-1]))
surveys["state"] = surveys["state"].map(states_initials).fillna(surveys["state"])
surveys["proportion"] = surveys["count"] / surveys.respondents
surveys.date = pd.to_datetime(surveys.date)
always = surveys.loc[surveys.response == "Always"].set_index(["state", "date"])
always = always.unstack(["state"])
# If you get an error here saying 'cannot create a new series when the index is not unique',
# then you have a duplicated md file.
idx = pd.date_range("2020-03-01", pd.to_datetime("today"))
always = always.reindex(idx, fill_value=np.nan)
always.index.name = "date"
# fill back to earlier and between weeks.
# Assume survey on day x applies for all days up to x - 6
always = always.fillna(method="bfill")
# assume values continue forward if survey hasn't completed
always = always.fillna(method="ffill")
always = always.stack(["state"])
# Zero out before first survey 20th March
always = always.reset_index().set_index("date")
always.loc[:"2020-03-20", "count"] = 0
always.loc[:"2020-03-20", "respondents"] = 0
always.loc[:"2020-03-20", "proportion"] = 0
always = always.reset_index().set_index(["state", "date"])
survey_X = pd.pivot_table(
data=always, index="date", columns="state", values="proportion"
)
survey_counts_base = (
pd.pivot_table(data=always, index="date", columns="state", values="count")
.drop(["Australia", "Other"], axis=1)
.astype(int)
)
survey_respond_base = (
pd.pivot_table(data=always, index="date", columns="state", values="respondents")
.drop(["Australia", "Other"], axis=1)
.astype(int)
)
## read in and process mask wearing data
mask_wearing = pd.DataFrame()
path = "data/face_coverings/face_covering_*_.csv"
for file in glob.glob(path):
mask_wearing = mask_wearing.append(pd.read_csv(file, parse_dates=["date"]))
mask_wearing = mask_wearing.sort_values(by="date")
print("Latest Mask wearing survey is {}".format(mask_wearing.date.values[-1]))
mask_wearing["state"] = (
mask_wearing["state"].map(states_initials).fillna(mask_wearing["state"])
)
mask_wearing["proportion"] = mask_wearing["count"] / mask_wearing.respondents
mask_wearing.date = pd.to_datetime(mask_wearing.date)
mask_wearing_always = mask_wearing.loc[
mask_wearing.face_covering == "Always"
].set_index(["state", "date"])
mask_wearing_always = mask_wearing_always.unstack(["state"])
idx = pd.date_range("2020-03-01", pd.to_datetime("today"))
mask_wearing_always = mask_wearing_always.reindex(idx, fill_value=np.nan)
mask_wearing_always.index.name = "date"
# fill back to earlier and between weeks.
# Assume survey on day x applies for all days up to x - 6
mask_wearing_always = mask_wearing_always.fillna(method="bfill")
# assume values continue forward if survey hasn't completed
mask_wearing_always = mask_wearing_always.fillna(method="ffill")
mask_wearing_always = mask_wearing_always.stack(["state"])
# Zero out before first survey 20th March
mask_wearing_always = mask_wearing_always.reset_index().set_index("date")
mask_wearing_always.loc[:"2020-03-20", "count"] = 0
mask_wearing_always.loc[:"2020-03-20", "respondents"] = 0
mask_wearing_always.loc[:"2020-03-20", "proportion"] = 0
mask_wearing_X = pd.pivot_table(
data=mask_wearing_always, index="date", columns="state", values="proportion"
)
mask_wearing_counts_base = pd.pivot_table(
data=mask_wearing_always, index="date", columns="state", values="count"
).astype(int)
mask_wearing_respond_base = pd.pivot_table(
data=mask_wearing_always, index="date", columns="state", values="respondents"
).astype(int)
df_Reff = pd.read_csv(
"results/EpyReff/Reff_delta" + data_date.strftime("%Y-%m-%d") + "tau_4.csv",
parse_dates=["INFECTION_DATES"],
)
df_Reff["date"] = df_Reff.INFECTION_DATES
df_Reff["state"] = df_Reff.STATE
df_Reff_omicron = pd.read_csv(
"results/EpyReff/Reff_omicron" + data_date.strftime("%Y-%m-%d") + "tau_4.csv",
parse_dates=["INFECTION_DATES"],
)
df_Reff_omicron["date"] = df_Reff_omicron.INFECTION_DATES
df_Reff_omicron["state"] = df_Reff_omicron.STATE
# relabel some of the columns to avoid replication in the merged dataframe
col_names_replace = {
"mean": "mean_omicron",
"lower": "lower_omicron",
"upper": "upper_omicron",
"top": "top_omicron",
"bottom": "bottom_omicron",
"std": "std_omicron",
}
df_Reff_omicron.rename(col_names_replace, axis=1, inplace=True)
# read in NNDSS/linelist data
# If this errors it may be missing a leading zero on the date.
df_state = read_in_cases(
case_file_date=data_date.strftime("%d%b%Y"),
apply_delay_at_read=True,
apply_inc_at_read=True,
)
df_Reff = df_Reff.merge(
df_state,
how="left",
left_on=["state", "date"],
right_on=["STATE", "date_inferred"],
) # how = left to use Reff days, NNDSS missing dates
# merge in the omicron stuff
df_Reff = df_Reff.merge(
df_Reff_omicron,
how="left",
left_on=["state", "date"],
right_on=["state", "date"],
)
df_Reff["rho_moving"] = df_Reff.groupby(["state"])["rho"].transform(
lambda x: x.rolling(7, 1).mean()
) # minimum number of 1
# some days have no cases, so need to fillna
df_Reff["rho_moving"] = df_Reff.rho_moving.fillna(method="bfill")
# counts are already aligned with infection date by subtracting a random incubation period
df_Reff["local"] = df_Reff.local.fillna(0)
df_Reff["imported"] = df_Reff.imported.fillna(0)
######### Read in Google mobility results #########
sys.path.insert(0, "../")
df_google = read_in_google(moving=True)
df = df_google.merge(
df_Reff[
[
"date",
"state",
"mean",
"lower",
"upper",
"top",
"bottom",
"std",
"mean_omicron",
"lower_omicron",
"upper_omicron",
"top_omicron",
"bottom_omicron",
"std_omicron",
"rho",
"rho_moving",
"local",
"imported",
]
],
on=["date", "state"],
how="inner",
)
# ACT and NT not in original estimates, need to extrapolated sorting keeps consistent
# with sort in data_by_state
# Note that as we now consider the third wave for ACT, we include it in the third
# wave fitting only!
states_to_fit_all_waves = sorted(
["NSW", "VIC", "QLD", "SA", "WA", "TAS", "ACT", "NT"]
)
first_states = sorted(["NSW", "VIC", "QLD", "SA", "WA", "TAS"])
fit_post_March = True
ban = "2020-03-20"
first_end_date = "2020-03-31"
# data for the first wave
first_date_range = {
"NSW": pd.date_range(start="2020-03-01", end=first_end_date).values,
"QLD": pd.date_range(start="2020-03-01", end=first_end_date).values,
"SA": pd.date_range(start="2020-03-01", end=first_end_date).values,
"TAS": pd.date_range(start="2020-03-01", end=first_end_date).values,
"VIC": pd.date_range(start="2020-03-01", end=first_end_date).values,
"WA": pd.date_range(start="2020-03-01", end=first_end_date).values,
}
# Second wave inputs
sec_states = sorted([
'NSW',
# 'VIC',
])
sec_start_date = "2020-06-01"
sec_end_date = "2021-01-19"
# choose dates for each state for sec wave
sec_date_range = {
"NSW": pd.date_range(start="2020-06-01", end="2021-01-19").values,
# "VIC": pd.date_range(start="2020-06-01", end="2020-10-28").values,
}
# Third wave inputs
third_states = sorted([
"NSW",
"VIC",
"ACT",
"QLD",
"SA",
"TAS",
# "NT",
"WA",
])
# Subtract the truncation days to avoid right truncation as we consider infection dates
# and not symptom onset dates
third_end_date = data_date - pd.Timedelta(days=truncation_days)
# choose dates for each state for third wave
# Note that as we now consider the third wave for ACT, we include it in
# the third wave fitting only!
third_date_range = {
"ACT": pd.date_range(start="2021-08-15", end=third_end_date).values,
"NSW": pd.date_range(start="2021-06-25", end=third_end_date).values,
# "NT": pd.date_range(start="2021-12-20", end=third_end_date).values,
"QLD": pd.date_range(start="2021-07-30", end=third_end_date).values,
"SA": pd.date_range(start="2021-12-10", end=third_end_date).values,
"TAS": pd.date_range(start="2021-12-20", end=third_end_date).values,
"VIC": pd.date_range(start="2021-07-10", end=third_end_date).values,
"WA": pd.date_range(start="2022-01-01", end=third_end_date).values,
}
fit_mask = df.state.isin(first_states)
if fit_post_March:
fit_mask = (fit_mask) & (df.date >= start_date)
fit_mask = (fit_mask) & (df.date <= first_end_date)
second_wave_mask = df.state.isin(sec_states)
second_wave_mask = (second_wave_mask) & (df.date >= sec_start_date)
second_wave_mask = (second_wave_mask) & (df.date <= sec_end_date)
# Add third wave stuff here
third_wave_mask = df.state.isin(third_states)
third_wave_mask = (third_wave_mask) & (df.date >= third_start_date)
third_wave_mask = (third_wave_mask) & (df.date <= third_end_date)
predictors = mov_values.copy()
# predictors.extend(['driving_7days','transit_7days','walking_7days','pc'])
# remove residential to see if it improves fit
# predictors.remove("residential_7days")
df["post_policy"] = (df.date >= ban).astype(int)
dfX = df.loc[fit_mask].sort_values("date")
df2X = df.loc[second_wave_mask].sort_values("date")
df3X = df.loc[third_wave_mask].sort_values("date")
dfX["is_first_wave"] = 0
for state in first_states:
dfX.loc[dfX.state == state, "is_first_wave"] = (
dfX.loc[dfX.state == state]
.date.isin(first_date_range[state])
.astype(int)
.values
)
df2X["is_sec_wave"] = 0
for state in sec_states:
df2X.loc[df2X.state == state, "is_sec_wave"] = (
df2X.loc[df2X.state == state]
.date.isin(sec_date_range[state])
.astype(int)
.values
)
# used to index what dates are also featured in omicron
omicron_date_range = pd.date_range(start=omicron_start_date, end=third_end_date)
df3X["is_third_wave"] = 0
for state in third_states:
df3X.loc[df3X.state == state, "is_third_wave"] = (
df3X.loc[df3X.state == state]
.date.isin(third_date_range[state])
.astype(int)
.values
)
# condition on being in third wave AND omicron
df3X.loc[df3X.state == state, "is_omicron_wave"] = (
(
df3X.loc[df3X.state == state].date.isin(omicron_date_range)
* df3X.loc[df3X.state == state].date.isin(third_date_range[state])
)
.astype(int)
.values
)
data_by_state = {}
sec_data_by_state = {}
third_data_by_state = {}
for value in ["mean", "std", "local", "imported"]:
data_by_state[value] = pd.pivot(
dfX[["state", value, "date"]], index="date", columns="state", values=value
).sort_index(axis="columns")
# account for dates pre pre second wave
if df2X.loc[df2X.state == sec_states[0]].shape[0] == 0:
print("making empty")
sec_data_by_state[value] = pd.DataFrame(columns=sec_states).astype(float)
else:
sec_data_by_state[value] = pd.pivot(
df2X[["state", value, "date"]],
index="date",
columns="state",
values=value,
).sort_index(axis="columns")
# account for dates pre pre third wave
if df3X.loc[df3X.state == third_states[0]].shape[0] == 0:
print("making empty")
third_data_by_state[value] = pd.DataFrame(columns=third_states).astype(
float
)
else:
third_data_by_state[value] = pd.pivot(
df3X[["state", value, "date"]],
index="date",
columns="state",
values=value,
).sort_index(axis="columns")
# now add in the summary stats for Omicron Reff
for value in ["mean_omicron", "std_omicron"]:
if df3X.loc[df3X.state == third_states[0]].shape[0] == 0:
print("making empty")
third_data_by_state[value] = pd.DataFrame(columns=third_states).astype(
float
)
else:
third_data_by_state[value] = pd.pivot(
df3X[["state", value, "date"]],
index="date",
columns="state",
values=value,
).sort_index(axis="columns")
# FIRST PHASE
mobility_by_state = []
mobility_std_by_state = []
count_by_state = []
respond_by_state = []
mask_wearing_count_by_state = []
mask_wearing_respond_by_state = []
include_in_first_wave = []
# filtering survey responses to dates before this wave fitting
survey_respond = survey_respond_base.loc[: dfX.date.values[-1]]
survey_counts = survey_counts_base.loc[: dfX.date.values[-1]]
mask_wearing_respond = mask_wearing_respond_base.loc[: dfX.date.values[-1]]
mask_wearing_counts = mask_wearing_counts_base.loc[: dfX.date.values[-1]]
for state in first_states:
mobility_by_state.append(dfX.loc[dfX.state == state, predictors].values / 100)
mobility_std_by_state.append(
dfX.loc[dfX.state == state, [val + "_std" for val in predictors]].values
/ 100
)
count_by_state.append(survey_counts.loc[start_date:first_end_date, state].values)
respond_by_state.append(survey_respond.loc[start_date:first_end_date, state].values)
mask_wearing_count_by_state.append(
mask_wearing_counts.loc[start_date:first_end_date, state].values
)
mask_wearing_respond_by_state.append(
mask_wearing_respond.loc[start_date:first_end_date, state].values
)
include_in_first_wave.append(
dfX.loc[dfX.state == state, "is_first_wave"].values
)
# SECOND PHASE
sec_mobility_by_state = []
sec_mobility_std_by_state = []
sec_count_by_state = []
sec_respond_by_state = []
sec_mask_wearing_count_by_state = []
sec_mask_wearing_respond_by_state = []
include_in_sec_wave = []
# filtering survey responses to dates before this wave fitting
survey_respond = survey_respond_base.loc[: df2X.date.values[-1]]
survey_counts = survey_counts_base.loc[: df2X.date.values[-1]]
mask_wearing_respond = mask_wearing_respond_base.loc[: df2X.date.values[-1]]
mask_wearing_counts = mask_wearing_counts_base.loc[: df2X.date.values[-1]]
for state in sec_states:
sec_mobility_by_state.append(
df2X.loc[df2X.state == state, predictors].values / 100
)
sec_mobility_std_by_state.append(
df2X.loc[df2X.state == state, [val + "_std" for val in predictors]].values
/ 100
)
sec_count_by_state.append(
survey_counts.loc[sec_start_date:sec_end_date, state].values
)
sec_respond_by_state.append(
survey_respond.loc[sec_start_date:sec_end_date, state].values
)
sec_mask_wearing_count_by_state.append(
mask_wearing_counts.loc[sec_start_date:sec_end_date, state].values
)
sec_mask_wearing_respond_by_state.append(
mask_wearing_respond.loc[sec_start_date:sec_end_date, state].values
)
include_in_sec_wave.append(df2X.loc[df2X.state == state, "is_sec_wave"].values)
# THIRD WAVE
third_mobility_by_state = []
third_mobility_std_by_state = []
third_count_by_state = []
third_respond_by_state = []
third_mask_wearing_count_by_state = []
third_mask_wearing_respond_by_state = []
include_in_third_wave = []
include_in_omicron_wave = []
# filtering survey responses to dates before this wave fitting
survey_respond = survey_respond_base.loc[: df3X.date.values[-1]]
survey_counts = survey_counts_base.loc[: df3X.date.values[-1]]
mask_wearing_respond = mask_wearing_respond_base.loc[: df3X.date.values[-1]]
mask_wearing_counts = mask_wearing_counts_base.loc[: df3X.date.values[-1]]
for state in third_states:
third_mobility_by_state.append(
df3X.loc[df3X.state == state, predictors].values / 100
)
third_mobility_std_by_state.append(
df3X.loc[df3X.state == state, [val + "_std" for val in predictors]].values
/ 100
)
third_count_by_state.append(
survey_counts.loc[third_start_date:third_end_date, state].values
)
third_respond_by_state.append(
survey_respond.loc[third_start_date:third_end_date, state].values
)
third_mask_wearing_count_by_state.append(
mask_wearing_counts.loc[third_start_date:third_end_date, state].values
)
third_mask_wearing_respond_by_state.append(
mask_wearing_respond.loc[third_start_date:third_end_date, state].values
)
include_in_third_wave.append(
df3X.loc[df3X.state == state, "is_third_wave"].values
)
include_in_omicron_wave.append(
df3X.loc[df3X.state == state, "is_omicron_wave"].values
)
# Make state by state arrays
state_index = {state: i for i, state in enumerate(states_to_fit_all_waves)}
# get pop size array
pop_size_array = []
for s in states_to_fit_all_waves:
pop_size_array.append(pop_sizes[s])
# First phase
# rho calculated at data entry
if isinstance(df_state.index, pd.MultiIndex):
df_state = df_state.reset_index()
states = sorted(["NSW", "QLD", "VIC", "TAS", "SA", "WA", "ACT", "NT"])
fig, ax = plt.subplots(figsize=(24, 9), ncols=len(states), sharey=True)
states_to_fitd = {state: i + 1 for i, state in enumerate(first_states)}
for i, state in enumerate(states):
if state in first_states:
dates = df_Reff.loc[
(df_Reff.date >= start_date)
& (df_Reff.state == state)
& (df_Reff.date <= first_end_date)
].date
rho_samples = samples_mov_gamma[
[
"brho[" + str(j + 1) + "," + str(states_to_fitd[state]) + "]"
for j in range(dfX.loc[dfX.state == first_states[0]].shape[0])
]
]
ax[i].plot(dates, rho_samples.median(), label="fit", color="C0")
ax[i].fill_between(
dates,
rho_samples.quantile(0.25),
rho_samples.quantile(0.75),
color="C0",
alpha=0.4,
)
ax[i].fill_between(
dates,
rho_samples.quantile(0.05),
rho_samples.quantile(0.95),
color="C0",
alpha=0.4,
)
else:
sns.lineplot(
x="date_inferred",
y="rho",
data=df_state.loc[
(df_state.date_inferred >= start_date)
& (df_state.STATE == state)
& (df_state.date_inferred <= first_end_date)
],
ax=ax[i],
color="C1",
label="data",
)
sns.lineplot(
x="date",
y="rho",
data=df_Reff.loc[
(df_Reff.date >= start_date)
& (df_Reff.state == state)
& (df_Reff.date <= first_end_date)
],
ax=ax[i],
color="C1",
label="data",
)
sns.lineplot(
x="date",
y="rho_moving",
data=df_Reff.loc[
(df_Reff.date >= start_date)
& (df_Reff.state == state)
& (df_Reff.date <= first_end_date)
],
ax=ax[i],
color="C2",
label="moving",
)
dates = dfX.loc[dfX.state == first_states[0]].date
ax[i].tick_params("x", rotation=90)
ax[i].xaxis.set_major_locator(plt.MaxNLocator(4))
ax[i].set_title(state)
ax[0].set_ylabel("Proportion of imported cases")
plt.legend()
plt.savefig(
figs_dir + data_date.strftime("%Y-%m-%d") + "rho_first_phase.png", dpi=144
)
# Second phase
if df2X.shape[0] > 0:
fig, ax = plt.subplots(
figsize=(24, 9), ncols=len(sec_states), sharey=True, squeeze=False
)
states_to_fitd = {state: i + 1 for i, state in enumerate(sec_states)}
pos = 0
for i, state in enumerate(sec_states):
# Google mobility only up to a certain date, so take only up to that value
dates = df2X.loc[
(df2X.state == state) & (df2X.is_sec_wave == 1)
].date.values
rho_samples = samples_mov_gamma[
[
"brho_sec[" + str(j + 1) + "]"
for j in range(
pos, pos + df2X.loc[df2X.state == state].is_sec_wave.sum()
)
]
]
pos = pos + df2X.loc[df2X.state == state].is_sec_wave.sum()
ax[0, i].plot(dates, rho_samples.median(), label="fit", color="C0")
ax[0, i].fill_between(
dates,
rho_samples.quantile(0.25),
rho_samples.quantile(0.75),
color="C0",
alpha=0.4,
)
ax[0, i].fill_between(
dates,
rho_samples.quantile(0.05),
rho_samples.quantile(0.95),
color="C0",
alpha=0.4,
)
sns.lineplot(
x="date_inferred",
y="rho",
data=df_state.loc[
(df_state.date_inferred >= sec_start_date)
& (df_state.STATE == state)
& (df_state.date_inferred <= sec_end_date)
],
ax=ax[0, i],
color="C1",
label="data",
)
sns.lineplot(
x="date",
y="rho",
data=df_Reff.loc[
(df_Reff.date >= sec_start_date)
& (df_Reff.state == state)
& (df_Reff.date <= sec_end_date)
],
ax=ax[0, i],
color="C1",
label="data",
)
sns.lineplot(
x="date",
y="rho_moving",
data=df_Reff.loc[
(df_Reff.date >= sec_start_date)
& (df_Reff.state == state)
& (df_Reff.date <= sec_end_date)
],
ax=ax[0, i],
color="C2",
label="moving",
)
dates = dfX.loc[dfX.state == sec_states[0]].date
ax[0, i].tick_params("x", rotation=90)
ax[0, i].xaxis.set_major_locator(plt.MaxNLocator(4))
ax[0, i].set_title(state)
ax[0, 0].set_ylabel("Proportion of imported cases")
plt.legend()
plt.savefig(
figs_dir + data_date.strftime("%Y-%m-%d") + "rho_sec_phase.png", dpi=144
)
df_rho_third_all_states = pd.DataFrame()
df_rho_third_tmp = pd.DataFrame()
# Third phase
if df3X.shape[0] > 0:
fig, ax = plt.subplots(
figsize=(9, 24), nrows=len(third_states), sharex=True, squeeze=False
)
states_to_fitd = {state: i + 1 for i, state in enumerate(third_states)}
pos = 0
for i, state in enumerate(third_states):
# Google mobility only up to a certain date, so take only up to that value
dates = df3X.loc[
(df3X.state == state) & (df3X.is_third_wave == 1)
].date.values
rho_samples = samples_mov_gamma[
[
"brho_third[" + str(j + 1) + "]"
for j in range(
pos, pos + df3X.loc[df3X.state == state].is_third_wave.sum()
)
]
]
pos = pos + df3X.loc[df3X.state == state].is_third_wave.sum()
df_rho_third_tmp = rho_samples.T
df_rho_third_tmp["date"] = dates
df_rho_third_tmp["state"] = state
df_rho_third_all_states = pd.concat([df_rho_third_all_states, df_rho_third_tmp])
ax[i, 0].plot(dates, rho_samples.median(), label="fit", color="C0")
ax[i, 0].fill_between(
dates,
rho_samples.quantile(0.25),
rho_samples.quantile(0.75),
color="C0",
alpha=0.4,
)
ax[i, 0].fill_between(
dates,
rho_samples.quantile(0.05),
rho_samples.quantile(0.95),
color="C0",
alpha=0.4,
)
sns.lineplot(
x="date_inferred",
y="rho",
data=df_state.loc[
(df_state.date_inferred >= third_start_date)
& (df_state.STATE == state)
& (df_state.date_inferred <= third_end_date)
],
ax=ax[i, 0],
color="C1",
label="data",
)
sns.lineplot(
x="date",
y="rho",
data=df_Reff.loc[
(df_Reff.date >= third_start_date)
& (df_Reff.state == state)
& (df_Reff.date <= third_end_date)
],
ax=ax[i, 0],
color="C1",
label="data",
)
sns.lineplot(
x="date",
y="rho_moving",
data=df_Reff.loc[
(df_Reff.date >= third_start_date)
& (df_Reff.state == state)
& (df_Reff.date <= third_end_date)
],
ax=ax[i, 0],
color="C2",
label="moving",
)
dates = dfX.loc[dfX.state == third_states[0]].date
ax[i, 0].tick_params("x", rotation=90)
ax[i, 0].xaxis.set_major_locator(plt.MaxNLocator(4))
ax[i, 0].set_title(state)
ax[i, 0].set_ylabel("Proportion of imported cases")
plt.legend()
plt.savefig(
figs_dir + data_date.strftime("%Y-%m-%d") + "rho_third_phase.png", dpi=144,
)
df_rho_third_all_states.to_csv(
"results/"
+ data_date.strftime("%Y-%m-%d")
+ "/rho_samples"
+ data_date.strftime("%Y-%m-%d")
+ ".csv"
)
# plotting
fig, ax = plt.subplots(figsize=(12, 9))
# sample from the priors for RL and RI
samples_mov_gamma["R_L_prior"] = np.random.gamma(
1.8 * 1.8 / 0.05, 0.05 / 1.8, size=samples_mov_gamma.shape[0]
)
samples_mov_gamma["R_I_prior"] = np.random.gamma(
0.5 ** 2 / 0.2, 0.2 / 0.5, size=samples_mov_gamma.shape[0]
)
samples_mov_gamma["R_L_national"] = np.random.gamma(
samples_mov_gamma.R_L.values ** 2 / samples_mov_gamma.sig.values,
samples_mov_gamma.sig.values / samples_mov_gamma.R_L.values,
)
sns.violinplot(
x="variable",
y="value",
data=pd.melt(
samples_mov_gamma[[
col for col in samples_mov_gamma
if "R" in col and col not in ("R_I0", "R_I0_omicron")
]]
),
ax=ax,
cut=0,
)
ax.set_yticks(
[1],
minor=True,
)
ax.set_yticks([0, 2, 3], minor=False)
ax.set_yticklabels([0, 2, 3], minor=False)
ax.set_ylim((0, 3))
# state labels in alphabetical
ax.set_xticklabels(
[
"R_I",
"R_I_omicron",
"R_L0 mean",
"R_L0 ACT",
"R_L0 NSW",
"R_L0 NT",
"R_L0 QLD",
"R_L0 SA",
"R_L0 TAS",
"R_L0 VIC",
"R_L0 WA",
"R_L0 prior",
"R_I prior",
"R_L0 national",
]
)
ax.set_xlabel("")
ax.set_ylabel("Effective reproduction number")
ax.tick_params("x", rotation=90)
ax.yaxis.grid(which="minor", linestyle="--", color="black", linewidth=2)
plt.tight_layout()
plt.savefig(figs_dir + data_date.strftime("%Y-%m-%d") + "R_priors.png", dpi=144)
# Making a new figure that doesn't include the priors
fig, ax = plt.subplots(figsize=(12, 9))
small_plot_cols = ["R_Li[" + str(i) + "]" for i in range(1, 9)] + ["R_I"]
sns.violinplot(
x="variable",
y="value",
data=pd.melt(samples_mov_gamma[small_plot_cols]),
ax=ax,
cut=0,
)
ax.set_yticks(
[1],
minor=True,
)
ax.set_yticks([0, 2, 3], minor=False)
ax.set_yticklabels([0, 2, 3], minor=False)
ax.set_ylim((0, 3))
# state labels in alphabetical
ax.set_xticklabels(
[
"$R_L0$ ACT",
"$R_L0$ NSW",
"$R_L0$ NT",
"$R_L0$ QLD",
"$R_L0$ SA",
"$R_L0$ TAS",
"$R_L0$ VIC",
"$R_L0$ WA",
"$R_I$",
]
)
ax.tick_params("x", rotation=90)
ax.set_xlabel("")
ax.set_ylabel("Effective reproduction number")
ax.yaxis.grid(which="minor", linestyle="--", color="black", linewidth=2)
plt.tight_layout()
plt.savefig(
figs_dir + data_date.strftime("%Y-%m-%d") + "R_priors_(without_priors).png",
dpi=288,
)
# Making a new figure that doesn't include the priors
fig, ax = plt.subplots(figsize=(12, 9))
samples_mov_gamma["voc_effect_third_prior"] = np.random.gamma(
1.5 * 1.5 / 0.05, 0.05 / 1.5, size=samples_mov_gamma.shape[0]
)
small_plot_cols = [
"voc_effect_third_prior",
"voc_effect_delta",
"voc_effect_omicron",
]
sns.violinplot(
x="variable",
y="value",
data=pd.melt(samples_mov_gamma[small_plot_cols]),
ax=ax,
cut=0,
)
ax.set_yticks([1], minor=True)
# ax.set_yticks([0, 0.5, 1, 1.5, 2, 2.5, 3], minor=False)
# ax.set_yticklabels([0, 0.5, 1, 1.5, 2, 2.5, 3], minor=False)
# ax.set_ylim((0, 1))
# state labels in alphabetical
ax.set_xticklabels(["VoC (prior)", "VoC (Delta)", "VoC (Omicron)"])
# ax.tick_params('x', rotation=90)
ax.set_xlabel("")
ax.set_ylabel("value")
ax.yaxis.grid(which="minor", linestyle="--", color="black", linewidth=2)
plt.tight_layout()
plt.savefig(
figs_dir + data_date.strftime("%Y-%m-%d") + "voc_effect_posteriors.png",
dpi=288,
)
posterior = samples_mov_gamma[["bet[" + str(i + 1) + "]" for i in range(len(predictors))]]
split = True
md = "power" # samples_mov_gamma.md.values
posterior.columns = [val for val in predictors]
long = pd.melt(posterior)
fig, ax2 = plt.subplots(figsize=(12, 9))
ax2 = sns.violinplot(x="variable", y="value", data=long, ax=ax2, color="C0")
ax2.plot([0] * len(predictors), linestyle="dashed", alpha=0.6, color="grey")
ax2.tick_params(axis="x", rotation=90)
ax2.set_title("Coefficients of mobility indices")
ax2.set_xlabel("Social mobility index")
ax2.set_xticklabels([var[:-6] for var in predictors])
ax2.set_xticklabels(
[
"Retail and Recreation",
"Grocery and Pharmacy",
"Parks",
"Transit Stations",
"Workplaces",
"Residential",
]
)
ax2.tick_params("x", rotation=15)
plt.tight_layout()
plt.savefig(
figs_dir + data_date.strftime("%Y-%m-%d") + "mobility_posteriors.png",
dpi=288,
)
# plot the TP's
RL_by_state = {
state: samples_mov_gamma["R_Li[" + str(i + 1) + "]"].values
for state, i in state_index.items()
}
ax3 = predict_plot(
samples_mov_gamma,
df.loc[(df.date >= start_date) & (df.date <= first_end_date)],
moving=True,
grocery=True,
rho=first_states,
)
for ax in ax3:
for a in ax:
a.set_ylim((0, 2.5))
a.set_xlim((pd.to_datetime(start_date), pd.to_datetime(first_end_date)))
plt.savefig(
figs_dir + data_date.strftime("%Y-%m-%d") + "Reff_first_phase.png",
dpi=144,
)
if df2X.shape[0] > 0:
df["is_sec_wave"] = 0
for state in sec_states:
df.loc[df.state == state, "is_sec_wave"] = (
df.loc[df.state == state]
.date.isin(sec_date_range[state])
.astype(int)
.values
)
# plot only if there is second phase data - have to have second_phase=True
ax4 = predict_plot(
samples_mov_gamma,
df.loc[(df.date >= sec_start_date) & (df.date <= sec_end_date)],
moving=True,
grocery=True,
rho=sec_states,
second_phase=True,
)
for ax in ax4:
for a in ax:
a.set_ylim((0, 2.5))
plt.savefig(
figs_dir + data_date.strftime("%Y-%m-%d") + "Reff_sec_phase.png", dpi=144
)
# remove plots from memory
fig.clear()
plt.close(fig)
# Load in vaccination data by state and date
vaccination_by_state = pd.read_csv(
"data/vaccine_effect_timeseries_" + data_date.strftime("%Y-%m-%d") + ".csv",
parse_dates=["date"],
)
# there are a couple NA's early on in the time series but is likely due to slightly
# different start dates
vaccination_by_state.fillna(1, inplace=True)
# we take the whole set of estimates up to the end of the forecast period
# (with 10 days padding which won't be used in the forecast)
vaccination_by_state = vaccination_by_state[
(
vaccination_by_state.date
>= pd.to_datetime(third_start_date) - timedelta(days=1)
)
& (
vaccination_by_state.date
<= pd.to_datetime(data_date) + timedelta(days=num_forecast_days + 10)
)
]
vaccination_by_state_delta = vaccination_by_state.loc[
vaccination_by_state["variant"] == "Delta"
][["state", "date", "effect"]]
vaccination_by_state_omicron = vaccination_by_state.loc[
vaccination_by_state["variant"] == "Omicron"
][["state", "date", "effect"]]
vaccination_by_state_delta = vaccination_by_state_delta.pivot(
index="state", columns="date", values="effect"
) # Convert to matrix form
vaccination_by_state_omicron = vaccination_by_state_omicron.pivot(
index="state", columns="date", values="effect"
) # Convert to matrix form
# If we are missing recent vaccination data, fill it in with the most recent available data.
latest_vacc_data = vaccination_by_state_omicron.columns[-1]
if latest_vacc_data < pd.to_datetime(third_end_date):
vaccination_by_state_delta = pd.concat(
[vaccination_by_state_delta]
+ [
pd.Series(vaccination_by_state_delta[latest_vacc_data], name=day)
for day in pd.date_range(start=latest_vacc_data, end=third_end_date)
],
axis=1,
)
vaccination_by_state_omicron = pd.concat(
[vaccination_by_state_omicron]
+ [
pd.Series(vaccination_by_state_omicron[latest_vacc_data], name=day)
for day in pd.date_range(start=latest_vacc_data, end=third_end_date)
],
axis=1,
)
# get the dates for vaccination
dates = vaccination_by_state_delta.columns
third_days = {k: v.shape[0] for (k, v) in third_date_range.items()}
third_days_cumulative = np.append([0], np.cumsum([v for v in third_days.values()]))
delta_ve_idx_ranges = {
k: range(third_days_cumulative[i], third_days_cumulative[i + 1])
for (i, k) in enumerate(third_days.keys())
}
third_days_tot = sum(v for v in third_days.values())
# construct a range of dates for omicron which starts at the maximum of the start date
# for that state or the Omicron start date
third_omicron_date_range = {
k: pd.date_range(
start=max(v[0], pd.to_datetime(omicron_start_date)), end=v[-1]
).values
for (k, v) in third_date_range.items()
}
third_omicron_days = {k: v.shape[0] for (k, v) in third_omicron_date_range.items()}
third_omicron_days_cumulative = np.append(
[0], np.cumsum([v for v in third_omicron_days.values()])
)
omicron_ve_idx_ranges = {
k: range(third_omicron_days_cumulative[i], third_omicron_days_cumulative[i + 1])
for (i, k) in enumerate(third_omicron_days.keys())
}
third_omicron_days_tot = sum(v for v in third_omicron_days.values())
# extrac the samples
delta_ve_samples = samples_mov_gamma[
["ve_delta[" + str(j + 1) + "]" for j in range(third_days_tot)]
].T
omicron_ve_samples = samples_mov_gamma[
["ve_omicron[" + str(j + 1) + "]" for j in range(third_omicron_days_tot)]
].T
# now we plot and save the adjusted ve time series to be read in by the forecasting
plot_adjusted_ve(
data_date,
samples_mov_gamma,
states,
vaccination_by_state_delta,
third_states,
third_date_range,
delta_ve_samples,
delta_ve_idx_ranges,
figs_dir,
"delta",
)
plot_adjusted_ve(
data_date,
samples_mov_gamma,
states,
vaccination_by_state_omicron,
third_states,
third_omicron_date_range,
omicron_ve_samples,
omicron_ve_idx_ranges,
figs_dir,
"omicron",
)
if df3X.shape[0] > 0:
df["is_third_wave"] = 0
for state in third_states:
df.loc[df.state == state, "is_third_wave"] = (
df.loc[df.state == state]
.date.isin(third_date_range[state])
.astype(int)
.values
)
# plot only if there is third phase data - have to have third_phase=True
ax4 = macro_factor_plots(
samples_mov_gamma,
df.loc[(df.date >= third_start_date) & (df.date <= third_end_date)],
) # by states....
for ax in ax4:
for a in ax:
a.set_ylim((0, 1.25))
# a.set_xlim((start_date,end_date))
plt.savefig(
figs_dir + data_date.strftime("%Y-%m-%d") + "macro_factor_comp.png",
dpi=144,
)
# remove plots from memory
fig.clear()
plt.close(fig)
df["is_third_wave"] = 0
for state in third_states:
df.loc[df.state == state, "is_third_wave"] = (
df.loc[df.state == state]
.date.isin(third_date_range[state])
.astype(int)
.values
)
# plot only if there is third phase data - have to have third_phase=True
ax4 = predict_plot(
samples_mov_gamma,
df.loc[(df.date >= third_start_date) & (df.date <= third_end_date)],
moving=True,
grocery=True,
rho=third_states,
third_phase=True,
) # by states....
for ax in ax4:
for a in ax:
a.set_ylim((0, 2.5))
# a.set_xlim((start_date,end_date))
plt.savefig(
figs_dir + data_date.strftime("%Y-%m-%d") + "Reff_third_phase_combined.png",
dpi=144,
)
# remove plots from memory
fig.clear()
plt.close(fig)
# plot only if there is third phase data - have to have third_phase=True
ax4 = predict_plot(
samples_mov_gamma,
df.loc[(df.date >= third_start_date) & (df.date <= third_end_date)],
moving=True,
grocery=True,
rho=third_states,
third_phase=True,
third_plot_type="delta"
) # by states....
for ax in ax4:
for a in ax:
a.set_ylim((0, 2.5))
# a.set_xlim((start_date,end_date))
plt.savefig(
figs_dir + data_date.strftime("%Y-%m-%d") + "Reff_third_phase_delta.png",
dpi=144,
)
# remove plots from memory
fig.clear()
plt.close(fig)
for param in ("micro", "macro", "susceptibility"):
# plot only if there is third phase data - have to have third_phase=True
ax4 = predict_multiplier_plot(
samples_mov_gamma,
df.loc[(df.date >= third_start_date) & (df.date <= third_end_date)],
param=param,
) # by states....
for ax in ax4:
for a in ax:
if param == "macro":
a.set_ylim((0, 1.25))
else:
a.set_ylim((0, 1.1))
plt.savefig(
figs_dir + data_date.strftime("%Y-%m-%d") + param + "_factor.png",
dpi=144,
)
# remove plots from memory
fig.clear()
plt.close(fig)
if df3X.shape[0] > 0:
df["is_omicron_wave"] = 0
for state in third_states:
df.loc[df.state == state, "is_omicron_wave"] = (
df.loc[df.state == state]
.date.isin(third_omicron_date_range[state])
.astype(int)
.values
)
# plot only if there is third phase data - have to have third_phase=True
ax4 = predict_plot(
samples_mov_gamma,
df.loc[(df.date >= omicron_start_date) & (df.date <= third_end_date)],
moving=True,
grocery=True,
rho=third_states,
third_phase=True,
third_plot_type="omicron"
) # by states....
for ax in ax4:
for a in ax:
a.set_ylim((0, 2.5))
# a.set_xlim((start_date,end_date))
plt.savefig(
figs_dir + data_date.strftime("%Y-%m-%d") + "Reff_third_phase_omicron.png",
dpi=144,
)
# remove plots from memory
fig.clear()
plt.close(fig)
# plot the omicron proportion
# create a range of dates from the beginning of Omicron to use for producing the Omicron
# proportion
omicron_date_range = pd.date_range(
omicron_start_date, pd.to_datetime(data_date) + timedelta(45)
)
prop_omicron_to_delta = np.array([])
# create array of times to plot against
t = np.tile(range(len(omicron_date_range)), (samples_mov_gamma.shape[0], 1)).T
fig, ax = plt.subplots(figsize=(15, 12), nrows=4, ncols=2, sharex=True, sharey=True)
for (i, state) in enumerate(third_states):
m0 = np.tile(samples_mov_gamma.loc[:, "m0[" + str(i + 1) + "]"], (len(omicron_date_range), 1))
m1 = np.tile(samples_mov_gamma.loc[:, "m1[" + str(i + 1) + "]"], (len(omicron_date_range), 1))
# m1 = 1.0
r = np.tile(samples_mov_gamma.loc[:, "r[" + str(i + 1) + "]"], (len(omicron_date_range), 1))
tau = np.tile(samples_mov_gamma.loc[:, "tau[" + str(i + 1) + "]"] , (len(omicron_date_range), 1))
omicron_start_date_tmp = max(
pd.to_datetime(omicron_start_date), third_date_range[state][0]
)
omicron_date_range_tmp = pd.date_range(
omicron_start_date_tmp, third_date_range[state][-1]
)
# if state in {"TAS", "WA", "NT"}:
# prop_omicron_to_delta_tmp = m1
# else:
# prop_omicron_to_delta_tmp = m0 + (m1 - m0) / (1 + np.exp(-r * (t - tau)))
prop_omicron_to_delta_tmp = m0 + (m1 - m0) / (1 + np.exp(-r * (t - tau)))
ax[i // 2, i % 2].plot(
omicron_date_range,
np.median(prop_omicron_to_delta_tmp, axis=1),
)
ax[i // 2, i % 2].fill_between(
omicron_date_range,
np.quantile(prop_omicron_to_delta_tmp, 0.05, axis=1),
np.quantile(prop_omicron_to_delta_tmp, 0.95, axis=1),
alpha=0.2,
)
ax[i // 2, i % 2].axvline(
omicron_date_range_tmp[0], ls="--", c="k", lw=1
)
ax[i // 2, i % 2].axvline(
omicron_date_range_tmp[-1], ls="--", c="k", lw=1
)
ax[i // 2, i % 2].set_title(state)
ax[i // 2, i % 2].xaxis.set_major_locator(plt.MaxNLocator(3))
ax[i // 2, 0].set_ylabel("Proportion of Omicron\ncases to Delta")
if len(prop_omicron_to_delta) == 0:
prop_omicron_to_delta = prop_omicron_to_delta_tmp[:, -len(omicron_date_range_tmp):]
else:
prop_omicron_to_delta = np.hstack(
(
prop_omicron_to_delta,
prop_omicron_to_delta_tmp[:, -len(omicron_date_range_tmp):],
)
)
fig.tight_layout()
plt.savefig(
figs_dir + data_date.strftime("%Y-%m-%d") + "omicron_proportion.png", dpi=144
)
# need to rotate to put into a good format
prop_omicron_to_delta = prop_omicron_to_delta.T
df_prop_omicron_to_delta = pd.DataFrame(
prop_omicron_to_delta,
columns=[
"prop_omicron_to_delta." + str(i+1) for i in range(prop_omicron_to_delta.shape[1])
]
)
df_prop_omicron_to_delta.to_csv(
"results/"
+ data_date.strftime("%Y-%m-%d")
+ "/prop_omicron_to_delta"
+ data_date.strftime("%Y-%m-%d")
+ ".csv"
)
# saving the final processed posterior samples to h5 for generate_RL_forecasts.py
var_to_csv = predictors
samples_mov_gamma[predictors] = samples_mov_gamma[
["bet[" + str(i + 1) + "]" for i in range(len(predictors))]
]
# var_to_csv = [
# "R_I",
# "R_I_omicron",
# "R_L",
# "sig",
# "theta_masks",
# "theta_md",
# "voc_effect_alpha",
# "voc_effect_delta",
# "voc_effect_omicron",
# "sus_dep_factor",
# ]
var_to_csv = [
"R_I",
"R_I_omicron",
"R_L",
"sig",
"theta_masks",
"theta_md",
"voc_effect_alpha",
"voc_effect_delta",
"voc_effect_omicron",
]
var_to_csv = var_to_csv + [col for col in samples_mov_gamma if "phi" in col]
var_to_csv = (
var_to_csv
+ predictors
+ ["R_Li[" + str(i + 1) + "]" for i in range(len(states_to_fit_all_waves))]
)
var_to_csv = var_to_csv + ["ve_delta[" + str(j + 1) + "]" for j in range(third_days_tot)]
var_to_csv = var_to_csv + [
"ve_omicron[" + str(j + 1) + "]" for j in range(third_omicron_days_tot)
]
var_to_csv = var_to_csv + ["r[" + str(j + 1) + "]" for j in range(len(third_states))]
var_to_csv = var_to_csv + ["tau[" + str(j + 1) + "]" for j in range(len(third_states))]
var_to_csv = var_to_csv + ["m0[" + str(j + 1) + "]" for j in range(len(third_states))]
var_to_csv = var_to_csv + ["m1[" + str(j + 1) + "]" for j in range(len(third_states))]
# save the posterior
samples_mov_gamma[var_to_csv].to_hdf(
"results/"
+ data_date.strftime("%Y-%m-%d")
+ "/soc_mob_posterior"
+ data_date.strftime("%Y-%m-%d")
+ ".h5",
key="samples",
)
return None
def main(data_date, run_flag=0):
"""
Runs the stan model in parts to cut down on memory. The run_flag enables us to run components
of the model as required and has the following settings:
run_flag=0 (default) : Run full inference and plotting procedures.
run_flag=1 : Generate the data, save it.
run_flag=2 : Using the data from 1, run the inference.
run_flag=3 : Run plotting methods.
"""
if run_flag in (0, 1):
get_data_for_posterior(data_date=data_date)
if run_flag in (0, 2):
num_chains = 4
num_warmup_samples = 500
num_samples = 1000
max_treedepth = 12
run_stan(
data_date=data_date,
num_chains=num_chains,
num_samples=num_samples,
num_warmup_samples=num_warmup_samples,
max_treedepth=max_treedepth,
)
if run_flag in (0, 3):
# remove the susceptibility depletion term from Reff
for strain in ("Delta", "Omicron"):
# remove_sus_from_Reff(strain=strain, data_date=data_date)
remove_sus_with_waning_from_Reff(strain=strain, data_date=data_date)
plot_and_save_posterior_samples(data_date=data_date)
return None
if __name__ == "__main__":
"""
If we are running the script here (which is always) then this ensures things run appropriately.
"""
data_date = argv[1]
try:
run_flag = int(argv[2])
except:
run_flag = 0
main(data_date, run_flag=run_flag) | 35.327876 | 109 | 0.591306 | 11,138 | 84,151 | 4.188454 | 0.062489 | 0.027909 | 0.021221 | 0.013033 | 0.83548 | 0.799404 | 0.769287 | 0.738119 | 0.717541 | 0.690639 | 0 | 0.021911 | 0.285736 | 84,151 | 2,382 | 110 | 35.327876 | 0.754234 | 0.128162 | 0 | 0.604715 | 0 | 0 | 0.10199 | 0.014252 | 0 | 0 | 0 | 0 | 0 | 1 | 0.002741 | false | 0 | 0.016447 | 0 | 0.02193 | 0.010965 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
62880714361c34467c056ad673a30558da4ae482 | 96 | py | Python | venv/lib/python3.8/site-packages/cachecontrol/__init__.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/cachecontrol/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/cachecontrol/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/d6/3f/d8/41f8e68a2c7a632a2e2eb7ed0ba37392c026f1ef311928cc28c44f2243 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 0 | 96 | 1 | 96 | 96 | 0.479167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
655f9d3c36e18140819f0acac24e55a30430f2b6 | 165 | py | Python | ramda/split_when_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 56 | 2018-08-06T08:44:58.000Z | 2022-03-17T09:49:03.000Z | ramda/split_when_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 28 | 2019-06-17T11:09:52.000Z | 2022-02-18T16:59:21.000Z | ramda/split_when_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 5 | 2019-09-18T09:24:38.000Z | 2021-07-21T08:40:23.000Z | from ramda import *
from ramda.private.asserts import *
def split_when_test():
assert_equal(split_when(equals(2), [1, 2, 3, 1, 2, 3]), [[1], [2, 3, 1, 2, 3]])
| 23.571429 | 83 | 0.624242 | 30 | 165 | 3.3 | 0.5 | 0.080808 | 0.121212 | 0.121212 | 0.121212 | 0.121212 | 0.121212 | 0.121212 | 0.121212 | 0 | 0 | 0.095588 | 0.175758 | 165 | 6 | 84 | 27.5 | 0.632353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6566a779dee80f7664ed5cd546eb5b6682ee0a6f | 174 | py | Python | docker_simple_backup/__main__.py | quanturium/docker-simple-backup | 95cfcb2d1f74766ef204fc3bc7820305bfb3f57b | [
"MIT"
] | null | null | null | docker_simple_backup/__main__.py | quanturium/docker-simple-backup | 95cfcb2d1f74766ef204fc3bc7820305bfb3f57b | [
"MIT"
] | null | null | null | docker_simple_backup/__main__.py | quanturium/docker-simple-backup | 95cfcb2d1f74766ef204fc3bc7820305bfb3f57b | [
"MIT"
] | null | null | null | """
Allow docker_simple_backup to be executable through `python -m docker_simple_backup`
"""
from docker_simple_backup.run import main
if __name__ == "__main__":
main()
| 21.75 | 84 | 0.758621 | 24 | 174 | 4.916667 | 0.666667 | 0.305085 | 0.457627 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.143678 | 174 | 7 | 85 | 24.857143 | 0.791946 | 0.482759 | 0 | 0 | 0 | 0 | 0.097561 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
65bdd4cfa9d19005a7bb73e75d43ddf4abf7e090 | 42 | py | Python | boot.py | achomgbah/iot-device | f4e298c22ccdb9e265134f963a74b6110c807bd8 | [
"Apache-2.0"
] | null | null | null | boot.py | achomgbah/iot-device | f4e298c22ccdb9e265134f963a74b6110c807bd8 | [
"Apache-2.0"
] | null | null | null | boot.py | achomgbah/iot-device | f4e298c22ccdb9e265134f963a74b6110c807bd8 | [
"Apache-2.0"
] | null | null | null | import wifi_connect
wifi_connect.connect() | 21 | 22 | 0.880952 | 6 | 42 | 5.833333 | 0.5 | 0.628571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 42 | 2 | 22 | 21 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
65c9de6cb38259061443db300a977d10e60f20d2 | 32,874 | py | Python | dynamics/refine_prediction.py | dingmyu/VRDP | 34c15866708f062a099b8b2cf1175adc9bae69a3 | [
"MIT"
] | 31 | 2021-10-30T01:57:11.000Z | 2022-03-21T21:34:12.000Z | dynamics/refine_prediction.py | dingmyu/VRDP | 34c15866708f062a099b8b2cf1175adc9bae69a3 | [
"MIT"
] | 2 | 2022-01-05T07:09:43.000Z | 2022-01-06T10:58:12.000Z | dynamics/refine_prediction.py | dingmyu/VRDP | 34c15866708f062a099b8b2cf1175adc9bae69a3 | [
"MIT"
] | 5 | 2021-12-04T16:07:41.000Z | 2022-03-20T18:43:19.000Z | # -*- coding: utf-8 -*-
# Author: Mingyu Ding
# Time: 1/4/2021 12:44 PM
# Copyright 2019. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import sys
import time
import json
import numpy as np
import torch
from LBFGS import FullBatchLBFGS
def get_2d_coor(x3d, y3d, z3d=0.2):
cam_mat = np.array(((-207.8461456298828, 525.0000610351562, -120.00001525878906, 1200.0003662109375),
(123.93595886230469, 1.832598354667425e-05, -534.663330078125, 799.9999389648438),
(-0.866025447845459, -3.650024282819686e-08, -0.4999999701976776, 5.000000476837158),
(0, 0, 0, 1)))
pos_3d = np.array([[x3d], [y3d], [z3d], [1.0]], dtype=np.float32)
uv = cam_mat[:3].dot(pos_3d)
pos_2d = uv[:-1] / uv[-1]
return pos_2d
for process_index in range(int(sys.argv[1]), int(sys.argv[2])):
object_dict = json.load(open(f'../data/object_dicts_with_physics/objects_{process_index:05d}.json'))
output_dict = json.load(open(f'../data/object_simulated/sim_{process_index:05d}.json'))
step_88 = output_dict['step_88']
print(f'===============start processing {process_index}==================')
device = 'cpu'
n_balls = len(object_dict)
steps = 210
target_x = torch.zeros((128, n_balls, 2), dtype=torch.float32).to(device) + 1000
shapes = []
shape_dict = {
'sphere': 0,
'cube': 1,
'cylinder': 2
}
for object_index, identity in enumerate(object_dict.keys()):
locations = torch.tensor(object_dict[identity]['trajectory']).to(device)
target_x[:locations.shape[0], object_index, :] = locations
shapes.append(shape_dict[object_dict[identity]['shape']])
target_x = target_x[-40:-19]
for object_index, identity in enumerate(object_dict.keys()):
if target_x[0][object_index][0] > 500:
target_x[0][object_index] = torch.tensor(step_88['x'][object_index])
shape = torch.tensor(shapes, dtype=torch.int8).to(device)
angle0 = torch.tensor(step_88['angle'], dtype=torch.float32).to(device)
angle0.requires_grad = True
interval = 10
dt = 1/350
gravity = 9.806
radius = 0.2
inertia = 0.4 * 0.4 / 6
frictional = torch.tensor(0.03).to(device)
frictional.requires_grad = True
linear_damping = torch.tensor(0.06).to(device)
linear_damping.requires_grad = True
v0 = torch.tensor(step_88['v'], dtype=torch.float32).to(device)
v0.requires_grad = True
restitution = torch.tensor(step_88['restitution'], dtype=torch.float32).to(device)
restitution.requires_grad = True
mass = torch.tensor(step_88['mass'], dtype=torch.float32).to(device)
mass.requires_grad = True
def norm(vector, degree=2, dim=0):
return torch.norm(vector, degree, dim=dim)
def normalized(vector):
return vector / norm(vector)
def collide_sphere(x, v, x_inc, impulse, t, i, j, angle, angle_impulse, collisions):
imp = torch.tensor([0.0, 0.0]).to(device)
x_inc_contrib = torch.tensor([0.0, 0.0]).to(device)
if i != j:
dist = (x[t, i] + dt * v[t, i]) - (x[t, j] + dt * v[t, j])
dist_norm = norm(dist)
rela_v = v[t, i] - v[t, j]
if dist_norm < 2 * radius:
dir = normalized(dist)
projected_v = dir.dot(rela_v)
if projected_v < 0:
if i < j:
repeat = False
for item in collisions:
if json.dumps(item).startswith(json.dumps([i, j])[:-1]):
repeat = True
if not repeat:
collisions.append([i, j, round(t / 10.0)])
imp = -(1 + restitution[i] * restitution[j]) * (mass[j] / (mass[i] + mass[j])) * projected_v * dir
toi = (dist_norm - 2 * radius) / min(
-1e-3, projected_v)
x_inc_contrib = min(toi - dt, 0) * imp
x_inc[t + 1, i] += x_inc_contrib
impulse[t + 1, i] += imp
def sphere_collide_cube(x, v, x_inc, impulse, t, i, j, angle, angle_impulse, collisions):
imp = torch.tensor([0.0, 0.0]).to(device)
x_inc_contrib = torch.tensor([0.0, 0.0]).to(device)
if i != j:
rela_v = v[t, i] - v[t, j]
pos_xy = x[t, i] - x[t, j]
rotate_x = pos_xy.dot(torch.tensor([torch.cos(-angle[t, j]), -torch.sin(-angle[t, j])]))
rotate_y = pos_xy.dot(torch.tensor([torch.sin(-angle[t, j]), torch.cos(-angle[t, j])]))
moving_direction = torch.tensor([0.0, 0.0])
dist_norm = 0.0
collision = True
if torch.abs(rotate_x) > 2 * radius:
collision = False
elif torch.abs(rotate_y) > 2 * radius:
collision = False
elif torch.abs(rotate_x) <= radius:
if rotate_y > 0:
moving_direction = torch.tensor([0.0, 1.0])
dist_norm = rotate_y
elif rotate_y < 0:
moving_direction = torch.tensor([0.0, -1.0])
dist_norm = - rotate_y
elif torch.abs(rotate_y) <= radius:
if rotate_x > 0:
moving_direction = torch.tensor([1.0, 0.0])
dist_norm = rotate_x
elif rotate_x < 0:
moving_direction = torch.tensor([-1.0, 0.0])
dist_norm = - rotate_x
elif (torch.abs(rotate_x) - radius) ** 2 + (torch.abs(rotate_y) - radius) ** 2 <= radius ** 2:
if rotate_x > radius and rotate_y > radius:
moving_direction = normalized(torch.tensor([rotate_x - radius, rotate_y - radius]))
dist_norm = norm(torch.tensor([rotate_x - radius, rotate_y - radius])) + radius
elif rotate_x < -radius and rotate_y > radius:
moving_direction = normalized(torch.tensor([rotate_x + radius, rotate_y - radius]))
dist_norm = norm(torch.tensor([rotate_x + radius, rotate_y - radius])) + radius
elif rotate_x > radius and rotate_y < -radius:
moving_direction = normalized(torch.tensor([rotate_x - radius, rotate_y + radius]))
dist_norm = norm(torch.tensor([rotate_x - radius, rotate_y + radius])) + radius
elif rotate_x < -radius and rotate_y < -radius:
moving_direction = normalized(torch.tensor([rotate_x + radius, rotate_y + radius]))
dist_norm = norm(torch.tensor([rotate_x + radius, rotate_y + radius])) + radius
if collision:
origin_dir = torch.tensor(
[moving_direction.dot(torch.tensor([torch.cos(angle[t, j]), -torch.sin(angle[t, j])])),
moving_direction.dot(torch.tensor([torch.sin(angle[t, j]), torch.cos(angle[t, j])]))]
)
projected_v = origin_dir.dot(rela_v)
if projected_v < 0:
if i < j:
repeat = False
for item in collisions:
if json.dumps(item).startswith(json.dumps([i, j])[:-1]):
repeat = True
if not repeat:
collisions.append([i, j, round(t / 10.0)])
imp = -(1 + restitution[i] * restitution[j]) * (mass[j] / (mass[i] + mass[j])) * projected_v * origin_dir # 冲量,速度变化量
toi = (dist_norm - 2 * radius) / min(
-1e-3, projected_v)
x_inc_contrib = min(toi - dt, 0) * imp
x_inc[t + 1, i] += x_inc_contrib
impulse[t + 1, i] += imp
def cube_collide_sphere(x, v, x_inc, impulse, t, i, j, angle, angle_impulse, collisions):
imp = torch.tensor([0.0, 0.0])
x_inc_contrib = torch.tensor([0.0, 0.0])
a_rotate = 0.0
if i != j:
rela_v = v[t, i] - v[t, j]
pos_xy = x[t, j] - x[t, i]
rotate_x = pos_xy.dot(torch.tensor([torch.cos(-angle[t, i]), -torch.sin(-angle[t, i])]))
rotate_y = pos_xy.dot(torch.tensor([torch.sin(-angle[t, i]), torch.cos(-angle[t, i])]))
moving_direction = torch.tensor([0.0, 0.0])
collision_direction = torch.tensor([0.0, 0.0])
dist_norm = 0.0
r_rotate = 0.0
rotate_dir = False
collision = True
if torch.abs(rotate_x) > 2 * radius:
collision = False
elif torch.abs(rotate_y) > 2 * radius:
collision = False
elif torch.abs(rotate_x) <= radius:
if rotate_y > 0:
moving_direction = torch.tensor([0.0, -1.0])
collision_direction = normalized(torch.tensor([-rotate_x, -radius]))
dist_norm = rotate_y
if rotate_x > 0:
rotate_dir = 1
elif rotate_y < 0:
moving_direction = torch.tensor([0.0, 1.0])
collision_direction = normalized(torch.tensor([-rotate_x, radius]))
dist_norm = - rotate_y
if rotate_x < 0:
rotate_dir = 1
r_rotate = norm(torch.tensor([radius, rotate_x]))
elif torch.abs(rotate_y) <= radius:
if rotate_x > 0:
moving_direction = torch.tensor([-1.0, 0.0])
collision_direction = normalized(torch.tensor([-radius, -rotate_y]))
dist_norm = rotate_x
if rotate_y < 0:
rotate_dir = 1
elif rotate_x < 0:
moving_direction = torch.tensor([1.0, 0.0])
collision_direction = normalized(torch.tensor([radius, -rotate_y]))
dist_norm = - rotate_x
if rotate_y > 0:
rotate_dir = 1
r_rotate = norm(torch.tensor([radius, rotate_y]))
elif (torch.abs(rotate_x) - radius) ** 2 + (torch.abs(rotate_y) - radius) ** 2 <= radius ** 2:
if rotate_x > radius and rotate_y > radius:
moving_direction = - normalized(torch.tensor([rotate_x - radius, rotate_y - radius]))
collision_direction = normalized(torch.tensor([-1.0, -1.0]))
dist_norm = norm(torch.tensor([rotate_x - radius, rotate_y - radius])) + radius
if rotate_y > rotate_x:
rotate_dir = 1
elif rotate_x < -radius and rotate_y > radius:
moving_direction = - normalized(torch.tensor([rotate_x + radius, rotate_y - radius]))
collision_direction = normalized(torch.tensor([1.0, -1.0]))
dist_norm = norm(torch.tensor([rotate_x + radius, rotate_y - radius])) + radius
if -rotate_x > rotate_y:
rotate_dir = 1
elif rotate_x > radius and rotate_y < -radius:
moving_direction = - normalized(torch.tensor([rotate_x - radius, rotate_y + radius]))
collision_direction = normalized(torch.tensor([-1.0, 1.0]))
dist_norm = norm(torch.tensor([rotate_x - radius, rotate_y + radius])) + radius
if rotate_x > -rotate_y:
rotate_dir = 1
elif rotate_x < -radius and rotate_y < -radius:
moving_direction = - normalized(torch.tensor([rotate_x + radius, rotate_y + radius]))
collision_direction = normalized(torch.tensor([1.0, 1.0]))
dist_norm = norm(torch.tensor([rotate_x + radius, rotate_y + radius])) + radius
if -rotate_y > -rotate_x:
rotate_dir = 1
r_rotate = norm(torch.tensor([radius, radius]))
if collision:
origin_moving_dir = torch.tensor(
[moving_direction.dot(torch.tensor([torch.cos(angle[t, i]), -torch.sin(angle[t, i])])),
moving_direction.dot(torch.tensor([torch.sin(angle[t, i]), torch.cos(angle[t, i])]))]
)
origin_collision_dir = torch.tensor(
[collision_direction.dot(torch.tensor([torch.cos(angle[t, i]), -torch.sin(angle[t, i])])),
collision_direction.dot(torch.tensor([torch.sin(angle[t, i]), torch.cos(angle[t, i])]))]
)
projected_v = origin_moving_dir.dot(rela_v)
if projected_v < 0:
if i < j:
repeat = False
for item in collisions:
if json.dumps(item).startswith(json.dumps([i, j])[:-1]):
repeat = True
if not repeat:
collisions.append([i, j, round(t / 10.0)])
imp = -(1 + restitution[i] * restitution[j]) * (mass[j] / (mass[i] + mass[j])) * projected_v * origin_moving_dir
toi = (dist_norm - 2 * radius) / min(
-1e-3, projected_v)
x_inc_contrib = min(toi - dt, 0) * imp
f_rotate = (origin_moving_dir - origin_collision_dir.dot(origin_moving_dir) * origin_collision_dir).dot(-projected_v * origin_moving_dir)
a_rotate = f_rotate * r_rotate / inertia
if rotate_dir:
a_rotate = -a_rotate
x_inc[t + 1, i] += x_inc_contrib
impulse[t + 1, i] += imp
angle_impulse[t + 1, i] += a_rotate
def collide(shape, x, v, x_inc, impulse, t, angle, angle_impulse, collisions):
for i in range(n_balls):
for j in range(i):
if shape[i] != 1 and shape[j] != 1:
collide_sphere(x, v, x_inc, impulse, t, i, j, angle, angle_impulse, collisions)
elif shape[i] != 1 and shape[j] == 1:
sphere_collide_cube(x, v, x_inc, impulse, t, i, j, angle, angle_impulse, collisions)
elif shape[i] == 1 and shape[j] != 1:
cube_collide_sphere(x, v, x_inc, impulse, t, i, j, angle, angle_impulse, collisions)
elif shape[i] == 1 and shape[j] == 1:
collide_sphere(x, v, x_inc, impulse, t, i, j, angle, angle_impulse, collisions)
for i in range(n_balls):
for j in range(i + 1, n_balls):
if shape[i] != 1 and shape[j] != 1:
collide_sphere(x, v, x_inc, impulse, t, i, j, angle, angle_impulse, collisions)
elif shape[i] != 1 and shape[j] == 1:
sphere_collide_cube(x, v, x_inc, impulse, t, i, j, angle, angle_impulse, collisions)
elif shape[i] == 1 and shape[j] != 1:
cube_collide_sphere(x, v, x_inc, impulse, t, i, j, angle, angle_impulse, collisions)
elif shape[i] == 1 and shape[j] == 1:
collide_sphere(x, v, x_inc, impulse, t, i, j, angle, angle_impulse, collisions)
def friction(shape, x, v, x_inc, impulse, v_old, t, i):
if shape[i] == 0:
if v_old[0] > 0.0:
v[t, i][0] = max(0, v_old[0] - linear_damping * dt * v_old[0] * norm(v_old))
elif v_old[0] < 0.0:
v[t, i][0] = min(0, v_old[0] - linear_damping * dt * v_old[0] * norm(v_old))
if v_old[1] > 0.0:
v[t, i][1] = max(0, v_old[1] - linear_damping * dt * v_old[1] * norm(v_old))
elif v_old[1] < 0.0:
v[t, i][1] = min(0, v_old[1] - linear_damping * dt * v_old[1] * norm(v_old))
else:
if v_old[0] > 0.0:
v[t, i][0] = max(0, v_old[0] - gravity * frictional * dt * normalized(v_old)[0] - linear_damping * dt * v_old[0] * norm(v_old))
elif v_old[0] < 0.0:
v[t, i][0] = min(0, v_old[0] - gravity * frictional * dt * normalized(v_old)[0] - linear_damping * dt * v_old[0] * norm(v_old))
if v_old[1] > 0.0:
v[t, i][1] = max(0, v_old[1] - gravity * frictional * dt * normalized(v_old)[1] - linear_damping * dt * v_old[1] * norm(v_old))
elif v_old[1] < 0.0:
v[t, i][1] = min(0, v_old[1] - gravity * frictional * dt * normalized(v_old)[1] - linear_damping * dt * v_old[1] * norm(v_old))
def advance(shape, x, v, x_inc, impulse, t, angle, delta_angle, angle_impulse):
for i in range(n_balls):
v_old = v[t - 1, i] + impulse[t, i]
friction(shape, x, v, x_inc, impulse, v_old, t, i)
x[t, i] = x[t - 1, i] + dt * (v[t, i] + v_old)/2 + x_inc[t, i]
delta_angle[t, i] = delta_angle[t - 1, i] + angle_impulse[t, i]
if delta_angle[t, i] > 0.0:
delta_angle[t, i] = max(0, delta_angle[t, i] - dt * gravity / 2)
elif delta_angle[t, i] < 0.0:
delta_angle[t, i] = min(0, delta_angle[t, i] + dt * gravity / 2)
angle[t, i] = angle[t - 1, i] + dt * delta_angle[t, i]
def init():
x = torch.zeros((steps, n_balls, 2), dtype=torch.float32).to(device)
v = torch.zeros((steps, n_balls, 2), dtype=torch.float32).to(device)
x_inc = torch.zeros((steps, n_balls, 2), dtype=torch.float32).to(device)
impulse = torch.zeros((steps, n_balls, 2), dtype=torch.float32).to(device)
angle = torch.zeros((steps, n_balls), dtype=torch.float32).to(device)
delta_angle = torch.zeros((steps, n_balls), dtype=torch.float32).to(device)
angle_impulse = torch.zeros((steps, n_balls), dtype=torch.float32).to(device)
x[0, :] = target_x[0]
v[0, :] = v0
angle[0, :] = angle0
return x, v, x_inc, impulse, angle, delta_angle, angle_impulse
def closure():
optimizer.zero_grad()
x, v, x_inc, impulse, angle, delta_angle, angle_impulse = init()
loss = 0
collisions = []
for t in range(1, 210):
collide(shape, x, v, x_inc, impulse, t - 1, angle, angle_impulse, collisions)
advance(shape, x, v, x_inc, impulse, t, angle, delta_angle, angle_impulse)
if t % interval == 0:
loss += (((x[t, :] - target_x[int(t/interval), :]) * (target_x[int(t/interval), :] < 100)) ** 2).mean()
return loss
def init_inference():
x = torch.zeros((210, n_balls, 2), dtype=torch.float32).to(device)
v = torch.zeros((210, n_balls, 2), dtype=torch.float32).to(device)
x_inc = torch.zeros((210, n_balls, 2), dtype=torch.float32).to(device)
impulse = torch.zeros((210, n_balls, 2), dtype=torch.float32).to(device)
angle = torch.zeros((210, n_balls), dtype=torch.float32).to(device)
delta_angle = torch.zeros((210, n_balls), dtype=torch.float32).to(device)
angle_impulse = torch.zeros((210, n_balls), dtype=torch.float32).to(device)
x[0, :] = target_x[0]
v[0, :] = v0
angle[0, :] = angle0
return x, v, x_inc, impulse, angle, delta_angle, angle_impulse
# if __name__ == '__main__':
optimizer = FullBatchLBFGS([v0, mass, restitution])
start = time.time()
loss = closure()
loss.backward()
for i in range(15):
options = {'closure': closure, 'current_loss': loss, 'max_ls': 10}
loss, _, lr, _, F_eval, G_eval, _, _ = optimizer.step(options)
print(loss, lr, v0, mass, restitution)
if loss < 0.0002 or lr == 0:
break
time_cost = time.time() - start
print(f'----- learned, cost {time_cost}s')
collisions = []
x, v, x_inc, impulse, angle, delta_angle, angle_impulse = init_inference()
for t in range(1, 210):
collide(shape, x, v, x_inc, impulse, t - 1, angle, angle_impulse, collisions) # 计算碰撞
advance(shape, x, v, x_inc, impulse, t, angle, delta_angle, angle_impulse) # 更新速度和位置
# ==================================================================================
shapes = []
shape_dict = {
'sphere': 0,
'cube': 1,
'cylinder': 2
}
reverse_shape_dict = {
0: 'sphere',
1: 'cube',
2: 'cylinder'
}
colors = []
materials = []
for object_index, identity in enumerate(object_dict.keys()):
shapes.append(shape_dict[object_dict[identity]['shape']])
colors.append(object_dict[identity]['color'])
materials.append(object_dict[identity]['material'])
gt_objects = list(object_dict.keys())
old_collisions = output_dict['predictions'][0]['collisions'].copy()
uniq_collisions = []
for item in old_collisions:
if item['frame'] > 88:
output_dict['predictions'][0]['collisions'].remove(item)
print('remove collision', item['frame'])
else:
uniq_collisions.append([gt_objects.index(item['objects'][0]['color'] + item['objects'][0]['material'] + item['objects'][0]['shape']),
gt_objects.index(item['objects'][1]['color'] + item['objects'][1]['material'] + item['objects'][1]['shape']),
item['frame']])
for collision_index, item in enumerate(collisions):
i, j, frame = item
repeat = False
for colli_item in uniq_collisions:
if json.dumps(colli_item).startswith(json.dumps([i, j])[:-1]):
repeat = True
if not repeat:
output_dict['predictions'][0]['collisions'].append({
'frame': 88 + frame,
'objects': [{
'color': colors[i],
'material': materials[i],
'shape': reverse_shape_dict[shapes[i]],
}, {
'color': colors[j],
'material': materials[j],
'shape': reverse_shape_dict[shapes[j]],
}]
})
print('add collision', 88 + frame)
output_dict['predictions'][0]['trajectory'] = output_dict['predictions'][0]['trajectory'][:18]
print('keep trajectory from 0 to', output_dict['predictions'][0]['trajectory'][-1]['frame_index'])
for frame_index, locations in enumerate(x):
if frame_index % 50 == 20:
frame_info = {'frame_index': 88 + frame_index // 10,
'objects': []}
for object_index, location in enumerate(locations):
xy = get_2d_coor(location[0].cpu().item(), location[1].cpu().item())
xy1 = get_2d_coor(location[0].cpu().item() + radius * 0.7071, location[1].cpu().item(), z3d=radius * (1 - 0.7071))
xy2 = get_2d_coor(location[0].cpu().item() - radius * 0.7071, location[1].cpu().item(), z3d=radius * (1 + 0.7071))
xy3 = get_2d_coor(location[0].cpu().item(), location[1].cpu().item() + radius)
xy4 = get_2d_coor(location[0].cpu().item(), location[1].cpu().item() - radius)
xy5 = get_2d_coor(location[0].cpu().item(), location[1].cpu().item(), z3d=0)
xy6 = get_2d_coor(location[0].cpu().item(), location[1].cpu().item(), z3d=2 * radius)
if (-10 < xy[0] < 490 and -10 < xy[1] < 330) \
or (0 < xy1[0] < 480 and 0 < xy1[1] < 320) \
or (0 < xy2[0] < 480 and 0 < xy2[1] < 320) \
or (0 < xy3[0] < 480 and 0 < xy3[1] < 320) \
or (0 < xy4[0] < 480 and 0 < xy3[1] < 320) \
or (0 < xy5[0] < 480 and 0 < xy3[1] < 320) \
or (0 < xy6[0] < 480 and 0 < xy4[1] < 320):
frame_info['objects'].append({
'x': float(xy[1]) / 3.2,
'y': float(xy[0]) / 3.2,
'color': colors[object_index],
'material': materials[object_index],
'shape': reverse_shape_dict[shapes[object_index]],
})
output_dict['predictions'][0]['trajectory'].append(frame_info)
print('add trajectory', frame_info['frame_index'])
n_balls = len(object_dict)
steps = 200
target_x = torch.zeros((128, n_balls, 2), dtype=torch.float32).to(device) + 1000
shapes = []
shape_dict = {
'sphere': 0,
'cube': 1,
'cylinder': 2
}
for object_index, identity in enumerate(object_dict.keys()):
locations = torch.tensor(object_dict[identity]['trajectory']).to(device)
target_x[:locations.shape[0], object_index, :] = locations
shapes.append(shape_dict[object_dict[identity]['shape']])
target_x = target_x[-20:]
for object_index, identity in enumerate(object_dict.keys()):
if target_x[0][object_index][0] > 500:
target_x[0][object_index] = torch.tensor(x[-1].detach()[object_index])
shape = torch.tensor(shapes, dtype=torch.int8).to(device)
angle0 = angle[-1].detach()
angle0.requires_grad = True
interval = 10
dt = 1/350
gravity = 9.806
radius = 0.2
inertia = 0.4 * 0.4 / 6
frictional = torch.tensor(0.03).to(device)
frictional.requires_grad = True
linear_damping = torch.tensor(0.06).to(device)
linear_damping.requires_grad = True
v0 = torch.tensor(v[-1].detach(), dtype=torch.float32).to(device)
v0.requires_grad = True
restitution = torch.tensor(restitution.detach(), dtype=torch.float32).to(device)
restitution.requires_grad = True
mass = torch.tensor(mass.detach(), dtype=torch.float32).to(device)
mass.requires_grad = True
def closure_108():
optimizer.zero_grad()
x, v, x_inc, impulse, angle, delta_angle, angle_impulse = init()
loss = 0
collisions = []
for t in range(1, 200):
collide(shape, x, v, x_inc, impulse, t - 1, angle, angle_impulse, collisions)
advance(shape, x, v, x_inc, impulse, t, angle, delta_angle, angle_impulse)
if t % interval == 0:
loss += (((x[t, :] - target_x[int(t/interval), :]) * (target_x[int(t/interval), :] < 100)) ** 2).mean()
return loss
def init_inference_108():
x = torch.zeros((780, n_balls, 2), dtype=torch.float32).to(device)
v = torch.zeros((780, n_balls, 2), dtype=torch.float32).to(device)
x_inc = torch.zeros((780, n_balls, 2), dtype=torch.float32).to(device)
impulse = torch.zeros((780, n_balls, 2), dtype=torch.float32).to(device)
angle = torch.zeros((780, n_balls), dtype=torch.float32).to(device)
delta_angle = torch.zeros((780, n_balls), dtype=torch.float32).to(device)
angle_impulse = torch.zeros((780, n_balls), dtype=torch.float32).to(device)
x[0, :] = target_x[0]
v[0, :] = v0
angle[0, :] = angle0
return x, v, x_inc, impulse, angle, delta_angle, angle_impulse
optimizer = FullBatchLBFGS([v0, mass, restitution])
start = time.time()
loss = closure_108()
loss.backward()
for i in range(15):
options = {'closure': closure_108, 'current_loss': loss, 'max_ls': 10}
loss, _, lr, _, F_eval, G_eval, _, _ = optimizer.step(options)
print(loss, lr, v0, mass, restitution)
if loss < 0.0002 or lr == 0:
break
time_cost = time.time() - start
print(f'----- learned, cost {time_cost}s')
collisions = []
x, v, x_inc, impulse, angle, delta_angle, angle_impulse = init_inference_108()
for t in range(1, 780):
collide(shape, x, v, x_inc, impulse, t - 1, angle, angle_impulse, collisions)
advance(shape, x, v, x_inc, impulse, t, angle, delta_angle, angle_impulse)
# ==================================================================================
shapes = []
shape_dict = {
'sphere': 0,
'cube': 1,
'cylinder': 2
}
reverse_shape_dict = {
0: 'sphere',
1: 'cube',
2: 'cylinder'
}
colors = []
materials = []
for object_index, identity in enumerate(object_dict.keys()):
shapes.append(shape_dict[object_dict[identity]['shape']])
colors.append(object_dict[identity]['color'])
materials.append(object_dict[identity]['material'])
gt_objects = list(object_dict.keys())
old_collisions = output_dict['predictions'][0]['collisions'].copy()
uniq_collisions = []
for item in old_collisions:
if item['frame'] > 108:
output_dict['predictions'][0]['collisions'].remove(item)
print('remove collision', item['frame'])
else:
uniq_collisions.append([gt_objects.index(item['objects'][0]['color'] + item['objects'][0]['material'] + item['objects'][0]['shape']),
gt_objects.index(item['objects'][1]['color'] + item['objects'][1]['material'] + item['objects'][1]['shape']),
item['frame']])
for collision_index, item in enumerate(collisions):
i, j, frame = item
repeat = False
for colli_item in uniq_collisions:
if json.dumps(colli_item).startswith(json.dumps([i, j])[:-1]):
repeat = True
if not repeat:
output_dict['predictions'][0]['collisions'].append({
'frame': 108 + frame,
'objects': [{
'color': colors[i],
'material': materials[i],
'shape': reverse_shape_dict[shapes[i]],
}, {
'color': colors[j],
'material': materials[j],
'shape': reverse_shape_dict[shapes[j]],
}]
})
print('add collision', 108 + frame)
output_dict['predictions'][0]['trajectory'] = output_dict['predictions'][0]['trajectory'][:22]
print('keep trajectory from 0 to', output_dict['predictions'][0]['trajectory'][-1]['frame_index'])
for frame_index, locations in enumerate(x):
if frame_index % 50 == 20:
frame_info = {'frame_index': 108 + frame_index // 10,
'objects': []}
for object_index, location in enumerate(locations):
xy = get_2d_coor(location[0].cpu().item(), location[1].cpu().item())
xy1 = get_2d_coor(location[0].cpu().item() + radius * 0.7071, location[1].cpu().item(), z3d=radius * (1 - 0.7071))
xy2 = get_2d_coor(location[0].cpu().item() - radius * 0.7071, location[1].cpu().item(), z3d=radius * (1 + 0.7071))
xy3 = get_2d_coor(location[0].cpu().item(), location[1].cpu().item() + radius)
xy4 = get_2d_coor(location[0].cpu().item(), location[1].cpu().item() - radius)
xy5 = get_2d_coor(location[0].cpu().item(), location[1].cpu().item(), z3d=0)
xy6 = get_2d_coor(location[0].cpu().item(), location[1].cpu().item(), z3d=2 * radius)
if (-10 < xy[0] < 490 and -10 < xy[1] < 330) \
or (0 < xy1[0] < 480 and 0 < xy1[1] < 320) \
or (0 < xy2[0] < 480 and 0 < xy2[1] < 320) \
or (0 < xy3[0] < 480 and 0 < xy3[1] < 320) \
or (0 < xy4[0] < 480 and 0 < xy3[1] < 320) \
or (0 < xy5[0] < 480 and 0 < xy3[1] < 320) \
or (0 < xy6[0] < 480 and 0 < xy4[1] < 320):
frame_info['objects'].append({
'x': float(xy[1]) / 3.2,
'y': float(xy[0]) / 3.2,
'color': colors[object_index],
'material': materials[object_index],
'shape': reverse_shape_dict[shapes[object_index]],
})
output_dict['predictions'][0]['trajectory'].append(frame_info)
print('add trajectory', frame_info['frame_index'])
json.dump(output_dict, open(f'../data/object_updated_results/sim_{process_index:05d}.json', 'w'))
| 47.50578 | 157 | 0.53124 | 4,242 | 32,874 | 3.955446 | 0.074022 | 0.048513 | 0.009834 | 0.033971 | 0.895524 | 0.880684 | 0.874426 | 0.866917 | 0.859825 | 0.855355 | 0 | 0.053278 | 0.316572 | 32,874 | 691 | 158 | 47.57453 | 0.693551 | 0.027925 | 0 | 0.691252 | 0 | 0 | 0.044158 | 0.006608 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024014 | false | 0 | 0.010292 | 0.003431 | 0.048027 | 0.022298 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
65ea73b58981c99fbcec117ca6a7ec83ab6687fe | 2,359 | py | Python | model_test/evaluate_model.py | lover-520/wzm_landform_scene_model | 1bc8894d99b76213ca2544e540dccab2ad52be00 | [
"MIT"
] | 4 | 2021-01-23T15:47:49.000Z | 2021-05-05T17:03:12.000Z | model_test/evaluate_model.py | lover-520/wzm_landform_scene_model | 1bc8894d99b76213ca2544e540dccab2ad52be00 | [
"MIT"
] | null | null | null | model_test/evaluate_model.py | lover-520/wzm_landform_scene_model | 1bc8894d99b76213ca2544e540dccab2ad52be00 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
@author: WZM
@time: 2021/1/2 17:52
@function: 测试模型精度
"""
from net.ouy_net import Network
import numpy as np
import torch
import os
def load_net(fname, net):
import h5py
h5f = h5py.File(fname, mode='r')
for k, v in net.state_dict().items():
param = torch.from_numpy(np.asarray(h5f[k]))
v.copy_(param)
def evaluate_model(trained_model, data_loader, index):
net = Network(index)
load_net(trained_model, net)
device = torch.device('cuda:0')
if torch.cuda.is_available():
net = net.to(device)
net.eval()
count = 0
total = 0
lableresultpath = trained_model.replace(".h5", ".txt")
if os.path.exists(lableresultpath):
os.remove(lableresultpath)
valid_loss = 0.0
for blob in data_loader:
im_data = blob[0]
dem_data = blob[2]
img_data = blob[1]
gt_data = blob[3].reshape((blob[3].shape[0], 1))
index = 61
pre_label = net(im_data, dem_data, img_data, index, gt_data)
pre_label = pre_label.data.cpu().numpy()
valid_loss += net.loss.item()
label = pre_label.argmax(axis=1).flatten()
num = len(label)
for i in range(0, num):
if gt_data[i] == label[i]:
count = count + 1
total = total + 1
return 1.0 * count / total, valid_loss
def evaluate_model1(net, data_loader, index):
device = torch.device('cuda:0')
if torch.cuda.is_available():
net = net.to(device)
net.eval()
count = 0
total = 0
# lableresultpath = trained_model.replace(".h5", ".txt")
# if os.path.exists(lableresultpath):
# os.remove(lableresultpath)
valid_loss = 0.0
for blob in data_loader:
im_data = blob[0]
dem_data = blob[2]
img_data = blob[1]
gt_data = blob[3].reshape((blob[3].shape[0], 1))
index = 61
with torch.no_grad():
pre_label = net(im_data, dem_data, img_data, index, gt_data)
pre_label = pre_label.data.cpu().numpy()
valid_loss += net.loss.item()
label = pre_label.argmax(axis=1).flatten()
num = len(label)
for i in range(0, num):
if gt_data[i] == label[i]:
count = count + 1
total = total + 1
return 1.0 * count / total, valid_loss
| 25.365591 | 72 | 0.578635 | 338 | 2,359 | 3.887574 | 0.269231 | 0.048706 | 0.039574 | 0.031963 | 0.732116 | 0.732116 | 0.732116 | 0.732116 | 0.732116 | 0.732116 | 0 | 0.034503 | 0.28741 | 2,359 | 92 | 73 | 25.641304 | 0.747174 | 0.083934 | 0 | 0.71875 | 0 | 0 | 0.009302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046875 | false | 0 | 0.078125 | 0 | 0.15625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
029f520d7ac712ac57449f8c27779487e0ce463b | 27 | py | Python | tests/block/test_block.py | huksley/notion-py | 90c66891ed6b892c77befd8eeeba4cb637b008a9 | [
"MIT"
] | 58 | 2020-07-01T17:13:26.000Z | 2022-03-16T16:02:01.000Z | tests/block/test_block.py | huksley/notion-py | 90c66891ed6b892c77befd8eeeba4cb637b008a9 | [
"MIT"
] | 30 | 2020-07-02T09:28:05.000Z | 2022-02-04T18:10:36.000Z | tests/block/test_block.py | huksley/notion-py | 90c66891ed6b892c77befd8eeeba4cb637b008a9 | [
"MIT"
] | 10 | 2020-07-01T14:59:09.000Z | 2021-11-28T07:57:47.000Z | def test_block():
pass
| 9 | 17 | 0.62963 | 4 | 27 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.259259 | 27 | 2 | 18 | 13.5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
02af99284141051aff882684d3335e3e35b73d3f | 1,594 | py | Python | Abstraction/IApiClient.py | pjpmosteiro/DiscordGPT-3 | 2a64cb78debf4366d080163c38503b5f6443fbc6 | [
"MIT"
] | 49 | 2020-12-01T21:22:14.000Z | 2022-03-19T03:01:29.000Z | Abstraction/IApiClient.py | pjpmosteiro/DiscordGPT-3 | 2a64cb78debf4366d080163c38503b5f6443fbc6 | [
"MIT"
] | 13 | 2021-01-27T08:04:14.000Z | 2022-03-04T18:12:56.000Z | Abstraction/IApiClient.py | pjpmosteiro/DiscordGPT-3 | 2a64cb78debf4366d080163c38503b5f6443fbc6 | [
"MIT"
] | 19 | 2021-02-01T16:11:04.000Z | 2022-02-15T20:51:16.000Z | # e4c6 ~ 2021
from abc import ABCMeta, abstractmethod
from typing import Tuple
class ApiClientInterface(metaclass=ABCMeta):
@abstractmethod
async def complete(self, prompt: Tuple[str], length: int, api_key: str, language: str, temperature: float) -> str:
raise NotImplementedError
@abstractmethod
async def answer(self, question: Tuple[str], length: int, api_key: str, language: str, temperature: float) -> str:
raise NotImplementedError
@abstractmethod
async def song(self, song_name: Tuple[str], user_name, length: int, api_key: str, language: str,
temperature: float) -> str:
raise NotImplementedError
@abstractmethod
async def headline(self, prompt: Tuple[str], length: int, api_key: str, language: str, temperature: float) -> str:
raise NotImplementedError
@abstractmethod
async def sentiment(self, prompt: Tuple[str], api_key: str, language: str) -> str:
raise NotImplementedError
@abstractmethod
async def emojify(self, prompt: Tuple[str], length: int, api_key: str, language: str, temperature: float) -> str:
raise NotImplementedError
@abstractmethod
async def sarcastic_answer(self, prompt: Tuple[str], length: int, api_key: str, language: str,
temperature: float) -> str:
raise NotImplementedError
@abstractmethod
async def foulmouth_answer(self, prompt: Tuple[str], length: int, api_key: str, language: str,
temperature: float) -> str:
raise NotImplementedError
| 37.069767 | 118 | 0.67064 | 176 | 1,594 | 6.005682 | 0.210227 | 0.143803 | 0.166509 | 0.128666 | 0.77105 | 0.752129 | 0.705771 | 0.705771 | 0.705771 | 0.705771 | 0 | 0.00493 | 0.236512 | 1,594 | 42 | 119 | 37.952381 | 0.863599 | 0.006901 | 0 | 0.633333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.066667 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b858b8a397097b2a1a231aeafd88656d0db76b2a | 229 | py | Python | jft/cfg.py | isimluk/jft | 998a6d4248f6b4757f9e0cb281c9b2cb51d3558f | [
"Unlicense"
] | null | null | null | jft/cfg.py | isimluk/jft | 998a6d4248f6b4757f9e0cb281c9b2cb51d3558f | [
"Unlicense"
] | null | null | null | jft/cfg.py | isimluk/jft | 998a6d4248f6b4757f9e0cb281c9b2cb51d3558f | [
"Unlicense"
] | null | null | null | # Todo replace this with normal python config read from file
config = {
'url': 'https://issues.redhat.com',
'username': 'me',
'password': 'hackmepls',
}
config['url'] = 'https://issues.stage.redhat.com'
| 22.9 | 60 | 0.606987 | 27 | 229 | 5.148148 | 0.740741 | 0.129496 | 0.201439 | 0.28777 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.218341 | 229 | 9 | 61 | 25.444444 | 0.776536 | 0.253275 | 0 | 0 | 0 | 0 | 0.526627 | 0 | 0 | 0 | 0 | 0.111111 | 0 | 1 | 0 | false | 0.166667 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
b868ce889dbbe95c91d7fd7e3d71519821b4baf5 | 30 | py | Python | Fundamentos/holaMundo.py | ricnef2121/python | 9669921f3a9f9cafd1b40a17948c5dcfce60a1ac | [
"MIT"
] | null | null | null | Fundamentos/holaMundo.py | ricnef2121/python | 9669921f3a9f9cafd1b40a17948c5dcfce60a1ac | [
"MIT"
] | null | null | null | Fundamentos/holaMundo.py | ricnef2121/python | 9669921f3a9f9cafd1b40a17948c5dcfce60a1ac | [
"MIT"
] | null | null | null | print("hola mundo con python") | 30 | 30 | 0.766667 | 5 | 30 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0.677419 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
b87caeda30534d7179124225dbef1779907127dd | 10,858 | py | Python | pysimplegui/DemoPrograms/Demo_Floating_Toolbar.py | konsan1101/py-etc | bcca13119b0d2453866988404fd1c4976f55d4d5 | [
"MIT"
] | null | null | null | pysimplegui/DemoPrograms/Demo_Floating_Toolbar.py | konsan1101/py-etc | bcca13119b0d2453866988404fd1c4976f55d4d5 | [
"MIT"
] | 2 | 2020-06-06T00:30:56.000Z | 2021-06-10T22:30:37.000Z | pysimplegui/DemoPrograms/Demo_Floating_Toolbar.py | konsan1101/py-etc | bcca13119b0d2453866988404fd1c4976f55d4d5 | [
"MIT"
] | null | null | null | import PySimpleGUI as sg
import sys
'''
Example of borderless floating toolbar.
'''
button_names = ('close', 'cookbook', 'cpu', 'github',
'pysimplegui', 'run', 'storage', 'timer')
house64 = 'iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAACXBIWXMAAAsSAAALEgHS3X78AAAHPklEQVRYhbVXbUxb1xl+zjn30/a9/gBsbBwCBhPAUD4W2pClSZM0TemkdZPaSf0RTfszTZv2o1qzqmqiaL82salSqzZptVVqqmRV1dEssERKxJKxLAWajEYkAcxXyoBg4xgcY8AY23c/+EgwNiTRdqTz557zPOd5n/Oe95wLPGFzOp24fPp0yeTJk4cbjxzJelIe9qTA5uPHt7mHho6HOzsP1RQUWODxnO/o6Pj/C3A6naT5/ffLC9raWqZbW2v8t29GEz7/d3dXVuY56us7W69cmX1EHqaqKn1sAWffe6+ipK/vROjChaq+WNj/r2wWN44FEvAHamtcLhtfW3uuo7NT24xHVVUKPIYDzrw80vzuu1WuixdbQufPV3SJC747VcxUWC1ZvtFoRPX6tMX+wR27PJ6CLbt3d3zV1WWy2+0HZVn2APAkEgmPKIqeeDzeAwDhcFgLh8MaeVQB//j445qSrq4TU2fO1HlF+L07BGN5hVmXnWXG4PA4+q/OTVb1RwSjwSRZGxqaLm3deq7z+vU/B4NBjIyMwOfzQVEU+Hw+AgD19fUCAGwqwJmXR08dO1brampqjly7Zuu26/3j35GNNdutOqvVAV4QEA6H0D8wgr7u6OS29oCgSxCj7eWXvyB7snLjCDwLAiSTSe3YB20/avv3aNPD/NxmAk4dPbq9pLX1w3BHh23IrPMH6lW1vMyks+XmQxBEAIDRlI2iIoATJqw9kaS/sDt4P3b27A90d2yJql83EMIzxGILcYGniVT+jAKcDgc99dZbT7tOnGgO9/dn9RZb/f5nzeo2t1lPIGM6GAUlUbBlDxl4WA1GcAcEW2+27LddGiXz7cPqrd9fROXPDkC2GMAYv8q/sgUZBZw6fLi+5PPPj0d6e7NHnNm+qX1Wtdht0muLAj7rVhB0fR81VgLc/AKXTK/ioIuHe/5LFG6NgeMmbTdn4r6szrvM195vIAkN24+8AkYfLNfe3h5bEp4aud3Omo8e3eVubPzrgtdb4PU4fYHvbVFLn3LobblOxKJJdMyWwPXiL/F8XQV6brQjWv8r1D9VBvdsJ7Jy9JBlCXorMYyJmsBGZjA74ENo0IeEq7T5Srf3FrBBHWh5++09ZZ9+eiI2MpL/baHdH/yhS813Z+lzrHmQJD1mQrNIjvXBEf4G/NAFZEXvYCfrRtn9v0MI3oZozYUo6cDxFIZsEWOLiLDAQnR+2Cd7bPkm8759Z77u6oqtqwNOu51refPNvaWNjWcWx8edAzUu3/QrJWphuV2fk+OEJCsglGFuZhYtoTJ0lh2BuXwvvvrPLD6SfwHOtReFiUEYFApKOciyAlEUoOZJwj2zMq0N309GbvWU1VosTxcfOPB1y+XLgXA4rK0K+Nsbbzxfefr0B/GJCceoy+EPveZRHEUWgyXLAUlWQAkDIQxzMzO4Iz+Dssrt2FkkYnzgNsxFz+ClIh7ucBsgLM2jlFtyggKKhTP4CD+FiYg26x1wlypKhfm555qv3bgRZc7cXP7c668frHznnb/EJybsQ3Vuf/hQteIssRnMFgcknRGEstWemI0gSXR4oWARXHQEJVNXUesQ4Ex8C8PkNSQU0+pcSjmIsgJe4GByykooxzgd9wYQ6ekrrTEa64v377/OXqiutv387t0/LHq928bcW3wzP9mu5BRY9EazDZLOuBr5SudFEYViAPpIP5RwP7IMGrIXvJAjXkDgoEnGNfMp5SCIOhCahDFHNAQ5YSoxGsLcwFDRnoaGEDcej09M7NrVNDo+VBR8tcJcVmzT6/QWyDpT2uPJ61RAp0IDoAFIpowTkHX1lTEeJrMTjPlRup/Y2+ZjI4XDscG7VmszAYAd5eXGaHCi7seH6n7TsK9ip6LawPO6tAI+OfklAvem0o4BwEsv7oHH404zoiESnsS9YAD+hfzjv/vtJ38cDoZ6OQDo6Om5D6D1NY3+lOMFUMaDPlS1Hm6Dff2IT42D0vVjszEgUFedEct4AYwTUOyqvnm1b+AGkFIJCWVLi9Olnq7xjEAQCWiaayyhLXOkxWqgjANlHAh5AF4jgFIGxjhQxoNkiIJjFJLIAWStAgJgUUsuJV8GLGU82EYCVqhWsjddY5RCFrjU9UEIEI1vhNWWEjQ1oHSLEMqBMCG9AEZhkLl1W0AAROPxzFhNA8j6xMkgYGMHjBIPgaWQEWBuESCEpsdq2hrrNxGQ2QGOMQgcA5ey/j99KtR44H/hwOY5oOpEiPxash1kAdMzfEYHNE0D8KhbwLiNTwFPwLO1L+98I0FykS47sB5LNDziFhAsO5DpKFHIAoOQ8pIgBJB4BkJpWqz2OElIM0QBLOWAQeIgpiAJAFlkICSTA4+RhNjAAUYpZJGDlLIFhBBIPIOWoRI+hgNk+T7P8F4lFJIkQxHXk0nCIuYJTYsl0ECWk5DQB8/zTf8LUluScAguUG0mvv73bz6exuOHJKwUwg8/+lNk5et/AVSZbsni/k4yAAAAAElFTkSuQmCC'
cpu64 = 'iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAACXBIWXMAAAsSAAALEgHS3X78AAAFzElEQVRYhc1XX2wTdRz/lLv+uV7btbTbXdeyAHZX2g0uTiADM5SbhGgfwOiDIip7MUFf9EEU45MmgJj4gPLAgwHFaGJMTIybIYYRIhH5E93JuuHqssGutKNd73psd2vXrj7QQoO0yOa/T3LJ3fff53P5fe/3+x5wD3Act0cQhGi1LRKJXA8EAj2V52AwuCsSiVyvjhEEIcpx3Ov3qr/kXgH/NOoKcDgcrQ6HgydJ0uX1ersBwO/3PwGAamhoWEfTdAtN0y12u309AKrsA8uy3SRJuhoaGniHw9G6YAEMw2xnGGaH0Wj0hkKhQwDA8/wxADaWZXe7XC7B5XIJDMPsBmAr+xAOhw8ZjUZvU1PTcyzLbq/HYajnpChqmdVqfQAAisXijKIoF9xu98MAjAAwPT19GQBsNtuqckp+amrqR6fTuY4gCBoANE0b1XV9YkECnE5nyOPxPGIwGCz14mqhVCrNptPp04qiDN+3gHA4/MaKFSv2YfGNOj82NvbW0NDQe3UFOByOAMMwT09OTn5BkqRzw4YNv+Tz+YnR0dF38/l8GgDsdnvrypUrDy5AROns2bMPFgoFhWGYZycnJ79SVfV3ACBbW1vfBACn07m6qalph6Zp561WawcAw+Dg4AuJROI0ABgMBsP69es/WwA5ABjcbvcWTdN+5jhuv9PpXK0oyiUAIJctW/YiAJAk6bwVXV7z6rVrb29/x+Px7FigAFT3kcvlEux2ewcAkP39/SEA8Hq9QigUOlwsFrWqvBIABAKBnpaWlrcXSl5BsVjUdF2/PDQ09HIymTwFAGTFmUgk+hOJRAgAHA7HYxV7c3NzdzAYPLJYcgBIJpM/JZPJULWNqNz4/f6tXV1dZzRNO2cymZa73W6hVCqlgsHgR0uWLLEuljyTyZyyWCzzmzZtOqfr+qCqqqMAQEYikUQ5xgrAAcBUSbqj43OZTKbPZDJ5bDZbl67r45qmjVssFhtN0w/Nzc1NAABBEM65ublxs9m85i46TABYnue/5HleAwBSFMW9AODxeNb6fL5Xar3B4OBgj6qq0VwuN9nW1nYgm82Op9PpPoIgKI/Hs65QKBAA5t1u9+OxWOy1zs5OsVateDx+PJ1OXwQAUpKkYwAgy/LJdDp9UZblYZqmN96ZlEqlfli7du2nJEk2z8/P57PZ7DjDMBtomm69du1aH03Tq2sRViDL8rAoij2ZTOakpmkTwH3scgaDAaVSCajavOLx+HeZTGYgHA5/ULbPl6+/XJf0+/27gNtLMDAw0H23QI/H0xWNRl+dnZ1NtbW17QMAhmG2chz3IQA0NjZuHhgY2JlKpb5lWXbb3Wq4XK4Qz/NH4/H44VtLwPP8/rK/bqe3t7cfrW5Cu90+DmCuqvjWjRs3ns3n81Pl+aAmfD7f8z6f7ykAIHt7e73Azc+wfJ7na+SZly5d+mTlgaKo5X8KMJsDZrM5UIc7DyApiuIuSZJOAFUbkSRJJyRJ8gIAx3GP1nuDhSIej5+Jx+PeatutZvF6vYIgCMMsy3b+E+QAwLJsZ5ljc8VGCoIwDNw8jIxGI0sQxKJ3vVogCMJKUdSqNWvWfB4OhxUAICcmJj4Bbh/HwM1J5u8mr64py3L/reM4FosdAG4OJIqiXLpx48aopmlTHMeVcI+R7X740+n098ViURkZGdlbPZD8f0ayu+HfGErJWg4AyOVy07IsXwYWPpbncrnpehx1Bfj9/mc4jjsIALquD/X397d1dnZ+DaARAERR7AEAnuePllNSvb29TR0dHccoigoDQCwW2zMyMvJ+LQ6ilgMACoVCiqKopSaTqTEajb40PT09put6lGXZbYlE4mNJko7Pzs6OWSwWi81mC4miuFNV1Ziu6781NjZumZqa+ubKlStHcrlcphZH3QZTVTWmKIpYKBTkRCJxEgAkSeoDoGez2fMzMzNXZ2Zmrmaz2QsA9LIPyWTyZKFQkBVF+VVV1Vg9jv/87/gP2fZ5DF1CS4UAAAAASUVORK5CYII='
timer64 = 'iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAACXBIWXMAAAsSAAALEgHS3X78AAAJDUlEQVRYhbWWe2xT1x3Hv/fht6+feTiJm6TYCUnaLYUmJFmb0pWu0NKmYhtQxoaKCmKjRe1aVRVV/xh/dFPfj0mZNFUr3TSKIHQCOtYVSkehzCEkJORpJ8GJY8eO7Xvt2L7O9bV97/5Iy3iEdK3YT7rS0e/e8/t+zvmee84hcJOj/nu31zQ23LkxFAxaWC5WYC8rHQDgPXnq9Mcsx6Wu/Z66meLVTkfxbbU1O/oHBo8Mjbg/8IyNd9TW1g46nc5ilYJew3Kx/rm5OfFmal6OhoY7y3bt/OWftvx8s2qh9y++8PyD69c9+ti1+Zs2AzRFN1lMRu7SpK+nra3NVFuztH3z5s3y8RMn3ABQbLNFCFl+YGjEfeb/AsAw+mVT/oDIxWLee1pbf1dZWbHDarVuanv44erKysqp9/d+cMloND7lDwQ6ruxH3iwAAKlqp0N8+623msxm049NJhOCwWmc/OzEYw+uWf2Q1WKhrGbTzLWd6O+i1NzcTNlsNoYgCCkYDKZcLpfEMMxgZUXF1nSaf5Cm6dJ0mod7eBjfr7+j57U33txnLytd5qyqGsAnn343gBUrVuieeOKJlqmpqXV1dXXFhYWFhlwuJwUCgdnm5uaJlpbmI2Nu96X+vr4VdbffjlGPG/lcDhqt7o9yPjdV7XRs9YyNH7q2LvFNwi+//HLNpk2bfuL1el/geZ6RJAn5fB6iKCKTySCfz0MQBPA8D5VKFRi42FeaSiaIrCiivKIiqNNq3xgZGSnr6x94xTM2fp0FNwRoaWnB9u3b766pqWkXRbEmGo0q3G43RkaGQRIkjEYTQADpdBoAUFRUBJqmkckIYKNRtN5996sfffTRxe6enlEAg/7ANL+QzoIWNDc3EwcPHnxubGzsRY7jzF1dXfB4faioq8cjv9oNvbUIFEWDJAiQkJDmIvBccCE8OY5cLg/GYMSw27NBq2f+7Q9Mn1u+fLnh6NGPt3V1nXs2Fo+fevvtd54LBoPpG87Ae++9d7/D4TgkCIKho6MDKosNP3j0ZygvL4dBo4KSIiCkEpBlQM0wkGUgm81hOhDASOfn8I8OQxRF0DQ9abPZNhRYrVtEUdyq1Wi06TQf1OmZzY9v3fo5sMA+sGfPnhWNjY3vx+Pxko6DHVh61wO4b8PjsJs0QCaNnEKDQIRDmBeRysmIxpOQaQ1CAR90ahWqljWBYYwI+cbBp1KmSCT8kEatrpFlyTo40I+xMc9cU3OLd9++D88uCNDe3v5SIpH40cmTJwmF2YYf/nQLbEYtYpEIhse9CLGzyGQEMAYjFAoFkpEQ2JkAaJpGYVk5aJqCucgGiHOIBAPguJjB4x5h0nwqYbFYhpY3rHjqr/s+/JvH4xGvWwN79+6tmZiY2MGyLBHkEnhk+zYUqglEQ0F4QiwonRmEnEdBsQ0EAFKSYLulHEkuClKWQJEEKGLe2DJnLYRUEix7ApRCGdux86mWJ5/c6X/l9TfTV2petROGw+GHs9kscb6rC433rUFJUQF4ngcrypgYugiapmAtsgGShBQbQZINg5Ak6HU6lFXcCgoySFlCMsZBp2dQU78Mer0ekiRZ9u/fX9LTc+Eq8asA1q1bZ2hsbLw/l8shFo/DcUczrCYDxi55MdR9DnZHNb449Gec/fgg2MAkKBJgjAbMRkNQ0BQUJOBzD6LPdRpZgUdJaSnKKp24dckSGI1GHDt2bP1CC/6yBaIoWjKZjGVmZgaWIhsMJhNIALqSSlSZi8AYzSi7pQJ/efUluLvPYsuzL0GjVkNJkTCZzaBJAuVLHMhmSqHVaEAC0GjUsBYUQqVSIZFIFC0EQF4BYBRF0Tg7OwtjoQ1UXsR0cBoCn4Reb4BOq4W1sAjbdv8WZmshXvv1Npz/16cosFqh+Mp7vU4LlUKBcGAKQiqBdCIOlVoDmqahUCgW0v8vgCRJVDabpURRBK1UIptOYWygDzMTYxD5JCgCIAnAUlCAXzy9GzZ7Ob74+6HLeZokQBEEhHQKQZ8XoalJcJGZRcWvsoCiqKQkSUmappFJ82AshVh272qks/I1IvMQu1//w3yOIi/nSQKw2+2ovMUOigAokkBg3INMJgNBEBYHUCgUCVEUE2q1GlwwBDGbg0pBgyLkq8RJAlAQgNpguCr/9UNfAUsSgIKmkc/nIctyZlELWJYNC4LQTRAEUskEOL8XBGSwQR/YaR+EVAIUCShJYv5/J3HZ+/k2EGcjCAV8SHBRQMqDT8QxOuoBy7JobW39x6IALpdLDofDnyQSCej1elwavIBIYBKTwwOYGO5HPBKEgpgf1fxIv2qT821IEob6ejA+PIQ4x2JksB9cNAKWZeHz+fKrVq36bFELACAcDh93Op1fplKpuyaHL8K+pAqtq9eCJIAUF8WEZwhLnFVQKJUgya+mHTK4cAhSTkTrPfdCp9OAIoBYNILj//wEvb290tq1a9t37dp13V0AuOYscLlcMJlMPMMwD/B8SpWeZVFRVQutRouJ0WGEAz5YrQXQ63WQ81nQBAE5n0N351nkxQwMBgaMXoesIKD3Qg/OdXbC6/V68/n8bwYGBgLfCAAAarV6dOXKlfLk5OR9qUSCmOPCMJpMkHI53OpwoLi0FHPJWZw8dhjh6QBq6upQXV0NnVaLqYlL0Gk1GOzvx9GjR3D69Om59evX7zxz5sxxv9+/kP71ANPT0/lgMHhh5cqVt/n9/qUcGyWSbBgOhxOFJaXQqFRQ0hQyc2kweh3sdjtIAlAraOg0Gnx5+gucPfslTp06Ja5atar98OHDv+/s7JQXVMciV7L6+npm48aNT3d3d78gy7LeaDSiqqoKlY4qFJeUwlpgBUWSSM7OIjOXBhuNYGhoCL29vQiFQqG2trbnOzo69p8/fz53I41FAQCgoaFBuWfPng0HDhx4OhgMNuh0OhQXF8NgMMBisUCtVoPneYTDYfj9fvh8PixduvQIy7LtsVjsU5fLdcOR/08AX8czzzxDxmKxtmw2uyaXy92RyWQMgiAwkiTJSqVyVqVSxfR6vctkMh159913z3xzxW8J8HU0NTWRAOyJRMKQTCYZgiBko9E4azabY9lsNuRyub5NOQDAfwBU9w9d4+VBlQAAAABJRU5ErkJggg=='
close64 = 'iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAEQ0lEQVR42r2XW2wbRRSG/1177TgkdkyoS4shaaWogVIKRAXUVn4BgRBEIRBSkSK1lAakPhTxABJSK6BEtAoXCUHEWwWi4oEXUAVvRUASSBuJliAh5QJp6hrspoGQi69r73LO7Npu6kvsBGek0ezOrvf79szsmbG0D2iwAN8DaMQaFA0YHQFaLwCX6TQuHQAuNtjR2PawD05LZeFzKeC7b/txPoLxU8Aj1BVkAf1wqw/uejeU9RsASaqYQGp+Dv8EAvjgdD9OAg9S14gQOPKED1XNWyv7+lT0VArxiVH0fCUEOqjr3JoKcImN/pYW2EOnQyUJTESBJkdpgGkV8Cj/owDDdx59A8Mf92FT+GpR+KSlBrt6ehE6+hL0pLp6AYbvfusE5FontFgUZ989UVAiDU+X0OsvQ0/EVy4g4MeOQ3a6Mn38wKHet3MkrofzZJMsFlzpeRVaeLF8ASPsb8Javy7nDXRVxdA7x7FpIZQXnrlP0yDJMoKvHVpZBKq23Qv3M8/nzQt6PIah93qhRxaLwvPNhbLmgGP7Drg694mHlVqKwcsWEBItD8DVvleM6WrhRQXUwBSsnpthvclDY++BZLdnflS9YxecrZ2QFGVZePDIYcq5yWuGK47k39NIzlCdDkHxNuYXiJzrz/xIrr4BFpdbfAFyTS1CSi1uf7IDrqeeheyoLihxubsD2sI8UuEFaItUKfen5mahRcLZl7nft7xAvjIQs+GFP2cLCmjRCL5p3oDN6nzR56xIYDl4ORJlCwyqDnT7Z5aFL5G4w4vN8dnVCwymatA9daVkeCkSJQv8qDtxcDKYF86AwKEuSDYbvB+doq/DlnMPJ6uvmzfmSJQk0E9D+OLVcEG4f38bwgNnxLmz9Wl4+z6HZLXm3JuYHMfE7i0ri8Ck3Y3Hx4L0lvYl8Et7H0Xk7NJ7Xe1d8H74GX2/2YyZmv8XY3euo4SUXJkAFyvtEbdc+CsDn2r3Ifrrz3nHvW7Pftzy/kmxdhSCly2Qlmj66Xf88dB2qP6LRme+jauuo67rIDyvHMN4i1esmvlK6QIUTrEISbKxDnDlPkk2BK6VIDhXXaddP6Vk0H6A9wSUn0WKFn2lCgiYbDEmFVXJYjWOuU1LcHudgAASSLS0FnD4dV4TksYxNEOqsMDwgAAxELToSFZFfGaiVWzGNV6MWM4Uyc5OE8wQCr2AqwmxIuoJowX3k5CjZSd6vvxhqcBj921Fc2g8C2Mwzf5sax7zNZZjSdkcCg6/EEgacAYzlLZvRk1kW7rm39iELwZHsgLPATN311rqb7trG+65dT2FXTEg4o1NoDinZKOYQ8ICFo4ADwMJpEwBDrnKIU+YMqZQ0pAbC4QwODwCf0Rd/BQ4IATagM46oI+CeiNPPVS40EDF6M/pJ78Ap+n0PL8Cp7sGs9asgQSFDLxBmKJ6STKBVSbcZsa10gKcJHi/Hv0PWqbBbaFH/AEAAAAASUVORK5CYII='
def main():
def tbutton(image_data, key):
return sg.Button(image_data=image_data, button_color=('white', 'black'), pad=(0,0), key=key)
toolbar_buttons = [[tbutton(close64, '-CLOSE-'),
tbutton(timer64, '-TIMER-'),
tbutton(house64, '-HOUSE-'),
tbutton(cpu64, '-CPU-') ]]
# layout = toolbar_buttons
layout = [[sg.Col(toolbar_buttons, background_color='black')]]
window = sg.Window('Toolbar', layout, no_titlebar=True,
grab_anywhere=True, background_color='black', margins=(0, 0))
# ---===--- Loop taking in user input --- #
while True:
button, value = window.read()
print(button)
if button == '-CLOSE-' or button is None:
break # exit button clicked
elif button == '-TIMER-':
# add your call to launch a timer program
print('Timer Button')
elif button == '-CPU-':
# add your call to launch a CPU measuring utility
print('CPU Button')
elif button == '-HOUSE-':
print('Home Button')
window.close()
if __name__ == '__main__':
main()
| 208.807692 | 3,208 | 0.910481 | 433 | 10,858 | 22.785219 | 0.815242 | 0.002737 | 0.004054 | 0.002635 | 0.004054 | 0.004054 | 0 | 0 | 0 | 0 | 0 | 0.138283 | 0.042273 | 10,858 | 51 | 3,209 | 212.901961 | 0.810463 | 0.015933 | 0 | 0 | 0 | 0.125 | 0.899219 | 0.88341 | 0 | 1 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0.03125 | 0.15625 | 0.125 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b2258ea5d934f449177dc446ebc144260b1a63e7 | 54 | py | Python | tests/core/test_import.py | jmcph4/py-snappy | 1b254774f6c4daccba99114704cb9ecd589e6345 | [
"MIT"
] | null | null | null | tests/core/test_import.py | jmcph4/py-snappy | 1b254774f6c4daccba99114704cb9ecd589e6345 | [
"MIT"
] | null | null | null | tests/core/test_import.py | jmcph4/py-snappy | 1b254774f6c4daccba99114704cb9ecd589e6345 | [
"MIT"
] | null | null | null | def test_import():
import py_snappy # noqa: F401
| 18 | 34 | 0.685185 | 8 | 54 | 4.375 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0.222222 | 54 | 2 | 35 | 27 | 0.761905 | 0.185185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 1 | 0 | 1.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b248479baeb995fa954afc54dac6361c3ff3af5e | 702 | py | Python | tests/structures/test_set_comprehension.py | jacebrowning/voc | 7bc84e8a870674d300ad5083748cf6b826e7fb68 | [
"BSD-3-Clause"
] | 850 | 2015-08-17T16:45:22.000Z | 2019-03-24T07:50:15.000Z | tests/structures/test_set_comprehension.py | jacebrowning/voc | 7bc84e8a870674d300ad5083748cf6b826e7fb68 | [
"BSD-3-Clause"
] | 506 | 2015-09-26T18:20:00.000Z | 2019-03-19T18:16:18.000Z | tests/structures/test_set_comprehension.py | jacebrowning/voc | 7bc84e8a870674d300ad5083748cf6b826e7fb68 | [
"BSD-3-Clause"
] | 670 | 2015-09-12T21:57:44.000Z | 2019-03-19T13:15:33.000Z | from ..utils import TranspileTestCase
class SetComprehensionTests(TranspileTestCase):
def test_syntax(self):
self.assertCodeExecution("""
x = [1, 2, 3, 4, 5]
s = {v**2 for v in x}
print(len(s))
print(1 in s)
print(4 in s)
print(9 in s)
print(16 in s)
print(25 in s)
""")
def test_method(self):
self.assertCodeExecution("""
x = [1, 2, 3, 4, 5]
s = set(v**2 for v in x)
print(len(s))
print(1 in s)
print(4 in s)
print(9 in s)
print(16 in s)
print(25 in s)
""")
| 25.071429 | 47 | 0.433048 | 92 | 702 | 3.282609 | 0.304348 | 0.198676 | 0.211921 | 0.18543 | 0.655629 | 0.655629 | 0.655629 | 0.655629 | 0.655629 | 0.655629 | 0 | 0.067532 | 0.451567 | 702 | 27 | 48 | 26 | 0.716883 | 0 | 0 | 0.75 | 0 | 0 | 0.679487 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 1 | 0.083333 | false | 0 | 0.041667 | 0 | 0.166667 | 0.5 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
b24e02ec0148963d56df29dc7792f6de19d6d82d | 124 | py | Python | deep_image_compression/__init__.py | LichengXiao2017/deep-image-compression | cf6e5699bad4d7b4a0dd8db6da72aa0c56e3d1e4 | [
"MIT"
] | 9 | 2020-01-09T21:15:17.000Z | 2022-02-08T12:41:54.000Z | deep_image_compression/__init__.py | LichengXiao2017/deep-image-compression | cf6e5699bad4d7b4a0dd8db6da72aa0c56e3d1e4 | [
"MIT"
] | 8 | 2019-10-15T23:50:03.000Z | 2021-11-10T19:40:15.000Z | deep_image_compression/__init__.py | LichengXiao2017/enas-image-compression | cf6e5699bad4d7b4a0dd8db6da72aa0c56e3d1e4 | [
"MIT"
] | 3 | 2019-10-16T06:06:49.000Z | 2020-07-06T15:02:09.000Z | from deep_image_compression.single_psnr import SingleEvaluator
from deep_image_compression.batch_psnr import BatchEvaluator
| 41.333333 | 62 | 0.919355 | 16 | 124 | 6.75 | 0.625 | 0.148148 | 0.240741 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064516 | 124 | 2 | 63 | 62 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a26522302af02e9c03efb369c14c4be8629d1885 | 79 | py | Python | frontend/__init__.py | AzoeDesarrollos/PyMavisDatabase | bfcd0557f63a4d8a73f0f8e891c47b47a1de1b45 | [
"MIT"
] | null | null | null | frontend/__init__.py | AzoeDesarrollos/PyMavisDatabase | bfcd0557f63a4d8a73f0f8e891c47b47a1de1b45 | [
"MIT"
] | 2 | 2019-10-05T14:20:11.000Z | 2019-10-05T14:22:31.000Z | frontend/__init__.py | AzoeDesarrollos/PyMavisDatabase | bfcd0557f63a4d8a73f0f8e891c47b47a1de1b45 | [
"MIT"
] | null | null | null | from .globals import Renderer, WidgetHandler
from .globals.constantes import *
| 26.333333 | 44 | 0.822785 | 9 | 79 | 7.222222 | 0.666667 | 0.338462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113924 | 79 | 2 | 45 | 39.5 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0c4566fa37e78a7c2104632f5b27d1e411598246 | 7,844 | py | Python | models/norm_module.py | hassan-mahmood/Layout-Agnostic-Object-Alignment-and-Image-Generation | c526cb365b6fe383bb85423afcbc914e3e791790 | [
"Apache-2.0"
] | 1 | 2021-11-02T05:13:12.000Z | 2021-11-02T05:13:12.000Z | models/norm_module.py | hassan-mahmood/Layout-Agnostic-Object-Alignment-and-Image-Generation | c526cb365b6fe383bb85423afcbc914e3e791790 | [
"Apache-2.0"
] | 1 | 2021-12-17T14:29:18.000Z | 2021-12-17T14:29:18.000Z | baselines/arch/lostgans/norm_module.py | atmacvit/meronymnet | 47e1a7caadc0f770439bb26a93b885f790f62804 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
# Adaptive instance normalization
# modified from https://github.com/NVlabs/MUNIT/blob/d79d62d99b588ae341f9826799980ae7298da553/networks.py#L453-L482
class AdaptiveInstanceNorm2d(nn.Module):
def __init__(self, num_features, num_w=512, eps=1e-5, momentum=0.1):
super(AdaptiveInstanceNorm2d, self).__init__()
self.num_features = num_features
self.eps = eps
self.momentum = momentum
# just dummy buffers, not used
self.register_buffer('running_mean', torch.zeros(num_features))
self.register_buffer('running_var', torch.ones(num_features))
# projection layer
self.weight_proj = nn.Linear(num_w, num_features)
self.bias_proj = nn.Linear(num_w, num_features)
def forward(self, x, w):
b, c = x.size(0), x.size(1)
running_mean = self.running_mean.repeat(b)
running_var = self.running_var.repeat(b)
weight, bias = self.weight_proj(w).contiguous().view(-1) + 1, self.bias_proj(w).contiguous().view(-1)
# Apply instance norm
x_reshaped = x.contiguous().view(1, b * c, *x.size()[2:])
out = F.batch_norm(
x_reshaped, running_mean, running_var, weight, bias,
True, self.momentum, self.eps)
return out.view(b, c, *x.size()[2:])
def __repr__(self):
return self.__class__.__name__ + '(' + str(self.num_features) + ')'
class SpatialAdaptiveInstanceNorm2d(nn.Module):
def __init__(self, num_features, num_w=512, eps=1e-5, momentum=0.1):
super(SpatialAdaptiveInstanceNorm2d, self).__init__()
self.num_features = num_features
self.eps = eps
self.momentum = momentum
# just dummy buffers, not used
self.register_buffer('running_mean', torch.zeros(num_features))
self.register_buffer('running_var', torch.ones(num_features))
# projection layer
self.weight_proj = nn.Linear(num_w, num_features)
self.bias_proj = nn.Linear(num_w, num_features)
def forward(self, x, w, bbox):
b, c, h, w = x.size()
running_mean = self.running_mean.repeat(b)
running_var = self.running_var.repeat(b)
return x
class AdaptiveBatchNorm2d(nn.BatchNorm2d):
def __init__(self, num_features, num_w=512, eps=1e-5, momentum=0.1, affine=False, track_running_stats=True):
super(AdaptiveBatchNorm2d, self).__init__(
num_features, eps, momentum, affine, track_running_stats
)
# projection layer
self.weight_proj = nn.Linear(num_w, num_features)
self.bias_proj = nn.Linear(num_w, num_features)
def forward(self, x, w):
self._check_input_dim(x)
exponential_average_factor = 0.0
if self.training and self.track_running_stats:
self.num_batches_tracked += 1
if self.momentum is None: # use cumulative moving average
exponential_average_factor = 1.0 / self.num_batches_tracked.item()
else: # use exponential moving average
exponential_average_factor = self.momentum
output = F.batch_norm(x, self.running_mean, self.running_var,
self.weight, self.bias,
self.training or not self.track_running_stats,
exponential_average_factor, self.eps)
size = output.size()
weight, bias = self.weight_proj(w) + 1, self.bias_proj(w)
weight = weight.unsqueeze(-1).unsqueeze(-1).expand(size)
bias = bias.unsqueeze(-1).unsqueeze(-1).expand(size)
return weight * output + bias
def __repr__(self):
return self.__class__.__name__ + '(' + str(self.num_features) + ')'
class SpatialAdaptiveBatchNorm2d(nn.BatchNorm2d):
def __init__(self, num_features, num_w=512, eps=1e-5, momentum=0.1, affine=False,
track_running_stats=True):
super(SpatialAdaptiveBatchNorm2d, self).__init__(
num_features, eps, momentum, affine, track_running_stats
)
# projection layer
self.weight_proj = nn.Linear(num_w, num_features)
self.bias_proj = nn.Linear(num_w, num_features)
def forward(self, x, vector, bbox):
"""
:param x: input feature map (b, c, h, w)
:param vector: latent vector (b*o, dim_w)
:param bbox: bbox map (b, o, h, w)
:return:
"""
self._check_input_dim(x)
exponential_average_factor = 0.0
if self.training and self.track_running_stats:
self.num_batches_tracked += 1
if self.momentum is None: # use cumulative moving average
exponential_average_factor = 1.0 / self.num_batches_tracked.item()
else: # use exponential moving average
exponential_average_factor = self.momentum
output = F.batch_norm(x, self.running_mean, self.running_var,
self.weight, self.bias,
self.training or not self.track_running_stats,
exponential_average_factor, self.eps)
b, o, _, _ = bbox.size()
_, _, h, w = x.size()
bbox = F.interpolate(bbox, size=(h, w), mode='bilinear')
# calculate weight and bias
weight, bias = self.weight_proj(vector), self.bias_proj(vector)
weight, bias = weight.view(b, o, -1), bias.view(b, o, -1)
weight = torch.sum(bbox.unsqueeze(2) * weight.unsqueeze(-1).unsqueeze(-1), dim=1, keepdim=False) / \
(torch.sum(bbox.unsqueeze(2), dim=1, keepdim=False) + 1e-6) + 1
bias = torch.sum(bbox.unsqueeze(2) * bias.unsqueeze(-1).unsqueeze(-1), dim=1, keepdim=False) / \
(torch.sum(bbox.unsqueeze(2), dim=1, keepdim=False) + 1e-6)
return weight * output + bias
def __repr__(self):
return self.__class__.__name__ + '(' + str(self.num_features) + ')'
from .sync_batchnorm import SynchronizedBatchNorm2d
class SpatialAdaptiveSynBatchNorm2d(nn.Module):
def __init__(self, num_features, num_w=512, batchnorm_func=SynchronizedBatchNorm2d, eps=1e-5, momentum=0.1, affine=False,
track_running_stats=True):
super(SpatialAdaptiveSynBatchNorm2d, self).__init__()
# projection layer
self.num_features = num_features
self.weight_proj = nn.utils.spectral_norm(nn.Linear(num_w, num_features))
self.bias_proj = nn.utils.spectral_norm(nn.Linear(num_w, num_features))
self.batch_norm2d = batchnorm_func(num_features, eps=eps, momentum=momentum,
affine=affine)
def forward(self, x, vector, bbox):
"""
:param x: input feature map (b, c, h, w)
:param vector: latent vector (b*o, dim_w)
:param bbox: bbox map (b, o, h, w)
:return:
"""
# self._check_input_dim(x)
output = self.batch_norm2d(x)
b, o, bh, bw = bbox.size()
_, _, h, w = x.size()
if bh != h or bw != w:
bbox = F.interpolate(bbox, size=(h, w), mode='bilinear')
# calculate weight and bias
weight, bias = self.weight_proj(vector), self.bias_proj(vector)
weight, bias = weight.view(b, o, -1), bias.view(b, o, -1)
weight = torch.sum(bbox.unsqueeze(2) * weight.unsqueeze(-1).unsqueeze(-1), dim=1, keepdim=False) / \
(torch.sum(bbox.unsqueeze(2), dim=1, keepdim=False) + 1e-6) + 1
bias = torch.sum(bbox.unsqueeze(2) * bias.unsqueeze(-1).unsqueeze(-1), dim=1, keepdim=False) / \
(torch.sum(bbox.unsqueeze(2), dim=1, keepdim=False) + 1e-6)
return weight * output + bias
def __repr__(self):
return self.__class__.__name__ + '(' + str(self.num_features) + ')'
| 41.284211 | 125 | 0.621622 | 1,011 | 7,844 | 4.579624 | 0.129575 | 0.076026 | 0.038877 | 0.025918 | 0.815119 | 0.796976 | 0.760259 | 0.760259 | 0.760259 | 0.760259 | 0 | 0.025297 | 0.259179 | 7,844 | 189 | 126 | 41.502646 | 0.771468 | 0.097017 | 0 | 0.672 | 0 | 0 | 0.010042 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.112 | false | 0 | 0.032 | 0.032 | 0.256 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a77d52eea372d1cbcf810c4f1e88cab29b5063d1 | 138 | py | Python | pyaws/AWSLambda/__init__.py | mwozniczak/pyaws | af8f6d64ff47fd2ef2eb9fef25680e4656523fa3 | [
"MIT"
] | null | null | null | pyaws/AWSLambda/__init__.py | mwozniczak/pyaws | af8f6d64ff47fd2ef2eb9fef25680e4656523fa3 | [
"MIT"
] | null | null | null | pyaws/AWSLambda/__init__.py | mwozniczak/pyaws | af8f6d64ff47fd2ef2eb9fef25680e4656523fa3 | [
"MIT"
] | null | null | null | """
Functional Utilities for AWS Lambda
"""
from pyaws.AWSLambda.lambda_utils import *
from pyaws.AWSLambda.env import read_env_variable
| 19.714286 | 49 | 0.804348 | 19 | 138 | 5.684211 | 0.684211 | 0.166667 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115942 | 138 | 6 | 50 | 23 | 0.885246 | 0.253623 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a78d1e20aa9e77cde2319c9822147f8c21a9c3a6 | 102 | py | Python | pyxwb2/models/exceptions.py | minsis/pyxwb2 | e3f9c898a5669b47bb5b8ab344fdcb37fc98d7f0 | [
"MIT"
] | null | null | null | pyxwb2/models/exceptions.py | minsis/pyxwb2 | e3f9c898a5669b47bb5b8ab344fdcb37fc98d7f0 | [
"MIT"
] | null | null | null | pyxwb2/models/exceptions.py | minsis/pyxwb2 | e3f9c898a5669b47bb5b8ab344fdcb37fc98d7f0 | [
"MIT"
] | null | null | null | class PilotsMissingException(Exception):
pass
class FactionMissingException(Exception):
pass | 17 | 41 | 0.794118 | 8 | 102 | 10.125 | 0.625 | 0.320988 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 102 | 6 | 42 | 17 | 0.931034 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
a7a02cef8d1a6a603b33213f6d3cd29588ccb9e6 | 20 | py | Python | TailScout/tailscout_app/models/__init__.py | MihirSachdeva/IIT_Roorkee_India | 3916e6a3f33596a6eae0ae6c1e38b70645196a49 | [
"MIT"
] | 2 | 2020-10-28T08:11:40.000Z | 2020-12-07T14:29:12.000Z | TailScout/tailscout_app/models/__init__.py | MihirSachdeva/IIT_Roorkee_India | 3916e6a3f33596a6eae0ae6c1e38b70645196a49 | [
"MIT"
] | null | null | null | TailScout/tailscout_app/models/__init__.py | MihirSachdeva/IIT_Roorkee_India | 3916e6a3f33596a6eae0ae6c1e38b70645196a49 | [
"MIT"
] | 1 | 2020-10-23T22:29:49.000Z | 2020-10-23T22:29:49.000Z | from .job import Job | 20 | 20 | 0.8 | 4 | 20 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ac19e8c149b818b8fa44bd83af7e40e40a680b7a | 26 | py | Python | packages/pytea/pylib/torch/utils/__init__.py | lego0901/pytea | 8ede650def2e68f4610ba816451d8b9e28f09f76 | [
"MIT"
] | 241 | 2021-03-19T01:11:44.000Z | 2022-03-25T03:15:22.000Z | packages/pytea/pylib/torch/utils/__init__.py | lego0901/pytea | 8ede650def2e68f4610ba816451d8b9e28f09f76 | [
"MIT"
] | 2 | 2021-02-26T08:16:04.000Z | 2022-02-28T02:52:58.000Z | packages/pytea/pylib/torch/utils/__init__.py | lego0901/pytea | 8ede650def2e68f4610ba816451d8b9e28f09f76 | [
"MIT"
] | 14 | 2021-01-08T02:22:58.000Z | 2022-01-19T14:13:14.000Z | from . import data as data | 26 | 26 | 0.769231 | 5 | 26 | 4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192308 | 26 | 1 | 26 | 26 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ac3b7eaded551b4a4b25c2f15b8752a231eba205 | 3,936 | py | Python | character/migrations/0002_auto_20200920_2245.py | SamusChief/myth-caster-api | 76a43f48b70c6a4b509c90757d7906689799cc25 | [
"MIT"
] | null | null | null | character/migrations/0002_auto_20200920_2245.py | SamusChief/myth-caster-api | 76a43f48b70c6a4b509c90757d7906689799cc25 | [
"MIT"
] | null | null | null | character/migrations/0002_auto_20200920_2245.py | SamusChief/myth-caster-api | 76a43f48b70c6a4b509c90757d7906689799cc25 | [
"MIT"
] | 1 | 2021-08-14T18:46:52.000Z | 2021-08-14T18:46:52.000Z | # Generated by Django 3.1.1 on 2020-09-20 22:45
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('character', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='ancestry',
name='authorized_editors',
field=models.ManyToManyField(blank=True, related_name='_ancestry_authorized_editors_+', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='background',
name='authorized_editors',
field=models.ManyToManyField(blank=True, related_name='_background_authorized_editors_+', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='character',
name='authorized_editors',
field=models.ManyToManyField(blank=True, related_name='_character_authorized_editors_+', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='characterclass',
name='authorized_editors',
field=models.ManyToManyField(blank=True, related_name='_characterclass_authorized_editors_+', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='classandlevel',
name='authorized_editors',
field=models.ManyToManyField(blank=True, related_name='_classandlevel_authorized_editors_+', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='feature',
name='authorized_editors',
field=models.ManyToManyField(blank=True, related_name='_feature_authorized_editors_+', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='featuresatlevel',
name='authorized_editors',
field=models.ManyToManyField(blank=True, related_name='_featuresatlevel_authorized_editors_+', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='inventoryadventuringgear',
name='authorized_editors',
field=models.ManyToManyField(blank=True, related_name='_inventoryadventuringgear_authorized_editors_+', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='inventoryarmor',
name='authorized_editors',
field=models.ManyToManyField(blank=True, related_name='_inventoryarmor_authorized_editors_+', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='inventorytool',
name='authorized_editors',
field=models.ManyToManyField(blank=True, related_name='_inventorytool_authorized_editors_+', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='inventoryweapon',
name='authorized_editors',
field=models.ManyToManyField(blank=True, related_name='_inventoryweapon_authorized_editors_+', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='inventorywondrousitem',
name='authorized_editors',
field=models.ManyToManyField(blank=True, related_name='_inventorywondrousitem_authorized_editors_+', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='skillproficiency',
name='authorized_editors',
field=models.ManyToManyField(blank=True, related_name='_skillproficiency_authorized_editors_+', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='spellslotsatlevel',
name='authorized_editors',
field=models.ManyToManyField(blank=True, related_name='_spellslotsatlevel_authorized_editors_+', to=settings.AUTH_USER_MODEL),
),
]
| 45.767442 | 145 | 0.67251 | 361 | 3,936 | 6.972299 | 0.146814 | 0.189114 | 0.095352 | 0.125149 | 0.744934 | 0.744934 | 0.744934 | 0.729043 | 0.729043 | 0.729043 | 0 | 0.006244 | 0.22688 | 3,936 | 85 | 146 | 46.305882 | 0.8209 | 0.011433 | 0 | 0.531646 | 1 | 0 | 0.250193 | 0.141167 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025316 | 0 | 0.063291 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ac73383a1c4126e4d6b278860b91753dbc8f3340 | 109 | py | Python | relbert/prompt/__init__.py | asahi417/relbert | cb718e40fb452e88ccae1c271ccdea25013791b1 | [
"MIT"
] | 17 | 2021-09-10T14:49:41.000Z | 2022-01-26T13:18:02.000Z | relbert/prompt/__init__.py | asahi417/relbert | cb718e40fb452e88ccae1c271ccdea25013791b1 | [
"MIT"
] | 2 | 2021-11-14T07:47:36.000Z | 2021-11-22T17:34:06.000Z | relbert/prompt/__init__.py | asahi417/relbert | cb718e40fb452e88ccae1c271ccdea25013791b1 | [
"MIT"
] | 1 | 2021-12-14T01:35:05.000Z | 2021-12-14T01:35:05.000Z | from .discrete_tuning import GradientTriggerSearch
from .continuous_tuning import ContinuousTriggerEmbedding
| 36.333333 | 57 | 0.908257 | 10 | 109 | 9.7 | 0.7 | 0.247423 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073395 | 109 | 2 | 58 | 54.5 | 0.960396 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ac862584b336e085680f1e3395d7b578190454ca | 164 | py | Python | open_alchemy/schemas/helpers/__init__.py | rgreinho/OpenAlchemy | 23202bdecb94763d09b6d9e84eb9b29506c811ae | [
"Apache-2.0"
] | null | null | null | open_alchemy/schemas/helpers/__init__.py | rgreinho/OpenAlchemy | 23202bdecb94763d09b6d9e84eb9b29506c811ae | [
"Apache-2.0"
] | 53 | 2020-12-30T15:32:55.000Z | 2022-03-31T10:07:00.000Z | open_alchemy/schemas/helpers/__init__.py | rgreinho/OpenAlchemy | 23202bdecb94763d09b6d9e84eb9b29506c811ae | [
"Apache-2.0"
] | null | null | null | """Helper functions for processing the schemas."""
from . import association
from . import backref
from . import clean
from . import iterate
from . import process
| 20.5 | 50 | 0.762195 | 21 | 164 | 5.952381 | 0.619048 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164634 | 164 | 7 | 51 | 23.428571 | 0.912409 | 0.268293 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3bbd1ab196e422a5e1723896f596e8f23e9e2a99 | 233 | py | Python | range-function/range.py | anmolpal1999/python-for-beginners | 738d73006cf21206cd10ea89d9796669fc141df3 | [
"MIT"
] | null | null | null | range-function/range.py | anmolpal1999/python-for-beginners | 738d73006cf21206cd10ea89d9796669fc141df3 | [
"MIT"
] | null | null | null | range-function/range.py | anmolpal1999/python-for-beginners | 738d73006cf21206cd10ea89d9796669fc141df3 | [
"MIT"
] | null | null | null | print('-------------------------------------------------------------------------')
for i in range(1,10):
print(i)
print('thank you')
print('-------------------------------------------------------------------------')
| 38.833333 | 83 | 0.184549 | 13 | 233 | 3.307692 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014493 | 0.111588 | 233 | 5 | 84 | 46.6 | 0.193237 | 0 | 0 | 0.4 | 0 | 0 | 0.679825 | 0.640351 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.8 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
ce09f6f5a72d97ff2e70c11ce694e0a8d62d266b | 177 | py | Python | Python/packages/databricks-test/tests/widgets_test.py | anandmrya/DataOps | 1a671c707e27b30030687a2a88e5fa94374ce780 | [
"MIT"
] | 42 | 2019-12-04T04:10:53.000Z | 2022-03-31T13:04:17.000Z | Python/packages/databricks-test/tests/widgets_test.py | anandmrya/DataOps | 1a671c707e27b30030687a2a88e5fa94374ce780 | [
"MIT"
] | 2 | 2020-02-25T11:24:34.000Z | 2020-03-05T06:12:59.000Z | Python/packages/databricks-test/tests/widgets_test.py | anandmrya/DataOps | 1a671c707e27b30030687a2a88e5fa94374ce780 | [
"MIT"
] | 18 | 2020-01-25T06:25:08.000Z | 2021-11-16T08:40:09.000Z | import databricks_test
def test_widgets():
with databricks_test.session() as dbrickstest:
# Run notebook
dbrickstest.run_notebook(".", "widgets_notebook")
| 22.125 | 57 | 0.706215 | 19 | 177 | 6.315789 | 0.578947 | 0.233333 | 0.366667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19774 | 177 | 7 | 58 | 25.285714 | 0.84507 | 0.067797 | 0 | 0 | 0 | 0 | 0.104294 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ce2937a1f28ddc9e0f27653d1d22065e6e2408a4 | 4,570 | py | Python | test/unitTests/nodeTests/testFloatValueNode.py | pieter-hendriks/STL-monitoring | 114b73b1f4b0687b11b8842b3c4a1c8af7b0d9df | [
"MIT"
] | null | null | null | test/unitTests/nodeTests/testFloatValueNode.py | pieter-hendriks/STL-monitoring | 114b73b1f4b0687b11b8842b3c4a1c8af7b0d9df | [
"MIT"
] | null | null | null | test/unitTests/nodeTests/testFloatValueNode.py | pieter-hendriks/STL-monitoring | 114b73b1f4b0687b11b8842b3c4a1c8af7b0d9df | [
"MIT"
] | null | null | null | import unittest
from stl.tree import FloatValueNode
from stl.signals import Signal, SignalList, BooleanSignal
from typing import Iterable
class FloatValueNodeTest(unittest.TestCase):
def setUp(self):
pass
def testNegativeZeroValue(self):
# Str because that's the data type the node gets
values: Iterable[str] = ['-', '0']
# Create the node and read tokens
node: FloatValueNode = FloatValueNode()
for value in values:
node.processToken(value)
# Create expected result and compare the two
expectedSignal: Signal = Signal("ValueNodeSignal", [0, float('inf')], [0, 0], [0, 0])
self.assertEqual(node.quantitativeValidate(None, None), expectedSignal)
self.assertEqual(node.booleanValidate(None, None), expectedSignal)
def testFractionalZeroValue(self):
# Str because that's the data type the node gets
values: Iterable[str] = ['0', '.', '0']
# Create the node and read tokens
node: FloatValueNode = FloatValueNode()
for value in values:
node.processToken(value)
# Create expected result and compare the two
expectedSignal: Signal = Signal("ValueNodeSignal", [0, float('inf')], [0, 0], [0, 0])
self.assertEqual(node.quantitativeValidate(None, None), expectedSignal)
self.assertEqual(node.booleanValidate(None, None), expectedSignal)
def testNegativeFractionalZeroValue(self):
# Str because that's the data type the node gets
values: Iterable[str] = ['-', '0', '.', '0']
# Create the node and read tokens
node: FloatValueNode = FloatValueNode()
for value in values:
node.processToken(value)
# Create expected result and compare the two
expectedSignal: Signal = Signal("ValueNodeSignal", [0, float('inf')], [0, 0], [0, 0])
self.assertEqual(node.quantitativeValidate(None, None), expectedSignal)
self.assertEqual(node.booleanValidate(None, None), expectedSignal)
def testZeroValue(self):
# Str because that's the data type the node gets
values: Iterable[str] = ['0']
# Create the node and read tokens
node: FloatValueNode = FloatValueNode()
for value in values:
node.processToken(value)
# Create expected result and compare the two
expectedSignal: Signal = Signal("ValueNodeSignal", [0, float('inf')], [0, 0], [0, 0])
self.assertEqual(node.quantitativeValidate(None, None), expectedSignal)
self.assertEqual(node.booleanValidate(None, None), expectedSignal)
def testWholePositiveValue(self):
# Str because that's the data type the node gets
values: Iterable[str] = ['123']
# Create the node and read tokens
node: FloatValueNode = FloatValueNode()
for value in values:
node.processToken(value)
# Create expected result and compare the two
expectedSignal: Signal = Signal("ValueNodeSignal", [0, float('inf')], [123, 123], [0, 0])
self.assertEqual(node.quantitativeValidate(None, None), expectedSignal)
self.assertEqual(node.booleanValidate(None, None), expectedSignal)
def testFractionalPositiveValue(self):
# Str because that's the data type the node gets
values: str = ['123', '.', '456']
# Create the node and read tokens
node: FloatValueNode = FloatValueNode()
for value in values:
node.processToken(value)
# Create expected result and compare the two
expectedSignal: Signal = Signal("ValueNodeSignal", [0, float('inf')], [123.456, 123.456], [0, 0])
self.assertEqual(node.quantitativeValidate(None, None), expectedSignal)
self.assertEqual(node.booleanValidate(None, None), expectedSignal)
def testWholeNegativeValue(self):
# Str because that's the data type the node gets
values: Iterable[str] = ['-', '123']
# Create the node and read token
node: FloatValueNode = FloatValueNode()
for value in values:
node.processToken(value)
# Create expected result and compare the two
expectedSignal: Signal = Signal("ValueNodeSignal", [0, float('inf')], [-123, -123], [0, 0])
self.assertEqual(node.quantitativeValidate(None, None), expectedSignal)
self.assertEqual(node.booleanValidate(None, None), expectedSignal)
def testFractionalNegativeValue(self):
# Str because that's the data type the node gets
values: Iterable[str] = ['-', '123', '.', '456']
# Create the node and read token
node: FloatValueNode = FloatValueNode()
for value in values:
node.processToken(value)
# Create expected result and compare the two
expectedSignal: Signal = Signal("ValueNodeSignal", [0, float('inf')], [-123.456, -123.456], [0, 0])
self.assertEqual(node.quantitativeValidate(None, None), expectedSignal)
self.assertEqual(node.booleanValidate(None, None), expectedSignal)
if __name__ == "__main__":
unittest.main() | 41.545455 | 101 | 0.729759 | 558 | 4,570 | 5.962366 | 0.114695 | 0.010821 | 0.091374 | 0.043282 | 0.884581 | 0.884581 | 0.884581 | 0.884581 | 0.880974 | 0.880974 | 0 | 0.023638 | 0.148359 | 4,570 | 110 | 102 | 41.545455 | 0.831192 | 0.21291 | 0 | 0.60274 | 0 | 0 | 0.051497 | 0 | 0 | 0 | 0 | 0 | 0.219178 | 1 | 0.123288 | false | 0.013699 | 0.054795 | 0 | 0.191781 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
020fa9ab5c5a52035dc50e6bc24f34f3cc0a8b7d | 27 | py | Python | gQiwiAPI/__init__.py | gnifajio/gQiwiAPI | bae74bf11c070410383146674a154c0ffd7b6f8c | [
"MIT"
] | null | null | null | gQiwiAPI/__init__.py | gnifajio/gQiwiAPI | bae74bf11c070410383146674a154c0ffd7b6f8c | [
"MIT"
] | null | null | null | gQiwiAPI/__init__.py | gnifajio/gQiwiAPI | bae74bf11c070410383146674a154c0ffd7b6f8c | [
"MIT"
] | null | null | null | from .API import Qiwi, Bill | 27 | 27 | 0.777778 | 5 | 27 | 4.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
022bf55dd2c1c11291d3282e22f643abfc8c38e5 | 4,845 | py | Python | tests/validation/parameter/test_enum_validation.py | maroux/flex | dfd7c6d79d065d7ce1b0c799e51e9bb5292612b2 | [
"MIT"
] | 160 | 2015-01-15T05:36:44.000Z | 2021-08-04T00:43:54.000Z | tests/validation/parameter/test_enum_validation.py | maroux/flex | dfd7c6d79d065d7ce1b0c799e51e9bb5292612b2 | [
"MIT"
] | 151 | 2015-01-20T16:45:36.000Z | 2022-02-23T21:07:58.000Z | tests/validation/parameter/test_enum_validation.py | maroux/flex | dfd7c6d79d065d7ce1b0c799e51e9bb5292612b2 | [
"MIT"
] | 90 | 2015-01-20T11:19:36.000Z | 2021-08-03T08:58:18.000Z | import pytest
import os
from flex.exceptions import ValidationError
from flex.loading.schema.paths.path_item.operation.parameters import (
parameters_validator,
)
from flex.validation.parameter import (
validate_parameters,
)
from flex.constants import (
PATH,
STRING,
NUMBER,
BOOLEAN,
FLEX_DISABLE_X_NULLABLE
)
from flex.error_messages import MESSAGES
from tests.utils import assert_message_in_errors
#
# enum validation tests
#
@pytest.mark.parametrize(
'enum,value',
(
([True, False], 0),
([True, False], 1),
([True, False], ''),
([True, False], None),
([0, 1, 2, 3], True),
([0, 1, 2, 3], False),
([0, 1, 2, 3], '1'),
([0, 1, 2, 3], 4),
([0, 1, 2, 3], 1.0),
([0, 1, 2, 3], None),
(['1', '2', 'a', 'b'], 'A'),
(['1', '2', 'a', 'b'], 1),
(['1', '2', 'a', 'b'], 2),
(['1', '2', 'a', 'b'], None),
),
)
def test_enum_validation_with_invalid_values(enum, value):
parameters = parameters_validator([
{
'name': 'id',
'in': PATH,
'description': 'id',
'type': [STRING, NUMBER, BOOLEAN],
'required': True,
'enum': enum,
},
])
parameter_values = {
'id': value,
}
with pytest.raises(ValidationError) as err:
validate_parameters(parameter_values, parameters, {})
assert_message_in_errors(
MESSAGES['enum']['invalid'],
err.value.detail,
'id.enum',
)
@pytest.mark.parametrize(
'enum,value',
(
([True, False], True),
([True, False], False),
([0, 1, 2, 3], 3),
([0, 1, 2, 3], 1),
(['1', '2', 'a', 'b'], 'a'),
(['1', '2', 'a', 'b'], '1'),
),
)
def test_enum_validation_with_allowed_values(enum, value):
parameters = parameters_validator([
{
'name': 'id',
'in': PATH,
'description': 'id',
'type': [STRING, NUMBER, BOOLEAN],
'required': True,
'enum': enum,
},
])
parameter_values = {
'id': value,
}
validate_parameters(parameter_values, parameters, {})
@pytest.mark.parametrize(
'enum,value',
(
([True, False], True),
([True, False], None),
([0, 1, 2, 3], 1),
([0, 1, 2, 3], None),
(['1', '2', 'a', 'b'], 'a'),
(['1', '2', 'a', 'b'], None),
),
)
def test_nullable_enum_validation_with_allowed_values(enum, value):
parameters = parameters_validator([
{
'name': 'id',
'in': PATH,
'description': 'id',
'type': [STRING, NUMBER, BOOLEAN],
'required': True,
'enum': enum,
'x-nullable': True
},
])
parameter_values = {
'id': value,
}
validate_parameters(parameter_values, parameters, {})
@pytest.mark.parametrize(
'enum,value',
(
([True, False], None),
([0, 1, 2, 3], None),
(['1', '2', 'a', 'b'], None),
),
)
def test_nullable_enum_with_null_values_strict(enum, value, monkeypatch):
parameters = parameters_validator([
{
'name': 'id',
'in': PATH,
'description': 'id',
'type': [STRING, NUMBER, BOOLEAN],
'required': True,
'enum': enum,
'x-nullable': True
},
])
parameter_values = {
'id': value,
}
monkeypatch.setattr(os, 'environ', {FLEX_DISABLE_X_NULLABLE: '1'})
with pytest.raises(ValidationError) as err:
validate_parameters(parameter_values, parameters, {})
assert_message_in_errors(
MESSAGES['enum']['invalid'],
err.value.detail,
'id.enum',
)
@pytest.mark.parametrize(
'enum,value',
(
([True, False], 0),
([True, False], 1),
([True, False], ''),
([0, 1, 2, 3], True),
([0, 1, 2, 3], False),
([0, 1, 2, 3], '1'),
([0, 1, 2, 3], 4),
([0, 1, 2, 3], 1.0),
(['1', '2', 'a', 'b'], 'A'),
(['1', '2', 'a', 'b'], 1),
(['1', '2', 'a', 'b'], 2),
),
)
def test_nullable_enum_with_invalid_values(enum, value):
parameters = parameters_validator([
{
'name': 'id',
'in': PATH,
'description': 'id',
'type': [STRING, NUMBER, BOOLEAN],
'required': True,
'enum': enum,
'x-nullable': True
},
])
parameter_values = {
'id': value,
}
with pytest.raises(ValidationError) as err:
validate_parameters(parameter_values, parameters, {})
assert_message_in_errors(
MESSAGES['enum']['invalid'],
err.value.detail,
'id.enum',
)
| 23.519417 | 73 | 0.478225 | 508 | 4,845 | 4.425197 | 0.13189 | 0.024911 | 0.022687 | 0.02847 | 0.80694 | 0.781139 | 0.781139 | 0.765125 | 0.765125 | 0.738879 | 0 | 0.033725 | 0.332921 | 4,845 | 205 | 74 | 23.634146 | 0.661819 | 0.004334 | 0 | 0.694444 | 0 | 0 | 0.081328 | 0 | 0 | 0 | 0 | 0 | 0.022222 | 1 | 0.027778 | false | 0 | 0.044444 | 0 | 0.072222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
65fe5a9b5f506d5fdb0dc3a647a62624dcb16a0a | 42 | py | Python | torchcv/engine/__init__.py | RJaikanth/torch-cv | 8102aaae840b674389f09a01c5c45df559cb3819 | [
"MIT"
] | 1 | 2020-10-10T11:40:43.000Z | 2020-10-10T11:40:43.000Z | torchcv/engine/__init__.py | RJaikanth/torch-cv | 8102aaae840b674389f09a01c5c45df559cb3819 | [
"MIT"
] | null | null | null | torchcv/engine/__init__.py | RJaikanth/torch-cv | 8102aaae840b674389f09a01c5c45df559cb3819 | [
"MIT"
] | null | null | null | from .preprocess import PREPROCESS_ENGINE
| 21 | 41 | 0.880952 | 5 | 42 | 7.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5a2263d0dbdf57a703525d78da00210c9495a79f | 167 | py | Python | pybamview/tests/__init__.py | mgymrek/pybamview | 719c4251769510260d29287074845650b399a3d0 | [
"MIT"
] | 37 | 2015-01-26T01:06:57.000Z | 2021-04-16T05:48:39.000Z | pybamview/tests/__init__.py | gymreklab/pybamview | 719c4251769510260d29287074845650b399a3d0 | [
"MIT"
] | 10 | 2015-01-10T12:22:27.000Z | 2018-11-17T09:13:07.000Z | pybamview/tests/__init__.py | gymreklab/pybamview | 719c4251769510260d29287074845650b399a3d0 | [
"MIT"
] | 11 | 2015-01-21T12:58:14.000Z | 2021-06-29T10:42:32.000Z | from os.path import dirname, join
from pybamview.tests import __file__ as test_directory
def test_data(path):
return join(dirname(test_directory), 'data', path)
| 23.857143 | 54 | 0.778443 | 25 | 167 | 4.92 | 0.6 | 0.211382 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137725 | 167 | 6 | 55 | 27.833333 | 0.854167 | 0 | 0 | 0 | 0 | 0 | 0.023952 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
5a6f2056accdd38a4839205c7ad9ea2cd73bbc6e | 133 | py | Python | src/cysofa/cypyx/test.py | MattTurnock/cysofa | 15e95288937b765df561a65e24faf780f9e59bd4 | [
"MIT"
] | null | null | null | src/cysofa/cypyx/test.py | MattTurnock/cysofa | 15e95288937b765df561a65e24faf780f9e59bd4 | [
"MIT"
] | null | null | null | src/cysofa/cypyx/test.py | MattTurnock/cysofa | 15e95288937b765df561a65e24faf780f9e59bd4 | [
"MIT"
] | 1 | 2018-12-08T21:10:06.000Z | 2018-12-08T21:10:06.000Z | import os
import sys
#print(os.path.abspath('../../..'))
def tester():
"""
here are some things but idk
"""
return 0 | 14.777778 | 35 | 0.556391 | 18 | 133 | 4.111111 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01 | 0.24812 | 133 | 9 | 36 | 14.777778 | 0.73 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5a8b45683943b6267de692c63187c3fb09c21276 | 288 | py | Python | rfdesigner/components/filter/__init__.py | fronzbot/python-rfdesigner | 3e78722f030efc327a68945b6a09d7cdbf42734d | [
"Apache-2.0"
] | 1 | 2022-01-28T17:50:08.000Z | 2022-01-28T17:50:08.000Z | rfdesigner/components/filter/__init__.py | fronzbot/python-rfdesigner | 3e78722f030efc327a68945b6a09d7cdbf42734d | [
"Apache-2.0"
] | 3 | 2020-06-02T17:23:12.000Z | 2020-06-02T22:29:04.000Z | rfdesigner/components/filter/__init__.py | fronzbot/python-rfdesigner | 3e78722f030efc327a68945b6a09d7cdbf42734d | [
"Apache-2.0"
] | null | null | null | """Initialize the filter classes."""
from rfdesigner.components import Passive
class LPF(Passive):
"""Representation of a low pass filter."""
class HPF(Passive):
"""Representation of a high pass filter."""
class BPF(Passive):
"""Representation of a band pass filter."""
| 19.2 | 47 | 0.694444 | 36 | 288 | 5.555556 | 0.555556 | 0.315 | 0.345 | 0.36 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173611 | 288 | 14 | 48 | 20.571429 | 0.840336 | 0.496528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 0.25 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
ced9a71142a99213ff9a0ccf1238f835fc74bd6f | 102 | py | Python | routes/api.py | RafaelGSS/pylack | a91a7b76102b60522176e47647744e8fb2421e61 | [
"MIT"
] | 2 | 2018-05-14T22:55:43.000Z | 2018-05-16T12:57:52.000Z | routes/api.py | RafaelGSS/HappyAnalytics | a91a7b76102b60522176e47647744e8fb2421e61 | [
"MIT"
] | null | null | null | routes/api.py | RafaelGSS/HappyAnalytics | a91a7b76102b60522176e47647744e8fb2421e61 | [
"MIT"
] | null | null | null | from bootstrap.main_app import app
@app.route('/api/v1')
def api_index():
return 'Hello worlds!' | 17 | 34 | 0.705882 | 16 | 102 | 4.375 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011494 | 0.147059 | 102 | 6 | 35 | 17 | 0.793103 | 0 | 0 | 0 | 0 | 0 | 0.194175 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
0c68789c65062d0defab5324e61177fe631829fe | 170 | py | Python | src/procmedia/help.py | miki164/procmedia | ebe2ed1a886c4cbe83bdf5e73f26386a602e4c0b | [
"MIT"
] | null | null | null | src/procmedia/help.py | miki164/procmedia | ebe2ed1a886c4cbe83bdf5e73f26386a602e4c0b | [
"MIT"
] | null | null | null | src/procmedia/help.py | miki164/procmedia | ebe2ed1a886c4cbe83bdf5e73f26386a602e4c0b | [
"MIT"
] | null | null | null | def show_help():
print("Procmedia help:")
print("-detect path_to_media path_to_haarcascade Optional: output_name")
print("Applies haarcascade to image/video") | 42.5 | 76 | 0.747059 | 23 | 170 | 5.26087 | 0.695652 | 0.14876 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141176 | 170 | 4 | 77 | 42.5 | 0.828767 | 0 | 0 | 0 | 0 | 0 | 0.654971 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0 | 0 | 0.25 | 0.75 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
0c85f4634b88b7d12c9607fc258c8b78f29eaa3d | 39 | py | Python | tests/__init__.py | fullatron/generatER | df0d3b0d5cb34481cf358955116808ef170fd7e3 | [
"MIT"
] | 1 | 2021-03-24T13:22:23.000Z | 2021-03-24T13:22:23.000Z | tests/__init__.py | fullatron/generatER | df0d3b0d5cb34481cf358955116808ef170fd7e3 | [
"MIT"
] | null | null | null | tests/__init__.py | fullatron/generatER | df0d3b0d5cb34481cf358955116808ef170fd7e3 | [
"MIT"
] | 1 | 2021-03-24T13:22:30.000Z | 2021-03-24T13:22:30.000Z | """Unit test package for generater."""
| 19.5 | 38 | 0.692308 | 5 | 39 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 39 | 1 | 39 | 39 | 0.794118 | 0.820513 | 0 | null | 1 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0cc4f8b9fbcfcd3adad4c8c9c7c5c5743847ec1c | 5,585 | py | Python | integration/structure-local/test.py | xbabka01/retdec-regression-tests | 1ac40cca5165740364e6f7fb72b20820eac9bc7c | [
"MIT"
] | 8 | 2017-12-14T14:25:17.000Z | 2019-03-09T03:29:12.000Z | integration/structure-local/test.py | xbabka01/retdec-regression-tests | 1ac40cca5165740364e6f7fb72b20820eac9bc7c | [
"MIT"
] | 10 | 2019-06-14T09:12:55.000Z | 2021-10-01T12:15:43.000Z | integration/structure-local/test.py | xbabka01/retdec-regression-tests | 1ac40cca5165740364e6f7fb72b20820eac9bc7c | [
"MIT"
] | 8 | 2019-05-10T14:59:48.000Z | 2022-03-07T16:34:23.000Z | from regression_tests import *
class TestBase(Test):
def test_check_function_fnc_basic_print(self):
assert self.out_c.has_func('fnc_basic_print')
assert self.out_c.funcs['fnc_basic_print'].return_type.is_int(32)
assert self.out_c.funcs['fnc_basic_print'].param_count == 1
#ssert self.out_c.funcs['fnc_basic_print'].params[0].type.is_pointer()
#assert self.out_c.funcs['fnc_basic_print'].params[0].type.point_type.is_struct()
assert self.out_c.funcs['fnc_basic_print'].calls('printf')
def test_check_function_fnc_basic(self):
assert self.out_c.has_func('fnc_basic')
assert self.out_c.funcs['fnc_basic'].return_type.is_void()
assert self.out_c.funcs['fnc_basic'].param_count == 0
assert self.out_c.funcs['fnc_basic'].calls('malloc')
assert self.out_c.funcs['fnc_basic'].calls('scanf')
assert self.out_c.funcs['fnc_basic'].calls('printf')
assert self.out_c.funcs['fnc_basic'].calls('fnc_basic_print')
def test_check_function_fnc_complex_print(self):
assert self.out_c.funcs['fnc_complex_print'].return_type.is_void()
#assert self.out_c.funcs['fnc_complex_print'].param_count == 1
#assert self.out_c.funcs['fnc_complex_print'].params[0].type.is_pointer()
#assert self.out_c.funcs['fnc_complex_print'].params[0].type.point_type.is_struct()
assert self.out_c.funcs['fnc_complex_print'].calls('printf')
assert self.out_c.funcs['fnc_complex_print'].has_any_for_loops() or self.out_c.funcs['fnc_complex_print'].has_any_while_loops()
def test_check_function_fnc_complex(self):
assert self.out_c.has_func('fnc_complex')
assert self.out_c.funcs['fnc_complex'].return_type.is_int(32)
assert self.out_c.funcs['fnc_complex'].param_count == 0
assert self.out_c.funcs['fnc_complex'].has_any_for_loops() or self.out_c.funcs['fnc_complex'].has_any_while_loops()
assert self.out_c.funcs['fnc_complex'].calls('malloc')
assert self.out_c.funcs['fnc_complex'].has_any_return_stmts()
#assert self.out_c.funcs['fnc_complex'].has_return_stmts('return 0')
def test_check_function_fnc_sasa_fill(self):
assert self.out_c.has_func('fnc_sasa_fill')
assert self.out_c.funcs['fnc_sasa_fill'].return_type.is_void()
assert self.out_c.funcs['fnc_sasa_fill'].param_count == 1
#assert self.out_c.funcs['fnc_sasa_fill'].params[0].type.is_pointer()
#assert self.out_c.funcs['fnc_sasa_fill'].params[0].type.point_type.is_pointer()
#assert self.out_c.funcs['fnc_sasa_fill'].params[0].type.point_type.point_type.is_struct()
#assert self.out_c.funcs['fnc_sasa_fill'].calls('malloc')
def test_check_function_fnc_sasa_print(self):
assert self.out_c.has_func('fnc_sasa_print')
assert self.out_c.funcs['fnc_sasa_print'].return_type.is_void()
assert self.out_c.funcs['fnc_sasa_print'].param_count == 1
#assert self.out_c.funcs['fnc_sasa_print'].params[0].type.is_pointer()
#assert self.out_c.funcs['fnc_sasa_print'].params[0].type.point_type.is_struct()
assert self.out_c.funcs['fnc_sasa_print'].calls('printf')
assert self.out_c.funcs['fnc_sasa_print'].has_any_for_loops() or self.out_c.funcs['fnc_sasa_print'].has_any_while_loops()
def test_check_function_fnc_sasa(self):
assert self.out_c.has_func('fnc_sasa')
assert self.out_c.funcs['fnc_sasa'].return_type.is_int(32)
assert self.out_c.funcs['fnc_sasa'].param_count == 0
assert self.out_c.funcs['fnc_sasa'].calls('malloc')
assert self.out_c.funcs['fnc_sasa'].calls('fnc_sasa_fill')
assert self.out_c.funcs['fnc_sasa'].calls('fnc_sasa_print')
assert self.out_c.funcs['fnc_sasa'].has_any_return_stmts()
#assert self.out_c.funcs['fnc_sasa'].has_return_stmts('return 0')
def test_check_function_main(self):
assert self.out_c.has_func('main')
assert self.out_c.funcs['main'].calls('fnc_basic')
assert self.out_c.funcs['main'].calls('fnc_complex')
assert self.out_c.funcs['main'].calls('fnc_sasa')
assert self.out_c.funcs['main'].has_any_return_stmts()
assert self.out_c.funcs['main'].has_return_stmts('return 0')
def test_check_presence_of_literals(self):
#assert self.out_c.has_string_literal("\\n")
assert self.out_c.has_string_literal("%d\\n")
assert self.out_c.has_string_literal("%d %d\\n")
assert self.out_c.has_string_literal("%f %d %d\\n")
assert self.out_c.has_string_literal("%d %d %f\\n")
#assert self.out_c.has_string_literal("%c %d %f\\n")
assert self.out_c.has_string_literal("%d %d %d %f\\n")
class Test_2017(TestBase):
settings_2017 = TestSettings(
input=files_in_dir('2017-11-14'),
)
#class Test_2015(TestBase):
#settings_2015 = TestSettings(
#input=files_in_dir('2015-03-30'),
#)
class TestRun(TestBase):
def test_produce_expected_output(self):
if not on_macos():
self.assert_c_produces_output_when_run(
input='a 10 3.1415',
expected_return_code=0,
expected_output=
'''97 10 3.140000
3.140000 10 97
123 97 3.140000
1 2 3 0.000000
3 4 5 4.140000
5 6 7 8.280001
7 8 9 12.420000
9 10 11 16.560001
123 456
0
55
65
1
65
75
2
75
85
3
85
95
4
95
105
5
105
115
6
115
125
7
125
135
8
135
145
9
145
155
'''
)
class Test_2018(TestBase):
settings_2018 = TestSettings(
input=files_in_dir('2018-09-17'),
)
| 35.801282 | 135 | 0.690958 | 914 | 5,585 | 3.887309 | 0.126915 | 0.126091 | 0.144104 | 0.23642 | 0.841542 | 0.818745 | 0.778216 | 0.711511 | 0.592739 | 0.442724 | 0 | 0.050928 | 0.170278 | 5,585 | 155 | 136 | 36.032258 | 0.715796 | 0.200179 | 0 | 0 | 0 | 0 | 0.160646 | 0 | 0 | 0 | 0 | 0 | 0.643836 | 1 | 0.136986 | false | 0 | 0.013699 | 0 | 0.232877 | 0.246575 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0cda3be79187411fb4467d1a002d751625da8f6b | 36 | py | Python | holistic_records/__init__.py | visinf/mnvi | 654b68888f86e008c9b686950f7f3e493b47c011 | [
"Apache-2.0"
] | null | null | null | holistic_records/__init__.py | visinf/mnvi | 654b68888f86e008c9b686950f7f3e493b47c011 | [
"Apache-2.0"
] | null | null | null | holistic_records/__init__.py | visinf/mnvi | 654b68888f86e008c9b686950f7f3e493b47c011 | [
"Apache-2.0"
] | 1 | 2021-11-24T09:51:55.000Z | 2021-11-24T09:51:55.000Z | from .recorder import EpochRecorder
| 18 | 35 | 0.861111 | 4 | 36 | 7.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0ce6a8dd77fb3fc28eb0b904592a01a777af62ba | 83,091 | py | Python | test/unit/test_direct_link_provider_v2.py | IBM/networking-services-python-sdk | a19e47db6a5971562a502982d69a5868997245f3 | [
"Apache-2.0"
] | 1 | 2022-03-26T18:20:42.000Z | 2022-03-26T18:20:42.000Z | test/unit/test_direct_link_provider_v2.py | IBM/networking-services-python-sdk | a19e47db6a5971562a502982d69a5868997245f3 | [
"Apache-2.0"
] | null | null | null | test/unit/test_direct_link_provider_v2.py | IBM/networking-services-python-sdk | a19e47db6a5971562a502982d69a5868997245f3 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# (C) Copyright IBM Corp. 2021.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Unit Tests for DirectLinkProviderV2
"""
from datetime import datetime, timezone
from ibm_cloud_sdk_core.authenticators.no_auth_authenticator import NoAuthAuthenticator
from ibm_cloud_sdk_core.utils import datetime_to_string, string_to_datetime
import inspect
import json
import os
import pytest
import re
import requests
import responses
import urllib
from ibm_cloud_networking_services.direct_link_provider_v2 import *
version = 'testString'
_service = DirectLinkProviderV2(
authenticator=NoAuthAuthenticator(),
version=version
)
_base_url = 'https://directlink.cloud.ibm.com/provider/v2'
_service.set_service_url(_base_url)
##############################################################################
# Start of Service: ProviderAPIs
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = DirectLinkProviderV2.new_instance(
version=version,
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, DirectLinkProviderV2)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = DirectLinkProviderV2.new_instance(
version=version,
)
def test_new_instance_without_required_params(self):
"""
new_instance_without_required_params()
"""
with pytest.raises(TypeError, match='new_instance\\(\\) missing \\d required positional arguments?: \'.*\''):
service = DirectLinkProviderV2.new_instance()
def test_new_instance_required_param_none(self):
"""
new_instance_required_param_none()
"""
with pytest.raises(ValueError, match='version must be provided'):
service = DirectLinkProviderV2.new_instance(
version=None,
)
class TestListProviderGateways():
"""
Test Class for list_provider_gateways
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_list_provider_gateways_all_params(self):
"""
list_provider_gateways()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/gateways')
mock_response = '{"first": {"href": "https://directlink.cloud.ibm.com/provider/v2/gateways?limit=100"}, "limit": 100, "next": {"href": "https://directlink.cloud.ibm.com/provider/v2/gateways?start=8c4a91a3e2cbd233b5a5b33436855fc2&limit=100", "start": "8c4a91a3e2cbd233b5a5b33436855fc2"}, "total_count": 132, "gateways": [{"bgp_asn": 64999, "bgp_cer_cidr": "10.254.30.78/30", "bgp_ibm_asn": 13884, "bgp_ibm_cidr": "10.254.30.77/30", "bgp_status": "active", "change_request": {"type": "create_gateway"}, "created_at": "2019-01-01T12:00:00.000Z", "crn": "crn:v1:bluemix:public:directlink:dal03:a/4111d05f36894e3cb9b46a43556d9000::connect:ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "customer_account_id": "4111d05f36894e3cb9b46a43556d9000", "id": "ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "name": "myGateway", "operational_status": "configuring", "port": {"id": "fffdcb1a-fee4-41c7-9e11-9cd99e65c777"}, "provider_api_managed": true, "speed_mbps": 1000, "type": "connect", "vlan": 10}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
start = 'testString'
limit = 1
# Invoke method
response = _service.list_provider_gateways(
start=start,
limit=limit,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'start={}'.format(start) in query_string
assert 'limit={}'.format(limit) in query_string
def test_list_provider_gateways_all_params_with_retries(self):
# Enable retries and run test_list_provider_gateways_all_params.
_service.enable_retries()
self.test_list_provider_gateways_all_params()
# Disable retries and run test_list_provider_gateways_all_params.
_service.disable_retries()
self.test_list_provider_gateways_all_params()
@responses.activate
def test_list_provider_gateways_required_params(self):
"""
test_list_provider_gateways_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/gateways')
mock_response = '{"first": {"href": "https://directlink.cloud.ibm.com/provider/v2/gateways?limit=100"}, "limit": 100, "next": {"href": "https://directlink.cloud.ibm.com/provider/v2/gateways?start=8c4a91a3e2cbd233b5a5b33436855fc2&limit=100", "start": "8c4a91a3e2cbd233b5a5b33436855fc2"}, "total_count": 132, "gateways": [{"bgp_asn": 64999, "bgp_cer_cidr": "10.254.30.78/30", "bgp_ibm_asn": 13884, "bgp_ibm_cidr": "10.254.30.77/30", "bgp_status": "active", "change_request": {"type": "create_gateway"}, "created_at": "2019-01-01T12:00:00.000Z", "crn": "crn:v1:bluemix:public:directlink:dal03:a/4111d05f36894e3cb9b46a43556d9000::connect:ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "customer_account_id": "4111d05f36894e3cb9b46a43556d9000", "id": "ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "name": "myGateway", "operational_status": "configuring", "port": {"id": "fffdcb1a-fee4-41c7-9e11-9cd99e65c777"}, "provider_api_managed": true, "speed_mbps": 1000, "type": "connect", "vlan": 10}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.list_provider_gateways()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_list_provider_gateways_required_params_with_retries(self):
# Enable retries and run test_list_provider_gateways_required_params.
_service.enable_retries()
self.test_list_provider_gateways_required_params()
# Disable retries and run test_list_provider_gateways_required_params.
_service.disable_retries()
self.test_list_provider_gateways_required_params()
@responses.activate
def test_list_provider_gateways_value_error(self):
"""
test_list_provider_gateways_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/gateways')
mock_response = '{"first": {"href": "https://directlink.cloud.ibm.com/provider/v2/gateways?limit=100"}, "limit": 100, "next": {"href": "https://directlink.cloud.ibm.com/provider/v2/gateways?start=8c4a91a3e2cbd233b5a5b33436855fc2&limit=100", "start": "8c4a91a3e2cbd233b5a5b33436855fc2"}, "total_count": 132, "gateways": [{"bgp_asn": 64999, "bgp_cer_cidr": "10.254.30.78/30", "bgp_ibm_asn": 13884, "bgp_ibm_cidr": "10.254.30.77/30", "bgp_status": "active", "change_request": {"type": "create_gateway"}, "created_at": "2019-01-01T12:00:00.000Z", "crn": "crn:v1:bluemix:public:directlink:dal03:a/4111d05f36894e3cb9b46a43556d9000::connect:ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "customer_account_id": "4111d05f36894e3cb9b46a43556d9000", "id": "ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "name": "myGateway", "operational_status": "configuring", "port": {"id": "fffdcb1a-fee4-41c7-9e11-9cd99e65c777"}, "provider_api_managed": true, "speed_mbps": 1000, "type": "connect", "vlan": 10}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Pass in all but one required param and check for a ValueError
req_param_dict = {
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.list_provider_gateways(**req_copy)
def test_list_provider_gateways_value_error_with_retries(self):
# Enable retries and run test_list_provider_gateways_value_error.
_service.enable_retries()
self.test_list_provider_gateways_value_error()
# Disable retries and run test_list_provider_gateways_value_error.
_service.disable_retries()
self.test_list_provider_gateways_value_error()
class TestCreateProviderGateway():
"""
Test Class for create_provider_gateway
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_create_provider_gateway_all_params(self):
"""
create_provider_gateway()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/gateways')
mock_response = '{"bgp_asn": 64999, "bgp_cer_cidr": "10.254.30.78/30", "bgp_ibm_asn": 13884, "bgp_ibm_cidr": "10.254.30.77/30", "bgp_status": "active", "change_request": {"type": "create_gateway"}, "created_at": "2019-01-01T12:00:00.000Z", "crn": "crn:v1:bluemix:public:directlink:dal03:a/4111d05f36894e3cb9b46a43556d9000::connect:ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "customer_account_id": "4111d05f36894e3cb9b46a43556d9000", "id": "ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "name": "myGateway", "operational_status": "configuring", "port": {"id": "fffdcb1a-fee4-41c7-9e11-9cd99e65c777"}, "provider_api_managed": true, "speed_mbps": 1000, "type": "connect", "vlan": 10}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of a ProviderGatewayPortIdentity model
provider_gateway_port_identity_model = {}
provider_gateway_port_identity_model['id'] = 'fffdcb1a-fee4-41c7-9e11-9cd99e65c777'
# Set up parameter values
bgp_asn = 64999
customer_account_id = '4111d05f36894e3cb9b46a43556d9000'
name = 'myGateway'
port = provider_gateway_port_identity_model
speed_mbps = 1000
bgp_cer_cidr = '10.254.30.78/30'
bgp_ibm_cidr = '10.254.30.77/30'
vlan = 10
check_only = 'testString'
# Invoke method
response = _service.create_provider_gateway(
bgp_asn,
customer_account_id,
name,
port,
speed_mbps,
bgp_cer_cidr=bgp_cer_cidr,
bgp_ibm_cidr=bgp_ibm_cidr,
vlan=vlan,
check_only=check_only,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'check_only={}'.format(check_only) in query_string
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['bgp_asn'] == 64999
assert req_body['customer_account_id'] == '4111d05f36894e3cb9b46a43556d9000'
assert req_body['name'] == 'myGateway'
assert req_body['port'] == provider_gateway_port_identity_model
assert req_body['speed_mbps'] == 1000
assert req_body['bgp_cer_cidr'] == '10.254.30.78/30'
assert req_body['bgp_ibm_cidr'] == '10.254.30.77/30'
assert req_body['vlan'] == 10
def test_create_provider_gateway_all_params_with_retries(self):
# Enable retries and run test_create_provider_gateway_all_params.
_service.enable_retries()
self.test_create_provider_gateway_all_params()
# Disable retries and run test_create_provider_gateway_all_params.
_service.disable_retries()
self.test_create_provider_gateway_all_params()
@responses.activate
def test_create_provider_gateway_required_params(self):
"""
test_create_provider_gateway_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/gateways')
mock_response = '{"bgp_asn": 64999, "bgp_cer_cidr": "10.254.30.78/30", "bgp_ibm_asn": 13884, "bgp_ibm_cidr": "10.254.30.77/30", "bgp_status": "active", "change_request": {"type": "create_gateway"}, "created_at": "2019-01-01T12:00:00.000Z", "crn": "crn:v1:bluemix:public:directlink:dal03:a/4111d05f36894e3cb9b46a43556d9000::connect:ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "customer_account_id": "4111d05f36894e3cb9b46a43556d9000", "id": "ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "name": "myGateway", "operational_status": "configuring", "port": {"id": "fffdcb1a-fee4-41c7-9e11-9cd99e65c777"}, "provider_api_managed": true, "speed_mbps": 1000, "type": "connect", "vlan": 10}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of a ProviderGatewayPortIdentity model
provider_gateway_port_identity_model = {}
provider_gateway_port_identity_model['id'] = 'fffdcb1a-fee4-41c7-9e11-9cd99e65c777'
# Set up parameter values
bgp_asn = 64999
customer_account_id = '4111d05f36894e3cb9b46a43556d9000'
name = 'myGateway'
port = provider_gateway_port_identity_model
speed_mbps = 1000
bgp_cer_cidr = '10.254.30.78/30'
bgp_ibm_cidr = '10.254.30.77/30'
vlan = 10
# Invoke method
response = _service.create_provider_gateway(
bgp_asn,
customer_account_id,
name,
port,
speed_mbps,
bgp_cer_cidr=bgp_cer_cidr,
bgp_ibm_cidr=bgp_ibm_cidr,
vlan=vlan,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['bgp_asn'] == 64999
assert req_body['customer_account_id'] == '4111d05f36894e3cb9b46a43556d9000'
assert req_body['name'] == 'myGateway'
assert req_body['port'] == provider_gateway_port_identity_model
assert req_body['speed_mbps'] == 1000
assert req_body['bgp_cer_cidr'] == '10.254.30.78/30'
assert req_body['bgp_ibm_cidr'] == '10.254.30.77/30'
assert req_body['vlan'] == 10
def test_create_provider_gateway_required_params_with_retries(self):
# Enable retries and run test_create_provider_gateway_required_params.
_service.enable_retries()
self.test_create_provider_gateway_required_params()
# Disable retries and run test_create_provider_gateway_required_params.
_service.disable_retries()
self.test_create_provider_gateway_required_params()
@responses.activate
def test_create_provider_gateway_value_error(self):
"""
test_create_provider_gateway_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/gateways')
mock_response = '{"bgp_asn": 64999, "bgp_cer_cidr": "10.254.30.78/30", "bgp_ibm_asn": 13884, "bgp_ibm_cidr": "10.254.30.77/30", "bgp_status": "active", "change_request": {"type": "create_gateway"}, "created_at": "2019-01-01T12:00:00.000Z", "crn": "crn:v1:bluemix:public:directlink:dal03:a/4111d05f36894e3cb9b46a43556d9000::connect:ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "customer_account_id": "4111d05f36894e3cb9b46a43556d9000", "id": "ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "name": "myGateway", "operational_status": "configuring", "port": {"id": "fffdcb1a-fee4-41c7-9e11-9cd99e65c777"}, "provider_api_managed": true, "speed_mbps": 1000, "type": "connect", "vlan": 10}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of a ProviderGatewayPortIdentity model
provider_gateway_port_identity_model = {}
provider_gateway_port_identity_model['id'] = 'fffdcb1a-fee4-41c7-9e11-9cd99e65c777'
# Set up parameter values
bgp_asn = 64999
customer_account_id = '4111d05f36894e3cb9b46a43556d9000'
name = 'myGateway'
port = provider_gateway_port_identity_model
speed_mbps = 1000
bgp_cer_cidr = '10.254.30.78/30'
bgp_ibm_cidr = '10.254.30.77/30'
vlan = 10
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"bgp_asn": bgp_asn,
"customer_account_id": customer_account_id,
"name": name,
"port": port,
"speed_mbps": speed_mbps,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.create_provider_gateway(**req_copy)
def test_create_provider_gateway_value_error_with_retries(self):
# Enable retries and run test_create_provider_gateway_value_error.
_service.enable_retries()
self.test_create_provider_gateway_value_error()
# Disable retries and run test_create_provider_gateway_value_error.
_service.disable_retries()
self.test_create_provider_gateway_value_error()
class TestDeleteProviderGateway():
"""
Test Class for delete_provider_gateway
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_provider_gateway_all_params(self):
"""
delete_provider_gateway()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/gateways/testString')
mock_response = '{"bgp_asn": 64999, "bgp_cer_cidr": "10.254.30.78/30", "bgp_ibm_asn": 13884, "bgp_ibm_cidr": "10.254.30.77/30", "bgp_status": "active", "change_request": {"type": "create_gateway"}, "created_at": "2019-01-01T12:00:00.000Z", "crn": "crn:v1:bluemix:public:directlink:dal03:a/4111d05f36894e3cb9b46a43556d9000::connect:ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "customer_account_id": "4111d05f36894e3cb9b46a43556d9000", "id": "ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "name": "myGateway", "operational_status": "configuring", "port": {"id": "fffdcb1a-fee4-41c7-9e11-9cd99e65c777"}, "provider_api_managed": true, "speed_mbps": 1000, "type": "connect", "vlan": 10}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=202)
# Set up parameter values
id = 'testString'
# Invoke method
response = _service.delete_provider_gateway(
id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 202
def test_delete_provider_gateway_all_params_with_retries(self):
# Enable retries and run test_delete_provider_gateway_all_params.
_service.enable_retries()
self.test_delete_provider_gateway_all_params()
# Disable retries and run test_delete_provider_gateway_all_params.
_service.disable_retries()
self.test_delete_provider_gateway_all_params()
@responses.activate
def test_delete_provider_gateway_value_error(self):
"""
test_delete_provider_gateway_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/gateways/testString')
mock_response = '{"bgp_asn": 64999, "bgp_cer_cidr": "10.254.30.78/30", "bgp_ibm_asn": 13884, "bgp_ibm_cidr": "10.254.30.77/30", "bgp_status": "active", "change_request": {"type": "create_gateway"}, "created_at": "2019-01-01T12:00:00.000Z", "crn": "crn:v1:bluemix:public:directlink:dal03:a/4111d05f36894e3cb9b46a43556d9000::connect:ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "customer_account_id": "4111d05f36894e3cb9b46a43556d9000", "id": "ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "name": "myGateway", "operational_status": "configuring", "port": {"id": "fffdcb1a-fee4-41c7-9e11-9cd99e65c777"}, "provider_api_managed": true, "speed_mbps": 1000, "type": "connect", "vlan": 10}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=202)
# Set up parameter values
id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"id": id,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.delete_provider_gateway(**req_copy)
def test_delete_provider_gateway_value_error_with_retries(self):
# Enable retries and run test_delete_provider_gateway_value_error.
_service.enable_retries()
self.test_delete_provider_gateway_value_error()
# Disable retries and run test_delete_provider_gateway_value_error.
_service.disable_retries()
self.test_delete_provider_gateway_value_error()
class TestGetProviderGateway():
"""
Test Class for get_provider_gateway
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_provider_gateway_all_params(self):
"""
get_provider_gateway()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/gateways/testString')
mock_response = '{"bgp_asn": 64999, "bgp_cer_cidr": "10.254.30.78/30", "bgp_ibm_asn": 13884, "bgp_ibm_cidr": "10.254.30.77/30", "bgp_status": "active", "change_request": {"type": "create_gateway"}, "created_at": "2019-01-01T12:00:00.000Z", "crn": "crn:v1:bluemix:public:directlink:dal03:a/4111d05f36894e3cb9b46a43556d9000::connect:ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "customer_account_id": "4111d05f36894e3cb9b46a43556d9000", "id": "ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "name": "myGateway", "operational_status": "configuring", "port": {"id": "fffdcb1a-fee4-41c7-9e11-9cd99e65c777"}, "provider_api_managed": true, "speed_mbps": 1000, "type": "connect", "vlan": 10}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
id = 'testString'
# Invoke method
response = _service.get_provider_gateway(
id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_provider_gateway_all_params_with_retries(self):
# Enable retries and run test_get_provider_gateway_all_params.
_service.enable_retries()
self.test_get_provider_gateway_all_params()
# Disable retries and run test_get_provider_gateway_all_params.
_service.disable_retries()
self.test_get_provider_gateway_all_params()
@responses.activate
def test_get_provider_gateway_value_error(self):
"""
test_get_provider_gateway_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/gateways/testString')
mock_response = '{"bgp_asn": 64999, "bgp_cer_cidr": "10.254.30.78/30", "bgp_ibm_asn": 13884, "bgp_ibm_cidr": "10.254.30.77/30", "bgp_status": "active", "change_request": {"type": "create_gateway"}, "created_at": "2019-01-01T12:00:00.000Z", "crn": "crn:v1:bluemix:public:directlink:dal03:a/4111d05f36894e3cb9b46a43556d9000::connect:ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "customer_account_id": "4111d05f36894e3cb9b46a43556d9000", "id": "ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "name": "myGateway", "operational_status": "configuring", "port": {"id": "fffdcb1a-fee4-41c7-9e11-9cd99e65c777"}, "provider_api_managed": true, "speed_mbps": 1000, "type": "connect", "vlan": 10}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"id": id,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_provider_gateway(**req_copy)
def test_get_provider_gateway_value_error_with_retries(self):
# Enable retries and run test_get_provider_gateway_value_error.
_service.enable_retries()
self.test_get_provider_gateway_value_error()
# Disable retries and run test_get_provider_gateway_value_error.
_service.disable_retries()
self.test_get_provider_gateway_value_error()
class TestUpdateProviderGateway():
"""
Test Class for update_provider_gateway
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_update_provider_gateway_all_params(self):
"""
update_provider_gateway()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/gateways/testString')
mock_response = '{"bgp_asn": 64999, "bgp_cer_cidr": "10.254.30.78/30", "bgp_ibm_asn": 13884, "bgp_ibm_cidr": "10.254.30.77/30", "bgp_status": "active", "change_request": {"type": "create_gateway"}, "created_at": "2019-01-01T12:00:00.000Z", "crn": "crn:v1:bluemix:public:directlink:dal03:a/4111d05f36894e3cb9b46a43556d9000::connect:ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "customer_account_id": "4111d05f36894e3cb9b46a43556d9000", "id": "ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "name": "myGateway", "operational_status": "configuring", "port": {"id": "fffdcb1a-fee4-41c7-9e11-9cd99e65c777"}, "provider_api_managed": true, "speed_mbps": 1000, "type": "connect", "vlan": 10}'
responses.add(responses.PATCH,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
id = 'testString'
bgp_asn = 64999
bgp_cer_cidr = '169.254.0.10/30'
bgp_ibm_cidr = '169.254.0.9/30'
name = 'myNewGateway'
speed_mbps = 1000
vlan = 10
# Invoke method
response = _service.update_provider_gateway(
id,
bgp_asn=bgp_asn,
bgp_cer_cidr=bgp_cer_cidr,
bgp_ibm_cidr=bgp_ibm_cidr,
name=name,
speed_mbps=speed_mbps,
vlan=vlan,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['bgp_asn'] == 64999
assert req_body['bgp_cer_cidr'] == '169.254.0.10/30'
assert req_body['bgp_ibm_cidr'] == '169.254.0.9/30'
assert req_body['name'] == 'myNewGateway'
assert req_body['speed_mbps'] == 1000
assert req_body['vlan'] == 10
def test_update_provider_gateway_all_params_with_retries(self):
# Enable retries and run test_update_provider_gateway_all_params.
_service.enable_retries()
self.test_update_provider_gateway_all_params()
# Disable retries and run test_update_provider_gateway_all_params.
_service.disable_retries()
self.test_update_provider_gateway_all_params()
@responses.activate
def test_update_provider_gateway_value_error(self):
"""
test_update_provider_gateway_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/gateways/testString')
mock_response = '{"bgp_asn": 64999, "bgp_cer_cidr": "10.254.30.78/30", "bgp_ibm_asn": 13884, "bgp_ibm_cidr": "10.254.30.77/30", "bgp_status": "active", "change_request": {"type": "create_gateway"}, "created_at": "2019-01-01T12:00:00.000Z", "crn": "crn:v1:bluemix:public:directlink:dal03:a/4111d05f36894e3cb9b46a43556d9000::connect:ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "customer_account_id": "4111d05f36894e3cb9b46a43556d9000", "id": "ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4", "name": "myGateway", "operational_status": "configuring", "port": {"id": "fffdcb1a-fee4-41c7-9e11-9cd99e65c777"}, "provider_api_managed": true, "speed_mbps": 1000, "type": "connect", "vlan": 10}'
responses.add(responses.PATCH,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
id = 'testString'
bgp_asn = 64999
bgp_cer_cidr = '169.254.0.10/30'
bgp_ibm_cidr = '169.254.0.9/30'
name = 'myNewGateway'
speed_mbps = 1000
vlan = 10
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"id": id,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.update_provider_gateway(**req_copy)
def test_update_provider_gateway_value_error_with_retries(self):
# Enable retries and run test_update_provider_gateway_value_error.
_service.enable_retries()
self.test_update_provider_gateway_value_error()
# Disable retries and run test_update_provider_gateway_value_error.
_service.disable_retries()
self.test_update_provider_gateway_value_error()
class TestListProviderPorts():
"""
Test Class for list_provider_ports
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_list_provider_ports_all_params(self):
"""
list_provider_ports()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/ports')
mock_response = '{"first": {"href": "https://directlink.cloud.ibm.com/provider/v2/ports?limit=100"}, "limit": 100, "next": {"href": "https://directlink.cloud.ibm.com/provider/v2/ports?start=9d5a91a3e2cbd233b5a5b33436855ed1&limit=100", "start": "9d5a91a3e2cbd233b5a5b33436855ed1"}, "total_count": 132, "ports": [{"id": "01122b9b-820f-4c44-8a31-77f1f0806765", "label": "XCR-FRK-CS-SEC-01", "location_display_name": "Dallas 03", "location_name": "dal03", "provider_name": "provider_1", "supported_link_speeds": [21]}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
start = 'testString'
limit = 1
# Invoke method
response = _service.list_provider_ports(
start=start,
limit=limit,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'start={}'.format(start) in query_string
assert 'limit={}'.format(limit) in query_string
def test_list_provider_ports_all_params_with_retries(self):
# Enable retries and run test_list_provider_ports_all_params.
_service.enable_retries()
self.test_list_provider_ports_all_params()
# Disable retries and run test_list_provider_ports_all_params.
_service.disable_retries()
self.test_list_provider_ports_all_params()
@responses.activate
def test_list_provider_ports_required_params(self):
"""
test_list_provider_ports_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/ports')
mock_response = '{"first": {"href": "https://directlink.cloud.ibm.com/provider/v2/ports?limit=100"}, "limit": 100, "next": {"href": "https://directlink.cloud.ibm.com/provider/v2/ports?start=9d5a91a3e2cbd233b5a5b33436855ed1&limit=100", "start": "9d5a91a3e2cbd233b5a5b33436855ed1"}, "total_count": 132, "ports": [{"id": "01122b9b-820f-4c44-8a31-77f1f0806765", "label": "XCR-FRK-CS-SEC-01", "location_display_name": "Dallas 03", "location_name": "dal03", "provider_name": "provider_1", "supported_link_speeds": [21]}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.list_provider_ports()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_list_provider_ports_required_params_with_retries(self):
# Enable retries and run test_list_provider_ports_required_params.
_service.enable_retries()
self.test_list_provider_ports_required_params()
# Disable retries and run test_list_provider_ports_required_params.
_service.disable_retries()
self.test_list_provider_ports_required_params()
@responses.activate
def test_list_provider_ports_value_error(self):
"""
test_list_provider_ports_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/ports')
mock_response = '{"first": {"href": "https://directlink.cloud.ibm.com/provider/v2/ports?limit=100"}, "limit": 100, "next": {"href": "https://directlink.cloud.ibm.com/provider/v2/ports?start=9d5a91a3e2cbd233b5a5b33436855ed1&limit=100", "start": "9d5a91a3e2cbd233b5a5b33436855ed1"}, "total_count": 132, "ports": [{"id": "01122b9b-820f-4c44-8a31-77f1f0806765", "label": "XCR-FRK-CS-SEC-01", "location_display_name": "Dallas 03", "location_name": "dal03", "provider_name": "provider_1", "supported_link_speeds": [21]}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Pass in all but one required param and check for a ValueError
req_param_dict = {
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.list_provider_ports(**req_copy)
def test_list_provider_ports_value_error_with_retries(self):
# Enable retries and run test_list_provider_ports_value_error.
_service.enable_retries()
self.test_list_provider_ports_value_error()
# Disable retries and run test_list_provider_ports_value_error.
_service.disable_retries()
self.test_list_provider_ports_value_error()
class TestGetProviderPort():
"""
Test Class for get_provider_port
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_provider_port_all_params(self):
"""
get_provider_port()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/ports/testString')
mock_response = '{"id": "01122b9b-820f-4c44-8a31-77f1f0806765", "label": "XCR-FRK-CS-SEC-01", "location_display_name": "Dallas 03", "location_name": "dal03", "provider_name": "provider_1", "supported_link_speeds": [21]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
id = 'testString'
# Invoke method
response = _service.get_provider_port(
id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_provider_port_all_params_with_retries(self):
# Enable retries and run test_get_provider_port_all_params.
_service.enable_retries()
self.test_get_provider_port_all_params()
# Disable retries and run test_get_provider_port_all_params.
_service.disable_retries()
self.test_get_provider_port_all_params()
@responses.activate
def test_get_provider_port_value_error(self):
"""
test_get_provider_port_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/ports/testString')
mock_response = '{"id": "01122b9b-820f-4c44-8a31-77f1f0806765", "label": "XCR-FRK-CS-SEC-01", "location_display_name": "Dallas 03", "location_name": "dal03", "provider_name": "provider_1", "supported_link_speeds": [21]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"id": id,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_provider_port(**req_copy)
def test_get_provider_port_value_error_with_retries(self):
# Enable retries and run test_get_provider_port_value_error.
_service.enable_retries()
self.test_get_provider_port_value_error()
# Disable retries and run test_get_provider_port_value_error.
_service.disable_retries()
self.test_get_provider_port_value_error()
# endregion
##############################################################################
# End of Service: ProviderAPIs
##############################################################################
##############################################################################
# Start of Model Tests
##############################################################################
# region
class TestModel_ProviderGateway():
"""
Test Class for ProviderGateway
"""
def test_provider_gateway_serialization(self):
"""
Test serialization/deserialization for ProviderGateway
"""
# Construct dict forms of any model objects needed in order to build this model.
provider_gateway_change_request_model = {} # ProviderGatewayChangeRequestProviderGatewayCreate
provider_gateway_change_request_model['type'] = 'create_gateway'
provider_gateway_port_reference_model = {} # ProviderGatewayPortReference
provider_gateway_port_reference_model['id'] = 'fffdcb1a-fee4-41c7-9e11-9cd99e65c777'
# Construct a json representation of a ProviderGateway model
provider_gateway_model_json = {}
provider_gateway_model_json['bgp_asn'] = 64999
provider_gateway_model_json['bgp_cer_cidr'] = '10.254.30.78/30'
provider_gateway_model_json['bgp_ibm_asn'] = 13884
provider_gateway_model_json['bgp_ibm_cidr'] = '10.254.30.77/30'
provider_gateway_model_json['bgp_status'] = 'active'
provider_gateway_model_json['change_request'] = provider_gateway_change_request_model
provider_gateway_model_json['created_at'] = "2019-01-01T12:00:00Z"
provider_gateway_model_json['crn'] = 'crn:v1:bluemix:public:directlink:dal03:a/4111d05f36894e3cb9b46a43556d9000::connect:ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4'
provider_gateway_model_json['customer_account_id'] = '4111d05f36894e3cb9b46a43556d9000'
provider_gateway_model_json['id'] = 'ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4'
provider_gateway_model_json['name'] = 'myGateway'
provider_gateway_model_json['operational_status'] = 'configuring'
provider_gateway_model_json['port'] = provider_gateway_port_reference_model
provider_gateway_model_json['provider_api_managed'] = True
provider_gateway_model_json['speed_mbps'] = 1000
provider_gateway_model_json['type'] = 'connect'
provider_gateway_model_json['vlan'] = 10
# Construct a model instance of ProviderGateway by calling from_dict on the json representation
provider_gateway_model = ProviderGateway.from_dict(provider_gateway_model_json)
assert provider_gateway_model != False
# Construct a model instance of ProviderGateway by calling from_dict on the json representation
provider_gateway_model_dict = ProviderGateway.from_dict(provider_gateway_model_json).__dict__
provider_gateway_model2 = ProviderGateway(**provider_gateway_model_dict)
# Verify the model instances are equivalent
assert provider_gateway_model == provider_gateway_model2
# Convert model instance back to dict and verify no loss of data
provider_gateway_model_json2 = provider_gateway_model.to_dict()
assert provider_gateway_model_json2 == provider_gateway_model_json
class TestModel_ProviderGatewayCollection():
"""
Test Class for ProviderGatewayCollection
"""
def test_provider_gateway_collection_serialization(self):
"""
Test serialization/deserialization for ProviderGatewayCollection
"""
# Construct dict forms of any model objects needed in order to build this model.
provider_gateway_collection_first_model = {} # ProviderGatewayCollectionFirst
provider_gateway_collection_first_model['href'] = 'https://directlink.cloud.ibm.com/provider/v2/gateways?limit=100'
provider_gateway_collection_next_model = {} # ProviderGatewayCollectionNext
provider_gateway_collection_next_model['href'] = 'https://directlink.cloud.ibm.com/provider/v2/gateways?start=8c4a91a3e2cbd233b5a5b33436855fc2&limit=100'
provider_gateway_collection_next_model['start'] = '8c4a91a3e2cbd233b5a5b33436855fc2'
provider_gateway_change_request_model = {} # ProviderGatewayChangeRequestProviderGatewayCreate
provider_gateway_change_request_model['type'] = 'create_gateway'
provider_gateway_port_reference_model = {} # ProviderGatewayPortReference
provider_gateway_port_reference_model['id'] = 'fffdcb1a-fee4-41c7-9e11-9cd99e65c777'
provider_gateway_model = {} # ProviderGateway
provider_gateway_model['bgp_asn'] = 64999
provider_gateway_model['bgp_cer_cidr'] = '10.254.30.78/30'
provider_gateway_model['bgp_ibm_asn'] = 13884
provider_gateway_model['bgp_ibm_cidr'] = '10.254.30.77/30'
provider_gateway_model['bgp_status'] = 'active'
provider_gateway_model['change_request'] = provider_gateway_change_request_model
provider_gateway_model['created_at'] = "2019-01-01T12:00:00Z"
provider_gateway_model['crn'] = 'crn:v1:bluemix:public:directlink:dal03:a/4111d05f36894e3cb9b46a43556d9000::connect:ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4'
provider_gateway_model['customer_account_id'] = '4111d05f36894e3cb9b46a43556d9000'
provider_gateway_model['id'] = 'ef4dcb1a-fee4-41c7-9e11-9cd99e65c1f4'
provider_gateway_model['name'] = 'myGateway'
provider_gateway_model['operational_status'] = 'configuring'
provider_gateway_model['port'] = provider_gateway_port_reference_model
provider_gateway_model['provider_api_managed'] = True
provider_gateway_model['speed_mbps'] = 1000
provider_gateway_model['type'] = 'connect'
provider_gateway_model['vlan'] = 10
# Construct a json representation of a ProviderGatewayCollection model
provider_gateway_collection_model_json = {}
provider_gateway_collection_model_json['first'] = provider_gateway_collection_first_model
provider_gateway_collection_model_json['limit'] = 100
provider_gateway_collection_model_json['next'] = provider_gateway_collection_next_model
provider_gateway_collection_model_json['total_count'] = 132
provider_gateway_collection_model_json['gateways'] = [provider_gateway_model]
# Construct a model instance of ProviderGatewayCollection by calling from_dict on the json representation
provider_gateway_collection_model = ProviderGatewayCollection.from_dict(provider_gateway_collection_model_json)
assert provider_gateway_collection_model != False
# Construct a model instance of ProviderGatewayCollection by calling from_dict on the json representation
provider_gateway_collection_model_dict = ProviderGatewayCollection.from_dict(provider_gateway_collection_model_json).__dict__
provider_gateway_collection_model2 = ProviderGatewayCollection(**provider_gateway_collection_model_dict)
# Verify the model instances are equivalent
assert provider_gateway_collection_model == provider_gateway_collection_model2
# Convert model instance back to dict and verify no loss of data
provider_gateway_collection_model_json2 = provider_gateway_collection_model.to_dict()
assert provider_gateway_collection_model_json2 == provider_gateway_collection_model_json
class TestModel_ProviderGatewayCollectionFirst():
"""
Test Class for ProviderGatewayCollectionFirst
"""
def test_provider_gateway_collection_first_serialization(self):
"""
Test serialization/deserialization for ProviderGatewayCollectionFirst
"""
# Construct a json representation of a ProviderGatewayCollectionFirst model
provider_gateway_collection_first_model_json = {}
provider_gateway_collection_first_model_json['href'] = 'https://directlink.cloud.ibm.com/provider/v2/gateways?limit=100'
# Construct a model instance of ProviderGatewayCollectionFirst by calling from_dict on the json representation
provider_gateway_collection_first_model = ProviderGatewayCollectionFirst.from_dict(provider_gateway_collection_first_model_json)
assert provider_gateway_collection_first_model != False
# Construct a model instance of ProviderGatewayCollectionFirst by calling from_dict on the json representation
provider_gateway_collection_first_model_dict = ProviderGatewayCollectionFirst.from_dict(provider_gateway_collection_first_model_json).__dict__
provider_gateway_collection_first_model2 = ProviderGatewayCollectionFirst(**provider_gateway_collection_first_model_dict)
# Verify the model instances are equivalent
assert provider_gateway_collection_first_model == provider_gateway_collection_first_model2
# Convert model instance back to dict and verify no loss of data
provider_gateway_collection_first_model_json2 = provider_gateway_collection_first_model.to_dict()
assert provider_gateway_collection_first_model_json2 == provider_gateway_collection_first_model_json
class TestModel_ProviderGatewayCollectionNext():
"""
Test Class for ProviderGatewayCollectionNext
"""
def test_provider_gateway_collection_next_serialization(self):
"""
Test serialization/deserialization for ProviderGatewayCollectionNext
"""
# Construct a json representation of a ProviderGatewayCollectionNext model
provider_gateway_collection_next_model_json = {}
provider_gateway_collection_next_model_json['href'] = 'https://directlink.cloud.ibm.com/provider/v2/gateways?start=8c4a91a3e2cbd233b5a5b33436855fc2&limit=100'
provider_gateway_collection_next_model_json['start'] = '8c4a91a3e2cbd233b5a5b33436855fc2'
# Construct a model instance of ProviderGatewayCollectionNext by calling from_dict on the json representation
provider_gateway_collection_next_model = ProviderGatewayCollectionNext.from_dict(provider_gateway_collection_next_model_json)
assert provider_gateway_collection_next_model != False
# Construct a model instance of ProviderGatewayCollectionNext by calling from_dict on the json representation
provider_gateway_collection_next_model_dict = ProviderGatewayCollectionNext.from_dict(provider_gateway_collection_next_model_json).__dict__
provider_gateway_collection_next_model2 = ProviderGatewayCollectionNext(**provider_gateway_collection_next_model_dict)
# Verify the model instances are equivalent
assert provider_gateway_collection_next_model == provider_gateway_collection_next_model2
# Convert model instance back to dict and verify no loss of data
provider_gateway_collection_next_model_json2 = provider_gateway_collection_next_model.to_dict()
assert provider_gateway_collection_next_model_json2 == provider_gateway_collection_next_model_json
class TestModel_ProviderGatewayPortIdentity():
"""
Test Class for ProviderGatewayPortIdentity
"""
def test_provider_gateway_port_identity_serialization(self):
"""
Test serialization/deserialization for ProviderGatewayPortIdentity
"""
# Construct a json representation of a ProviderGatewayPortIdentity model
provider_gateway_port_identity_model_json = {}
provider_gateway_port_identity_model_json['id'] = 'fffdcb1a-fee4-41c7-9e11-9cd99e65c777'
# Construct a model instance of ProviderGatewayPortIdentity by calling from_dict on the json representation
provider_gateway_port_identity_model = ProviderGatewayPortIdentity.from_dict(provider_gateway_port_identity_model_json)
assert provider_gateway_port_identity_model != False
# Construct a model instance of ProviderGatewayPortIdentity by calling from_dict on the json representation
provider_gateway_port_identity_model_dict = ProviderGatewayPortIdentity.from_dict(provider_gateway_port_identity_model_json).__dict__
provider_gateway_port_identity_model2 = ProviderGatewayPortIdentity(**provider_gateway_port_identity_model_dict)
# Verify the model instances are equivalent
assert provider_gateway_port_identity_model == provider_gateway_port_identity_model2
# Convert model instance back to dict and verify no loss of data
provider_gateway_port_identity_model_json2 = provider_gateway_port_identity_model.to_dict()
assert provider_gateway_port_identity_model_json2 == provider_gateway_port_identity_model_json
class TestModel_ProviderGatewayPortReference():
"""
Test Class for ProviderGatewayPortReference
"""
def test_provider_gateway_port_reference_serialization(self):
"""
Test serialization/deserialization for ProviderGatewayPortReference
"""
# Construct a json representation of a ProviderGatewayPortReference model
provider_gateway_port_reference_model_json = {}
provider_gateway_port_reference_model_json['id'] = 'fffdcb1a-fee4-41c7-9e11-9cd99e65c777'
# Construct a model instance of ProviderGatewayPortReference by calling from_dict on the json representation
provider_gateway_port_reference_model = ProviderGatewayPortReference.from_dict(provider_gateway_port_reference_model_json)
assert provider_gateway_port_reference_model != False
# Construct a model instance of ProviderGatewayPortReference by calling from_dict on the json representation
provider_gateway_port_reference_model_dict = ProviderGatewayPortReference.from_dict(provider_gateway_port_reference_model_json).__dict__
provider_gateway_port_reference_model2 = ProviderGatewayPortReference(**provider_gateway_port_reference_model_dict)
# Verify the model instances are equivalent
assert provider_gateway_port_reference_model == provider_gateway_port_reference_model2
# Convert model instance back to dict and verify no loss of data
provider_gateway_port_reference_model_json2 = provider_gateway_port_reference_model.to_dict()
assert provider_gateway_port_reference_model_json2 == provider_gateway_port_reference_model_json
class TestModel_ProviderPort():
"""
Test Class for ProviderPort
"""
def test_provider_port_serialization(self):
"""
Test serialization/deserialization for ProviderPort
"""
# Construct a json representation of a ProviderPort model
provider_port_model_json = {}
provider_port_model_json['id'] = '01122b9b-820f-4c44-8a31-77f1f0806765'
provider_port_model_json['label'] = 'XCR-FRK-CS-SEC-01'
provider_port_model_json['location_display_name'] = 'Dallas 03'
provider_port_model_json['location_name'] = 'dal03'
provider_port_model_json['provider_name'] = 'provider_1'
provider_port_model_json['supported_link_speeds'] = [1000, 2000, 5000, 10000]
# Construct a model instance of ProviderPort by calling from_dict on the json representation
provider_port_model = ProviderPort.from_dict(provider_port_model_json)
assert provider_port_model != False
# Construct a model instance of ProviderPort by calling from_dict on the json representation
provider_port_model_dict = ProviderPort.from_dict(provider_port_model_json).__dict__
provider_port_model2 = ProviderPort(**provider_port_model_dict)
# Verify the model instances are equivalent
assert provider_port_model == provider_port_model2
# Convert model instance back to dict and verify no loss of data
provider_port_model_json2 = provider_port_model.to_dict()
assert provider_port_model_json2 == provider_port_model_json
class TestModel_ProviderPortCollection():
"""
Test Class for ProviderPortCollection
"""
def test_provider_port_collection_serialization(self):
"""
Test serialization/deserialization for ProviderPortCollection
"""
# Construct dict forms of any model objects needed in order to build this model.
provider_port_collection_first_model = {} # ProviderPortCollectionFirst
provider_port_collection_first_model['href'] = 'https://directlink.cloud.ibm.com/provider/v2/ports?limit=100'
provider_port_collection_next_model = {} # ProviderPortCollectionNext
provider_port_collection_next_model['href'] = 'https://directlink.cloud.ibm.com/provider/v2/ports?start=9d5a91a3e2cbd233b5a5b33436855ed1&limit=100'
provider_port_collection_next_model['start'] = '9d5a91a3e2cbd233b5a5b33436855ed1'
provider_port_model = {} # ProviderPort
provider_port_model['id'] = '01122b9b-820f-4c44-8a31-77f1f0806765'
provider_port_model['label'] = 'XCR-FRK-CS-SEC-01'
provider_port_model['location_display_name'] = 'Dallas 03'
provider_port_model['location_name'] = 'dal03'
provider_port_model['provider_name'] = 'provider_1'
provider_port_model['supported_link_speeds'] = [1000, 2000, 5000, 10000]
# Construct a json representation of a ProviderPortCollection model
provider_port_collection_model_json = {}
provider_port_collection_model_json['first'] = provider_port_collection_first_model
provider_port_collection_model_json['limit'] = 100
provider_port_collection_model_json['next'] = provider_port_collection_next_model
provider_port_collection_model_json['total_count'] = 132
provider_port_collection_model_json['ports'] = [provider_port_model]
# Construct a model instance of ProviderPortCollection by calling from_dict on the json representation
provider_port_collection_model = ProviderPortCollection.from_dict(provider_port_collection_model_json)
assert provider_port_collection_model != False
# Construct a model instance of ProviderPortCollection by calling from_dict on the json representation
provider_port_collection_model_dict = ProviderPortCollection.from_dict(provider_port_collection_model_json).__dict__
provider_port_collection_model2 = ProviderPortCollection(**provider_port_collection_model_dict)
# Verify the model instances are equivalent
assert provider_port_collection_model == provider_port_collection_model2
# Convert model instance back to dict and verify no loss of data
provider_port_collection_model_json2 = provider_port_collection_model.to_dict()
assert provider_port_collection_model_json2 == provider_port_collection_model_json
class TestModel_ProviderPortCollectionFirst():
"""
Test Class for ProviderPortCollectionFirst
"""
def test_provider_port_collection_first_serialization(self):
"""
Test serialization/deserialization for ProviderPortCollectionFirst
"""
# Construct a json representation of a ProviderPortCollectionFirst model
provider_port_collection_first_model_json = {}
provider_port_collection_first_model_json['href'] = 'https://directlink.cloud.ibm.com/provider/v2/ports?limit=100'
# Construct a model instance of ProviderPortCollectionFirst by calling from_dict on the json representation
provider_port_collection_first_model = ProviderPortCollectionFirst.from_dict(provider_port_collection_first_model_json)
assert provider_port_collection_first_model != False
# Construct a model instance of ProviderPortCollectionFirst by calling from_dict on the json representation
provider_port_collection_first_model_dict = ProviderPortCollectionFirst.from_dict(provider_port_collection_first_model_json).__dict__
provider_port_collection_first_model2 = ProviderPortCollectionFirst(**provider_port_collection_first_model_dict)
# Verify the model instances are equivalent
assert provider_port_collection_first_model == provider_port_collection_first_model2
# Convert model instance back to dict and verify no loss of data
provider_port_collection_first_model_json2 = provider_port_collection_first_model.to_dict()
assert provider_port_collection_first_model_json2 == provider_port_collection_first_model_json
class TestModel_ProviderPortCollectionNext():
"""
Test Class for ProviderPortCollectionNext
"""
def test_provider_port_collection_next_serialization(self):
"""
Test serialization/deserialization for ProviderPortCollectionNext
"""
# Construct a json representation of a ProviderPortCollectionNext model
provider_port_collection_next_model_json = {}
provider_port_collection_next_model_json['href'] = 'https://directlink.cloud.ibm.com/provider/v2/ports?start=9d5a91a3e2cbd233b5a5b33436855ed1&limit=100'
provider_port_collection_next_model_json['start'] = '9d5a91a3e2cbd233b5a5b33436855ed1'
# Construct a model instance of ProviderPortCollectionNext by calling from_dict on the json representation
provider_port_collection_next_model = ProviderPortCollectionNext.from_dict(provider_port_collection_next_model_json)
assert provider_port_collection_next_model != False
# Construct a model instance of ProviderPortCollectionNext by calling from_dict on the json representation
provider_port_collection_next_model_dict = ProviderPortCollectionNext.from_dict(provider_port_collection_next_model_json).__dict__
provider_port_collection_next_model2 = ProviderPortCollectionNext(**provider_port_collection_next_model_dict)
# Verify the model instances are equivalent
assert provider_port_collection_next_model == provider_port_collection_next_model2
# Convert model instance back to dict and verify no loss of data
provider_port_collection_next_model_json2 = provider_port_collection_next_model.to_dict()
assert provider_port_collection_next_model_json2 == provider_port_collection_next_model_json
class TestModel_ProviderGatewayChangeRequestProviderGatewayCreate():
"""
Test Class for ProviderGatewayChangeRequestProviderGatewayCreate
"""
def test_provider_gateway_change_request_provider_gateway_create_serialization(self):
"""
Test serialization/deserialization for ProviderGatewayChangeRequestProviderGatewayCreate
"""
# Construct a json representation of a ProviderGatewayChangeRequestProviderGatewayCreate model
provider_gateway_change_request_provider_gateway_create_model_json = {}
provider_gateway_change_request_provider_gateway_create_model_json['type'] = 'create_gateway'
# Construct a model instance of ProviderGatewayChangeRequestProviderGatewayCreate by calling from_dict on the json representation
provider_gateway_change_request_provider_gateway_create_model = ProviderGatewayChangeRequestProviderGatewayCreate.from_dict(provider_gateway_change_request_provider_gateway_create_model_json)
assert provider_gateway_change_request_provider_gateway_create_model != False
# Construct a model instance of ProviderGatewayChangeRequestProviderGatewayCreate by calling from_dict on the json representation
provider_gateway_change_request_provider_gateway_create_model_dict = ProviderGatewayChangeRequestProviderGatewayCreate.from_dict(provider_gateway_change_request_provider_gateway_create_model_json).__dict__
provider_gateway_change_request_provider_gateway_create_model2 = ProviderGatewayChangeRequestProviderGatewayCreate(**provider_gateway_change_request_provider_gateway_create_model_dict)
# Verify the model instances are equivalent
assert provider_gateway_change_request_provider_gateway_create_model == provider_gateway_change_request_provider_gateway_create_model2
# Convert model instance back to dict and verify no loss of data
provider_gateway_change_request_provider_gateway_create_model_json2 = provider_gateway_change_request_provider_gateway_create_model.to_dict()
assert provider_gateway_change_request_provider_gateway_create_model_json2 == provider_gateway_change_request_provider_gateway_create_model_json
class TestModel_ProviderGatewayChangeRequestProviderGatewayDelete():
"""
Test Class for ProviderGatewayChangeRequestProviderGatewayDelete
"""
def test_provider_gateway_change_request_provider_gateway_delete_serialization(self):
"""
Test serialization/deserialization for ProviderGatewayChangeRequestProviderGatewayDelete
"""
# Construct a json representation of a ProviderGatewayChangeRequestProviderGatewayDelete model
provider_gateway_change_request_provider_gateway_delete_model_json = {}
provider_gateway_change_request_provider_gateway_delete_model_json['type'] = 'delete_gateway'
# Construct a model instance of ProviderGatewayChangeRequestProviderGatewayDelete by calling from_dict on the json representation
provider_gateway_change_request_provider_gateway_delete_model = ProviderGatewayChangeRequestProviderGatewayDelete.from_dict(provider_gateway_change_request_provider_gateway_delete_model_json)
assert provider_gateway_change_request_provider_gateway_delete_model != False
# Construct a model instance of ProviderGatewayChangeRequestProviderGatewayDelete by calling from_dict on the json representation
provider_gateway_change_request_provider_gateway_delete_model_dict = ProviderGatewayChangeRequestProviderGatewayDelete.from_dict(provider_gateway_change_request_provider_gateway_delete_model_json).__dict__
provider_gateway_change_request_provider_gateway_delete_model2 = ProviderGatewayChangeRequestProviderGatewayDelete(**provider_gateway_change_request_provider_gateway_delete_model_dict)
# Verify the model instances are equivalent
assert provider_gateway_change_request_provider_gateway_delete_model == provider_gateway_change_request_provider_gateway_delete_model2
# Convert model instance back to dict and verify no loss of data
provider_gateway_change_request_provider_gateway_delete_model_json2 = provider_gateway_change_request_provider_gateway_delete_model.to_dict()
assert provider_gateway_change_request_provider_gateway_delete_model_json2 == provider_gateway_change_request_provider_gateway_delete_model_json
class TestModel_ProviderGatewayChangeRequestProviderGatewayUpdateAttributes():
"""
Test Class for ProviderGatewayChangeRequestProviderGatewayUpdateAttributes
"""
def test_provider_gateway_change_request_provider_gateway_update_attributes_serialization(self):
"""
Test serialization/deserialization for ProviderGatewayChangeRequestProviderGatewayUpdateAttributes
"""
# Construct dict forms of any model objects needed in order to build this model.
provider_gateway_update_attributes_updates_item_model = {} # ProviderGatewayUpdateAttributesUpdatesItemProviderGatewaySpeedUpdate
provider_gateway_update_attributes_updates_item_model['speed_mbps'] = 500
# Construct a json representation of a ProviderGatewayChangeRequestProviderGatewayUpdateAttributes model
provider_gateway_change_request_provider_gateway_update_attributes_model_json = {}
provider_gateway_change_request_provider_gateway_update_attributes_model_json['type'] = 'update_attributes'
provider_gateway_change_request_provider_gateway_update_attributes_model_json['updates'] = [provider_gateway_update_attributes_updates_item_model]
# Construct a model instance of ProviderGatewayChangeRequestProviderGatewayUpdateAttributes by calling from_dict on the json representation
provider_gateway_change_request_provider_gateway_update_attributes_model = ProviderGatewayChangeRequestProviderGatewayUpdateAttributes.from_dict(provider_gateway_change_request_provider_gateway_update_attributes_model_json)
assert provider_gateway_change_request_provider_gateway_update_attributes_model != False
# Construct a model instance of ProviderGatewayChangeRequestProviderGatewayUpdateAttributes by calling from_dict on the json representation
provider_gateway_change_request_provider_gateway_update_attributes_model_dict = ProviderGatewayChangeRequestProviderGatewayUpdateAttributes.from_dict(provider_gateway_change_request_provider_gateway_update_attributes_model_json).__dict__
provider_gateway_change_request_provider_gateway_update_attributes_model2 = ProviderGatewayChangeRequestProviderGatewayUpdateAttributes(**provider_gateway_change_request_provider_gateway_update_attributes_model_dict)
# Verify the model instances are equivalent
assert provider_gateway_change_request_provider_gateway_update_attributes_model == provider_gateway_change_request_provider_gateway_update_attributes_model2
# Convert model instance back to dict and verify no loss of data
provider_gateway_change_request_provider_gateway_update_attributes_model_json2 = provider_gateway_change_request_provider_gateway_update_attributes_model.to_dict()
assert provider_gateway_change_request_provider_gateway_update_attributes_model_json2 == provider_gateway_change_request_provider_gateway_update_attributes_model_json
class TestModel_ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPASNUpdate():
"""
Test Class for ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPASNUpdate
"""
def test_provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_serialization(self):
"""
Test serialization/deserialization for ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPASNUpdate
"""
# Construct a json representation of a ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPASNUpdate model
provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model_json = {}
provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model_json['bgp_asn'] = 64999
# Construct a model instance of ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPASNUpdate by calling from_dict on the json representation
provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model = ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPASNUpdate.from_dict(provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model_json)
assert provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model != False
# Construct a model instance of ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPASNUpdate by calling from_dict on the json representation
provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model_dict = ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPASNUpdate.from_dict(provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model_json).__dict__
provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model2 = ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPASNUpdate(**provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model_dict)
# Verify the model instances are equivalent
assert provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model == provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model2
# Convert model instance back to dict and verify no loss of data
provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model_json2 = provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model.to_dict()
assert provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model_json2 == provider_gateway_update_attributes_updates_item_provider_gateway_bgpasn_update_model_json
class TestModel_ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPIPUpdate():
"""
Test Class for ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPIPUpdate
"""
def test_provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_serialization(self):
"""
Test serialization/deserialization for ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPIPUpdate
"""
# Construct a json representation of a ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPIPUpdate model
provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model_json = {}
provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model_json['bgp_cer_cidr'] = '169.254.0.10/30'
provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model_json['bgp_ibm_cidr'] = '169.254.0.9/30'
# Construct a model instance of ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPIPUpdate by calling from_dict on the json representation
provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model = ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPIPUpdate.from_dict(provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model_json)
assert provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model != False
# Construct a model instance of ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPIPUpdate by calling from_dict on the json representation
provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model_dict = ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPIPUpdate.from_dict(provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model_json).__dict__
provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model2 = ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayBGPIPUpdate(**provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model_dict)
# Verify the model instances are equivalent
assert provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model == provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model2
# Convert model instance back to dict and verify no loss of data
provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model_json2 = provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model.to_dict()
assert provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model_json2 == provider_gateway_update_attributes_updates_item_provider_gateway_bgpip_update_model_json
class TestModel_ProviderGatewayUpdateAttributesUpdatesItemProviderGatewaySpeedUpdate():
"""
Test Class for ProviderGatewayUpdateAttributesUpdatesItemProviderGatewaySpeedUpdate
"""
def test_provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_serialization(self):
"""
Test serialization/deserialization for ProviderGatewayUpdateAttributesUpdatesItemProviderGatewaySpeedUpdate
"""
# Construct a json representation of a ProviderGatewayUpdateAttributesUpdatesItemProviderGatewaySpeedUpdate model
provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model_json = {}
provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model_json['speed_mbps'] = 500
# Construct a model instance of ProviderGatewayUpdateAttributesUpdatesItemProviderGatewaySpeedUpdate by calling from_dict on the json representation
provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model = ProviderGatewayUpdateAttributesUpdatesItemProviderGatewaySpeedUpdate.from_dict(provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model_json)
assert provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model != False
# Construct a model instance of ProviderGatewayUpdateAttributesUpdatesItemProviderGatewaySpeedUpdate by calling from_dict on the json representation
provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model_dict = ProviderGatewayUpdateAttributesUpdatesItemProviderGatewaySpeedUpdate.from_dict(provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model_json).__dict__
provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model2 = ProviderGatewayUpdateAttributesUpdatesItemProviderGatewaySpeedUpdate(**provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model_dict)
# Verify the model instances are equivalent
assert provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model == provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model2
# Convert model instance back to dict and verify no loss of data
provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model_json2 = provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model.to_dict()
assert provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model_json2 == provider_gateway_update_attributes_updates_item_provider_gateway_speed_update_model_json
class TestModel_ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayVLAN():
"""
Test Class for ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayVLAN
"""
def test_provider_gateway_update_attributes_updates_item_provider_gateway_vlan_serialization(self):
"""
Test serialization/deserialization for ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayVLAN
"""
# Construct a json representation of a ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayVLAN model
provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model_json = {}
provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model_json['vlan'] = 10
# Construct a model instance of ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayVLAN by calling from_dict on the json representation
provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model = ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayVLAN.from_dict(provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model_json)
assert provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model != False
# Construct a model instance of ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayVLAN by calling from_dict on the json representation
provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model_dict = ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayVLAN.from_dict(provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model_json).__dict__
provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model2 = ProviderGatewayUpdateAttributesUpdatesItemProviderGatewayVLAN(**provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model_dict)
# Verify the model instances are equivalent
assert provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model == provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model2
# Convert model instance back to dict and verify no loss of data
provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model_json2 = provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model.to_dict()
assert provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model_json2 == provider_gateway_update_attributes_updates_item_provider_gateway_vlan_model_json
# endregion
##############################################################################
# End of Model Tests
##############################################################################
| 54.701119 | 983 | 0.728888 | 9,243 | 83,091 | 6.152548 | 0.038299 | 0.124763 | 0.031388 | 0.046335 | 0.897851 | 0.873584 | 0.820602 | 0.779454 | 0.740117 | 0.670694 | 0 | 0.053627 | 0.184232 | 83,091 | 1,518 | 984 | 54.737154 | 0.785344 | 0.199468 | 0 | 0.537241 | 0 | 0.028083 | 0.230657 | 0.08858 | 0 | 0 | 0 | 0 | 0.1221 | 1 | 0.075702 | false | 0 | 0.014652 | 0 | 0.137973 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4900366a8de4b64633258f8779ebc71d3304d130 | 83 | py | Python | src/rectangle/utils/__init__.py | Jiongqi/RectAngle | 558fa036d1b21b5ae0a556271ab674cd8ffe88b6 | [
"MIT"
] | 1 | 2021-04-23T01:00:53.000Z | 2021-04-23T01:00:53.000Z | src/rectangle/utils/__init__.py | Jiongqi/RectAngle | 558fa036d1b21b5ae0a556271ab674cd8ffe88b6 | [
"MIT"
] | null | null | null | src/rectangle/utils/__init__.py | Jiongqi/RectAngle | 558fa036d1b21b5ae0a556271ab674cd8ffe88b6 | [
"MIT"
] | 3 | 2021-06-17T10:17:36.000Z | 2021-06-24T19:07:05.000Z | from . import io
from . import metrics
from . import train
from . import transforms | 20.75 | 24 | 0.771084 | 12 | 83 | 5.333333 | 0.5 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180723 | 83 | 4 | 24 | 20.75 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0b554af7fe0d78ee8c1305c08c5f93c93fc07d21 | 72 | py | Python | data structures/set py.py | hamzashabbir11/dataStructures | 1918b4e7636aad3a40db9c1e7acea6a829f82671 | [
"MIT"
] | null | null | null | data structures/set py.py | hamzashabbir11/dataStructures | 1918b4e7636aad3a40db9c1e7acea6a829f82671 | [
"MIT"
] | null | null | null | data structures/set py.py | hamzashabbir11/dataStructures | 1918b4e7636aad3a40db9c1e7acea6a829f82671 | [
"MIT"
] | null | null | null | l={1,2,3,4,5}
a=set(['a','b','c'])
print(l)
print(a)
print(l.union(a)) | 10.285714 | 20 | 0.513889 | 19 | 72 | 1.947368 | 0.631579 | 0.324324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075758 | 0.083333 | 72 | 7 | 21 | 10.285714 | 0.484848 | 0 | 0 | 0 | 0 | 0 | 0.041096 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.6 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
f01a0cb917b8c8133b45022985b4def898b043cc | 189 | py | Python | tf2lib/__init__.py | jai-kannan1184/Cyc_Gan2 | 8f09307f644e49339f657635c353d98df8ae0131 | [
"MIT"
] | null | null | null | tf2lib/__init__.py | jai-kannan1184/Cyc_Gan2 | 8f09307f644e49339f657635c353d98df8ae0131 | [
"MIT"
] | null | null | null | tf2lib/__init__.py | jai-kannan1184/Cyc_Gan2 | 8f09307f644e49339f657635c353d98df8ae0131 | [
"MIT"
] | 1 | 2019-05-26T14:38:54.000Z | 2019-05-26T14:38:54.000Z | import tensorflow as tf
from tf2lib.data import *
from tf2lib.image import *
from tf2lib.ops import *
from tf2lib.utils import *
tf.config.gpu.set_per_process_memory_growth(enabled=True)
| 21 | 57 | 0.804233 | 30 | 189 | 4.933333 | 0.633333 | 0.27027 | 0.324324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024096 | 0.121693 | 189 | 8 | 58 | 23.625 | 0.86747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.833333 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f02f4359fa1b1f4ab19e469b87a6232c3552a308 | 48 | py | Python | docs/multiple-tests/max-line-length/src/file.py | codacy/codacy-pylint-python3 | 462614fbe679d2f7978dc3e74993099b4ef5c1c9 | [
"Apache-2.0"
] | 1 | 2021-02-02T06:57:31.000Z | 2021-02-02T06:57:31.000Z | docs/multiple-tests/max-line-length/src/file.py | itsMo07/codacy-pylint-python3 | e25ddfcea787d790c7df05407966fadd6e0a209b | [
"Apache-2.0"
] | 50 | 2019-08-14T16:14:45.000Z | 2022-03-31T11:00:50.000Z | docs/multiple-tests/max-line-length/src/file.py | itsMo07/codacy-pylint-python3 | e25ddfcea787d790c7df05407966fadd6e0a209b | [
"Apache-2.0"
] | 5 | 2019-08-27T14:56:36.000Z | 2021-02-02T06:48:30.000Z | def function():
print("A very long string")
| 16 | 31 | 0.645833 | 7 | 48 | 4.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 48 | 2 | 32 | 24 | 0.815789 | 0 | 0 | 0 | 0 | 0 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
f05efd4b9b3caef4711de4ae644e75d0ef4a1753 | 323 | py | Python | cellstates/__init__.py | pgrobecker/cellstates | 0c50b9e444487e0e822541c4ad67b3bd92524210 | [
"MIT"
] | 4 | 2021-09-14T08:50:47.000Z | 2021-09-18T19:43:15.000Z | cellstates/__init__.py | pgrobecker/cellstates | 0c50b9e444487e0e822541c4ad67b3bd92524210 | [
"MIT"
] | 10 | 2021-02-20T21:01:12.000Z | 2022-01-12T07:16:18.000Z | cellstates/__init__.py | pgrobecker/cellstates | 0c50b9e444487e0e822541c4ad67b3bd92524210 | [
"MIT"
] | 1 | 2022-02-06T17:13:57.000Z | 2022-02-06T17:13:57.000Z | from .cluster import Cluster
from .helpers import clusters_from_hierarchy, get_hierarchy_df, get_scipy_hierarchy, hierarchy_to_newick
from .helpers import marker_score_table
from .plotting import plot_hierarchy_scipy
try:
from .plotting import plot_hierarchy_ete3
except ImportError:
pass
from .run import run_mcmc
| 32.3 | 104 | 0.845201 | 46 | 323 | 5.608696 | 0.5 | 0.085271 | 0.131783 | 0.170543 | 0.24031 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003521 | 0.120743 | 323 | 9 | 105 | 35.888889 | 0.90493 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.111111 | 0.777778 | 0 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
b2c2fb28a84c1fd75320abfe94cfb90a7c02f399 | 1,193 | py | Python | frille-lang/lib/python3.6/site-packages/pygraphviz/tests/test_readwrite.py | frillecode/CDS-spring-2021-language | a0b2116044cd20d4a34b98f23bd2663256c90c5d | [
"MIT"
] | null | null | null | frille-lang/lib/python3.6/site-packages/pygraphviz/tests/test_readwrite.py | frillecode/CDS-spring-2021-language | a0b2116044cd20d4a34b98f23bd2663256c90c5d | [
"MIT"
] | null | null | null | frille-lang/lib/python3.6/site-packages/pygraphviz/tests/test_readwrite.py | frillecode/CDS-spring-2021-language | a0b2116044cd20d4a34b98f23bd2663256c90c5d | [
"MIT"
] | null | null | null | from nose.tools import assert_equal
from nose.tools import assert_true
from nose.tools import assert_false
import pygraphviz as pgv
import os
import tempfile
import pathlib
def test_readwrite():
A = pgv.AGraph(name="test graph")
A.add_path([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
the_file = tempfile.NamedTemporaryFile(delete=False)
fname = the_file.name
# Make sure it can be opened for writing again on Win32
the_file.close()
# Pass a string to trigger the code paths that close the newly created file handle
A.write(fname)
B = pgv.AGraph(fname)
assert_equal(A, B)
assert_true(B == A)
assert_false(B is A)
os.unlink(fname)
def test_readwrite_pathobj():
A = pgv.AGraph(name="test graph")
A.add_path([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
the_file = tempfile.NamedTemporaryFile(delete=False)
fname = pathlib.Path(the_file.name)
# Make sure it can be opened for writing again on Win32
the_file.close()
# Pass a string to trigger the code paths that close the newly created file handle
A.write(fname)
B = pgv.AGraph(fname)
assert_equal(A, B)
assert_true(B == A)
assert_false(B is A)
os.unlink(fname)
| 29.825 | 86 | 0.686505 | 199 | 1,193 | 4.015075 | 0.326633 | 0.052566 | 0.048811 | 0.071339 | 0.849812 | 0.755945 | 0.755945 | 0.755945 | 0.755945 | 0.755945 | 0 | 0.027807 | 0.216262 | 1,193 | 39 | 87 | 30.589744 | 0.826738 | 0.225482 | 0 | 0.645161 | 0 | 0 | 0.021763 | 0 | 0 | 0 | 0 | 0 | 0.290323 | 1 | 0.064516 | false | 0 | 0.225806 | 0 | 0.290323 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
650c65fe09ef31d814cd0c277a374b45eb1d3ae9 | 22,189 | py | Python | tests/python/modules/config_loading/test__config_parser.py | dotmodules/dm | ec2ebf6c8b9ac707440a81d0f25003af6f0603c2 | [
"MIT"
] | null | null | null | tests/python/modules/config_loading/test__config_parser.py | dotmodules/dm | ec2ebf6c8b9ac707440a81d0f25003af6f0603c2 | [
"MIT"
] | null | null | null | tests/python/modules/config_loading/test__config_parser.py | dotmodules/dm | ec2ebf6c8b9ac707440a81d0f25003af6f0603c2 | [
"MIT"
] | null | null | null | from typing import cast
from unittest.mock import MagicMock
import pytest
from pytest_mock.plugin import MockerFixture
from dotmodules.modules.loader import ConfigLoader, LoaderError
from dotmodules.modules.parser import ConfigParser, ParserError
@pytest.fixture
def mock_loader(mocker: MockerFixture) -> MagicMock:
# By default the type of a MagicMock object is Any. We want to narrow it
# back to MagicMock..
return cast(MagicMock, mocker.MagicMock())
@pytest.fixture
def parser(mocker: MockerFixture, mock_loader: MagicMock) -> ConfigParser:
loader = cast(ConfigLoader, mock_loader)
return ConfigParser(loader=loader)
# =============================================================================
# LOW LEVEL PARSING METHODS
# =============================================================================
class TestStringParsing:
def test__valid_string_can_be_parsed(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "key"
dummy_value = "some value"
mock_loader.get.return_value = dummy_value
result = parser._parse_string(key=dummy_key)
assert result == dummy_value
mock_loader.get.assert_called_with(key=dummy_key)
def test__missing_key__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "invalid_key"
mock_loader.get.side_effect = LoaderError("missing key")
with pytest.raises(ParserError) as exception_info:
parser._parse_string(key=dummy_key)
expected = f"Mandatory '{dummy_key}' section is missing!"
assert str(exception_info.value) == expected
mock_loader.get.assert_called_with(key=dummy_key)
def test__missing_key__but_not_mandatory__empty_should_be_returned(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "invalid_key"
mock_loader.get.side_effect = LoaderError("missing key")
result = parser._parse_string(key=dummy_key, mandatory=False)
assert result == ""
mock_loader.get.assert_called_with(key=dummy_key)
def test__empty_value__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_value = ""
dummy_key = "key"
mock_loader.get.return_value = dummy_value
with pytest.raises(ParserError) as exception_info:
parser._parse_string(key=dummy_key)
expected = f"Empty value for section '{dummy_key}'!"
assert str(exception_info.value) == expected
mock_loader.get.assert_called_with(key=dummy_key)
def test__empty_value__but_not_mandatory__empty_should_be_returned(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_value = ""
dummy_key = "invalid_key"
mock_loader.get.return_value = dummy_value
result = parser._parse_string(key=dummy_key, mandatory=False)
assert result == ""
mock_loader.get.assert_called_with(key=dummy_key)
def test__non_string_value__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_value = ["this", "is", "not", "a", "string"]
dummy_key = "key"
mock_loader.get.return_value = dummy_value
with pytest.raises(ParserError) as exception_info:
parser._parse_string(key=dummy_key)
expected = f"Value for section '{dummy_key}' should be a string, got '['this', 'is', 'not', 'a', 'string']'!"
assert str(exception_info.value) == expected
mock_loader.get.assert_called_with(key=dummy_key)
class TestBooleanParsing:
def test__valid_boolean_can_be_parsed(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "key"
dummy_value = True
mock_loader.get.return_value = dummy_value
result = parser._parse_boolean(key=dummy_key)
assert result == dummy_value
mock_loader.get.assert_called_with(key=dummy_key)
def test__missing_key__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "invalid_key"
mock_loader.get.side_effect = LoaderError("missing key")
with pytest.raises(ParserError) as exception_info:
parser._parse_boolean(key=dummy_key)
expected = f"Mandatory '{dummy_key}' section is missing!"
assert str(exception_info.value) == expected
mock_loader.get.assert_called_with(key=dummy_key)
def test__missing_key__but_not_mandatory__false_should_be_returned(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "invalid_key"
mock_loader.get.side_effect = LoaderError("missing key")
result = parser._parse_boolean(key=dummy_key, mandatory=False)
assert not result
mock_loader.get.assert_called_with(key=dummy_key)
def test__non_boolean_value__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_value = 42
dummy_key = "key"
mock_loader.get.return_value = dummy_value
with pytest.raises(ParserError) as exception_info:
parser._parse_boolean(key=dummy_key)
expected = f"Value for section '{dummy_key}' should be a boolean, got '42'!"
assert str(exception_info.value) == expected
mock_loader.get.assert_called_with(key=dummy_key)
class TestItemListParsing:
def test__missing_key_should_be_converted_to_an_empty_list(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "my_key"
dummy_expected_item = {"irrelevant": "irrelevant"}
mock_loader.get.side_effect = LoaderError("missing key")
result = parser._parse_item_list(
key=dummy_key,
# We are testing here with a simplified expected item type, ignoring
# the mypy warning.
expected_item=dummy_expected_item, # type: ignore
)
assert result == []
mock_loader.get.assert_called_with(key=dummy_key)
def test__none_value_should_be_converted_to_an_empty_list(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "my_key"
dummy_expected_item = {"irrelevant": "irrelevant"}
mock_loader.get.return_value = None
result = parser._parse_item_list(
key=dummy_key,
# We are testing here with a simplified expected item type, ignoring
# the mypy warning.
expected_item=dummy_expected_item, # type: ignore
)
assert result == []
mock_loader.get.assert_called_with(key=dummy_key)
def test__empty_value_should_be_converted_to_an_empty_list(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "my_key"
dummy_expected_item = {"irrelevant": "irrelevant"}
mock_loader.get.return_value = {}
result = parser._parse_item_list(
key=dummy_key,
# We are testing here with a simplified expected item type, ignoring
# the mypy warning.
expected_item=dummy_expected_item, # type: ignore
)
assert result == []
mock_loader.get.assert_called_with(key=dummy_key)
def test__not_a_list__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "my_key"
dummy_expected_item = {
"field_1": "string",
"field_2": 42,
}
mock_loader.get.return_value = "I am not a list"
with pytest.raises(ParserError) as exception_info:
parser._parse_item_list(
key=dummy_key,
# We are testing here with a simplified expected item type,
# ignoring the mypy warning.
expected_item=dummy_expected_item, # type: ignore
)
expected = "Invalid value for 'my_key'! It should contain a list of objects!"
assert str(exception_info.value) == expected
def test__not_a_list_of_dictionaries__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "my_key"
dummy_expected_item = {
"field_1": "string",
"field_2": 42,
}
mock_loader.get.return_value = [42, {"hello": "hello"}]
with pytest.raises(ParserError) as exception_info:
parser._parse_item_list(
key=dummy_key,
# We are testing here with a simplified expected item type,
# ignoring the mypy warning.
expected_item=dummy_expected_item, # type: ignore
)
expected = "Invalid value for 'my_key'! It should contain a list of objects!"
assert str(exception_info.value) == expected
def test__valid_item_can_be_parsed(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "my_key"
dummy_expected_item = {
"field_1": "string",
"field_2": 42,
}
mock_loader.get.return_value = [
{
"field_1": "value_1",
"field_2": 42,
},
]
result = parser._parse_item_list(
key=dummy_key,
# We are testing here with a simplified expected item type, ignoring
# the mypy warning.
expected_item=dummy_expected_item, # type: ignore
)
assert result == [
{
"field_1": "value_1",
"field_2": 42,
},
]
def test__multiple_valid_items_can_be_parsed(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "my_key"
dummy_expected_item = {
"field_1": "string",
"field_2": 42,
}
mock_loader.get.return_value = [
{
"field_1": "value_1",
"field_2": 42,
},
{
"field_1": "value_2",
"field_2": 43,
},
]
result = parser._parse_item_list(
key=dummy_key,
# We are testing here with a simplified expected item type, ignoring
# the mypy warning.
expected_item=dummy_expected_item, # type: ignore
)
assert result == [
{
"field_1": "value_1",
"field_2": 42,
},
{
"field_1": "value_2",
"field_2": 43,
},
]
def test__missing_key__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "my_key"
dummy_expected_item = {
"field_1": "string",
"field_2": 42,
}
mock_loader.get.return_value = [
{
"field_1": "value_1",
},
]
with pytest.raises(ParserError) as exception_info:
parser._parse_item_list(
key=dummy_key,
# We are testing here with a simplified expected item type,
# ignoring the mypy warning.
expected_item=dummy_expected_item, # type: ignore
)
expected = (
"Missing mandatory field 'field_2' from section 'my_key' item at index 1!"
)
assert str(exception_info.value) == expected
def test__additional_key__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "my_key"
dummy_expected_item = {
"field_1": "string",
"field_2": 42,
}
mock_loader.get.return_value = [
{
"field_1": "value_1",
"field_2": 42,
"extra_field": "uff",
},
]
with pytest.raises(ParserError) as exception_info:
parser._parse_item_list(
key=dummy_key,
# We are testing here with a simplified expected item type,
# ignoring the mypy warning.
expected_item=dummy_expected_item, # type: ignore
)
expected = (
"Unexpected field 'extra_field' found for section 'my_key' item at index 1!"
)
assert str(exception_info.value) == expected
def test__additional_keys__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "my_key"
dummy_expected_item = {
"field_1": "string",
"field_2": 42,
}
mock_loader.get.return_value = [
{
"field_1": "value_1",
"field_2": 42,
"extra_field_1": "uff",
"extra_field_2": "huff",
},
]
with pytest.raises(ParserError) as exception_info:
parser._parse_item_list(
key=dummy_key,
# We are testing here with a simplified expected item type,
# ignoring the mypy warning.
expected_item=dummy_expected_item, # type: ignore
)
expected = "Unexpected fields 'extra_field_1', 'extra_field_2' found for section 'my_key' item at index 1!"
assert str(exception_info.value) == expected
def test__invalid_value_type__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
dummy_key = "my_key"
dummy_expected_item = {
"field_1": "string",
"field_2": 42,
}
mock_loader.get.return_value = [
{
"field_1": "value_1",
"field_2": "I am not an integer",
},
]
with pytest.raises(ParserError) as exception_info:
parser._parse_item_list(
key=dummy_key,
# We are testing here with a simplified expected item type,
# ignoring the mypy warning.
expected_item=dummy_expected_item, # type: ignore
)
expected = "The value for field 'field_2' should be an int in section 'my_key' item at index 1!"
assert str(exception_info.value) == expected
# =============================================================================
# HIGHER LEVEL PARSING METHODS
# =============================================================================
class TestNameParsing:
def test__name_can_be_parsed(
self, parser: ConfigParser, mocker: MockerFixture
) -> None:
dummy_name = "my_name"
mock_parse_string = mocker.patch.object(parser, "_parse_string")
mock_parse_string.return_value = dummy_name
result = parser.parse_name()
assert result == dummy_name
mock_parse_string.assert_called_with(key="name")
class TestVersionParsing:
def test__version_can_be_parsed(
self, parser: ConfigParser, mocker: MockerFixture
) -> None:
dummy_version = "my_version"
mock_parse_string = mocker.patch.object(parser, "_parse_string")
mock_parse_string.return_value = dummy_version
result = parser.parse_version()
assert result == dummy_version
mock_parse_string.assert_called_with(key="version")
class TestEnabledParsing:
def test__enabled_flag_can_be_parsed(
self, parser: ConfigParser, mocker: MockerFixture
) -> None:
dummy_enabled_flag = True
mock_parse_boolean = mocker.patch.object(parser, "_parse_boolean")
mock_parse_boolean.return_value = dummy_enabled_flag
result = parser.parse_enabled()
assert result == dummy_enabled_flag
mock_parse_boolean.assert_called_with(key="enabled")
class TestDocumentationParsing:
def test__documentation_can_be_parsed(
self, parser: ConfigParser, mocker: MockerFixture
) -> None:
dummy_documentation = "line1\nline2"
mock_parse_string = mocker.patch.object(parser, "_parse_string")
mock_parse_string.return_value = dummy_documentation
result = parser.parse_documentation()
assert result == [
"line1",
"line2",
]
mock_parse_string.assert_called_with(key="documentation", mandatory=False)
class TestVariablesParsing:
def test__missing_key_should_be_converted_to_dict(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
mock_loader.get.side_effect = LoaderError("missing key")
result = parser.parse_variables()
assert result == {}
mock_loader.get.assert_called_with(key="variables")
def test__empty_value_should_be_left_as_is(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
mock_loader.get.return_value = {}
result = parser.parse_variables()
assert result == {}
mock_loader.get.assert_called_with(key="variables")
def test__scalar_value__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
mock_loader.get.return_value = "I am a string"
with pytest.raises(ParserError) as exception_info:
parser.parse_variables()
expected = "The 'variables' section should have the following syntax: 'VARIABLE_NAME' = ['var_1', 'var_2', ..] !"
assert str(exception_info.value) == expected
mock_loader.get.assert_called_with(key="variables")
def test__non_string_key__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
mock_loader.get.return_value = {
42: ["non", "string", "key"],
}
with pytest.raises(ParserError) as exception_info:
parser.parse_variables()
expected = "The 'variables' section should only have string variable names!"
assert str(exception_info.value) == expected
mock_loader.get.assert_called_with(key="variables")
def test__non_compatible_variable__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
mock_loader.get.return_value = {
"VARIABLE": {"this is not a list": 42},
}
with pytest.raises(ParserError) as exception_info:
parser.parse_variables()
expected = "The 'variables' section should contain a single string or a list of strings for a variable name!"
assert str(exception_info.value) == expected
mock_loader.get.assert_called_with(key="variables")
def test__non_list_variable_value__should_be_converted_to_a_list(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
mock_loader.get.return_value = {
"VARIABLE": "I am not a list",
}
result = parser.parse_variables()
assert result == {
"VARIABLE": ["I am not a list"],
}
mock_loader.get.assert_called_with(key="variables")
def test__list_variable_values__should_be_left_as_is(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
mock_loader.get.return_value = {
"VARIABLE": ["I", "am", "a", "list"],
}
result = parser.parse_variables()
assert result == {
"VARIABLE": ["I", "am", "a", "list"],
}
mock_loader.get.assert_called_with(key="variables")
def test__non_string_items__error_should_be_raised(
self, parser: ConfigParser, mock_loader: MagicMock
) -> None:
mock_loader.get.return_value = {
"VARIABLE": ["I am a string", 42],
}
with pytest.raises(ParserError) as exception_info:
parser.parse_variables()
expected = "The 'variables' section should contain a single string or a list of strings for a variable name!"
assert str(exception_info.value) == expected
mock_loader.get.assert_called_with(key="variables")
class TestLinkParsing:
def test__links_can_be_parsed(
self, parser: ConfigParser, mocker: MockerFixture
) -> None:
dummy_links = [
{
"path_to_target": "my_path_to_target_1",
"path_to_symlink": "my_path_to_symlink_1",
"name": "my_name_1",
},
{
"path_to_target": "my_path_to_target_2",
"path_to_symlink": "my_path_to_symlink_2",
"name": "my_name_2",
},
]
mock_parse_item_list = mocker.patch.object(parser, "_parse_item_list")
mock_parse_item_list.return_value = dummy_links
result = parser.parse_links()
assert result == dummy_links
mock_parse_item_list.assert_called_with(
key="links",
expected_item={
"path_to_target": "string",
"path_to_symlink": "string",
"name": "string",
},
)
class TestHookParsing:
def test__hooks_can_be_parsed(
self, parser: ConfigParser, mocker: MockerFixture
) -> None:
dummy_hooks = [
{
"path_to_script": "my_path_to_script_1",
"name": "my_name_1",
"priority": 42,
},
{
"path_to_script": "my_path_to_script_2",
"name": "my_name_2",
"priority": 43,
},
]
mock_parse_item_list = mocker.patch.object(parser, "_parse_item_list")
mock_parse_item_list.return_value = dummy_hooks
result = parser.parse_hooks()
assert result == dummy_hooks
mock_parse_item_list.assert_called_with(
key="hooks",
expected_item={
"path_to_script": "string",
"name": "string",
"priority": 0,
},
)
| 33.068554 | 121 | 0.604128 | 2,487 | 22,189 | 5.028146 | 0.068758 | 0.065574 | 0.051979 | 0.060296 | 0.837025 | 0.822711 | 0.819512 | 0.795522 | 0.784966 | 0.784966 | 0 | 0.007668 | 0.294741 | 22,189 | 670 | 122 | 33.11791 | 0.791424 | 0.069269 | 0 | 0.607143 | 0 | 0.005952 | 0.124939 | 0 | 0 | 0 | 0 | 0 | 0.123016 | 1 | 0.073413 | false | 0 | 0.011905 | 0.001984 | 0.109127 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e8eaec138b53d04dc22c7cc543b0388e8a9e201b | 1,610 | py | Python | January9th/test_guessingGame.py | EricCharnesky/CIS2001-Winter2020 | e51d967e97399248dc8b69aaed2d5ca8aee0cd6e | [
"MIT"
] | 3 | 2020-01-06T23:21:36.000Z | 2021-03-01T08:36:57.000Z | January9th/test_guessingGame.py | EricCharnesky/CIS2001-Winter2020 | e51d967e97399248dc8b69aaed2d5ca8aee0cd6e | [
"MIT"
] | null | null | null | January9th/test_guessingGame.py | EricCharnesky/CIS2001-Winter2020 | e51d967e97399248dc8b69aaed2d5ca8aee0cd6e | [
"MIT"
] | 2 | 2020-01-21T16:00:03.000Z | 2020-05-05T14:57:34.000Z | from unittest import TestCase
from January9th import GuessingGame
class TestGuessingGame(TestCase):
def test_guess_guess_correctly_in_one_guess(self):
# AAA
# arrange - set up all variables
magic_number = 4
expected_result = "You guessed it in 1 guesses!"
max_number = 10
test_game = GuessingGame(max_number)
test_game._magic_number = magic_number
# act - call the code we are testing
actual_result = test_game.guess(magic_number)
# assert - did we get what we expected
self.assertEqual(expected_result, actual_result)
def test_guess_guess_too_low(self):
# AAA
# arrange - set up all variables
magic_number = 4
expected_result = "Your guess was too low!"
max_number = 10
test_game = GuessingGame(max_number)
test_game._magic_number = max_number
# act - call the code we are testing
actual_result = test_game.guess(magic_number)
# assert - did we get what we expected
self.assertEqual(expected_result, actual_result)
def test_guess_guess_too_high(self):
# AAA
# arrange - set up all variables
magic_number = 4
expected_result = "Your guess was too high!"
max_number = 10
test_game = GuessingGame(max_number)
test_game._magic_number = magic_number
# act - call the code we are testing
actual_result = test_game.guess(max_number)
# assert - did we get what we expected
self.assertEqual(expected_result, actual_result)
| 27.288136 | 56 | 0.65528 | 206 | 1,610 | 4.849515 | 0.252427 | 0.11011 | 0.036036 | 0.051051 | 0.834835 | 0.834835 | 0.834835 | 0.834835 | 0.834835 | 0.834835 | 0 | 0.009607 | 0.28882 | 1,610 | 58 | 57 | 27.758621 | 0.862882 | 0.198758 | 0 | 0.592593 | 0 | 0 | 0.058731 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.111111 | false | 0 | 0.074074 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e8f2da97a055f7abd807f7756b5b0f170c97303b | 318 | py | Python | puma/logging/__init__.py | gift-surg/puma | 58beae3459a0c8d96adfe9af323e26868428df4d | [
"Apache-2.0"
] | null | null | null | puma/logging/__init__.py | gift-surg/puma | 58beae3459a0c8d96adfe9af323e26868428df4d | [
"Apache-2.0"
] | 13 | 2020-05-04T14:14:58.000Z | 2020-07-29T16:37:03.000Z | puma/logging/__init__.py | gift-surg/puma | 58beae3459a0c8d96adfe9af323e26868428df4d | [
"Apache-2.0"
] | null | null | null | from puma.logging.log_level import LogLevel # noqa: F401
from puma.logging.logging import Logging # noqa: F401
from puma.logging.managed_process_log_queue import ManagedProcessLogQueue # noqa: F401
from puma.logging.child_process_logging.process_logging_mechanism import ProcessLoggingMechanism # noqa: F401, I100
| 63.6 | 116 | 0.839623 | 42 | 318 | 6.166667 | 0.404762 | 0.123552 | 0.23166 | 0.185328 | 0.266409 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.103774 | 318 | 4 | 117 | 79.5 | 0.85614 | 0.154088 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e8f3acadb0080e938b63a6f881fab49c52eba8e2 | 2,556 | py | Python | epytope/Data/pssms/smmpmbec/mat/A_11_01_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 7 | 2021-02-01T18:11:28.000Z | 2022-01-31T19:14:07.000Z | epytope/Data/pssms/smmpmbec/mat/A_11_01_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 22 | 2021-01-02T15:25:23.000Z | 2022-03-14T11:32:53.000Z | epytope/Data/pssms/smmpmbec/mat/A_11_01_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | A_11_01_10 = {0: {'A': -0.451, 'C': 0.083, 'E': 0.511, 'D': 0.521, 'G': -0.164, 'F': 0.086, 'I': -0.067, 'H': 0.032, 'K': -0.193, 'M': -0.153, 'L': -0.026, 'N': 0.141, 'Q': -0.012, 'P': 0.593, 'S': -0.547, 'R': -0.252, 'T': -0.298, 'W': 0.303, 'V': -0.163, 'Y': 0.055}, 1: {'A': -0.376, 'C': 0.806, 'E': 0.116, 'D': 0.212, 'G': 0.06, 'F': 0.081, 'I': -0.571, 'H': 0.478, 'K': 0.518, 'M': -0.522, 'L': -0.348, 'N': 0.322, 'Q': -0.194, 'P': 0.147, 'S': -0.662, 'R': 0.9, 'T': -0.933, 'W': 0.333, 'V': -0.714, 'Y': 0.346}, 2: {'A': -0.058, 'C': -0.023, 'E': 0.292, 'D': 0.15, 'G': 0.203, 'F': -0.195, 'I': -0.416, 'H': 0.238, 'K': 0.357, 'M': -0.552, 'L': -0.152, 'N': -0.09, 'Q': 0.169, 'P': 0.119, 'S': -0.268, 'R': 0.502, 'T': 0.057, 'W': 0.005, 'V': -0.016, 'Y': -0.321}, 3: {'A': -0.053, 'C': -0.056, 'E': 0.243, 'D': 0.003, 'G': -0.021, 'F': -0.31, 'I': -0.048, 'H': 0.189, 'K': 0.253, 'M': -0.306, 'L': 0.058, 'N': -0.121, 'Q': 0.06, 'P': -0.013, 'S': -0.111, 'R': 0.199, 'T': 0.057, 'W': 0.001, 'V': 0.077, 'Y': -0.102}, 4: {'A': -0.2, 'C': -0.021, 'E': 0.281, 'D': 0.097, 'G': 0.095, 'F': -0.197, 'I': 0.099, 'H': -0.022, 'K': 0.184, 'M': -0.107, 'L': 0.06, 'N': -0.029, 'Q': 0.067, 'P': -0.012, 'S': -0.001, 'R': 0.008, 'T': -0.016, 'W': -0.213, 'V': 0.006, 'Y': -0.079}, 5: {'A': -0.065, 'C': 0.098, 'E': 0.073, 'D': 0.171, 'G': 0.145, 'F': -0.24, 'I': -0.102, 'H': 0.169, 'K': 0.091, 'M': -0.034, 'L': -0.068, 'N': 0.052, 'Q': 0.167, 'P': -0.078, 'S': -0.128, 'R': 0.105, 'T': -0.119, 'W': 0.042, 'V': -0.19, 'Y': -0.089}, 6: {'A': 0.057, 'C': 0.017, 'E': 0.262, 'D': 0.203, 'G': 0.085, 'F': -0.136, 'I': 0.007, 'H': -0.062, 'K': 0.153, 'M': -0.129, 'L': -0.157, 'N': -0.014, 'Q': 0.171, 'P': 0.075, 'S': -0.098, 'R': 0.042, 'T': -0.042, 'W': -0.138, 'V': 0.088, 'Y': -0.383}, 7: {'A': 0.149, 'C': -0.0, 'E': 0.087, 'D': 0.351, 'G': 0.381, 'F': -0.448, 'I': -0.253, 'H': 0.031, 'K': 0.424, 'M': -0.421, 'L': -0.338, 'N': 0.088, 'Q': 0.284, 'P': -0.057, 'S': 0.013, 'R': 0.405, 'T': 0.05, 'W': -0.215, 'V': -0.135, 'Y': -0.396}, 8: {'A': -0.009, 'C': 0.166, 'E': -0.001, 'D': 0.248, 'G': 0.192, 'F': -0.378, 'I': 0.15, 'H': -0.06, 'K': 0.13, 'M': 0.043, 'L': -0.107, 'N': 0.008, 'Q': 0.154, 'P': -0.188, 'S': -0.005, 'R': 0.136, 'T': -0.105, 'W': 0.019, 'V': 0.08, 'Y': -0.474}, 9: {'A': -0.012, 'C': 0.356, 'E': 0.571, 'D': 0.48, 'G': -0.007, 'F': 0.264, 'I': 0.154, 'H': 0.021, 'K': -2.185, 'M': 0.082, 'L': 0.261, 'N': 0.306, 'Q': 0.44, 'P': 0.592, 'S': 0.149, 'R': -1.268, 'T': 0.302, 'W': 0.228, 'V': 0.379, 'Y': -1.112}, -1: {'con': 5.01083}} | 2,556 | 2,556 | 0.394757 | 618 | 2,556 | 1.627832 | 0.312298 | 0.019881 | 0.00994 | 0.011928 | 0.013917 | 0 | 0 | 0 | 0 | 0 | 0 | 0.374416 | 0.161972 | 2,556 | 1 | 2,556 | 2,556 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0.07939 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
33410e3feaa389d77854a91253a0989e09470f58 | 206 | py | Python | dynamic_model/BoatDynamics.py | archipela-go/analysis | 6932cc401713ca92c682984c6e7682f4966f6ba0 | [
"MIT"
] | null | null | null | dynamic_model/BoatDynamics.py | archipela-go/analysis | 6932cc401713ca92c682984c6e7682f4966f6ba0 | [
"MIT"
] | null | null | null | dynamic_model/BoatDynamics.py | archipela-go/analysis | 6932cc401713ca92c682984c6e7682f4966f6ba0 | [
"MIT"
] | null | null | null | import numpy as np
class BoatDynamics:
def __init__(self):
pass
def calculate_wrench(self, state, control):
pass
def calculate_derivatives(self, state, control):
pass
| 17.166667 | 52 | 0.650485 | 24 | 206 | 5.333333 | 0.625 | 0.109375 | 0.25 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.281553 | 206 | 11 | 53 | 18.727273 | 0.864865 | 0 | 0 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0.375 | 0.125 | 0 | 0.625 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
334fd41d4b008afe76738cca56836090d1a89de7 | 418 | py | Python | src/fiesta/core/patterns.py | lerooze/django-fiesta | d521f50bcdd3d40e91f0474ec2fa7e256758e0a5 | [
"BSD-3-Clause"
] | null | null | null | src/fiesta/core/patterns.py | lerooze/django-fiesta | d521f50bcdd3d40e91f0474ec2fa7e256758e0a5 | [
"BSD-3-Clause"
] | 3 | 2019-10-29T23:31:01.000Z | 2020-03-31T03:08:28.000Z | src/fiesta/core/patterns.py | lerooze/django-fiesta | d521f50bcdd3d40e91f0474ec2fa7e256758e0a5 | [
"BSD-3-Clause"
] | null | null | null | import re
# urn:sdmx:org.package-name.class-name=agency-id:(maintainable-parent-object-id[maintainable-parent-object-version].)?(container-object-id.)?object-id([object-version])?
MAINTAINABLE = re.compile(r'(?P<object_id>\.*)(\[(?P<version>.*)\])?')
PARENTABLE = re.compile(r'(?P<maintainable_parent_id>\.*)\[(?P<maintainable_parent_version>.*)\]\.(?P<container_id>.*?\.)?(?P<object_id>\.*)(\[(?P<version>.*)\])?')
| 59.714286 | 169 | 0.677033 | 55 | 418 | 5.018182 | 0.345455 | 0.144928 | 0.144928 | 0.188406 | 0.123188 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028708 | 418 | 6 | 170 | 69.666667 | 0.679803 | 0.399522 | 0 | 0 | 0 | 0.333333 | 0.706827 | 0.706827 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
6839306a9b7cd2c8856e23bb1ee4244a3260c65e | 57 | py | Python | codingbat.com/List-1/rotate_left3.py | ahmedelq/PythonicAlgorithms | ce10dbb6e1fd0ea5c922a932b0f920236aa411bf | [
"MIT"
] | null | null | null | codingbat.com/List-1/rotate_left3.py | ahmedelq/PythonicAlgorithms | ce10dbb6e1fd0ea5c922a932b0f920236aa411bf | [
"MIT"
] | null | null | null | codingbat.com/List-1/rotate_left3.py | ahmedelq/PythonicAlgorithms | ce10dbb6e1fd0ea5c922a932b0f920236aa411bf | [
"MIT"
] | null | null | null | def rotate_left3(nums):
return nums[1:] + nums[:1]
| 19 | 32 | 0.614035 | 9 | 57 | 3.777778 | 0.666667 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.210526 | 57 | 2 | 33 | 28.5 | 0.688889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
6881f916fb4b35ae85e614ac3a383995a6b6d6dd | 292 | py | Python | TimeWrapper_JE/venv/Lib/site-packages/tqdm/_tqdm.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | TimeWrapper_JE/venv/Lib/site-packages/tqdm/_tqdm.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | TimeWrapper_JE/venv/Lib/site-packages/tqdm/_tqdm.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | from warnings import warn
from .std import * # NOQA
from .std import __all__ # NOQA
from .std import TqdmDeprecationWarning
warn("This function will be removed in tqdm==5.0.0\n"
"Please use `tqdm.std.*` instead of `tqdm._tqdm.*`",
TqdmDeprecationWarning, stacklevel=2)
| 29.2 | 58 | 0.69863 | 40 | 292 | 4.975 | 0.6 | 0.105528 | 0.19598 | 0.170854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017094 | 0.19863 | 292 | 9 | 59 | 32.444444 | 0.833333 | 0.030822 | 0 | 0 | 0 | 0 | 0.350554 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.571429 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
68a7c605d941b820d15e3b77e51e9d55d68946df | 89 | py | Python | sample_app/boards/admin.py | CCE-IT/cce-toolkit | a3dc470bd8fd3f01615ff57198dfefc88d3aa50c | [
"BSD-3-Clause"
] | 8 | 2016-06-23T14:41:26.000Z | 2018-07-06T17:54:08.000Z | sample_app/boards/admin.py | cceit/cce-toolkit | a3dc470bd8fd3f01615ff57198dfefc88d3aa50c | [
"BSD-3-Clause"
] | null | null | null | sample_app/boards/admin.py | cceit/cce-toolkit | a3dc470bd8fd3f01615ff57198dfefc88d3aa50c | [
"BSD-3-Clause"
] | null | null | null | from toolkit.helpers.admin import auto_admin_register
auto_admin_register(__package__)
| 17.8 | 53 | 0.876404 | 12 | 89 | 5.833333 | 0.666667 | 0.257143 | 0.485714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078652 | 89 | 4 | 54 | 22.25 | 0.853659 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d7ace1e1f48842c36836976f75166b664d2dcc6c | 31 | py | Python | models/fixmatch/__init__.py | limberc/TorchSSL | b78918964bde9a91ba8bb5be58c2b238951949f8 | [
"MIT"
] | null | null | null | models/fixmatch/__init__.py | limberc/TorchSSL | b78918964bde9a91ba8bb5be58c2b238951949f8 | [
"MIT"
] | null | null | null | models/fixmatch/__init__.py | limberc/TorchSSL | b78918964bde9a91ba8bb5be58c2b238951949f8 | [
"MIT"
] | null | null | null | from .fixmatch import FixMatch
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d7ff1613c1f599210d19df9755cccd19f5ca8318 | 26 | py | Python | python/testData/psi/SingleQuotedFStringInsideMultilineFStringTerminatedByLineBreakInExpressionInParentheses.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/psi/SingleQuotedFStringInsideMultilineFStringTerminatedByLineBreakInExpressionInParentheses.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/psi/SingleQuotedFStringInsideMultilineFStringTerminatedByLineBreakInExpressionInParentheses.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | s = f"""{f'{(1 +
2)}'}""" | 13 | 17 | 0.192308 | 5 | 26 | 1 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 0.192308 | 26 | 2 | 18 | 13 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d7ffe1a488769967ce950bc9e48437a7e95b1b34 | 33,585 | py | Python | tests/unit/stream_alert_alert_processor/test_outputs/test_pagerduty.py | tuapuikia/streamalert | b1f733259aa051f8d533e7881018280fe77d7bda | [
"Apache-2.0"
] | null | null | null | tests/unit/stream_alert_alert_processor/test_outputs/test_pagerduty.py | tuapuikia/streamalert | b1f733259aa051f8d533e7881018280fe77d7bda | [
"Apache-2.0"
] | null | null | null | tests/unit/stream_alert_alert_processor/test_outputs/test_pagerduty.py | tuapuikia/streamalert | b1f733259aa051f8d533e7881018280fe77d7bda | [
"Apache-2.0"
] | null | null | null | """
Copyright 2017-present, Airbnb Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
# pylint: disable=protected-access,attribute-defined-outside-init
from mock import patch, PropertyMock
from moto import mock_s3, mock_kms
from nose.tools import assert_equal, assert_false, assert_true
from stream_alert.alert_processor.outputs.pagerduty import (
PagerDutyOutput,
PagerDutyOutputV2,
PagerDutyIncidentOutput
)
from stream_alert_cli.helpers import put_mock_creds
from tests.unit.stream_alert_alert_processor import (
ACCOUNT_ID,
FUNCTION_NAME,
KMS_ALIAS,
REGION
)
from tests.unit.stream_alert_alert_processor.helpers import get_alert, remove_temp_secrets
@mock_s3
@mock_kms
@patch('stream_alert.alert_processor.outputs.output_base.OutputDispatcher.MAX_RETRY_ATTEMPTS', 1)
class TestPagerDutyOutput(object):
"""Test class for PagerDutyOutput"""
DESCRIPTOR = 'unit_test_pagerduty'
SERVICE = 'pagerduty'
OUTPUT = ':'.join([SERVICE, DESCRIPTOR])
CREDS = {'url': 'http://pagerduty.foo.bar/create_event.json',
'service_key': 'mocked_service_key'}
def setup(self):
"""Setup before each method"""
self._dispatcher = PagerDutyOutput(REGION, ACCOUNT_ID, FUNCTION_NAME, None)
remove_temp_secrets()
output_name = self._dispatcher.output_cred_name(self.DESCRIPTOR)
put_mock_creds(output_name, self.CREDS, self._dispatcher.secrets_bucket, REGION, KMS_ALIAS)
def test_get_default_properties(self):
"""PagerDutyOutput - Get Default Properties"""
props = self._dispatcher._get_default_properties()
assert_equal(len(props), 1)
assert_equal(props['url'],
'https://events.pagerduty.com/generic/2010-04-15/create_event.json')
@patch('logging.Logger.info')
@patch('requests.post')
def test_dispatch_success(self, post_mock, log_mock):
"""PagerDutyOutput - Dispatch Success"""
post_mock.return_value.status_code = 200
assert_true(self._dispatcher.dispatch(get_alert(), self.OUTPUT))
log_mock.assert_called_with('Successfully sent alert to %s:%s',
self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.error')
@patch('requests.post')
def test_dispatch_failure(self, post_mock, log_mock):
"""PagerDutyOutput - Dispatch Failure, Bad Request"""
post_mock.return_value.status_code = 400
assert_false(self._dispatcher.dispatch(get_alert(), self.OUTPUT))
log_mock.assert_called_with('Failed to send alert to %s:%s', self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.error')
def test_dispatch_bad_descriptor(self, log_mock):
"""PagerDutyOutput - Dispatch Failure, Bad Descriptor"""
assert_false(
self._dispatcher.dispatch(get_alert(), ':'.join([self.SERVICE, 'bad_descriptor'])))
log_mock.assert_called_with('Failed to send alert to %s:%s', self.SERVICE, 'bad_descriptor')
@mock_s3
@mock_kms
@patch('stream_alert.alert_processor.outputs.output_base.OutputDispatcher.MAX_RETRY_ATTEMPTS', 1)
class TestPagerDutyOutputV2(object):
"""Test class for PagerDutyOutputV2"""
DESCRIPTOR = 'unit_test_pagerduty-v2'
SERVICE = 'pagerduty-v2'
OUTPUT = ':'.join([SERVICE, DESCRIPTOR])
CREDS = {'url': 'http://pagerduty.foo.bar/create_event.json',
'routing_key': 'mocked_routing_key'}
def setup(self):
"""Setup before each method"""
self._dispatcher = PagerDutyOutputV2(REGION, ACCOUNT_ID, FUNCTION_NAME, None)
remove_temp_secrets()
output_name = self._dispatcher.output_cred_name(self.DESCRIPTOR)
put_mock_creds(output_name, self.CREDS, self._dispatcher.secrets_bucket, REGION, KMS_ALIAS)
def test_get_default_properties(self):
"""PagerDutyOutputV2 - Get Default Properties"""
props = self._dispatcher._get_default_properties()
assert_equal(len(props), 1)
assert_equal(props['url'], 'https://events.pagerduty.com/v2/enqueue')
@patch('logging.Logger.info')
@patch('requests.post')
def test_dispatch_success(self, post_mock, log_mock):
"""PagerDutyOutputV2 - Dispatch Success"""
post_mock.return_value.status_code = 200
assert_true(self._dispatcher.dispatch(get_alert(), self.OUTPUT))
log_mock.assert_called_with('Successfully sent alert to %s:%s',
self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.error')
@patch('requests.post')
def test_dispatch_failure(self, post_mock, log_mock):
"""PagerDutyOutputV2 - Dispatch Failure, Bad Request"""
json_error = {'message': 'error message', 'errors': ['error1']}
post_mock.return_value.json.return_value = json_error
post_mock.return_value.status_code = 400
assert_false(self._dispatcher.dispatch(get_alert(), self.OUTPUT))
log_mock.assert_called_with('Failed to send alert to %s:%s', self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.error')
def test_dispatch_bad_descriptor(self, log_mock):
"""PagerDutyOutputV2 - Dispatch Failure, Bad Descriptor"""
assert_false(
self._dispatcher.dispatch(get_alert(), ':'.join([self.SERVICE, 'bad_descriptor'])))
log_mock.assert_called_with('Failed to send alert to %s:%s', self.SERVICE, 'bad_descriptor')
#pylint: disable=too-many-public-methods
@mock_s3
@mock_kms
@patch('stream_alert.alert_processor.outputs.output_base.OutputDispatcher.MAX_RETRY_ATTEMPTS', 1)
@patch('stream_alert.alert_processor.outputs.pagerduty.PagerDutyIncidentOutput.BACKOFF_MAX', 0)
@patch('stream_alert.alert_processor.outputs.pagerduty.PagerDutyIncidentOutput.BACKOFF_TIME', 0)
class TestPagerDutyIncidentOutput(object):
"""Test class for PagerDutyIncidentOutput"""
DESCRIPTOR = 'unit_test_pagerduty-incident'
SERVICE = 'pagerduty-incident'
OUTPUT = ':'.join([SERVICE, DESCRIPTOR])
CREDS = {'api': 'https://api.pagerduty.com',
'token': 'mocked_token',
'service_name': 'mocked_service_name',
'service_id': 'mocked_service_id',
'escalation_policy': 'mocked_escalation_policy',
'escalation_policy_id': 'mocked_escalation_policy_id',
'email_from': 'email@domain.com',
'integration_key': 'mocked_key'}
def setup(self):
"""Setup before each method"""
self._dispatcher = PagerDutyIncidentOutput(REGION, ACCOUNT_ID, FUNCTION_NAME, None)
self._dispatcher._base_url = self.CREDS['api']
remove_temp_secrets()
output_name = self._dispatcher.output_cred_name(self.DESCRIPTOR)
put_mock_creds(output_name, self.CREDS, self._dispatcher.secrets_bucket, REGION, KMS_ALIAS)
def test_get_default_properties(self):
"""PagerDutyIncidentOutput - Get Default Properties"""
props = self._dispatcher._get_default_properties()
assert_equal(len(props), 1)
assert_equal(props['api'], 'https://api.pagerduty.com')
def test_get_endpoint(self):
"""PagerDutyIncidentOutput - Get Endpoint"""
endpoint = self._dispatcher._get_endpoint(self.CREDS['api'], 'testtest')
assert_equal(endpoint, 'https://api.pagerduty.com/testtest')
@patch('requests.get')
def test_check_exists_get_id(self, get_mock):
"""PagerDutyIncidentOutput - Check Exists Get ID"""
# GET /check
get_mock.return_value.status_code = 200
json_check = {'check': [{'id': 'checked_id'}]}
get_mock.return_value.json.return_value = json_check
checked = self._dispatcher._check_exists('filter', 'http://mock_url', 'check')
assert_equal(checked, 'checked_id')
@patch('requests.get')
def test_check_exists_get_id_fail(self, get_mock):
"""PagerDutyIncidentOutput - Check Exists Get Id Fail"""
get_mock.return_value.status_code = 200
get_mock.return_value.json.return_value = dict()
checked = self._dispatcher._check_exists('filter', 'http://mock_url', 'check')
assert_false(checked)
@patch('requests.get')
def test_check_exists_no_get_id(self, get_mock):
"""PagerDutyIncidentOutput - Check Exists No Get Id"""
# GET /check
get_mock.return_value.status_code = 200
json_check = {'check': [{'id': 'checked_id'}]}
get_mock.return_value.json.return_value = json_check
assert_true(self._dispatcher._check_exists('filter', 'http://mock_url', 'check', False))
@patch('requests.get')
def test_user_verify_success(self, get_mock):
"""PagerDutyIncidentOutput - User Verify Success"""
get_mock.return_value.status_code = 200
json_check = {'users': [{'id': 'verified_user_id'}]}
get_mock.return_value.json.return_value = json_check
user_verified = self._dispatcher._user_verify('valid_user')
assert_equal(user_verified['id'], 'verified_user_id')
assert_equal(user_verified['type'], 'user_reference')
@patch('requests.get')
def test_user_verify_fail(self, get_mock):
"""PagerDutyIncidentOutput - User Verify Fail"""
get_mock.return_value.status_code = 200
json_check = {'not_users': [{'not_id': 'verified_user_id'}]}
get_mock.return_value.json.return_value = json_check
user_verified = self._dispatcher._user_verify('valid_user')
assert_false(user_verified)
@patch('requests.get')
def test_policy_verify_success_no_default(self, get_mock):
"""PagerDutyIncidentOutput - Policy Verify Success (No Default)"""
# GET /escalation_policies
get_mock.return_value.status_code = 200
json_check = {'escalation_policies': [{'id': 'good_policy_id'}]}
get_mock.return_value.json.return_value = json_check
policy_verified = self._dispatcher._policy_verify('valid_policy', '')
assert_equal(policy_verified['id'], 'good_policy_id')
assert_equal(policy_verified['type'], 'escalation_policy_reference')
@patch('requests.get')
def test_policy_verify_success_default(self, get_mock):
"""PagerDutyIncidentOutput - Policy Verify Success (Default)"""
# GET /escalation_policies
type(get_mock.return_value).status_code = PropertyMock(side_effect=[200, 200])
json_check_bad = {'no_escalation_policies': [{'id': 'bad_policy_id'}]}
json_check_good = {'escalation_policies': [{'id': 'good_policy_id'}]}
get_mock.return_value.json.side_effect = [json_check_bad, json_check_good]
policy_verified = self._dispatcher._policy_verify('valid_policy', 'default_policy')
assert_equal(policy_verified['id'], 'good_policy_id')
assert_equal(policy_verified['type'], 'escalation_policy_reference')
@patch('requests.get')
def test_policy_verify_fail_default(self, get_mock):
"""PagerDutyIncidentOutput - Policy Verify Fail (Default)"""
# GET /not_escalation_policies
type(get_mock.return_value).status_code = PropertyMock(side_effect=[400, 400])
json_check_bad = {'escalation_policies': [{'id': 'bad_policy_id'}]}
json_check_bad_default = {'escalation_policies': [{'id': 'good_policy_id'}]}
get_mock.return_value.json.side_effect = [json_check_bad, json_check_bad_default]
assert_false(self._dispatcher._policy_verify('valid_policy', 'default_policy'))
@patch('requests.get')
def test_policy_verify_fail_no_default(self, get_mock):
"""PagerDutyIncidentOutput - Policy Verify Fail (No Default)"""
# GET /not_escalation_policies
get_mock.return_value.status_code = 200
json_check = {'not_escalation_policies': [{'not_id': 'verified_policy_id'}]}
get_mock.return_value.json.return_value = json_check
assert_false(self._dispatcher._policy_verify('valid_policy', 'default_policy'))
@patch('requests.get')
def test_service_verify_success(self, get_mock):
"""PagerDutyIncidentOutput - Service Verify Success"""
# GET /services
get_mock.return_value.status_code = 200
json_check = {'services': [{'id': 'verified_service_id'}]}
get_mock.return_value.json.return_value = json_check
service_verified = self._dispatcher._service_verify('valid_service')
assert_equal(service_verified['id'], 'verified_service_id')
assert_equal(service_verified['type'], 'service_reference')
@patch('requests.get')
def test_service_verify_fail(self, get_mock):
"""PagerDutyIncidentOutput - Service Verify Fail"""
get_mock.return_value.status_code = 200
json_check = {'not_services': [{'not_id': 'verified_service_id'}]}
get_mock.return_value.json.return_value = json_check
assert_false(self._dispatcher._service_verify('valid_service'))
@patch('requests.get')
def test_item_verify_success(self, get_mock):
"""PagerDutyIncidentOutput - Item Verify Success"""
# GET /items
get_mock.return_value.status_code = 200
json_check = {'items': [{'id': 'verified_item_id'}]}
get_mock.return_value.json.return_value = json_check
item_verified = self._dispatcher._item_verify('valid_item', 'items', 'item_reference')
assert_equal(item_verified['id'], 'verified_item_id')
assert_equal(item_verified['type'], 'item_reference')
@patch('requests.get')
def test_item_verify_no_get_id_success(self, get_mock):
"""PagerDutyIncidentOutput - Item Verify No Get Id Success"""
# GET /items
get_mock.return_value.status_code = 200
json_check = {'items': [{'id': 'verified_item_id'}]}
get_mock.return_value.json.return_value = json_check
assert_true(self._dispatcher._item_verify('valid_item', 'items', 'item_reference', False))
@patch('requests.get')
def test_priority_verify_success(self, get_mock):
"""PagerDutyIncidentOutput - Priority Verify Success"""
priority_name = 'priority_name'
# GET /priorities
get_mock.return_value.status_code = 200
json_check = {'priorities': [{'id': 'verified_priority_id', 'name': priority_name}]}
get_mock.return_value.json.return_value = json_check
context = {'incident_priority': priority_name}
priority_verified = self._dispatcher._priority_verify(context)
assert_equal(priority_verified['id'], 'verified_priority_id')
assert_equal(priority_verified['type'], 'priority_reference')
@patch('requests.get')
def test_priority_verify_fail(self, get_mock):
"""PagerDutyIncidentOutput - Priority Verify Fail"""
# GET /priorities
get_mock.return_value.status_code = 404
context = {'incident_priority': 'priority_name'}
priority_not_verified = self._dispatcher._priority_verify(context)
assert_equal(priority_not_verified, dict())
@patch('requests.get')
def test_priority_verify_empty(self, get_mock):
"""PagerDutyIncidentOutput - Priority Verify Empty"""
# GET /priorities
get_mock.return_value.status_code = 200
json_check = {}
get_mock.return_value.json.return_value = json_check
context = {'incident_priority': 'priority_name'}
priority_not_verified = self._dispatcher._priority_verify(context)
assert_equal(priority_not_verified, dict())
@patch('requests.get')
def test_priority_verify_not_found(self, get_mock):
"""PagerDutyIncidentOutput - Priority Verify Not Found"""
# GET /priorities
get_mock.return_value.status_code = 200
json_check = {'priorities': [{'id': 'verified_priority_id', 'name': 'not_priority_name'}]}
get_mock.return_value.json.return_value = json_check
context = {'incident_priority': 'priority_name'}
priority_not_verified = self._dispatcher._priority_verify(context)
assert_equal(priority_not_verified, dict())
@patch('requests.get')
def test_priority_verify_invalid(self, get_mock):
"""PagerDutyIncidentOutput - Priority Verify Invalid"""
# GET /priorities
get_mock.return_value.status_code = 200
json_check = {'not_priorities': [{'id': 'verified_priority_id', 'name': 'priority_name'}]}
get_mock.return_value.json.return_value = json_check
context = {'incident_priority': 'priority_name'}
priority_not_verified = self._dispatcher._priority_verify(context)
assert_equal(priority_not_verified, dict())
@patch('requests.get')
def test_incident_assignment_user(self, get_mock):
"""PagerDutyIncidentOutput - Incident Assignment User"""
context = {'assigned_user': 'user_to_assign'}
get_mock.return_value.status_code = 200
json_user = {'users': [{'id': 'verified_user_id'}]}
get_mock.return_value.json.return_value = json_user
assigned_key, assigned_value = self._dispatcher._incident_assignment(context)
assert_equal(assigned_key, 'assignments')
assert_equal(assigned_value[0]['assignee']['id'], 'verified_user_id')
assert_equal(assigned_value[0]['assignee']['type'], 'user_reference')
def test_incident_assignment_policy_no_default(self):
"""PagerDutyIncidentOutput - Incident Assignment Policy (No Default)"""
context = {'assigned_policy_id': 'policy_id_to_assign'}
assigned_key, assigned_value = self._dispatcher._incident_assignment(context)
assert_equal(assigned_key, 'escalation_policy')
assert_equal(assigned_value['id'], 'policy_id_to_assign')
assert_equal(assigned_value['type'], 'escalation_policy_reference')
@patch('requests.post')
def test_add_note_incident_success(self, post_mock):
"""PagerDutyIncidentOutput - Add Note to Incident Success"""
post_mock.return_value.status_code = 200
json_note = {'note': {'id': 'created_note_id'}}
post_mock.return_value.json.return_value = json_note
note_id = self._dispatcher._add_incident_note('incident_id', 'this is the note')
assert_equal(note_id, 'created_note_id')
@patch('requests.post')
def test_add_note_incident_fail(self, post_mock):
"""PagerDutyIncidentOutput - Add Note to Incident Fail"""
post_mock.return_value.status_code = 200
json_note = {'note': {'not_id': 'created_note_id'}}
post_mock.return_value.json.return_value = json_note
note_id = self._dispatcher._add_incident_note('incident_id', 'this is the note')
assert_false(note_id)
@patch('requests.post')
def test_add_note_incident_bad_request(self, post_mock):
"""PagerDutyIncidentOutput - Add Note to Incident Bad Request"""
post_mock.return_value.status_code = 400
json_note = {'note': {'id': 'created_note_id'}}
post_mock.return_value.json.return_value = json_note
note_id = self._dispatcher._add_incident_note('incident_id', 'this is the note')
assert_false(note_id)
@patch('requests.post')
def test_add_note_incident_no_response(self, post_mock):
"""PagerDutyIncidentOutput - Add Note to Incident No Response"""
post_mock.return_value.status_code = 200
json_note = {}
post_mock.return_value.json.return_value = json_note
note_id = self._dispatcher._add_incident_note('incident_id', 'this is the note')
assert_false(note_id)
@patch('requests.get')
def test_item_verify_fail(self, get_mock):
"""PagerDutyIncidentOutput - Item Verify Fail"""
# /not_items
get_mock.return_value.status_code = 200
json_check = {'not_items': [{'not_id': 'verified_item_id'}]}
get_mock.return_value.json.return_value = json_check
item_verified = self._dispatcher._item_verify('http://mock_url', 'valid_item',
'items', 'item_reference')
assert_false(item_verified)
@patch('logging.Logger.info')
@patch('requests.put')
@patch('requests.post')
@patch('requests.get')
def test_dispatch_success_good_user(self, get_mock, post_mock, put_mock, log_mock):
"""PagerDutyIncidentOutput - Dispatch Success, Good User"""
# GET /users, /users
json_user = {'users': [{'id': 'valid_user_id'}]}
# GET /incidents
json_lookup = {'incidents': [{'id': 'incident_id'}]}
get_mock.return_value.status_code = 200
get_mock.return_value.json.side_effect = [json_user, json_user, json_lookup]
# POST /incidents, /v2/enqueue, /incidents/incident_id/notes
post_mock.return_value.status_code = 200
json_incident = {'incident': {'id': 'incident_id'}}
json_event = {'dedup_key': 'returned_dedup_key'}
json_note = {'note': {'id': 'note_id'}}
post_mock.return_value.json.side_effect = [json_incident, json_event, json_note]
# PUT /incidents/indicent_id/merge
put_mock.return_value.status_code = 200
ctx = {'pagerduty-incident': {'assigned_user': 'valid_user'}}
assert_true(self._dispatcher.dispatch(get_alert(context=ctx), self.OUTPUT))
log_mock.assert_called_with('Successfully sent alert to %s:%s',
self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.info')
@patch('requests.put')
@patch('requests.post')
@patch('requests.get')
def test_dispatch_success_good_policy(self, get_mock, post_mock, put_mock, log_mock):
"""PagerDutyIncidentOutput - Dispatch Success, Good Policy"""
# GET /users
json_user = {'users': [{'id': 'user_id'}]}
# GET /incidents
json_lookup = {'incidents': [{'id': 'incident_id'}]}
get_mock.return_value.status_code = 200
get_mock.return_value.json.side_effect = [json_user, json_lookup]
# POST /incidents, /v2/enqueue, /incidents/incident_id/notes
post_mock.return_value.status_code = 200
json_incident = {'incident': {'id': 'incident_id'}}
json_event = {'dedup_key': 'returned_dedup_key'}
json_note = {'note': {'id': 'note_id'}}
post_mock.return_value.json.side_effect = [json_incident, json_event, json_note]
# PUT /incidents/indicent_id/merge
put_mock.return_value.status_code = 200
ctx = {'pagerduty-incident': {'assigned_policy_id': 'valid_policy_id'}}
assert_true(self._dispatcher.dispatch(get_alert(context=ctx), self.OUTPUT))
log_mock.assert_called_with('Successfully sent alert to %s:%s',
self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.info')
@patch('requests.put')
@patch('requests.post')
@patch('requests.get')
def test_dispatch_success_with_priority(self, get_mock, post_mock, put_mock, log_mock):
"""PagerDutyIncidentOutput - Dispatch Success With Priority"""
# GET /priorities, /users
json_user = {'users': [{'id': 'user_id'}]}
json_priority = {'priorities': [{'id': 'priority_id', 'name': 'priority_name'}]}
# GET /incidents
json_lookup = {'incidents': [{'id': 'incident_id'}]}
get_mock.return_value.status_code = 200
get_mock.return_value.json.side_effect = [json_user, json_priority, json_lookup]
# POST /incidents, /v2/enqueue, /incidents/incident_id/notes
post_mock.return_value.status_code = 200
json_incident = {'incident': {'id': 'incident_id'}}
json_event = {'dedup_key': 'returned_dedup_key'}
json_note = {'note': {'id': 'note_id'}}
post_mock.return_value.json.side_effect = [json_incident, json_event, json_note]
# PUT /incidents/indicent_id/merge
put_mock.return_value.status_code = 200
ctx = {
'pagerduty-incident': {
'assigned_policy_id': 'valid_policy_id',
'incident_priority': 'priority_name'
}
}
assert_true(self._dispatcher.dispatch(get_alert(context=ctx), self.OUTPUT))
log_mock.assert_called_with('Successfully sent alert to %s:%s',
self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.info')
@patch('requests.put')
@patch('requests.post')
@patch('requests.get')
def test_dispatch_success_bad_user(self, get_mock, post_mock, put_mock, log_mock):
"""PagerDutyIncidentOutput - Dispatch Success, Bad User"""
# GET /users, /users
json_user = {'users': [{'id': 'user_id'}]}
json_not_user = {'not_users': [{'id': 'user_id'}]}
# GET /incidents
json_lookup = {'incidents': [{'id': 'incident_id'}]}
get_mock.return_value.status_code = 200
get_mock.return_value.json.side_effect = [json_user, json_not_user, json_lookup]
# POST /incidents, /v2/enqueue, /incidents/incident_id/notes
post_mock.return_value.status_code = 200
json_incident = {'incident': {'id': 'incident_id'}}
json_event = {'dedup_key': 'returned_dedup_key'}
json_note = {'note': {'id': 'note_id'}}
post_mock.return_value.json.side_effect = [json_incident, json_event, json_note]
# PUT /incidents/indicent_id/merge
put_mock.return_value.status_code = 200
ctx = {'pagerduty-incident': {'assigned_user': 'invalid_user'}}
assert_true(self._dispatcher.dispatch(get_alert(context=ctx), self.OUTPUT))
log_mock.assert_called_with('Successfully sent alert to %s:%s',
self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.info')
@patch('requests.put')
@patch('requests.post')
@patch('requests.get')
def test_dispatch_success_no_context(self, get_mock, post_mock, put_mock, log_mock):
"""PagerDutyIncidentOutput - Dispatch Success, No Context"""
# GET /users
json_user = {'users': [{'id': 'user_id'}]}
# GET /incidents
json_lookup = {'incidents': [{'id': 'incident_id'}]}
get_mock.return_value.status_code = 200
get_mock.return_value.json.side_effect = [json_user, json_lookup]
# POST /incidents, /v2/enqueue, /incidents/incident_id/notes
post_mock.return_value.status_code = 200
json_incident = {'incident': {'id': 'incident_id'}}
json_event = {'dedup_key': 'returned_dedup_key'}
json_note = {'note': {'id': 'note_id'}}
post_mock.return_value.json.side_effect = [json_incident, json_event, json_note]
# PUT /incidents/indicent_id/merge
put_mock.return_value.status_code = 200
assert_true(self._dispatcher.dispatch(get_alert(), self.OUTPUT))
log_mock.assert_called_with('Successfully sent alert to %s:%s',
self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.error')
@patch('requests.post')
@patch('requests.get')
def test_dispatch_failure_bad_everything(self, get_mock, post_mock, log_mock):
"""PagerDutyIncidentOutput - Dispatch Failure: No User"""
# GET /users, /users
type(get_mock.return_value).status_code = PropertyMock(side_effect=[200, 400])
# Only set the return_value here since there will only be one successful call
# that makes it to the point of calling the .json() method
get_mock.return_value.json.return_value = {'users': [{'id': 'user_id'}]}
# POST /incidents
post_mock.return_value.status_code = 400
assert_false(self._dispatcher.dispatch(get_alert(), self.OUTPUT))
log_mock.assert_called_with('Failed to send alert to %s:%s', self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.info')
@patch('requests.put')
@patch('requests.post')
@patch('requests.get')
def test_dispatch_success_no_merge_response(self, get_mock, post_mock, put_mock, log_mock):
"""PagerDutyIncidentOutput - Dispatch Success, No Merge Response"""
# GET /users
get_mock.return_value.status_code = 200
json_user = {'users': [{'id': 'user_id'}]}
json_lookup = {'incidents': [{'id': 'existing_incident_id'}]}
get_mock.return_value.json.side_effect = [json_user, json_lookup]
# POST /incidents, /v2/enqueue
post_mock.return_value.status_code = 200
json_incident = {'incident': {'id': 'incident_id'}}
json_event = {'dedup_key': 'returned_dedup_key'}
post_mock.return_value.json.side_effect = [json_incident, json_event]
# PUT /incidents/indicent_id/merge
put_mock.return_value.status_code = 200
put_mock.return_value.json.return_value = {}
ctx = {'pagerduty-incident': {'assigned_policy_id': 'valid_policy_id'}}
assert_true(self._dispatcher.dispatch(get_alert(context=ctx), self.OUTPUT))
log_mock.assert_called_with('Successfully sent alert to %s:%s',
self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.error')
@patch('requests.post')
@patch('requests.get')
def test_dispatch_no_dispatch_no_incident_response(self, get_mock, post_mock, log_mock):
"""PagerDutyIncidentOutput - Dispatch Failure, No Incident Response"""
# /users
get_mock.return_value.status_code = 200
json_user = {'users': [{'id': 'user_id'}]}
get_mock.return_value.json.return_value = json_user
# /incidents
post_mock.return_value.status_code = 200
post_mock.return_value.json.return_value = {}
assert_false(self._dispatcher.dispatch(get_alert(), self.OUTPUT))
log_mock.assert_called_with('Failed to send alert to %s:%s', self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.error')
@patch('requests.post')
@patch('requests.get')
def test_dispatch_no_dispatch_no_incident_event(self, get_mock, post_mock, log_mock):
"""PagerDutyIncidentOutput - Dispatch Failure, No Incident Event"""
# /users
get_mock.return_value.status_code = 200
json_user = {'users': [{'id': 'user_id'}]}
get_mock.return_value.json.return_value = json_user
# /incidents, /v2/enqueue
post_mock.return_value.status_code = 200
json_incident = {'incident': {'id': 'incident_id'}}
json_event = {}
post_mock.return_value.json.side_effect = [json_incident, json_event]
assert_false(self._dispatcher.dispatch(get_alert(), self.OUTPUT))
log_mock.assert_called_with('Failed to send alert to %s:%s', self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.error')
@patch('requests.post')
@patch('requests.get')
def test_dispatch_no_dispatch_no_incident_key(self, get_mock, post_mock, log_mock):
"""PagerDutyIncidentOutput - Dispatch Failure, No Incident Key"""
# /users
get_mock.return_value.status_code = 200
json_user = {'users': [{'id': 'user_id'}]}
get_mock.return_value.json.return_value = json_user
# /incidents, /v2/enqueue
post_mock.return_value.status_code = 200
json_incident = {'incident': {'id': 'incident_id'}}
json_event = {'not_dedup_key': 'returned_dedup_key'}
post_mock.return_value.json.side_effect = [json_incident, json_event]
assert_false(self._dispatcher.dispatch(get_alert(), self.OUTPUT))
log_mock.assert_called_with('Failed to send alert to %s:%s', self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.error')
@patch('requests.post')
@patch('requests.get')
def test_dispatch_bad_dispatch(self, get_mock, post_mock, log_mock):
"""PagerDutyIncidentOutput - Dispatch Failure, Bad Request"""
# /users
get_mock.return_value.status_code = 200
json_user = {'users': [{'id': 'user_id'}]}
get_mock.return_value.json.return_value = json_user
# /incidents
post_mock.return_value.status_code = 400
assert_false(self._dispatcher.dispatch(get_alert(), self.OUTPUT))
log_mock.assert_called_with('Failed to send alert to %s:%s', self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.error')
@patch('requests.get')
def test_dispatch_bad_email(self, get_mock, log_mock):
"""PagerDutyIncidentOutput - Dispatch Failure, Bad Email"""
# /users
get_mock.return_value.status_code = 400
json_user = {'not_users': [{'id': 'no_user_id'}]}
get_mock.return_value.json.return_value = json_user
assert_false(self._dispatcher.dispatch(get_alert(), self.OUTPUT))
log_mock.assert_called_with('Failed to send alert to %s:%s', self.SERVICE, self.DESCRIPTOR)
@patch('logging.Logger.error')
def test_dispatch_bad_descriptor(self, log_mock):
"""PagerDutyIncidentOutput - Dispatch Failure, Bad Descriptor"""
assert_false(
self._dispatcher.dispatch(get_alert(), ':'.join([self.SERVICE, 'bad_descriptor'])))
log_mock.assert_called_with('Failed to send alert to %s:%s', self.SERVICE, 'bad_descriptor')
| 43.057692 | 100 | 0.679321 | 4,072 | 33,585 | 5.261297 | 0.064587 | 0.068801 | 0.072115 | 0.052931 | 0.854042 | 0.842093 | 0.794576 | 0.758589 | 0.708364 | 0.682086 | 0 | 0.008537 | 0.197826 | 33,585 | 779 | 101 | 43.112965 | 0.786682 | 0.135477 | 0 | 0.661943 | 0 | 0 | 0.19784 | 0.022435 | 0 | 0 | 0 | 0 | 0.163968 | 1 | 0.103239 | false | 0 | 0.01417 | 0 | 0.147773 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cc025368851be7c6700e770d03a4ecdc3909d95f | 96 | py | Python | venv/lib/python3.8/site-packages/poetry/core/poetry.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/poetry/core/poetry.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/poetry/core/poetry.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/b9/50/07/84f0fefd3cb7be3c2131dd2413ff1c70524175a05712a85693a2ff50e0 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4375 | 0 | 96 | 1 | 96 | 96 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
04398c6c38a9073774c593853860e8f661ae9c6e | 205 | py | Python | mak/libs/pyxx/cxx/grammar/__init__.py | motor-dev/Motor | 98cb099fe1c2d31e455ed868cc2a25eae51e79f0 | [
"BSD-3-Clause"
] | 4 | 2015-05-13T16:28:36.000Z | 2017-05-24T15:34:14.000Z | mak/libs/pyxx/cxx/grammar/__init__.py | motor-dev/Motor | 98cb099fe1c2d31e455ed868cc2a25eae51e79f0 | [
"BSD-3-Clause"
] | null | null | null | mak/libs/pyxx/cxx/grammar/__init__.py | motor-dev/Motor | 98cb099fe1c2d31e455ed868cc2a25eae51e79f0 | [
"BSD-3-Clause"
] | 1 | 2017-03-21T08:28:07.000Z | 2017-03-21T08:28:07.000Z | from . import basic
from . import expression
from . import statement
from . import declaration
from . import module
from . import klass
from . import overload
from . import template
from . import exception | 22.777778 | 25 | 0.785366 | 27 | 205 | 5.962963 | 0.407407 | 0.559006 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170732 | 205 | 9 | 26 | 22.777778 | 0.947059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f0c7816363d0fd6be3744d4c8b5e81db366bef95 | 171 | py | Python | tests/test_advanced/foo.py | thulsadum/configurator | 251f260c74ea130a804e63da987431ec0f6e7f1a | [
"MIT"
] | null | null | null | tests/test_advanced/foo.py | thulsadum/configurator | 251f260c74ea130a804e63da987431ec0f6e7f1a | [
"MIT"
] | null | null | null | tests/test_advanced/foo.py | thulsadum/configurator | 251f260c74ea130a804e63da987431ec0f6e7f1a | [
"MIT"
] | null | null | null | from configurator import Config
from .cfgctx import CFGCTX
@Config("foo", "foo value", default='foo', arg_name='cfg', context=CFGCTX)
def foo(*,cfg=None):
return cfg
| 24.428571 | 74 | 0.719298 | 25 | 171 | 4.88 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134503 | 171 | 6 | 75 | 28.5 | 0.824324 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
9be08d45559e2f577ba6d664c87ff15e5c9733c0 | 121 | py | Python | AI_Web/ChineseChess/views/chess_views.py | xwy27/ArtificialIntelligenceProjects | e2b0154f07d749084e2d670260fa82f8f5ea23ed | [
"MIT"
] | 4 | 2018-12-19T14:10:56.000Z | 2021-07-12T06:05:17.000Z | AI_Web/ChineseChess/views/chess_views.py | xwy27/ArtificialIntelligenceProjects | e2b0154f07d749084e2d670260fa82f8f5ea23ed | [
"MIT"
] | 1 | 2019-08-06T01:57:41.000Z | 2019-08-06T01:57:41.000Z | AI_Web/ChineseChess/views/chess_views.py | xwy27/ArtificialIntelligenceProjects | e2b0154f07d749084e2d670260fa82f8f5ea23ed | [
"MIT"
] | null | null | null | from django.shortcuts import render
# Create your views here.
def chess(request):
return render(request, "chess.html") | 24.2 | 38 | 0.768595 | 17 | 121 | 5.470588 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132231 | 121 | 5 | 38 | 24.2 | 0.885714 | 0.190083 | 0 | 0 | 0 | 0 | 0.103093 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
9bfd43d3031bd97c15a1f9911d33012b79dd815a | 23,145 | py | Python | modules/db_migration/db_migration_to_s3/daily_migration.py | jindongyang94/docker-kubernetes-airflow | 81f27f981e03e6e22cef14e3f7a2690353685e11 | [
"Apache-2.0"
] | null | null | null | modules/db_migration/db_migration_to_s3/daily_migration.py | jindongyang94/docker-kubernetes-airflow | 81f27f981e03e6e22cef14e3f7a2690353685e11 | [
"Apache-2.0"
] | null | null | null | modules/db_migration/db_migration_to_s3/daily_migration.py | jindongyang94/docker-kubernetes-airflow | 81f27f981e03e6e22cef14e3f7a2690353685e11 | [
"Apache-2.0"
] | null | null | null | import subprocess
import os
import re
import csv
import contextlib
from datetime import datetime, timedelta
import time
import progressbar
import boto3
import psycopg2
from db_migration.db_migration_lib.helper import RDSHelper, S3Helper, PGHelper, DATALAKE_NAME, logger, DATABASE_TAGS, INSTANCE_TAGS, TABLE_TAGS
"""
The idea of this script is to find the respective database instances using Boto3, and then find the
respective databases in the instance and finally find the respective tables in each database and do a iterative
export and dump one table at a time to prevent overloading of memory.
This process can be expedited by parallel processing but I am unsure of how to do so yet. Would figure out a way
if this becomes a pertinent issue.
Upload the file downloaded to s3 to the correct respective folders and buckets based on company
name. It is important to note that the files with the same name would be replaced. This would
help in not saving redundant files but might not be useful if we want to version.
Since tables will never be able to be appended directly from s3, it does not make sense to load the entire csv all the time.
Perhaps write another script to merge each csvs based on time periodically.
S3 files would be named as follows:
s3://{BucketName}/{InstanceName}/{DBName}/{TableName}/{TableName-TimeStamp}.csv
# This method allows me to connect to export csv files for each table.
# This method does not require the maintenance of a JSON file at all, just different AWS credentials
# needed for different servers if different users have different access to the databases.
"""
# List Individual DBs instance and their respective Database List -----------------------------------------------
def describe_all_instances():
rds = RDSHelper()
dbs = rds.describe_db_instances(filters=INSTANCE_TAGS)
db_dictionary = {}
for db in dbs:
instance = db['DBInstanceIdentifier']
user = db['MasterUsername']
endpoint = db['Endpoint']
host = endpoint['Address']
port = endpoint['Port']
location = str(db['DBInstanceArn'].split(':')[3])
logger.info("Accessing instance %s ..." % instance)
pg = PGHelper(dbname='postgres', host=host, port=port, user=user)
con = pg.conn()
cur = con.cursor()
def extract_name_query(title, qry):
logger.info('%s' % (title))
cur.execute(qry)
results = cur.fetchall()
result_names = list(map(lambda x: x[0], results))
return result_names
# List all available databases in the same instance
database_names = extract_name_query(
'Extracting databases...', 'SELECT * FROM pg_database')
# Filtering available databases
default_databases = ['postgres',
'rdsadmin', 'template1', 'template0']
database_names = list(
filter(lambda x: x not in default_databases, database_names))
if DATABASE_TAGS:
database_names = list(
filter(lambda x: x in DATABASE_TAGS, database_names))
# Save all the information based on key: DBInstance, value: [db, [list of databases extracted from the instance]]
db_dictionary[instance] = [db, database_names]
return db_dictionary
# Individual Company Database Migration -----------------------------------------------
def individual_company_migration(instance_details, database_name, table_filters):
instance = instance_details['DBInstanceIdentifier']
user = instance_details['MasterUsername']
endpoint = instance_details['Endpoint']
host = endpoint['Address']
port = endpoint['Port']
location = str(instance_details['DBInstanceArn'].split(':')[3])
pg = PGHelper(dbname='postgres', host=host, port=port, user=user, type_db='prod')
logger.info("Accessing %s ..." % database_name)
con = pg.conn(database=database_name)
cur = con.cursor()
def extract_name_query(title, qry):
logger.info('%s' % (title))
cur.execute(qry)
results = cur.fetchall()
result_names = list(map(lambda x: x[0], results))
return result_names
# List all available tables in the same instance
table_query = "SELECT table_name FROM information_schema.tables WHERE table_schema='public' AND table_type='BASE TABLE'"
table_names = extract_name_query('Extracting tables...', table_query)
# Filtering available tables
if table_filters:
table_names = list(
filter(lambda x: x in table_names, table_names))
# We should also filter away those tables that does not start with hubble as well: ['delayed_jobs', 'ar_internal_metadata', 'schema_migrations', 'audits']
# We are going to remove hubble_safety_permit_logs as well as it is too big to be exported at the moment.
misc_tables = ['delayed_jobs', 'ar_internal_metadata', 'schema_migrations', 'audits', 'hubble_safety_permit_logs']
table_names = list(
filter(lambda x: x not in misc_tables, table_names)
)
logger.info("Tables List: %s" % table_names)
# for table_name in table_names:
for j in range(len(table_names)):
table_name = table_names[j]
# # Rerun for the table when the exception fails
# try:
# Save individual tables to CSV first - as we are sending one table at a time, we can del the csv files
# as soon as we have uploaded them
logger.info("Accessing %s ..." % table_name)
# We will save the time based on the latest commit time. Thus, there will be only one file for one table all time
# However, they might be of different timestamp due to difference in commit time.
s3 = S3Helper()
# Extract latest timestamp separately here:
# Use this query to extract the latest commit timestamp at that point of time
extract_ts_query = "SELECT MAX(pg_xact_commit_timestamp(xmin)) FROM " + table_name + " WHERE pg_xact_commit_timestamp(xmin) IS NOT NULL;"
cur.execute(extract_ts_query)
latest_timestamp = str(cur.fetchone()[0])
# Define needed timestamp to set the csvname we are using.
if latest_timestamp and latest_timestamp != 'None':
logger.info ("Latest Commit Timestamp from PostGres is: %s" % latest_timestamp)
latest_csvtimestamp = s3._convert_s3timestamp(latest_timestamp)
# However, if there is no timestamp at all, then use 24 '0's as the default.
else:
logger.info ("No Commit Timestamp available in PostGres. Using default.")
latest_csvtimestamp = '0' * 24
csvname = table_name + "-" + latest_csvtimestamp + ".csv"
local_csvname = database_name + "-" + csvname
# Respective paths needed
full_folder_path = ("%s/%s/%s/%s") % (DATALAKE_NAME, instance, database_name, table_name)
full_table_path = "%s/%s/%s/%s/%s" % (DATALAKE_NAME, instance, database_name, table_name, csvname)
s3_path = ("s3://%s") % (full_table_path)
# Grab the latest_timestamp from the folder. Ideally, there should only be one file under each table folder, but
# we will still segregate them as such for easy referencing.
table_timestamp = s3.latest_s3timestamp(full_folder_path)
# If we could not get a proper timestamp from s3, it means there is no initial file.
if not table_timestamp:
logger.info ("No CSV found in the respective S3 folder. Exporting all rows from table %s to csv." % table_name)
local_csvpath = '/tmp/' + local_csvname
with open(local_csvpath, "w") as csvfile:
# Get all of the rows and export them
export_query = "COPY " + table_name + " TO STDOUT WITH CSV HEADER"
cur.copy_expert(export_query, csvfile)
else:
logger.info ("CSV File found with Commit Timestamp: %s." % table_timestamp)
# Since the timestamp is down to the last milisecond, it is almost impossible for it be miss any rows.
# Thus, to save processing time, we share ignore any need to update the table csv if the timestamp is the same.
table_csvtimestamp = s3._convert_s3timestamp(table_timestamp)
if table_csvtimestamp == latest_csvtimestamp:
logger.info ("The latest Commit Timestamp (%s) and the latest S3 Timestamp (%s) are the same. Proceeeding to next table."
% (latest_timestamp, table_timestamp))
logger.info('\n')
break
# If timestamp is 0000.. , we should just use the min datetime to prevent error.
if table_csvtimestamp == '0' * 24:
table_timestamp = datetime.min
# Get only the rows after the committed timestamp retrieved and append that to the current csv.
# If there is no results, just go to the next table
export_query = "SELECT * FROM " + table_name + " WHERE pg_xact_commit_timestamp(xmin) > %s "
cur.execute(export_query, (table_timestamp,))
results = cur.fetchall()
if not results:
logger.info ("No new rows or updates from the current Database.")
logger.info('\n')
break
# Download the file to local storage first, then utilizing it - always save it under /tmp/ directory
# The file will also be deleted from s3
local_csvpath = s3.download_latest(full_folder_path, local_csvname)
with open(local_csvpath, 'a') as csvfile:
# Append by downloading the existing csv and append locally.
logger.info ("Writing rows into current local CSV File...")
for row in results:
writer = csv.writer(csvfile)
writer.writerow(row)
# Upload the file to the respective bucket - Replacing or uploading uses the same function
# This way of uploading would not resetting the entire path, so it is fine to not add a check.
s3.create_folder(full_folder_path, location)
s3.upload(local_csvpath, full_table_path)
latest_timestamp = s3._convert_timestamp(latest_csvtimestamp)
logger.info ('FILE PUT AT: %s with Latest Committed Time (%s)' % (s3_path, latest_timestamp))
# Deleting file from /tmp/ after use
os.remove(local_csvpath)
logger.info ('Local File Deleted: %s' % local_csvpath)
logger.info('\n')
break
# except psycopg2.Error as e:
# logger.error(e.pgerror)
# logger.info("Retrying for %s table." % table_name)
# logger.info('\n')
# continue
return
# Full Program to Run Locally-----------------------------------------------
def full_database_migration(instance_filters=None, database_filters=None, table_filters=None):
"""
-instance_filters (dict): for now it can be anything we are going to use to filter the instance:
1. db-cluster-id 2. db-instance-id
A filter name and value pair that is used to return a more specific list of results from a describe operation.
Filters can be used to match a set of resources by specific criteria, such as IDs.
The filters supported by a describe operation are documented with the describe operation.
E.g. [{"Name" :"tag:keyname", "Values":[""] }] - Must explicitly specify "Names" and "Values" pair.
-database_filters (list): simply only append the database names to this list so we only access those databases. By default,
it will access all
-table_filters (list): simply only append table names to this list so we only export those tables. By default it will export all.
"""
# Initiate RDS instance helper to iterate through RDS
rds = RDSHelper()
dbs = rds.describe_db_instances(filters=instance_filters)
logger.info ("Instances List: %s" % list(map(lambda x: x['DBInstanceIdentifier'], dbs)))
for db in dbs:
instance = db['DBInstanceIdentifier']
user = db['MasterUsername']
endpoint = db['Endpoint']
host = endpoint['Address']
port = endpoint['Port']
location = str(db['DBInstanceArn'].split(':')[3])
logger.info('instance: %s' % instance)
logger.info('user: %s' % user)
logger.info('endpoint: %s' % endpoint)
logger.info('host: %s' % host)
logger.info('port: %s' % port)
logger.info('location: %s' % location)
logger.info ("Accessing instance %s ..." % instance)
pg = PGHelper(dbname='postgres', host=host, port=port, user=user)
con = pg.conn()
cur = con.cursor()
def extract_name_query(title, qry):
logger.info('%s' % (title))
cur.execute(qry)
results = cur.fetchall()
result_names = list(map(lambda x: x[0], results))
return result_names
# List all available databases in the same instance
database_names = extract_name_query(
'Extracting databases...', 'SELECT * FROM pg_database')
# Filtering available databases
default_databases = ['postgres', 'rdsadmin', 'template1', 'template0']
database_names = list(
filter(lambda x: x not in default_databases, database_names))
if database_filters:
database_names = list(
filter(lambda x: x in database_filters, database_names))
logger.info("Databases List: %s" % database_names)
# for i in progressbar.progressbar(range(len(database_names))):
for database_name in database_names:
# database_name = database_names[i]
# Change database connection
logger.info("Accessing %s ..." % database_name)
con = pg.conn(database=database_name)
cur = con.cursor()
# List all available tables in the same instance
table_query = "SELECT table_name FROM information_schema.tables WHERE table_schema='public' AND table_type='BASE TABLE'"
table_names = extract_name_query('Extracting tables...', table_query)
# Filtering available tables
if table_filters:
table_names = list(
filter(lambda x: x in table_names, table_names))
# We should also filter away those tables that does not start with hubble as well: ['delayed_jobs', 'ar_internal_metadata', 'schema_migrations', 'audits']
# We are going to remove hubble_safety_permit_logs as well as it is too big to be exported at the moment.
misc_tables = ['delayed_jobs', 'ar_internal_metadata', 'schema_migrations', 'audits', 'hubble_safety_permit_logs']
table_names = list(
filter(lambda x: x not in misc_tables, table_names)
)
logger.info("Tables List: %s" % table_names)
progressbar.streams.wrap_stderr()
# for table_name in table_names:
for j in progressbar.progressbar(range(len(table_names))):
table_name = table_names[j]
# Rerun for the table when the exception fails
while True:
try:
# Save individual tables to CSV first - as we are sending one table at a time, we can del the csv files
# as soon as we have uploaded them
logger.info("Accessing %s ..." % table_name)
# We will save the time based on the latest commit time. Thus, there will be only one file for one table all time
# However, they might be of different timestamp due to difference in commit time.
s3 = S3Helper()
# Extract latest timestamp separately here:
# Use this query to extract the latest commit timestamp at that point of time
extract_ts_query = "SELECT MAX(pg_xact_commit_timestamp(xmin)) FROM " + table_name + " WHERE pg_xact_commit_timestamp(xmin) IS NOT NULL;"
cur.execute(extract_ts_query)
latest_timestamp = str(cur.fetchone()[0])
# Define needed timestamp to set the csvname we are using.
if latest_timestamp and latest_timestamp != 'None':
logger.info ("Latest Commit Timestamp from PostGres is: %s" % latest_timestamp)
latest_csvtimestamp = s3._convert_s3timestamp(latest_timestamp)
# However, if there is no timestamp at all, then use 24 '0's as the default.
else:
logger.info ("No Commit Timestamp available in PostGres. Using default.")
latest_csvtimestamp = '0' * 24
csvname = table_name + "-" + latest_csvtimestamp + ".csv"
# Respective paths needed
full_folder_path = ("%s/%s/%s/%s") % (DATALAKE_NAME, instance, database_name, table_name)
full_table_path = "%s/%s/%s/%s/%s" % (DATALAKE_NAME, instance, database_name, table_name, csvname)
s3_path = ("s3://%s") % (full_table_path)
# Grab the latest_timestamp from the folder. Ideally, there should only be one file under each table folder, but
# we will still segregate them as such for easy referencing.
table_timestamp = s3.latest_s3timestamp(full_folder_path)
# If we could not get a proper timestamp from s3, it means there is no initial file.
if not table_timestamp:
logger.info ("No CSV found in the respective S3 folder. Exporting all rows from table %s to csv." % table_name)
local_csvpath = '/tmp/' + csvname
with open(local_csvpath, "w") as csvfile:
# Get all of the rows and export them
export_query = "COPY " + table_name + " TO STDOUT WITH CSV HEADER"
cur.copy_expert(export_query, csvfile)
else:
logger.info ("CSV File found with Commit Timestamp: %s." % table_timestamp)
# Since the timestamp is down to the last milisecond, it is almost impossible for it be miss any rows.
# Thus, to save processing time, we share ignore any need to update the table csv if the timestamp is the same.
table_csvtimestamp = s3._convert_s3timestamp(table_timestamp)
if table_csvtimestamp == latest_csvtimestamp:
logger.info ("The latest Commit Timestamp (%s) and the latest S3 Timestamp (%s) are the same. Proceeeding to next table."
% (latest_timestamp, table_timestamp))
logger.info('\n')
break
# If timestamp is 0000.. , we should just use the min datetime to prevent error.
if table_csvtimestamp == '0' * 24:
table_timestamp = datetime.min
# Get only the rows after the committed timestamp retrieved and append that to the current csv.
# If there is no results, just go to the next table
export_query = "SELECT * FROM " + table_name + " WHERE pg_xact_commit_timestamp(xmin) > %s "
cur.execute(export_query, (table_timestamp,))
results = cur.fetchall()
if not results:
logger.info ("No new rows or updates from the current Database.")
logger.info('\n')
break
# Download the file to local storage first, then utilizing it - always save it under /tmp/ directory
# The file will also be deleted from s3
local_csvpath = s3.download_latest(full_folder_path)
with open(local_csvpath, 'a') as csvfile:
# Append by downloading the existing csv and append locally.
logger.info ("Writing rows into current local CSV File...")
for row in results:
writer = csv.writer(csvfile)
writer.writerow(row)
# Upload the file to the respective bucket - Replacing or uploading uses the same function
# This way of uploading would not resetting the entire path, so it is fine to not add a check.
s3.create_folder(full_folder_path, location)
s3.upload(local_csvpath, full_table_path)
latest_timestamp = s3._convert_timestamp(latest_csvtimestamp)
logger.info ('FILE PUT AT: %s with Latest Committed Time (%s)' % (s3_path, latest_timestamp))
# Deleting file from /tmp/ after use
os.remove(local_csvpath)
logger.info ('Local File Deleted')
logger.info('\n')
break
except psycopg2.Error as e:
logger.error(e.pgerror)
logger.info("Retrying for %s table." % table_name)
logger.info('\n')
continue
# Handler to Accomodate to Lambda Context Manager-----------------------------------------------
def handler(event=None, context=None):
# Start Time
start = time.time()
# The tag or name of the instance we want to enter
# test_server = 'arn:aws:rds:ap-southeast-1:160830294233:db:companya'
instance_tags = INSTANCE_TAGS
# The given companies
# database_tags = ['companyaworkers']
database_tags = DATABASE_TAGS
# The related modules needed
# correct_tables = []
full_database_migration(instance_filters=instance_tags, database_filters=database_tags)
end = time.time()
seconds = end - start
time_spent = str(timedelta(seconds=seconds))
logger.info("Time Spent on Script: %s" % time_spent)
if __name__ == "__main__":
handler()
| 51.31929 | 166 | 0.598358 | 2,783 | 23,145 | 4.847646 | 0.15415 | 0.035579 | 0.007116 | 0.012453 | 0.756801 | 0.741235 | 0.739975 | 0.736565 | 0.736565 | 0.714328 | 0 | 0.006405 | 0.318643 | 23,145 | 450 | 167 | 51.433333 | 0.849081 | 0.271635 | 0 | 0.721116 | 0 | 0.007968 | 0.167672 | 0.021526 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027888 | false | 0 | 0.043825 | 0 | 0.091633 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
acac28bdd06a5c907dd20ecd81d9ab5dc4484d2f | 258,357 | py | Python | instances/passenger_demand/pas-20210422-1717-int1/34.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210422-1717-int1/34.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210422-1717-int1/34.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null |
"""
PASSENGERS
"""
numPassengers = 19244
passenger_arriving = (
(5, 4, 5, 6, 3, 2, 1, 1, 4, 1, 0, 0, 0, 9, 3, 5, 2, 7, 3, 1, 1, 2, 1, 0, 0, 0), # 0
(4, 6, 10, 9, 2, 1, 1, 4, 1, 0, 1, 1, 0, 7, 1, 3, 2, 6, 3, 2, 2, 2, 1, 1, 1, 0), # 1
(5, 9, 8, 7, 5, 0, 0, 6, 1, 1, 0, 0, 0, 6, 6, 3, 2, 0, 3, 4, 2, 3, 6, 1, 0, 0), # 2
(5, 9, 3, 6, 7, 2, 1, 2, 4, 0, 0, 0, 0, 7, 5, 2, 6, 5, 1, 1, 1, 2, 3, 2, 0, 0), # 3
(5, 5, 2, 2, 6, 4, 6, 2, 0, 0, 1, 0, 0, 5, 4, 7, 0, 6, 1, 4, 1, 1, 1, 0, 0, 0), # 4
(7, 2, 8, 8, 4, 0, 2, 3, 0, 0, 1, 0, 0, 7, 5, 5, 3, 5, 6, 5, 2, 0, 3, 0, 1, 0), # 5
(9, 4, 6, 4, 6, 3, 2, 2, 3, 2, 0, 0, 0, 6, 7, 5, 5, 3, 4, 1, 1, 4, 6, 3, 0, 0), # 6
(9, 4, 3, 8, 8, 1, 1, 2, 3, 2, 0, 0, 0, 15, 7, 4, 4, 8, 3, 5, 2, 0, 2, 2, 2, 0), # 7
(6, 7, 7, 12, 6, 4, 3, 4, 4, 2, 0, 1, 0, 5, 6, 6, 5, 9, 7, 11, 3, 2, 3, 1, 1, 0), # 8
(7, 4, 9, 7, 2, 2, 2, 5, 4, 0, 1, 0, 0, 8, 5, 10, 6, 6, 5, 4, 1, 4, 4, 2, 1, 0), # 9
(12, 6, 10, 11, 5, 2, 1, 0, 1, 2, 2, 0, 0, 8, 8, 5, 5, 6, 7, 4, 0, 0, 0, 0, 0, 0), # 10
(4, 8, 0, 9, 2, 3, 6, 6, 5, 2, 2, 0, 0, 11, 10, 4, 6, 7, 3, 2, 2, 1, 3, 2, 1, 0), # 11
(9, 7, 6, 11, 8, 3, 2, 6, 5, 3, 0, 1, 0, 9, 10, 4, 7, 12, 4, 2, 1, 2, 3, 2, 1, 0), # 12
(5, 10, 9, 13, 5, 2, 6, 4, 3, 4, 2, 1, 0, 17, 10, 4, 2, 5, 6, 5, 1, 4, 2, 4, 0, 0), # 13
(10, 12, 6, 9, 7, 3, 4, 7, 4, 4, 3, 0, 0, 11, 8, 6, 6, 3, 6, 4, 3, 4, 2, 0, 3, 0), # 14
(8, 11, 10, 5, 10, 2, 5, 6, 4, 1, 1, 0, 0, 11, 11, 7, 2, 8, 5, 5, 4, 5, 0, 2, 1, 0), # 15
(7, 11, 10, 8, 6, 1, 3, 2, 2, 2, 1, 0, 0, 19, 6, 7, 6, 13, 7, 2, 2, 9, 1, 4, 0, 0), # 16
(9, 12, 4, 12, 13, 4, 2, 1, 2, 0, 0, 1, 0, 6, 10, 8, 5, 6, 6, 6, 1, 3, 2, 0, 1, 0), # 17
(12, 9, 8, 8, 5, 3, 6, 7, 2, 2, 4, 1, 0, 8, 12, 7, 3, 6, 4, 5, 2, 4, 0, 3, 0, 0), # 18
(15, 6, 10, 9, 10, 6, 5, 3, 3, 0, 0, 0, 0, 8, 8, 2, 5, 4, 7, 4, 2, 2, 4, 4, 1, 0), # 19
(14, 16, 9, 10, 3, 5, 1, 1, 3, 4, 3, 0, 0, 6, 13, 6, 2, 5, 7, 5, 1, 4, 2, 1, 0, 0), # 20
(5, 8, 3, 10, 10, 5, 2, 1, 5, 2, 0, 1, 0, 8, 10, 5, 6, 11, 3, 1, 0, 3, 4, 2, 1, 0), # 21
(6, 11, 14, 5, 6, 2, 3, 6, 4, 1, 1, 2, 0, 4, 6, 5, 7, 7, 7, 3, 7, 1, 2, 2, 2, 0), # 22
(9, 6, 3, 10, 4, 0, 3, 1, 4, 1, 1, 0, 0, 10, 8, 7, 1, 6, 7, 6, 3, 3, 3, 3, 1, 0), # 23
(9, 9, 9, 7, 13, 5, 1, 1, 4, 4, 2, 1, 0, 14, 6, 3, 4, 9, 8, 7, 3, 2, 2, 4, 0, 0), # 24
(19, 6, 5, 11, 8, 2, 7, 4, 4, 1, 2, 4, 0, 9, 13, 3, 8, 10, 4, 6, 2, 4, 3, 0, 2, 0), # 25
(14, 10, 10, 11, 11, 7, 6, 6, 5, 1, 1, 4, 0, 18, 6, 5, 4, 12, 4, 3, 4, 4, 5, 0, 2, 0), # 26
(14, 7, 6, 12, 3, 4, 4, 4, 3, 0, 2, 0, 0, 10, 7, 6, 7, 7, 6, 2, 9, 7, 5, 1, 0, 0), # 27
(11, 10, 5, 10, 10, 0, 2, 6, 10, 4, 0, 2, 0, 14, 5, 6, 5, 14, 7, 6, 2, 7, 3, 0, 1, 0), # 28
(10, 12, 10, 7, 7, 3, 8, 7, 4, 1, 0, 1, 0, 14, 10, 7, 4, 7, 5, 6, 1, 3, 4, 1, 1, 0), # 29
(8, 12, 6, 9, 8, 5, 8, 4, 5, 4, 2, 1, 0, 11, 12, 8, 8, 11, 3, 3, 2, 1, 7, 2, 2, 0), # 30
(11, 12, 8, 5, 3, 3, 3, 2, 7, 2, 0, 1, 0, 8, 7, 4, 4, 12, 5, 2, 1, 2, 1, 1, 2, 0), # 31
(5, 7, 9, 12, 10, 1, 2, 5, 6, 2, 1, 1, 0, 8, 7, 10, 8, 13, 5, 10, 4, 5, 5, 4, 0, 0), # 32
(13, 10, 13, 8, 7, 5, 3, 3, 5, 2, 1, 0, 0, 15, 12, 7, 8, 9, 8, 2, 3, 1, 2, 3, 0, 0), # 33
(3, 13, 8, 3, 5, 3, 3, 3, 3, 1, 3, 3, 0, 14, 5, 8, 9, 10, 4, 6, 0, 2, 4, 1, 0, 0), # 34
(11, 8, 9, 14, 9, 0, 7, 3, 3, 3, 1, 2, 0, 14, 11, 3, 3, 14, 6, 4, 3, 5, 4, 1, 0, 0), # 35
(12, 11, 11, 10, 3, 4, 7, 3, 6, 0, 3, 2, 0, 13, 7, 8, 2, 8, 7, 2, 2, 5, 2, 3, 0, 0), # 36
(12, 9, 10, 9, 10, 7, 7, 5, 3, 2, 1, 1, 0, 18, 12, 7, 4, 5, 5, 8, 1, 4, 2, 1, 3, 0), # 37
(8, 15, 9, 7, 5, 6, 4, 3, 7, 2, 0, 0, 0, 11, 5, 7, 5, 9, 6, 6, 3, 6, 2, 4, 1, 0), # 38
(10, 11, 12, 6, 7, 2, 6, 3, 2, 1, 1, 1, 0, 6, 10, 3, 6, 6, 3, 0, 3, 4, 4, 3, 0, 0), # 39
(9, 9, 8, 7, 9, 1, 6, 7, 10, 1, 2, 0, 0, 4, 11, 8, 6, 16, 3, 6, 1, 5, 5, 2, 0, 0), # 40
(6, 4, 7, 11, 11, 4, 3, 5, 5, 2, 1, 0, 0, 14, 4, 1, 11, 2, 5, 4, 3, 3, 4, 1, 3, 0), # 41
(8, 10, 14, 9, 3, 3, 3, 4, 4, 3, 1, 1, 0, 15, 16, 7, 5, 13, 4, 3, 1, 2, 2, 1, 0, 0), # 42
(11, 12, 7, 5, 7, 4, 4, 2, 1, 0, 2, 1, 0, 12, 3, 3, 7, 2, 0, 0, 2, 7, 3, 2, 0, 0), # 43
(15, 10, 15, 12, 6, 4, 6, 2, 4, 1, 0, 1, 0, 10, 4, 7, 5, 8, 1, 5, 3, 2, 2, 0, 0, 0), # 44
(11, 12, 3, 10, 11, 3, 5, 4, 2, 1, 3, 0, 0, 14, 12, 9, 7, 8, 8, 3, 1, 3, 3, 0, 1, 0), # 45
(8, 10, 6, 8, 9, 1, 2, 2, 3, 1, 1, 0, 0, 16, 8, 11, 5, 7, 1, 1, 1, 2, 2, 0, 1, 0), # 46
(10, 10, 15, 11, 10, 6, 1, 5, 3, 3, 3, 1, 0, 11, 7, 9, 10, 5, 3, 3, 5, 4, 5, 4, 1, 0), # 47
(11, 7, 10, 10, 9, 1, 1, 2, 10, 2, 2, 1, 0, 8, 1, 7, 5, 9, 4, 2, 1, 3, 4, 4, 1, 0), # 48
(13, 11, 8, 5, 5, 2, 5, 6, 2, 3, 1, 0, 0, 15, 9, 12, 6, 5, 7, 2, 2, 3, 2, 0, 3, 0), # 49
(10, 6, 3, 9, 10, 3, 5, 2, 1, 1, 2, 3, 0, 15, 9, 5, 7, 9, 4, 4, 1, 2, 2, 4, 0, 0), # 50
(10, 15, 9, 11, 9, 5, 2, 7, 7, 0, 1, 0, 0, 9, 11, 6, 8, 2, 6, 2, 2, 3, 3, 0, 0, 0), # 51
(11, 8, 10, 7, 10, 3, 3, 9, 7, 3, 1, 0, 0, 8, 10, 1, 2, 11, 5, 2, 2, 1, 3, 0, 5, 0), # 52
(9, 20, 5, 13, 6, 2, 2, 2, 1, 3, 1, 0, 0, 8, 6, 3, 8, 11, 6, 3, 3, 4, 6, 3, 0, 0), # 53
(10, 11, 8, 10, 6, 5, 6, 2, 6, 2, 1, 0, 0, 10, 12, 4, 4, 6, 5, 3, 3, 6, 5, 2, 0, 0), # 54
(4, 10, 13, 9, 9, 6, 2, 2, 3, 1, 1, 1, 0, 9, 10, 12, 5, 12, 1, 3, 0, 4, 4, 1, 2, 0), # 55
(6, 5, 7, 8, 10, 5, 5, 2, 5, 2, 2, 2, 0, 6, 8, 8, 4, 11, 7, 1, 3, 2, 2, 1, 2, 0), # 56
(10, 8, 7, 8, 8, 3, 3, 5, 5, 2, 0, 1, 0, 8, 10, 8, 9, 5, 10, 3, 0, 5, 3, 0, 0, 0), # 57
(13, 7, 11, 9, 10, 0, 4, 1, 3, 2, 1, 1, 0, 11, 8, 5, 6, 4, 4, 2, 3, 4, 3, 3, 0, 0), # 58
(6, 8, 12, 13, 10, 3, 4, 3, 3, 3, 1, 0, 0, 11, 16, 4, 0, 8, 4, 6, 1, 3, 4, 2, 1, 0), # 59
(11, 8, 5, 5, 6, 4, 2, 1, 3, 3, 1, 1, 0, 13, 2, 7, 5, 7, 2, 4, 3, 7, 3, 0, 1, 0), # 60
(15, 9, 15, 11, 4, 4, 4, 2, 7, 2, 3, 1, 0, 8, 8, 4, 7, 4, 4, 5, 6, 2, 3, 2, 1, 0), # 61
(6, 9, 11, 5, 12, 4, 2, 2, 5, 0, 1, 0, 0, 6, 9, 9, 4, 9, 3, 5, 3, 2, 3, 4, 2, 0), # 62
(13, 8, 6, 8, 9, 2, 3, 3, 2, 2, 3, 1, 0, 14, 8, 5, 5, 3, 6, 4, 2, 4, 1, 0, 1, 0), # 63
(8, 6, 14, 12, 10, 2, 3, 4, 5, 5, 1, 0, 0, 16, 9, 9, 8, 8, 4, 3, 2, 5, 3, 2, 0, 0), # 64
(14, 6, 10, 7, 10, 4, 3, 3, 2, 1, 1, 1, 0, 11, 8, 5, 3, 5, 5, 7, 1, 1, 2, 1, 1, 0), # 65
(12, 10, 7, 12, 6, 2, 2, 4, 6, 1, 0, 0, 0, 11, 9, 7, 1, 7, 4, 7, 1, 11, 6, 1, 0, 0), # 66
(12, 9, 6, 8, 6, 5, 0, 6, 3, 3, 1, 0, 0, 10, 6, 4, 10, 9, 3, 4, 4, 3, 0, 1, 0, 0), # 67
(10, 8, 4, 8, 8, 5, 5, 3, 4, 2, 2, 0, 0, 14, 8, 6, 5, 9, 8, 7, 5, 3, 2, 1, 0, 0), # 68
(12, 10, 5, 8, 11, 6, 3, 1, 5, 1, 3, 0, 0, 5, 7, 5, 5, 3, 6, 2, 5, 1, 6, 4, 1, 0), # 69
(7, 9, 6, 8, 11, 3, 3, 1, 9, 4, 2, 1, 0, 11, 11, 2, 5, 11, 5, 2, 3, 7, 3, 3, 1, 0), # 70
(9, 8, 8, 9, 11, 6, 3, 1, 4, 1, 2, 1, 0, 7, 8, 8, 7, 4, 8, 0, 3, 3, 1, 1, 0, 0), # 71
(7, 7, 5, 12, 11, 3, 6, 3, 4, 3, 2, 1, 0, 14, 11, 10, 5, 8, 11, 3, 3, 7, 1, 1, 0, 0), # 72
(15, 3, 14, 4, 3, 6, 2, 4, 1, 1, 2, 1, 0, 4, 12, 5, 7, 5, 5, 3, 3, 3, 6, 0, 1, 0), # 73
(10, 5, 6, 8, 3, 2, 5, 1, 4, 2, 1, 0, 0, 10, 9, 4, 1, 8, 4, 5, 4, 7, 4, 0, 2, 0), # 74
(10, 13, 11, 11, 5, 6, 3, 5, 2, 2, 2, 0, 0, 12, 8, 10, 7, 8, 3, 4, 1, 2, 6, 1, 0, 0), # 75
(9, 12, 18, 8, 9, 4, 4, 3, 3, 3, 2, 1, 0, 9, 11, 10, 4, 9, 3, 5, 1, 3, 2, 3, 0, 0), # 76
(10, 8, 12, 7, 10, 1, 4, 2, 3, 2, 3, 1, 0, 7, 7, 8, 7, 9, 3, 6, 5, 4, 5, 2, 2, 0), # 77
(11, 5, 8, 13, 7, 4, 5, 5, 5, 0, 2, 0, 0, 10, 10, 6, 4, 7, 3, 4, 4, 5, 0, 3, 0, 0), # 78
(8, 8, 9, 11, 8, 5, 5, 4, 3, 0, 1, 2, 0, 11, 8, 3, 2, 5, 7, 2, 3, 6, 2, 0, 0, 0), # 79
(7, 8, 10, 8, 6, 3, 3, 0, 3, 3, 2, 0, 0, 13, 4, 7, 4, 9, 2, 4, 1, 4, 0, 3, 0, 0), # 80
(10, 6, 6, 10, 5, 5, 4, 1, 4, 1, 3, 1, 0, 7, 5, 7, 4, 8, 7, 0, 3, 7, 2, 3, 0, 0), # 81
(4, 5, 11, 15, 7, 9, 2, 3, 7, 2, 1, 1, 0, 7, 9, 7, 3, 9, 3, 5, 1, 4, 4, 2, 2, 0), # 82
(13, 10, 7, 13, 6, 4, 1, 4, 2, 2, 2, 1, 0, 6, 10, 11, 4, 9, 5, 4, 1, 3, 2, 4, 1, 0), # 83
(13, 12, 9, 7, 3, 1, 4, 1, 2, 2, 0, 1, 0, 11, 7, 6, 2, 7, 1, 1, 2, 5, 1, 1, 2, 0), # 84
(9, 9, 10, 13, 4, 3, 7, 4, 7, 5, 0, 0, 0, 7, 6, 7, 10, 6, 3, 9, 5, 2, 1, 0, 0, 0), # 85
(13, 6, 12, 10, 9, 2, 6, 2, 7, 1, 5, 0, 0, 9, 7, 5, 4, 8, 10, 4, 1, 0, 5, 2, 0, 0), # 86
(9, 8, 10, 10, 4, 5, 5, 1, 6, 1, 0, 0, 0, 14, 5, 4, 6, 9, 5, 0, 1, 3, 3, 1, 1, 0), # 87
(6, 3, 7, 11, 7, 3, 7, 4, 3, 2, 3, 0, 0, 16, 8, 8, 2, 4, 7, 1, 5, 5, 4, 1, 0, 0), # 88
(10, 8, 7, 8, 8, 7, 0, 2, 3, 1, 0, 0, 0, 8, 7, 7, 6, 1, 4, 1, 2, 4, 6, 1, 1, 0), # 89
(21, 7, 8, 8, 14, 2, 3, 2, 10, 1, 1, 1, 0, 7, 9, 9, 4, 11, 2, 3, 3, 2, 1, 4, 0, 0), # 90
(11, 12, 10, 5, 6, 0, 5, 1, 1, 2, 1, 0, 0, 10, 8, 10, 7, 8, 6, 1, 1, 4, 3, 0, 0, 0), # 91
(4, 10, 5, 11, 4, 4, 5, 4, 7, 1, 1, 0, 0, 14, 5, 5, 8, 9, 2, 3, 2, 2, 2, 2, 1, 0), # 92
(9, 10, 11, 8, 5, 3, 7, 4, 2, 4, 2, 1, 0, 12, 10, 13, 4, 6, 3, 1, 0, 3, 5, 1, 1, 0), # 93
(9, 7, 13, 10, 6, 6, 4, 7, 1, 0, 1, 2, 0, 14, 7, 4, 4, 12, 4, 3, 1, 7, 2, 4, 1, 0), # 94
(7, 9, 6, 6, 15, 4, 7, 8, 5, 2, 2, 2, 0, 11, 9, 4, 1, 9, 4, 6, 2, 2, 3, 0, 1, 0), # 95
(5, 9, 9, 12, 5, 12, 2, 2, 0, 1, 2, 0, 0, 6, 6, 4, 6, 11, 1, 4, 3, 0, 5, 1, 1, 0), # 96
(8, 6, 14, 6, 6, 6, 2, 2, 6, 2, 1, 0, 0, 13, 7, 7, 4, 8, 4, 4, 6, 6, 2, 2, 2, 0), # 97
(9, 4, 6, 7, 17, 1, 5, 2, 4, 1, 1, 2, 0, 14, 13, 7, 6, 7, 2, 2, 0, 6, 0, 2, 0, 0), # 98
(11, 10, 13, 9, 8, 0, 3, 2, 6, 2, 1, 1, 0, 18, 6, 7, 4, 8, 6, 2, 3, 2, 4, 0, 1, 0), # 99
(11, 8, 2, 5, 6, 6, 6, 1, 4, 1, 0, 1, 0, 6, 3, 9, 3, 11, 3, 4, 3, 5, 3, 3, 0, 0), # 100
(10, 5, 13, 10, 5, 2, 4, 3, 3, 1, 1, 0, 0, 8, 10, 5, 2, 13, 3, 2, 2, 4, 3, 2, 2, 0), # 101
(10, 9, 7, 9, 6, 2, 1, 1, 5, 0, 1, 0, 0, 8, 9, 10, 4, 10, 3, 3, 0, 5, 2, 2, 1, 0), # 102
(18, 9, 13, 15, 10, 4, 1, 3, 5, 1, 1, 0, 0, 8, 6, 6, 6, 5, 4, 3, 4, 3, 5, 1, 0, 0), # 103
(9, 10, 6, 3, 7, 2, 3, 1, 2, 2, 2, 0, 0, 9, 7, 6, 3, 7, 6, 5, 3, 5, 3, 3, 0, 0), # 104
(14, 7, 3, 12, 10, 3, 2, 3, 2, 0, 0, 0, 0, 7, 6, 10, 7, 3, 5, 4, 1, 3, 5, 1, 2, 0), # 105
(8, 9, 12, 9, 8, 1, 4, 3, 2, 1, 1, 1, 0, 11, 10, 10, 4, 4, 3, 3, 4, 2, 4, 0, 0, 0), # 106
(8, 8, 10, 11, 9, 5, 2, 4, 2, 1, 2, 0, 0, 7, 8, 4, 7, 11, 5, 4, 1, 0, 3, 0, 0, 0), # 107
(7, 8, 7, 7, 5, 5, 2, 0, 2, 0, 1, 2, 0, 12, 6, 6, 4, 8, 3, 4, 5, 1, 0, 4, 1, 0), # 108
(9, 6, 11, 10, 8, 4, 6, 2, 5, 1, 5, 2, 0, 8, 11, 5, 7, 8, 2, 2, 1, 4, 1, 1, 1, 0), # 109
(8, 6, 13, 13, 5, 2, 3, 1, 5, 0, 0, 0, 0, 7, 16, 6, 7, 6, 5, 5, 1, 6, 4, 1, 0, 0), # 110
(11, 8, 7, 7, 5, 3, 5, 3, 5, 0, 0, 1, 0, 9, 9, 10, 7, 9, 2, 1, 3, 2, 4, 1, 2, 0), # 111
(4, 3, 9, 10, 9, 1, 4, 2, 1, 0, 1, 0, 0, 5, 11, 7, 4, 14, 3, 3, 2, 2, 3, 2, 1, 0), # 112
(6, 6, 7, 8, 11, 5, 5, 1, 3, 1, 0, 1, 0, 8, 9, 7, 4, 10, 6, 4, 2, 3, 1, 1, 1, 0), # 113
(10, 11, 6, 11, 10, 8, 1, 3, 5, 2, 1, 1, 0, 7, 9, 5, 5, 7, 1, 4, 2, 4, 2, 2, 0, 0), # 114
(12, 4, 8, 10, 8, 7, 4, 2, 5, 1, 1, 0, 0, 10, 10, 4, 4, 5, 1, 1, 2, 2, 2, 0, 1, 0), # 115
(6, 3, 6, 5, 9, 3, 1, 2, 2, 1, 2, 0, 0, 8, 5, 6, 4, 8, 1, 3, 4, 5, 3, 0, 0, 0), # 116
(8, 10, 13, 6, 9, 1, 2, 3, 6, 2, 1, 2, 0, 9, 5, 3, 3, 6, 3, 5, 2, 4, 1, 0, 0, 0), # 117
(4, 5, 5, 5, 9, 2, 6, 2, 5, 1, 2, 0, 0, 9, 4, 2, 4, 5, 4, 4, 1, 5, 4, 3, 0, 0), # 118
(11, 9, 2, 14, 8, 3, 1, 2, 3, 2, 3, 0, 0, 13, 8, 8, 5, 6, 1, 1, 3, 4, 1, 1, 1, 0), # 119
(7, 7, 14, 6, 5, 4, 1, 1, 5, 1, 2, 1, 0, 11, 11, 4, 5, 5, 1, 6, 1, 2, 2, 1, 1, 0), # 120
(8, 7, 5, 7, 11, 5, 2, 1, 3, 4, 1, 0, 0, 10, 4, 8, 3, 9, 0, 5, 4, 2, 6, 0, 0, 0), # 121
(6, 6, 7, 10, 5, 7, 3, 1, 4, 0, 1, 1, 0, 12, 5, 4, 2, 14, 4, 4, 1, 1, 3, 3, 0, 0), # 122
(6, 10, 6, 9, 3, 0, 1, 2, 2, 1, 0, 0, 0, 5, 7, 2, 5, 9, 0, 4, 3, 5, 0, 1, 0, 0), # 123
(6, 8, 7, 5, 2, 6, 3, 6, 3, 0, 1, 0, 0, 9, 7, 8, 0, 7, 7, 3, 2, 1, 0, 0, 2, 0), # 124
(15, 6, 9, 8, 8, 2, 2, 2, 4, 1, 1, 0, 0, 15, 9, 7, 4, 10, 5, 4, 0, 4, 3, 1, 1, 0), # 125
(10, 7, 7, 8, 8, 4, 2, 3, 6, 2, 1, 1, 0, 10, 5, 6, 2, 6, 4, 1, 3, 3, 2, 1, 1, 0), # 126
(7, 7, 6, 5, 6, 2, 2, 2, 3, 1, 0, 2, 0, 7, 7, 4, 1, 4, 3, 2, 3, 5, 2, 2, 2, 0), # 127
(7, 8, 6, 10, 5, 8, 3, 3, 8, 3, 0, 0, 0, 7, 5, 9, 5, 8, 7, 1, 5, 3, 5, 1, 1, 0), # 128
(13, 2, 4, 7, 5, 5, 3, 0, 2, 1, 3, 1, 0, 10, 10, 4, 6, 12, 2, 3, 0, 4, 2, 0, 0, 0), # 129
(10, 6, 6, 8, 5, 4, 4, 4, 7, 1, 1, 0, 0, 9, 5, 9, 3, 11, 1, 4, 4, 1, 3, 2, 0, 0), # 130
(7, 4, 7, 8, 6, 3, 1, 1, 4, 1, 2, 0, 0, 9, 5, 3, 3, 4, 2, 0, 2, 4, 2, 2, 0, 0), # 131
(9, 5, 9, 12, 8, 5, 1, 5, 5, 2, 0, 0, 0, 11, 4, 6, 4, 9, 4, 3, 1, 5, 3, 2, 0, 0), # 132
(13, 10, 6, 13, 2, 3, 6, 1, 4, 1, 1, 1, 0, 14, 5, 6, 7, 5, 3, 4, 1, 3, 2, 0, 0, 0), # 133
(9, 8, 6, 9, 9, 3, 5, 2, 0, 0, 2, 0, 0, 8, 10, 7, 4, 11, 5, 0, 4, 7, 0, 2, 0, 0), # 134
(8, 9, 10, 5, 11, 1, 4, 2, 4, 2, 1, 1, 0, 7, 9, 3, 5, 8, 4, 0, 2, 2, 0, 1, 1, 0), # 135
(7, 7, 11, 8, 9, 4, 1, 1, 1, 0, 1, 0, 0, 11, 5, 7, 4, 6, 0, 0, 1, 7, 2, 2, 1, 0), # 136
(7, 8, 11, 4, 7, 9, 0, 3, 4, 2, 0, 0, 0, 10, 7, 4, 1, 10, 1, 3, 4, 3, 4, 1, 1, 0), # 137
(12, 4, 5, 8, 3, 0, 3, 0, 3, 2, 1, 0, 0, 10, 5, 4, 6, 3, 2, 4, 3, 8, 2, 0, 0, 0), # 138
(17, 14, 11, 10, 7, 1, 1, 2, 5, 0, 1, 1, 0, 7, 6, 8, 2, 12, 1, 2, 2, 3, 3, 3, 1, 0), # 139
(2, 7, 4, 7, 7, 3, 5, 1, 1, 1, 1, 0, 0, 9, 6, 9, 3, 8, 4, 3, 2, 2, 2, 2, 1, 0), # 140
(10, 5, 7, 7, 6, 2, 4, 3, 3, 0, 0, 0, 0, 11, 5, 8, 5, 9, 7, 5, 2, 3, 3, 1, 1, 0), # 141
(12, 7, 7, 15, 4, 3, 0, 1, 1, 1, 1, 1, 0, 4, 5, 8, 10, 6, 2, 3, 3, 2, 2, 1, 0, 0), # 142
(7, 5, 5, 8, 4, 1, 2, 1, 5, 0, 0, 1, 0, 10, 8, 5, 1, 4, 3, 1, 2, 4, 3, 0, 0, 0), # 143
(9, 3, 5, 7, 9, 5, 2, 5, 2, 0, 1, 0, 0, 10, 4, 3, 1, 7, 1, 3, 0, 3, 5, 1, 0, 0), # 144
(6, 6, 2, 6, 5, 7, 3, 5, 3, 5, 0, 0, 0, 7, 9, 6, 3, 4, 6, 1, 0, 4, 2, 0, 0, 0), # 145
(9, 1, 8, 14, 9, 0, 3, 3, 3, 1, 0, 0, 0, 18, 9, 12, 2, 9, 4, 2, 2, 2, 1, 2, 0, 0), # 146
(8, 9, 5, 4, 5, 5, 2, 2, 1, 1, 1, 0, 0, 10, 6, 5, 1, 5, 3, 2, 6, 3, 3, 2, 0, 0), # 147
(8, 4, 4, 6, 5, 1, 4, 2, 3, 0, 0, 0, 0, 9, 8, 2, 6, 8, 4, 4, 4, 4, 5, 1, 0, 0), # 148
(7, 5, 11, 8, 6, 1, 6, 3, 3, 1, 0, 1, 0, 6, 7, 11, 3, 6, 1, 6, 2, 4, 5, 3, 0, 0), # 149
(7, 7, 8, 14, 6, 1, 0, 3, 3, 0, 0, 0, 0, 11, 2, 3, 6, 6, 6, 3, 2, 0, 4, 1, 0, 0), # 150
(12, 3, 4, 12, 2, 3, 2, 1, 3, 1, 1, 0, 0, 6, 13, 4, 6, 7, 4, 1, 2, 4, 2, 0, 1, 0), # 151
(4, 4, 11, 6, 13, 1, 2, 2, 9, 2, 0, 0, 0, 7, 3, 6, 3, 5, 1, 0, 1, 1, 3, 1, 0, 0), # 152
(7, 6, 12, 9, 10, 7, 2, 3, 3, 1, 0, 0, 0, 10, 4, 6, 5, 4, 1, 1, 3, 3, 4, 1, 1, 0), # 153
(9, 5, 4, 6, 5, 3, 3, 1, 7, 1, 0, 1, 0, 5, 9, 1, 1, 2, 4, 3, 3, 2, 1, 2, 0, 0), # 154
(9, 7, 8, 4, 6, 4, 0, 3, 4, 0, 1, 1, 0, 6, 6, 2, 8, 7, 2, 2, 8, 3, 2, 1, 2, 0), # 155
(3, 5, 6, 10, 4, 3, 1, 5, 4, 2, 2, 1, 0, 10, 6, 5, 1, 4, 3, 2, 1, 2, 3, 1, 0, 0), # 156
(6, 6, 8, 4, 3, 3, 4, 3, 2, 2, 0, 0, 0, 11, 3, 4, 4, 7, 3, 3, 3, 6, 2, 1, 1, 0), # 157
(4, 4, 4, 6, 5, 4, 4, 1, 5, 0, 2, 0, 0, 2, 7, 2, 6, 3, 6, 4, 1, 4, 4, 2, 0, 0), # 158
(6, 8, 8, 7, 6, 3, 3, 3, 2, 2, 0, 2, 0, 10, 6, 3, 3, 5, 3, 3, 1, 4, 1, 1, 1, 0), # 159
(6, 10, 5, 9, 7, 5, 3, 1, 5, 1, 1, 0, 0, 3, 4, 8, 1, 6, 1, 5, 4, 6, 3, 3, 1, 0), # 160
(9, 5, 8, 6, 4, 3, 4, 1, 5, 1, 0, 0, 0, 8, 7, 2, 4, 6, 3, 1, 1, 3, 2, 3, 0, 0), # 161
(5, 1, 8, 4, 8, 3, 2, 0, 8, 0, 2, 0, 0, 11, 8, 1, 4, 4, 4, 4, 5, 3, 1, 0, 1, 0), # 162
(12, 4, 3, 5, 4, 0, 1, 2, 3, 1, 0, 0, 0, 8, 7, 4, 3, 8, 4, 2, 2, 2, 0, 1, 0, 0), # 163
(12, 8, 5, 5, 7, 5, 2, 3, 1, 0, 0, 0, 0, 8, 9, 3, 0, 10, 2, 1, 4, 1, 2, 1, 1, 0), # 164
(9, 3, 7, 7, 8, 1, 3, 2, 4, 0, 2, 0, 0, 8, 8, 4, 4, 7, 6, 2, 0, 4, 4, 2, 0, 0), # 165
(7, 5, 5, 4, 4, 3, 4, 1, 4, 0, 2, 0, 0, 8, 6, 1, 1, 10, 1, 4, 1, 2, 1, 4, 1, 0), # 166
(4, 6, 5, 5, 6, 5, 1, 3, 2, 2, 0, 2, 0, 7, 11, 2, 4, 7, 1, 4, 1, 4, 1, 1, 0, 0), # 167
(10, 4, 9, 7, 6, 3, 1, 3, 4, 2, 0, 0, 0, 13, 9, 7, 5, 3, 2, 2, 0, 1, 4, 0, 0, 0), # 168
(6, 8, 4, 7, 4, 1, 1, 1, 2, 0, 0, 1, 0, 12, 4, 3, 2, 11, 0, 3, 2, 4, 3, 3, 1, 0), # 169
(2, 5, 2, 3, 7, 2, 2, 2, 2, 0, 0, 1, 0, 7, 3, 3, 3, 9, 5, 3, 2, 2, 2, 1, 0, 0), # 170
(5, 6, 4, 2, 4, 2, 1, 2, 2, 0, 1, 0, 0, 3, 5, 3, 2, 4, 3, 3, 0, 6, 1, 1, 0, 0), # 171
(4, 4, 5, 7, 3, 4, 1, 1, 0, 2, 0, 0, 0, 5, 2, 6, 8, 7, 4, 0, 1, 4, 3, 1, 0, 0), # 172
(6, 4, 7, 3, 2, 2, 1, 0, 1, 1, 1, 1, 0, 9, 9, 4, 0, 6, 1, 4, 0, 4, 1, 0, 0, 0), # 173
(6, 4, 2, 3, 7, 4, 1, 2, 0, 1, 1, 0, 0, 4, 8, 0, 2, 4, 2, 0, 0, 1, 0, 2, 0, 0), # 174
(4, 1, 7, 6, 4, 2, 2, 2, 3, 0, 0, 0, 0, 5, 4, 1, 0, 4, 4, 1, 2, 2, 0, 1, 0, 0), # 175
(6, 2, 4, 1, 7, 3, 2, 1, 3, 1, 0, 1, 0, 7, 5, 3, 4, 4, 2, 1, 1, 2, 1, 2, 0, 0), # 176
(1, 1, 7, 5, 3, 2, 4, 1, 0, 1, 1, 1, 0, 4, 0, 4, 1, 6, 2, 1, 0, 0, 0, 2, 0, 0), # 177
(4, 4, 3, 2, 7, 1, 0, 0, 2, 1, 0, 0, 0, 3, 2, 3, 4, 6, 1, 1, 0, 3, 1, 0, 0, 0), # 178
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 179
)
station_arriving_intensity = (
(5.020865578371768, 5.525288559693166, 5.211283229612507, 6.214667773863432, 5.554685607609612, 3.1386549320373387, 4.146035615373915, 4.653176172979423, 6.090099062168007, 3.9580150155223697, 4.205265163885603, 4.897915078306173, 5.083880212578363), # 0
(5.354327152019974, 5.890060694144759, 5.555346591330152, 6.625144253276616, 5.922490337474237, 3.3459835840425556, 4.419468941263694, 4.959513722905708, 6.492245326332909, 4.21898069227715, 4.483096135956131, 5.221216660814354, 5.419791647439855), # 1
(5.686723008979731, 6.253385170890979, 5.8980422855474135, 7.033987704664794, 6.288962973749744, 3.5524851145124448, 4.691818507960704, 5.264625247904419, 6.892786806877549, 4.478913775020546, 4.759823148776313, 5.543232652053055, 5.75436482820969), # 2
(6.016757793146562, 6.613820501936447, 6.238010869319854, 7.439576407532074, 6.652661676001902, 3.757340622585113, 4.962003641647955, 5.567301157494507, 7.290135160921093, 4.736782698426181, 5.0343484118273825, 5.862685684930461, 6.086272806254225), # 3
(6.343136148415981, 6.9699251992857745, 6.573892899703036, 7.840288641382569, 7.012144603796492, 3.9597312073986677, 5.2289436685084585, 5.866331861194915, 7.682702045582707, 4.991555897167679, 5.305574134590575, 6.178298392354764, 6.414188632939817), # 4
(6.66456271868351, 7.320257774943588, 6.9043289337525175, 8.234502685720393, 7.36596991669928, 4.158837968091214, 5.491557914725224, 6.160507768524592, 8.068899117981559, 5.242201805918663, 5.572402526547132, 6.488793407234148, 6.736785359632827), # 5
(6.979742147844666, 7.663376740914501, 7.227959528523866, 8.620596820049652, 7.712695774276043, 4.353842003800864, 5.7487657064812625, 6.4486192890024885, 8.447138035236815, 5.487688859352758, 5.833735797178282, 6.792893362476808, 7.052736037699606), # 6
(7.2873790797949685, 7.997840609203132, 7.543425241072635, 8.996949323874462, 8.050880336092554, 4.543924413665721, 5.999486369959585, 6.729456832147552, 8.815830454467644, 5.726985492143586, 6.088476155965268, 7.089320890990929, 7.360713718506519), # 7
(7.586178158429934, 8.322207891814099, 7.849366628454396, 9.361938476698928, 8.379081761714586, 4.7282662968238895, 6.2426392313431975, 7.001810807478725, 9.173388032793206, 5.959060138964774, 6.335525812389321, 7.376798625684702, 7.659391453419917), # 8
(7.874844027645085, 8.635037100752022, 8.144424247724704, 9.713942558027169, 8.69585821070791, 4.906048752413484, 6.47714361681512, 7.264471624514963, 9.518222427332674, 6.182881234489941, 6.573786975931678, 7.654049199466313, 7.947442293806162), # 9
(8.152081331335932, 8.934886748021516, 8.427238655939124, 10.051339847363288, 8.9997678426383, 5.076452879572607, 6.701918852558355, 7.516229692775211, 9.848745295205214, 6.397417213392714, 6.802161856073574, 7.919795245243952, 8.22353929103161), # 10
(8.416594713398005, 9.220315345627206, 8.696450410153215, 10.372508624211397, 9.289368817071534, 5.238659777439368, 6.915884264755916, 7.7558754217784145, 10.163368293529993, 6.601636510346719, 7.019552662296249, 8.17275939592581, 8.486355496462611), # 11
(8.667088817726812, 9.489881405573698, 8.95070006742254, 10.675827168075612, 9.563219293573377, 5.391850545151869, 7.1179591795908115, 7.982199221043521, 10.460503079426179, 6.794507560025572, 7.224861604080934, 8.411664284420068, 8.734563961465534), # 12
(8.902268288217876, 9.74214343986562, 9.188628184802662, 10.959673758460044, 9.819877431709601, 5.5352062818482235, 7.307062923246056, 8.193991500089481, 10.738561310012932, 6.974998797102904, 7.416990890908869, 8.63523254363492, 8.966837737406735), # 13
(9.120837768766716, 9.975659960507588, 9.408875319349146, 11.222426674868792, 10.05790139104599, 5.667908086666534, 7.482114821904661, 8.390042668435246, 10.995954642409421, 7.142078656252334, 7.594842732261284, 8.84218680647856, 9.181849875652563), # 14
(9.321501903268855, 10.188989479504217, 9.610082028117542, 11.462464196805985, 10.275849331148308, 5.789137058744912, 7.642034201749626, 8.569143135599756, 11.23109473373482, 7.29471557214749, 7.757319337619419, 9.031249705859171, 9.37827342756938), # 15
(9.5029653356198, 10.380690508860132, 9.790888868163425, 11.678164603775716, 10.472279411582333, 5.898074297221459, 7.785740388963976, 8.73008331110196, 11.442393241108286, 7.431877979461996, 7.9033229164645125, 9.20114387468494, 9.554781444523545), # 16
(9.663932709715075, 10.549321560579946, 9.949936396542352, 11.867906175282112, 10.645749791913838, 5.993900901234285, 7.9121527097307105, 8.871653604460818, 11.628261821648984, 7.552534312869467, 8.031755678277799, 9.350591945864055, 9.710046977881415), # 17
(9.803108669450204, 10.693441146668274, 10.08586517030988, 12.030067190829278, 10.794818631708589, 6.075797969921503, 8.020190490232851, 8.99264442519526, 11.787112132476096, 7.6556530070435365, 8.141519832540508, 9.478316552304715, 9.842743079009345), # 18
(9.919197858720699, 10.811607779129744, 10.197315746521578, 12.163025929921314, 10.918044090532366, 6.142946602421208, 8.108773056653394, 9.091846182824245, 11.917355830708779, 7.740202496657828, 8.231517588733878, 9.583040326915096, 9.951542799273696), # 19
(10.010904921422082, 10.902379969968962, 10.282928682233003, 12.265160672062354, 11.013984327950944, 6.194527897871518, 8.176819735175362, 9.168049286866717, 12.017404573466198, 7.805151216385958, 8.30065115633915, 9.66348590260339, 10.035119190040824), # 20
(10.076934501449866, 10.964316231190558, 10.341344534499719, 12.334849696756486, 11.081197503530088, 6.229722955410535, 8.223249851981759, 9.220044146841623, 12.085670017867521, 7.849467600901555, 8.34782274483756, 9.718375912277793, 10.092145302677078), # 21
(10.115991242699579, 10.995975074799144, 10.371203860377285, 12.370471283507836, 11.118241776835575, 6.247712874176367, 8.2469827332556, 9.246621172267915, 12.120563821031915, 7.872120084878242, 8.37193456371034, 9.74643298884649, 10.121294188548827), # 22
(10.13039336334264, 10.999723593964335, 10.374923182441702, 12.374930812757203, 11.127732056032597, 6.25, 8.249804002259339, 9.249493827160494, 12.124926234567901, 7.874792272519433, 8.37495803716174, 9.749897576588934, 10.125), # 23
(10.141012413034153, 10.997537037037038, 10.374314814814815, 12.374381944444446, 11.133107613614852, 6.25, 8.248253812636166, 9.2455, 12.124341666666666, 7.87315061728395, 8.37462457912458, 9.749086419753086, 10.125), # 24
(10.15140723021158, 10.993227023319616, 10.373113854595337, 12.373296039094651, 11.138364945594503, 6.25, 8.24519890260631, 9.237654320987655, 12.123186728395062, 7.869918838591678, 8.373963399426362, 9.747485139460448, 10.125), # 25
(10.161577019048034, 10.986859396433472, 10.371336762688616, 12.37168544238683, 11.143503868421105, 6.25, 8.240686718308721, 9.226104938271606, 12.1214762345679, 7.865150708733425, 8.372980483850855, 9.745115683584821, 10.125), # 26
(10.171520983716636, 10.978499999999999, 10.369, 12.369562499999999, 11.148524198544214, 6.25, 8.234764705882354, 9.211, 12.119225, 7.858899999999999, 8.371681818181818, 9.742, 10.125), # 27
(10.181238328390501, 10.968214677640603, 10.366120027434842, 12.366939557613168, 11.153425752413401, 6.25, 8.22748031146615, 9.192487654320988, 12.116447839506172, 7.851220484682213, 8.370073388203018, 9.73816003657979, 10.125), # 28
(10.19072825724275, 10.95606927297668, 10.362713305898492, 12.36382896090535, 11.15820834647822, 6.25, 8.218880981199066, 9.170716049382715, 12.113159567901235, 7.842165935070874, 8.368161179698216, 9.733617741197987, 10.125), # 29
(10.199989974446497, 10.94212962962963, 10.358796296296296, 12.360243055555555, 11.162871797188236, 6.25, 8.209014161220043, 9.145833333333332, 12.109375, 7.83179012345679, 8.365951178451178, 9.728395061728394, 10.125), # 30
(10.209022684174858, 10.926461591220852, 10.354385459533608, 12.356194187242798, 11.167415920993008, 6.25, 8.19792729766804, 9.117987654320988, 12.105108950617284, 7.820146822130773, 8.363449370245666, 9.722513946044812, 10.125), # 31
(10.217825590600954, 10.909131001371742, 10.349497256515773, 12.35169470164609, 11.171840534342095, 6.25, 8.185667836681999, 9.087327160493828, 12.100376234567902, 7.807289803383631, 8.360661740865444, 9.715996342021034, 10.125), # 32
(10.226397897897897, 10.890203703703703, 10.344148148148149, 12.346756944444444, 11.176145453685063, 6.25, 8.172283224400871, 9.054, 12.095191666666667, 7.793272839506173, 8.357594276094275, 9.708864197530863, 10.125), # 33
(10.23473881023881, 10.869745541838133, 10.338354595336076, 12.341393261316872, 11.180330495471466, 6.25, 8.15782090696361, 9.018154320987653, 12.089570061728397, 7.778149702789209, 8.354252961715924, 9.701139460448102, 10.125), # 34
(10.242847531796807, 10.847822359396433, 10.332133058984912, 12.335615997942385, 11.18439547615087, 6.25, 8.142328330509159, 8.979938271604938, 12.083526234567902, 7.761974165523548, 8.350643783514153, 9.692844078646548, 10.125), # 35
(10.250723266745005, 10.824499999999999, 10.3255, 12.3294375, 11.188340212172836, 6.25, 8.12585294117647, 8.9395, 12.077074999999999, 7.7448, 8.346772727272727, 9.684000000000001, 10.125), # 36
(10.258365219256524, 10.799844307270233, 10.318471879286694, 12.322870113168724, 11.192164519986921, 6.25, 8.108442185104494, 8.896987654320988, 12.070231172839506, 7.726680978509374, 8.34264577877541, 9.674629172382259, 10.125), # 37
(10.265772593504476, 10.773921124828533, 10.311065157750342, 12.315926183127573, 11.19586821604269, 6.25, 8.09014350843218, 8.85254938271605, 12.063009567901235, 7.707670873342479, 8.33826892380596, 9.664753543667125, 10.125), # 38
(10.272944593661986, 10.746796296296296, 10.303296296296297, 12.308618055555556, 11.199451116789703, 6.25, 8.071004357298476, 8.806333333333333, 12.055425000000001, 7.687823456790124, 8.333648148148148, 9.654395061728394, 10.125), # 39
(10.279880423902163, 10.718535665294924, 10.295181755829903, 12.300958076131687, 11.202913038677519, 6.25, 8.05107217784233, 8.758487654320989, 12.047492283950618, 7.667192501143119, 8.328789437585733, 9.643575674439873, 10.125), # 40
(10.286579288398128, 10.689205075445816, 10.286737997256516, 12.29295859053498, 11.206253798155702, 6.25, 8.030394416202695, 8.709160493827161, 12.0392262345679, 7.645831778692272, 8.323698777902482, 9.632317329675354, 10.125), # 41
(10.293040391323, 10.658870370370371, 10.277981481481483, 12.284631944444445, 11.209473211673808, 6.25, 8.009018518518518, 8.6585, 12.030641666666668, 7.623795061728395, 8.318382154882155, 9.620641975308642, 10.125), # 42
(10.299262936849892, 10.627597393689987, 10.268928669410151, 12.275990483539095, 11.212571095681403, 6.25, 7.98699193092875, 8.606654320987655, 12.021753395061728, 7.601136122542296, 8.312845554308517, 9.608571559213535, 10.125), # 43
(10.305246129151927, 10.595451989026063, 10.259596021947875, 12.267046553497943, 11.215547266628045, 6.25, 7.964362099572339, 8.553771604938273, 12.0125762345679, 7.577908733424783, 8.307094961965332, 9.596128029263832, 10.125), # 44
(10.310989172402216, 10.5625, 10.25, 12.2578125, 11.218401540963296, 6.25, 7.9411764705882355, 8.5, 12.003124999999999, 7.554166666666667, 8.301136363636363, 9.583333333333332, 10.125), # 45
(10.31649127077388, 10.528807270233196, 10.240157064471878, 12.24830066872428, 11.221133735136716, 6.25, 7.917482490115388, 8.445487654320988, 11.993414506172838, 7.529963694558756, 8.294975745105374, 9.57020941929584, 10.125), # 46
(10.321751628440035, 10.49443964334705, 10.230083676268862, 12.238523405349794, 11.223743665597867, 6.25, 7.893327604292747, 8.390382716049382, 11.983459567901235, 7.505353589391861, 8.288619092156129, 9.55677823502515, 10.125), # 47
(10.326769449573796, 10.459462962962963, 10.219796296296296, 12.228493055555557, 11.22623114879631, 6.25, 7.868759259259259, 8.334833333333334, 11.973275000000001, 7.4803901234567896, 8.28207239057239, 9.543061728395061, 10.125), # 48
(10.331543938348286, 10.42394307270233, 10.209311385459534, 12.218221965020577, 11.228596001181607, 6.25, 7.8438249011538765, 8.278987654320987, 11.96287561728395, 7.455127069044353, 8.275341626137923, 9.529081847279379, 10.125), # 49
(10.336074298936616, 10.387945816186559, 10.198645404663925, 12.207722479423868, 11.230838039203315, 6.25, 7.81857197611555, 8.222993827160494, 11.9522762345679, 7.429618198445358, 8.268432784636488, 9.514860539551899, 10.125), # 50
(10.34035973551191, 10.351537037037037, 10.187814814814814, 12.197006944444444, 11.232957079310998, 6.25, 7.793047930283224, 8.167, 11.941491666666668, 7.403917283950617, 8.261351851851853, 9.50041975308642, 10.125), # 51
(10.344399452247279, 10.314782578875173, 10.176836076817558, 12.186087705761317, 11.234952937954214, 6.25, 7.767300209795852, 8.111154320987653, 11.930536728395062, 7.3780780978509375, 8.254104813567777, 9.485781435756746, 10.125), # 52
(10.348192653315843, 10.27774828532236, 10.165725651577505, 12.174977109053497, 11.23682543158253, 6.25, 7.741376260792383, 8.055604938271605, 11.919426234567903, 7.3521544124371285, 8.246697655568026, 9.470967535436671, 10.125), # 53
(10.351738542890716, 10.2405, 10.154499999999999, 12.1636875, 11.238574376645502, 6.25, 7.715323529411765, 8.000499999999999, 11.908175, 7.3262, 8.239136363636362, 9.456, 10.125), # 54
(10.355036325145022, 10.203103566529492, 10.143175582990398, 12.152231224279834, 11.24019958959269, 6.25, 7.689189461792948, 7.945987654320987, 11.896797839506172, 7.300268632830361, 8.231426923556553, 9.44090077732053, 10.125), # 55
(10.358085204251871, 10.165624828532236, 10.131768861454047, 12.140620627572016, 11.241700886873659, 6.25, 7.663021504074881, 7.892216049382716, 11.885309567901235, 7.274414083219022, 8.223575321112358, 9.425691815272062, 10.125), # 56
(10.360884384384383, 10.12812962962963, 10.120296296296297, 12.128868055555555, 11.243078084937967, 6.25, 7.636867102396514, 7.839333333333334, 11.873725, 7.24869012345679, 8.215587542087542, 9.410395061728394, 10.125), # 57
(10.36343306971568, 10.090683813443073, 10.108774348422497, 12.116985853909464, 11.244331000235174, 6.25, 7.610773702896797, 7.787487654320987, 11.862058950617284, 7.223150525834477, 8.20746957226587, 9.395032464563329, 10.125), # 58
(10.36573046441887, 10.053353223593964, 10.097219478737998, 12.104986368312757, 11.245459449214845, 6.25, 7.584788751714678, 7.736827160493827, 11.850326234567902, 7.197849062642891, 8.1992273974311, 9.379625971650663, 10.125), # 59
(10.367775772667077, 10.016203703703704, 10.085648148148147, 12.092881944444445, 11.246463248326537, 6.25, 7.558959694989106, 7.6875, 11.838541666666668, 7.172839506172839, 8.190867003367003, 9.364197530864198, 10.125), # 60
(10.369568198633415, 9.97930109739369, 10.0740768175583, 12.080684927983539, 11.247342214019811, 6.25, 7.533333978859033, 7.639654320987654, 11.826720061728395, 7.148175628715135, 8.182394375857339, 9.348769090077733, 10.125), # 61
(10.371106946491004, 9.942711248285322, 10.062521947873801, 12.068407664609055, 11.248096162744234, 6.25, 7.507959049463406, 7.5934382716049384, 11.814876234567901, 7.123911202560586, 8.17381550068587, 9.333362597165067, 10.125), # 62
(10.37239122041296, 9.9065, 10.051, 12.056062500000001, 11.248724910949356, 6.25, 7.482882352941176, 7.549, 11.803025, 7.100099999999999, 8.165136363636364, 9.318, 10.125), # 63
(10.373420224572397, 9.870733196159122, 10.039527434842249, 12.043661779835391, 11.249228275084748, 6.25, 7.458151335431292, 7.506487654320988, 11.791181172839506, 7.076795793324188, 8.156362950492579, 9.302703246456334, 10.125), # 64
(10.374193163142438, 9.835476680384087, 10.0281207133059, 12.031217849794238, 11.249606071599967, 6.25, 7.433813443072703, 7.466049382716049, 11.779359567901235, 7.054052354823959, 8.147501247038285, 9.287494284407863, 10.125), # 65
(10.374709240296196, 9.800796296296298, 10.016796296296297, 12.018743055555555, 11.249858116944573, 6.25, 7.409916122004357, 7.427833333333334, 11.767575, 7.031923456790123, 8.138557239057238, 9.272395061728396, 10.125), # 66
(10.374967660206792, 9.766757887517146, 10.005570644718793, 12.006249742798353, 11.24998422756813, 6.25, 7.386506818365206, 7.391987654320989, 11.755842283950617, 7.010462871513489, 8.12953691233321, 9.257427526291723, 10.125), # 67
(10.374791614480825, 9.733248639320323, 9.994405949931412, 11.993641740472357, 11.249877955297345, 6.2498840115836, 7.363515194829646, 7.358343850022862, 11.744087848651121, 6.989620441647166, 8.120285988540376, 9.242530021899743, 10.124875150034294), # 68
(10.373141706924315, 9.699245519713262, 9.982988425925925, 11.980283514492752, 11.248910675381262, 6.248967078189301, 7.340268181346613, 7.325098765432099, 11.731797839506173, 6.968806390704429, 8.10986283891547, 9.227218973359324, 10.12388599537037), # 69
(10.369885787558895, 9.664592459843355, 9.971268432784635, 11.966087124261943, 11.246999314128942, 6.247161255906112, 7.31666013456137, 7.291952446273434, 11.718902892089622, 6.947919524462734, 8.09814888652608, 9.211422761292809, 10.121932334533609), # 70
(10.365069660642929, 9.62931016859153, 9.959250085733881, 11.951073503757382, 11.244168078754136, 6.244495808565767, 7.292701659538988, 7.258915866483768, 11.705422210791038, 6.926960359342639, 8.085187370783862, 9.195152937212715, 10.119039887688615), # 71
(10.358739130434783, 9.593419354838709, 9.946937499999999, 11.935263586956522, 11.240441176470588, 6.2410000000000005, 7.268403361344538, 7.226, 11.691375, 6.905929411764705, 8.07102153110048, 9.17842105263158, 10.115234375), # 72
(10.35094000119282, 9.556940727465816, 9.934334790809327, 11.918678307836823, 11.23584281449205, 6.236703094040542, 7.243775845043092, 7.193215820759031, 11.676780464106082, 6.884827198149493, 8.055694606887588, 9.161238659061919, 10.110541516632374), # 73
(10.341718077175404, 9.519894995353777, 9.921446073388202, 11.901338600375738, 11.230397200032275, 6.231634354519128, 7.218829715699722, 7.160574302697759, 11.661657807498857, 6.863654234917561, 8.039249837556856, 9.143617308016267, 10.104987032750344), # 74
(10.331119162640901, 9.482302867383511, 9.908275462962962, 11.883265398550725, 11.224128540305012, 6.22582304526749, 7.1935755783795, 7.128086419753086, 11.6460262345679, 6.84241103848947, 8.021730462519935, 9.125568551007147, 10.098596643518519), # 75
(10.319189061847677, 9.44418505243595, 9.894827074759945, 11.864479636339238, 11.217061042524005, 6.219298430117361, 7.168024038147495, 7.095763145861912, 11.629904949702789, 6.821098125285779, 8.003179721188491, 9.107103939547082, 10.091396069101508), # 76
(10.305973579054093, 9.40556225939201, 9.881105024005485, 11.845002247718732, 11.209218913903008, 6.212089772900472, 7.142185700068779, 7.063615454961135, 11.613313157293096, 6.7997160117270505, 7.983640852974187, 9.088235025148606, 10.083411029663925), # 77
(10.291518518518519, 9.366455197132618, 9.867113425925925, 11.824854166666666, 11.200626361655774, 6.204226337448559, 7.116071169208425, 7.031654320987655, 11.596270061728394, 6.7782652142338415, 7.9631570972886765, 9.068973359324238, 10.074667245370371), # 78
(10.275869684499314, 9.326884574538697, 9.8528563957476, 11.804056327160493, 11.191307592996047, 6.195737387593354, 7.089691050631501, 6.9998907178783725, 11.578794867398262, 6.756746249226714, 7.941771693543622, 9.049330493586504, 10.065190436385459), # 79
(10.259072881254847, 9.286871100491172, 9.838338048696844, 11.782629663177671, 11.181286815137579, 6.18665218716659, 7.063055949403081, 6.968335619570188, 11.560906778692273, 6.7351596331262265, 7.919527881150688, 9.029317979447935, 10.0550063228738), # 80
(10.241173913043479, 9.246435483870968, 9.8235625, 11.760595108695654, 11.170588235294117, 6.177, 7.036176470588235, 6.937, 11.542625, 6.713505882352941, 7.8964688995215315, 9.008947368421053, 10.044140624999999), # 81
(10.222218584123576, 9.205598433559008, 9.808533864883403, 11.737973597691894, 11.159236060679415, 6.166810089925317, 7.009063219252036, 6.90589483310471, 11.52396873571102, 6.691785513327416, 7.872637988067813, 8.988230212018387, 10.03261906292867), # 82
(10.202252698753504, 9.164380658436214, 9.793256258573388, 11.714786064143853, 11.147254498507221, 6.156111720774272, 6.981726800459553, 6.875031092821216, 11.504957190214906, 6.669999042470211, 7.848078386201194, 8.967178061752461, 10.020467356824417), # 83
(10.181322061191626, 9.122802867383513, 9.777733796296296, 11.691053442028986, 11.134667755991286, 6.144934156378601, 6.954177819275858, 6.844419753086419, 11.485609567901234, 6.648146986201889, 7.822833333333333, 8.945802469135803, 10.007711226851852), # 84
(10.159472475696308, 9.080885769281826, 9.761970593278463, 11.666796665324746, 11.121500040345357, 6.133306660570035, 6.926426880766024, 6.814071787837221, 11.465945073159578, 6.626229860943005, 7.796946068875894, 8.924114985680937, 9.994376393175584), # 85
(10.136749746525913, 9.03865007301208, 9.745970764746229, 11.64203666800859, 11.107775558783183, 6.121258497180309, 6.89848458999512, 6.783998171010516, 11.445982910379517, 6.604248183114124, 7.770459832240534, 8.902127162900394, 9.98048857596022), # 86
(10.113199677938807, 8.996116487455197, 9.729738425925925, 11.61679438405797, 11.09351851851852, 6.108818930041152, 6.870361552028219, 6.75420987654321, 11.425742283950619, 6.582202469135802, 7.743417862838915, 8.879850552306692, 9.96607349537037), # 87
(10.088868074193357, 8.9533057214921, 9.713277692043896, 11.59109074745035, 11.07875312676511, 6.096017222984301, 6.842068371930391, 6.724717878372199, 11.40524239826246, 6.560093235428601, 7.715863400082698, 8.857296705412365, 9.951156871570646), # 88
(10.063800739547922, 8.910238484003717, 9.696592678326475, 11.564946692163177, 11.063503590736707, 6.082882639841488, 6.813615654766708, 6.695533150434385, 11.384502457704619, 6.537920998413083, 7.687839683383544, 8.834477173729935, 9.935764424725651), # 89
(10.03804347826087, 8.866935483870968, 9.6796875, 11.538383152173914, 11.04779411764706, 6.069444444444445, 6.785014005602241, 6.666666666666666, 11.363541666666668, 6.515686274509804, 7.65938995215311, 8.81140350877193, 9.919921875), # 90
(10.011642094590563, 8.823417429974777, 9.662566272290809, 11.511421061460013, 11.031648914709915, 6.055731900624904, 6.756274029502062, 6.638129401005944, 11.342379229538182, 6.4933895801393255, 7.63055744580306, 8.788087262050874, 9.903654942558298), # 91
(9.984642392795372, 8.779705031196071, 9.64523311042524, 11.484081353998926, 11.015092189139029, 6.041774272214601, 6.727406331531242, 6.609932327389118, 11.321034350708734, 6.471031431722209, 7.601385403745053, 8.764539985079297, 9.886989347565157), # 92
(9.957090177133654, 8.735818996415771, 9.62769212962963, 11.456384963768118, 10.998148148148148, 6.027600823045267, 6.69842151675485, 6.582086419753087, 11.299526234567901, 6.448612345679011, 7.57191706539075, 8.74077322936972, 9.869950810185184), # 93
(9.92903125186378, 8.691780034514801, 9.609947445130317, 11.428352824745035, 10.98084099895102, 6.0132408169486355, 6.669330190237961, 6.554602652034752, 11.277874085505259, 6.426132838430297, 7.54219567015181, 8.716798546434674, 9.85256505058299), # 94
(9.90051142124411, 8.647608854374088, 9.592003172153635, 11.400005870907139, 10.963194948761398, 5.9987235177564395, 6.640142957045644, 6.527491998171011, 11.25609710791038, 6.403593426396621, 7.512264457439896, 8.69262748778668, 9.834857788923182), # 95
(9.871576489533012, 8.603326164874554, 9.573863425925927, 11.371365036231884, 10.945234204793028, 5.984078189300411, 6.610870422242971, 6.500765432098766, 11.234214506172838, 6.3809946259985475, 7.482166666666667, 8.668271604938273, 9.816854745370371), # 96
(9.842272260988848, 8.558952674897121, 9.555532321673525, 11.342451254696725, 10.926982974259664, 5.969334095412284, 6.581523190895013, 6.474433927754916, 11.212245484682214, 6.358336953656634, 7.451945537243782, 8.64374244940197, 9.798581640089164), # 97
(9.812644539869984, 8.514509093322713, 9.53701397462277, 11.31328546027912, 10.908465464375052, 5.954520499923793, 6.552111868066842, 6.44850845907636, 11.190209247828074, 6.335620925791441, 7.421644308582906, 8.619051572690298, 9.78006419324417), # 98
(9.782739130434782, 8.470016129032258, 9.5183125, 11.283888586956522, 10.889705882352942, 5.939666666666667, 6.52264705882353, 6.423, 11.168125, 6.312847058823529, 7.391306220095694, 8.59421052631579, 9.761328125), # 99
(9.752601836941611, 8.425494490906676, 9.49943201303155, 11.254281568706388, 10.870728435407084, 5.924801859472641, 6.493139368230145, 6.3979195244627345, 11.146011945587563, 6.290015869173458, 7.36097451119381, 8.569230861790967, 9.742399155521262), # 100
(9.722278463648834, 8.380964887826895, 9.480376628943759, 11.224485339506174, 10.85155733075123, 5.909955342173449, 6.463599401351762, 6.3732780064014625, 11.123889288980338, 6.267127873261788, 7.330692421288912, 8.544124130628353, 9.723303004972564), # 101
(9.691814814814816, 8.336448028673836, 9.461150462962962, 11.194520833333334, 10.832216775599129, 5.895156378600824, 6.43403776325345, 6.349086419753086, 11.1017762345679, 6.244183587509078, 7.300503189792663, 8.518901884340481, 9.704065393518519), # 102
(9.661256694697919, 8.291964622328422, 9.4417576303155, 11.164408984165325, 10.812730977164529, 5.880434232586496, 6.40446505900028, 6.325355738454504, 11.079691986739826, 6.221183528335889, 7.270450056116723, 8.493575674439873, 9.68471204132373), # 103
(9.63064990755651, 8.247535377671579, 9.422202246227709, 11.134170725979603, 10.79312414266118, 5.865818167962201, 6.374891893657326, 6.302096936442616, 11.057655749885688, 6.19812821216278, 7.24057625967275, 8.468157052439054, 9.665268668552812), # 104
(9.600040257648953, 8.203181003584229, 9.402488425925926, 11.103826992753623, 10.773420479302832, 5.851337448559671, 6.345328872289658, 6.279320987654321, 11.035686728395062, 6.175018155410313, 7.210925039872408, 8.442657569850553, 9.64576099537037), # 105
(9.569473549233614, 8.158922208947299, 9.382620284636488, 11.073398718464842, 10.753644194303236, 5.837021338210638, 6.315786599962345, 6.25703886602652, 11.01380412665752, 6.151853874499045, 7.181539636127355, 8.417088778186894, 9.626214741941014), # 106
(9.538995586568856, 8.11477970264171, 9.362601937585735, 11.042906837090714, 10.733819494876139, 5.822899100746838, 6.286275681740461, 6.235261545496114, 10.992027149062643, 6.128635885849539, 7.152463287849252, 8.391462228960604, 9.606655628429355), # 107
(9.508652173913044, 8.070774193548388, 9.3424375, 11.012372282608696, 10.713970588235293, 5.809, 6.256806722689075, 6.214, 10.970375, 6.105364705882353, 7.1237392344497605, 8.365789473684211, 9.587109375), # 108
(9.478489115524543, 8.026926390548255, 9.322131087105625, 10.98181598899624, 10.69412168159445, 5.795353299801859, 6.227390327873262, 6.193265203475081, 10.948866883859168, 6.082040851018047, 7.09541071534054, 8.340082063870238, 9.567601701817559), # 109
(9.448552215661715, 7.983257002522237, 9.301686814128946, 10.951258890230811, 10.674296982167354, 5.7819882639841484, 6.198037102358089, 6.173068129858253, 10.92752200502972, 6.058664837677183, 7.06752096993325, 8.314351551031214, 9.54815832904664), # 110
(9.41888727858293, 7.9397867383512555, 9.281108796296298, 10.920721920289855, 10.654520697167756, 5.768934156378601, 6.168757651208631, 6.153419753086419, 10.906359567901236, 6.035237182280319, 7.040113237639553, 8.288609486679663, 9.528804976851852), # 111
(9.38954010854655, 7.896536306916234, 9.26040114883402, 10.890226013150832, 10.634817033809409, 5.756220240816949, 6.139562579489958, 6.134331047096479, 10.885398776863282, 6.011758401248016, 7.013230757871109, 8.26286742232811, 9.509567365397805), # 112
(9.360504223703044, 7.853598618785952, 9.239617828252069, 10.85983388249204, 10.615175680173705, 5.7438697692145135, 6.1105259636567695, 6.115852568780606, 10.86471281125862, 5.988304736612729, 6.9869239061528665, 8.237192936504428, 9.490443900843221), # 113
(9.331480897900065, 7.811397183525536, 9.219045675021619, 10.829789421277336, 10.595393354566326, 5.731854608529901, 6.082018208410579, 6.09821125950512, 10.84461903571306, 5.965315167912783, 6.961244337113197, 8.211912172112974, 9.471275414160035), # 114
(9.302384903003995, 7.769947198683046, 9.198696932707318, 10.800084505181779, 10.5754076778886, 5.7201435124987645, 6.054059650191562, 6.081402654278709, 10.82512497866879, 5.942825327988077, 6.936154511427094, 8.187037582558851, 9.452006631660376), # 115
(9.273179873237634, 7.729188281291702, 9.178532189983873, 10.770666150266404, 10.555188526383779, 5.708708877287098, 6.026604817527893, 6.065380312898993, 10.80618133922783, 5.920793358449547, 6.911605931271481, 8.162523197487346, 9.43260725975589), # 116
(9.243829442823772, 7.689060048384721, 9.158512035525986, 10.741481372592244, 10.53470577629511, 5.6975230990608905, 5.9996082389477525, 6.050097795163585, 10.787738816492203, 5.899177400908129, 6.887550098823283, 8.13832304654375, 9.413047004858225), # 117
(9.214297245985211, 7.649502116995324, 9.138597058008367, 10.712477188220333, 10.513929303865842, 5.686558573986138, 5.973024442979315, 6.0355086608700965, 10.769748109563935, 5.877935596974759, 6.863938516259424, 8.11439115937335, 9.393295573379024), # 118
(9.184546916944742, 7.610454104156729, 9.118747846105723, 10.683600613211706, 10.492828985339221, 5.675787698228833, 5.946807958150756, 6.021566469816145, 10.752159917545043, 5.857026088260372, 6.840722685756828, 8.090681565621434, 9.373322671729932), # 119
(9.154542089925162, 7.571855626902158, 9.098924988492762, 10.654798663627394, 10.471374696958497, 5.665182867954965, 5.920913312990253, 6.008224781799343, 10.734924939537558, 5.836407016375905, 6.817854109492416, 8.067148294933297, 9.353098006322597), # 120
(9.124246399149268, 7.533646302264829, 9.079089073844187, 10.626018355528434, 10.449536314966918, 5.6547164793305305, 5.89529503602598, 5.995437156617307, 10.717993874643499, 5.816036522932296, 6.795284289643116, 8.043745376954222, 9.33259128356866), # 121
(9.093623478839854, 7.495765747277961, 9.059200690834711, 10.597206704975855, 10.427283715607734, 5.644360928521519, 5.869907655786117, 5.983157154067649, 10.70131742196489, 5.795872749540477, 6.772964728385851, 8.0204268413295, 9.31177220987977), # 122
(9.062636963219719, 7.458153578974774, 9.039220428139036, 10.568310728030694, 10.40458677512419, 5.634088611693925, 5.844705700798839, 5.971338333947983, 10.684846280603754, 5.775873837811387, 6.750846927897544, 7.997146717704421, 9.290610491667572), # 123
(9.031250486511654, 7.420749414388487, 9.01910887443187, 10.539277440753986, 10.381415369759537, 5.623871925013739, 5.819643699592319, 5.959934256055926, 10.668531149662115, 5.755997929355961, 6.728882390355119, 7.973859035724275, 9.269075835343711), # 124
(8.999427682938459, 7.38349287055232, 8.998826618387923, 10.51005385920676, 10.357739375757022, 5.613683264646956, 5.794676180694739, 5.948898480189091, 10.652322728241993, 5.736203165785134, 6.707022617935501, 7.950517825034348, 9.247137947319828), # 125
(8.967132186722928, 7.346323564499494, 8.978334248681898, 10.480586999450054, 10.333528669359893, 5.603495026759568, 5.76975767263427, 5.938184566145092, 10.636171715445418, 5.7164476887098425, 6.685219112815613, 7.927077115279934, 9.224766534007578), # 126
(8.93432763208786, 7.309181113263224, 8.957592353988504, 10.450823877544899, 10.308753126811398, 5.593279607517565, 5.744842703939094, 5.927746073721545, 10.620028810374407, 5.696689639741024, 6.6634233771723785, 7.903490936106316, 9.201931301818599), # 127
(8.900977653256046, 7.272005133876735, 8.93656152298245, 10.420711509552332, 10.28338262435479, 5.583009403086944, 5.719885803137382, 5.917536562716062, 10.603844712130984, 5.6768871604896125, 6.641586913182724, 7.879713317158788, 9.178601957164537), # 128
(8.867045884450281, 7.234735243373241, 8.91520234433844, 10.390196911533382, 10.257387038233311, 5.572656809633695, 5.694841498757313, 5.90750959292626, 10.587570119817174, 5.656998392566545, 6.619661223023571, 7.855698288082636, 9.154748206457038), # 129
(8.832495959893366, 7.197311058785966, 8.893475406731179, 10.359227099549086, 10.230736244690213, 5.562194223323808, 5.669664319327063, 5.89761872414975, 10.571155732535, 5.636981477582757, 6.5975978088718445, 7.831399878523152, 9.130339756107748), # 130
(8.797291513808094, 7.159672197148127, 8.87134129883538, 10.327749089660475, 10.203400119968745, 5.55159404032328, 5.644308793374809, 5.88781751618415, 10.554552249386486, 5.616794557149185, 6.575348172904468, 7.806772118125624, 9.105346312528312), # 131
(8.76139618041726, 7.121758275492944, 8.848760609325746, 10.295709897928587, 10.175348540312154, 5.540828656798102, 5.618729449428725, 5.878059528827073, 10.537710369473654, 5.596395772876765, 6.552863817298364, 7.781769036535342, 9.079737582130376), # 132
(8.724773593943663, 7.083508910853635, 8.825693926876983, 10.263056540414452, 10.146551381963686, 5.529870468914266, 5.592880816016989, 5.868298321876132, 10.520580791898526, 5.575743266376432, 6.53009624423046, 7.756344663397592, 9.053483271325586), # 133
(8.687387388610095, 7.044863720263423, 8.802101840163804, 10.229736033179103, 10.116978521166592, 5.518691872837765, 5.566717421667779, 5.858487455128944, 10.503114215763128, 5.5547951792591235, 6.506996955877678, 7.730453028357666, 9.026553086525583), # 134
(8.649201198639354, 7.005762320755524, 8.777944937860909, 10.195695392283579, 10.08659983416412, 5.507265264734592, 5.540193794909268, 5.84858048838312, 10.48526134016948, 5.533509653135776, 6.483517454416942, 7.704048161060852, 8.99891673414202), # 135
(8.610178658254235, 6.966144329363159, 8.753183808643008, 10.160881633788906, 10.055385197199517, 5.495563040770739, 5.513264464269635, 5.838530981436277, 10.466972864219606, 5.511844829617322, 6.459609242025177, 7.677084091152441, 8.970543920586536), # 136
(8.570283401677534, 6.925949363119547, 8.72777904118481, 10.125241773756125, 10.023304486516034, 5.483557597112198, 5.485883958277055, 5.828292494086029, 10.448199487015533, 5.4897588503147015, 6.435223820879306, 7.649514848277719, 8.941404352270776), # 137
(8.529479063132047, 6.885117039057908, 8.701691224161017, 10.088722828246263, 9.990327578356919, 5.471221329924964, 5.458006805459704, 5.81781858612999, 10.428891907659281, 5.4672098568388465, 6.410312693156252, 7.621294462081978, 8.91146773560639), # 138
(8.487729276840568, 6.843586974211461, 8.67488094624634, 10.051271813320358, 9.956424348965415, 5.458526635375026, 5.429587534345759, 5.807062817365774, 10.409000825252871, 5.444155990800697, 6.38482736103294, 7.592376962210506, 8.880703777005019), # 139
(8.444997677025897, 6.801298785613425, 8.647308796115487, 10.012835745039444, 9.92156467458478, 5.445445909628379, 5.400580673463397, 5.795978747590996, 10.388476938898332, 5.420555393811186, 6.358719326686294, 7.562716378308592, 8.849082182878314), # 140
(8.40124789791083, 6.758192090297021, 8.61893536244316, 9.973361639464553, 9.885718431458253, 5.431951548851015, 5.370940751340795, 5.78451993660327, 10.36727094769768, 5.396366207481251, 6.331940092293238, 7.532266740021525, 8.816572659637913), # 141
(8.356443573718156, 6.714206505295466, 8.58972123390407, 9.93279651265672, 9.848855495829087, 5.418015949208927, 5.340622296506126, 5.772639944200211, 10.345333550752942, 5.371546573421828, 6.304441160030697, 7.500982076994594, 8.783144913695466), # 142
(8.310548338670674, 6.669281647641981, 8.559626999172925, 9.891087380676975, 9.810945743940529, 5.403611506868106, 5.3095798374875685, 5.760292330179432, 10.322615447166147, 5.3460546332438525, 6.276174032075593, 7.4688164188730894, 8.748768651462617), # 143
(8.263525826991184, 6.623357134369786, 8.528613246924428, 9.848181259586356, 9.771959052035829, 5.388710617994547, 5.277767902813299, 5.747430654338549, 10.29906733603931, 5.31984852855826, 6.247090210604851, 7.435723795302299, 8.713413579351014), # 144
(8.215339672902477, 6.576372582512099, 8.496640565833289, 9.804025165445895, 9.731865296358233, 5.3732856787542405, 5.245141021011493, 5.734008476475176, 10.274639916474454, 5.292886400975988, 6.217141197795395, 7.401658235927513, 8.6770494037723), # 145
(8.16595351062735, 6.528267609102142, 8.463669544574216, 9.758566114316626, 9.690634353150992, 5.35730908531318, 5.21165372061033, 5.719979356386927, 10.249283887573606, 5.2651263921079705, 6.186278495824149, 7.3665737703940195, 8.639645831138118), # 146
(8.1153309743886, 6.47898183117313, 8.42966077182191, 9.71175112225958, 9.648236098657351, 5.340753233837358, 5.177260530137981, 5.705296853871415, 10.22294994843879, 5.236526643565146, 6.154453606868036, 7.3304244283471105, 8.601172567860118), # 147
(8.063435698409021, 6.428454865758288, 8.394574836251083, 9.663527205335797, 9.604640409120561, 5.323590520492767, 5.1419159781226265, 5.689914528726257, 10.195588798172029, 5.207045296958447, 6.1216180331039824, 7.29316423943207, 8.561599320349941), # 148
(8.010231316911412, 6.37662632989083, 8.358372326536443, 9.613841379606303, 9.55981716078387, 5.3057933414453995, 5.105574593092441, 5.673785940749067, 10.167151135875338, 5.176640493898813, 6.08772327670891, 7.254747233294191, 8.520895795019237), # 149
(7.955681464118564, 6.323435840603979, 8.321013831352694, 9.562640661132138, 9.513736229890526, 5.287334092861249, 5.0681909035756005, 5.656864649737456, 10.137587660650752, 5.1452703759971765, 6.0527208398597425, 7.215127439578763, 8.479031698279647), # 150
(7.899749774253275, 6.268823014930954, 8.282459939374542, 9.50987206597433, 9.466367492683776, 5.268185170906305, 5.029719438100283, 5.639104215489043, 10.106849071600289, 5.112893084864478, 6.016562224733405, 7.174258887931072, 8.435976736542818), # 151
(7.842399881538343, 6.212727469904973, 8.242671239276701, 9.455482610193918, 9.417680825406869, 5.2483189717465635, 4.9901147251946645, 5.620458197801441, 10.07488606782597, 5.079466762111649, 5.979198933506821, 7.132095607996409, 8.391700616220398), # 152
(7.78359542019656, 6.155088822559256, 8.201608319733868, 9.399419309851933, 9.367646104303056, 5.2277078915480155, 4.949331293386919, 5.600880156472262, 10.041649348429823, 5.044949549349629, 5.940582468356916, 7.088591629420064, 8.346173043724027), # 153
(7.723300024450729, 6.095846689927024, 8.159231769420758, 9.34162918100941, 9.31623320561558, 5.206324326476654, 4.907323671205228, 5.580323651299123, 10.007089612513866, 5.009299588189353, 5.900664331460612, 7.043700981847325, 8.299363725465357), # 154
(7.6614773285236355, 6.034940689041495, 8.115502177012075, 9.282059239727378, 9.263412005587696, 5.184140672698471, 4.864046387177761, 5.558742242079636, 9.971157559180128, 4.972475020241754, 5.859396024994833, 6.997377694923482, 8.251242367856026), # 155
(7.598090966638081, 5.972310436935888, 8.070380131182526, 9.220656502066875, 9.209152380462648, 5.161129326379461, 4.8194539698327, 5.5360894886114185, 9.933803887530626, 4.934433987117773, 5.816729051136504, 6.949575798293822, 8.201778677307685), # 156
(7.533104573016862, 5.907895550643423, 8.023826220606818, 9.157367984088937, 9.153424206483685, 5.137262683685614, 4.773500947698219, 5.512318950692082, 9.894979296667389, 4.895134630428341, 5.772614912062549, 6.900249321603637, 8.150942360231976), # 157
(7.464680946405239, 5.840453120772258, 7.973591953902355, 9.089769581651243, 9.093681105870997, 5.11102447631711, 4.725106720927857, 5.485796952349372, 9.851662091599097, 4.8533659162911436, 5.7255957525389425, 6.847599564194339, 8.096485859415345), # 158
(7.382286766978402, 5.763065319599478, 7.906737818402988, 9.003977158788453, 9.015191309781628, 5.073689648007103, 4.668212763385716, 5.4472135327643825, 9.786427261222144, 4.802280994098745, 5.667416935618994, 6.781362523683108, 8.025427646920194), # 159
(7.284872094904309, 5.675096728540714, 7.821920957955888, 8.89857751040886, 8.916420131346795, 5.024341296047684, 4.602243748383784, 5.3955991895273465, 9.697425227228651, 4.741205651862893, 5.59725950860954, 6.700501948887847, 7.93642060889358), # 160
(7.17322205458596, 5.577120868080469, 7.720046971910309, 8.774572503756728, 8.798393124282113, 4.963577241570314, 4.527681446006876, 5.33160053310978, 9.585829766999018, 4.6706581931709374, 5.515741654599707, 6.605767468907571, 7.830374044819097), # 161
(7.048121770426357, 5.469711258703239, 7.602021459615496, 8.632964006076326, 8.662135842303204, 4.891995305706455, 4.445007626339809, 5.255864173983202, 9.452814657913637, 4.5911569216102315, 5.42348155667862, 6.497908712841293, 7.708197254180333), # 162
(6.9103563668284975, 5.353441420893524, 7.468750020420702, 8.474753884611934, 8.508673839125688, 4.810193309587572, 4.354704059467401, 5.169036722619125, 9.299553677352906, 4.503220140768125, 5.321097397935408, 6.3776753097880325, 7.570799536460879), # 163
(6.760710968195384, 5.228884875135821, 7.321138253675176, 8.300944006607818, 8.339032668465189, 4.718769074345129, 4.257252515474466, 5.071764789489069, 9.127220602697223, 4.407366154231968, 5.209207361459196, 6.245816888846803, 7.419090191144328), # 164
(6.599970698930017, 5.096615141914632, 7.160091758728169, 8.112536239308252, 8.154237884037324, 4.618320421110586, 4.153134764445822, 4.964694985064546, 8.93698921132698, 4.3041132655891134, 5.088429630339111, 6.10308307911662, 7.25397851771427), # 165
(6.428920683435397, 4.957205741714454, 6.9865161349289275, 7.910532449957501, 7.955315039557714, 4.509445171015408, 4.042832576466286, 4.848473919817077, 8.730033280622573, 4.193979778426912, 4.959382387664279, 5.950223509696501, 7.0763738156542955), # 166
(6.248346046114523, 4.811230195019787, 6.801316981626704, 7.695934505799843, 7.74328968874198, 4.392741145191058, 3.9268277216206746, 4.723748204218176, 8.5075265879644, 4.077483996332714, 4.822683816523827, 5.7879878096854585, 6.887185384447996), # 167
(6.059031911370395, 4.659262022315128, 6.605399898170748, 7.469744274079546, 7.519187385305742, 4.268806164768999, 3.805601969993804, 4.5911644487393595, 8.270642910732855, 3.955144222893872, 4.678952100006881, 5.617125608182511, 6.6873225235789615), # 168
(5.861763403606015, 4.501874744084979, 6.399670483910309, 7.232963622040883, 7.28403368296462, 4.138238050880695, 3.6796370916704917, 4.451369263852145, 8.020556026308338, 3.8274787616977366, 4.528805421202568, 5.438386534286672, 6.477694532530785), # 169
(5.657325647224384, 4.339641880813837, 6.185034338194635, 6.98659441692812, 7.038854135434233, 4.001634624657607, 3.549414856735553, 4.305009260028047, 7.7584397120712385, 3.6950059163316578, 4.372861963200016, 5.252520217096959, 6.259210710787055), # 170
(5.4465037666285, 4.173136952986201, 5.962397060372978, 6.731638525985535, 6.784674296430206, 3.8595937072311983, 3.4154170352738054, 4.152731047738583, 7.485467745401956, 3.5582439903829886, 4.211739909088348, 5.060276285712386, 6.032780357831365), # 171
(5.230082886221365, 4.002933481086569, 5.7326642497945866, 6.4690978164573965, 6.5225197196681535, 3.7127131197329337, 3.2781253973700655, 3.9951812374552707, 7.202813903680886, 3.41771128743908, 4.046057441956694, 4.862404369231971, 5.799312773147303), # 172
(5.00884813040598, 3.8296049855994423, 5.4967415058087115, 6.1999741555879755, 6.253415958863702, 3.5615906832942748, 3.1380217131091497, 3.8330064396496235, 6.911651964288422, 3.2739261110872815, 3.8764327448941778, 4.659654096754725, 5.5597172562184625), # 173
(4.783584623585344, 3.653724987009318, 5.2555344277646014, 5.9252694106215404, 5.978388567732466, 3.406824219046685, 2.9955877525758754, 3.6668532647931604, 6.613155704604964, 3.1274067649149466, 3.7034840009899277, 4.452775097379668, 5.314903106528433), # 174
(4.555077490162455, 3.4758670058006946, 5.009948615011508, 5.645985448802367, 5.698463099990069, 3.2490115481216284, 2.851305285855058, 3.497368323357396, 6.308498902010905, 2.9786715525094243, 3.5278293933330693, 4.242517000205814, 5.0657796235608075), # 175
(4.324111854540319, 3.296604562458073, 4.760889666898678, 5.363124137374725, 5.41466510935213, 3.0887504916505666, 2.705656083031515, 3.325198225813849, 5.998855333886642, 2.828238777458067, 3.35008710501273, 4.029629434332179, 4.813256106799174), # 176
(4.0914728411219325, 3.1165111774659513, 4.5092631827753635, 5.077687343582883, 5.128020149534273, 2.9266388707649633, 2.5591219141900625, 3.1509895826340326, 5.68539877761257, 2.6766267433482245, 3.1708753191180357, 3.8148620288577786, 4.5582418557271245), # 177
(3.8579455743102966, 2.9361603713088282, 4.255974761990814, 4.790676934671116, 4.8395537742521135, 2.7632745065962827, 2.4121845494155174, 2.9753890042894655, 5.3693030105690855, 2.52435375376725, 2.9908122187381125, 3.598964412881627, 4.301646169828252), # 178
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 179
)
passenger_arriving_acc = (
(5, 4, 5, 6, 3, 2, 1, 1, 4, 1, 0, 0, 0, 9, 3, 5, 2, 7, 3, 1, 1, 2, 1, 0, 0, 0), # 0
(9, 10, 15, 15, 5, 3, 2, 5, 5, 1, 1, 1, 0, 16, 4, 8, 4, 13, 6, 3, 3, 4, 2, 1, 1, 0), # 1
(14, 19, 23, 22, 10, 3, 2, 11, 6, 2, 1, 1, 0, 22, 10, 11, 6, 13, 9, 7, 5, 7, 8, 2, 1, 0), # 2
(19, 28, 26, 28, 17, 5, 3, 13, 10, 2, 1, 1, 0, 29, 15, 13, 12, 18, 10, 8, 6, 9, 11, 4, 1, 0), # 3
(24, 33, 28, 30, 23, 9, 9, 15, 10, 2, 2, 1, 0, 34, 19, 20, 12, 24, 11, 12, 7, 10, 12, 4, 1, 0), # 4
(31, 35, 36, 38, 27, 9, 11, 18, 10, 2, 3, 1, 0, 41, 24, 25, 15, 29, 17, 17, 9, 10, 15, 4, 2, 0), # 5
(40, 39, 42, 42, 33, 12, 13, 20, 13, 4, 3, 1, 0, 47, 31, 30, 20, 32, 21, 18, 10, 14, 21, 7, 2, 0), # 6
(49, 43, 45, 50, 41, 13, 14, 22, 16, 6, 3, 1, 0, 62, 38, 34, 24, 40, 24, 23, 12, 14, 23, 9, 4, 0), # 7
(55, 50, 52, 62, 47, 17, 17, 26, 20, 8, 3, 2, 0, 67, 44, 40, 29, 49, 31, 34, 15, 16, 26, 10, 5, 0), # 8
(62, 54, 61, 69, 49, 19, 19, 31, 24, 8, 4, 2, 0, 75, 49, 50, 35, 55, 36, 38, 16, 20, 30, 12, 6, 0), # 9
(74, 60, 71, 80, 54, 21, 20, 31, 25, 10, 6, 2, 0, 83, 57, 55, 40, 61, 43, 42, 16, 20, 30, 12, 6, 0), # 10
(78, 68, 71, 89, 56, 24, 26, 37, 30, 12, 8, 2, 0, 94, 67, 59, 46, 68, 46, 44, 18, 21, 33, 14, 7, 0), # 11
(87, 75, 77, 100, 64, 27, 28, 43, 35, 15, 8, 3, 0, 103, 77, 63, 53, 80, 50, 46, 19, 23, 36, 16, 8, 0), # 12
(92, 85, 86, 113, 69, 29, 34, 47, 38, 19, 10, 4, 0, 120, 87, 67, 55, 85, 56, 51, 20, 27, 38, 20, 8, 0), # 13
(102, 97, 92, 122, 76, 32, 38, 54, 42, 23, 13, 4, 0, 131, 95, 73, 61, 88, 62, 55, 23, 31, 40, 20, 11, 0), # 14
(110, 108, 102, 127, 86, 34, 43, 60, 46, 24, 14, 4, 0, 142, 106, 80, 63, 96, 67, 60, 27, 36, 40, 22, 12, 0), # 15
(117, 119, 112, 135, 92, 35, 46, 62, 48, 26, 15, 4, 0, 161, 112, 87, 69, 109, 74, 62, 29, 45, 41, 26, 12, 0), # 16
(126, 131, 116, 147, 105, 39, 48, 63, 50, 26, 15, 5, 0, 167, 122, 95, 74, 115, 80, 68, 30, 48, 43, 26, 13, 0), # 17
(138, 140, 124, 155, 110, 42, 54, 70, 52, 28, 19, 6, 0, 175, 134, 102, 77, 121, 84, 73, 32, 52, 43, 29, 13, 0), # 18
(153, 146, 134, 164, 120, 48, 59, 73, 55, 28, 19, 6, 0, 183, 142, 104, 82, 125, 91, 77, 34, 54, 47, 33, 14, 0), # 19
(167, 162, 143, 174, 123, 53, 60, 74, 58, 32, 22, 6, 0, 189, 155, 110, 84, 130, 98, 82, 35, 58, 49, 34, 14, 0), # 20
(172, 170, 146, 184, 133, 58, 62, 75, 63, 34, 22, 7, 0, 197, 165, 115, 90, 141, 101, 83, 35, 61, 53, 36, 15, 0), # 21
(178, 181, 160, 189, 139, 60, 65, 81, 67, 35, 23, 9, 0, 201, 171, 120, 97, 148, 108, 86, 42, 62, 55, 38, 17, 0), # 22
(187, 187, 163, 199, 143, 60, 68, 82, 71, 36, 24, 9, 0, 211, 179, 127, 98, 154, 115, 92, 45, 65, 58, 41, 18, 0), # 23
(196, 196, 172, 206, 156, 65, 69, 83, 75, 40, 26, 10, 0, 225, 185, 130, 102, 163, 123, 99, 48, 67, 60, 45, 18, 0), # 24
(215, 202, 177, 217, 164, 67, 76, 87, 79, 41, 28, 14, 0, 234, 198, 133, 110, 173, 127, 105, 50, 71, 63, 45, 20, 0), # 25
(229, 212, 187, 228, 175, 74, 82, 93, 84, 42, 29, 18, 0, 252, 204, 138, 114, 185, 131, 108, 54, 75, 68, 45, 22, 0), # 26
(243, 219, 193, 240, 178, 78, 86, 97, 87, 42, 31, 18, 0, 262, 211, 144, 121, 192, 137, 110, 63, 82, 73, 46, 22, 0), # 27
(254, 229, 198, 250, 188, 78, 88, 103, 97, 46, 31, 20, 0, 276, 216, 150, 126, 206, 144, 116, 65, 89, 76, 46, 23, 0), # 28
(264, 241, 208, 257, 195, 81, 96, 110, 101, 47, 31, 21, 0, 290, 226, 157, 130, 213, 149, 122, 66, 92, 80, 47, 24, 0), # 29
(272, 253, 214, 266, 203, 86, 104, 114, 106, 51, 33, 22, 0, 301, 238, 165, 138, 224, 152, 125, 68, 93, 87, 49, 26, 0), # 30
(283, 265, 222, 271, 206, 89, 107, 116, 113, 53, 33, 23, 0, 309, 245, 169, 142, 236, 157, 127, 69, 95, 88, 50, 28, 0), # 31
(288, 272, 231, 283, 216, 90, 109, 121, 119, 55, 34, 24, 0, 317, 252, 179, 150, 249, 162, 137, 73, 100, 93, 54, 28, 0), # 32
(301, 282, 244, 291, 223, 95, 112, 124, 124, 57, 35, 24, 0, 332, 264, 186, 158, 258, 170, 139, 76, 101, 95, 57, 28, 0), # 33
(304, 295, 252, 294, 228, 98, 115, 127, 127, 58, 38, 27, 0, 346, 269, 194, 167, 268, 174, 145, 76, 103, 99, 58, 28, 0), # 34
(315, 303, 261, 308, 237, 98, 122, 130, 130, 61, 39, 29, 0, 360, 280, 197, 170, 282, 180, 149, 79, 108, 103, 59, 28, 0), # 35
(327, 314, 272, 318, 240, 102, 129, 133, 136, 61, 42, 31, 0, 373, 287, 205, 172, 290, 187, 151, 81, 113, 105, 62, 28, 0), # 36
(339, 323, 282, 327, 250, 109, 136, 138, 139, 63, 43, 32, 0, 391, 299, 212, 176, 295, 192, 159, 82, 117, 107, 63, 31, 0), # 37
(347, 338, 291, 334, 255, 115, 140, 141, 146, 65, 43, 32, 0, 402, 304, 219, 181, 304, 198, 165, 85, 123, 109, 67, 32, 0), # 38
(357, 349, 303, 340, 262, 117, 146, 144, 148, 66, 44, 33, 0, 408, 314, 222, 187, 310, 201, 165, 88, 127, 113, 70, 32, 0), # 39
(366, 358, 311, 347, 271, 118, 152, 151, 158, 67, 46, 33, 0, 412, 325, 230, 193, 326, 204, 171, 89, 132, 118, 72, 32, 0), # 40
(372, 362, 318, 358, 282, 122, 155, 156, 163, 69, 47, 33, 0, 426, 329, 231, 204, 328, 209, 175, 92, 135, 122, 73, 35, 0), # 41
(380, 372, 332, 367, 285, 125, 158, 160, 167, 72, 48, 34, 0, 441, 345, 238, 209, 341, 213, 178, 93, 137, 124, 74, 35, 0), # 42
(391, 384, 339, 372, 292, 129, 162, 162, 168, 72, 50, 35, 0, 453, 348, 241, 216, 343, 213, 178, 95, 144, 127, 76, 35, 0), # 43
(406, 394, 354, 384, 298, 133, 168, 164, 172, 73, 50, 36, 0, 463, 352, 248, 221, 351, 214, 183, 98, 146, 129, 76, 35, 0), # 44
(417, 406, 357, 394, 309, 136, 173, 168, 174, 74, 53, 36, 0, 477, 364, 257, 228, 359, 222, 186, 99, 149, 132, 76, 36, 0), # 45
(425, 416, 363, 402, 318, 137, 175, 170, 177, 75, 54, 36, 0, 493, 372, 268, 233, 366, 223, 187, 100, 151, 134, 76, 37, 0), # 46
(435, 426, 378, 413, 328, 143, 176, 175, 180, 78, 57, 37, 0, 504, 379, 277, 243, 371, 226, 190, 105, 155, 139, 80, 38, 0), # 47
(446, 433, 388, 423, 337, 144, 177, 177, 190, 80, 59, 38, 0, 512, 380, 284, 248, 380, 230, 192, 106, 158, 143, 84, 39, 0), # 48
(459, 444, 396, 428, 342, 146, 182, 183, 192, 83, 60, 38, 0, 527, 389, 296, 254, 385, 237, 194, 108, 161, 145, 84, 42, 0), # 49
(469, 450, 399, 437, 352, 149, 187, 185, 193, 84, 62, 41, 0, 542, 398, 301, 261, 394, 241, 198, 109, 163, 147, 88, 42, 0), # 50
(479, 465, 408, 448, 361, 154, 189, 192, 200, 84, 63, 41, 0, 551, 409, 307, 269, 396, 247, 200, 111, 166, 150, 88, 42, 0), # 51
(490, 473, 418, 455, 371, 157, 192, 201, 207, 87, 64, 41, 0, 559, 419, 308, 271, 407, 252, 202, 113, 167, 153, 88, 47, 0), # 52
(499, 493, 423, 468, 377, 159, 194, 203, 208, 90, 65, 41, 0, 567, 425, 311, 279, 418, 258, 205, 116, 171, 159, 91, 47, 0), # 53
(509, 504, 431, 478, 383, 164, 200, 205, 214, 92, 66, 41, 0, 577, 437, 315, 283, 424, 263, 208, 119, 177, 164, 93, 47, 0), # 54
(513, 514, 444, 487, 392, 170, 202, 207, 217, 93, 67, 42, 0, 586, 447, 327, 288, 436, 264, 211, 119, 181, 168, 94, 49, 0), # 55
(519, 519, 451, 495, 402, 175, 207, 209, 222, 95, 69, 44, 0, 592, 455, 335, 292, 447, 271, 212, 122, 183, 170, 95, 51, 0), # 56
(529, 527, 458, 503, 410, 178, 210, 214, 227, 97, 69, 45, 0, 600, 465, 343, 301, 452, 281, 215, 122, 188, 173, 95, 51, 0), # 57
(542, 534, 469, 512, 420, 178, 214, 215, 230, 99, 70, 46, 0, 611, 473, 348, 307, 456, 285, 217, 125, 192, 176, 98, 51, 0), # 58
(548, 542, 481, 525, 430, 181, 218, 218, 233, 102, 71, 46, 0, 622, 489, 352, 307, 464, 289, 223, 126, 195, 180, 100, 52, 0), # 59
(559, 550, 486, 530, 436, 185, 220, 219, 236, 105, 72, 47, 0, 635, 491, 359, 312, 471, 291, 227, 129, 202, 183, 100, 53, 0), # 60
(574, 559, 501, 541, 440, 189, 224, 221, 243, 107, 75, 48, 0, 643, 499, 363, 319, 475, 295, 232, 135, 204, 186, 102, 54, 0), # 61
(580, 568, 512, 546, 452, 193, 226, 223, 248, 107, 76, 48, 0, 649, 508, 372, 323, 484, 298, 237, 138, 206, 189, 106, 56, 0), # 62
(593, 576, 518, 554, 461, 195, 229, 226, 250, 109, 79, 49, 0, 663, 516, 377, 328, 487, 304, 241, 140, 210, 190, 106, 57, 0), # 63
(601, 582, 532, 566, 471, 197, 232, 230, 255, 114, 80, 49, 0, 679, 525, 386, 336, 495, 308, 244, 142, 215, 193, 108, 57, 0), # 64
(615, 588, 542, 573, 481, 201, 235, 233, 257, 115, 81, 50, 0, 690, 533, 391, 339, 500, 313, 251, 143, 216, 195, 109, 58, 0), # 65
(627, 598, 549, 585, 487, 203, 237, 237, 263, 116, 81, 50, 0, 701, 542, 398, 340, 507, 317, 258, 144, 227, 201, 110, 58, 0), # 66
(639, 607, 555, 593, 493, 208, 237, 243, 266, 119, 82, 50, 0, 711, 548, 402, 350, 516, 320, 262, 148, 230, 201, 111, 58, 0), # 67
(649, 615, 559, 601, 501, 213, 242, 246, 270, 121, 84, 50, 0, 725, 556, 408, 355, 525, 328, 269, 153, 233, 203, 112, 58, 0), # 68
(661, 625, 564, 609, 512, 219, 245, 247, 275, 122, 87, 50, 0, 730, 563, 413, 360, 528, 334, 271, 158, 234, 209, 116, 59, 0), # 69
(668, 634, 570, 617, 523, 222, 248, 248, 284, 126, 89, 51, 0, 741, 574, 415, 365, 539, 339, 273, 161, 241, 212, 119, 60, 0), # 70
(677, 642, 578, 626, 534, 228, 251, 249, 288, 127, 91, 52, 0, 748, 582, 423, 372, 543, 347, 273, 164, 244, 213, 120, 60, 0), # 71
(684, 649, 583, 638, 545, 231, 257, 252, 292, 130, 93, 53, 0, 762, 593, 433, 377, 551, 358, 276, 167, 251, 214, 121, 60, 0), # 72
(699, 652, 597, 642, 548, 237, 259, 256, 293, 131, 95, 54, 0, 766, 605, 438, 384, 556, 363, 279, 170, 254, 220, 121, 61, 0), # 73
(709, 657, 603, 650, 551, 239, 264, 257, 297, 133, 96, 54, 0, 776, 614, 442, 385, 564, 367, 284, 174, 261, 224, 121, 63, 0), # 74
(719, 670, 614, 661, 556, 245, 267, 262, 299, 135, 98, 54, 0, 788, 622, 452, 392, 572, 370, 288, 175, 263, 230, 122, 63, 0), # 75
(728, 682, 632, 669, 565, 249, 271, 265, 302, 138, 100, 55, 0, 797, 633, 462, 396, 581, 373, 293, 176, 266, 232, 125, 63, 0), # 76
(738, 690, 644, 676, 575, 250, 275, 267, 305, 140, 103, 56, 0, 804, 640, 470, 403, 590, 376, 299, 181, 270, 237, 127, 65, 0), # 77
(749, 695, 652, 689, 582, 254, 280, 272, 310, 140, 105, 56, 0, 814, 650, 476, 407, 597, 379, 303, 185, 275, 237, 130, 65, 0), # 78
(757, 703, 661, 700, 590, 259, 285, 276, 313, 140, 106, 58, 0, 825, 658, 479, 409, 602, 386, 305, 188, 281, 239, 130, 65, 0), # 79
(764, 711, 671, 708, 596, 262, 288, 276, 316, 143, 108, 58, 0, 838, 662, 486, 413, 611, 388, 309, 189, 285, 239, 133, 65, 0), # 80
(774, 717, 677, 718, 601, 267, 292, 277, 320, 144, 111, 59, 0, 845, 667, 493, 417, 619, 395, 309, 192, 292, 241, 136, 65, 0), # 81
(778, 722, 688, 733, 608, 276, 294, 280, 327, 146, 112, 60, 0, 852, 676, 500, 420, 628, 398, 314, 193, 296, 245, 138, 67, 0), # 82
(791, 732, 695, 746, 614, 280, 295, 284, 329, 148, 114, 61, 0, 858, 686, 511, 424, 637, 403, 318, 194, 299, 247, 142, 68, 0), # 83
(804, 744, 704, 753, 617, 281, 299, 285, 331, 150, 114, 62, 0, 869, 693, 517, 426, 644, 404, 319, 196, 304, 248, 143, 70, 0), # 84
(813, 753, 714, 766, 621, 284, 306, 289, 338, 155, 114, 62, 0, 876, 699, 524, 436, 650, 407, 328, 201, 306, 249, 143, 70, 0), # 85
(826, 759, 726, 776, 630, 286, 312, 291, 345, 156, 119, 62, 0, 885, 706, 529, 440, 658, 417, 332, 202, 306, 254, 145, 70, 0), # 86
(835, 767, 736, 786, 634, 291, 317, 292, 351, 157, 119, 62, 0, 899, 711, 533, 446, 667, 422, 332, 203, 309, 257, 146, 71, 0), # 87
(841, 770, 743, 797, 641, 294, 324, 296, 354, 159, 122, 62, 0, 915, 719, 541, 448, 671, 429, 333, 208, 314, 261, 147, 71, 0), # 88
(851, 778, 750, 805, 649, 301, 324, 298, 357, 160, 122, 62, 0, 923, 726, 548, 454, 672, 433, 334, 210, 318, 267, 148, 72, 0), # 89
(872, 785, 758, 813, 663, 303, 327, 300, 367, 161, 123, 63, 0, 930, 735, 557, 458, 683, 435, 337, 213, 320, 268, 152, 72, 0), # 90
(883, 797, 768, 818, 669, 303, 332, 301, 368, 163, 124, 63, 0, 940, 743, 567, 465, 691, 441, 338, 214, 324, 271, 152, 72, 0), # 91
(887, 807, 773, 829, 673, 307, 337, 305, 375, 164, 125, 63, 0, 954, 748, 572, 473, 700, 443, 341, 216, 326, 273, 154, 73, 0), # 92
(896, 817, 784, 837, 678, 310, 344, 309, 377, 168, 127, 64, 0, 966, 758, 585, 477, 706, 446, 342, 216, 329, 278, 155, 74, 0), # 93
(905, 824, 797, 847, 684, 316, 348, 316, 378, 168, 128, 66, 0, 980, 765, 589, 481, 718, 450, 345, 217, 336, 280, 159, 75, 0), # 94
(912, 833, 803, 853, 699, 320, 355, 324, 383, 170, 130, 68, 0, 991, 774, 593, 482, 727, 454, 351, 219, 338, 283, 159, 76, 0), # 95
(917, 842, 812, 865, 704, 332, 357, 326, 383, 171, 132, 68, 0, 997, 780, 597, 488, 738, 455, 355, 222, 338, 288, 160, 77, 0), # 96
(925, 848, 826, 871, 710, 338, 359, 328, 389, 173, 133, 68, 0, 1010, 787, 604, 492, 746, 459, 359, 228, 344, 290, 162, 79, 0), # 97
(934, 852, 832, 878, 727, 339, 364, 330, 393, 174, 134, 70, 0, 1024, 800, 611, 498, 753, 461, 361, 228, 350, 290, 164, 79, 0), # 98
(945, 862, 845, 887, 735, 339, 367, 332, 399, 176, 135, 71, 0, 1042, 806, 618, 502, 761, 467, 363, 231, 352, 294, 164, 80, 0), # 99
(956, 870, 847, 892, 741, 345, 373, 333, 403, 177, 135, 72, 0, 1048, 809, 627, 505, 772, 470, 367, 234, 357, 297, 167, 80, 0), # 100
(966, 875, 860, 902, 746, 347, 377, 336, 406, 178, 136, 72, 0, 1056, 819, 632, 507, 785, 473, 369, 236, 361, 300, 169, 82, 0), # 101
(976, 884, 867, 911, 752, 349, 378, 337, 411, 178, 137, 72, 0, 1064, 828, 642, 511, 795, 476, 372, 236, 366, 302, 171, 83, 0), # 102
(994, 893, 880, 926, 762, 353, 379, 340, 416, 179, 138, 72, 0, 1072, 834, 648, 517, 800, 480, 375, 240, 369, 307, 172, 83, 0), # 103
(1003, 903, 886, 929, 769, 355, 382, 341, 418, 181, 140, 72, 0, 1081, 841, 654, 520, 807, 486, 380, 243, 374, 310, 175, 83, 0), # 104
(1017, 910, 889, 941, 779, 358, 384, 344, 420, 181, 140, 72, 0, 1088, 847, 664, 527, 810, 491, 384, 244, 377, 315, 176, 85, 0), # 105
(1025, 919, 901, 950, 787, 359, 388, 347, 422, 182, 141, 73, 0, 1099, 857, 674, 531, 814, 494, 387, 248, 379, 319, 176, 85, 0), # 106
(1033, 927, 911, 961, 796, 364, 390, 351, 424, 183, 143, 73, 0, 1106, 865, 678, 538, 825, 499, 391, 249, 379, 322, 176, 85, 0), # 107
(1040, 935, 918, 968, 801, 369, 392, 351, 426, 183, 144, 75, 0, 1118, 871, 684, 542, 833, 502, 395, 254, 380, 322, 180, 86, 0), # 108
(1049, 941, 929, 978, 809, 373, 398, 353, 431, 184, 149, 77, 0, 1126, 882, 689, 549, 841, 504, 397, 255, 384, 323, 181, 87, 0), # 109
(1057, 947, 942, 991, 814, 375, 401, 354, 436, 184, 149, 77, 0, 1133, 898, 695, 556, 847, 509, 402, 256, 390, 327, 182, 87, 0), # 110
(1068, 955, 949, 998, 819, 378, 406, 357, 441, 184, 149, 78, 0, 1142, 907, 705, 563, 856, 511, 403, 259, 392, 331, 183, 89, 0), # 111
(1072, 958, 958, 1008, 828, 379, 410, 359, 442, 184, 150, 78, 0, 1147, 918, 712, 567, 870, 514, 406, 261, 394, 334, 185, 90, 0), # 112
(1078, 964, 965, 1016, 839, 384, 415, 360, 445, 185, 150, 79, 0, 1155, 927, 719, 571, 880, 520, 410, 263, 397, 335, 186, 91, 0), # 113
(1088, 975, 971, 1027, 849, 392, 416, 363, 450, 187, 151, 80, 0, 1162, 936, 724, 576, 887, 521, 414, 265, 401, 337, 188, 91, 0), # 114
(1100, 979, 979, 1037, 857, 399, 420, 365, 455, 188, 152, 80, 0, 1172, 946, 728, 580, 892, 522, 415, 267, 403, 339, 188, 92, 0), # 115
(1106, 982, 985, 1042, 866, 402, 421, 367, 457, 189, 154, 80, 0, 1180, 951, 734, 584, 900, 523, 418, 271, 408, 342, 188, 92, 0), # 116
(1114, 992, 998, 1048, 875, 403, 423, 370, 463, 191, 155, 82, 0, 1189, 956, 737, 587, 906, 526, 423, 273, 412, 343, 188, 92, 0), # 117
(1118, 997, 1003, 1053, 884, 405, 429, 372, 468, 192, 157, 82, 0, 1198, 960, 739, 591, 911, 530, 427, 274, 417, 347, 191, 92, 0), # 118
(1129, 1006, 1005, 1067, 892, 408, 430, 374, 471, 194, 160, 82, 0, 1211, 968, 747, 596, 917, 531, 428, 277, 421, 348, 192, 93, 0), # 119
(1136, 1013, 1019, 1073, 897, 412, 431, 375, 476, 195, 162, 83, 0, 1222, 979, 751, 601, 922, 532, 434, 278, 423, 350, 193, 94, 0), # 120
(1144, 1020, 1024, 1080, 908, 417, 433, 376, 479, 199, 163, 83, 0, 1232, 983, 759, 604, 931, 532, 439, 282, 425, 356, 193, 94, 0), # 121
(1150, 1026, 1031, 1090, 913, 424, 436, 377, 483, 199, 164, 84, 0, 1244, 988, 763, 606, 945, 536, 443, 283, 426, 359, 196, 94, 0), # 122
(1156, 1036, 1037, 1099, 916, 424, 437, 379, 485, 200, 164, 84, 0, 1249, 995, 765, 611, 954, 536, 447, 286, 431, 359, 197, 94, 0), # 123
(1162, 1044, 1044, 1104, 918, 430, 440, 385, 488, 200, 165, 84, 0, 1258, 1002, 773, 611, 961, 543, 450, 288, 432, 359, 197, 96, 0), # 124
(1177, 1050, 1053, 1112, 926, 432, 442, 387, 492, 201, 166, 84, 0, 1273, 1011, 780, 615, 971, 548, 454, 288, 436, 362, 198, 97, 0), # 125
(1187, 1057, 1060, 1120, 934, 436, 444, 390, 498, 203, 167, 85, 0, 1283, 1016, 786, 617, 977, 552, 455, 291, 439, 364, 199, 98, 0), # 126
(1194, 1064, 1066, 1125, 940, 438, 446, 392, 501, 204, 167, 87, 0, 1290, 1023, 790, 618, 981, 555, 457, 294, 444, 366, 201, 100, 0), # 127
(1201, 1072, 1072, 1135, 945, 446, 449, 395, 509, 207, 167, 87, 0, 1297, 1028, 799, 623, 989, 562, 458, 299, 447, 371, 202, 101, 0), # 128
(1214, 1074, 1076, 1142, 950, 451, 452, 395, 511, 208, 170, 88, 0, 1307, 1038, 803, 629, 1001, 564, 461, 299, 451, 373, 202, 101, 0), # 129
(1224, 1080, 1082, 1150, 955, 455, 456, 399, 518, 209, 171, 88, 0, 1316, 1043, 812, 632, 1012, 565, 465, 303, 452, 376, 204, 101, 0), # 130
(1231, 1084, 1089, 1158, 961, 458, 457, 400, 522, 210, 173, 88, 0, 1325, 1048, 815, 635, 1016, 567, 465, 305, 456, 378, 206, 101, 0), # 131
(1240, 1089, 1098, 1170, 969, 463, 458, 405, 527, 212, 173, 88, 0, 1336, 1052, 821, 639, 1025, 571, 468, 306, 461, 381, 208, 101, 0), # 132
(1253, 1099, 1104, 1183, 971, 466, 464, 406, 531, 213, 174, 89, 0, 1350, 1057, 827, 646, 1030, 574, 472, 307, 464, 383, 208, 101, 0), # 133
(1262, 1107, 1110, 1192, 980, 469, 469, 408, 531, 213, 176, 89, 0, 1358, 1067, 834, 650, 1041, 579, 472, 311, 471, 383, 210, 101, 0), # 134
(1270, 1116, 1120, 1197, 991, 470, 473, 410, 535, 215, 177, 90, 0, 1365, 1076, 837, 655, 1049, 583, 472, 313, 473, 383, 211, 102, 0), # 135
(1277, 1123, 1131, 1205, 1000, 474, 474, 411, 536, 215, 178, 90, 0, 1376, 1081, 844, 659, 1055, 583, 472, 314, 480, 385, 213, 103, 0), # 136
(1284, 1131, 1142, 1209, 1007, 483, 474, 414, 540, 217, 178, 90, 0, 1386, 1088, 848, 660, 1065, 584, 475, 318, 483, 389, 214, 104, 0), # 137
(1296, 1135, 1147, 1217, 1010, 483, 477, 414, 543, 219, 179, 90, 0, 1396, 1093, 852, 666, 1068, 586, 479, 321, 491, 391, 214, 104, 0), # 138
(1313, 1149, 1158, 1227, 1017, 484, 478, 416, 548, 219, 180, 91, 0, 1403, 1099, 860, 668, 1080, 587, 481, 323, 494, 394, 217, 105, 0), # 139
(1315, 1156, 1162, 1234, 1024, 487, 483, 417, 549, 220, 181, 91, 0, 1412, 1105, 869, 671, 1088, 591, 484, 325, 496, 396, 219, 106, 0), # 140
(1325, 1161, 1169, 1241, 1030, 489, 487, 420, 552, 220, 181, 91, 0, 1423, 1110, 877, 676, 1097, 598, 489, 327, 499, 399, 220, 107, 0), # 141
(1337, 1168, 1176, 1256, 1034, 492, 487, 421, 553, 221, 182, 92, 0, 1427, 1115, 885, 686, 1103, 600, 492, 330, 501, 401, 221, 107, 0), # 142
(1344, 1173, 1181, 1264, 1038, 493, 489, 422, 558, 221, 182, 93, 0, 1437, 1123, 890, 687, 1107, 603, 493, 332, 505, 404, 221, 107, 0), # 143
(1353, 1176, 1186, 1271, 1047, 498, 491, 427, 560, 221, 183, 93, 0, 1447, 1127, 893, 688, 1114, 604, 496, 332, 508, 409, 222, 107, 0), # 144
(1359, 1182, 1188, 1277, 1052, 505, 494, 432, 563, 226, 183, 93, 0, 1454, 1136, 899, 691, 1118, 610, 497, 332, 512, 411, 222, 107, 0), # 145
(1368, 1183, 1196, 1291, 1061, 505, 497, 435, 566, 227, 183, 93, 0, 1472, 1145, 911, 693, 1127, 614, 499, 334, 514, 412, 224, 107, 0), # 146
(1376, 1192, 1201, 1295, 1066, 510, 499, 437, 567, 228, 184, 93, 0, 1482, 1151, 916, 694, 1132, 617, 501, 340, 517, 415, 226, 107, 0), # 147
(1384, 1196, 1205, 1301, 1071, 511, 503, 439, 570, 228, 184, 93, 0, 1491, 1159, 918, 700, 1140, 621, 505, 344, 521, 420, 227, 107, 0), # 148
(1391, 1201, 1216, 1309, 1077, 512, 509, 442, 573, 229, 184, 94, 0, 1497, 1166, 929, 703, 1146, 622, 511, 346, 525, 425, 230, 107, 0), # 149
(1398, 1208, 1224, 1323, 1083, 513, 509, 445, 576, 229, 184, 94, 0, 1508, 1168, 932, 709, 1152, 628, 514, 348, 525, 429, 231, 107, 0), # 150
(1410, 1211, 1228, 1335, 1085, 516, 511, 446, 579, 230, 185, 94, 0, 1514, 1181, 936, 715, 1159, 632, 515, 350, 529, 431, 231, 108, 0), # 151
(1414, 1215, 1239, 1341, 1098, 517, 513, 448, 588, 232, 185, 94, 0, 1521, 1184, 942, 718, 1164, 633, 515, 351, 530, 434, 232, 108, 0), # 152
(1421, 1221, 1251, 1350, 1108, 524, 515, 451, 591, 233, 185, 94, 0, 1531, 1188, 948, 723, 1168, 634, 516, 354, 533, 438, 233, 109, 0), # 153
(1430, 1226, 1255, 1356, 1113, 527, 518, 452, 598, 234, 185, 95, 0, 1536, 1197, 949, 724, 1170, 638, 519, 357, 535, 439, 235, 109, 0), # 154
(1439, 1233, 1263, 1360, 1119, 531, 518, 455, 602, 234, 186, 96, 0, 1542, 1203, 951, 732, 1177, 640, 521, 365, 538, 441, 236, 111, 0), # 155
(1442, 1238, 1269, 1370, 1123, 534, 519, 460, 606, 236, 188, 97, 0, 1552, 1209, 956, 733, 1181, 643, 523, 366, 540, 444, 237, 111, 0), # 156
(1448, 1244, 1277, 1374, 1126, 537, 523, 463, 608, 238, 188, 97, 0, 1563, 1212, 960, 737, 1188, 646, 526, 369, 546, 446, 238, 112, 0), # 157
(1452, 1248, 1281, 1380, 1131, 541, 527, 464, 613, 238, 190, 97, 0, 1565, 1219, 962, 743, 1191, 652, 530, 370, 550, 450, 240, 112, 0), # 158
(1458, 1256, 1289, 1387, 1137, 544, 530, 467, 615, 240, 190, 99, 0, 1575, 1225, 965, 746, 1196, 655, 533, 371, 554, 451, 241, 113, 0), # 159
(1464, 1266, 1294, 1396, 1144, 549, 533, 468, 620, 241, 191, 99, 0, 1578, 1229, 973, 747, 1202, 656, 538, 375, 560, 454, 244, 114, 0), # 160
(1473, 1271, 1302, 1402, 1148, 552, 537, 469, 625, 242, 191, 99, 0, 1586, 1236, 975, 751, 1208, 659, 539, 376, 563, 456, 247, 114, 0), # 161
(1478, 1272, 1310, 1406, 1156, 555, 539, 469, 633, 242, 193, 99, 0, 1597, 1244, 976, 755, 1212, 663, 543, 381, 566, 457, 247, 115, 0), # 162
(1490, 1276, 1313, 1411, 1160, 555, 540, 471, 636, 243, 193, 99, 0, 1605, 1251, 980, 758, 1220, 667, 545, 383, 568, 457, 248, 115, 0), # 163
(1502, 1284, 1318, 1416, 1167, 560, 542, 474, 637, 243, 193, 99, 0, 1613, 1260, 983, 758, 1230, 669, 546, 387, 569, 459, 249, 116, 0), # 164
(1511, 1287, 1325, 1423, 1175, 561, 545, 476, 641, 243, 195, 99, 0, 1621, 1268, 987, 762, 1237, 675, 548, 387, 573, 463, 251, 116, 0), # 165
(1518, 1292, 1330, 1427, 1179, 564, 549, 477, 645, 243, 197, 99, 0, 1629, 1274, 988, 763, 1247, 676, 552, 388, 575, 464, 255, 117, 0), # 166
(1522, 1298, 1335, 1432, 1185, 569, 550, 480, 647, 245, 197, 101, 0, 1636, 1285, 990, 767, 1254, 677, 556, 389, 579, 465, 256, 117, 0), # 167
(1532, 1302, 1344, 1439, 1191, 572, 551, 483, 651, 247, 197, 101, 0, 1649, 1294, 997, 772, 1257, 679, 558, 389, 580, 469, 256, 117, 0), # 168
(1538, 1310, 1348, 1446, 1195, 573, 552, 484, 653, 247, 197, 102, 0, 1661, 1298, 1000, 774, 1268, 679, 561, 391, 584, 472, 259, 118, 0), # 169
(1540, 1315, 1350, 1449, 1202, 575, 554, 486, 655, 247, 197, 103, 0, 1668, 1301, 1003, 777, 1277, 684, 564, 393, 586, 474, 260, 118, 0), # 170
(1545, 1321, 1354, 1451, 1206, 577, 555, 488, 657, 247, 198, 103, 0, 1671, 1306, 1006, 779, 1281, 687, 567, 393, 592, 475, 261, 118, 0), # 171
(1549, 1325, 1359, 1458, 1209, 581, 556, 489, 657, 249, 198, 103, 0, 1676, 1308, 1012, 787, 1288, 691, 567, 394, 596, 478, 262, 118, 0), # 172
(1555, 1329, 1366, 1461, 1211, 583, 557, 489, 658, 250, 199, 104, 0, 1685, 1317, 1016, 787, 1294, 692, 571, 394, 600, 479, 262, 118, 0), # 173
(1561, 1333, 1368, 1464, 1218, 587, 558, 491, 658, 251, 200, 104, 0, 1689, 1325, 1016, 789, 1298, 694, 571, 394, 601, 479, 264, 118, 0), # 174
(1565, 1334, 1375, 1470, 1222, 589, 560, 493, 661, 251, 200, 104, 0, 1694, 1329, 1017, 789, 1302, 698, 572, 396, 603, 479, 265, 118, 0), # 175
(1571, 1336, 1379, 1471, 1229, 592, 562, 494, 664, 252, 200, 105, 0, 1701, 1334, 1020, 793, 1306, 700, 573, 397, 605, 480, 267, 118, 0), # 176
(1572, 1337, 1386, 1476, 1232, 594, 566, 495, 664, 253, 201, 106, 0, 1705, 1334, 1024, 794, 1312, 702, 574, 397, 605, 480, 269, 118, 0), # 177
(1576, 1341, 1389, 1478, 1239, 595, 566, 495, 666, 254, 201, 106, 0, 1708, 1336, 1027, 798, 1318, 703, 575, 397, 608, 481, 269, 118, 0), # 178
(1576, 1341, 1389, 1478, 1239, 595, 566, 495, 666, 254, 201, 106, 0, 1708, 1336, 1027, 798, 1318, 703, 575, 397, 608, 481, 269, 118, 0), # 179
)
passenger_arriving_rate = (
(5.020865578371768, 5.064847846385402, 4.342736024677089, 4.661000830397574, 3.7031237384064077, 1.8308820436884476, 2.0730178076869574, 1.938823405408093, 2.030033020722669, 0.9895037538805926, 0.7008775273142672, 0.4081595898588478, 0.0, 5.083880212578363, 4.489755488447325, 3.5043876365713356, 2.968511261641777, 4.060066041445338, 2.7143527675713304, 2.0730178076869574, 1.3077728883488913, 1.8515618692032039, 1.5536669434658585, 0.8685472049354179, 0.4604407133077639, 0.0), # 0
(5.354327152019974, 5.399222302966028, 4.629455492775127, 4.968858189957462, 3.948326891649491, 1.9518237573581576, 2.209734470631847, 2.066464051210712, 2.164081775444303, 1.0547451730692876, 0.7471826893260219, 0.4351013884011963, 0.0, 5.419791647439855, 4.786115272413158, 3.73591344663011, 3.164235519207862, 4.328163550888606, 2.8930496716949965, 2.209734470631847, 1.3941598266843982, 1.9741634458247455, 1.6562860633191545, 0.9258910985550255, 0.49083839117872996, 0.0), # 1
(5.686723008979731, 5.732269739983398, 4.915035237956178, 5.275490778498595, 4.192641982499829, 2.072282983465593, 2.345909253980352, 2.193593853293508, 2.297595602292516, 1.1197284437551367, 0.7933038581293855, 0.46193605433775464, 0.0, 5.75436482820969, 5.0812965977153, 3.9665192906469278, 3.3591853312654094, 4.595191204585032, 3.0710313946109116, 2.345909253980352, 1.480202131046852, 2.0963209912499146, 1.758496926166199, 0.9830070475912357, 0.5211154309075817, 0.0), # 2
(6.016757793146562, 6.062668793441743, 5.198342391099879, 5.579682305649055, 4.435107784001268, 2.191782029841316, 2.4810018208239777, 2.3197088156227115, 2.430045053640364, 1.1841956746065454, 0.8390580686378972, 0.4885571404108718, 0.0, 6.086272806254225, 5.374128544519589, 4.195290343189486, 3.5525870238196355, 4.860090107280728, 3.247592341871796, 2.4810018208239777, 1.5655585927437972, 2.217553892000634, 1.8598941018830188, 1.0396684782199759, 0.551151708494704, 0.0), # 3
(6.343136148415981, 6.389098099345293, 5.478244083085864, 5.880216481036927, 4.674763069197661, 2.3098432043158894, 2.6144718342542292, 2.444304942164548, 2.560900681860902, 1.24788897429192, 0.8842623557650959, 0.514858199362897, 0.0, 6.414188632939817, 5.6634401929918665, 4.42131177882548, 3.743666922875759, 5.121801363721804, 3.422026919030367, 2.6144718342542292, 1.6498880030827783, 2.3373815345988307, 1.9600721603456428, 1.095648816617173, 0.5808270999404813, 0.0), # 4
(6.66456271868351, 6.710236293698289, 5.753607444793765, 6.175877014290295, 4.910646611132853, 2.4259888147198754, 2.745778957362612, 2.566878236885247, 2.689633039327186, 1.310550451479666, 0.9287337544245222, 0.5407327839361791, 0.0, 6.736785359632827, 5.948060623297969, 4.64366877212261, 3.9316513544389973, 5.379266078654372, 3.593629531639346, 2.745778957362612, 1.7328491533713395, 2.4553233055664263, 2.058625671430099, 1.1507214889587531, 0.6100214812452991, 0.0), # 5
(6.979742147844666, 7.024762012504959, 6.023299607103222, 6.465447615037239, 5.141797182850695, 2.5397411688838374, 2.8743828532406313, 2.686924703751037, 2.8157126784122717, 1.3719222148381898, 0.9722892995297139, 0.5660744468730674, 0.0, 7.052736037699606, 6.22681891560374, 4.8614464976485685, 4.115766644514569, 5.631425356824543, 3.761694585251452, 2.8743828532406313, 1.8141008349170267, 2.5708985914253475, 2.1551492050124135, 1.2046599214206444, 0.6386147284095418, 0.0), # 6
(7.2873790797949685, 7.331353891769537, 6.286187700893863, 6.747711992905847, 5.367253557395036, 2.650622574638337, 2.9997431849797924, 2.8039403467281465, 2.9386101514892147, 1.4317463730358968, 1.0147460259942116, 0.5907767409159108, 0.0, 7.360713718506519, 6.498544150075018, 5.073730129971057, 4.2952391191076895, 5.877220302978429, 3.9255164854194056, 2.9997431849797924, 1.8933018390273837, 2.683626778697518, 2.249237330968616, 1.2572375401787725, 0.6664867174335943, 0.0), # 7
(7.586178158429934, 7.628690567496257, 6.54113885704533, 7.021453857524196, 5.586054507809724, 2.7581553398139356, 3.1213196156715988, 2.917421169782802, 3.0577960109310682, 1.4897650347411937, 1.0559209687315536, 0.6147332188070586, 0.0, 7.659391453419917, 6.762065406877643, 5.279604843657768, 4.469295104223581, 6.1155920218621365, 4.084389637695923, 3.1213196156715988, 1.970110957009954, 2.793027253904862, 2.3404846191747324, 1.3082277714090662, 0.6935173243178416, 0.0), # 8
(7.874844027645085, 7.915450675689353, 6.787020206437253, 7.285456918520376, 5.797238807138606, 2.861861772241199, 3.23857180840756, 3.0268631768812346, 3.1727408091108913, 1.5457203086224858, 1.0956311626552797, 0.6378374332888596, 0.0, 7.947442293806162, 7.016211766177453, 5.478155813276398, 4.637160925867456, 6.345481618221783, 4.237608447633728, 3.23857180840756, 2.044186980172285, 2.898619403569303, 2.4284856395067926, 1.3574040412874508, 0.7195864250626686, 0.0), # 9
(8.152081331335932, 8.190312852353056, 7.022698879949271, 7.538504885522466, 5.999845228425533, 2.961264179750688, 3.3509594262791773, 3.1317623719896712, 3.282915098401738, 1.599354303348179, 1.133693642678929, 0.6599829371036627, 0.0, 8.22353929103161, 7.259812308140289, 5.668468213394645, 4.798062910044536, 6.565830196803476, 4.384467320785539, 3.3509594262791773, 2.11518869982192, 2.9999226142127666, 2.5128349618408223, 1.4045397759898541, 0.7445738956684597, 0.0), # 10
(8.416594713398005, 8.451955733491605, 7.247042008461013, 7.779381468158547, 6.192912544714355, 3.055884870172965, 3.457942132377958, 3.2316147590743394, 3.3877894311766643, 1.6504091275866801, 1.1699254437160416, 0.6810632829938176, 0.0, 8.486355496462611, 7.491696112931993, 5.849627218580208, 4.951227382760039, 6.775578862353329, 4.524260662704076, 3.457942132377958, 2.1827749072664036, 3.0964562723571776, 2.5931271560528497, 1.4494084016922026, 0.7683596121356006, 0.0), # 11
(8.667088817726812, 8.699057955109222, 7.458916722852117, 8.006870376056709, 6.375479529048918, 3.1452461513385908, 3.5589795897954057, 3.325916342101467, 3.486834359808726, 1.6986268900063934, 1.2041436006801558, 0.7009720237016724, 0.0, 8.734563961465534, 7.710692260718395, 6.020718003400779, 5.095880670019179, 6.973668719617452, 4.656282878942054, 3.5589795897954057, 2.246604393813279, 3.187739764524459, 2.6689567920189035, 1.4917833445704234, 0.7908234504644749, 0.0), # 12
(8.902268288217876, 8.93029815321015, 7.657190154002218, 8.219755318845033, 6.546584954473067, 3.2288703310781304, 3.653531461623028, 3.414163125037284, 3.579520436670977, 1.7437496992757264, 1.2361651484848115, 0.7196027119695768, 0.0, 8.966837737406735, 7.915629831665344, 6.180825742424058, 5.2312490978271775, 7.159040873341954, 4.7798283750521975, 3.653531461623028, 2.306335950770093, 3.2732924772365335, 2.7399184396150114, 1.5314380308004438, 0.8118452866554684, 0.0), # 13
(9.120837768766716, 9.144354963798623, 7.840729432790956, 8.416820006151594, 6.705267594030659, 3.306279717222145, 3.7410574109523305, 3.4958511118480193, 3.6653182141364735, 1.785519664063084, 1.2658071220435476, 0.7368489005398801, 0.0, 9.181849875652563, 8.10533790593868, 6.329035610217737, 5.3565589921892505, 7.330636428272947, 4.894191556587227, 3.7410574109523305, 2.3616283694443894, 3.3526337970153297, 2.8056066687171985, 1.5681458865581912, 0.8313049967089657, 0.0), # 14
(9.321501903268855, 9.339907022878865, 8.008401690097953, 8.59684814760449, 6.850566220765538, 3.376996617601199, 3.821017100874813, 3.5704763064998986, 3.743698244578273, 1.823678893036873, 1.2928865562699035, 0.752604142154931, 0.0, 9.37827342756938, 8.27864556370424, 6.464432781349516, 5.471036679110618, 7.487396489156546, 4.998666829099858, 3.821017100874813, 2.4121404411437135, 3.425283110382769, 2.865616049201497, 1.6016803380195905, 0.8490824566253515, 0.0), # 15
(9.5029653356198, 9.51563296645512, 8.159074056802854, 8.758623452831788, 6.981519607721555, 3.4405433400458514, 3.892870194481988, 3.6375347129591504, 3.8141310803694286, 1.8579694948654994, 1.3172204860774188, 0.7667619895570784, 0.0, 9.554781444523545, 8.434381885127861, 6.586102430387094, 5.5739084845964975, 7.628262160738857, 5.092548598142811, 3.892870194481988, 2.4575309571756083, 3.4907598038607777, 2.9195411509439295, 1.6318148113605708, 0.8650575424050111, 0.0), # 16
(9.663932709715075, 9.670211430531618, 8.291613663785293, 8.900929631461583, 7.097166527942559, 3.4964421923866666, 3.9560763548653552, 3.6965223351920073, 3.8760872738829946, 1.8881335782173672, 1.3386259463796333, 0.7792159954886714, 0.0, 9.710046977881415, 8.571375950375383, 6.693129731898166, 5.6644007346521, 7.752174547765989, 5.17513126926881, 3.9560763548653552, 2.4974587088476192, 3.5485832639712793, 2.9669765438205284, 1.6583227327570589, 0.8791101300483289, 0.0), # 17
(9.803108669450204, 9.802321051112584, 8.404887641924901, 9.022550393121959, 7.1965457544723925, 3.5442154824542103, 4.010095245116426, 3.746935177164692, 3.929037377492032, 1.9139132517608846, 1.3569199720900849, 0.7898597126920597, 0.0, 9.842743079009345, 8.688456839612655, 6.784599860450424, 5.741739755282652, 7.858074754984064, 5.245709248030569, 4.010095245116426, 2.531582487467293, 3.5982728772361963, 3.0075167977073205, 1.6809775283849802, 0.8911200955556896, 0.0), # 18
(9.919197858720699, 9.910640464202265, 8.497763122101317, 9.122269447440985, 7.2786960603549105, 3.5833855180790386, 4.054386528326697, 3.7882692428434357, 3.9724519435695926, 1.9350506241644574, 1.3719195981223131, 0.7985866939095915, 0.0, 9.951542799273696, 8.784453633005505, 6.859597990611565, 5.80515187249337, 7.944903887139185, 5.30357693998081, 4.054386528326697, 2.55956108434217, 3.6393480301774552, 3.0407564824803295, 1.6995526244202632, 0.9009673149274788, 0.0), # 19
(10.010904921422082, 9.993848305804882, 8.569107235194169, 9.198870504046766, 7.342656218633962, 3.613474607091719, 4.088409867587681, 3.8200205361944657, 4.005801524488732, 1.95128780409649, 1.3834418593898585, 0.805290491883616, 0.0, 10.035119190040824, 8.858195410719775, 6.9172092969492915, 5.853863412289469, 8.011603048977465, 5.348028750672252, 4.088409867587681, 2.5810532907797996, 3.671328109316981, 3.0662901680155894, 1.713821447038834, 0.9085316641640803, 0.0), # 20
(10.076934501449866, 10.050623211924679, 8.6177871120831, 9.251137272567364, 7.387465002353392, 3.6340050573228124, 4.1116249259908795, 3.84168506118401, 4.028556672622507, 1.9623669002253892, 1.39130379080626, 0.8098646593564828, 0.0, 10.092145302677078, 8.90851125292131, 6.9565189540313, 5.887100700676166, 8.057113345245014, 5.378359085657614, 4.1116249259908795, 2.5957178980877234, 3.693732501176696, 3.0837124241891223, 1.72355742241662, 0.91369301926588, 0.0), # 21
(10.115991242699579, 10.079643818565883, 8.642669883647738, 9.277853462630876, 7.41216118455705, 3.644499176602881, 4.1234913666278, 3.852758821778298, 4.040187940343971, 1.968030021219561, 1.3953224272850568, 0.8122027490705409, 0.0, 10.121294188548827, 8.934230239775948, 6.976612136425284, 5.904090063658682, 8.080375880687942, 5.393862350489617, 4.1234913666278, 2.6032136975734863, 3.706080592278525, 3.09261782087696, 1.7285339767295478, 0.9163312562332622, 0.0), # 22
(10.13039336334264, 10.083079961133974, 8.645769318701419, 9.281198109567903, 7.418488037355065, 3.6458333333333335, 4.124902001129669, 3.8539557613168727, 4.0416420781893, 1.9686980681298587, 1.3958263395269568, 0.8124914647157445, 0.0, 10.125, 8.93740611187319, 6.9791316976347835, 5.906094204389575, 8.0832841563786, 5.395538065843622, 4.124902001129669, 2.604166666666667, 3.7092440186775324, 3.0937327031893016, 1.729153863740284, 0.9166436328303613, 0.0), # 23
(10.141012413034153, 10.08107561728395, 8.645262345679013, 9.280786458333335, 7.422071742409901, 3.6458333333333335, 4.124126906318083, 3.852291666666667, 4.041447222222222, 1.968287654320988, 1.39577076318743, 0.8124238683127573, 0.0, 10.125, 8.936662551440328, 6.978853815937151, 5.904862962962962, 8.082894444444443, 5.393208333333334, 4.124126906318083, 2.604166666666667, 3.7110358712049507, 3.0935954861111123, 1.7290524691358027, 0.9164614197530866, 0.0), # 24
(10.15140723021158, 10.077124771376313, 8.644261545496114, 9.279972029320987, 7.4255766303963355, 3.6458333333333335, 4.122599451303155, 3.8490226337448563, 4.041062242798354, 1.96747970964792, 1.3956605665710604, 0.8122904282883707, 0.0, 10.125, 8.935194711172077, 6.978302832855302, 5.902439128943758, 8.082124485596708, 5.388631687242799, 4.122599451303155, 2.604166666666667, 3.7127883151981678, 3.0933240097736636, 1.728852309099223, 0.9161022519433014, 0.0), # 25
(10.161577019048034, 10.071287780064015, 8.642780635573846, 9.278764081790122, 7.429002578947403, 3.6458333333333335, 4.120343359154361, 3.8442103909465026, 4.0404920781893, 1.9662876771833566, 1.3954967473084758, 0.8120929736320684, 0.0, 10.125, 8.933022709952752, 6.977483736542379, 5.898863031550069, 8.0809841563786, 5.381894547325103, 4.120343359154361, 2.604166666666667, 3.7145012894737013, 3.0929213605967085, 1.7285561271147696, 0.915571616369456, 0.0), # 26
(10.171520983716636, 10.063624999999998, 8.640833333333333, 9.277171874999999, 7.432349465696142, 3.6458333333333335, 4.117382352941177, 3.837916666666667, 4.039741666666666, 1.9647250000000003, 1.3952803030303031, 0.8118333333333335, 0.0, 10.125, 8.930166666666667, 6.976401515151515, 5.894175, 8.079483333333332, 5.373083333333334, 4.117382352941177, 2.604166666666667, 3.716174732848071, 3.0923906250000006, 1.7281666666666669, 0.914875, 0.0), # 27
(10.181238328390501, 10.054196787837219, 8.638433356195703, 9.275204668209877, 7.4356171682756, 3.6458333333333335, 4.113740155733075, 3.830203189300412, 4.038815946502057, 1.9628051211705537, 1.3950122313671698, 0.8115133363816492, 0.0, 10.125, 8.926646700198141, 6.9750611568358485, 5.88841536351166, 8.077631893004114, 5.3622844650205765, 4.113740155733075, 2.604166666666667, 3.7178085841378, 3.091734889403293, 1.7276866712391405, 0.9140178898033837, 0.0), # 28
(10.19072825724275, 10.043063500228623, 8.635594421582077, 9.272871720679012, 7.438805564318813, 3.6458333333333335, 4.109440490599533, 3.821131687242798, 4.037719855967078, 1.9605414837677189, 1.3946935299497027, 0.811134811766499, 0.0, 10.125, 8.922482929431489, 6.973467649748514, 5.881624451303155, 8.075439711934155, 5.349584362139917, 4.109440490599533, 2.604166666666667, 3.7194027821594067, 3.0909572402263383, 1.7271188843164156, 0.9130057727480568, 0.0), # 29
(10.199989974446497, 10.03028549382716, 8.63233024691358, 9.270182291666666, 7.441914531458824, 3.6458333333333335, 4.104507080610022, 3.8107638888888884, 4.036458333333333, 1.957947530864198, 1.39432519640853, 0.8106995884773662, 0.0, 10.125, 8.917695473251028, 6.9716259820426485, 5.873842592592593, 8.072916666666666, 5.335069444444444, 4.104507080610022, 2.604166666666667, 3.720957265729412, 3.0900607638888897, 1.7264660493827162, 0.9118441358024693, 0.0), # 30
(10.209022684174858, 10.01592312528578, 8.62865454961134, 9.267145640432098, 7.444943947328672, 3.6458333333333335, 4.09896364883402, 3.799161522633745, 4.035036316872428, 1.9550367055326936, 1.3939082283742779, 0.8102094955037343, 0.0, 10.125, 8.912304450541077, 6.969541141871389, 5.865110116598079, 8.070072633744855, 5.318826131687243, 4.09896364883402, 2.604166666666667, 3.722471973664336, 3.0890485468107003, 1.7257309099222682, 0.910538465935071, 0.0), # 31
(10.217825590600954, 10.00003675125743, 8.624581047096479, 9.263771026234568, 7.447893689561397, 3.6458333333333335, 4.092833918340999, 3.7863863168724285, 4.033458744855967, 1.951822450845908, 1.3934436234775742, 0.8096663618350862, 0.0, 10.125, 8.906329980185948, 6.96721811738787, 5.8554673525377225, 8.066917489711933, 5.3009408436214, 4.092833918340999, 2.604166666666667, 3.7239468447806985, 3.0879236754115236, 1.7249162094192958, 0.909094250114312, 0.0), # 32
(10.226397897897897, 9.98268672839506, 8.620123456790123, 9.260067708333333, 7.450763635790041, 3.6458333333333335, 4.086141612200436, 3.7725000000000004, 4.031730555555555, 1.9483182098765437, 1.392932379349046, 0.8090720164609053, 0.0, 10.125, 8.899792181069957, 6.96466189674523, 5.84495462962963, 8.06346111111111, 5.2815, 4.086141612200436, 2.604166666666667, 3.7253818178950207, 3.086689236111112, 1.724024691358025, 0.9075169753086421, 0.0), # 33
(10.23473881023881, 9.963933413351622, 8.615295496113397, 9.256044945987654, 7.453553663647644, 3.6458333333333335, 4.078910453481805, 3.7575643004115222, 4.029856687242798, 1.9445374256973027, 1.3923754936193207, 0.8084282883706753, 0.0, 10.125, 8.892711172077426, 6.961877468096604, 5.833612277091907, 8.059713374485597, 5.260590020576132, 4.078910453481805, 2.604166666666667, 3.726776831823822, 3.085348315329219, 1.7230590992226795, 0.9058121284865113, 0.0), # 34
(10.242847531796807, 9.943837162780063, 8.610110882487428, 9.25171199845679, 7.456263650767246, 3.6458333333333335, 4.071164165254579, 3.741640946502058, 4.0278420781893, 1.9404935413808875, 1.3917739639190256, 0.807737006553879, 0.0, 10.125, 8.88510707209267, 6.958869819595128, 5.821480624142661, 8.0556841563786, 5.238297325102881, 4.071164165254579, 2.604166666666667, 3.728131825383623, 3.0839039994855972, 1.7220221764974855, 0.9039851966163696, 0.0), # 35
(10.250723266745005, 9.922458333333331, 8.604583333333334, 9.247078125, 7.45889347478189, 3.6458333333333335, 4.062926470588235, 3.724791666666667, 4.025691666666666, 1.9362000000000004, 1.391128787878788, 0.8070000000000002, 0.0, 10.125, 8.877, 6.95564393939394, 5.8086, 8.051383333333332, 5.214708333333334, 4.062926470588235, 2.604166666666667, 3.729446737390945, 3.0823593750000007, 1.7209166666666669, 0.9020416666666666, 0.0), # 36
(10.258365219256524, 9.89985728166438, 8.598726566072246, 9.242152584876543, 7.4614430133246135, 3.6458333333333335, 4.054221092552247, 3.707078189300412, 4.023410390946502, 1.931670244627344, 1.3904409631292352, 0.8062190976985216, 0.0, 10.125, 8.868410074683737, 6.952204815646175, 5.79501073388203, 8.046820781893004, 5.189909465020577, 4.054221092552247, 2.604166666666667, 3.7307215066623067, 3.080717528292182, 1.7197453132144491, 0.8999870256058529, 0.0), # 37
(10.265772593504476, 9.876094364426155, 8.592554298125286, 9.23694463734568, 7.46391214402846, 3.6458333333333335, 4.04507175421609, 3.6885622427983544, 4.021003189300411, 1.92691771833562, 1.3897114873009937, 0.8053961286389272, 0.0, 10.125, 8.859357415028198, 6.948557436504967, 5.780753155006859, 8.042006378600822, 5.163987139917697, 4.04507175421609, 2.604166666666667, 3.73195607201423, 3.078981545781894, 1.7185108596250571, 0.8978267604023779, 0.0), # 38
(10.272944593661986, 9.851229938271604, 8.586080246913582, 9.231463541666667, 7.466300744526468, 3.6458333333333335, 4.035502178649238, 3.6693055555555554, 4.0184750000000005, 1.9219558641975314, 1.3889413580246914, 0.8045329218106996, 0.0, 10.125, 8.849862139917693, 6.944706790123457, 5.765867592592593, 8.036950000000001, 5.137027777777778, 4.035502178649238, 2.604166666666667, 3.733150372263234, 3.07715451388889, 1.7172160493827164, 0.8955663580246914, 0.0), # 39
(10.279880423902163, 9.82532435985368, 8.579318129858253, 9.225718557098766, 7.468608692451679, 3.6458333333333335, 4.025536088921165, 3.649369855967079, 4.015830761316872, 1.9167981252857802, 1.3881315729309558, 0.8036313062033228, 0.0, 10.125, 8.83994436823655, 6.940657864654778, 5.750394375857339, 8.031661522633744, 5.1091177983539104, 4.025536088921165, 2.604166666666667, 3.7343043462258394, 3.0752395190329227, 1.7158636259716507, 0.8932113054412438, 0.0), # 40
(10.286579288398128, 9.79843798582533, 8.57228166438043, 9.219718942901235, 7.4708358654371345, 3.6458333333333335, 4.015197208101347, 3.628816872427984, 4.0130754115226335, 1.9114579446730684, 1.3872831296504138, 0.8026931108062796, 0.0, 10.125, 8.829624218869075, 6.936415648252069, 5.734373834019204, 8.026150823045267, 5.0803436213991775, 4.015197208101347, 2.604166666666667, 3.7354179327185673, 3.073239647633746, 1.7144563328760862, 0.8907670896204848, 0.0), # 41
(10.293040391323, 9.770631172839506, 8.564984567901236, 9.213473958333335, 7.472982141115872, 3.6458333333333335, 4.004509259259259, 3.6077083333333335, 4.010213888888889, 1.9059487654320992, 1.3863970258136926, 0.8017201646090536, 0.0, 10.125, 8.818921810699589, 6.931985129068463, 5.717846296296297, 8.020427777777778, 5.050791666666667, 4.004509259259259, 2.604166666666667, 3.736491070557936, 3.0711579861111122, 1.7129969135802474, 0.8882391975308643, 0.0), # 42
(10.299262936849892, 9.741964277549155, 8.557440557841794, 9.206992862654321, 7.475047397120935, 3.6458333333333335, 3.993495965464375, 3.58610596707819, 4.007251131687243, 1.9002840306355744, 1.3854742590514195, 0.800714296601128, 0.0, 10.125, 8.807857262612407, 6.927371295257098, 5.700852091906722, 8.014502263374485, 5.020548353909466, 3.993495965464375, 2.604166666666667, 3.7375236985604676, 3.0689976208847747, 1.7114881115683587, 0.8856331161408324, 0.0), # 43
(10.305246129151927, 9.712497656607225, 8.549663351623229, 9.200284915123458, 7.477031511085363, 3.6458333333333335, 3.9821810497861696, 3.564071502057614, 4.0041920781893, 1.8944771833561962, 1.3845158269942222, 0.7996773357719861, 0.0, 10.125, 8.796450693491845, 6.92257913497111, 5.683431550068587, 8.0083841563786, 4.98970010288066, 3.9821810497861696, 2.604166666666667, 3.7385157555426813, 3.0667616383744867, 1.709932670324646, 0.8829543324188387, 0.0), # 44
(10.310989172402216, 9.682291666666666, 8.541666666666668, 9.193359375, 7.478934360642197, 3.6458333333333335, 3.9705882352941178, 3.541666666666667, 4.001041666666666, 1.8885416666666672, 1.3835227272727273, 0.798611111111111, 0.0, 10.125, 8.784722222222221, 6.917613636363637, 5.665625, 8.002083333333331, 4.958333333333334, 3.9705882352941178, 2.604166666666667, 3.7394671803210984, 3.064453125000001, 1.7083333333333335, 0.8802083333333335, 0.0), # 45
(10.31649127077388, 9.65140666438043, 8.533464220393233, 9.186225501543209, 7.480755823424477, 3.6458333333333335, 3.958741245057694, 3.518953189300412, 3.997804835390946, 1.8824909236396894, 1.3824959575175624, 0.7975174516079867, 0.0, 10.125, 8.772691967687852, 6.912479787587812, 5.647472770919067, 7.995609670781892, 4.926534465020577, 3.958741245057694, 2.604166666666667, 3.7403779117122387, 3.062075167181071, 1.7066928440786466, 0.8774006058527665, 0.0), # 46
(10.321751628440035, 9.619903006401461, 8.525069730224052, 9.178892554012345, 7.482495777065244, 3.6458333333333335, 3.9466638021463734, 3.4959927983539094, 3.994486522633745, 1.8763383973479657, 1.3814365153593549, 0.7963981862520958, 0.0, 10.125, 8.760380048773053, 6.9071825767967745, 5.629015192043896, 7.98897304526749, 4.894389917695474, 3.9466638021463734, 2.604166666666667, 3.741247888532622, 3.0596308513374493, 1.7050139460448106, 0.8745366369455876, 0.0), # 47
(10.326769449573796, 9.587841049382716, 8.516496913580248, 9.171369791666667, 7.48415409919754, 3.6458333333333335, 3.9343796296296296, 3.4728472222222226, 3.9910916666666667, 1.8700975308641978, 1.3803453984287317, 0.7952551440329219, 0.0, 10.125, 8.74780658436214, 6.901726992143659, 5.610292592592592, 7.982183333333333, 4.861986111111112, 3.9343796296296296, 2.604166666666667, 3.74207704959877, 3.05712326388889, 1.7032993827160496, 0.871621913580247, 0.0), # 48
(10.331543938348286, 9.555281149977136, 8.507759487882945, 9.163666473765433, 7.485730667454405, 3.6458333333333335, 3.9219124505769383, 3.4495781893004116, 3.987625205761317, 1.8637817672610888, 1.3792236043563206, 0.7940901539399483, 0.0, 10.125, 8.73499169333943, 6.896118021781603, 5.5913453017832655, 7.975250411522634, 4.829409465020577, 3.9219124505769383, 2.604166666666667, 3.7428653337272024, 3.054555491255145, 1.7015518975765893, 0.8686619227251944, 0.0), # 49
(10.336074298936616, 9.522283664837678, 8.49887117055327, 9.155791859567902, 7.4872253594688765, 3.6458333333333335, 3.909285988057775, 3.4262474279835393, 3.9840920781893, 1.85740454961134, 1.3780721307727481, 0.7929050449626583, 0.0, 10.125, 8.72195549458924, 6.89036065386374, 5.572213648834019, 7.9681841563786, 4.796746399176955, 3.909285988057775, 2.604166666666667, 3.7436126797344382, 3.051930619855968, 1.6997742341106543, 0.86566215134888, 0.0), # 50
(10.34035973551191, 9.488908950617283, 8.489845679012346, 9.147755208333333, 7.488638052873998, 3.6458333333333335, 3.896523965141612, 3.4029166666666666, 3.9804972222222226, 1.8509793209876546, 1.3768919753086422, 0.7917016460905352, 0.0, 10.125, 8.708718106995885, 6.884459876543211, 5.552937962962963, 7.960994444444445, 4.764083333333334, 3.896523965141612, 2.604166666666667, 3.744319026436999, 3.049251736111112, 1.6979691358024693, 0.8626280864197532, 0.0), # 51
(10.344399452247279, 9.455217363968908, 8.480696730681299, 9.139565779320987, 7.489968625302809, 3.6458333333333335, 3.883650104897926, 3.3796476337448556, 3.976845576131687, 1.8445195244627348, 1.3756841355946297, 0.7904817863130622, 0.0, 10.125, 8.695299649443683, 6.878420677973147, 5.533558573388203, 7.953691152263374, 4.731506687242798, 3.883650104897926, 2.604166666666667, 3.7449843126514044, 3.04652192644033, 1.69613934613626, 0.8595652149062645, 0.0), # 52
(10.348192653315843, 9.421269261545497, 8.471438042981255, 9.131232831790122, 7.491216954388353, 3.6458333333333335, 3.8706881303961915, 3.3565020576131688, 3.9731420781893005, 1.8380386031092826, 1.3744496092613379, 0.7892472946197227, 0.0, 10.125, 8.681720240816947, 6.872248046306688, 5.514115809327846, 7.946284156378601, 4.699102880658437, 3.8706881303961915, 2.604166666666667, 3.7456084771941764, 3.043744277263375, 1.694287608596251, 0.8564790237768635, 0.0), # 53
(10.351738542890716, 9.387125000000001, 8.462083333333332, 9.122765625, 7.492382917763668, 3.6458333333333335, 3.8576617647058824, 3.333541666666666, 3.9693916666666667, 1.8315500000000005, 1.3731893939393938, 0.788, 0.0, 10.125, 8.668, 6.865946969696969, 5.49465, 7.938783333333333, 4.666958333333333, 3.8576617647058824, 2.604166666666667, 3.746191458881834, 3.040921875000001, 1.6924166666666667, 0.8533750000000002, 0.0), # 54
(10.355036325145022, 9.352844935985367, 8.452646319158665, 9.114173418209877, 7.493466393061793, 3.6458333333333335, 3.844594730896474, 3.3108281893004117, 3.9655992798353905, 1.8250671582075908, 1.3719044872594257, 0.7867417314433777, 0.0, 10.125, 8.654159045877153, 6.859522436297127, 5.4752014746227715, 7.931198559670781, 4.6351594650205765, 3.844594730896474, 2.604166666666667, 3.7467331965308963, 3.0380578060699595, 1.6905292638317333, 0.8502586305441244, 0.0), # 55
(10.358085204251871, 9.31848942615455, 8.443140717878373, 9.105465470679011, 7.4944672579157725, 3.6458333333333335, 3.8315107520374405, 3.288423353909465, 3.961769855967078, 1.818603520804756, 1.3705958868520598, 0.7854743179393385, 0.0, 10.125, 8.640217497332722, 6.852979434260299, 5.455810562414267, 7.923539711934156, 4.603792695473251, 3.8315107520374405, 2.604166666666667, 3.7472336289578863, 3.035155156893005, 1.6886281435756747, 0.8471354023776865, 0.0), # 56
(10.360884384384383, 9.284118827160494, 8.433580246913582, 9.096651041666666, 7.495385389958644, 3.6458333333333335, 3.818433551198257, 3.2663888888888892, 3.957908333333333, 1.812172530864198, 1.369264590347924, 0.7841995884773663, 0.0, 10.125, 8.626195473251027, 6.8463229517396185, 5.436517592592593, 7.915816666666666, 4.572944444444445, 3.818433551198257, 2.604166666666667, 3.747692694979322, 3.0322170138888898, 1.6867160493827165, 0.844010802469136, 0.0), # 57
(10.36343306971568, 9.24979349565615, 8.423978623685414, 9.087739390432098, 7.496220666823449, 3.6458333333333335, 3.8053868514483984, 3.2447865226337447, 3.954019650205761, 1.8057876314586196, 1.367911595377645, 0.7829193720469442, 0.0, 10.125, 8.612113092516385, 6.8395579768882255, 5.417362894375858, 7.908039300411522, 4.5427011316872425, 3.8053868514483984, 2.604166666666667, 3.7481103334117245, 3.029246463477367, 1.684795724737083, 0.8408903177869229, 0.0), # 58
(10.36573046441887, 9.215573788294467, 8.414349565614998, 9.078739776234567, 7.49697296614323, 3.6458333333333335, 3.792394375857339, 3.2236779835390945, 3.9501087448559673, 1.799462265660723, 1.3665378995718502, 0.7816354976375554, 0.0, 10.125, 8.597990474013107, 6.83268949785925, 5.398386796982168, 7.900217489711935, 4.513149176954733, 3.792394375857339, 2.604166666666667, 3.748486483071615, 3.02624659207819, 1.6828699131229998, 0.8377794352994972, 0.0), # 59
(10.367775772667077, 9.181520061728396, 8.404706790123456, 9.069661458333334, 7.497642165551024, 3.6458333333333335, 3.779479847494553, 3.203125, 3.946180555555556, 1.7932098765432103, 1.3651445005611673, 0.7803497942386832, 0.0, 10.125, 8.583847736625515, 6.825722502805837, 5.37962962962963, 7.892361111111112, 4.484375, 3.779479847494553, 2.604166666666667, 3.748821082775512, 3.023220486111112, 1.6809413580246915, 0.8346836419753088, 0.0), # 60
(10.369568198633415, 9.147692672610884, 8.395064014631917, 9.060513695987654, 7.498228142679874, 3.6458333333333335, 3.7666669894295164, 3.183189300411523, 3.9422400205761314, 1.7870439071787843, 1.3637323959762233, 0.7790640908398111, 0.0, 10.125, 8.56970499923792, 6.818661979881115, 5.361131721536351, 7.884480041152263, 4.456465020576132, 3.7666669894295164, 2.604166666666667, 3.749114071339937, 3.0201712319958856, 1.6790128029263836, 0.8316084247828076, 0.0), # 61
(10.371106946491004, 9.114151977594878, 8.385434956561502, 9.051305748456791, 7.498730775162823, 3.6458333333333335, 3.753979524731703, 3.1639326131687247, 3.9382920781893, 1.7809778006401469, 1.3623025834476452, 0.7777802164304223, 0.0, 10.125, 8.555582380734645, 6.811512917238226, 5.3429334019204395, 7.8765841563786, 4.429505658436215, 3.753979524731703, 2.604166666666667, 3.7493653875814115, 3.0171019161522645, 1.6770869913123003, 0.8285592706904436, 0.0), # 62
(10.37239122041296, 9.080958333333333, 8.375833333333334, 9.042046875, 7.499149940632904, 3.6458333333333335, 3.741441176470588, 3.1454166666666667, 3.9343416666666666, 1.7750250000000003, 1.360856060606061, 0.7765000000000001, 0.0, 10.125, 8.5415, 6.804280303030303, 5.325075, 7.868683333333333, 4.403583333333334, 3.741441176470588, 2.604166666666667, 3.749574970316452, 3.014015625000001, 1.675166666666667, 0.8255416666666667, 0.0), # 63
(10.373420224572397, 9.048172096479195, 8.366272862368541, 9.032746334876544, 7.4994855167231655, 3.6458333333333335, 3.729075667715646, 3.127703189300412, 3.9303937242798352, 1.7691989483310475, 1.3593938250820965, 0.7752252705380279, 0.0, 10.125, 8.527477975918305, 6.796969125410483, 5.307596844993141, 7.8607874485596705, 4.378784465020577, 3.729075667715646, 2.604166666666667, 3.7497427583615828, 3.0109154449588487, 1.6732545724737085, 0.822561099679927, 0.0), # 64
(10.374193163142438, 9.015853623685413, 8.35676726108825, 9.023413387345679, 7.499737381066645, 3.6458333333333335, 3.7169067215363514, 3.1108539094650207, 3.9264531893004113, 1.7635130887059902, 1.357916874506381, 0.7739578570339887, 0.0, 10.125, 8.513536427373873, 6.7895843725319045, 5.290539266117969, 7.852906378600823, 4.355195473251029, 3.7169067215363514, 2.604166666666667, 3.7498686905333223, 3.0078044624485605, 1.67135345221765, 0.819623056698674, 0.0), # 65
(10.374709240296196, 8.984063271604938, 8.34733024691358, 9.014057291666667, 7.499905411296382, 3.6458333333333335, 3.7049580610021784, 3.094930555555556, 3.9225250000000003, 1.7579808641975312, 1.3564262065095398, 0.7726995884773664, 0.0, 10.125, 8.499695473251029, 6.782131032547699, 5.273942592592592, 7.8450500000000005, 4.332902777777778, 3.7049580610021784, 2.604166666666667, 3.749952705648191, 3.0046857638888897, 1.6694660493827165, 0.8167330246913582, 0.0), # 66
(10.374967660206792, 8.952861396890716, 8.337975537265661, 9.004687307098765, 7.499989485045419, 3.6458333333333335, 3.693253409182603, 3.0799948559670787, 3.9186140946502057, 1.7526157178783728, 1.3549228187222018, 0.7714522938576437, 0.0, 10.125, 8.485975232434079, 6.774614093611008, 5.257847153635117, 7.837228189300411, 4.31199279835391, 3.693253409182603, 2.604166666666667, 3.7499947425227096, 3.001562435699589, 1.6675951074531323, 0.8138964906264289, 0.0), # 67
(10.374791614480825, 8.922144586043629, 8.328671624942844, 8.995231305354269, 7.499918636864896, 3.645765673423767, 3.681757597414823, 3.0659766041761927, 3.9146959495503735, 1.747405110411792, 1.3533809980900628, 0.770210835158312, 0.0, 10.124875150034294, 8.47231918674143, 6.766904990450313, 5.242215331235375, 7.829391899100747, 4.29236724584667, 3.681757597414823, 2.604118338159833, 3.749959318432448, 2.99841043511809, 1.6657343249885688, 0.8111040532766937, 0.0), # 68
(10.373141706924315, 8.890975059737157, 8.319157021604937, 8.985212635869564, 7.499273783587508, 3.6452307956104257, 3.6701340906733066, 3.052124485596708, 3.910599279835391, 1.7422015976761076, 1.3516438064859118, 0.7689349144466104, 0.0, 10.12388599537037, 8.458284058912714, 6.758219032429559, 5.226604793028321, 7.821198559670782, 4.272974279835391, 3.6701340906733066, 2.6037362825788755, 3.749636891793754, 2.9950708786231885, 1.6638314043209876, 0.8082704599761052, 0.0), # 69
(10.369885787558895, 8.859209754856408, 8.309390360653863, 8.974565343196456, 7.497999542752628, 3.6441773992785653, 3.658330067280685, 3.0383135192805977, 3.9063009640298736, 1.736979881115684, 1.3496914810876801, 0.7676185634410675, 0.0, 10.121932334533609, 8.44380419785174, 6.7484574054383994, 5.210939643347051, 7.812601928059747, 4.253638926992837, 3.658330067280685, 2.6029838566275467, 3.748999771376314, 2.991521781065486, 1.6618780721307727, 0.8053827049869463, 0.0), # 70
(10.365069660642929, 8.826867654542236, 8.299375071444901, 8.963305127818035, 7.496112052502757, 3.6426225549966977, 3.646350829769494, 3.0245482777015704, 3.9018074035970125, 1.7317400898356603, 1.347531228463977, 0.7662627447677263, 0.0, 10.119039887688615, 8.428890192444989, 6.737656142319885, 5.195220269506979, 7.803614807194025, 4.234367588782199, 3.646350829769494, 2.6018732535690696, 3.7480560262513785, 2.987768375939346, 1.6598750142889804, 0.8024425140492942, 0.0), # 71
(10.358739130434783, 8.793967741935482, 8.289114583333333, 8.95144769021739, 7.493627450980392, 3.6405833333333337, 3.634201680672269, 3.0108333333333333, 3.897125, 1.7264823529411768, 1.3451702551834133, 0.7648684210526316, 0.0, 10.115234375, 8.413552631578947, 6.7258512759170666, 5.179447058823529, 7.79425, 4.215166666666667, 3.634201680672269, 2.600416666666667, 3.746813725490196, 2.983815896739131, 1.6578229166666667, 0.7994516129032258, 0.0), # 72
(10.35094000119282, 8.760529000176998, 8.27861232567444, 8.939008730877617, 7.490561876328034, 3.638076804856983, 3.621887922521546, 2.9971732586495965, 3.8922601547020275, 1.7212067995373737, 1.3426157678145982, 0.7634365549218266, 0.0, 10.110541516632374, 8.397802104140093, 6.71307883907299, 5.163620398612119, 7.784520309404055, 4.196042562109435, 3.621887922521546, 2.598626289183559, 3.745280938164017, 2.979669576959206, 1.655722465134888, 0.7964117272888181, 0.0), # 73
(10.341718077175404, 8.726570412407629, 8.267871727823502, 8.926003950281803, 7.486931466688183, 3.6351200401361585, 3.609414857849861, 2.9835726261240665, 3.8872192691662857, 1.7159135587293908, 1.3398749729261428, 0.7619681090013557, 0.0, 10.104987032750344, 8.38164919901491, 6.699374864630713, 5.147740676188171, 7.774438538332571, 4.177001676573693, 3.609414857849861, 2.5965143143829703, 3.7434657333440917, 2.975334650093935, 1.6535743455647005, 0.7933245829461482, 0.0), # 74
(10.331119162640901, 8.692110961768218, 8.256896219135802, 8.912449048913043, 7.482752360203341, 3.6317301097393697, 3.59678778918975, 2.9700360082304527, 3.8820087448559666, 1.7106027596223679, 1.336955077086656, 0.7604640459172624, 0.0, 10.098596643518519, 8.365104505089885, 6.684775385433279, 5.131808278867102, 7.764017489711933, 4.158050411522634, 3.59678778918975, 2.594092935528121, 3.7413761801016703, 2.9708163496376816, 1.6513792438271604, 0.7901919056152927, 0.0), # 75
(10.319189061847677, 8.65716963139962, 8.245689228966622, 8.898359727254428, 7.478040695016003, 3.6279240842351275, 3.5840120190737474, 2.956567977442463, 3.876634983234263, 1.7052745313214452, 1.3338632868647486, 0.7589253282955902, 0.0, 10.091396069101508, 8.348178611251491, 6.669316434323743, 5.115823593964334, 7.753269966468526, 4.139195168419449, 3.5840120190737474, 2.5913743458822336, 3.7390203475080015, 2.96611990908481, 1.6491378457933243, 0.7870154210363293, 0.0), # 76
(10.305973579054093, 8.621765404442675, 8.234254186671238, 8.883751685789049, 7.472812609268672, 3.6237190341919425, 3.5710928500343897, 2.9431731062338065, 3.871104385764365, 1.699929002931763, 1.3306068088290313, 0.7573529187623839, 0.0, 10.083411029663925, 8.330882106386222, 6.653034044145156, 5.099787008795288, 7.74220877152873, 4.120442348727329, 3.5710928500343897, 2.58837073870853, 3.736406304634336, 2.9612505619296834, 1.6468508373342476, 0.7837968549493343, 0.0), # 77
(10.291518518518519, 8.585917264038233, 8.222594521604938, 8.868640625, 7.467084241103849, 3.6191320301783265, 3.5580355846042124, 2.9298559670781894, 3.8654233539094642, 1.6945663035584608, 1.327192849548113, 0.7557477799436866, 0.0, 10.074667245370371, 8.313225579380552, 6.635964247740564, 5.083698910675381, 7.7308467078189285, 4.101798353909466, 3.5580355846042124, 2.585094307270233, 3.7335421205519244, 2.956213541666667, 1.6445189043209878, 0.7805379330943849, 0.0), # 78
(10.275869684499314, 8.549644193327138, 8.210713663123, 8.85304224537037, 7.460871728664031, 3.61418014276279, 3.5448455253157505, 2.916621132449322, 3.859598289132754, 1.6891865623066789, 1.3236286155906039, 0.7541108744655421, 0.0, 10.065190436385459, 8.295219619120962, 6.618143077953018, 5.067559686920035, 7.719196578265508, 4.083269585429051, 3.5448455253157505, 2.5815572448305644, 3.7304358643320157, 2.951014081790124, 1.6421427326246, 0.7772403812115581, 0.0), # 79
(10.259072881254847, 8.51296517545024, 8.198615040580703, 8.836972247383253, 7.454191210091719, 3.6088804425138448, 3.5315279747015405, 2.9034731748209115, 3.853635592897424, 1.683789908281557, 1.3199213135251149, 0.7524431649539947, 0.0, 10.0550063228738, 8.27687481449394, 6.599606567625574, 5.05136972484467, 7.707271185794848, 4.064862444749276, 3.5315279747015405, 2.577771744652746, 3.7270956050458595, 2.945657415794418, 1.639723008116141, 0.7739059250409311, 0.0), # 80
(10.241173913043479, 8.475899193548386, 8.186302083333333, 8.82044633152174, 7.447058823529411, 3.60325, 3.5180882352941176, 2.890416666666667, 3.8475416666666664, 1.6783764705882358, 1.3160781499202554, 0.7507456140350878, 0.0, 10.044140624999999, 8.258201754385965, 6.580390749601277, 5.035129411764706, 7.695083333333333, 4.046583333333333, 3.5180882352941176, 2.57375, 3.7235294117647055, 2.940148777173914, 1.6372604166666667, 0.7705362903225808, 0.0), # 81
(10.222218584123576, 8.438465230762423, 8.17377822073617, 8.803480198268922, 7.43949070711961, 3.5973058857897686, 3.504531609626018, 2.8774561804602956, 3.841322911903673, 1.6729463783318543, 1.3121063313446355, 0.7490191843348656, 0.0, 10.03261906292867, 8.23921102768352, 6.560531656723177, 5.018839134995561, 7.682645823807346, 4.0284386526444145, 3.504531609626018, 2.5695042041355487, 3.719745353559805, 2.934493399422974, 1.634755644147234, 0.767133202796584, 0.0), # 82
(10.202252698753504, 8.400682270233196, 8.16104688214449, 8.78608954810789, 7.431502999004814, 3.591065170451659, 3.4908634002297765, 2.8645962886755068, 3.8349857300716352, 1.6674997606175532, 1.3080130643668657, 0.7472648384793719, 0.0, 10.020467356824417, 8.219913223273089, 6.540065321834328, 5.002499281852659, 7.6699714601432705, 4.01043480414571, 3.4908634002297765, 2.5650465503226134, 3.715751499502407, 2.9286965160359637, 1.632209376428898, 0.7636983882030178, 0.0), # 83
(10.181322061191626, 8.362569295101553, 8.14811149691358, 8.768290081521739, 7.423111837327523, 3.584544924554184, 3.477088909637929, 2.851841563786008, 3.8285365226337444, 1.6620367465504726, 1.3038055555555557, 0.7454835390946503, 0.0, 10.007711226851852, 8.200318930041153, 6.519027777777778, 4.986110239651417, 7.657073045267489, 3.9925781893004113, 3.477088909637929, 2.5603892318244172, 3.7115559186637617, 2.922763360507247, 1.629622299382716, 0.7602335722819594, 0.0), # 84
(10.159472475696308, 8.32414528850834, 8.13497549439872, 8.75009749899356, 7.414333360230238, 3.577762218665854, 3.463213440383012, 2.8391965782655086, 3.8219816910531925, 1.6565574652357518, 1.2994910114793157, 0.7436762488067449, 0.0, 9.994376393175584, 8.180438736874192, 6.497455057396579, 4.969672395707254, 7.643963382106385, 3.9748752095717124, 3.463213440383012, 2.5555444419041815, 3.707166680115119, 2.916699166331187, 1.626995098879744, 0.7567404807734855, 0.0), # 85
(10.136749746525913, 8.285429233594407, 8.121642303955191, 8.731527501006443, 7.405183705855455, 3.57073412335518, 3.44924229499756, 2.826665904587715, 3.815327636793172, 1.6510620457785314, 1.2950766387067558, 0.7418439302416996, 0.0, 9.98048857596022, 8.160283232658694, 6.475383193533778, 4.953186137335593, 7.630655273586344, 3.9573322664228017, 3.44924229499756, 2.550524373825129, 3.7025918529277275, 2.910509167002148, 1.6243284607910382, 0.7532208394176735, 0.0), # 86
(10.113199677938807, 8.246440113500597, 8.10811535493827, 8.712595788043478, 7.3956790123456795, 3.563477709190672, 3.4351807760141093, 2.8142541152263374, 3.8085807613168727, 1.645550617283951, 1.290569643806486, 0.7399875460255577, 0.0, 9.96607349537037, 8.139863006281134, 6.452848219032429, 4.936651851851852, 7.6171615226337455, 3.9399557613168725, 3.4351807760141093, 2.54534122085048, 3.6978395061728397, 2.904198596014493, 1.6216230709876542, 0.7496763739545999, 0.0), # 87
(10.088868074193357, 8.207196911367758, 8.094398076703246, 8.693318060587762, 7.385835417843406, 3.5560100467408424, 3.4210341859651954, 2.801965782655083, 3.8017474660874866, 1.6400233088571508, 1.2859772333471164, 0.7381080587843638, 0.0, 9.951156871570646, 8.119188646628, 6.429886166735582, 4.9200699265714505, 7.603494932174973, 3.9227520957171165, 3.4210341859651954, 2.540007176243459, 3.692917708921703, 2.897772686862588, 1.6188796153406495, 0.7461088101243417, 0.0), # 88
(10.063800739547922, 8.16771861033674, 8.080493898605397, 8.673710019122383, 7.375669060491138, 3.5483482065742016, 3.406807827383354, 2.7898054793476605, 3.794834152568206, 1.634480249603271, 1.2813066138972575, 0.7362064311441613, 0.0, 9.935764424725651, 8.098270742585774, 6.4065330694862865, 4.903440748809812, 7.589668305136412, 3.905727671086725, 3.406807827383354, 2.534534433267287, 3.687834530245569, 2.891236673040795, 1.6160987797210793, 0.7425198736669765, 0.0), # 89
(10.03804347826087, 8.128024193548386, 8.06640625, 8.653787364130435, 7.365196078431373, 3.5405092592592595, 3.3925070028011204, 2.7777777777777777, 3.7878472222222226, 1.6289215686274514, 1.2765649920255184, 0.7342836257309943, 0.0, 9.919921875, 8.077119883040936, 6.382824960127592, 4.886764705882353, 7.575694444444445, 3.888888888888889, 3.3925070028011204, 2.5289351851851856, 3.6825980392156863, 2.884595788043479, 1.6132812500000002, 0.7389112903225807, 0.0), # 90
(10.011642094590563, 8.088132644143545, 8.05213856024234, 8.63356579609501, 7.35443260980661, 3.532510275364528, 3.378137014751031, 2.7658872504191434, 3.780793076512727, 1.6233473950348318, 1.2717595743005101, 0.7323406051709063, 0.0, 9.903654942558298, 8.055746656879968, 6.35879787150255, 4.870042185104494, 7.561586153025454, 3.872242150586801, 3.378137014751031, 2.5232216252603767, 3.677216304903305, 2.8778552653650036, 1.6104277120484682, 0.7352847858312315, 0.0), # 91
(9.984642392795372, 8.048062945263066, 8.0376942586877, 8.613061015499195, 7.343394792759352, 3.524368325458518, 3.363703165765621, 2.754138469745466, 3.773678116902911, 1.6177578579305527, 1.2668975672908422, 0.7303783320899415, 0.0, 9.886989347565157, 8.034161652989356, 6.334487836454211, 4.853273573791657, 7.547356233805822, 3.8557938576436523, 3.363703165765621, 2.517405946756084, 3.671697396379676, 2.871020338499732, 1.6075388517375402, 0.7316420859330061, 0.0), # 92
(9.957090177133654, 8.00783408004779, 8.023076774691358, 8.592288722826089, 7.332098765432098, 3.5161004801097393, 3.349210758377425, 2.742536008230453, 3.766508744855967, 1.6121530864197533, 1.261986177565125, 0.7283977691141434, 0.0, 9.869950810185184, 8.012375460255576, 6.309930887825625, 4.836459259259259, 7.533017489711934, 3.839550411522634, 3.349210758377425, 2.5115003429355283, 3.666049382716049, 2.86409624094203, 1.6046153549382718, 0.727984916367981, 0.0), # 93
(9.92903125186378, 7.967465031638567, 8.008289537608597, 8.571264618558777, 7.320560665967347, 3.5077238098867043, 3.3346650951189805, 2.7310844383478132, 3.759291361835086, 1.6065332096075746, 1.2570326116919686, 0.7263998788695563, 0.0, 9.85256505058299, 7.990398667565118, 6.285163058459842, 4.819599628822722, 7.518582723670172, 3.823518213686939, 3.3346650951189805, 2.5055170070619317, 3.6602803329836733, 2.8570882061862592, 1.6016579075217197, 0.7243150028762335, 0.0), # 94
(9.90051142124411, 7.926974783176247, 7.993335976794697, 8.550004403180354, 7.308796632507598, 3.499255385357923, 3.320071478522822, 2.719788332571255, 3.7520323693034596, 1.6008983565991557, 1.2520440762399827, 0.7243856239822234, 0.0, 9.834857788923182, 7.968241863804456, 6.260220381199914, 4.8026950697974655, 7.504064738606919, 3.8077036655997567, 3.320071478522822, 2.4994681323985164, 3.654398316253799, 2.850001467726785, 1.5986671953589393, 0.7206340711978407, 0.0), # 95
(9.871576489533012, 7.886382317801674, 7.978219521604939, 8.528523777173913, 7.296822803195352, 3.4907122770919066, 3.3054352111214853, 2.708652263374486, 3.7447381687242793, 1.5952486564996373, 1.247027777777778, 0.7223559670781895, 0.0, 9.816854745370371, 7.945915637860083, 6.23513888888889, 4.785745969498911, 7.489476337448559, 3.7921131687242804, 3.3054352111214853, 2.4933659122085046, 3.648411401597676, 2.8428412590579715, 1.595643904320988, 0.7169438470728796, 0.0), # 96
(9.842272260988848, 7.845706618655694, 7.962943601394604, 8.506838441022543, 7.284655316173109, 3.482111555657166, 3.2907615954475067, 2.697680803231215, 3.7374151615607376, 1.589584238414159, 1.2419909228739638, 0.7203118707834976, 0.0, 9.798581640089164, 7.923430578618472, 6.209954614369819, 4.768752715242476, 7.474830323121475, 3.7767531245237014, 3.2907615954475067, 2.4872225397551184, 3.6423276580865545, 2.8356128136741816, 1.5925887202789208, 0.7132460562414268, 0.0), # 97
(9.812644539869984, 7.804966668879153, 7.947511645518976, 8.48496409520934, 7.272310309583368, 3.4734702916222124, 3.276055934033421, 2.68687852461515, 3.7300697492760246, 1.5839052314478608, 1.236940718097151, 0.7182542977241916, 0.0, 9.78006419324417, 7.900797274966106, 6.184703590485755, 4.751715694343581, 7.460139498552049, 3.7616299344612103, 3.276055934033421, 2.48105020830158, 3.636155154791684, 2.8283213650697805, 1.589502329103795, 0.7095424244435595, 0.0), # 98
(9.782739130434782, 7.764181451612902, 7.931927083333334, 8.462916440217391, 7.259803921568627, 3.464805555555556, 3.261323529411765, 2.67625, 3.7227083333333333, 1.5782117647058826, 1.2318843700159492, 0.7161842105263159, 0.0, 9.761328125, 7.878026315789473, 6.159421850079745, 4.734635294117647, 7.445416666666667, 3.7467500000000005, 3.261323529411765, 2.474861111111111, 3.6299019607843137, 2.820972146739131, 1.5863854166666669, 0.7058346774193549, 0.0), # 99
(9.752601836941611, 7.723369949997786, 7.916193344192958, 8.44071117652979, 7.247152290271389, 3.4561344180257074, 3.2465696841150726, 2.665799801859473, 3.715337315195854, 1.572503967293365, 1.2268290851989685, 0.714102571815914, 0.0, 9.742399155521262, 7.8551282899750525, 6.134145425994841, 4.717511901880093, 7.430674630391708, 3.732119722603262, 3.2465696841150726, 2.468667441446934, 3.6235761451356945, 2.8135703921765973, 1.5832386688385918, 0.7021245409088898, 0.0), # 100
(9.722278463648834, 7.682551147174654, 7.900313857453133, 8.41836400462963, 7.234371553834153, 3.4474739496011786, 3.231799700675881, 2.6555325026672763, 3.7079630963267793, 1.5667819683154474, 1.2217820702148188, 0.7120103442190294, 0.0, 9.723303004972564, 7.832113786409323, 6.108910351074094, 4.7003459049463405, 7.415926192653559, 3.7177455037341867, 3.231799700675881, 2.4624813925722706, 3.6171857769170765, 2.806121334876544, 1.5800627714906266, 0.6984137406522414, 0.0), # 101
(9.691814814814816, 7.641744026284349, 7.884292052469135, 8.395890625, 7.221477850399419, 3.4388412208504806, 3.217018881626725, 2.645452674897119, 3.7005920781893, 1.56104589687727, 1.2167505316321108, 0.7099084903617069, 0.0, 9.704065393518519, 7.808993393978774, 6.083752658160553, 4.683137690631809, 7.4011841563786, 3.703633744855967, 3.217018881626725, 2.4563151577503435, 3.6107389251997093, 2.798630208333334, 1.5768584104938272, 0.6947040023894864, 0.0), # 102
(9.661256694697919, 7.60096757046772, 7.8681313585962505, 8.373306738123993, 7.208487318109686, 3.430253302342123, 3.20223252950014, 2.63556489102271, 3.6932306622466085, 1.5552958820839726, 1.211741676019454, 0.7077979728699895, 0.0, 9.68471204132373, 7.785777701569883, 6.058708380097269, 4.6658876462519165, 7.386461324493217, 3.689790847431794, 3.20223252950014, 2.4501809302443736, 3.604243659054843, 2.7911022460413317, 1.5736262717192502, 0.6909970518607019, 0.0), # 103
(9.63064990755651, 7.560240762865614, 7.851835205189758, 8.350628044484703, 7.195416095107452, 3.421727264644617, 3.187445946828663, 2.6258737235177567, 3.685885249961896, 1.5495320530406955, 1.2067627099454585, 0.7056797543699213, 0.0, 9.665268668552812, 7.762477298069133, 6.033813549727292, 4.648596159122086, 7.371770499923792, 3.6762232129248593, 3.187445946828663, 2.4440909033175835, 3.597708047553726, 2.783542681494901, 1.5703670410379515, 0.687294614805965, 0.0), # 104
(9.600040257648953, 7.519582586618876, 7.835407021604938, 8.327870244565217, 7.182280319535221, 3.4132801783264752, 3.172664436144829, 2.6163837448559675, 3.6785622427983538, 1.5437545388525786, 1.201820839978735, 0.7035547974875461, 0.0, 9.64576099537037, 7.739102772363006, 6.009104199893674, 4.631263616557734, 7.3571244855967075, 3.662937242798354, 3.172664436144829, 2.4380572702331964, 3.5911401597676105, 2.775956748188406, 1.5670814043209877, 0.6835984169653525, 0.0), # 105
(9.569473549233614, 7.479012024868357, 7.818850237197074, 8.305049038848631, 7.1690961295354905, 3.404929113956206, 3.1578932999811724, 2.6070995275110502, 3.6712680422191735, 1.5379634686247616, 1.1969232726878927, 0.701424064848908, 0.0, 9.626214741941014, 7.715664713337986, 5.9846163634394625, 4.613890405874283, 7.342536084438347, 3.6499393385154706, 3.1578932999811724, 2.4320922242544327, 3.5845480647677452, 2.768349679616211, 1.5637700474394147, 0.6799101840789417, 0.0), # 106
(9.538995586568856, 7.438548060754901, 7.802168281321446, 8.282180127818036, 7.155879663250759, 3.3966911421023225, 3.1431378408702306, 2.5980256439567144, 3.6640090496875475, 1.532158971462385, 1.1920772146415421, 0.6992885190800504, 0.0, 9.606655628429355, 7.692173709880553, 5.96038607320771, 4.596476914387154, 7.328018099375095, 3.6372359015394005, 3.1431378408702306, 2.426207958644516, 3.5779398316253794, 2.760726709272679, 1.5604336562642893, 0.6762316418868093, 0.0), # 107
(9.508652173913044, 7.398209677419356, 7.785364583333334, 8.259279211956523, 7.1426470588235285, 3.3885833333333335, 3.1284033613445374, 2.589166666666667, 3.656791666666667, 1.5263411764705888, 1.1872898724082936, 0.6971491228070177, 0.0, 9.587109375, 7.668640350877193, 5.936449362041468, 4.579023529411765, 7.313583333333334, 3.624833333333334, 3.1284033613445374, 2.4204166666666667, 3.5713235294117642, 2.7530930706521746, 1.557072916666667, 0.6725645161290325, 0.0), # 108
(9.478489115524543, 7.358015858002567, 7.768442572588021, 8.23636199174718, 7.129414454396299, 3.3806227582177515, 3.113695163936631, 2.580527168114617, 3.6496222946197223, 1.5205102127545123, 1.1825684525567568, 0.6950068386558532, 0.0, 9.567601701817559, 7.645075225214384, 5.9128422627837836, 4.561530638263536, 7.299244589239445, 3.612738035360464, 3.113695163936631, 2.4147305415841083, 3.5647072271981495, 2.7454539972490606, 1.5536885145176043, 0.668910532545688, 0.0), # 109
(9.448552215661715, 7.317985585645383, 7.751405678440788, 8.213444167673108, 7.116197988111569, 3.3728264873240867, 3.0990185511790447, 2.5721117207742723, 3.6425073350099066, 1.5146662094192962, 1.177920161655542, 0.6928626292526012, 0.0, 9.54815832904664, 7.621488921778612, 5.8896008082777085, 4.543998628257887, 7.285014670019813, 3.600956409083981, 3.0990185511790447, 2.409161776660062, 3.5580989940557846, 2.737814722557703, 1.5502811356881578, 0.6652714168768531, 0.0), # 110
(9.41888727858293, 7.278137843488651, 7.7342573302469155, 8.190541440217391, 7.103013798111837, 3.365211591220851, 3.0843788256043156, 2.5639248971193416, 3.635453189300412, 1.5088092955700803, 1.173352206273259, 0.6907174572233054, 0.0, 9.528804976851852, 7.597892029456357, 5.866761031366295, 4.526427886710239, 7.270906378600824, 3.5894948559670783, 3.0843788256043156, 2.4037225651577505, 3.5515068990559184, 2.7301804800724643, 1.546851466049383, 0.6616488948626047, 0.0), # 111
(9.38954010854655, 7.238491614673214, 7.717000957361684, 8.167669509863124, 7.089878022539605, 3.357795140476554, 3.069781289744979, 2.5559712696235333, 3.628466258954427, 1.5029396003120044, 1.1688717929785184, 0.6885722851940093, 0.0, 9.509567365397805, 7.574295137134101, 5.844358964892591, 4.5088188009360115, 7.256932517908854, 3.5783597774729463, 3.069781289744979, 2.3984251003403956, 3.5449390112698027, 2.7225565032877084, 1.543400191472337, 0.6580446922430195, 0.0), # 112
(9.360504223703044, 7.1991320672204555, 7.699681523543391, 8.14487541186903, 7.076783786782469, 3.3505906987084666, 3.0552629818283847, 2.548271903658586, 3.6215709370862066, 1.4970761841531826, 1.1644873176921446, 0.6864327447087024, 0.0, 9.490443900843221, 7.550760191795725, 5.8224365884607225, 4.491228552459547, 7.243141874172413, 3.5675806651220205, 3.0552629818283847, 2.3932790705060474, 3.5383918933912346, 2.7149584706230105, 1.5399363047086783, 0.654466551565496, 0.0), # 113
(9.331480897900065, 7.16044741823174, 7.682538062518016, 8.122342065958001, 7.063595569710884, 3.343581854975776, 3.0410091042052896, 2.5409213581271333, 3.6148730119043533, 1.491328791978196, 1.1602073895188663, 0.684326014342748, 0.0, 9.471275414160035, 7.5275861577702265, 5.801036947594331, 4.473986375934587, 7.229746023808707, 3.557289901377987, 3.0410091042052896, 2.3882727535541255, 3.531797784855442, 2.7074473553193346, 1.5365076125036032, 0.6509497652937947, 0.0), # 114
(9.302384903003995, 7.122451598792792, 7.665580777256098, 8.100063378886334, 7.050271785259067, 3.3367503822909463, 3.027029825095781, 2.533917772616129, 3.6083749928895963, 1.4857063319970194, 1.1560257519045158, 0.6822531318799043, 0.0, 9.452006631660376, 7.5047844506789465, 5.7801287595225785, 4.457118995991058, 7.216749985779193, 3.5474848816625806, 3.027029825095781, 2.3833931302078186, 3.5251358926295335, 2.700021126295445, 1.5331161554512198, 0.647495599890254, 0.0), # 115
(9.273179873237634, 7.0850892578507265, 7.648776824986561, 8.077999612699802, 7.036792350922519, 3.330080178417474, 3.0133024087639466, 2.5272417970412473, 3.6020604464092765, 1.480198339612387, 1.1519343218785802, 0.6802102664572789, 0.0, 9.43260725975589, 7.482312931030067, 5.7596716093929015, 4.44059501883716, 7.204120892818553, 3.5381385158577463, 3.0133024087639466, 2.3786286988696244, 3.5183961754612594, 2.6926665375666015, 1.5297553649973124, 0.6440990234409752, 0.0), # 116
(9.243829442823772, 7.04830504435266, 7.632093362938321, 8.056111029444182, 7.02313718419674, 3.323555141118853, 2.9998041194738763, 2.5208740813181603, 3.5959129388307343, 1.4747943502270324, 1.1479250164705472, 0.6781935872119792, 0.0, 9.413047004858225, 7.46012945933177, 5.739625082352736, 4.424383050681096, 7.1918258776614685, 3.5292237138454245, 2.9998041194738763, 2.3739679579420376, 3.51156859209837, 2.6853703431480613, 1.5264186725876645, 0.6407550040320601, 0.0), # 117
(9.214297245985211, 7.0120436072457135, 7.615497548340306, 8.03435789116525, 7.009286202577227, 3.317159168158581, 2.9865122214896576, 2.51479527536254, 3.5899160365213114, 1.46948389924369, 1.143989752709904, 0.6761992632811126, 0.0, 9.393295573379024, 7.438191896092237, 5.71994876354952, 4.40845169773107, 7.179832073042623, 3.5207133855075567, 2.9865122214896576, 2.369399405827558, 3.5046431012886137, 2.678119297055084, 1.5230995096680613, 0.6374585097496104, 0.0), # 118
(9.184546916944742, 6.976249595477001, 7.598956538421437, 8.012700459908778, 6.99521932355948, 3.3108761573001524, 2.973403979075378, 2.5089860290900607, 3.5840533058483475, 1.4642565220650932, 1.1401204476261382, 0.6742234638017862, 0.0, 9.373322671729932, 7.416458101819647, 5.70060223813069, 4.392769566195279, 7.168106611696695, 3.5125804407260848, 2.973403979075378, 2.3649115409286803, 3.49760966177974, 2.670900153302927, 1.5197913076842873, 0.6342045086797276, 0.0), # 119
(9.154542089925162, 6.940867657993644, 7.582437490410635, 7.991098997720545, 6.980916464638998, 3.304690006307063, 2.9604566564951265, 2.5034269924163928, 3.578308313179186, 1.4591017540939766, 1.136309018248736, 0.6722623579111081, 0.0, 9.353098006322597, 7.394885937022188, 5.68154509124368, 4.377305262281929, 7.156616626358372, 3.50479778938295, 2.9604566564951265, 2.360492861647902, 3.490458232319499, 2.663699665906849, 1.516487498082127, 0.6309879689085133, 0.0), # 120
(9.124246399149268, 6.90584244374276, 7.565907561536823, 7.969513766646325, 6.966357543311279, 3.29858461294281, 2.94764751801299, 2.4980988152572112, 3.572664624881166, 1.4540091307330743, 1.1325473816071863, 0.6703121147461852, 0.0, 9.33259128356866, 7.373433262208036, 5.662736908035931, 4.362027392199222, 7.145329249762332, 3.497338341360096, 2.94764751801299, 2.356131866387721, 3.4831787716556395, 2.656504588882109, 1.5131815123073646, 0.6278038585220692, 0.0), # 121
(9.093623478839854, 6.871118601671464, 7.549333909028926, 7.947905028731892, 6.951522477071823, 3.292543874970886, 2.9349538278930587, 2.492982147528187, 3.5671058073216297, 1.4489681873851195, 1.1288274547309753, 0.6683689034441251, 0.0, 9.31177220987977, 7.352057937885375, 5.644137273654876, 4.346904562155357, 7.1342116146432595, 3.490175006539462, 2.9349538278930587, 2.351817053550633, 3.4757612385359113, 2.6493016762439643, 1.5098667818057854, 0.6246471456064968, 0.0), # 122
(9.062636963219719, 6.836640780726876, 7.532683690115864, 7.92623304602302, 6.936391183416127, 3.28655169015479, 2.9223528503994194, 2.4880576391449933, 3.5616154268679177, 1.443968459452847, 1.1251411546495909, 0.6664288931420351, 0.0, 9.290610491667572, 7.330717824562385, 5.625705773247954, 4.33190537835854, 7.123230853735835, 3.4832806948029904, 2.9223528503994194, 2.3475369215391355, 3.4681955917080636, 2.642077682007674, 1.5065367380231727, 0.621512798247898, 0.0), # 123
(9.031250486511654, 6.802353629856113, 7.515924062026559, 7.90445808056549, 6.920943579839691, 3.2805919562580144, 2.9098218497961597, 2.483305940023303, 3.5561770498873715, 1.4389994823389904, 1.1214803983925201, 0.664488252977023, 0.0, 9.269075835343711, 7.309370782747252, 5.6074019919625995, 4.316998447016971, 7.112354099774743, 3.476628316032624, 2.9098218497961597, 2.3432799687557244, 3.4604717899198456, 2.634819360188497, 1.5031848124053118, 0.618395784532374, 0.0), # 124
(8.999427682938459, 6.768201798006293, 7.499022181989936, 7.88254039440507, 6.905159583838015, 3.274648571044058, 2.8973380903473696, 2.478707700078788, 3.5507742427473308, 1.4340507914462837, 1.1178371029892504, 0.6625431520861957, 0.0, 9.247137947319828, 7.2879746729481525, 5.5891855149462515, 4.30215237433885, 7.1015484854946616, 3.470190780110303, 2.8973380903473696, 2.3390346936028985, 3.4525797919190073, 2.6275134648016905, 1.4998044363979874, 0.6152910725460268, 0.0), # 125
(8.967132186722928, 6.734129934124536, 7.481945207234916, 7.8604402495875405, 6.889019112906595, 3.2687054322764144, 2.884878836317135, 2.474243569227122, 3.545390571815139, 1.4291119221774609, 1.1142031854692689, 0.6605897596066612, 0.0, 9.224766534007578, 7.266487355673273, 5.571015927346345, 4.287335766532382, 7.090781143630278, 3.463940996917971, 2.884878836317135, 2.334789594483153, 3.4445095564532977, 2.620146749862514, 1.4963890414469831, 0.6121936303749579, 0.0), # 126
(8.93432763208786, 6.7000826871579555, 7.464660294990421, 7.838117908158674, 6.8725020845409315, 3.26274643771858, 2.872421351969547, 2.469894197383977, 3.5400096034581354, 1.4241724099352562, 1.1105705628620632, 0.6586242446755264, 0.0, 9.201931301818599, 7.244866691430789, 5.552852814310316, 4.272517229805768, 7.080019206916271, 3.457851876337568, 2.872421351969547, 2.3305331697989855, 3.4362510422704657, 2.612705969386225, 1.4929320589980841, 0.6090984261052688, 0.0), # 127
(8.900977653256046, 6.666004706053673, 7.447134602485375, 7.815533632164248, 6.855588416236526, 3.2567554851340508, 2.859942901568691, 2.465640234465026, 3.534614904043661, 1.4192217901224033, 1.1069311521971208, 0.6566427764298991, 0.0, 9.178601957164537, 7.223070540728888, 5.534655760985604, 4.257665370367209, 7.069229808087322, 3.4518963282510366, 2.859942901568691, 2.3262539179528936, 3.427794208118263, 2.6051778773880834, 1.4894269204970751, 0.6060004278230613, 0.0), # 128
(8.867045884450281, 6.631840639758805, 7.4293352869486995, 7.792647683650037, 6.838258025488874, 3.250716472286322, 2.8474207493786565, 2.4614623303859418, 3.529190039939058, 1.4142495981416365, 1.1032768705039286, 0.6546415240068865, 0.0, 9.154748206457038, 7.20105676407575, 5.516384352519642, 4.242748794424909, 7.058380079878116, 3.4460472625403185, 2.8474207493786565, 2.321940337347373, 3.419129012744437, 2.597549227883346, 1.4858670573897401, 0.6028946036144368, 0.0), # 129
(8.832495959893366, 6.5975351372204685, 7.411229505609316, 7.769420324661814, 6.820490829793475, 3.2446132969388883, 2.8348321596635313, 2.457341135062396, 3.5237185775116666, 1.4092453693956895, 1.0995996348119743, 0.6526166565435961, 0.0, 9.130339756107748, 7.178783221979556, 5.4979981740598705, 4.2277361081870675, 7.047437155023333, 3.4402775890873545, 2.8348321596635313, 2.3175809263849203, 3.4102454148967376, 2.589806774887272, 1.4822459011218634, 0.5997759215654973, 0.0), # 130
(8.797291513808094, 6.563032847385783, 7.392784415696151, 7.7458118172453565, 6.802266746645829, 3.238429856855247, 2.8221543966874045, 2.4532572984100627, 3.5181840831288285, 1.4041986392872965, 1.0958913621507447, 0.6505643431771354, 0.0, 9.105346312528312, 7.156207774948489, 5.479456810753724, 4.212595917861889, 7.036368166257657, 3.4345602177740875, 2.8221543966874045, 2.3131641834680337, 3.4011333733229145, 2.5819372724151193, 1.4785568831392302, 0.596639349762344, 0.0), # 131
(8.76139618041726, 6.528278419201865, 7.373967174438122, 7.72178242344644, 6.783565693541435, 3.2321500497988933, 2.8093647247143627, 2.449191470344614, 3.5125701231578845, 1.3990989432191914, 1.0921439695497275, 0.6484807530446118, 0.0, 9.079737582130376, 7.13328828349073, 5.460719847748638, 4.1972968296575734, 7.025140246315769, 3.4288680584824593, 2.8093647247143627, 2.3086786069992096, 3.3917828467707176, 2.573927474482147, 1.4747934348876244, 0.5934798562910787, 0.0), # 132
(8.724773593943663, 6.493216501615832, 7.354744939064153, 7.697292405310838, 6.764367587975791, 3.225757773533322, 2.7964404080084946, 2.445124300781722, 3.5068602639661752, 1.3939358165941083, 1.0883493740384103, 0.6463620552831327, 0.0, 9.053483271325586, 7.10998260811446, 5.44174687019205, 4.181807449782324, 7.0137205279323505, 3.4231740210944106, 2.7964404080084946, 2.3041126953809443, 3.3821837939878954, 2.5657641351036133, 1.4709489878128308, 0.590292409237803, 0.0), # 133
(8.687387388610095, 6.457791743574804, 7.33508486680317, 7.672302024884328, 6.7446523474443945, 3.2192369258220297, 2.7833587108338893, 2.44103643963706, 3.5010380719210428, 1.388698794814781, 1.0844994926462799, 0.6442044190298056, 0.0, 9.026553086525583, 7.0862486093278605, 5.422497463231399, 4.166096384444343, 7.0020761438420855, 3.417451015491884, 2.7833587108338893, 2.2994549470157355, 3.3723261737221972, 2.557434008294776, 1.4670169733606342, 0.5870719766886187, 0.0), # 134
(8.649201198639354, 6.421948794025897, 7.314954114884091, 7.646771544212684, 6.724399889442747, 3.212571404428512, 2.770096897454634, 2.4369085368263, 3.4950871133898262, 1.3833774132839443, 1.0805862424028239, 0.6420040134217377, 0.0, 8.99891673414202, 7.0620441476391145, 5.402931212014119, 4.150132239851832, 6.9901742267796525, 3.41167195155682, 2.770096897454634, 2.2946938603060802, 3.3621999447213735, 2.548923848070895, 1.4629908229768183, 0.583813526729627, 0.0), # 135
(8.610178658254235, 6.385632301916229, 7.294319840535841, 7.62066122534168, 6.703590131466344, 3.205745107116265, 2.7566322321348173, 2.4327212422651154, 3.4889909547398688, 1.3779612074043308, 1.0766015403375297, 0.6397570075960368, 0.0, 8.970543920586536, 7.037327083556404, 5.383007701687648, 4.133883622212991, 6.9779819094797375, 3.4058097391711617, 2.7566322321348173, 2.289817933654475, 3.351795065733172, 2.540220408447227, 1.4588639681071682, 0.58051202744693, 0.0), # 136
(8.570283401677534, 6.348786916192918, 7.273149200987342, 7.593931330317094, 6.682202991010689, 3.1987419316487826, 2.7429419791385277, 2.428455205869179, 3.4827331623385107, 1.3724397125786756, 1.0725373034798844, 0.63745957068981, 0.0, 8.941404352270776, 7.012055277587909, 5.362686517399421, 4.117319137736026, 6.965466324677021, 3.3998372882168506, 2.7429419791385277, 2.284815665463416, 3.3411014955053444, 2.5313104434390317, 1.4546298401974684, 0.577162446926629, 0.0), # 137
(8.529479063132047, 6.311357285803083, 7.251409353467515, 7.566542121184698, 6.660218385571278, 3.1915457757895624, 2.729003402729852, 2.4240910775541624, 3.4762973025530934, 1.3668024642097119, 1.0683854488593754, 0.6351078718401649, 0.0, 8.91146773560639, 6.986186590241813, 5.341927244296877, 4.100407392629135, 6.952594605106187, 3.3937275085758274, 2.729003402729852, 2.2796755541354017, 3.330109192785639, 2.5221807070615663, 1.450281870693503, 0.5737597532548258, 0.0), # 138
(8.487729276840568, 6.273288059693839, 7.229067455205284, 7.538453859990269, 6.63761623264361, 3.184140537302099, 2.7147937671728797, 2.4196095072357395, 3.469666941750957, 1.3610389977001744, 1.0641378935054902, 0.6326980801842089, 0.0, 8.880703777005019, 6.959678882026297, 5.32068946752745, 4.083116993100523, 6.939333883501914, 3.3874533101300353, 2.7147937671728797, 2.274386098072928, 3.318808116321805, 2.51281795333009, 1.4458134910410567, 0.5702989145176218, 0.0), # 139
(8.444997677025897, 6.234523886812306, 7.206090663429573, 7.509626808779583, 6.614376449723186, 3.176510113949888, 2.7002903367316984, 2.4149911448295818, 3.462825646299444, 1.3551388484527966, 1.0597865544477159, 0.6302263648590494, 0.0, 8.849082182878314, 6.932490013449542, 5.298932772238579, 4.0654165453583895, 6.925651292598888, 3.3809876027614147, 2.7002903367316984, 2.2689357956784915, 3.307188224861593, 2.5032089362598615, 1.4412181326859146, 0.5667748988011189, 0.0), # 140
(8.40124789791083, 6.195009416105602, 7.1824461353693, 7.480021229598415, 6.590478954305501, 3.1686384034964257, 2.6854703756703975, 2.4102166402513627, 3.455756982565893, 1.349091551870313, 1.0553233487155398, 0.6276888950017938, 0.0, 8.816572659637913, 6.904577845019731, 5.276616743577699, 4.047274655610939, 6.911513965131786, 3.3743032963519077, 2.6854703756703975, 2.26331314535459, 3.2952394771527507, 2.4933404098661387, 1.4364892270738603, 0.5631826741914184, 0.0), # 141
(8.356443573718156, 6.154689296520844, 7.158101028253392, 7.44959738449254, 6.565903663886058, 3.1605093037052074, 2.670311148253063, 2.4052666434167547, 3.448444516917647, 1.3428866433554572, 1.0507401933384497, 0.6250818397495496, 0.0, 8.783144913695466, 6.875900237245045, 5.253700966692247, 4.028659930066371, 6.896889033835294, 3.3673733007834565, 2.670311148253063, 2.2575066455037196, 3.282951831943029, 2.4831991281641805, 1.4316202056506786, 0.5595172087746222, 0.0), # 142
(8.310548338670674, 6.113508177005149, 7.133022499310772, 7.418315535507731, 6.540630495960352, 3.152106712339729, 2.6547899187437842, 2.4001218042414303, 3.4408718157220486, 1.3365136583109634, 1.0460290053459322, 0.6224013682394242, 0.0, 8.748768651462617, 6.846415050633665, 5.230145026729661, 4.009540974932889, 6.881743631444097, 3.360170525938002, 2.6547899187437842, 2.251504794528378, 3.270315247980176, 2.472771845169244, 1.4266044998621543, 0.5557734706368318, 0.0), # 143
(8.263525826991184, 6.071410706505636, 7.107177705770357, 7.386135944689768, 6.514639368023886, 3.1434145271634857, 2.6388839514066493, 2.3947627726410623, 3.4330224453464364, 1.3299621321395652, 1.0411817017674754, 0.619643649608525, 0.0, 8.713413579351014, 6.816080145693774, 5.205908508837376, 3.9898863964186946, 6.866044890692873, 3.3526678816974873, 2.6388839514066493, 2.245296090831061, 3.257319684011943, 2.4620453148965895, 1.4214355411540713, 0.5519464278641489, 0.0), # 144
(8.215339672902477, 6.0283415339694235, 7.080533804861075, 7.353018874084421, 6.487910197572155, 3.134416645939974, 2.6225705105057466, 2.3891701985313234, 3.424879972158151, 1.3232216002439972, 1.036190199632566, 0.6168048529939595, 0.0, 8.6770494037723, 6.784853382933553, 5.180950998162829, 3.969664800731991, 6.849759944316302, 3.344838277943853, 2.6225705105057466, 2.238869032814267, 3.2439550987860777, 2.451006291361474, 1.4161067609722149, 0.548031048542675, 0.0), # 145
(8.16595351062735, 5.984245308343629, 7.053057953811847, 7.318924585737469, 6.460422902100661, 3.1250969664326886, 2.605826860305165, 2.3833247318278863, 3.4164279625245353, 1.3162815980269928, 1.0310464159706916, 0.6138811475328351, 0.0, 8.639645831138118, 6.7526926228611845, 5.155232079853457, 3.948844794080978, 6.832855925049071, 3.3366546245590407, 2.605826860305165, 2.2322121188804918, 3.2302114510503306, 2.439641528579157, 1.4106115907623695, 0.5440223007585119, 0.0), # 146
(8.1153309743886, 5.93906667857537, 7.024717309851591, 7.283813341694685, 6.4321573991049, 3.1154393864051255, 2.5886302650689905, 2.3772070224464232, 3.40764998281293, 1.3091316608912866, 1.0257422678113395, 0.6108687023622593, 0.0, 8.601172567860118, 6.719555725984851, 5.1287113390566965, 3.9273949826738592, 6.81529996562586, 3.3280898314249923, 2.5886302650689905, 2.2253138474322327, 3.21607869955245, 2.4279377805648954, 1.4049434619703185, 0.5399151525977609, 0.0), # 147
(8.063435698409021, 5.892750293611764, 6.9954790302092364, 7.247645404001847, 6.403093606080374, 3.105427803620781, 2.5709579890613132, 2.3707977203026074, 3.398529599390676, 1.301761324239612, 1.0202696721839972, 0.6077636866193392, 0.0, 8.561599320349941, 6.68540055281273, 5.101348360919985, 3.905283972718835, 6.797059198781352, 3.3191168084236504, 2.5709579890613132, 2.2181627168719866, 3.201546803040187, 2.4158818013339496, 1.3990958060418472, 0.535704572146524, 0.0), # 148
(8.010231316911412, 5.845240802399927, 6.965310272113703, 7.210381034704727, 6.37321144052258, 3.0950461158431497, 2.5527872965462204, 2.3640774753121114, 3.3890503786251127, 1.2941601234747035, 1.0146205461181517, 0.6045622694411826, 0.0, 8.520895795019237, 6.650184963853008, 5.073102730590758, 3.88248037042411, 6.778100757250225, 3.3097084654369557, 2.5527872965462204, 2.21074722560225, 3.18660572026129, 2.403460344901576, 1.3930620544227408, 0.5313855274909026, 0.0), # 149
(7.955681464118564, 5.796482853886981, 6.934178192793912, 7.171980495849104, 6.342490819927017, 3.0842782208357287, 2.5340954517878003, 2.3570269373906068, 3.3791958868835836, 1.2863175939992944, 1.0087868066432906, 0.601260619964897, 0.0, 8.479031698279647, 6.6138668196138655, 5.043934033216452, 3.8589527819978824, 6.758391773767167, 3.2998377123468496, 2.5340954517878003, 2.2030558720255207, 3.1712454099635083, 2.390660165283035, 1.3868356385587826, 0.5269529867169983, 0.0), # 150
(7.899749774253275, 5.746421097020041, 6.902049949478785, 7.132404049480748, 6.310911661789184, 3.0731080163620113, 2.5148597190501416, 2.3496267564537683, 3.3689496905334293, 1.2782232712161197, 1.002760370788901, 0.5978549073275894, 0.0, 8.435976736542818, 6.576403980603482, 5.013801853944504, 3.8346698136483583, 6.737899381066859, 3.2894774590352753, 2.5148597190501416, 2.1950771545442938, 3.155455830894592, 2.377468016493583, 1.3804099898957571, 0.5224019179109128, 0.0), # 151
(7.842399881538343, 5.6950001807462245, 6.868892699397251, 7.091611957645439, 6.278453883604579, 3.0615194001854955, 2.4950573625973322, 2.3418575824172674, 3.3582953559419897, 1.2698666905279126, 0.9965331555844703, 0.5943413006663675, 0.0, 8.391700616220398, 6.537754307330042, 4.982665777922351, 3.809600071583737, 6.716590711883979, 3.2786006153841742, 2.4950573625973322, 2.1867995715610684, 3.1392269418022893, 2.36387065254848, 1.3737785398794504, 0.5177272891587478, 0.0), # 152
(7.78359542019656, 5.642164754012652, 6.834673599778224, 7.049564482388949, 6.245097402868703, 3.049496270069676, 2.4746656466934596, 2.333700065196776, 3.3472164494766075, 1.2612373873374074, 0.9900970780594861, 0.5907159691183387, 0.0, 8.346173043724027, 6.497875660301725, 4.95048539029743, 3.783712162012222, 6.694432898953215, 3.2671800912754865, 2.4746656466934596, 2.17821162147834, 3.1225487014343516, 2.3498548274629836, 1.3669347199556448, 0.5129240685466048, 0.0), # 153
(7.723300024450729, 5.587859465766439, 6.7993598078506325, 7.006221885757057, 6.210822137077053, 3.0370225237780484, 2.453661835602614, 2.325134854707968, 3.3356965375046217, 1.2523248970473384, 0.9834440552434354, 0.5869750818206104, 0.0, 8.299363725465357, 6.456725900026714, 4.917220276217177, 3.7569746911420143, 6.671393075009243, 3.2551887965911552, 2.453661835602614, 2.169301802698606, 3.1054110685385266, 2.335407295252353, 1.3598719615701265, 0.5079872241605854, 0.0), # 154
(7.6614773285236355, 5.532028964954703, 6.762918480843396, 6.961544429795533, 6.175608003725131, 3.0240820590741087, 2.4320231935888805, 2.316142600866515, 3.323719186393376, 1.2431187550604388, 0.9765660041658056, 0.5831148079102902, 0.0, 8.251242367856026, 6.414262887013191, 4.882830020829028, 3.7293562651813157, 6.647438372786752, 3.242599641213121, 2.4320231935888805, 2.160058613624363, 3.0878040018625654, 2.320514809931845, 1.3525836961686795, 0.5029117240867913, 0.0), # 155
(7.598090966638081, 5.474617900524564, 6.725316775985439, 6.915492376550157, 6.139434920308432, 3.0106587737213526, 2.40972698491635, 2.3067039535880913, 3.3112679625102084, 1.2336084967794434, 0.9694548418560842, 0.5791313165244852, 0.0, 8.201778677307685, 6.370444481769337, 4.84727420928042, 3.7008254903383295, 6.622535925020417, 3.2293855350233276, 2.40972698491635, 2.150470552658109, 3.069717460154216, 2.3051641255167192, 1.3450633551970879, 0.49769253641132405, 0.0), # 156
(7.533104573016862, 5.415570921423138, 6.686521850505682, 6.868025988066703, 6.102282804322456, 2.9967365654832747, 2.3867504738491094, 2.2967995627883675, 3.2983264322224626, 1.2237836576070855, 0.9621024853437583, 0.5750207768003032, 0.0, 8.150942360231976, 6.325228544803333, 4.810512426718791, 3.671350972821256, 6.596652864444925, 3.2155193879037145, 2.3867504738491094, 2.140526118202339, 3.051141402161228, 2.2893419960222348, 1.3373043701011365, 0.4923246292202853, 0.0), # 157
(7.464680946405239, 5.353748694041236, 6.644659961585297, 6.817327186238432, 6.062454070580665, 2.9814309445183143, 2.3625533604639286, 2.285748730145572, 3.2838873638663655, 1.213341479072786, 0.9542659587564906, 0.570633297016195, 0.0, 8.096485859415345, 6.276966267178143, 4.771329793782452, 3.640024437218358, 6.567774727732731, 3.200048222203801, 2.3625533604639286, 2.129593531798796, 3.0312270352903323, 2.2724423954128112, 1.3289319923170593, 0.48670442673102154, 0.0), # 158
(7.382286766978402, 5.282809876299521, 6.58894818200249, 6.7529828690913405, 6.010127539854418, 2.95965229467081, 2.334106381692858, 2.2696723053184926, 3.2621424204073812, 1.2005702485246865, 0.9445694892698324, 0.5651135436402591, 0.0, 8.025427646920194, 6.216248980042849, 4.722847446349162, 3.601710745574059, 6.5242848408147625, 3.17754122744589, 2.334106381692858, 2.114037353336293, 3.005063769927209, 2.250994289697114, 1.3177896364004982, 0.4802554432999565, 0.0), # 159
(7.284872094904309, 5.202172001162321, 6.51826746496324, 6.673933132806645, 5.94428008756453, 2.9308657560278157, 2.301121874191892, 2.248166328969728, 3.2324750757428835, 1.1853014129657236, 0.9328765847682567, 0.5583751624073207, 0.0, 7.93642060889358, 6.142126786480525, 4.664382923841283, 3.55590423889717, 6.464950151485767, 3.147432860557619, 2.301121874191892, 2.0934755400198686, 2.972140043782265, 2.2246443776022153, 1.3036534929926482, 0.47292472737839286, 0.0), # 160
(7.17322205458596, 5.11236079574043, 6.4333724765919245, 6.5809293778175455, 5.865595416188075, 2.895420057582683, 2.263840723003438, 2.2215002221290754, 3.1952765889996724, 1.1676645482927346, 0.9192902757666179, 0.5504806224089643, 0.0, 7.830374044819097, 6.055286846498606, 4.596451378833089, 3.5029936448782033, 6.390553177999345, 3.1101003109807053, 2.263840723003438, 2.0681571839876307, 2.9327977080940375, 2.1936431259391824, 1.2866744953183848, 0.46476007234003913, 0.0), # 161
(7.048121770426357, 5.013901987144635, 6.335017883012913, 6.474723004557244, 5.7747572282021356, 2.853663928328766, 2.2225038131699044, 2.1899434058263343, 3.150938219304545, 1.147789230402558, 0.9039135927797701, 0.5414923927367745, 0.0, 7.708197254180333, 5.956416320104519, 4.519567963898851, 3.4433676912076736, 6.30187643860909, 3.065920768156868, 2.2225038131699044, 2.03833137737769, 2.8873786141010678, 2.158241001519082, 1.2670035766025827, 0.4558092715586033, 0.0), # 162
(6.9103563668284975, 4.90732130248573, 6.223958350350585, 6.35606541345895, 5.672449226083792, 2.8059460972594175, 2.1773520297337003, 2.153765301091302, 3.0998512257843016, 1.1258050351920315, 0.8868495663225682, 0.5314729424823361, 0.0, 7.570799536460879, 5.846202367305696, 4.43424783161284, 3.3774151055760937, 6.199702451568603, 3.015271421527823, 2.1773520297337003, 2.0042472123281554, 2.836224613041896, 2.118688471152984, 1.2447916700701172, 0.4461201184077937, 0.0), # 163
(6.760710968195384, 4.793144468874502, 6.100948544729314, 6.225708004955863, 5.559355112310126, 2.752615293367992, 2.128626257737233, 2.113235328953779, 3.0424068675657407, 1.1018415385579923, 0.8682012269098661, 0.5204847407372336, 0.0, 7.419090191144328, 5.725332148109569, 4.34100613454933, 3.305524615673976, 6.0848137351314815, 2.9585294605352903, 2.128626257737233, 1.9661537809771372, 2.779677556155063, 2.075236001651955, 1.2201897089458629, 0.43574040626131844, 0.0), # 164
(6.599970698930017, 4.671897213421746, 5.966743132273474, 6.084402179481189, 5.436158589358215, 2.694020245647842, 2.076567382222911, 2.068622910443561, 2.9789964037756596, 1.0760283163972786, 0.8480716050565187, 0.5085902565930517, 0.0, 7.25397851771427, 5.594492822523568, 4.2403580252825925, 3.2280849491918353, 5.957992807551319, 2.8960720746209856, 2.076567382222911, 1.9243001754627442, 2.7180792946791077, 2.0281340598270634, 1.1933486264546949, 0.42471792849288603, 0.0), # 165
(6.428920683435397, 4.54410526323825, 5.82209677910744, 5.932899337468126, 5.3035433597051425, 2.630509683092322, 2.021416288233143, 2.020197466590449, 2.9100110935408576, 1.0484949446067282, 0.8265637312773799, 0.49585195914137514, 0.0, 7.0763738156542955, 5.454371550555126, 4.1328186563869, 3.145484833820184, 5.820022187081715, 2.8282764532266285, 2.021416288233143, 1.8789354879230868, 2.6517716798525712, 1.9776331124893758, 1.1644193558214881, 0.41310047847620457, 0.0), # 166
(6.248346046114523, 4.410294345434805, 5.667764151355587, 5.771950879349882, 5.1621931258279865, 2.562432334694784, 1.9634138608103373, 1.9682284184242402, 2.835842195988133, 1.0193709990831787, 0.8037806360873045, 0.48233231747378824, 0.0, 6.887185384447996, 5.30565549221167, 4.0189031804365225, 3.058112997249536, 5.671684391976266, 2.755519785793936, 1.9634138608103373, 1.8303088104962744, 2.5810965629139933, 1.9239836264499612, 1.1335528302711175, 0.4009358495849823, 0.0), # 167
(6.059031911370395, 4.270990187122201, 5.50449991514229, 5.60230820555966, 5.012791590203827, 2.490136929448583, 1.902800984996902, 1.9129851869747332, 2.7568809702442847, 0.9887860557234682, 0.7798253500011468, 0.468093800681876, 0.0, 6.6873225235789615, 5.149031807500635, 3.8991267500057343, 2.9663581671704042, 5.513761940488569, 2.6781792617646265, 1.902800984996902, 1.7786692353204163, 2.5063957951019136, 1.867436068519887, 1.100899983028458, 0.3882718351929274, 0.0), # 168
(5.861763403606015, 4.1267185154112305, 5.333058736591924, 5.4247227165306615, 4.856022455309747, 2.413972196347072, 1.8398185458352458, 1.8547371932717271, 2.6735186754361124, 0.9568696904244344, 0.7548009035337614, 0.45319887785722274, 0.0, 6.477694532530785, 4.985187656429449, 3.774004517668807, 2.8706090712733023, 5.347037350872225, 2.596632070580418, 1.8398185458352458, 1.724265854533623, 2.4280112276548733, 1.808240905510221, 1.066611747318385, 0.3751562286737483, 0.0), # 169
(5.657325647224384, 3.978005057412684, 5.154195281828863, 5.23994581269609, 4.692569423622822, 2.334286864383604, 1.7747074283677764, 1.7937538583450197, 2.5861465706904125, 0.9237514790829147, 0.7288103272000027, 0.4377100180914133, 0.0, 6.259210710787055, 4.814810199005545, 3.6440516360000137, 2.7712544372487433, 5.172293141380825, 2.5112554016830275, 1.7747074283677764, 1.6673477602740028, 2.346284711811411, 1.7466486042320304, 1.0308390563657726, 0.36163682340115316, 0.0), # 170
(5.4465037666285, 3.82537554023735, 4.968664216977482, 5.048728894489152, 4.523116197620137, 2.2514296625515327, 1.7077085176369027, 1.7303046032244096, 2.495155915133985, 0.8895609975957474, 0.7019566515147247, 0.4216896904760322, 0.0, 6.032780357831365, 4.638586595236354, 3.509783257573624, 2.6686829927872413, 4.99031183026797, 2.4224264445141737, 1.7077085176369027, 1.6081640446796661, 2.2615580988100685, 1.6829096314963843, 0.9937328433954964, 0.3477614127488501, 0.0), # 171
(5.230082886221365, 3.6693556909960217, 4.777220208162156, 4.851823362343048, 4.348346479778769, 2.1657493198442115, 1.6390626986850327, 1.664658848939696, 2.4009379678936282, 0.8544278218597702, 0.6743429069927823, 0.4052003641026643, 0.0, 5.799312773147303, 4.457204005129307, 3.3717145349639117, 2.56328346557931, 4.8018759357872565, 2.3305223885155746, 1.6390626986850327, 1.5469637998887225, 2.1741732398893845, 1.6172744541143496, 0.9554440416324312, 0.3335777900905475, 0.0), # 172
(5.00884813040598, 3.510471236799489, 4.58061792150726, 4.649980616690982, 4.168943972575801, 2.077594565254994, 1.5690108565545748, 1.5970860165206766, 2.303883988096141, 0.8184815277718206, 0.6460721241490297, 0.3883045080628938, 0.0, 5.5597172562184625, 4.271349588691831, 3.2303606207451483, 2.4554445833154612, 4.607767976192282, 2.235920423128947, 1.5690108565545748, 1.483996118039281, 2.0844719862879004, 1.5499935388969943, 0.916123584301452, 0.31913374879995354, 0.0), # 173
(4.783584623585344, 3.349247904758541, 4.3796120231371685, 4.443952057966156, 3.9855923784883105, 1.987314127777233, 1.4977938762879377, 1.5278555269971503, 2.204385234868321, 0.7818516912287369, 0.6172473334983214, 0.37106459144830567, 0.0, 5.314903106528433, 4.081710505931362, 3.0862366674916064, 2.34555507368621, 4.408770469736642, 2.1389977377960103, 1.4977938762879377, 1.4195100912694523, 1.9927961892441552, 1.4813173526553853, 0.8759224046274336, 0.3044770822507765, 0.0), # 174
(4.555077490162455, 3.18621142198397, 4.174957179176257, 4.2344890866017755, 3.7989753999933793, 1.8952567364042834, 1.425652642927529, 1.457236801398915, 2.102832967336968, 0.7446678881273562, 0.5879715655555117, 0.35354308335048457, 0.0, 5.0657796235608075, 3.8889739168553294, 2.939857827777558, 2.234003664382068, 4.205665934673936, 2.040131521958481, 1.425652642927529, 1.3537548117173452, 1.8994876999966896, 1.411496362200592, 0.8349914358352515, 0.28965558381672457, 0.0), # 175
(4.324111854540319, 3.0218875155865668, 3.9674080557488987, 4.0223431030310435, 3.609776739568087, 1.8017711201294973, 1.3528280415157574, 1.3854992607557703, 1.9996184446288805, 0.7070596943645169, 0.558347850835455, 0.33580245286101496, 0.0, 4.813256106799174, 3.693826981471164, 2.791739254177275, 2.1211790830935504, 3.999236889257761, 1.9396989650580787, 1.3528280415157574, 1.2869793715210696, 1.8048883697840434, 1.3407810343436815, 0.7934816111497798, 0.2747170468715061, 0.0), # 176
(4.0914728411219325, 2.856801912677122, 3.7577193189794698, 3.808265507687162, 3.4186800996895155, 1.7072060079462288, 1.2795609570950313, 1.3129123260975137, 1.8951329258708567, 0.6691566858370562, 0.528479219853006, 0.3179051690714816, 0.0, 4.5582418557271245, 3.496956859786297, 2.6423960992650297, 2.0074700575111684, 3.7902658517417134, 1.838077256536519, 1.2795609570950313, 1.2194328628187348, 1.7093400498447577, 1.269421835895721, 0.751543863795894, 0.25970926478882933, 0.0), # 177
(3.8579455743102966, 2.6914803403664256, 3.5466456349923448, 3.593007701003337, 3.226369182834742, 1.6119101288478317, 1.2060922747077587, 1.239745418453944, 1.7897676701896952, 0.6310884384418126, 0.49846870312301883, 0.299913701073469, 0.0, 4.301646169828252, 3.299050711808158, 2.4923435156150937, 1.8932653153254375, 3.5795353403793904, 1.7356435858355217, 1.2060922747077587, 1.1513643777484512, 1.613184591417371, 1.1976692336677792, 0.7093291269984691, 0.24468003094240237, 0.0), # 178
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 179
)
passenger_allighting_rate = (
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 0
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 1
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 2
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 3
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 4
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 5
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 6
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 7
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 8
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 9
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 10
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 11
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 12
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 13
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 14
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 15
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 16
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 17
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 18
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 19
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 20
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 21
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 22
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 23
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 24
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 25
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 26
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 27
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 28
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 29
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 30
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 31
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 32
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 33
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 34
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 35
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 36
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 37
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 38
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 39
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 40
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 41
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 42
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 43
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 44
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 45
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 46
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 47
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 48
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 49
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 50
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 51
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 52
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 53
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 54
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 55
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 56
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 57
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 58
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 59
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 60
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 61
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 62
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 63
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 64
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 65
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 66
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 67
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 68
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 69
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 70
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 71
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 72
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 73
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 74
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 75
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 76
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 77
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 78
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 79
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 80
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 81
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 82
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 83
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 84
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 85
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 86
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 87
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 88
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 89
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 90
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 91
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 92
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 93
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 94
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 95
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 96
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 97
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 98
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 99
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 100
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 101
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 102
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 103
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 104
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 105
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 106
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 107
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 108
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 109
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 110
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 111
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 112
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 113
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 114
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 115
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 116
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 117
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 118
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 119
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 120
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 121
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 122
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 123
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 124
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 125
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 126
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 127
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 128
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 129
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 130
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 131
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 132
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 133
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 134
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 135
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 136
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 137
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 138
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 139
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 140
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 141
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 142
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 143
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 144
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 145
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 146
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 147
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 148
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 149
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 150
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 151
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 152
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 153
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 154
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 155
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 156
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 157
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 158
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 159
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 160
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 161
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 162
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 163
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 164
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 165
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 166
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 167
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 168
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 169
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 170
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 171
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 172
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 173
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 174
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 175
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 176
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 177
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 178
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 179
)
"""
parameters for reproducibiliy. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
#initial entropy
entropy = 8991598675325360468762009371570610170
#index for seed sequence child
child_seed_index = (
1, # 0
33, # 1
)
| 276.317647 | 494 | 0.769606 | 32,987 | 258,357 | 6.027283 | 0.217752 | 0.358512 | 0.344026 | 0.651839 | 0.37609 | 0.367409 | 0.364758 | 0.364054 | 0.363974 | 0.363974 | 0 | 0.849844 | 0.095736 | 258,357 | 934 | 495 | 276.61349 | 0.001194 | 0.015525 | 0 | 0.200873 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.005459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ace2b0abc2b087521f2b588d090d4f1834d44773 | 164 | py | Python | Chapter 05/apyori_ex.py | bpbpublications/Essentials-of-Deep-Learning-and-AI | 6ef6a6958afe88c11b1bbb18932cc43df2d43b29 | [
"MIT"
] | null | null | null | Chapter 05/apyori_ex.py | bpbpublications/Essentials-of-Deep-Learning-and-AI | 6ef6a6958afe88c11b1bbb18932cc43df2d43b29 | [
"MIT"
] | null | null | null | Chapter 05/apyori_ex.py | bpbpublications/Essentials-of-Deep-Learning-and-AI | 6ef6a6958afe88c11b1bbb18932cc43df2d43b29 | [
"MIT"
] | 1 | 2021-11-29T10:18:57.000Z | 2021-11-29T10:18:57.000Z | import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from apyori import apriori
store_data = pd.read_csv('./store_data.csv')
store_data.head(0)
| 18.222222 | 44 | 0.786585 | 29 | 164 | 4.310345 | 0.62069 | 0.216 | 0.192 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007042 | 0.134146 | 164 | 8 | 45 | 20.5 | 0.873239 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
acf5742968d40a128c38a5a76745551e10716cc0 | 42 | py | Python | arghandle/__init__.py | svaisakh/magnet | bff6748803ac8efd081f0ddbdca8b1743c674a14 | [
"MIT"
] | 343 | 2018-09-03T09:59:36.000Z | 2022-02-08T11:32:34.000Z | arghandle/__init__.py | svaisakh/magnet | bff6748803ac8efd081f0ddbdca8b1743c674a14 | [
"MIT"
] | 7 | 2018-09-04T07:03:11.000Z | 2019-03-21T07:17:14.000Z | arghandle/__init__.py | MagNet-DL/magnet | bff6748803ac8efd081f0ddbdca8b1743c674a14 | [
"MIT"
] | 23 | 2018-09-03T19:12:04.000Z | 2021-02-20T09:23:30.000Z | from arghandle.core import arghandle, args | 42 | 42 | 0.857143 | 6 | 42 | 6 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4a0093a0665c09400b4a820ff3008f199972611d | 51 | py | Python | flagging_site/blueprints/__init__.py | cameronreaves/flagging | 412fae782ac38f971a1715aeb257a8ab10a9ad3a | [
"MIT"
] | null | null | null | flagging_site/blueprints/__init__.py | cameronreaves/flagging | 412fae782ac38f971a1715aeb257a8ab10a9ad3a | [
"MIT"
] | null | null | null | flagging_site/blueprints/__init__.py | cameronreaves/flagging | 412fae782ac38f971a1715aeb257a8ab10a9ad3a | [
"MIT"
] | null | null | null | from . import cyanobacteria
from . import flagging
| 17 | 27 | 0.803922 | 6 | 51 | 6.833333 | 0.666667 | 0.487805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156863 | 51 | 2 | 28 | 25.5 | 0.953488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4a1bdfc3237555b3a30b59c9e7f6a22566e7553d | 18,769 | py | Python | aispace/layers/adapters/model_adapters.py | SmileGoat/AiSpace | 35fc120667e4263c99b300815e0bf018f5064a40 | [
"Apache-2.0"
] | 32 | 2020-01-16T07:59:03.000Z | 2022-03-31T09:24:00.000Z | aispace/layers/adapters/model_adapters.py | SmileGoat/AiSpace | 35fc120667e4263c99b300815e0bf018f5064a40 | [
"Apache-2.0"
] | 9 | 2020-06-05T03:27:06.000Z | 2022-03-12T01:00:17.000Z | aispace/layers/adapters/model_adapters.py | SmileGoat/AiSpace | 35fc120667e4263c99b300815e0bf018f5064a40 | [
"Apache-2.0"
] | 3 | 2020-06-09T02:22:50.000Z | 2021-07-19T06:07:07.000Z | # -*- coding: utf-8 -*-
# @Time : 2019-11-28 13:57
# @Author : yingyuankai
# @Email : yingyuankai@aliyun.com
# @File : tf_model_adapters.py
import re
import numpy as np
from collections import OrderedDict
import tensorflow as tf
__all__ = [
"tf_huggingface_bert_adapter",
"tf_huggingface_ernie_adapter",
"tf_huggingface_xlnet_adapter",
"tf_huggingface_albert_chinese_adapter",
"tf_huggingface_albert_chinese_google_adapter",
"tf_huggingface_electra_adapter",
"tf_huggingface_gpt2_adapter"
]
def tf_huggingface_bert_adapter(hf_model_variables: list, init_checkpoint: str):
"""Build name to variable map from huggingface bert names to bert_wwm variables,
and then set values for current model.
:param hf_model_variables:
:return:
"""
name_to_values = list()
for item in hf_model_variables:
var_name = item.name
matched_name = re.match("^.*/(bert/.*):\\d+$", var_name)
if matched_name is None:
continue
matched_name = matched_name.group(1)
# for bert/encoder
encoder_matched = re.match("^bert/encoder/layer_._\\d+.*$", matched_name)
if encoder_matched is not None:
matched_name = matched_name.replace("_._", "_")
# for bert/embeddings
if matched_name == "bert/embeddings/weight":
matched_name = "bert/embeddings/word_embeddings"
elif matched_name == "bert/embeddings/position_embeddings/embeddings":
matched_name = "bert/embeddings/position_embeddings"
elif matched_name == "bert/embeddings/token_type_embeddings/embeddings":
matched_name = "bert/embeddings/token_type_embeddings"
elif matched_name == "bert/embeddings/task_type_embeddings/embeddings":
matched_name = "bert/embeddings/task_type_embeddings"
value = tf.train.load_variable(init_checkpoint, matched_name)
name_to_values.append((item, value))
tf.keras.backend.batch_set_value(name_to_values)
def tf_huggingface_ernie_adapter(hf_model_variables: list, init_checkpoint: str):
"""Build name to variable map from huggingface bert names to bert_wwm variables,
and then set values for current model.
:param hf_model_variables:
:return:
"""
name_to_values = list()
for item in hf_model_variables:
var_name = item.name
matched_name = re.match("^.*/(ernie/.*):\\d+$", var_name)
if matched_name is None:
continue
matched_name = matched_name.group(1)
# for bert/encoder
encoder_matched = re.match("^ernie/encoder/layer_._\\d+.*$", matched_name)
if encoder_matched is not None:
matched_name = matched_name.replace("_._", "_").replace("ernie", "bert")
# for bert/embeddings
if matched_name == "ernie/embeddings/weight":
matched_name = "bert/embeddings/word_embeddings"
elif matched_name == "ernie/embeddings/position_embeddings/embeddings":
matched_name = "bert/embeddings/position_embeddings"
elif matched_name == "ernie/embeddings/token_type_embeddings/embeddings":
matched_name = "bert/embeddings/token_type_embeddings"
elif matched_name == "ernie/embeddings/task_type_embeddings/embeddings":
matched_name = "bert/embeddings/task_type_embeddings"
matched_name = matched_name.replace("ernie", "bert")
value = tf.train.load_variable(init_checkpoint, matched_name)
name_to_values.append((item, value))
tf.keras.backend.batch_set_value(name_to_values)
def tf_huggingface_xlnet_adapter(hf_model_variables: list, init_checkpoint: str):
"""Build name to variable map from huggingface xlnet names to xlnet_chinese variables,
and then set values for current model.
:param hf_model_variables:
:return:
"""
name_to_values = list()
r_r_bias_values = tf.train.load_variable(init_checkpoint, "model/transformer/r_r_bias")
r_s_bias_values = tf.train.load_variable(init_checkpoint, "model/transformer/r_s_bias")
r_w_bias_values = tf.train.load_variable(init_checkpoint, "model/transformer/r_w_bias")
seg_embed_values = tf.train.load_variable(init_checkpoint, "model/transformer/seg_embed")
for item in hf_model_variables:
var_name = item.name
matched_name = re.match("^.*/(xl_net/.*):\\d+$", var_name)
if matched_name is None:
continue
matched_name = matched_name.group(1)
# for bert/encoder
encoder_matched = re.match("^xl_net/layer_._\\d+.*$", matched_name)
if encoder_matched is not None:
matched_name = matched_name.replace("_._", "_").\
replace("xl_net", "model/transformer").\
replace("layer_norm", "LayerNorm")
i = int(re.match("^.*/layer_(\\d+).*$", matched_name).group(1))
# for r_r_bias
r_r_bias_matched = re.match("^.*/r_r_bias$", matched_name)
if r_r_bias_matched is not None:
value = np.squeeze(r_r_bias_values[i])
name_to_values.append((item, value))
continue
# for r_s_bias
r_s_bias_matched = re.match("^.*/r_s_bias$", matched_name)
if r_s_bias_matched is not None:
value = np.squeeze(r_s_bias_values[i])
name_to_values.append((item, value))
continue
# for r_w_bias
r_w_bias_matched = re.match("^.*/r_w_bias$", matched_name)
if r_w_bias_matched is not None:
value = np.squeeze(r_w_bias_values[i])
name_to_values.append((item, value))
continue
# for seq_embed
seg_embed_matched = re.match("^.*/seg_embed$", matched_name)
if seg_embed_matched is not None:
value = np.squeeze(seg_embed_values[i])
name_to_values.append((item, value))
continue
# for ending with kqvor
kqvor_matched = re.match("^.*/[kqvor]$", matched_name)
if kqvor_matched is not None:
matched_name += "/kernel"
# for bert/embeddings
if matched_name == 'xl_net/word_embedding/weight':
matched_name = "model/transformer/word_embedding/lookup_table"
if matched_name.endswith("mask_emb"):
matched_name = "model/transformer/mask_emb/mask_emb"
value = tf.train.load_variable(init_checkpoint, matched_name)
name_to_values.append((item, value))
tf.keras.backend.batch_set_value(name_to_values)
def tf_huggingface_albert_chinese_adapter(hf_model_variables: list, init_checkpoint: str):
"""Build name to variable map from huggingface albert names to albert_chinese variables,
and then set values for current model.
brightmart version
ref: https://github.com/brightmart/albert_zh
:param hf_model_variables:
:return:
"""
name_to_values = list()
default_prefix = "bert/encoder/layer_shared/"
default_var_name = "albert_brightmart"
for item in hf_model_variables:
var_name = item.name
matched_name = re.match(f"^.*/({default_var_name}/.*):\\d+$", var_name)
if matched_name is None:
continue
matched_name = matched_name.group(1)
# for pooler
if matched_name == f"{default_var_name}/pooler/bias":
matched_name = "bert/pooler/dense/bias"
elif matched_name == f"{default_var_name}/pooler/kernel":
matched_name = "bert/pooler/dense/kernel"
# for embeddings
elif matched_name == f"{default_var_name}/embeddings/word_embeddings/weight":
matched_name = "bert/embeddings/word_embeddings"
elif matched_name == f"{default_var_name}/embeddings/position_embeddings/embeddings":
matched_name = "bert/embeddings/position_embeddings"
elif matched_name == f"{default_var_name}/embeddings/token_type_embeddings/embeddings":
matched_name = "bert/embeddings/token_type_embeddings"
elif matched_name == f"{default_var_name}/embeddings/LayerNorm/gamma":
matched_name = "bert/embeddings/LayerNorm/gamma"
elif matched_name == f"{default_var_name}/embeddings/LayerNorm/beta":
matched_name = "bert/embeddings/LayerNorm/beta"
# for encoder
elif matched_name == f"{default_var_name}/embeddings/embedding_hidden_mapping_in":
matched_name = "bert/embeddings/word_embeddings_2"
# for transformer layers
elif matched_name.endswith("ffn/kernel"):
matched_name = f"{default_prefix}intermediate/dense/kernel"
elif matched_name.endswith("ffn/bias"):
matched_name = f"{default_prefix}intermediate/dense/bias"
elif matched_name.endswith("ffn_output/kernel"):
matched_name = f"{default_prefix}output/dense/kernel"
elif matched_name.endswith("ffn_output/bias"):
matched_name = f"{default_prefix}output/dense/bias"
elif matched_name.endswith("full_layer_layer_norm/gamma"):
matched_name = f"{default_prefix}output/LayerNorm/gamma"
elif matched_name.endswith("full_layer_layer_norm/beta"):
matched_name = f"{default_prefix}output/LayerNorm/beta"
elif matched_name.endswith("attention/LayerNorm/gamma"):
matched_name = f"{default_prefix}attention/output/LayerNorm/gamma"
elif matched_name.endswith("attention/LayerNorm/beta"):
matched_name = f"{default_prefix}attention/output/LayerNorm/beta"
elif matched_name.find("attention/dense") != -1:
matched_name = re.match("^.*attention/(.*)$", matched_name).group(1)
matched_name = f"{default_prefix}attention/output/{matched_name}"
elif matched_name.find("attention") != -1:
matched_name = re.match("^.*attention/(.*)$", matched_name).group(1)
matched_name = f"{default_prefix}attention/self/{matched_name}"
# else:
# continue
value = tf.train.load_variable(init_checkpoint, matched_name)
name_to_values.append((item, value))
tf.keras.backend.batch_set_value(name_to_values)
def tf_huggingface_albert_chinese_google_adapter(hf_model_variables: list, init_checkpoint: str):
"""Build name to variable map from huggingface albert names to albert_chinese_google variables,
and then set values for current model.
:param hf_model_variables:
:return:
"""
name_to_values = list()
default_prefix = "bert/encoder/transformer/group_0/inner_group_0/"
for item in hf_model_variables:
var_name = item.name
matched_name = re.match("^.*/(albert/.*):\\d+$", var_name)
if matched_name is None:
continue
matched_name = matched_name.group(1)
# for pooler
if matched_name == "albert/pooler/bias":
matched_name = "bert/pooler/dense/bias"
elif matched_name == "albert/pooler/kernel":
matched_name = "bert/pooler/dense/kernel"
# for embeddings
elif matched_name == "albert/embeddings/word_embeddings/weight":
matched_name = "bert/embeddings/word_embeddings"
elif matched_name == "albert/embeddings/position_embeddings/embeddings":
matched_name = "bert/embeddings/position_embeddings"
elif matched_name == "albert/embeddings/token_type_embeddings/embeddings":
matched_name = "bert/embeddings/token_type_embeddings"
elif matched_name == "albert/embeddings/LayerNorm/gamma":
matched_name = "bert/embeddings/LayerNorm/gamma"
elif matched_name == "albert/embeddings/LayerNorm/beta":
matched_name = "bert/embeddings/LayerNorm/beta"
# for encoder
elif matched_name == "albert/encoder/embedding_hidden_mapping_in/kernel":
matched_name = "bert/encoder/embedding_hidden_mapping_in/kernel"
elif matched_name == "albert/encoder/embedding_hidden_mapping_in/bias":
matched_name = "bert/encoder/embedding_hidden_mapping_in/bias"
# for transformer layers
elif matched_name.endswith("ffn/kernel"):
matched_name = f"{default_prefix}ffn_1/intermediate/dense/kernel"
elif matched_name.endswith("ffn/bias"):
matched_name = f"{default_prefix}ffn_1/intermediate/dense/bias"
elif matched_name.endswith("ffn_output/kernel"):
matched_name = f"{default_prefix}ffn_1/intermediate/output/dense/kernel"
elif matched_name.endswith("ffn_output/bias"):
matched_name = f"{default_prefix}ffn_1/intermediate/output/dense/bias"
elif matched_name.endswith("full_layer_layer_norm/gamma"):
matched_name = f"{default_prefix}LayerNorm_1/gamma"
elif matched_name.endswith("full_layer_layer_norm/beta"):
matched_name = f"{default_prefix}LayerNorm_1/beta"
elif matched_name.endswith("attention/LayerNorm/gamma"):
matched_name = f"{default_prefix}LayerNorm/gamma"
elif matched_name.endswith("attention/LayerNorm/beta"):
matched_name = f"{default_prefix}LayerNorm/beta"
elif matched_name.find("attention/dense") != -1:
matched_name = re.match("^.*attention/(.*)$", matched_name).group(1)
matched_name = f"{default_prefix}attention_1/output/{matched_name}"
elif matched_name.find("attention") != -1:
matched_name = re.match("^.*attention/(.*)$", matched_name).group(1)
matched_name = f"{default_prefix}attention_1/self/{matched_name}"
value = tf.train.load_variable(init_checkpoint, matched_name)
name_to_values.append((item, value))
tf.keras.backend.batch_set_value(name_to_values)
def tf_huggingface_electra_adapter(hf_model_variables: list, init_checkpoint: str):
"""Build name to variable map from huggingface electra names to electra variables,
and then set values for current model.
:param hf_model_variables:
:return:
"""
name_to_values = list()
for item in hf_model_variables:
var_name = item.name
matched_name = re.match("^.*/(electra/.*):\\d+$", var_name)
if matched_name is None:
continue
matched_name = matched_name.group(1)
# for bert/encoder
encoder_matched = re.match("^electra/encoder/layer_._\\d+.*$", matched_name)
if encoder_matched is not None:
matched_name = matched_name.replace("_._", "_")
# for bert/embeddings
if matched_name == "electra/embeddings/weight":
matched_name = "electra/embeddings/word_embeddings"
elif matched_name == "electra/embeddings/position_embeddings/embeddings":
matched_name = "electra/embeddings/position_embeddings"
elif matched_name == "electra/embeddings/token_type_embeddings/embeddings":
matched_name = "electra/embeddings/token_type_embeddings"
elif matched_name == "electra/embeddings/task_type_embeddings/embeddings":
matched_name = "electra/embeddings/task_type_embeddings"
value = tf.train.load_variable(init_checkpoint, matched_name)
name_to_values.append((item, value))
tf.keras.backend.batch_set_value(name_to_values)
def tf_huggingface_gpt2_adapter(hf_model_variables: list, init_checkpoint: str):
"""Build name to variable map from huggingface gpt2 names to gpt2 variables,
and then set values for current model.
:param hf_model_variables:
:return:
"""
model_gold = tf.keras.models.load_model(init_checkpoint)
vars_gold = model_gold.trainable_variables
vars_gold_refinded = {}
name_to_values = list()
for var in vars_gold:
name, value = var.name, var.numpy()
name = name.replace("kernel", "weight")
name_pieces = name.split("/")
prefix = "/".join(name_pieces[:3] + [name_pieces[-1]])
if name.endswith("bias:0"):
value = np.reshape(value, [1, value.shape[0]])
# need merge
if name.find("query_layer") != -1 or name.find("key_layer") != -1 or name.find("value_layer") != -1:
if prefix not in vars_gold_refinded:
vars_gold_refinded[prefix] = value
else:
vars_gold_refinded[prefix] = np.concatenate((vars_gold_refinded[prefix], value), axis=1)
else:
vars_gold_refinded[name] = value
for item in hf_model_variables:
var_name = item.name
matched_name = re.match("^.*/(gpt2/.*)$", var_name)
if matched_name is None:
continue
matched_name = matched_name.group(1)
matched_name = matched_name.replace("gpt2", "gpt")
name_pieces = matched_name.split("/")
if name_pieces[1] == "wte":
matched_name = "gpt/embedding/embeddings:0"
elif name_pieces[1] == "wpe":
matched_name = "position_embeddings:0"
elif name_pieces[1] == "ln_f":
matched_name = matched_name.replace(name_pieces[1], "LayerNorm_final_norm")
elif name_pieces[1].startswith("h_._"):
layer_name = name_pieces[1]
layer_idx = int(layer_name.split("_._")[-1])
new_layer_name = f"layer{layer_idx:02}"
matched_name = matched_name.replace(layer_name, new_layer_name)
if len(name_pieces) >= 4:
if name_pieces[2] == "attn":
if name_pieces[3] == "c_attn":
matched_name = matched_name.replace("/".join(name_pieces[2: 4]), "attention")
elif name_pieces[3] == "c_proj":
matched_name = matched_name.replace("/".join(name_pieces[2: 4]), "attention/context_projection_layer")
elif name_pieces[2] == "ln_1":
matched_name = matched_name.replace(name_pieces[2], "LayerNorm_mlp_ln0")
elif name_pieces[2] == "ln_2":
matched_name = matched_name.replace(name_pieces[2], "LayerNorm_mlp_ln1")
elif name_pieces[2] == "mlp":
if name_pieces[3] == "c_fc":
matched_name = matched_name.replace("/".join(name_pieces[2: 4]), "intermediate")
elif name_pieces[3] == "c_proj":
matched_name = matched_name.replace("/".join(name_pieces[2: 4]), "output")
else:
continue
value = vars_gold_refinded.get(matched_name)
if value is None:
continue
assert value.shape == item.shape
tf.keras.backend.set_value(item, value)
# name_to_values.append((item, value))
# tf.keras.backend.batch_set_value(name_to_values) | 45.445521 | 126 | 0.656561 | 2,306 | 18,769 | 5.030789 | 0.080659 | 0.181105 | 0.056892 | 0.045858 | 0.828377 | 0.794242 | 0.754935 | 0.725713 | 0.713042 | 0.680372 | 0 | 0.006184 | 0.23315 | 18,769 | 413 | 127 | 45.445521 | 0.799833 | 0.094251 | 0 | 0.436426 | 0 | 0 | 0.280402 | 0.237565 | 0 | 0 | 0 | 0 | 0.003436 | 1 | 0.024055 | false | 0 | 0.013746 | 0 | 0.037801 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c5c43d2c2f9cb9a065e663c9fee1b3f43c3eba93 | 178 | py | Python | office365/sharepoint/storagemetrics/storage_metrics.py | theodoriss/Office365-REST-Python-Client | 3bd7a62dadcd3f0a0aceeaff7584fff3fd44886e | [
"MIT"
] | 544 | 2016-08-04T17:10:16.000Z | 2022-03-31T07:17:20.000Z | office365/sharepoint/storagemetrics/storage_metrics.py | theodoriss/Office365-REST-Python-Client | 3bd7a62dadcd3f0a0aceeaff7584fff3fd44886e | [
"MIT"
] | 438 | 2016-10-11T12:24:22.000Z | 2022-03-31T19:30:35.000Z | office365/sharepoint/storagemetrics/storage_metrics.py | theodoriss/Office365-REST-Python-Client | 3bd7a62dadcd3f0a0aceeaff7584fff3fd44886e | [
"MIT"
] | 202 | 2016-08-22T19:29:40.000Z | 2022-03-30T20:26:15.000Z | from office365.sharepoint.base_entity import BaseEntity
class StorageMetrics(BaseEntity):
"""Specifies the storage-related metrics for list folders in the site"""
pass
| 25.428571 | 76 | 0.780899 | 22 | 178 | 6.272727 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019868 | 0.151685 | 178 | 6 | 77 | 29.666667 | 0.89404 | 0.370787 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
680e48dad0f4a0f6c05470e3aac6b74569bf6783 | 32 | py | Python | code/model/layer/__init__.py | muzishen/Huawei_Digix_Retrieval_Top4 | 39151e2f8493221138404e2942afbf03e3afbf08 | [
"Apache-2.0"
] | 4 | 2021-02-21T14:56:01.000Z | 2021-08-17T16:22:44.000Z | code/model/layer/__init__.py | muzishen/Huawei_Digix_Retrieval_Top4 | 39151e2f8493221138404e2942afbf03e3afbf08 | [
"Apache-2.0"
] | null | null | null | code/model/layer/__init__.py | muzishen/Huawei_Digix_Retrieval_Top4 | 39151e2f8493221138404e2942afbf03e3afbf08 | [
"Apache-2.0"
] | null | null | null | from .non_local import Non_local | 32 | 32 | 0.875 | 6 | 32 | 4.333333 | 0.666667 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
680f86011b823829b50b7b70feff5c2e38613c24 | 244 | py | Python | azure-iot-device/azure/iot/device/provisioning/pipeline/__init__.py | danewalton/azure-iot-sdk-python | addc82a8c28478738602bd698acdaf1a16dc39b4 | [
"MIT"
] | 366 | 2016-12-02T20:38:05.000Z | 2022-03-29T10:08:14.000Z | azure-iot-device/azure/iot/device/provisioning/pipeline/__init__.py | danewalton/azure-iot-sdk-python | addc82a8c28478738602bd698acdaf1a16dc39b4 | [
"MIT"
] | 640 | 2016-12-16T21:59:48.000Z | 2022-03-30T20:17:52.000Z | azure-iot-device/azure/iot/device/provisioning/pipeline/__init__.py | danewalton/azure-iot-sdk-python | addc82a8c28478738602bd698acdaf1a16dc39b4 | [
"MIT"
] | 371 | 2016-11-16T16:06:04.000Z | 2022-03-31T10:10:57.000Z | """Azure Provisioning Device Communication Pipeline
This package provides pipeline for use with the Azure Provisioning Device SDK.
INTERNAL USAGE ONLY
"""
from .mqtt_pipeline import MQTTPipeline
from .config import ProvisioningPipelineConfig
| 27.111111 | 78 | 0.836066 | 29 | 244 | 7 | 0.758621 | 0.167488 | 0.226601 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127049 | 244 | 8 | 79 | 30.5 | 0.953052 | 0.610656 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a86455e051ba216406ae50f5dc50b49b0199fefe | 2,486 | py | Python | Server/app/docs/friend.py | Nerd-Bear/But | 5213288568dd4442f6c7a4251131fd66889c727a | [
"MIT"
] | 1 | 2019-07-15T07:36:54.000Z | 2019-07-15T07:36:54.000Z | Server/app/docs/friend.py | Nerd-Bear/But | 5213288568dd4442f6c7a4251131fd66889c727a | [
"MIT"
] | null | null | null | Server/app/docs/friend.py | Nerd-Bear/But | 5213288568dd4442f6c7a4251131fd66889c727a | [
"MIT"
] | null | null | null | FIND_FRIEND_POST = {
'tags': ['friend'],
'description': '벗 찾기',
'parameters': [
{
'name': 'Authorization',
'description': 'api를 호출한 사람의 uuid',
'in': 'header',
'type': 'string',
'required': True
},
{
'name': 'region',
'description': '본인 지역과 같은 벗을 검색할 것인가',
'in': 'json',
'type': 'bool',
'required': True
},
{
'name': 'count',
'description': '가져올 친구 수',
'in': 'json',
'type': 'int',
'required': True
}
],
'responses': {
'201': {
'description': '성공',
'example': [{
'name': '이름',
'profile_image': '프로필 사진',
'id': 'uuid',
'region': '사는 곳',
'age': '나이'
},
{
'name': '이름 ㅁㅁㅁ',
'profile_image': '프로필 사진 ㅁㅁㅁ',
'id': 'uuid ㅁㅁㅁ',
'region': '사는 곳 ㅁㅁㅁ',
'age': '나이 ㅁㅁㅁ'
}
]
},
'204': {
'description': '검색 결과 없음'
},
'401': {
'description': 'api를 호출한 사람의 uuid 오류'
}
}
}
FRIEND_LIST_GET = {
'tags': ['friend'],
'description': '벗 리스트 조회',
'parameters': [
{
'name': 'Authorization',
'description': 'api를 호출한 사람의 uuid',
'in': 'header',
'type': 'string',
'required': True
}
],
'responses': {
'201': {
'description': '성공',
'example': [{
'name': '이름',
'profile_image': '프로필 사진',
'id': 'uuid',
'region': '사는 곳',
'age': '나이'
},
{
'name': '이름 ㅁㅁㅁ',
'profile_image': '프로필 사진 ㅁㅁㅁ',
'id': 'uuid ㅁㅁㅁ',
'region': '사는 곳 ㅁㅁㅁ',
'age': '나이 ㅁㅁㅁ'
}
]
},
'204': {
'description': '검색 결과 없음'
},
'401': {
'description': 'api를 호출한 사람의 uuid 오류'
}
}
} | 26.446809 | 54 | 0.293242 | 168 | 2,486 | 4.291667 | 0.35119 | 0.083218 | 0.099861 | 0.116505 | 0.751734 | 0.751734 | 0.751734 | 0.751734 | 0.751734 | 0.751734 | 0 | 0.01629 | 0.555511 | 2,486 | 94 | 55 | 26.446809 | 0.636199 | 0 | 0 | 0.608696 | 0 | 0 | 0.292722 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a89e9f379bb1f4cf3407cfb435ce1e5d2a96f665 | 68 | py | Python | addons14/base_technical_features/models/__init__.py | odoochain/addons_oca | 55d456d798aebe16e49b4a6070765f206a8885ca | [
"MIT"
] | 1 | 2021-06-10T14:59:13.000Z | 2021-06-10T14:59:13.000Z | addons14/base_technical_features/models/__init__.py | odoochain/addons_oca | 55d456d798aebe16e49b4a6070765f206a8885ca | [
"MIT"
] | null | null | null | addons14/base_technical_features/models/__init__.py | odoochain/addons_oca | 55d456d798aebe16e49b4a6070765f206a8885ca | [
"MIT"
] | 1 | 2021-04-09T09:44:44.000Z | 2021-04-09T09:44:44.000Z | from . import base
from . import ir_ui_menu
from . import res_users
| 17 | 24 | 0.779412 | 12 | 68 | 4.166667 | 0.666667 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 68 | 3 | 25 | 22.666667 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a8a8d176e1649ddbb795ee0bac0588b9789fbb69 | 43 | py | Python | tervis/db/meta.py | robopsi/sentry-health | 276fdd1fd33ae9602c8ab650954ea46dc5ec5e88 | [
"BSD-3-Clause"
] | 3 | 2016-11-06T19:51:29.000Z | 2017-10-31T11:31:46.000Z | tervis/db/meta.py | robopsi/sentry-health | 276fdd1fd33ae9602c8ab650954ea46dc5ec5e88 | [
"BSD-3-Clause"
] | 1 | 2016-12-19T16:42:25.000Z | 2016-12-19T17:11:34.000Z | tervis/db/meta.py | getsentry/sentry-health | 276fdd1fd33ae9602c8ab650954ea46dc5ec5e88 | [
"BSD-3-Clause"
] | 4 | 2018-06-05T02:49:06.000Z | 2021-03-04T10:18:35.000Z | from sqlalchemy import * # noqa: F403,401
| 21.5 | 42 | 0.72093 | 6 | 43 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171429 | 0.186047 | 43 | 1 | 43 | 43 | 0.714286 | 0.325581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a8b87ef9e609d73610b0b7fc5c812a87c7a399ad | 96 | py | Python | nmt/my_module/__init__.py | awesome-archive/RAdam | 56d2847bce23f8ec551ca3b2ff4a3aaeb96b0ebf | [
"Apache-2.0"
] | 1 | 2019-08-16T07:36:33.000Z | 2019-08-16T07:36:33.000Z | nmt/my_module/__init__.py | awesome-archive/RAdam | 56d2847bce23f8ec551ca3b2ff4a3aaeb96b0ebf | [
"Apache-2.0"
] | null | null | null | nmt/my_module/__init__.py | awesome-archive/RAdam | 56d2847bce23f8ec551ca3b2ff4a3aaeb96b0ebf | [
"Apache-2.0"
] | 1 | 2019-08-29T14:36:18.000Z | 2019-08-29T14:36:18.000Z | from .ada2 import *
from .adam2 import *
from .adadelta import *
from .linear_schedule import *
| 19.2 | 30 | 0.75 | 13 | 96 | 5.461538 | 0.538462 | 0.422535 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0.166667 | 96 | 4 | 31 | 24 | 0.8625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a8bb45de7062b70640fb9dfef6110871089ffed6 | 21 | py | Python | cride/rides/serializers/__init__.py | stalinchiguano98/Advanced_Django | 642576deaf569663d5dbc0d5820cfbc49c17fd2e | [
"MIT"
] | 9 | 2020-05-10T05:56:40.000Z | 2022-01-24T08:49:27.000Z | cride/rides/serializers/__init__.py | stalinchiguano98/Advanced_Django | 642576deaf569663d5dbc0d5820cfbc49c17fd2e | [
"MIT"
] | 7 | 2020-06-05T19:54:39.000Z | 2022-03-11T23:41:06.000Z | cride/rides/serializers/__init__.py | stalinchiguano98/Advanced_Django | 642576deaf569663d5dbc0d5820cfbc49c17fd2e | [
"MIT"
] | 5 | 2020-04-24T11:38:25.000Z | 2021-01-02T09:41:04.000Z | from .rides import *
| 10.5 | 20 | 0.714286 | 3 | 21 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4f23cbb23a5abe71f808e1867ace33d4935ac8d1 | 220 | py | Python | atlas/foundations_contrib/src/test/job_bundling/__init__.py | DeepLearnI/atlas | 8aca652d7e647b4e88530b93e265b536de7055ed | [
"Apache-2.0"
] | 296 | 2020-03-16T19:55:00.000Z | 2022-01-10T19:46:05.000Z | atlas/foundations_contrib/src/test/job_bundling/__init__.py | DeepLearnI/atlas | 8aca652d7e647b4e88530b93e265b536de7055ed | [
"Apache-2.0"
] | 57 | 2020-03-17T11:15:57.000Z | 2021-07-10T14:42:27.000Z | atlas/foundations_contrib/src/test/job_bundling/__init__.py | DeepLearnI/atlas | 8aca652d7e647b4e88530b93e265b536de7055ed | [
"Apache-2.0"
] | 38 | 2020-03-17T21:06:05.000Z | 2022-02-08T03:19:34.000Z |
from test.job_bundling.test_script_environment import TestScriptEnvironment
from test.job_bundling.test_folder_job_source_bundle import TestFolderJobSourceBundle
from test.job_bundling.test_empty_job import TestEmptyJob | 55 | 85 | 0.918182 | 29 | 220 | 6.586207 | 0.482759 | 0.125654 | 0.172775 | 0.298429 | 0.361257 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054545 | 220 | 4 | 86 | 55 | 0.918269 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4f5aa990deda947fd44a1d87c7208fa5b8eb9286 | 124 | py | Python | auto_struct/data_types/base/__init__.py | Valmarelox/auto_struct | ec06fc426d468d4d01f300add3081df9eda87f41 | [
"MIT"
] | 7 | 2020-09-03T20:54:13.000Z | 2022-03-09T01:21:07.000Z | auto_struct/data_types/base/__init__.py | Valmarelox/auto_struct | ec06fc426d468d4d01f300add3081df9eda87f41 | [
"MIT"
] | null | null | null | auto_struct/data_types/base/__init__.py | Valmarelox/auto_struct | ec06fc426d468d4d01f300add3081df9eda87f41 | [
"MIT"
] | null | null | null | from .base_type import BaseType
from .base_single_value_type import BaseSingleValueType
from .base_struct import BaseStruct
| 31 | 55 | 0.879032 | 17 | 124 | 6.117647 | 0.588235 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 124 | 3 | 56 | 41.333333 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4f860590578a7cdca92c24ec847999f941adee5c | 131 | py | Python | lib_rovpp/attacks/__init__.py | iReynaldo/lib_rovpp | eb201adc948e9375123c2e2301ee524392dd7b0d | [
"BSD-3-Clause"
] | 1 | 2021-12-05T07:42:35.000Z | 2021-12-05T07:42:35.000Z | lib_rovpp/attacks/__init__.py | iReynaldo/lib_rovpp | eb201adc948e9375123c2e2301ee524392dd7b0d | [
"BSD-3-Clause"
] | null | null | null | lib_rovpp/attacks/__init__.py | iReynaldo/lib_rovpp | eb201adc948e9375123c2e2301ee524392dd7b0d | [
"BSD-3-Clause"
] | null | null | null | from .attacks import ROVPPPrefixHijack
from .attacks import ROVPPSubprefixHijack
from .attacks import ROVPPUnannouncedPrefixHijack
| 32.75 | 49 | 0.885496 | 12 | 131 | 9.666667 | 0.5 | 0.284483 | 0.439655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091603 | 131 | 3 | 50 | 43.666667 | 0.97479 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
4fa6798144ee6753a08a6c6d0591b57336ccce3a | 35,319 | py | Python | test/unit_testing/grid/element_linear_dx_data/test_element_linearB/element/geom_element_SYM.py | nwukie/ChiDG | d096548ba3bd0a338a29f522fb00a669f0e33e9b | [
"BSD-3-Clause"
] | 36 | 2016-10-05T15:12:22.000Z | 2022-03-17T02:08:23.000Z | test/unit_testing/grid/element_linear_dx_data/test_element_linearC/element/geom_element_SYM.py | nwukie/ChiDG | d096548ba3bd0a338a29f522fb00a669f0e33e9b | [
"BSD-3-Clause"
] | 17 | 2016-05-17T02:21:05.000Z | 2017-08-10T16:33:07.000Z | test/unit_testing/grid/element_linear_dx_data/test_element_linearB/element/geom_element_SYM.py | nwukie/ChiDG | d096548ba3bd0a338a29f522fb00a669f0e33e9b | [
"BSD-3-Clause"
] | 20 | 2016-07-18T16:20:47.000Z | 2020-11-27T19:26:12.000Z | from __future__ import division
import sys
import os
import time
import numpy
import pickle
from sympy import *
from sympy.tensor.array import MutableSparseNDimArray
def update_progress(job_title, progress):
length = 20 # modify this to change the length
block = int(round(length*progress))
msg = "\r{0}: [{1}] {2}%".format(job_title, "#"*block + "-"*(length-block), round(progress*100, 2))
if progress >= 1: msg += " DONE\r\n"
sys.stdout.write(msg)
sys.stdout.flush()
def cls():
os.system('cls' if os.name=='nt' else 'clear')
cls()
print "WARNING: This script is very slow, it might run for hours. It is strongly recommended to watch Netflix in the meanwhile."
################################################################################################################
# Define symbols for each coordinate for support node
x1,y1,z1 = symbols('x1 y1 z1')
x2,y2,z2 = symbols('x2 y2 z2')
x3,y3,z3 = symbols('x3 y3 z3')
x4,y4,z4 = symbols('x4 y4 z4')
x5,y5,z5 = symbols('x5 y5 z5')
x6,y6,z6 = symbols('x6 y6 z6')
x7,y7,z7 = symbols('x7 y7 z7')
x8,y8,z8 = symbols('x8 y8 z8')
coords_ = Matrix( [[x1,y1,z1],
[x2,y2,z2],
[x3,y3,z3],
[x4,y4,z4],
[x5,y5,z5],
[x6,y6,z6],
[x7,y7,z7],
[x8,y8,z8],
] )
nnodes_r = coords_.shape[0]
nnodes_ie = 8
nnodes_if = 4
nterms_s = 8
ndirs = 3
# Define coordinate values at support nodes
coords = Matrix( [[0.0,0.0,0.0],
[5.0,0.0,0.0],
[0.0,1.0,0.0],
[5.0,1.0,0.0],
[0.0,0.0,1.0],
[5.0,0.0,1.0],
[0.0,1.0,1.0],
[5.0,1.0,1.0],
] )
# Define matrix of polynomial basis terms at support nodes
val_r = Matrix( [[ 1.0,-1.0,-1.0,-1.0, 1.0, 1.0, 1.0,-1.0],
[ 1.0,-1.0,-1.0, 1.0,-1.0,-1.0, 1.0, 1.0],
[ 1.0, 1.0,-1.0,-1.0,-1.0, 1.0,-1.0, 1.0],
[ 1.0, 1.0,-1.0, 1.0, 1.0,-1.0,-1.0,-1.0],
[ 1.0,-1.0, 1.0,-1.0, 1.0,-1.0,-1.0, 1.0],
[ 1.0,-1.0, 1.0, 1.0,-1.0, 1.0,-1.0,-1.0],
[ 1.0, 1.0, 1.0,-1.0,-1.0,-1.0, 1.0,-1.0],
[ 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
] )
# Define matrices at interpolation nodes (quadrature, level = 1)
val_i = Matrix( [[ 1.0,-sqrt(1.0/3.0),-sqrt(1.0/3.0),-sqrt(1.0/3.0), 1.0/3.0, 1.0/3.0, 1.0/3.0,-1.0/3.0*sqrt(1.0/3.0)],
[ 1.0,-sqrt(1.0/3.0),-sqrt(1.0/3.0), sqrt(1.0/3.0),-1.0/3.0,-1.0/3.0, 1.0/3.0, 1.0/3.0*sqrt(1.0/3.0)],
[ 1.0, sqrt(1.0/3.0),-sqrt(1.0/3.0),-sqrt(1.0/3.0),-1.0/3.0, 1.0/3.0,-1.0/3.0, 1.0/3.0*sqrt(1.0/3.0)],
[ 1.0, sqrt(1.0/3.0),-sqrt(1.0/3.0), sqrt(1.0/3.0), 1.0/3.0,-1.0/3.0,-1.0/3.0,-1.0/3.0*sqrt(1.0/3.0)],
[ 1.0,-sqrt(1.0/3.0), sqrt(1.0/3.0),-sqrt(1.0/3.0), 1.0/3.0,-1.0/3.0,-1.0/3.0, 1.0/3.0*sqrt(1.0/3.0)],
[ 1.0,-sqrt(1.0/3.0), sqrt(1.0/3.0), sqrt(1.0/3.0),-1.0/3.0, 1.0/3.0,-1.0/3.0,-1.0/3.0*sqrt(1.0/3.0)],
[ 1.0, sqrt(1.0/3.0), sqrt(1.0/3.0),-sqrt(1.0/3.0),-1.0/3.0,-1.0/3.0, 1.0/3.0,-1.0/3.0*sqrt(1.0/3.0)],
[ 1.0, sqrt(1.0/3.0), sqrt(1.0/3.0), sqrt(1.0/3.0), 1.0/3.0, 1.0/3.0, 1.0/3.0, 1.0/3.0*sqrt(1.0/3.0)],
] )
ddxi_i = Matrix( [[ 0.0,0.0,0.0,1.0,-sqrt(1.0/3.0),-sqrt(1.0/3.0),0.0, 1.0/3.0],
[ 0.0,0.0,0.0,1.0,-sqrt(1.0/3.0),-sqrt(1.0/3.0),0.0, 1.0/3.0],
[ 0.0,0.0,0.0,1.0, sqrt(1.0/3.0),-sqrt(1.0/3.0),0.0,-1.0/3.0],
[ 0.0,0.0,0.0,1.0, sqrt(1.0/3.0),-sqrt(1.0/3.0),0.0,-1.0/3.0],
[ 0.0,0.0,0.0,1.0,-sqrt(1.0/3.0), sqrt(1.0/3.0),0.0,-1.0/3.0],
[ 0.0,0.0,0.0,1.0,-sqrt(1.0/3.0), sqrt(1.0/3.0),0.0,-1.0/3.0],
[ 0.0,0.0,0.0,1.0, sqrt(1.0/3.0), sqrt(1.0/3.0),0.0, 1.0/3.0],
[ 0.0,0.0,0.0,1.0, sqrt(1.0/3.0), sqrt(1.0/3.0),0.0, 1.0/3.0],
] )
ddeta_i = Matrix( [[ 0.0,1.0,0.0,0.0,-sqrt(1.0/3.0),0.0,-sqrt(1.0/3.0), 1.0/3.0],
[ 0.0,1.0,0.0,0.0, sqrt(1.0/3.0),0.0,-sqrt(1.0/3.0),-1.0/3.0],
[ 0.0,1.0,0.0,0.0,-sqrt(1.0/3.0),0.0,-sqrt(1.0/3.0), 1.0/3.0],
[ 0.0,1.0,0.0,0.0, sqrt(1.0/3.0),0.0,-sqrt(1.0/3.0),-1.0/3.0],
[ 0.0,1.0,0.0,0.0,-sqrt(1.0/3.0),0.0, sqrt(1.0/3.0),-1.0/3.0],
[ 0.0,1.0,0.0,0.0, sqrt(1.0/3.0),0.0, sqrt(1.0/3.0), 1.0/3.0],
[ 0.0,1.0,0.0,0.0,-sqrt(1.0/3.0),0.0, sqrt(1.0/3.0),-1.0/3.0],
[ 0.0,1.0,0.0,0.0, sqrt(1.0/3.0),0.0, sqrt(1.0/3.0), 1.0/3.0],
] )
ddzeta_i= Matrix( [[ 0.0,0.0,1.0,0.0,0.0,-sqrt(1.0/3.0),-sqrt(1.0/3.0), 1.0/3.0],
[ 0.0,0.0,1.0,0.0,0.0, sqrt(1.0/3.0),-sqrt(1.0/3.0),-1.0/3.0],
[ 0.0,0.0,1.0,0.0,0.0,-sqrt(1.0/3.0), sqrt(1.0/3.0),-1.0/3.0],
[ 0.0,0.0,1.0,0.0,0.0, sqrt(1.0/3.0), sqrt(1.0/3.0), 1.0/3.0],
[ 0.0,0.0,1.0,0.0,0.0,-sqrt(1.0/3.0),-sqrt(1.0/3.0), 1.0/3.0],
[ 0.0,0.0,1.0,0.0,0.0, sqrt(1.0/3.0),-sqrt(1.0/3.0),-1.0/3.0],
[ 0.0,0.0,1.0,0.0,0.0,-sqrt(1.0/3.0), sqrt(1.0/3.0),-1.0/3.0],
[ 0.0,0.0,1.0,0.0,0.0, sqrt(1.0/3.0), sqrt(1.0/3.0), 1.0/3.0],
] )
# Define element interpolation nodes weights for linear element
weights_e = Matrix( [1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0] )
# Define val_f for each face
# Face 1, XI_MIN
val_1 = Matrix( [[ 1.0,-sqrt(1.0/3.0),-sqrt(1.0/3.0),-1.0, sqrt(1.0/3.0), sqrt(1.0/3.0), 1.0/3.0,-1.0/3.0],
[ 1.0, sqrt(1.0/3.0),-sqrt(1.0/3.0),-1.0,-sqrt(1.0/3.0), sqrt(1.0/3.0),-1.0/3.0, 1.0/3.0],
[ 1.0,-sqrt(1.0/3.0), sqrt(1.0/3.0),-1.0, sqrt(1.0/3.0),-sqrt(1.0/3.0),-1.0/3.0, 1.0/3.0],
[ 1.0, sqrt(1.0/3.0), sqrt(1.0/3.0),-1.0,-sqrt(1.0/3.0),-sqrt(1.0/3.0), 1.0/3.0,-1.0/3.0],
] )
# Face 2, XI_MAX
val_2 = Matrix( [[ 1.0,-sqrt(1.0/3.0),-sqrt(1.0/3.0),1.0,-sqrt(1.0/3.0),-sqrt(1.0/3.0), 1.0/3.0, 1.0/3.0],
[ 1.0, sqrt(1.0/3.0),-sqrt(1.0/3.0),1.0, sqrt(1.0/3.0),-sqrt(1.0/3.0),-1.0/3.0,-1.0/3.0],
[ 1.0,-sqrt(1.0/3.0), sqrt(1.0/3.0),1.0,-sqrt(1.0/3.0), sqrt(1.0/3.0),-1.0/3.0,-1.0/3.0],
[ 1.0, sqrt(1.0/3.0), sqrt(1.0/3.0),1.0, sqrt(1.0/3.0), sqrt(1.0/3.0), 1.0/3.0, 1.0/3.0],
] )
# Face 3, ETA_MIN
val_3 = Matrix( [[ 1.0,-1.0,-sqrt(1.0/3.0),-sqrt(1.0/3.0), sqrt(1.0/3.0), 1.0/3.0, sqrt(1.0/3.0),-1.0/3.0],
[ 1.0,-1.0,-sqrt(1.0/3.0), sqrt(1.0/3.0),-sqrt(1.0/3.0),-1.0/3.0, sqrt(1.0/3.0), 1.0/3.0],
[ 1.0,-1.0, sqrt(1.0/3.0),-sqrt(1.0/3.0), sqrt(1.0/3.0),-1.0/3.0,-sqrt(1.0/3.0), 1.0/3.0],
[ 1.0,-1.0, sqrt(1.0/3.0), sqrt(1.0/3.0),-sqrt(1.0/3.0), 1.0/3.0,-sqrt(1.0/3.0),-1.0/3.0],
] )
# Face 4, ETA_MAX
val_4 = Matrix( [[ 1.0,1.0,-sqrt(1.0/3.0),-sqrt(1.0/3.0),-sqrt(1.0/3.0), 1.0/3.0,-sqrt(1.0/3.0), 1.0/3.0],
[ 1.0,1.0,-sqrt(1.0/3.0), sqrt(1.0/3.0), sqrt(1.0/3.0),-1.0/3.0,-sqrt(1.0/3.0),-1.0/3.0],
[ 1.0,1.0, sqrt(1.0/3.0),-sqrt(1.0/3.0),-sqrt(1.0/3.0),-1.0/3.0, sqrt(1.0/3.0),-1.0/3.0],
[ 1.0,1.0, sqrt(1.0/3.0), sqrt(1.0/3.0), sqrt(1.0/3.0), 1.0/3.0, sqrt(1.0/3.0), 1.0/3.0],
] )
# Face 5, ZETA_MIN
val_5 = Matrix( [[ 1.0,-sqrt(1.0/3.0),-1.0,-sqrt(1.0/3.0), sqrt(1.0/3.0), 1.0/3.0, sqrt(1.0/3.0),-1.0/3.0],
[ 1.0,-sqrt(1.0/3.0),-1.0, sqrt(1.0/3.0),-sqrt(1.0/3.0),-1.0/3.0, sqrt(1.0/3.0), 1.0/3.0],
[ 1.0, sqrt(1.0/3.0),-1.0,-sqrt(1.0/3.0),-sqrt(1.0/3.0), 1.0/3.0,-sqrt(1.0/3.0), 1.0/3.0],
[ 1.0, sqrt(1.0/3.0),-1.0, sqrt(1.0/3.0), sqrt(1.0/3.0),-1.0/3.0,-sqrt(1.0/3.0),-1.0/3.0],
] )
# Face 6, ZETA_MAX
val_6 = Matrix( [[ 1.0,-sqrt(1.0/3.0),1.0,-sqrt(1.0/3.0), sqrt(1.0/3.0),-1.0/3.0,-sqrt(1.0/3.0), 1.0/3.0],
[ 1.0,-sqrt(1.0/3.0),1.0, sqrt(1.0/3.0),-sqrt(1.0/3.0), 1.0/3.0,-sqrt(1.0/3.0),-1.0/3.0],
[ 1.0, sqrt(1.0/3.0),1.0,-sqrt(1.0/3.0),-sqrt(1.0/3.0),-1.0/3.0, sqrt(1.0/3.0),-1.0/3.0],
[ 1.0, sqrt(1.0/3.0),1.0, sqrt(1.0/3.0), sqrt(1.0/3.0), 1.0/3.0, sqrt(1.0/3.0), 1.0/3.0],
] )
#--------------------------------------------------------------------
# Matrix modes_to_nodes
val_r_inv = val_r**(-1)
# Computes coordiantes modes
coords_modes_ = val_r_inv * coords_
coords_modes = lambdify(coords_,coords_modes_,"numpy")
# Initialized coordiantes
interp_coords_ = MutableSparseNDimArray.zeros(nnodes_ie,3)
for inode in range(0,nnodes_ie):
for idir in range(0,3):
interp_coords_[inode,idir] = val_i[inode,:] * coords_modes_[:,idir]
# Initialized jacobian
jacobian_ = MutableSparseNDimArray.zeros(3, 3, nnodes_ie)
for inode in range(0,nnodes_ie):
jacobian_[0,0,inode] = ddxi_i[inode,:] * coords_modes_[:,0]
jacobian_[0,1,inode] = ddeta_i[inode,:] * coords_modes_[:,0]
jacobian_[0,2,inode] = ddzeta_i[inode,:] * coords_modes_[:,0]
jacobian_[1,0,inode] = ddxi_i[inode,:] * coords_modes_[:,1]
jacobian_[1,1,inode] = ddeta_i[inode,:] * coords_modes_[:,1]
jacobian_[1,2,inode] = ddzeta_i[inode,:] * coords_modes_[:,1]
jacobian_[2,0,inode] = ddxi_i[inode,:] * coords_modes_[:,2]
jacobian_[2,1,inode] = ddeta_i[inode,:] * coords_modes_[:,2]
jacobian_[2,2,inode] = ddzeta_i[inode,:] * coords_modes_[:,2]
update_progress("Computing Jacobian ", inode/(nnodes_ie-1))
# Matrics and Determinant
metrics_ = MutableSparseNDimArray.zeros(3, 3, nnodes_ie)
jinv_ = zeros(nnodes_ie)
for inode in range(0,nnodes_ie):
ijacobian = zeros(3,3)
for irow in range(0,3):
for icol in range(0,3):
ijacobian[irow,icol] = jacobian_[irow,icol,inode]
# Compute jacobian for the ith node
update_progress("Computing Jinv and Metric ", inode/(nnodes_ie-1))
jinv_[inode] = ijacobian.det()
imetric = ijacobian**(-1)
for irow in range(0,3):
for icol in range(0,3):
metrics_[irow,icol,inode] = imetric[irow,icol]
# Compute inverse Mass matrix
invmass_ = zeros(nterms_s,nterms_s)
mass_ = zeros(nterms_s,nterms_s)
i = 1
val_tmp = val_i
for iterm in range(0,nterms_s):
for inode in range(0,nnodes_ie):
val_tmp[inode,iterm] = val_tmp[inode,iterm] * weights_e[inode] * jinv_[inode]
update_progress("Computing invmass ", i/(nterms_s*nnodes_ie))
i += 1
mass_ = transpose(val_tmp)*val_i
invmass_ = (mass_)**(-1)
# Compute BR2_VOL for each face
br2_vol_face1_ = zeros(nnodes_ie,nnodes_if)
br2_vol_face2_ = zeros(nnodes_ie,nnodes_if)
br2_vol_face3_ = zeros(nnodes_ie,nnodes_if)
br2_vol_face4_ = zeros(nnodes_ie,nnodes_if)
br2_vol_face5_ = zeros(nnodes_ie,nnodes_if)
br2_vol_face6_ = zeros(nnodes_ie,nnodes_if)
br2_vol_face1_ = val_i*(invmass_*transpose(val_1))
br2_vol_face2_ = val_i*(invmass_*transpose(val_2))
br2_vol_face3_ = val_i*(invmass_*transpose(val_3))
br2_vol_face4_ = val_i*(invmass_*transpose(val_4))
br2_vol_face5_ = val_i*(invmass_*transpose(val_5))
br2_vol_face6_ = val_i*(invmass_*transpose(val_6))
update_progress("Computing br2_vol ", 1)
# Compute BR2_FACE for each face
br2_face_face1_ = zeros(nnodes_if,nnodes_if)
br2_face_face2_ = zeros(nnodes_if,nnodes_if)
br2_face_face3_ = zeros(nnodes_if,nnodes_if)
br2_face_face4_ = zeros(nnodes_if,nnodes_if)
br2_face_face5_ = zeros(nnodes_if,nnodes_if)
br2_face_face6_ = zeros(nnodes_if,nnodes_if)
br2_face_face1_ = val_1*(invmass_*transpose(val_1))
br2_face_face2_ = val_2*(invmass_*transpose(val_2))
br2_face_face3_ = val_3*(invmass_*transpose(val_3))
br2_face_face4_ = val_4*(invmass_*transpose(val_4))
br2_face_face5_ = val_5*(invmass_*transpose(val_5))
br2_face_face6_ = val_6*(invmass_*transpose(val_6))
update_progress("Computing br2_face ", 1)
## Grad1, Grad2, and Grad3
#grad1_ = zeros(nnodes_ie,nterms_s)
#grad2_ = zeros(nnodes_ie,nterms_s)
#grad3_ = zeros(nnodes_ie,nterms_s)
#i = 1
#for iterm in range(0,nterms_s):
# for inode in range(0,nnodes_ie):
# grad1_[inode,iterm] = metrics_[0,0,inode] * ddxi_i[inode,iterm] + metrics_[1,0,inode] * ddeta_i[inode,iterm] + metrics_[2,0,inode] * ddzeta_i[inode,iterm]
# grad2_[inode,iterm] = metrics_[0,1,inode] * ddxi_i[inode,iterm] + metrics_[1,1,inode] * ddeta_i[inode,iterm] + metrics_[2,1,inode] * ddzeta_i[inode,iterm]
# grad3_[inode,iterm] = metrics_[0,2,inode] * ddxi_i[inode,iterm] + metrics_[1,2,inode] * ddeta_i[inode,iterm] + metrics_[2,2,inode] * ddzeta_i[inode,iterm]
# update_progress("Computing grad1, grad2, grad3 ", i/(nnodes_ie*nterms_s))
# i += 1
# Differentiate coordinates at interpolation points
interp_coords_dx_ = MutableSparseNDimArray.zeros(nnodes_ie, 3, nnodes_r, ndirs)
i = 1
for inode in range(0,nnodes_ie):
for direct in range(0,3):
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
interp_coords_dx_[inode,direct,inode_diff,idir] = interp_coords_[inode,direct].diff(coords_[inode_diff,idir])
update_progress("Computing interp_coords_dx ", i/(nnodes_ie*nnodes_r*ndirs*3))
i += 1
# Differentiate determinant
djinv_dx_ = MutableSparseNDimArray.zeros(nnodes_ie, nnodes_r, ndirs)
i = 1
for inode in range(0,nnodes_ie):
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
djinv_dx_[inode,inode_diff,idir] = jinv_[inode].diff(coords_[inode_diff,idir])
update_progress("Computing djinv_dx ", i/(nnodes_ie*nnodes_r*ndirs))
i += 1
# Differentiate metrics
dmetric_dx_ = MutableSparseNDimArray.zeros(3,3,nnodes_ie,nnodes_r,ndirs)
i = 1
for inode in range(0,nnodes_ie):
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
for irow in range(0,3):
for icol in range(0,3):
dmetric_dx_[irow,icol,inode,inode_diff,idir] = metrics_[irow,icol,inode].diff(coords_[inode_diff,idir])
update_progress("Computing dmetric_dx ", i/(nnodes_ie*nnodes_r*ndirs*9))
i += 1
# Differentiate invmass
dinvmass_dx_ = MutableSparseNDimArray.zeros(nterms_s,nterms_s,nnodes_r,ndirs)
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
for irow in range(0,nterms_s):
for icol in range(0,nterms_s):
dinvmass_dx_[irow,icol,inode_diff,idir] = invmass_[irow,icol].diff(coords_[inode_diff,idir])
update_progress("Computing dinvmass_dx ", i/(nnodes_r*ndirs*nterms_s*nterms_s))
i += 1
# Differentiate BR2_vol
dbr2_vol_face1_dx_ = MutableSparseNDimArray.zeros(nnodes_ie,nnodes_if,nnodes_r,ndirs)
dbr2_vol_face2_dx_ = MutableSparseNDimArray.zeros(nnodes_ie,nnodes_if,nnodes_r,ndirs)
dbr2_vol_face3_dx_ = MutableSparseNDimArray.zeros(nnodes_ie,nnodes_if,nnodes_r,ndirs)
dbr2_vol_face4_dx_ = MutableSparseNDimArray.zeros(nnodes_ie,nnodes_if,nnodes_r,ndirs)
dbr2_vol_face5_dx_ = MutableSparseNDimArray.zeros(nnodes_ie,nnodes_if,nnodes_r,ndirs)
dbr2_vol_face6_dx_ = MutableSparseNDimArray.zeros(nnodes_ie,nnodes_if,nnodes_r,ndirs)
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
for irow in range(0,nnodes_ie):
for icol in range(0,nnodes_if):
dbr2_vol_face1_dx_[irow,icol,inode_diff,idir] = br2_vol_face1_[irow,icol].diff(coords_[inode_diff,idir])
dbr2_vol_face2_dx_[irow,icol,inode_diff,idir] = br2_vol_face2_[irow,icol].diff(coords_[inode_diff,idir])
dbr2_vol_face3_dx_[irow,icol,inode_diff,idir] = br2_vol_face3_[irow,icol].diff(coords_[inode_diff,idir])
dbr2_vol_face4_dx_[irow,icol,inode_diff,idir] = br2_vol_face4_[irow,icol].diff(coords_[inode_diff,idir])
dbr2_vol_face5_dx_[irow,icol,inode_diff,idir] = br2_vol_face5_[irow,icol].diff(coords_[inode_diff,idir])
dbr2_vol_face6_dx_[irow,icol,inode_diff,idir] = br2_vol_face6_[irow,icol].diff(coords_[inode_diff,idir])
update_progress("Computing dbr2_vol_faces_dx ", i/(nnodes_r*ndirs*nnodes_ie*nnodes_if))
i += 1
# Differentiate BR2_face
dbr2_face_face1_dx_ = MutableSparseNDimArray.zeros(nnodes_if,nnodes_if,nnodes_r,ndirs)
dbr2_face_face2_dx_ = MutableSparseNDimArray.zeros(nnodes_if,nnodes_if,nnodes_r,ndirs)
dbr2_face_face3_dx_ = MutableSparseNDimArray.zeros(nnodes_if,nnodes_if,nnodes_r,ndirs)
dbr2_face_face4_dx_ = MutableSparseNDimArray.zeros(nnodes_if,nnodes_if,nnodes_r,ndirs)
dbr2_face_face5_dx_ = MutableSparseNDimArray.zeros(nnodes_if,nnodes_if,nnodes_r,ndirs)
dbr2_face_face6_dx_ = MutableSparseNDimArray.zeros(nnodes_if,nnodes_if,nnodes_r,ndirs)
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
for irow in range(0,nnodes_if):
for icol in range(0,nnodes_if):
dbr2_face_face1_dx_[irow,icol,inode_diff,idir] = br2_face_face1_[irow,icol].diff(coords_[inode_diff,idir])
dbr2_face_face2_dx_[irow,icol,inode_diff,idir] = br2_face_face2_[irow,icol].diff(coords_[inode_diff,idir])
dbr2_face_face3_dx_[irow,icol,inode_diff,idir] = br2_face_face3_[irow,icol].diff(coords_[inode_diff,idir])
dbr2_face_face4_dx_[irow,icol,inode_diff,idir] = br2_face_face4_[irow,icol].diff(coords_[inode_diff,idir])
dbr2_face_face5_dx_[irow,icol,inode_diff,idir] = br2_face_face5_[irow,icol].diff(coords_[inode_diff,idir])
dbr2_face_face6_dx_[irow,icol,inode_diff,idir] = br2_face_face6_[irow,icol].diff(coords_[inode_diff,idir])
update_progress("Computing dbr2_face_faces_dx ", i/(nnodes_r*ndirs*nnodes_if*nnodes_if))
i += 1
## Differentaite Gradients
#dgrad1_dx_ = MutableSparseNDimArray.zeros(nnodes_ie,nterms_s,nnodes_r,ndirs)
#dgrad2_dx_ = MutableSparseNDimArray.zeros(nnodes_ie,nterms_s,nnodes_r,ndirs)
#dgrad3_dx_ = MutableSparseNDimArray.zeros(nnodes_ie,nterms_s,nnodes_r,ndirs)
#i = 1
#for inode in range(0,nnodes_ie):
# for inode_diff in range(0,nnodes_r):
# for idir in range(0,ndirs):
# for inode in range(0,nnodes_ie):
# for iterm in range(0,nterms_s):
# dgrad1_dx_[inode,iterm,inode_diff,idir] = grad1_[inode,iterm].diff(coords_[inode_diff,idir])
# dgrad2_dx_[inode,iterm,inode_diff,idir] = grad2_[inode,iterm].diff(coords_[inode_diff,idir])
# dgrad3_dx_[inode,iterm,inode_diff,idir] = grad3_[inode,iterm].diff(coords_[inode_diff,idir])
# update_progress("Computing dgrad1_dx, dgrad2_dx, .. ", i/(nnodes_ie*nnodes_r*ndirs*nnodes_ie*nterms_s))
# i += 1
#WRITE_____________________
##
## Metrics
##
#f = open("metrics.txt","w")
#i = 1
#for inode in range (0,nnodes_ie):
# f.write("Metric interpolation node %d \n" % (inode+1))
# array = numpy.zeros([3, 3])
# for irow in range(0,3):
# for icol in range(0,3):
# data_sym = lambdify(coords_,metrics_[irow,icol,inode],"numpy")
# data_val = data_sym(*flatten(coords))
# array[irow,icol] = data_val
# update_progress("Writing metrics to file ", i/(nnodes_ie*9))
# i += 1
# numpy.savetxt(f,array)
#f.close()
#
##
## jinv
##
#f = open("jinv.txt","w")
#array = numpy.zeros([1])
#i = 1
#for inode in range (0,nnodes_ie):
# f.write("Jinv interpolation node %d \n" % (inode+1))
# data_sym = lambdify(coords_,jinv_[inode],"numpy")
# data_val = data_sym(*flatten(coords))
# array[0] = data_val
# numpy.savetxt(f,array)
# update_progress("Writing jinv to file ", i/(nnodes_ie))
# i += 1
#f.close()
##
## Grad1
##
#f = open("grad1.txt","w")
#f.write("Grad1 \n")
#array = numpy.zeros([nnodes_ie,nterms_s])
#i = 1
#for inode in range (0,nnodes_ie):
# for iterm in range(0,nterms_s):
# data_sym = lambdify(coords_,grad1_[inode,iterm],"numpy")
# data_val = (data_sym(*flatten(coords)))
# array[inode,iterm] = data_val
# update_progress("Writing grad1 to file ", i/(nnodes_ie*nterms_s))
# i += 1
#numpy.savetxt(f,array)
#f.close()
#
##
## Grad2
##
#f = open("grad2.txt","w")
#f.write("Grad2\n")
#array = numpy.zeros([nnodes_ie,nterms_s])
#i = 1
#for inode in range (0,nnodes_ie):
# for iterm in range(0,nterms_s):
# data_sym = lambdify(coords_,grad2_[inode,iterm],"numpy")
# data_val = (data_sym(*flatten(coords)))
# array[inode,iterm] = data_val
# update_progress("Writing grad2 to file ", i/(nnodes_ie*nterms_s))
# i += 1
#numpy.savetxt(f,array)
#f.close()
#
##
## Grad3
##
#f = open("grad3.txt","w")
#f.write("Grad3\n")
#array = numpy.zeros([nnodes_ie,nterms_s])
#i = 1
#for inode in range (0,nnodes_ie):
# for iterm in range(0,nterms_s):
# data_sym = lambdify(coords_,grad3_[inode,iterm],"numpy")
# data_val = (data_sym(*flatten(coords)))
# array[inode,iterm] = data_val
# update_progress("Writing grad3 to file ", i/(nnodes_ie*nterms_s))
# i += 1
#numpy.savetxt(f,array)
#f.close()
##
## dmetric_dx
##
#f = open("dmetric_dx.txt","w")
#i = 1
#for inode in range (0,nnodes_ie):
# for inode_diff in range(0,nnodes_r):
# for idir in range(0,ndirs):
# array = numpy.zeros([3,3])
# f.write("dmetric_dx interpolation node %s, diff_node %s, diff_dir %s \n" % (inode+1,inode_diff+1,idir+1))
# for irow in range(0,3):
# for icol in range(0,3):
# data_sym = lambdify(coords_,dmetric_dx_[irow,icol,inode,inode_diff,idir],"numpy")
# data_val = data_sym(*flatten(coords))
# array[irow,icol] = data_val
# update_progress("Writing dmetric_dx to file ", i/(nnodes_ie*nnodes_r*ndirs*3*3))
# i += 1
# numpy.savetxt(f,array)
#f.close()
#
# interp_coords_dx
#
f = open("interp_coords_dx.txt","w")
i = 1
for inode in range (0,nnodes_ie):
for direct in range (0,3):
array = numpy.zeros([nnodes_r,ndirs])
f.write("coord interpolation node %s, coord %s, row=inode_diff, col=dir \n" % (inode+1,direct+1))
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
data_sym = lambdify(coords_,interp_coords_dx_[inode,direct,inode_diff,idir],"numpy")
data_val = data_sym(*flatten(coords))
array[inode_diff,idir] = data_val
update_progress("Writing interp_coords_dx to file ", i/(nnodes_ie*nnodes_r*ndirs*3))
i += 1
numpy.savetxt(f,array)
f.close()
##
## djinv_dx
##
#f = open("djinv_dx.txt","w")
#i = 1
#for inode in range (0,nnodes_ie):
# array = numpy.zeros([nnodes_r,ndirs])
# f.write("djinv_dx interpolation node %s, row=inode_diff, col=dir \n" % (inode+1))
# for inode_diff in range(0,nnodes_r):
# for idir in range(0,ndirs):
# data_sym = lambdify(coords_,djinv_dx_[inode,inode_diff,idir],"numpy")
# data_val = data_sym(*flatten(coords))
# array[inode_diff,idir] = data_val
# update_progress("Writing djinv_dx to file ", i/(nnodes_ie*nnodes_r*ndirs))
# i += 1
# numpy.savetxt(f,array)
#f.close()
#
# dinvmass_dx
#
f = open("dinvmass_dx.txt","w")
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
f.write("dinvmass_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
array = numpy.zeros([nterms_s,nterms_s])
for irow in range(0,nterms_s):
for icol in range(0,nterms_s):
data_sym = lambdify(coords_,dinvmass_dx_[irow,icol,inode_diff,idir],"numpy")
data_val = (data_sym(*flatten(coords)))
array[irow,icol] = data_val
update_progress("Writing dinvmass_dx to file ", i/(nterms_s*nnodes_r*ndirs*nterms_s))
i += 1
numpy.savetxt(f,array)
f.close()
#
# dbr2_vol_dx
#
f = open("dbr2_vol_face1_dx.txt","w")
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
f.write("dbr2_vol_face1_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
array = numpy.zeros([nnodes_ie,nnodes_if])
for irow in range(0,nnodes_ie):
for icol in range(0,nnodes_if):
data_sym = lambdify(coords_,dbr2_vol_face1_dx_[irow,icol,inode_diff,idir],"numpy")
data_val = (data_sym(*flatten(coords)))
array[irow,icol] = data_val
update_progress("Writing dbr2_vol_face1_dx to file ", i/(nnodes_ie*nnodes_r*ndirs*nnodes_if))
i += 1
numpy.savetxt(f,array)
f.close()
f = open("dbr2_vol_face2_dx.txt","w")
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
f.write("dbr2_vol_face2_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
array = numpy.zeros([nnodes_ie,nnodes_if])
for irow in range(0,nnodes_ie):
for icol in range(0,nnodes_if):
data_sym = lambdify(coords_,dbr2_vol_face2_dx_[irow,icol,inode_diff,idir],"numpy")
data_val = (data_sym(*flatten(coords)))
array[irow,icol] = data_val
update_progress("Writing dbr2_vol_face2_dx to file ", i/(nnodes_ie*nnodes_r*ndirs*nnodes_if))
i += 1
numpy.savetxt(f,array)
f.close()
f = open("dbr2_vol_face3_dx.txt","w")
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
f.write("dbr2_vol_face3_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
array = numpy.zeros([nnodes_ie,nnodes_if])
for irow in range(0,nnodes_ie):
for icol in range(0,nnodes_if):
data_sym = lambdify(coords_,dbr2_vol_face3_dx_[irow,icol,inode_diff,idir],"numpy")
data_val = (data_sym(*flatten(coords)))
array[irow,icol] = data_val
update_progress("Writing dbr2_vol_face3_dx to file ", i/(nnodes_ie*nnodes_r*ndirs*nnodes_if))
i += 1
numpy.savetxt(f,array)
f.close()
f = open("dbr2_vol_face4_dx.txt","w")
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
f.write("dbr2_vol_face4_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
array = numpy.zeros([nnodes_ie,nnodes_if])
for irow in range(0,nnodes_ie):
for icol in range(0,nnodes_if):
data_sym = lambdify(coords_,dbr2_vol_face4_dx_[irow,icol,inode_diff,idir],"numpy")
data_val = (data_sym(*flatten(coords)))
array[irow,icol] = data_val
update_progress("Writing dbr2_vol_face4_dx to file ", i/(nnodes_ie*nnodes_r*ndirs*nnodes_if))
i += 1
numpy.savetxt(f,array)
f.close()
f = open("dbr2_vol_face5_dx.txt","w")
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
f.write("dbr2_vol_face5_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
array = numpy.zeros([nnodes_ie,nnodes_if])
for irow in range(0,nnodes_ie):
for icol in range(0,nnodes_if):
data_sym = lambdify(coords_,dbr2_vol_face5_dx_[irow,icol,inode_diff,idir],"numpy")
data_val = (data_sym(*flatten(coords)))
array[irow,icol] = data_val
update_progress("Writing dbr2_vol_face5_dx to file ", i/(nnodes_ie*nnodes_r*ndirs*nnodes_if))
i += 1
numpy.savetxt(f,array)
f.close()
f = open("dbr2_vol_face6_dx.txt","w")
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
f.write("dbr2_vol_face6_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
array = numpy.zeros([nnodes_ie,nnodes_if])
for irow in range(0,nnodes_ie):
for icol in range(0,nnodes_if):
data_sym = lambdify(coords_,dbr2_vol_face6_dx_[irow,icol,inode_diff,idir],"numpy")
data_val = (data_sym(*flatten(coords)))
array[irow,icol] = data_val
update_progress("Writing dbr2_vol_face6_dx to file ", i/(nnodes_ie*nnodes_r*ndirs*nnodes_if))
i += 1
numpy.savetxt(f,array)
f.close()
#
# dbr2_face_dx
#
f = open("dbr2_face_face1_dx.txt","w")
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
f.write("dbr2_face_face1_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
array = numpy.zeros([nnodes_if,nnodes_if])
for irow in range(0,nnodes_if):
for icol in range(0,nnodes_if):
data_sym = lambdify(coords_,dbr2_face_face1_dx_[irow,icol,inode_diff,idir],"numpy")
data_val = (data_sym(*flatten(coords)))
array[irow,icol] = data_val
update_progress("Writing dbr2_face_face1_dx to file ", i/(nnodes_if*nnodes_r*ndirs*nnodes_if))
i += 1
numpy.savetxt(f,array)
f.close()
f = open("dbr2_face_face2_dx.txt","w")
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
f.write("dbr2_face_face2_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
array = numpy.zeros([nnodes_if,nnodes_if])
for irow in range(0,nnodes_if):
for icol in range(0,nnodes_if):
data_sym = lambdify(coords_,dbr2_face_face2_dx_[irow,icol,inode_diff,idir],"numpy")
data_val = (data_sym(*flatten(coords)))
array[irow,icol] = data_val
update_progress("Writing dbr2_face_face2_dx to file ", i/(nnodes_if*nnodes_r*ndirs*nnodes_if))
i += 1
numpy.savetxt(f,array)
f.close()
f = open("dbr2_face_face3_dx.txt","w")
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
f.write("dbr2_face_face3_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
array = numpy.zeros([nnodes_if,nnodes_if])
for irow in range(0,nnodes_if):
for icol in range(0,nnodes_if):
data_sym = lambdify(coords_,dbr2_face_face3_dx_[irow,icol,inode_diff,idir],"numpy")
data_val = (data_sym(*flatten(coords)))
array[irow,icol] = data_val
update_progress("Writing dbr2_face_face3_dx to file ", i/(nnodes_if*nnodes_r*ndirs*nnodes_if))
i += 1
numpy.savetxt(f,array)
f.close()
f = open("dbr2_face_face4_dx.txt","w")
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
f.write("dbr2_face_face4_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
array = numpy.zeros([nnodes_if,nnodes_if])
for irow in range(0,nnodes_if):
for icol in range(0,nnodes_if):
data_sym = lambdify(coords_,dbr2_face_face4_dx_[irow,icol,inode_diff,idir],"numpy")
data_val = (data_sym(*flatten(coords)))
array[irow,icol] = data_val
update_progress("Writing dbr2_face_face4_dx to file ", i/(nnodes_if*nnodes_r*ndirs*nnodes_if))
i += 1
numpy.savetxt(f,array)
f.close()
f = open("dbr2_face_face5_dx.txt","w")
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
f.write("dbr2_face_face5_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
array = numpy.zeros([nnodes_if,nnodes_if])
for irow in range(0,nnodes_if):
for icol in range(0,nnodes_if):
data_sym = lambdify(coords_,dbr2_face_face5_dx_[irow,icol,inode_diff,idir],"numpy")
data_val = (data_sym(*flatten(coords)))
array[irow,icol] = data_val
update_progress("Writing dbr2_face_face5_dx to file ", i/(nnodes_if*nnodes_r*ndirs*nnodes_if))
i += 1
numpy.savetxt(f,array)
f.close()
f = open("dbr2_face_face6_dx.txt","w")
i = 1
for inode_diff in range(0,nnodes_r):
for idir in range(0,ndirs):
f.write("dbr2_face_face6_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
array = numpy.zeros([nnodes_if,nnodes_if])
for irow in range(0,nnodes_if):
for icol in range(0,nnodes_if):
data_sym = lambdify(coords_,dbr2_face_face6_dx_[irow,icol,inode_diff,idir],"numpy")
data_val = (data_sym(*flatten(coords)))
array[irow,icol] = data_val
update_progress("Writing dbr2_face_face6_dx to file ", i/(nnodes_if*nnodes_r*ndirs*nnodes_if))
i += 1
numpy.savetxt(f,array)
f.close()
##
## dgrad1_dx
##
#f = open("dgrad1_dx.txt","w")
#i = 1
#for inode_diff in range(0,nnodes_r):
# for idir in range(0,ndirs):
# f.write("dgrad1_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
# array = numpy.zeros([nnodes_ie,nterms_s])
# for irow in range(0,nnodes_ie):
# for icol in range(0,nterms_s):
# data_sym = lambdify(coords_,dgrad1_dx_[irow,icol,inode_diff,idir],"numpy")
# data_val = (data_sym(*flatten(coords)))
# array[irow,icol] = data_val
# update_progress("Writing dgrad1_dx to file ", i/(nnodes_ie*nnodes_r*ndirs*nterms_s))
# i += 1
# numpy.savetxt(f,array)
#f.close()
#
##
## dgrad2_dx
##
#f = open("dgrad2_dx.txt","w")
#i = 1
#for inode_diff in range(0,nnodes_r):
# for idir in range(0,ndirs):
# f.write("dgrad2_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
# array = numpy.zeros([nnodes_ie,nterms_s])
# for irow in range(0,nnodes_ie):
# for icol in range(0,nterms_s):
# data_sym = lambdify(coords_,dgrad2_dx_[irow,icol,inode_diff,idir],"numpy")
# data_val = (data_sym(*flatten(coords)))
# array[irow,icol] = data_val
# update_progress("Writing dgrad2_dx to file ", i/(nnodes_ie*nnodes_r*ndirs*nterms_s))
# i += 1
# numpy.savetxt(f,array)
#f.close()
#
##
## dgrad3_dx
##
#f = open("dgrad3_dx.txt","w")
#i = 1
#for inode_diff in range(0,nnodes_r):
# for idir in range(0,ndirs):
# f.write("dgrad3_dx => diff_node %s, diff_dir %s \n" % (inode_diff+1,idir+1))
# array = numpy.zeros([nnodes_ie,nterms_s])
# for irow in range(0,nnodes_ie):
# for icol in range(0,nterms_s):
# data_sym = lambdify(coords_,dgrad3_dx_[irow,icol,inode_diff,idir],"numpy")
# data_val = (data_sym(*flatten(coords)))
# array[irow,icol] = data_val
# update_progress("Writing dgrad3_dx to file ", i/(nnodes_ie*nnodes_r*ndirs*nterms_s))
# i += 1
# numpy.savetxt(f,array)
#f.close()
| 43.496305 | 164 | 0.58586 | 6,214 | 35,319 | 3.123753 | 0.033795 | 0.045541 | 0.043274 | 0.057699 | 0.839472 | 0.805265 | 0.775643 | 0.719592 | 0.689197 | 0.62784 | 0 | 0.087827 | 0.232424 | 35,319 | 811 | 165 | 43.549938 | 0.628181 | 0.236785 | 0 | 0.473904 | 0 | 0.002088 | 0.081855 | 0.009714 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.016701 | null | null | 0.002088 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
96fcf3db76ed42c278bc9f7e35c302521a261bcc | 48 | py | Python | clickapptest/lib/common/common.py | AtsushiSakai/clickapptest | 5fd38e3056d9d63b962ab531950d5f124003fc3a | [
"MIT"
] | null | null | null | clickapptest/lib/common/common.py | AtsushiSakai/clickapptest | 5fd38e3056d9d63b962ab531950d5f124003fc3a | [
"MIT"
] | null | null | null | clickapptest/lib/common/common.py | AtsushiSakai/clickapptest | 5fd38e3056d9d63b962ab531950d5f124003fc3a | [
"MIT"
] | null | null | null |
def print_common():
print("this is common") | 16 | 27 | 0.666667 | 7 | 48 | 4.428571 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 48 | 3 | 27 | 16 | 0.794872 | 0 | 0 | 0 | 0 | 0 | 0.291667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
8c07af07a184ddf00713f94bf206e65b652e34cb | 36 | py | Python | goamazondownloader/sbandradar/__init__.py | AdrianoPereira/goamazondownloader | 05671ee59c0a5a4c20edd5e4825d5ebf589d1327 | [
"MIT"
] | 1 | 2021-05-07T16:04:29.000Z | 2021-05-07T16:04:29.000Z | goamazondownloader/sbandradar/__init__.py | AdrianoPereira/goamazondownloader | 05671ee59c0a5a4c20edd5e4825d5ebf589d1327 | [
"MIT"
] | 1 | 2020-08-14T16:34:32.000Z | 2020-08-14T16:34:32.000Z | goamazondownloader/sbandradar/__init__.py | AdrianoPereira/goamazondownloader | 05671ee59c0a5a4c20edd5e4825d5ebf589d1327 | [
"MIT"
] | null | null | null | from ._sbandradar import SBandRadar
| 18 | 35 | 0.861111 | 4 | 36 | 7.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.