hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
317a20a44096a918c27596143b2d6e3d161826ea | 15,906 | py | Python | test/unit/common/middleware/test_domain_remap.py | 10088/swift | 93c432342bffce7a87902d7d8e9850eeddbb1a7c | [
"Apache-2.0"
] | null | null | null | test/unit/common/middleware/test_domain_remap.py | 10088/swift | 93c432342bffce7a87902d7d8e9850eeddbb1a7c | [
"Apache-2.0"
] | null | null | null | test/unit/common/middleware/test_domain_remap.py | 10088/swift | 93c432342bffce7a87902d7d8e9850eeddbb1a7c | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import six
import unittest
from swift.common.swob import Request, HTTPMovedPermanently
from swift.common.middleware import domain_remap
from swift.common import registry
class FakeApp(object):
def __call__(self, env, start_response):
start_response('200 OK', [])
if six.PY2:
return [env['PATH_INFO']]
else:
return [env['PATH_INFO'].encode('latin-1')]
class RedirectSlashApp(object):
def __call__(self, env, start_response):
loc = env['PATH_INFO'] + '/'
return HTTPMovedPermanently(location=loc)(env, start_response)
def start_response(*args):
pass
class TestDomainRemap(unittest.TestCase):
def setUp(self):
self.app = domain_remap.DomainRemapMiddleware(FakeApp(), {})
def test_domain_remap_passthrough(self):
req = Request.blank('/', environ={'REQUEST_METHOD': 'GET',
'SERVER_NAME': 'example.com'},
headers={'Host': None})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/'])
req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/'])
req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'example.com:8080'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/'])
def test_domain_remap_account(self):
req = Request.blank('/', environ={'REQUEST_METHOD': 'GET',
'SERVER_NAME': 'AUTH_a.example.com'},
headers={'Host': None})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/'])
req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/'])
req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'AUTH-uuid.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_uuid/'])
def test_domain_remap_account_container(self):
req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/c/'])
def test_domain_remap_extra_subdomains(self):
req = Request.blank('/', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'x.y.c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'Bad domain in host header'])
def test_domain_remap_account_with_path_root_container(self):
req = Request.blank('/v1', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/v1'])
def test_domain_remap_account_with_path_root_unicode_container(self):
req = Request.blank('/%E4%BD%A0%E5%A5%BD',
environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/\xe4\xbd\xa0\xe5\xa5\xbd'])
def test_domain_remap_account_container_with_path_root_obj(self):
req = Request.blank('/v1', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/c/v1'])
def test_domain_remap_account_container_with_path_obj_slash_v1(self):
# Include http://localhost because the urlparse call in Request.__init__
# parses //v1 as http://v1
req = Request.blank('http://localhost//v1',
environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/c//v1'])
def test_domain_remap_account_container_with_root_path_obj_slash_v1(self):
req = Request.blank('/v1//v1',
environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/c/v1//v1'])
def test_domain_remap_account_container_with_path_trailing_slash(self):
req = Request.blank('/obj/', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/c/obj/'])
def test_domain_remap_account_container_with_path(self):
req = Request.blank('/obj', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/c/obj'])
def test_domain_remap_account_container_with_path_root_and_path(self):
req = Request.blank('/v1/obj', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/c/v1/obj'])
def test_domain_remap_with_path_root_and_path_no_slash(self):
req = Request.blank('/v1obj', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/c/v1obj'])
def test_domain_remap_account_matching_ending_not_domain(self):
req = Request.blank('/dontchange', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.aexample.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/dontchange'])
def test_domain_remap_configured_with_empty_storage_domain(self):
self.app = domain_remap.DomainRemapMiddleware(FakeApp(),
{'storage_domain': ''})
req = Request.blank('/test', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/test'])
def test_storage_domains_conf_format(self):
conf = {'storage_domain': 'foo.com'}
app = domain_remap.filter_factory(conf)(FakeApp())
self.assertEqual(app.storage_domain, ['.foo.com'])
conf = {'storage_domain': 'foo.com, '}
app = domain_remap.filter_factory(conf)(FakeApp())
self.assertEqual(app.storage_domain, ['.foo.com'])
conf = {'storage_domain': 'foo.com, bar.com'}
app = domain_remap.filter_factory(conf)(FakeApp())
self.assertEqual(app.storage_domain, ['.foo.com', '.bar.com'])
conf = {'storage_domain': 'foo.com, .bar.com'}
app = domain_remap.filter_factory(conf)(FakeApp())
self.assertEqual(app.storage_domain, ['.foo.com', '.bar.com'])
conf = {'storage_domain': '.foo.com, .bar.com'}
app = domain_remap.filter_factory(conf)(FakeApp())
self.assertEqual(app.storage_domain, ['.foo.com', '.bar.com'])
def test_domain_remap_configured_with_prefixes(self):
conf = {'reseller_prefixes': 'PREFIX'}
self.app = domain_remap.DomainRemapMiddleware(FakeApp(), conf)
req = Request.blank('/test', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.prefix_uuid.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/PREFIX_uuid/c/test'])
def test_domain_remap_configured_with_bad_prefixes(self):
conf = {'reseller_prefixes': 'UNKNOWN'}
self.app = domain_remap.DomainRemapMiddleware(FakeApp(), conf)
req = Request.blank('/test', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.prefix_uuid.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/test'])
def test_domain_remap_configured_with_no_prefixes(self):
conf = {'reseller_prefixes': ''}
self.app = domain_remap.DomainRemapMiddleware(FakeApp(), conf)
req = Request.blank('/test', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.uuid.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/uuid/c/test'])
def test_domain_remap_add_prefix(self):
conf = {'default_reseller_prefix': 'FOO'}
self.app = domain_remap.DomainRemapMiddleware(FakeApp(), conf)
req = Request.blank('/test', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'uuid.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/FOO_uuid/test'])
def test_domain_remap_add_prefix_already_there(self):
conf = {'default_reseller_prefix': 'AUTH'}
self.app = domain_remap.DomainRemapMiddleware(FakeApp(), conf)
req = Request.blank('/test', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'auth-uuid.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_uuid/test'])
def test_multiple_storage_domains(self):
conf = {'storage_domain': 'storage1.com, storage2.com'}
self.app = domain_remap.DomainRemapMiddleware(FakeApp(), conf)
def do_test(host):
req = Request.blank('/test', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': host})
return self.app(req.environ, start_response)
resp = do_test('auth-uuid.storage1.com')
self.assertEqual(resp, [b'/v1/AUTH_uuid/test'])
resp = do_test('auth-uuid.storage2.com')
self.assertEqual(resp, [b'/v1/AUTH_uuid/test'])
resp = do_test('auth-uuid.storage3.com')
self.assertEqual(resp, [b'/test'])
def test_domain_remap_redirect(self):
app = domain_remap.DomainRemapMiddleware(RedirectSlashApp(), {})
req = Request.blank('/cont', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'auth-uuid.example.com'})
resp = req.get_response(app)
self.assertEqual(resp.status_int, 301)
self.assertEqual(resp.headers.get('Location'),
'http://auth-uuid.example.com/cont/')
req = Request.blank('/cont/test', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'auth-uuid.example.com'})
resp = req.get_response(app)
self.assertEqual(resp.status_int, 301)
self.assertEqual(resp.headers.get('Location'),
'http://auth-uuid.example.com/cont/test/')
req = Request.blank('/test', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'cont.auth-uuid.example.com'})
resp = req.get_response(app)
self.assertEqual(resp.status_int, 301)
self.assertEqual(resp.headers.get('Location'),
'http://cont.auth-uuid.example.com/test/')
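The host-to-path mapping these tests exercise can be summarised in a small standalone sketch; this is a simplified illustration of the behaviour, not Swift's actual middleware, and the function name and defaults are hypothetical:

```python
def remap_path(host, path, storage_domain='.example.com', path_root='v1'):
    """Map e.g. host 'c.AUTH_a.example.com' + path '/obj' -> '/v1/AUTH_a/c/obj'."""
    host = host.split(':')[0]  # drop any port suffix
    if not host.endswith(storage_domain):
        return path  # not a storage domain: pass the path through unchanged
    prefix = host[:-len(storage_domain)]
    parts = prefix.split('.')
    if len(parts) == 1:
        account, container = parts[0], None
    elif len(parts) == 2:
        container, account = parts
    else:
        return 'Bad domain in host header'
    account = account.replace('-', '_')  # AUTH-uuid -> AUTH_uuid
    segs = ['', path_root, account]
    if container:
        segs.append(container)
    return '/'.join(segs) + path

print(remap_path('c.AUTH_a.example.com', '/obj'))  # /v1/AUTH_a/c/obj
print(remap_path('AUTH-uuid.example.com', '/'))    # /v1/AUTH_uuid/
print(remap_path('example.com:8080', '/'))         # /
```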
class TestDomainRemapClientMangling(unittest.TestCase):
def setUp(self):
self.app = domain_remap.DomainRemapMiddleware(FakeApp(), {
'mangle_client_paths': True})
def test_domain_remap_account_with_path_root_container(self):
req = Request.blank('/v1', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/'])
def test_domain_remap_account_container_with_path_root_obj(self):
req = Request.blank('/v1', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/c/'])
def test_domain_remap_account_container_with_path_obj_slash_v1(self):
# Include http://localhost because the urlparse call in Request.__init__
# parses //v1 as http://v1
req = Request.blank('http://localhost//v1',
environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/c//v1'])
def test_domain_remap_account_container_with_root_path_obj_slash_v1(self):
req = Request.blank('/v1//v1',
environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/c//v1'])
def test_domain_remap_account_container_with_path_trailing_slash(self):
req = Request.blank('/obj/', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/c/obj/'])
def test_domain_remap_account_container_with_path_root_and_path(self):
req = Request.blank('/v1/obj', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/c/obj'])
def test_domain_remap_with_path_root_and_path_no_slash(self):
req = Request.blank('/v1obj', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, [b'/v1/AUTH_a/c/v1obj'])
class TestSwiftInfo(unittest.TestCase):
def setUp(self):
registry._swift_info = {}
registry._swift_admin_info = {}
def test_registered_defaults(self):
domain_remap.filter_factory({})
swift_info = registry.get_swift_info()
self.assertIn('domain_remap', swift_info)
self.assertEqual(swift_info['domain_remap'], {
'default_reseller_prefix': None})
def test_registered_nondefaults(self):
domain_remap.filter_factory({'default_reseller_prefix': 'cupcake',
'mangle_client_paths': 'yes'})
swift_info = registry.get_swift_info()
self.assertIn('domain_remap', swift_info)
self.assertEqual(swift_info['domain_remap'], {
'default_reseller_prefix': 'cupcake'})
if __name__ == '__main__':
unittest.main()
| 46.104348 | 79 | 0.609644 | 1,894 | 15,906 | 4.89018 | 0.10982 | 0.059382 | 0.082056 | 0.086914 | 0.832218 | 0.788815 | 0.774779 | 0.739905 | 0.735478 | 0.729756 | 0 | 0.008392 | 0.243367 | 15,906 | 344 | 80 | 46.238372 | 0.761197 | 0.046963 | 0 | 0.625 | 0 | 0 | 0.191347 | 0.027873 | 0 | 0 | 0 | 0 | 0.185606 | 1 | 0.147727 | false | 0.007576 | 0.018939 | 0 | 0.200758 | 0.003788 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3195583c729da5728d6fd40305164b8f19634937 | 130 | py | Python | dj_lab/mysite/news/admin.py | ch1huizong/lab | c1622f94ca5c8a716cb5769fa6213060eb7c140d | [
"MIT"
] | null | null | null | dj_lab/mysite/news/admin.py | ch1huizong/lab | c1622f94ca5c8a716cb5769fa6213060eb7c140d | [
"MIT"
] | null | null | null | dj_lab/mysite/news/admin.py | ch1huizong/lab | c1622f94ca5c8a716cb5769fa6213060eb7c140d | [
"MIT"
] | null | null | null | from django.contrib import admin
from . import models
admin.site.register(models.Reporter)
admin.site.register(models.Article)
| 16.25 | 36 | 0.807692 | 18 | 130 | 5.833333 | 0.555556 | 0.171429 | 0.32381 | 0.438095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 130 | 7 | 37 | 18.571429 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
319bfeddf2c27f81ab0660b2211b3ac9cd26bcc3 | 2,120 | py | Python | leetcode/python/searchInRotatedSortedArray.py | yaoxuanw007/forfun | db50bd40852d49bd68bae03ceb43cb4a901c6d37 | [
"MIT"
] | null | null | null | leetcode/python/searchInRotatedSortedArray.py | yaoxuanw007/forfun | db50bd40852d49bd68bae03ceb43cb4a901c6d37 | [
"MIT"
] | null | null | null | leetcode/python/searchInRotatedSortedArray.py | yaoxuanw007/forfun | db50bd40852d49bd68bae03ceb43cb4a901c6d37 | [
"MIT"
] | null | null | null | # https://oj.leetcode.com/problems/search-in-rotated-sorted-array/
# Only works when the array is a rotation of an ascending-sorted array
class Solution:
# @param A, a list of integers
# @param target, an integer to be searched
# @return an integer
def search(self, A, target):
left, right = 0, len(A) - 1
while left <= right:
mid = (left + right) // 2
if A[mid] == target:
return mid
elif A[mid] > A[right]:
if target >= A[left] and target < A[mid]:
right = mid - 1
else:
left = mid + 1
else:
if target > A[mid] and target <= A[right]:
left = mid + 1
else:
right = mid - 1
return -1
# Handles arrays rotated from either ascending or descending order, assuming no duplicates
class Solution1:
# @param A, a list of integers
# @param target, an integer to be searched
# @return an integer
def search(self, A, target):
left, right = 0, len(A) - 1
while left <= right:
mid = (left + right) // 2
if A[mid] == target:
return mid
elif A[left] < A[right]:
if A[mid] < A[left]:
if A[mid] < target and target <= A[left]:
right = mid - 1
else:
left = mid + 1
elif A[mid] > A[right]:
if A[right] <= target and target < A[mid]:
left = mid + 1
else:
right = mid - 1
else:
if target < A[mid]:
right = mid - 1
else:
left = mid + 1
else:
if A[mid] < A[right]:
if A[mid] < target and target <= A[right]:
left = mid + 1
else:
right = mid - 1
elif A[mid] > A[left]:
if A[left] <= target and target < A[mid]:
right = mid - 1
else:
left = mid + 1
else:
if target < A[mid]:
left = mid + 1
else:
right = mid - 1
return -1
s = Solution1()
print(s.search([4,5,6,7,0,1,2], 2))
print(s.search([2,1,0,7,6,5,4], 4))
print(s.search([4,5,6,7,0,1,2], 0))
print(s.search([0,1,2,4,5,6,7], 6))
print(s.search([0,1,1,1,2,6,7], 6))
print(s.search([0,1,1,1,2,6,7], 1))
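An alternative to the single-pass approach above, covering the ascending-rotation case only (a sketch, not part of the original exercise): locate the rotation pivot first, then run two ordinary binary searches using the standard-library bisect module.

```python
import bisect

def search_rotated(a, target):
    """Find target in an ascending array rotated at an unknown pivot."""
    if not a:
        return -1
    # Binary-search for the index of the smallest element (the pivot).
    lo, hi = 0, len(a) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] > a[hi]:
            lo = mid + 1
        else:
            hi = mid
    pivot = lo
    # Each half is sorted; bisect within whichever half holds the target.
    for start, end in ((0, pivot), (pivot, len(a))):
        i = bisect.bisect_left(a, target, start, end)
        if i < end and a[i] == target:
            return i
    return -1

print(search_rotated([4, 5, 6, 7, 0, 1, 2], 0))  # 4
print(search_rotated([4, 5, 6, 7, 0, 1, 2], 3))  # -1
```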
| 26.835443 | 66 | 0.482547 | 325 | 2,120 | 3.147692 | 0.169231 | 0.062561 | 0.093842 | 0.082111 | 0.809384 | 0.773216 | 0.711632 | 0.652004 | 0.652004 | 0.638319 | 0 | 0.056188 | 0.378774 | 2,120 | 78 | 67 | 27.179487 | 0.720577 | 0.144811 | 0 | 0.707692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.092308 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9ee27514875356ce737eb5951a1ccc38b62b4f0e | 235 | py | Python | event/views.py | Nadiahansen15/django-denmark.org | d3be44764367a7c1059d7a0e896fbbc56ce1771f | [
"MIT"
] | null | null | null | event/views.py | Nadiahansen15/django-denmark.org | d3be44764367a7c1059d7a0e896fbbc56ce1771f | [
"MIT"
] | 1 | 2021-05-03T09:27:59.000Z | 2021-05-03T09:27:59.000Z | event/views.py | Nadiahansen15/django-denmark.org | d3be44764367a7c1059d7a0e896fbbc56ce1771f | [
"MIT"
] | null | null | null | from django.shortcuts import render, get_object_or_404
from django.utils import timezone
from django.shortcuts import redirect
# Create your views here.
def getLandingPage(request):
return render(request, 'event/landingpage.html') | 33.571429 | 54 | 0.817021 | 32 | 235 | 5.90625 | 0.71875 | 0.15873 | 0.201058 | 0.26455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014423 | 0.114894 | 235 | 7 | 55 | 33.571429 | 0.894231 | 0.097872 | 0 | 0 | 0 | 0 | 0.104265 | 0.104265 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.6 | 0.2 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
9efa8a75ffe54a5aa57bd8271ab8b2e5ccd982de | 40 | py | Python | algorithms/simpleArraySum.py | marismarcosta/hackerrank | 3580b4fe0094e2a13f9a7efeeb0e072810be9ebf | [
"MIT"
] | null | null | null | algorithms/simpleArraySum.py | marismarcosta/hackerrank | 3580b4fe0094e2a13f9a7efeeb0e072810be9ebf | [
"MIT"
] | 3 | 2020-09-27T22:57:05.000Z | 2020-09-29T23:07:44.000Z | algorithms/simpleArraySum.py | marismarcosta/hackerrank-challenges | 3580b4fe0094e2a13f9a7efeeb0e072810be9ebf | [
"MIT"
] | 1 | 2020-11-06T21:16:19.000Z | 2020-11-06T21:16:19.000Z | def simpleArraySum(ar):
return sum(ar) | 20 | 23 | 0.75 | 6 | 40 | 5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 40 | 2 | 24 | 20 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
7362f8dd640949750651dd93185e0930d7819213 | 172 | py | Python | test/args/model_lists.py | pvandyken/cluster_utils | 801bebb299bec8b71cb9c237a5a4daff9c9e33fe | [
"MIT"
] | null | null | null | test/args/model_lists.py | pvandyken/cluster_utils | 801bebb299bec8b71cb9c237a5a4daff9c9e33fe | [
"MIT"
] | null | null | null | test/args/model_lists.py | pvandyken/cluster_utils | 801bebb299bec8b71cb9c237a5a4daff9c9e33fe | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from kslurm.args.arg_types import PositionalArg
def positional_models(num: int):
return [PositionalArg() for _ in range(num)]
| 21.5 | 48 | 0.790698 | 23 | 172 | 5.565217 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 172 | 7 | 49 | 24.571429 | 0.864865 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
b40c7bc24cacbc03844c5291a64661634da308c5 | 151 | py | Python | tests/test_sitemap.py | vndmtrx/flask-staticsite | a1d80578b810088ce29fa48aa73a84aac178f248 | [
"Apache-2.0"
] | 2 | 2018-10-21T18:17:20.000Z | 2021-11-27T12:25:17.000Z | tests/test_sitemap.py | vndmtrx/flask-staticsite | a1d80578b810088ce29fa48aa73a84aac178f248 | [
"Apache-2.0"
] | null | null | null | tests/test_sitemap.py | vndmtrx/flask-staticsite | a1d80578b810088ce29fa48aa73a84aac178f248 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import unittest
class TestSitemap(unittest.TestCase):
pass
| 16.777778 | 39 | 0.735099 | 19 | 151 | 5.578947 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007752 | 0.145695 | 151 | 8 | 40 | 18.875 | 0.813953 | 0.278146 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
b443d7db2cd3cb03bba239ad069270dd96d5b878 | 68 | py | Python | CodingBat/Warmup-1/sum_double.py | arthxvr/coding--python | 1e91707be6cb8fef816dad0c1a65f2cc3327357e | [
"MIT"
] | null | null | null | CodingBat/Warmup-1/sum_double.py | arthxvr/coding--python | 1e91707be6cb8fef816dad0c1a65f2cc3327357e | [
"MIT"
] | null | null | null | CodingBat/Warmup-1/sum_double.py | arthxvr/coding--python | 1e91707be6cb8fef816dad0c1a65f2cc3327357e | [
"MIT"
] | null | null | null | def sum_double(a, b):
return 2 * (a + b) if a == b else (a + b)
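The rule above (double the sum when the two values are equal) is easy to verify with a couple of expected values; a self-contained check:

```python
def sum_double(a, b):
    # Sum of a and b, doubled when the two values are equal.
    return 2 * (a + b) if a == b else (a + b)

print(sum_double(1, 2))  # 3
print(sum_double(3, 3))  # 12
```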
| 22.666667 | 45 | 0.5 | 15 | 68 | 2.2 | 0.6 | 0.242424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021277 | 0.308824 | 68 | 2 | 46 | 34 | 0.680851 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b461b5f70c282446c1d63ae9812876b6803c5d3f | 34,543 | py | Python | tests/test_functional/test_generation.py | the-code-robot/routes | bb7694b2ca8ac55a2bb671cc4c91fd55b3b8cc18 | [
"MIT"
] | 105 | 2015-01-27T02:33:17.000Z | 2022-03-06T06:08:47.000Z | tests/test_functional/test_generation.py | the-code-robot/routes | bb7694b2ca8ac55a2bb671cc4c91fd55b3b8cc18 | [
"MIT"
] | 75 | 2015-01-05T21:16:02.000Z | 2021-12-06T21:13:43.000Z | tests/test_functional/test_generation.py | the-code-robot/routes | bb7694b2ca8ac55a2bb671cc4c91fd55b3b8cc18 | [
"MIT"
] | 48 | 2015-01-19T00:40:23.000Z | 2022-03-06T06:08:53.000Z | """test_generation"""
import sys, time, unittest
from six.moves import urllib
from nose.tools import eq_, assert_raises
from routes import *
class TestGeneration(unittest.TestCase):
def test_all_static_no_reqs(self):
m = Mapper()
m.connect('hello/world')
eq_('/hello/world', m.generate())
def test_basic_dynamic(self):
for path in ['hi/:fred', 'hi/:(fred)']:
m = Mapper()
m.connect(path)
eq_('/hi/index', m.generate(fred='index'))
eq_('/hi/show', m.generate(fred='show'))
eq_('/hi/list%20people', m.generate(fred='list people'))
eq_(None, m.generate())
def test_relative_url(self):
m = Mapper(explicit=False)
m.minimization = True
m.environ = dict(HTTP_HOST='localhost')
url = URLGenerator(m, m.environ)
m.connect(':controller/:action/:id')
m.create_regs(['content','blog','admin/comments'])
eq_('about', url('about'))
eq_('http://localhost/about', url('about', qualified=True))
def test_basic_dynamic_explicit_use(self):
m = Mapper()
m.connect('hi/{fred}')
url = URLGenerator(m, {})
eq_('/hi/index', url(fred='index'))
eq_('/hi/show', url(fred='show'))
eq_('/hi/list%20people', url(fred='list people'))
def test_dynamic_with_default(self):
for path in ['hi/:action', 'hi/:(action)']:
m = Mapper(explicit=False)
m.minimization = True
m.connect(path)
eq_('/hi', m.generate(action='index'))
eq_('/hi/show', m.generate(action='show'))
eq_('/hi/list%20people', m.generate(action='list people'))
eq_('/hi', m.generate())
def test_dynamic_with_false_equivs(self):
m = Mapper(explicit=False)
m.minimization = True
m.connect('article/:page', page=False)
m.connect(':controller/:action/:id')
eq_('/blog/view/0', m.generate(controller="blog", action="view", id="0"))
eq_('/blog/view/0', m.generate(controller="blog", action="view", id=0))
eq_('/blog/view/False', m.generate(controller="blog", action="view", id=False))
eq_('/blog/view/False', m.generate(controller="blog", action="view", id='False'))
eq_('/blog/view', m.generate(controller="blog", action="view", id=None))
eq_('/blog/view', m.generate(controller="blog", action="view", id='None'))
eq_('/article', m.generate(page=None))
m = Mapper()
m.minimization = True
m.connect('view/:home/:area', home="austere", area=None)
eq_('/view/sumatra', m.generate(home='sumatra'))
eq_('/view/austere/chicago', m.generate(area='chicago'))
m = Mapper()
m.minimization = True
m.connect('view/:home/:area', home=None, area=None)
eq_('/view/None/chicago', m.generate(home=None, area='chicago'))
def test_dynamic_with_underscore_parts(self):
m = Mapper(explicit=False)
m.minimization = True
m.connect('article/:small_page', small_page=False)
m.connect(':(controller)/:(action)/:(id)')
eq_('/blog/view/0', m.generate(controller="blog", action="view", id="0"))
eq_('/blog/view/False', m.generate(controller="blog", action="view", id='False'))
eq_('/blog/view', m.generate(controller="blog", action="view", id='None'))
eq_('/article', m.generate(small_page=None))
eq_('/article/hobbes', m.generate(small_page='hobbes'))
def test_dynamic_with_false_equivs_and_splits(self):
m = Mapper(explicit=False)
m.minimization = True
m.connect('article/:(page)', page=False)
m.connect(':(controller)/:(action)/:(id)')
eq_('/blog/view/0', m.generate(controller="blog", action="view", id="0"))
eq_('/blog/view/0', m.generate(controller="blog", action="view", id=0))
eq_('/blog/view/False', m.generate(controller="blog", action="view", id=False))
eq_('/blog/view/False', m.generate(controller="blog", action="view", id='False'))
eq_('/blog/view', m.generate(controller="blog", action="view", id=None))
eq_('/blog/view', m.generate(controller="blog", action="view", id='None'))
eq_('/article', m.generate(page=None))
m = Mapper()
m.minimization = True
m.connect('view/:(home)/:(area)', home="austere", area=None)
eq_('/view/sumatra', m.generate(home='sumatra'))
eq_('/view/austere/chicago', m.generate(area='chicago'))
m = Mapper()
m.minimization = True
m.connect('view/:(home)/:(area)', home=None, area=None)
eq_('/view/None/chicago', m.generate(home=None, area='chicago'))
def test_dynamic_with_regexp_condition(self):
for path in ['hi/:name', 'hi/:(name)']:
m = Mapper()
            m.connect(path, requirements={'name': '[a-z]+'})
eq_('/hi/index', m.generate(name='index'))
eq_(None, m.generate(name='fox5'))
eq_(None, m.generate(name='something_is_up'))
eq_('/hi/abunchofcharacter', m.generate(name='abunchofcharacter'))
eq_(None, m.generate())
def test_dynamic_with_default_and_regexp_condition(self):
for path in ['hi/:action', 'hi/:(action)']:
m = Mapper(explicit=False)
m.minimization = True
            m.connect(path, requirements={'action': '[a-z]+'})
eq_('/hi', m.generate(action='index'))
eq_(None, m.generate(action='fox5'))
eq_(None, m.generate(action='something_is_up'))
eq_(None, m.generate(action='list people'))
eq_('/hi/abunchofcharacter', m.generate(action='abunchofcharacter'))
eq_('/hi', m.generate())
def test_path(self):
for path in ['hi/*file', 'hi/*(file)']:
m = Mapper()
m.minimization = True
m.connect(path)
eq_('/hi', m.generate(file=None))
eq_('/hi/books/learning_python.pdf', m.generate(file='books/learning_python.pdf'))
eq_('/hi/books/development%26whatever/learning_python.pdf',
m.generate(file='books/development&whatever/learning_python.pdf'))
def test_path_backwards(self):
for path in ['*file/hi', '*(file)/hi']:
m = Mapper()
m.minimization = True
m.connect(path)
eq_('/hi', m.generate(file=None))
eq_('/books/learning_python.pdf/hi', m.generate(file='books/learning_python.pdf'))
eq_('/books/development%26whatever/learning_python.pdf/hi',
m.generate(file='books/development&whatever/learning_python.pdf'))
def test_controller(self):
for path in ['hi/:controller', 'hi/:(controller)']:
m = Mapper()
m.connect(path)
eq_('/hi/content', m.generate(controller='content'))
eq_('/hi/admin/user', m.generate(controller='admin/user'))
def test_controller_with_static(self):
for path in ['hi/:controller', 'hi/:(controller)']:
m = Mapper()
m.connect(path)
m.connect('google', 'http://www.google.com', _static=True)
eq_('/hi/content', m.generate(controller='content'))
eq_('/hi/admin/user', m.generate(controller='admin/user'))
eq_('http://www.google.com', url_for('google'))
def test_standard_route(self):
for path in [':controller/:action/:id', ':(controller)/:(action)/:(id)']:
m = Mapper(explicit=False)
m.minimization = True
m.connect(path)
eq_('/content', m.generate(controller='content', action='index'))
eq_('/content/list', m.generate(controller='content', action='list'))
            eq_('/content/show/10', m.generate(controller='content', action='show', id='10'))
eq_('/admin/user', m.generate(controller='admin/user', action='index'))
eq_('/admin/user/list', m.generate(controller='admin/user', action='list'))
eq_('/admin/user/show/10', m.generate(controller='admin/user', action='show', id='10'))
def test_multiroute(self):
m = Mapper(explicit=False)
m.minimization = True
        m.connect('archive/:year/:month/:day', controller='blog', action='view', month=None, day=None,
                  requirements={'month': r'\d{1,2}', 'day': r'\d{1,2}'})
m.connect('viewpost/:id', controller='post', action='view')
m.connect(':controller/:action/:id')
url = m.generate(controller='blog', action='view', year=2004, month='blah')
assert url == '/blog/view?year=2004&month=blah' or url == '/blog/view?month=blah&year=2004'
eq_('/archive/2004/11', m.generate(controller='blog', action='view', year=2004, month=11))
eq_('/archive/2004/11', m.generate(controller='blog', action='view', year=2004, month='11'))
eq_('/archive/2004', m.generate(controller='blog', action='view', year=2004))
eq_('/viewpost/3', m.generate(controller='post', action='view', id=3))
def test_multiroute_with_splits(self):
m = Mapper(explicit=False)
m.minimization = True
        m.connect('archive/:(year)/:(month)/:(day)', controller='blog', action='view', month=None, day=None,
                  requirements={'month': r'\d{1,2}', 'day': r'\d{1,2}'})
m.connect('viewpost/:(id)', controller='post', action='view')
m.connect(':(controller)/:(action)/:(id)')
url = m.generate(controller='blog', action='view', year=2004, month='blah')
assert url == '/blog/view?year=2004&month=blah' or url == '/blog/view?month=blah&year=2004'
eq_('/archive/2004/11', m.generate(controller='blog', action='view', year=2004, month=11))
eq_('/archive/2004/11', m.generate(controller='blog', action='view', year=2004, month='11'))
eq_('/archive/2004', m.generate(controller='blog', action='view', year=2004))
eq_('/viewpost/3', m.generate(controller='post', action='view', id=3))
def test_big_multiroute(self):
m = Mapper(explicit=False)
m.minimization = True
m.connect('', controller='articles', action='index')
m.connect('admin', controller='admin/general', action='index')
        m.connect('admin/comments/article/:article_id/:action/:id', controller='admin/comments', action=None, id=None)
m.connect('admin/trackback/article/:article_id/:action/:id', controller='admin/trackback', action=None, id=None)
m.connect('admin/content/:action/:id', controller='admin/content')
m.connect('xml/:action/feed.xml', controller='xml')
m.connect('xml/articlerss/:id/feed.xml', controller='xml', action='articlerss')
m.connect('index.rdf', controller='xml', action='rss')
m.connect('articles', controller='articles', action='index')
        m.connect('articles/page/:page', controller='articles', action='index', requirements={'page': r'\d+'})
        m.connect('articles/:year/:month/:day/page/:page', controller='articles', action='find_by_date', month=None, day=None,
                  requirements={'year': r'\d{4}', 'month': r'\d{1,2}', 'day': r'\d{1,2}'})
m.connect('articles/category/:id', controller='articles', action='category')
m.connect('pages/*name', controller='articles', action='view_page')
eq_('/pages/the/idiot/has/spoken',
m.generate(controller='articles', action='view_page', name='the/idiot/has/spoken'))
eq_('/', m.generate(controller='articles', action='index'))
eq_('/xml/articlerss/4/feed.xml', m.generate(controller='xml', action='articlerss', id=4))
eq_('/xml/rss/feed.xml', m.generate(controller='xml', action='rss'))
eq_('/admin/comments/article/4/view/2',
m.generate(controller='admin/comments', action='view', article_id=4, id=2))
eq_('/admin', m.generate(controller='admin/general'))
eq_('/admin/comments/article/4/index', m.generate(controller='admin/comments', article_id=4))
eq_('/admin/comments/article/4',
m.generate(controller='admin/comments', action=None, article_id=4))
eq_('/articles/2004/2/20/page/1',
m.generate(controller='articles', action='find_by_date', year=2004, month=2, day=20, page=1))
eq_('/articles/category', m.generate(controller='articles', action='category'))
eq_('/xml/index/feed.xml', m.generate(controller='xml'))
eq_('/xml/articlerss/feed.xml', m.generate(controller='xml', action='articlerss'))
eq_(None, m.generate(controller='admin/comments', id=2))
eq_(None, m.generate(controller='articles', action='find_by_date', year=2004))
def test_big_multiroute_with_splits(self):
m = Mapper(explicit=False)
m.minimization = True
m.connect('', controller='articles', action='index')
m.connect('admin', controller='admin/general', action='index')
        m.connect('admin/comments/article/:(article_id)/:(action)/:(id).html', controller='admin/comments', action=None, id=None)
m.connect('admin/trackback/article/:(article_id)/:action/:(id).html', controller='admin/trackback', action=None, id=None)
m.connect('admin/content/:(action)/:(id)', controller='admin/content')
m.connect('xml/:(action)/feed.xml', controller='xml')
m.connect('xml/articlerss/:(id)/feed.xml', controller='xml', action='articlerss')
m.connect('index.rdf', controller='xml', action='rss')
m.connect('articles', controller='articles', action='index')
        m.connect('articles/page/:(page).myt', controller='articles', action='index', requirements={'page': r'\d+'})
        m.connect('articles/:(year)/:month/:(day)/page/:page', controller='articles', action='find_by_date', month=None, day=None,
                  requirements={'year': r'\d{4}', 'month': r'\d{1,2}', 'day': r'\d{1,2}'})
m.connect('articles/category/:id', controller='articles', action='category')
m.connect('pages/*name', controller='articles', action='view_page')
eq_('/pages/the/idiot/has/spoken',
m.generate(controller='articles', action='view_page', name='the/idiot/has/spoken'))
eq_('/', m.generate(controller='articles', action='index'))
eq_('/xml/articlerss/4/feed.xml', m.generate(controller='xml', action='articlerss', id=4))
eq_('/xml/rss/feed.xml', m.generate(controller='xml', action='rss'))
eq_('/admin/comments/article/4/view/2.html',
m.generate(controller='admin/comments', action='view', article_id=4, id=2))
eq_('/admin', m.generate(controller='admin/general'))
eq_('/admin/comments/article/4/edit/3.html',
m.generate(controller='admin/comments', article_id=4, action='edit', id=3))
eq_(None, m.generate(controller='admin/comments', action=None, article_id=4))
eq_('/articles/2004/2/20/page/1',
m.generate(controller='articles', action='find_by_date', year=2004, month=2, day=20, page=1))
eq_('/articles/category', m.generate(controller='articles', action='category'))
eq_('/xml/index/feed.xml', m.generate(controller='xml'))
eq_('/xml/articlerss/feed.xml', m.generate(controller='xml', action='articlerss'))
eq_(None, m.generate(controller='admin/comments', id=2))
eq_(None, m.generate(controller='articles', action='find_by_date', year=2004))
def test_big_multiroute_with_nomin(self):
m = Mapper(explicit=False)
m.minimization = False
m.connect('', controller='articles', action='index')
m.connect('admin', controller='admin/general', action='index')
        m.connect('admin/comments/article/:article_id/:action/:id', controller='admin/comments', action=None, id=None)
m.connect('admin/trackback/article/:article_id/:action/:id', controller='admin/trackback', action=None, id=None)
m.connect('admin/content/:action/:id', controller='admin/content')
m.connect('xml/:action/feed.xml', controller='xml')
m.connect('xml/articlerss/:id/feed.xml', controller='xml', action='articlerss')
m.connect('index.rdf', controller='xml', action='rss')
m.connect('articles', controller='articles', action='index')
        m.connect('articles/page/:page', controller='articles', action='index', requirements={'page': r'\d+'})
        m.connect('articles/:year/:month/:day/page/:page', controller='articles', action='find_by_date', month=None, day=None,
                  requirements={'year': r'\d{4}', 'month': r'\d{1,2}', 'day': r'\d{1,2}'})
m.connect('articles/category/:id', controller='articles', action='category')
m.connect('pages/*name', controller='articles', action='view_page')
eq_('/pages/the/idiot/has/spoken',
m.generate(controller='articles', action='view_page', name='the/idiot/has/spoken'))
eq_('/', m.generate(controller='articles', action='index'))
eq_('/xml/articlerss/4/feed.xml', m.generate(controller='xml', action='articlerss', id=4))
eq_('/xml/rss/feed.xml', m.generate(controller='xml', action='rss'))
eq_('/admin/comments/article/4/view/2',
m.generate(controller='admin/comments', action='view', article_id=4, id=2))
eq_('/admin', m.generate(controller='admin/general'))
eq_('/articles/2004/2/20/page/1',
m.generate(controller='articles', action='find_by_date', year=2004, month=2, day=20, page=1))
eq_(None, m.generate(controller='articles', action='category'))
eq_('/articles/category/4', m.generate(controller='articles', action='category', id=4))
eq_('/xml/index/feed.xml', m.generate(controller='xml'))
eq_('/xml/articlerss/feed.xml', m.generate(controller='xml', action='articlerss'))
eq_(None, m.generate(controller='admin/comments', id=2))
eq_(None, m.generate(controller='articles', action='find_by_date', year=2004))
def test_no_extras(self):
m = Mapper()
m.minimization = True
m.connect(':controller/:action/:id')
m.connect('archive/:year/:month/:day', controller='blog', action='view', month=None, day=None)
eq_('/archive/2004', m.generate(controller='blog', action='view', year=2004))
def test_no_extras_with_splits(self):
m = Mapper()
m.minimization = True
m.connect(':(controller)/:(action)/:(id)')
m.connect('archive/:(year)/:(month)/:(day)', controller='blog', action='view', month=None, day=None)
eq_('/archive/2004', m.generate(controller='blog', action='view', year=2004))
def test_the_smallest_route(self):
for path in ['pages/:title', 'pages/:(title)']:
m = Mapper()
m.connect('', controller='page', action='view', title='HomePage')
m.connect(path, controller='page', action='view')
eq_('/', m.generate(controller='page', action='view', title='HomePage'))
eq_('/pages/joe', m.generate(controller='page', action='view', title='joe'))
def test_extras(self):
m = Mapper(explicit=False)
m.minimization = True
m.connect('viewpost/:id', controller='post', action='view')
m.connect(':controller/:action/:id')
eq_('/viewpost/2?extra=x%2Fy', m.generate(controller='post', action='view', id=2, extra='x/y'))
eq_('/blog?extra=3', m.generate(controller='blog', action='index', extra=3))
eq_('/viewpost/2?extra=3', m.generate(controller='post', action='view', id=2, extra=3))
def test_extras_with_splits(self):
m = Mapper(explicit=False)
m.minimization = True
m.connect('viewpost/:(id)', controller='post', action='view')
m.connect(':(controller)/:(action)/:(id)')
eq_('/blog?extra=3', m.generate(controller='blog', action='index', extra=3))
eq_('/viewpost/2?extra=3', m.generate(controller='post', action='view', id=2, extra=3))
def test_extras_as_unicode(self):
m = Mapper()
m.connect(':something')
thing = "whatever"
euro = u"\u20ac" # Euro symbol
eq_("/%s?extra=%%E2%%82%%AC" % thing, m.generate(something=thing, extra=euro))
def test_extras_as_list_of_unicodes(self):
m = Mapper()
m.connect(':something')
thing = "whatever"
euro = [u"\u20ac", u"\xa3"] # Euro and Pound sterling symbols
eq_("/%s?extra=%%E2%%82%%AC&extra=%%C2%%A3" % thing, m.generate(something=thing, extra=euro))
def test_static(self):
m = Mapper()
        m.connect('hello/world', known='known_value', controller='content', action='index')
        eq_('/hello/world', m.generate(controller='content', action='index', known='known_value'))
eq_('/hello/world?extra=hi',
m.generate(controller='content',action='index',known='known_value',extra='hi'))
eq_(None, m.generate(known='foo'))
def test_typical(self):
for path in [':controller/:action/:id', ':(controller)/:(action)/:(id)']:
m = Mapper()
            m.minimization = True
            m.connect(path, action='index', id=None)
eq_('/content', m.generate(controller='content', action='index'))
eq_('/content/list', m.generate(controller='content', action='list'))
eq_('/content/show/10', m.generate(controller='content', action='show', id=10))
eq_('/admin/user', m.generate(controller='admin/user', action='index'))
eq_('/admin/user', m.generate(controller='admin/user'))
eq_('/admin/user/show/10', m.generate(controller='admin/user', action='show', id=10))
eq_('/content', m.generate(controller='content'))
def test_route_with_fixnum_default(self):
m = Mapper(explicit=False)
m.minimization = True
m.connect('page/:id', controller='content', action='show_page', id=1)
m.connect(':controller/:action/:id')
eq_('/page', m.generate(controller='content', action='show_page'))
eq_('/page', m.generate(controller='content', action='show_page', id=1))
eq_('/page', m.generate(controller='content', action='show_page', id='1'))
eq_('/page/10', m.generate(controller='content', action='show_page', id=10))
eq_('/blog/show/4', m.generate(controller='blog', action='show', id=4))
eq_('/page', m.generate(controller='content', action='show_page'))
eq_('/page/4', m.generate(controller='content', action='show_page',id=4))
eq_('/content/show', m.generate(controller='content', action='show'))
def test_route_with_fixnum_default_with_splits(self):
m = Mapper(explicit=False)
m.minimization = True
        m.connect('page/:(id)', controller='content', action='show_page', id=1)
m.connect(':(controller)/:(action)/:(id)')
eq_('/page', m.generate(controller='content', action='show_page'))
eq_('/page', m.generate(controller='content', action='show_page', id=1))
eq_('/page', m.generate(controller='content', action='show_page', id='1'))
eq_('/page/10', m.generate(controller='content', action='show_page', id=10))
eq_('/blog/show/4', m.generate(controller='blog', action='show', id=4))
eq_('/page', m.generate(controller='content', action='show_page'))
eq_('/page/4', m.generate(controller='content', action='show_page',id=4))
eq_('/content/show', m.generate(controller='content', action='show'))
def test_uppercase_recognition(self):
for path in [':controller/:action/:id', ':(controller)/:(action)/:(id)']:
m = Mapper(explicit=False)
m.minimization = True
m.connect(path)
eq_('/Content', m.generate(controller='Content', action='index'))
eq_('/Content/list', m.generate(controller='Content', action='list'))
eq_('/Content/show/10', m.generate(controller='Content', action='show', id='10'))
eq_('/Admin/NewsFeed', m.generate(controller='Admin/NewsFeed', action='index'))
def test_backwards(self):
m = Mapper(explicit=False)
m.minimization = True
m.connect('page/:id/:action', controller='pages', action='show')
m.connect(':controller/:action/:id')
eq_('/page/20', m.generate(controller='pages', action='show', id=20))
eq_('/pages/boo', m.generate(controller='pages', action='boo'))
def test_backwards_with_splits(self):
m = Mapper(explicit=False)
m.minimization = True
m.connect('page/:(id)/:(action)', controller='pages', action='show')
m.connect(':(controller)/:(action)/:(id)')
eq_('/page/20', m.generate(controller='pages', action='show', id=20))
eq_('/pages/boo', m.generate(controller='pages', action='boo'))
def test_both_requirement_and_optional(self):
m = Mapper()
m.minimization = True
        m.connect('test/:year', controller='post', action='show', year=None, requirements={'year': r'\d{4}'})
eq_('/test', m.generate(controller='post', action='show'))
eq_('/test', m.generate(controller='post', action='show', year=None))
def test_set_to_nil_forgets(self):
m = Mapper()
m.minimization = True
m.connect('pages/:year/:month/:day', controller='content', action='list_pages', month=None, day=None)
m.connect(':controller/:action/:id')
eq_('/pages/2005', m.generate(controller='content', action='list_pages', year=2005))
eq_('/pages/2005/6', m.generate(controller='content', action='list_pages', year=2005, month=6))
eq_('/pages/2005/6/12',
m.generate(controller='content', action='list_pages', year=2005, month=6, day=12))
def test_url_with_no_action_specified(self):
m = Mapper()
m.connect('', controller='content')
m.connect(':controller/:action/:id')
eq_('/', m.generate(controller='content', action='index'))
eq_('/', m.generate(controller='content'))
def test_url_with_prefix(self):
m = Mapper(explicit=False)
m.minimization = True
m.prefix = '/blog'
m.connect(':controller/:action/:id')
m.create_regs(['content','blog','admin/comments'])
eq_('/blog/content/view', m.generate(controller='content', action='view'))
eq_('/blog/content', m.generate(controller='content'))
eq_('/blog/admin/comments', m.generate(controller='admin/comments'))
def test_url_with_prefix_deeper(self):
m = Mapper(explicit=False)
m.minimization = True
m.prefix = '/blog/phil'
m.connect(':controller/:action/:id')
m.create_regs(['content','blog','admin/comments'])
eq_('/blog/phil/content/view', m.generate(controller='content', action='view'))
eq_('/blog/phil/content', m.generate(controller='content'))
eq_('/blog/phil/admin/comments', m.generate(controller='admin/comments'))
def test_url_with_environ_empty(self):
m = Mapper(explicit=False)
m.minimization = True
m.environ = dict(SCRIPT_NAME='')
m.connect(':controller/:action/:id')
m.create_regs(['content','blog','admin/comments'])
eq_('/content/view', m.generate(controller='content', action='view'))
eq_('/content', m.generate(controller='content'))
eq_('/admin/comments', m.generate(controller='admin/comments'))
def test_url_with_environ(self):
m = Mapper(explicit=False)
m.minimization = True
m.environ = dict(SCRIPT_NAME='/blog')
m.connect(':controller/:action/:id')
m.create_regs(['content','blog','admin/comments'])
eq_('/blog/content/view', m.generate(controller='content', action='view'))
eq_('/blog/content', m.generate(controller='content'))
eq_('/blog/content', m.generate(controller='content'))
eq_('/blog/admin/comments', m.generate(controller='admin/comments'))
m.environ = dict(SCRIPT_NAME='/notblog')
eq_('/notblog/content/view', m.generate(controller='content', action='view'))
eq_('/notblog/content', m.generate(controller='content'))
eq_('/notblog/content', m.generate(controller='content'))
eq_('/notblog/admin/comments', m.generate(controller='admin/comments'))
def test_url_with_environ_and_caching(self):
m = Mapper()
m.connect("foo", "/", controller="main", action="index")
eq_('/', m.generate(controller='main', action='index'))
eq_('/bar/', m.generate(controller='main', action='index', _environ=dict(SCRIPT_NAME='/bar')))
eq_('/', m.generate(controller='main', action='index'))
def test_url_with_environ_and_absolute(self):
m = Mapper(explicit=False)
m.minimization = True
m.environ = dict(SCRIPT_NAME='/blog')
m.connect('image', 'image/:name', _absolute=True)
m.connect(':controller/:action/:id')
m.create_regs(['content','blog','admin/comments'])
eq_('/blog/content/view', m.generate(controller='content', action='view'))
eq_('/blog/content', m.generate(controller='content'))
eq_('/blog/content', m.generate(controller='content'))
eq_('/blog/admin/comments', m.generate(controller='admin/comments'))
eq_('/image/topnav.jpg', url_for('image', name='topnav.jpg'))
def test_route_with_odd_leftovers(self):
m = Mapper(explicit=False)
m.minimization = True
m.connect(':controller/:(action)-:(id)')
m.create_regs(['content','blog','admin/comments'])
eq_('/content/view-', m.generate(controller='content', action='view'))
eq_('/content/index-', m.generate(controller='content'))
def test_route_with_end_extension(self):
m = Mapper(explicit=False)
m.connect(':controller/:(action)-:(id).html')
m.create_regs(['content','blog','admin/comments'])
eq_(None, m.generate(controller='content', action='view'))
eq_(None, m.generate(controller='content'))
eq_('/content/view-3.html', m.generate(controller='content', action='view', id=3))
eq_('/content/index-2.html', m.generate(controller='content', id=2))
def test_unicode(self):
hoge = u'\u30c6\u30b9\u30c8' # the word test in Japanese
hoge_enc = urllib.parse.quote(hoge.encode('utf-8'))
m = Mapper()
m.connect(':hoge')
eq_("/%s" % hoge_enc, m.generate(hoge=hoge))
        self.assertIsInstance(m.generate(hoge=hoge), str)
def test_unicode_static(self):
hoge = u'\u30c6\u30b9\u30c8' # the word test in Japanese
hoge_enc = urllib.parse.quote(hoge.encode('utf-8'))
m = Mapper()
m.minimization = True
m.connect('google-jp', 'http://www.google.co.jp/search', _static=True)
m.create_regs(['messages'])
eq_("http://www.google.co.jp/search?q=" + hoge_enc, url_for('google-jp', q=hoge))
        self.assertIsInstance(url_for('google-jp', q=hoge), str)
def test_other_special_chars(self):
m = Mapper()
m.minimization = True
m.connect('/:year/:(slug).:(format),:(locale)', locale='en', format='html')
m.create_regs(['content'])
eq_('/2007/test', m.generate(year=2007, slug='test'))
eq_('/2007/test.xml', m.generate(year=2007, slug='test', format='xml'))
eq_('/2007/test.xml,ja', m.generate(year=2007, slug='test', format='xml', locale='ja'))
eq_(None, m.generate(year=2007, format='html'))
def test_dot_format_args(self):
for minimization in [False, True]:
m = Mapper(explicit=True)
            m.minimization = minimization
m.connect('/songs/{title}{.format}')
m.connect('/stories/{slug}{.format:pdf}')
eq_('/songs/my-way', m.generate(title='my-way'))
eq_('/songs/my-way.mp3', m.generate(title='my-way', format='mp3'))
eq_('/stories/frist-post', m.generate(slug='frist-post'))
eq_('/stories/frist-post.pdf', m.generate(slug='frist-post', format='pdf'))
eq_(None, m.generate(slug='frist-post', format='doc'))
if __name__ == '__main__':
unittest.main()
else:
    def bench_gen(withcache=False):
m = Mapper()
m.connect('', controller='articles', action='index')
m.connect('admin', controller='admin/general', action='index')
        m.connect('admin/comments/article/:article_id/:action/:id', controller='admin/comments', action=None, id=None)
m.connect('admin/trackback/article/:article_id/:action/:id', controller='admin/trackback', action=None, id=None)
m.connect('admin/content/:action/:id', controller='admin/content')
m.connect('xml/:action/feed.xml', controller='xml')
m.connect('xml/articlerss/:id/feed.xml', controller='xml', action='articlerss')
m.connect('index.rdf', controller='xml', action='rss')
m.connect('articles', controller='articles', action='index')
        m.connect('articles/page/:page', controller='articles', action='index', requirements={'page': r'\d+'})
        m.connect('articles/:year/:month/:day/page/:page', controller='articles', action='find_by_date', month=None, day=None,
                  requirements={'year': r'\d{4}', 'month': r'\d{1,2}', 'day': r'\d{1,2}'})
m.connect('articles/category/:id', controller='articles', action='category')
m.connect('pages/*name', controller='articles', action='view_page')
if withcache:
m.urlcache = {}
m._create_gens()
n = 5000
start = time.time()
        for x in range(n):
m.generate(controller='articles', action='index', page=4)
m.generate(controller='admin/general', action='index')
m.generate(controller='admin/comments', action='show', article_id=2)
m.generate(controller='articles', action='find_by_date', year=2004, page=1)
m.generate(controller='articles', action='category', id=4)
m.generate(controller='xml', action='articlerss', id=2)
end = time.time()
ts = time.time()
        for x in range(n * 6):
pass
en = time.time()
total = end-start-(en-ts)
per_url = total / (n*6)
print("Generation (%s URLs)" % (n*6))
print("%s ms/url" % (per_url*1000))
print("%s urls/s\n" % (1.00/per_url))

# File: tickcounter/questionnaire/jsonencoder.py
# Repo: Whatever929/rice (MIT license)
import pandas as pd

class JSONEncoder(object):
# Encode using JSON, create a MultiEncoder with needed Encoder objects under the hood.
    pass

# File: pyradox/densenets/__init__.py
# Repo: p4vv37/pyradox (MIT license)
from tensorflow.keras import layers
from pyradox.modules import *
class DenselyConnectedNetwork(layers.Layer):
"""Network of Densely Connected Layers followed by Batch Normalization (optional) and Dropout (optional)
Args:
layer_config (list of int): the number of units in each dense layer
batch_normalization (bool): whether to use Batch Normalization, default: False
dropout (float): the dropout rate, default: 0
activation (keras Activation): activation function for dense Layers, default: relu
kwargs (keyword arguments): the arguments for Dense Layer
"""
def __init__(
self,
layer_config,
batch_normalization=False,
dropout=0,
activation="relu",
**kwargs
):
super().__init__()
self.layer_config = layer_config
self.batch_normalization = batch_normalization
self.dropout = dropout
self.activation = activation
self.kwargs = kwargs
def __call__(self, inputs):
x = inputs
for units in self.layer_config:
x = DenselyConnected(
units,
batch_normalization=self.batch_normalization,
dropout=self.dropout,
activation=self.activation,
**self.kwargs
)(x)
return x
class DenselyConnectedResnet(layers.Layer):
"""Network of skip connections for densely connected layer
Args:
layer_config (list of int): the number of units in each dense layer
batch_normalization (bool): whether to use Batch Normalization, default: False
dropout (float): the dropout rate, default: 0
activation (keras Activation): activation to be applied, default: relu
kwargs (keyword arguments): the arguments for Dense Layer
"""
def __init__(
self,
layer_config,
batch_normalization=True,
dropout=0,
activation="relu",
**kwargs
):
super().__init__()
self.layer_config = layer_config
self.batch_normalization = batch_normalization
self.dropout = dropout
self.activation = activation
self.kwargs = kwargs
def __call__(self, inputs):
x = inputs
for units in self.layer_config:
x = DenseSkipConnection(
units,
batch_normalization=self.batch_normalization,
dropout=self.dropout,
activation=self.activation,
**self.kwargs
)(x)
        return x
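
# Usage sketch (illustrative only, not part of the library): how the two layer
# wrappers above might be composed into a Keras model.  Assumes tensorflow is
# installed and that DenselyConnected / DenseSkipConnection are supplied by
# pyradox.modules, as imported at the top of this file.
#
#     from tensorflow import keras
#     inputs = keras.Input(shape=(64,))
#     x = DenselyConnectedNetwork([32, 16], dropout=0.1)(inputs)
#     outputs = DenselyConnectedResnet([32, 16])(x)
#     model = keras.Model(inputs, outputs)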

# File: tests/exog/random/random_exog_300_40.py
# Repo: jmabry/pyaf (BSD-3-Clause license)
import pyaf.tests.exog.test_random_exogenous as testrandexog
testrandexog.test_random_exogenous(300, 40)

# File: steinbock/classification/ilastik/__init__.py
# Repo: TambourineClub/steinbock (MIT license)
from ._ilastik import (
    list_ilastik_image_files,
    list_ilastik_crop_files,
    read_ilastik_image,
    read_ilastik_crop,
    write_ilastik_image,
    write_ilastik_crop,
    create_ilastik_image,
    try_create_ilastik_images_from_disk,
    create_ilastik_crop,
    try_create_ilastik_crops_from_disk,
    create_and_save_ilastik_project,
    run_pixel_classification,
    try_fix_ilastik_crops_from_disk,
    fix_ilastik_project_file_inplace,
)

__all__ = [
    "list_ilastik_image_files",
    "list_ilastik_crop_files",
    "read_ilastik_image",
    "read_ilastik_crop",
    "write_ilastik_image",
    "write_ilastik_crop",
    "create_ilastik_image",
    "try_create_ilastik_images_from_disk",
    "create_ilastik_crop",
    "try_create_ilastik_crops_from_disk",
    "create_and_save_ilastik_project",
    "run_pixel_classification",
    "try_fix_ilastik_crops_from_disk",
    "fix_ilastik_project_file_inplace",
]
# --- skcmdb/forms.py (encodingl/skadmin, Apache-2.0) ---
#! /usr/bin/env python
# -*- coding: utf-8 -*-
from django import forms
from django.forms.widgets import *
from .models import Host, Idc, HostGroup, Env, YwGroup, MiddleType, App, DbSource, Url, WhileIp
import sys

# Python 2 idiom: reset the interpreter's default string encoding to UTF-8
reload(sys)
sys.setdefaultencoding('utf8')
class AssetForm(forms.ModelForm):
    class Meta:
        model = Host
        exclude = ("id",)
        widgets = {
            'hostname': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;', 'placeholder': u'必填项'}),
            'ip': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;', 'placeholder': u'必填项'}),
            'other_ip': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'sa': Select(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'env': Select(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'ywgroup': Select(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'group': Select(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'middletype': Select(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'asset_no': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'asset_type': Select(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'status': Select(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'os': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'vendor': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'cpu_model': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'cpu_num': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'memory': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'disk': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'sn': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'idc': Select(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'position': TextInput(
                attrs={'class': 'form-control', 'style': 'width:530px;', 'placeholder': u'物理机写位置,虚机写宿主'}),
            'memo': Textarea(attrs={'class': 'form-control', 'style': 'width:530px;'}),
        }


class IdcForm(forms.ModelForm):
    def clean(self):
        cleaned_data = super(IdcForm, self).clean()
        value = cleaned_data.get('name')
        try:
            Idc.objects.get(name=value)
            self._errors['name'] = self.error_class(["%s的信息已经存在" % value])
        except Idc.DoesNotExist:
            pass
        return cleaned_data

    class Meta:
        model = Idc
        exclude = ("id",)
        widgets = {
            'name': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'address': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'tel': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'contact': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'contact_phone': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'ip_range': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'jigui': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'bandwidth': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
        }


class EnvForm(forms.ModelForm):
    class Meta:
        model = Env
        exclude = ("id",)
        widgets = {
            'name': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'address': Select(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'descrition': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
        }


class YwGroupForm(forms.ModelForm):
    def clean(self):
        cleaned_data = super(YwGroupForm, self).clean()
        value = cleaned_data.get('name')
        try:
            YwGroup.objects.get(name=value)
            self._errors['name'] = self.error_class(["%s的信息已经存在" % value])
        except YwGroup.DoesNotExist:
            pass
        return cleaned_data

    class Meta:
        model = YwGroup
        exclude = ("id",)
        widgets = {
            'name': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'sa': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'descrition': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
        }


class MiddleTypeForm(forms.ModelForm):
    def clean(self):
        cleaned_data = super(MiddleTypeForm, self).clean()
        value = cleaned_data.get('name')
        try:
            MiddleType.objects.get(name=value)
            self._errors['name'] = self.error_class(["%s的信息已经存在" % value])
        except MiddleType.DoesNotExist:
            pass
        return cleaned_data

    class Meta:
        model = MiddleType
        exclude = ("id",)
        widgets = {
            'name': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'descrition': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
        }


class HostGroupForm(forms.ModelForm):
    def clean(self):
        cleaned_data = super(HostGroupForm, self).clean()
        value = cleaned_data.get('name')
        try:
            HostGroup.objects.get(name=value)
            self._errors['name'] = self.error_class(["%s的信息已经存在" % value])
        except HostGroup.DoesNotExist:
            pass
        return cleaned_data

    class Meta:
        model = HostGroup
        exclude = ("id",)


class AppForm(forms.ModelForm):
    class Meta:
        model = App
        exclude = ("id",)
        widgets = {
            'name': TextInput(attrs={'class': 'form-control', 'placeholder': u'必填项'}),
            'ywgroup': Select(attrs={'class': 'form-control'}),
            'sa': Select(attrs={'class': 'form-control'}),
            'env': Select(attrs={'class': 'form-control'}),
            'belong_ip': SelectMultiple(attrs={'class': 'form-control'}),
            'kafka': SelectMultiple(attrs={'class': 'form-control'}),
            'web_port': TextInput(attrs={'class': 'form-control'}),
            'dubbo_port': TextInput(attrs={'class': 'form-control'}),
            'status': Select(attrs={'class': 'form-control'}),
            'descrition': TextInput(attrs={'class': 'form-control'}),
        }


class UrlForm(forms.ModelForm):
    class Meta:
        model = Url
        exclude = ("id",)
        widgets = {
            'name': TextInput(attrs={'class': 'form-control', 'placeholder': u'必填项'}),
            'nickname': TextInput(attrs={'class': 'form-control'}),
            'whitelist': SelectMultiple(attrs={'class': 'form-control'}),
            'mapip': TextInput(attrs={'class': 'form-control'}),
            'ywgroup': Select(attrs={'class': 'form-control'}),
            'type': Select(attrs={'class': 'form-control'}),
            'sa': Select(attrs={'class': 'form-control'}),
            'env': Select(attrs={'class': 'form-control'}),
            'belongapp': Select(attrs={'class': 'form-control'}),
            'status': Select(attrs={'class': 'form-control'}),
            'descrition': Textarea(attrs={'class': 'form-control'}),
        }


class WhileIpForm(forms.ModelForm):
    class Meta:
        model = WhileIp
        exclude = ("id",)
        widgets = {
            'ip': TextInput(attrs={'class': 'form-control', 'placeholder': u'必填项'}),
            'name': TextInput(attrs={'class': 'form-control'}),
            'descrition': Textarea(attrs={'class': 'form-control', 'style': 'height:150px'}),
        }
class DbSourceForm(forms.ModelForm):
    def clean(self):
        cleaned_data = super(DbSourceForm, self).clean()
        value = cleaned_data.get('name')
        try:
            # look up the duplicate in DbSource, the model this form edits
            DbSource.objects.get(name=value)
            self._errors['name'] = self.error_class(["%s的信息已经存在" % value])
        except DbSource.DoesNotExist:
            pass
        return cleaned_data

    class Meta:
        model = DbSource
        exclude = ("id",)
        widgets = {
            'name': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;', 'placeholder': u'主机名称'}),
            'host': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;', 'placeholder': u'主机ip'}),
            'user': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;', 'placeholder': u'用户名'}),
            'password': PasswordInput(attrs={'class': 'form-control', 'style': 'width:530px;', 'placeholder': u'密码'}),
            'port': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;', 'placeholder': u'默认3306'}),
            'db': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;', 'placeholder': u'数据库名'}),
        }
# --- haystack/nodes/file_converter/__init__.py (gabinguo/haystack, Apache-2.0) ---
from haystack.nodes.file_converter.base import BaseConverter
from haystack.nodes.file_converter.docx import DocxToTextConverter
from haystack.nodes.file_converter.image import ImageToTextConverter
from haystack.nodes.file_converter.markdown import MarkdownConverter
from haystack.nodes.file_converter.pdf import PDFToTextConverter, PDFToTextOCRConverter
from haystack.nodes.file_converter.tika import TikaConverter, TikaXHTMLParser
from haystack.nodes.file_converter.txt import TextConverter
# --- masque/playground/runners.py (dfdx/masque, MIT) ---
"""
Subroutines aimed to provide different learning scenarios
"""
import logging
import time
from sklearn.cross_validation import cross_val_score, train_test_split
from masque.utils import conv_transform
from masque import datasets
log = logging.getLogger('masque')
def pretrain_conv(conf):
    """
    Pretrain/Convolve. Steps:
    1. Fit pretrain_pipeline to pretrain_data
    2. Convolve data with learned features
    3. Run cross-validation with pipeline and transformed data
    """
    start = time.time()
    log.info('Loading pretraining data')
    pre_X, _ = conf['pretrain_data']()
    log.info('Building and fitting pretrain model')
    pretrain_model = conf['pretrain_model']
    pretrain_model.fit(pre_X)
    del pre_X
    # assuming last step is RBM
    filters = pretrain_model.steps[-1][1].components_
    filters = filters.reshape(len(filters), conf['filter_shape'][0],
                              conf['filter_shape'][1])
    log.info('Loading data')
    X, y = conf['data']()
    log.info('Transforming data (convolution)')
    Xt = conv_transform(X, filters, conf['x_shape'])
    del X
    time.sleep(20)  # cool down my poor CPU
    log.info('Building and cross-validating model')
    model = conf['model']
    scores = cross_val_score(model, Xt, y, cv=10, verbose=True)
    log.info('Accuracy: %f (+/- %f)' % (scores.mean(), scores.std() * 3))
    log.info('Time taken: %d seconds' % (time.time() - start,))
    return scores
def pretrain_classify(conf):
    """
    Pretrain/Classify. Steps:
    1. Fit pretrain_pipeline to pretrain_data
    2. Transforms data with pretrain_pipeline
    3. Runs cross-validation with pipeline and transformed data
    """
    start = time.time()
    log.info('Loading pretraining data')
    pre_X, _ = conf['pretrain_data']()
    log.info('Building and fitting pretrain model')
    pretrain_model = conf['pretrain_model']
    pretrain_model.fit(pre_X)
    del pre_X
    # assuming last step is RBM
    log.info('Loading data')
    X, y = conf['data']()
    log.info('Transforming data')
    Xt = pretrain_model.transform(X)
    del X
    time.sleep(20)  # cool down my poor CPU
    log.info('Building and cross-validating model')
    model = conf['model']
    scores = cross_val_score(model, Xt, y, cv=10, verbose=True)
    log.info('Accuracy: %f (+/- %f)' % (scores.mean(), scores.std() * 3))
    log.info('Time taken: %d seconds' % (time.time() - start,))
    return scores
# # TODO: reserved for future tests based on patches
# def pretrain_classify(conf):
# """
# Pretrain/Classify. Steps:
# 1. Fit pretrain_pipeline to pretrain_data
# 2. Transforms data with pretrain_pipeline
# 3. Runs cross-validation with pipeline and transformed data
# """
# start = time.time()
# log.info('Loading pretraining data')
# pre_X, _ = conf['pretrain_data']()
# log.info('Building and fitting pretrain model')
# pretrain_model = conf['pretrain_model']
# pretrain_model.fit(pre_X)
# del pre_X
# # assuming last step is RBM
# log.info('Loading data')
# X, y = conf['data']()
# log.info('Transforming data')
# # TODO: need to use full DBN, not just last step
# rbm = pretrain_model.steps[-1][1]
# Xt = rbm.transform(X)
# del X
# time.sleep(20) # cool down my poor CPU
# log.info('Building and cross-validating model')
# model = conf['model']
# scores = cross_val_score(model, Xt, y, cv=2, verbose=True)
# log.info('Accuracy: %f (+/- %f)' % (scores.mean(), scores.std() * 3))
# log.info('Time taken: %d seconds' % (time.time() - start,))
# return scores
def plain_classify(conf):
    """
    Simple classifier. Runs cross-validation with model and data from config
    """
    start = time.time()
    X, y = conf['data']()
    log.info('Building and cross-validating model')
    model = conf['model']
    scores = cross_val_score(model, X, y, cv=10, verbose=True)
    log.info('Accuracy: %f (+/- %f)' % (scores.mean(), scores.std() * 3))
    log.info('Time taken: %d seconds' % (time.time() - start,))
    return scores
# --- test/integration/test_check_dockerfile_healthcheck.py (dlpezbel/SDS, MIT) ---
import json
from unittest import TestCase
from project.resource import check_container_resource
DOCKER_FILE = 'dockerfile'
class Test(TestCase):
    def setUp(self):
        check_container_resource.app.testing = True
        self.app = check_container_resource.app.test_client()

    def test_post_given_dockerfile_with_healthcheck_when_check_then_OK(self):
        file = open('test/resources/Dockerfile_with_healthcheck', 'r')
        result = self.app.post('/sds/images/dockerfile/check', json={
            DOCKER_FILE: file.read()})
        data = json.loads(result.data)
        self.assertEqual(data['dockerfile_evaluation']['4_6']['evaluation'], 'OK')

    def test_post_given_dockerfile_without_healthcheck_when_check_then_KO(self):
        result = self.app.post('/sds/images/dockerfile/check', json={
            DOCKER_FILE: 'test/resources/Dockerfile_basic'})
        data = json.loads(result.data)
        self.assertEqual(data['dockerfile_evaluation']['4_6']['evaluation'], 'KO')

    def test_post_given_dockerfile_without_healthcheck_when_check_and_fix_then_KO_and_fix(self):
        file = open('test/resources/Dockerfile_basic', 'r')
        result = self.app.post('/sds/images/dockerfile/fix', json={
            DOCKER_FILE: file.read()})
        data = json.loads(result.data)
        self.assertEqual(data[0]['dockerfile_evaluation']['4_6']['evaluation'], 'KO')
        self.assertIn('HEALTHCHECK', data[1]['proposed_dockerfile'])
        self.assertEqual(data[1]['proposed_dockerfile'].count('HEALTHCHECK'), 1)

    def test_post_given_dockerfile_with_healthcheck_when_check_and_fix_then_OK_and_no_fix(self):
        file = open('test/resources/Dockerfile_with_healthcheck', 'r')
        result = self.app.post('/sds/images/dockerfile/fix', json={
            DOCKER_FILE: file.read()})
        data = json.loads(result.data)
        self.assertEqual(data[0]['dockerfile_evaluation']['4_6']['evaluation'], 'OK')
        self.assertIn('HEALTHCHECK', data[1]['proposed_dockerfile'])
        self.assertEqual(data[1]['proposed_dockerfile'].count('HEALTHCHECK'), 1)
# --- test-registry/install2/f.py (NathanTP/open-lambda, Apache-2.0) ---
import requests
import urllib3
# ol-install: requests,certifi,chardet,idna,urllib3
def f(event):
    return 'imported'
# --- backend/tests/projectx/test_asgi.py (mmcardle/projectx, MIT) ---
from projectx.wsgi import application
def test_wsgi():
    assert application is not None
# --- Python/pixel_cost_cuda.py (eshibusawa/PSL-Python, BSD-2-Clause) ---
# This file is part of PSL-Python.
# Copyright (c) 2021, Eijiro Shibusawa <phd_kimberlite@yahoo.co.jp>
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import torch
import kornia.filters
class absolute_difference():
    def __init__(self, img_ref, kernel_size=(7, 7), box_filter_enabled=True, use_batch=True):
        self.img_ref = img_ref
        self.use_batch = use_batch
        self.kernel_size = kernel_size
        self.box_filter_enabled = box_filter_enabled
        self.precision = torch.float16
        self.box_filter = (lambda x: kornia.filters.box_blur(x, self.kernel_size, border_type='replicate', normalized=False))

    def get_cost_volume(self, warped_images):
        if self.use_batch:
            ad = torch.abs(self.img_ref.to(self.precision) - warped_images)
            if self.box_filter_enabled:
                CV = torch.mean(self.box_filter(ad), dim=0)
            else:
                CV = torch.mean(ad, dim=0)
        else:
            sad = None
            scale = 1.0 / warped_images.shape[0]
            for warped_image in warped_images:
                ad = torch.abs(self.img_ref.to(self.precision) - warped_image)
                if self.box_filter_enabled:
                    ad = self.box_filter(ad[None, :])[0, :]
                if sad is None:
                    sad = ad
                else:
                    sad += ad
            CV = sad * scale
            del sad
        del ad
        torch.cuda.empty_cache()
        return CV
class zero_mean_absolute_difference():
    def __init__(self, img_ref, kernel_size=(7, 7), box_filter_enabled=True, use_batch=True):
        self.use_batch = use_batch
        self.kernel_size = kernel_size
        self.box_filter_enabled = box_filter_enabled
        self.precision = torch.float16
        self.img_ref = img_ref.to(self.precision)
        self.box_filter = (lambda x, f: kornia.filters.box_blur(x, self.kernel_size, border_type='replicate', normalized=f))
        img_ref2 = img_ref[None, None, :]
        self.img_box = img_ref2 - self.box_filter(img_ref2.to(self.precision), True)

    def get_cost_volume(self, warped_images):
        if self.use_batch:
            warp_box = warped_images - self.box_filter(warped_images.to(self.precision), True)
            ad = torch.abs(self.img_box - warp_box)
            if self.box_filter_enabled:
                # box_filter takes an explicit "normalized" flag here, unlike absolute_difference
                CV = torch.mean(self.box_filter(ad, False), dim=0)
            else:
                CV = torch.mean(ad, dim=0)
        else:
            sad = None
            scale = 1.0 / warped_images.shape[0]
            for warped_image in warped_images:
                w = warped_image[None, :]
                warp_box = w - self.box_filter(w.to(self.precision), True)
                ad = torch.abs(self.img_box - warp_box)
                if self.box_filter_enabled:
                    ad = self.box_filter(ad, False)[0, :]
                if sad is None:
                    sad = ad
                else:
                    sad += ad
            CV = sad * scale
            del sad
        del ad, warp_box
        torch.cuda.empty_cache()
        return CV
class zero_mean_normalized_cross_correlation():
    def __init__(self, img_ref, kernel_size=(7, 7), use_batch=True):
        self.kernel_size = kernel_size
        self.use_batch = use_batch
        self.eps = 1E-7
        self.precision = torch.float32
        self.box_filter = (lambda x, f: kornia.filters.box_blur(x, self.kernel_size, border_type='replicate', normalized=f))
        img_ref2 = img_ref[None, None, :]
        self.img_box = img_ref2 - self.box_filter(img_ref2.to(self.precision), True)
        self.img_sqr_box = self.box_filter(self.img_box * self.img_box, False)

    def get_cost_volume(self, warped_images):
        if self.use_batch:
            warp_box = warped_images - self.box_filter(warped_images.to(self.precision), True)
            warp_sqr_box = self.box_filter(warp_box * warp_box, False)
            cross = warp_box * self.img_box
            nume = self.box_filter(cross, False)
            denom = warp_sqr_box * self.img_sqr_box
            CV = (1 - torch.mean(nume / (torch.sqrt(denom) + self.eps), dim=0)) / 2
        else:
            zncc = None
            scale = 1.0 / warped_images.shape[0]
            for warped_image in warped_images:
                w = warped_image[None, :]
                warp_box = w - self.box_filter(w.to(self.precision), True)
                warp_sqr_box = self.box_filter(warp_box * warp_box, False)
                cross = warp_box * self.img_box
                nume = self.box_filter(cross, False)
                denom = warp_sqr_box * self.img_sqr_box
                if zncc is None:
                    zncc = (nume / (torch.sqrt(denom) + self.eps))
                else:
                    zncc += (nume / (torch.sqrt(denom) + self.eps))
            CV = (1 - (zncc[0] * scale)) / 2
            del zncc
        del warp_box, warp_sqr_box, cross, nume, denom
        torch.cuda.empty_cache()
        return CV
class normalized_cross_correlation():
    def __init__(self, img_ref, kernel_size=(7, 7), use_batch=True):
        self.kernel_size = kernel_size
        self.use_batch = use_batch
        self.eps = 1E-7
        self.precision = torch.float32
        self.box_filter = (lambda x: kornia.filters.box_blur(x, self.kernel_size, border_type='replicate', normalized=False))
        self.img_ref = img_ref.to(self.precision)[None, None, :]
        self.img_sqr_box = self.box_filter(self.img_ref * self.img_ref)

    def get_cost_volume(self, warped_images):
        if self.use_batch:
            w = warped_images.to(self.precision)
            warp_sqr_box = self.box_filter(w * w)
            cross = w * self.img_ref
            nume = self.box_filter(cross)
            denom = warp_sqr_box * self.img_sqr_box
            CV = (1 - torch.mean(nume / (torch.sqrt(denom) + self.eps), dim=0)) / 2
        else:
            zncc = None
            scale = 1.0 / warped_images.shape[0]
            for warped_image in warped_images:
                w = warped_image.to(self.precision)[None, :]
                warp_sqr_box = self.box_filter(w * w)
                cross = w * self.img_ref
                nume = self.box_filter(cross)
                denom = warp_sqr_box * self.img_sqr_box
                if zncc is None:
                    zncc = (nume / (torch.sqrt(denom) + self.eps))
                else:
                    zncc += (nume / (torch.sqrt(denom) + self.eps))
            CV = (1 - (zncc[0] * scale)) / 2
            del zncc
        del w, warp_sqr_box, cross, nume, denom
        torch.cuda.empty_cache()
        return CV
48bec2cb3bb9a751f374fa44811e860f248814d8 | 159 | py | Python | d2ix/util/__init__.py | tum-ewk/d2ix_public | dffcb474f51ccdc03a339e306b61b88c224a332d | [
"Apache-2.0"
] | 2 | 2019-12-16T20:48:10.000Z | 2020-11-24T03:20:36.000Z | d2ix/util/__init__.py | tum-ewk/d2ix_public | dffcb474f51ccdc03a339e306b61b88c224a332d | [
"Apache-2.0"
] | 4 | 2019-04-15T08:03:23.000Z | 2019-04-15T08:05:17.000Z | d2ix/util/__init__.py | tum-ewk/d2ix_public | dffcb474f51ccdc03a339e306b61b88c224a332d | [
"Apache-2.0"
] | 8 | 2019-03-13T06:51:22.000Z | 2020-08-31T11:04:56.000Z | from d2ix.util.tools import YAMLd2ix, model_data_yml, setup_logging, split_columns, df_to_nested_dict
from d2ix.util.data_sanity_tests import check_input_data
# --- netintro.py (OjanRN/pythonscripts, MIT) ---
import os
import time
import sys
introtext = """
______ ______ __
/ \ / \ / |
/$$$$$$ | __ ______ _______ /$$$$$$ | ______ ____$$ | ______ _______
$$ | $$ | / | / \ / \ $$ | $$/ / \ / $$ | / \ / |
$$ | $$ | $$/ $$$$$$ |$$$$$$$ | $$ | /$$$$$$ |/$$$$$$$ |/$$$$$$ |/$$$$$$$/
$$ | $$ | / | / $$ |$$ | $$ | $$ | __ $$ | $$ |$$ | $$ |$$ $$ |$$ \
$$ \__$$ | $$ |/$$$$$$$ |$$ | $$ | $$ \__/ |$$ \__$$ |$$ \__$$ |$$$$$$$$/ $$$$$$ |
$$ $$/ $$ |$$ $$ |$$ | $$ | $$ $$/ $$ $$/ $$ $$ |$$ |/ $$/
$$$$$$/__ $$ | $$$$$$$/ $$/ $$/ $$$$$$/ $$$$$$/ $$$$$$$/ $$$$$$$/ $$$$$$$/
/ \__$$ |
$$ $$/
$$$$$$/
"""
desc1 = """
____ _ _ _
/ ___|| |_ _ _ __| | ___ _ __ | |_
\___ \| __| | | |/ _` |/ _ \ '_ \| __|
___) | |_| |_| | (_| | __/ | | | |_
|____/ \__|\__,_|\__,_|\___|_| |_|\__|
"""
desc2 = """
____
| _ \ _ __ ___ __ _ _ __ __ _ _ __ ___ _ __ ___ ___ _ __
| |_) | '__/ _ \ / _` | '__/ _` | '_ ` _ \| '_ ` _ \ / _ \ '__|
| __/| | | (_) | (_| | | | (_| | | | | | | | | | | | __/ |
|_| |_| \___/ \__, |_| \__,_|_| |_| |_|_| |_| |_|\___|_|
|___/
"""
desc3 = """
____ __ __ _ _ _ _
/ ___|___ / _|/ _| ___ ___ __ _ __| | __| (_) ___| |_
| | / _ \| |_| |_ / _ \/ _ \ / _` |/ _` |/ _` | |/ __| __|
| |__| (_) | _| _| __/ __/ | (_| | (_| | (_| | | (__| |_
\____\___/|_| |_| \___|\___| \__,_|\__,_|\__,_|_|\___|\__|
"""
def typewriter(text):
    for char in text:
        sys.stdout.write(char)
        sys.stdout.flush()
        time.sleep(0.0001)

os.system('cls')
typewriter(introtext)
time.sleep(0.5)
os.system('cls')
typewriter(desc1)
time.sleep(0.5)
os.system('cls')
typewriter(desc2)
time.sleep(0.5)
os.system('cls')
typewriter(desc3)
time.sleep(3)
os.system('cls') | 37.253731 | 93 | 0.23758 | 66 | 2,496 | 4.363636 | 0.363636 | 0.166667 | 0.190972 | 0.291667 | 0.333333 | 0.333333 | 0.333333 | 0.333333 | 0 | 0 | 0 | 0.014458 | 0.501202 | 2,496 | 67 | 94 | 37.253731 | 0.216867 | 0 | 0 | 0.210526 | 0 | 0.22807 | 0.821786 | 0.020825 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017544 | false | 0 | 0.052632 | 0 | 0.070175 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d2cc9593a3fa3adadf7524e0834a110bd592d0bd | 4,569 | py | Python | coba/tests/test_benchmarks_formats.py | zmhammedi/coba | 90cdeb7b781e1ef498cd5a989bd3601b85a2fad0 | [
"BSD-3-Clause"
] | null | null | null | coba/tests/test_benchmarks_formats.py | zmhammedi/coba | 90cdeb7b781e1ef498cd5a989bd3601b85a2fad0 | [
"BSD-3-Clause"
] | null | null | null | coba/tests/test_benchmarks_formats.py | zmhammedi/coba | 90cdeb7b781e1ef498cd5a989bd3601b85a2fad0 | [
"BSD-3-Clause"
] | null | null | null | import json
import unittest
from coba.benchmarks.formats import BenchmarkFileFmtV2
class BenchmarkFileFmtV2_Tests(unittest.TestCase):
def test_one_simulation(self):
json_txt = """{
"simulations" : [
{ "OpenmlSimulation": 150 }
]
}"""
benchmark = BenchmarkFileFmtV2().filter(json.loads(json_txt))
self.assertEqual('[{"OpenmlSimulation":150}]', str(benchmark._simulations))
def test_raw_simulation(self):
json_txt = """{
"simulations" : { "OpenmlSimulation": 150 }
}"""
benchmark = BenchmarkFileFmtV2().filter(json.loads(json_txt))
self.assertEqual('[{"OpenmlSimulation":150}]', str(benchmark._simulations))
def test_one_simulation_one_filter(self):
json_txt = """{
"simulations" : [
[{ "OpenmlSimulation": 150 }, {"Take":10} ]
]
}"""
benchmark = BenchmarkFileFmtV2().filter(json.loads(json_txt))
self.assertEqual('[{"OpenmlSimulation":150},{"Take":10}]', str(benchmark._simulations))
def test_one_simulation_two_filters(self):
json_txt = """{
"simulations" : [
[{ "OpenmlSimulation": 150 }, {"Take":[10,20], "method":"foreach"} ]
]
}"""
benchmark = BenchmarkFileFmtV2().filter(json.loads(json_txt))
self.assertEqual('[{"OpenmlSimulation":150},{"Take":10}, {"OpenmlSimulation":150},{"Take":20}]', str(benchmark._simulations))
def test_two_simulations_two_filters(self):
json_txt = """{
"simulations" : [
[{ "OpenmlSimulation": [150,151], "method":"foreach" }, { "Take":[10,20], "method":"foreach" }]
]
}"""
benchmark = BenchmarkFileFmtV2().filter(json.loads(json_txt))
self.assertEqual(4, len(benchmark._simulations))
self.assertEqual('{"OpenmlSimulation":150},{"Take":10}', str(benchmark._simulations[0]))
self.assertEqual('{"OpenmlSimulation":150},{"Take":20}', str(benchmark._simulations[1]))
self.assertEqual('{"OpenmlSimulation":151},{"Take":10}', str(benchmark._simulations[2]))
self.assertEqual('{"OpenmlSimulation":151},{"Take":20}', str(benchmark._simulations[3]))
def test_two_singular_simulations(self):
json_txt = """{
"simulations" : [
{ "OpenmlSimulation": 150},
{ "OpenmlSimulation": 151}
]
}"""
benchmark = BenchmarkFileFmtV2().filter(json.loads(json_txt))
self.assertEqual('[{"OpenmlSimulation":150}, {"OpenmlSimulation":151}]', str(benchmark._simulations))
def test_one_foreach_simulation(self):
json_txt = """{
"simulations" : [
{"OpenmlSimulation": [150,151], "method":"foreach"}
]
}"""
benchmark = BenchmarkFileFmtV2().filter(json.loads(json_txt))
self.assertEqual('[{"OpenmlSimulation":150}, {"OpenmlSimulation":151}]', str(benchmark._simulations))
def test_one_variable(self):
json_txt = """{
"variables": {"$openml_sims": {"OpenmlSimulation": [150,151], "method":"foreach"} },
"simulations" : [ "$openml_sims" ]
}"""
benchmark = BenchmarkFileFmtV2().filter(json.loads(json_txt))
self.assertEqual('[{"OpenmlSimulation":150}, {"OpenmlSimulation":151}]', str(benchmark._simulations))
def test_two_variables(self):
json_txt = """{
"variables": {
"$openmls": {"OpenmlSimulation": [150,151], "method":"foreach"},
"$takes" : {"Take":[10,20], "method":"foreach"}
},
"simulations" : [
["$openmls", "$takes"],
"$openmls"
]
}"""
benchmark = BenchmarkFileFmtV2().filter(json.loads(json_txt))
self.assertEqual(6, len(benchmark._simulations))
self.assertEqual('{"OpenmlSimulation":150},{"Take":10}', str(benchmark._simulations[0]))
self.assertEqual('{"OpenmlSimulation":150},{"Take":20}', str(benchmark._simulations[1]))
self.assertEqual('{"OpenmlSimulation":151},{"Take":10}', str(benchmark._simulations[2]))
self.assertEqual('{"OpenmlSimulation":151},{"Take":20}', str(benchmark._simulations[3]))
self.assertEqual('{"OpenmlSimulation":150}' , str(benchmark._simulations[4]))
self.assertEqual('{"OpenmlSimulation":151}' , str(benchmark._simulations[5]))
if __name__ == '__main__':
unittest.main() | 38.075 | 133 | 0.583497 | 394 | 4,569 | 6.57868 | 0.13198 | 0.161265 | 0.203318 | 0.157407 | 0.854938 | 0.803627 | 0.782793 | 0.725694 | 0.684028 | 0.611497 | 0 | 0.047235 | 0.240096 | 4,569 | 120 | 134 | 38.075 | 0.699309 | 0 | 0 | 0.516484 | 0 | 0.010989 | 0.434136 | 0.143107 | 0 | 0 | 0 | 0 | 0.208791 | 1 | 0.098901 | false | 0 | 0.032967 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
824ea23ba4d979a254957571c01c14d5487e1a74 | 88 | py | Python | Assignments/Inheritance/Exercise/5_restaurant/project/beverage/tea.py | KaloyankerR/python-oop-repository | 7cd565b0b173c4a02e7b2dc22bfeb7808d294f89 | [
"MIT"
] | null | null | null | Assignments/Inheritance/Exercise/5_restaurant/project/beverage/tea.py | KaloyankerR/python-oop-repository | 7cd565b0b173c4a02e7b2dc22bfeb7808d294f89 | [
"MIT"
] | null | null | null | Assignments/Inheritance/Exercise/5_restaurant/project/beverage/tea.py | KaloyankerR/python-oop-repository | 7cd565b0b173c4a02e7b2dc22bfeb7808d294f89 | [
"MIT"
] | null | null | null | from project.beverage.hot_beverage import HotBeverage
class Tea(HotBeverage):
pass | 17.6 | 53 | 0.806818 | 11 | 88 | 6.363636 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 88 | 5 | 54 | 17.6 | 0.921053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
829650483463cba1927f64fec3bcfd5cf1e5ddc1 | 20 | py | Python | __init__.py | fbcosentino/pydoku | 29bda67f0bb8e9dd99e0b3849315819900363fc5 | [
"MIT"
] | null | null | null | __init__.py | fbcosentino/pydoku | 29bda67f0bb8e9dd99e0b3849315819900363fc5 | [
"MIT"
] | null | null | null | __init__.py | fbcosentino/pydoku | 29bda67f0bb8e9dd99e0b3849315819900363fc5 | [
"MIT"
] | null | null | null | from pydoku import * | 20 | 20 | 0.8 | 3 | 20 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
82affcab9bbb2e0f7d878e54c2ceb2eabce2ddb5 | 30 | py | Python | postr/app.py | dbgrigsby/Postr | c374648134123f857babb65aff161a4c3c470502 | [
"MIT"
] | 3 | 2018-10-09T17:02:05.000Z | 2022-03-21T08:58:49.000Z | postr/app.py | dbgrigsby/Postr | c374648134123f857babb65aff161a4c3c470502 | [
"MIT"
] | 11 | 2018-09-26T05:33:30.000Z | 2019-04-06T04:06:51.000Z | postr/app.py | dbgrigsby/Postr | c374648134123f857babb65aff161a4c3c470502 | [
"MIT"
] | 3 | 2018-12-20T18:35:25.000Z | 2022-03-21T08:58:54.000Z | print('Wow, this is an app!')
| 15 | 29 | 0.633333 | 6 | 30 | 3.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 30 | 1 | 30 | 30 | 0.76 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
82c045022e5526c2156cc79475ed630aaa109b8f | 60 | py | Python | 32.operacoes_com_tuplas/3.2multiplicacao_id_tuplas.py | robinson-1985/python-zero-dnc | df510d67e453611fcd320df1397cdb9ca47fecb8 | [
"MIT"
] | null | null | null | 32.operacoes_com_tuplas/3.2multiplicacao_id_tuplas.py | robinson-1985/python-zero-dnc | df510d67e453611fcd320df1397cdb9ca47fecb8 | [
"MIT"
] | null | null | null | 32.operacoes_com_tuplas/3.2multiplicacao_id_tuplas.py | robinson-1985/python-zero-dnc | df510d67e453611fcd320df1397cdb9ca47fecb8 | [
"MIT"
] | null | null | null | tupla_3 = (10,203,40,7)*2
print(tupla_3)
print(id(tupla_3))
| 15 | 25 | 0.7 | 14 | 60 | 2.785714 | 0.642857 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.218182 | 0.083333 | 60 | 3 | 26 | 20 | 0.490909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
82d455d744fc104ab44ab95fc429c63268c04bf9 | 41 | py | Python | dmb/ops/spn/__init__.py | jiaw-z/DenseMatchingBenchmark | 177c56ca1952f54d28e6073afa2c16981113a2af | [
"MIT"
] | 160 | 2019-11-16T13:59:21.000Z | 2022-03-28T07:52:59.000Z | dmb/ops/spn/__init__.py | jiaw-z/DenseMatchingBenchmark | 177c56ca1952f54d28e6073afa2c16981113a2af | [
"MIT"
] | 22 | 2019-11-22T02:14:18.000Z | 2022-01-24T10:16:14.000Z | dmb/ops/spn/__init__.py | jiaw-z/DenseMatchingBenchmark | 177c56ca1952f54d28e6073afa2c16981113a2af | [
"MIT"
] | 38 | 2019-12-27T14:01:01.000Z | 2022-03-12T11:40:11.000Z | from .modules import GateRecurrent2dnoind | 41 | 41 | 0.902439 | 4 | 41 | 9.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 0.073171 | 41 | 1 | 41 | 41 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
82e7710cbefe5f693c2dae13494cf03646670339 | 23 | py | Python | bci_framework/default_extensions/Visuospatial_working_memory_Change_detection_task/__init__.py | UN-GCPDS/bci-framework- | b51f530967561738dc34752acf6add20cbb02283 | [
"BSD-2-Clause"
] | null | null | null | bci_framework/default_extensions/Visuospatial_working_memory_Change_detection_task/__init__.py | UN-GCPDS/bci-framework- | b51f530967561738dc34752acf6add20cbb02283 | [
"BSD-2-Clause"
] | null | null | null | bci_framework/default_extensions/Visuospatial_working_memory_Change_detection_task/__init__.py | UN-GCPDS/bci-framework- | b51f530967561738dc34752acf6add20cbb02283 | [
"BSD-2-Clause"
] | null | null | null | """
===
VIs
===
"""
| 2.555556 | 3 | 0.130435 | 1 | 23 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.347826 | 23 | 8 | 4 | 2.875 | 0.2 | 0.478261 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7d79d76c7f334597bf2e5ac409462c581f1a89d4 | 23 | py | Python | cdshealpix/nested/__init__.py | cds-astro/cds-healpix-python | bc02f2e7ccad1d55f3a5e4caac689d8cc29eadeb | [
"BSD-3-Clause"
] | 3 | 2019-01-30T13:34:51.000Z | 2020-11-11T07:31:12.000Z | cdshealpix/nested/__init__.py | cds-astro/cds-healpix-python | bc02f2e7ccad1d55f3a5e4caac689d8cc29eadeb | [
"BSD-3-Clause"
] | 9 | 2019-07-04T08:09:47.000Z | 2021-03-10T16:51:18.000Z | cdshealpix/nested/__init__.py | cds-astro/cds-healpix-python | bc02f2e7ccad1d55f3a5e4caac689d8cc29eadeb | [
"BSD-3-Clause"
] | 1 | 2019-07-04T11:23:20.000Z | 2019-07-04T11:23:20.000Z | from .healpix import *
| 11.5 | 22 | 0.73913 | 3 | 23 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7d95b26c089eb32c94c57a5c2fab7baa4f0ab253 | 7 | py | Python | hello.py | Esbax/testing_stuff | b0e738fcfbe1cd75a6b8485d134a6fb83d598ed0 | [
"MIT"
] | 6 | 2021-02-02T10:08:02.000Z | 2022-03-24T08:10:44.000Z | hello.py | Esbax/testing_stuff | b0e738fcfbe1cd75a6b8485d134a6fb83d598ed0 | [
"MIT"
] | 1 | 2021-02-27T21:55:26.000Z | 2021-02-28T12:39:29.000Z | hello.py | Esbax/testing_stuff | b0e738fcfbe1cd75a6b8485d134a6fb83d598ed0 | [
"MIT"
] | 6 | 2020-02-24T13:51:32.000Z | 2021-06-05T19:02:05.000Z | x = 15
| 3.5 | 6 | 0.428571 | 2 | 7 | 1.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0.428571 | 7 | 1 | 7 | 7 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7db4e3133936b7831272abee0e6442c409dbae33 | 263 | py | Python | lagom/core/policies/__init__.py | lkylych/lagom | 64777be7f09136072a671c444b5b3fbbcb1b2f18 | [
"MIT"
] | null | null | null | lagom/core/policies/__init__.py | lkylych/lagom | 64777be7f09136072a671c444b5b3fbbcb1b2f18 | [
"MIT"
] | null | null | null | lagom/core/policies/__init__.py | lkylych/lagom | 64777be7f09136072a671c444b5b3fbbcb1b2f18 | [
"MIT"
] | null | null | null | from lagom.core.policies.base_policy import BasePolicy
from lagom.core.policies.random_policy import RandomPolicy
from lagom.core.policies.base_categorical_policy import BaseCategoricalPolicy
from lagom.core.policies.base_gaussian_policy import BaseGaussianPolicy | 65.75 | 77 | 0.897338 | 34 | 263 | 6.764706 | 0.411765 | 0.156522 | 0.226087 | 0.365217 | 0.326087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057034 | 263 | 4 | 78 | 65.75 | 0.927419 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7db6ad08485a10a9511ba4ba3c9b7c3779d08477 | 4,983 | py | Python | demos/proxy/app/account.py | MyPCH/MyPCH-SC | 4f170934fa9fefaea1660a9a70c48e2234ba0b4f | [
"MIT"
] | null | null | null | demos/proxy/app/account.py | MyPCH/MyPCH-SC | 4f170934fa9fefaea1660a9a70c48e2234ba0b4f | [
"MIT"
] | null | null | null | demos/proxy/app/account.py | MyPCH/MyPCH-SC | 4f170934fa9fefaea1660a9a70c48e2234ba0b4f | [
"MIT"
] | null | null | null | accounts = {'mypchtryinghard@gmail.com': {'username': 'mypchpwd2', 'password': '12345678', 'usertype': 'patient',
'tpgroup': '1', 'shared_users': '[]', 'url': 'localhost:9107',
'fake_response': 'You shall have access to watermarked data from ...',
                                          'fake_post': 'You are a RW user and shall be able to upload valid data ...'},
'mypchpwd1@gmail.com': {'username': 'mypchpwd1', 'password': '12345678', 'usertype': 'patient',
'clinic': '1', 'shared_users': '[mypchpwd2@gmail.com]', 'url': 'localhost:9107',
'fake_response': 'You have full access to watermarked data from own account. You shall also have access to watermarked data from mypchpwd2@gmail.com',
                                    'fake_post': 'You are a RW user and are allowed to send POST requests ...'},
'mypchpwd2@gmail.com': {'username': 'mypchpwd2', 'password': '12345678', 'usertype': 'patient',
'clinic': '1', 'shared_users': '[mypchpwd3@gmail.com]', 'url': 'localhost:9107',
'fake_response': 'You have full access to data from own account. You shall also have access to watermarked data from mypchpwd3@gmail.com',
                                    'fake_post': 'You are a RW user and are allowed to send POST requests ...'},
'mypchpwd3@gmail.com': {'username': 'mypchpwd3', 'password': '12345678', 'usertype': 'patient',
'clinic': '2', 'shared_users': '[]', 'url': 'localhost:9107',
'fake_response': 'You have full access to data from own account. You shall not be able to access data from others',
                                    'fake_post': 'You are a RW user and are allowed to send POST requests ...'},
'mypchc1d1@gmail.com': {'username': 'mypchc1d1', 'password': '12345678', 'usertype': 'advisor',
'clinic': '1', 'shared_users': '[]', 'url': 'localhost:9107',
'fake_response': 'You have full access to watermarked data from all patients in the specific clinic',
                                    'fake_post': 'You are a RO user and are NOT allowed to send POST requests ...'},
'mypchc1d2@gmail.com': {'username': 'mypchc1d2', 'password': '12345678', 'usertype': 'advisor',
'clinic': '1', 'shared_users': '[]', 'url': 'localhost:9107',
'fake_response': 'You have full access to watermarked data from all patients in the specific clinic. You also have access to aggregated data from a patient in another clinic',
                                    'fake_post': 'You are a RO user and are NOT allowed to send POST requests ...'},
'mypchr1u1@gmail.com': {'username': 'mypchr1u1', 'password': '12345678', 'usertype': 'research',
'clinic': '', 'shared_users': '[]', 'url': 'localhost:9107',
                                    'fake_response': 'You have anonymous and partial access to data from some patients',
                                    'fake_post': 'You are a RO user and are NOT allowed to send POST requests ...'},
'mypchi1u1@gmail.com': {'username': 'mypchi1u1', 'password': '12345678', 'usertype': '3party',
'clinic': '', 'shared_users': '[]', 'url': 'localhost:9107',
'fake_response': 'You have partial (time and data types) access to watermarked data from some patients',
                                    'fake_post': 'You are a RO user and are NOT allowed to send POST requests ...'},
'mypchi1u2@gmail.com': {'username': 'mypchi1u2', 'password': '12345678', 'usertype': '3party',
'clinic': '', 'shared_users': '[]', 'url': 'localhost:9107',
'fake_response': 'You have partial (time and data types) access to watermarked data from some patients',
                                    'fake_post': 'You are a RO user and are NOT allowed to send POST requests ...'},
'mypchi2u1@gmail.com': {'username': 'mypchi2u1', 'password': '12345678', 'usertype': '3party',
'clinic': '', 'shared_users': '[]', 'url': 'localhost:9107',
'fake_response': 'You have partial (time and data types) access to watermarked data from some patients',
                                    'fake_post': 'You are a RO user and are NOT allowed to send POST requests ...'}}
| 121.536585 | 217 | 0.499298 | 493 | 4,983 | 4.985801 | 0.148073 | 0.045566 | 0.065094 | 0.081367 | 0.785598 | 0.770545 | 0.757933 | 0.74939 | 0.676566 | 0.659072 | 0 | 0.052547 | 0.37367 | 4,983 | 40 | 218 | 124.575 | 0.735021 | 0 | 0 | 0.425 | 0 | 0.075 | 0.57355 | 0.013446 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
7dbb8ba8b6b2672cd236cd4158fc014d82e8ee69 | 195 | py | Python | mindhome_alpha/erpnext/patches/v6_12/set_overdue_tasks.py | Mindhome/field_service | 3aea428815147903eb9af1d0c1b4b9fc7faed057 | [
"MIT"
] | 1 | 2021-04-29T14:55:29.000Z | 2021-04-29T14:55:29.000Z | mindhome_alpha/erpnext/patches/v6_12/set_overdue_tasks.py | Mindhome/field_service | 3aea428815147903eb9af1d0c1b4b9fc7faed057 | [
"MIT"
] | null | null | null | mindhome_alpha/erpnext/patches/v6_12/set_overdue_tasks.py | Mindhome/field_service | 3aea428815147903eb9af1d0c1b4b9fc7faed057 | [
"MIT"
] | 1 | 2021-04-29T14:39:01.000Z | 2021-04-29T14:39:01.000Z | from __future__ import unicode_literals
import frappe
def execute():
frappe.reload_doctype("Task")
from erpnext.projects.doctype.task.task import set_tasks_as_overdue
set_tasks_as_overdue()
| 21.666667 | 68 | 0.830769 | 28 | 195 | 5.357143 | 0.607143 | 0.146667 | 0.133333 | 0.226667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097436 | 195 | 8 | 69 | 24.375 | 0.852273 | 0 | 0 | 0 | 0 | 0 | 0.020513 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0 | 0.5 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
81d6270d82551c253af5b04ea88ec3e1a6a847a6 | 87 | py | Python | fault_tolerant_flight_control_drl/envs/outer_loop/__init__.py | kdally/fault-tolerant-flight-control-drl | 800a1c9319b44ab2b1d17f6e19266c2392d6e57b | [
"MIT"
] | 8 | 2021-02-27T09:49:57.000Z | 2022-03-21T16:28:08.000Z | fault_tolerant_flight_control_drl/envs/outer_loop/__init__.py | kdally/fault-tolerant-flight-control-drl | 800a1c9319b44ab2b1d17f6e19266c2392d6e57b | [
"MIT"
] | null | null | null | fault_tolerant_flight_control_drl/envs/outer_loop/__init__.py | kdally/fault-tolerant-flight-control-drl | 800a1c9319b44ab2b1d17f6e19266c2392d6e57b | [
"MIT"
] | 2 | 2021-03-04T07:24:35.000Z | 2021-11-17T04:21:08.000Z | from fault_tolerant_flight_control_drl.envs.outer_loop.outer_loop import AltController
| 43.5 | 86 | 0.91954 | 13 | 87 | 5.692308 | 0.846154 | 0.243243 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045977 | 87 | 1 | 87 | 87 | 0.891566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
81df8d9b8d58119469697c7da685df3de4350e71 | 7,716 | py | Python | tastefulpy/authorization.py | mjschultz/django-tastefulpy | c81c7b32da16f9b181589a0311d9819718fdc960 | [
"BSD-3-Clause"
] | null | null | null | tastefulpy/authorization.py | mjschultz/django-tastefulpy | c81c7b32da16f9b181589a0311d9819718fdc960 | [
"BSD-3-Clause"
] | null | null | null | tastefulpy/authorization.py | mjschultz/django-tastefulpy | c81c7b32da16f9b181589a0311d9819718fdc960 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import unicode_literals
from tastefulpy.exceptions import TastefulpyError, Unauthorized
from tastefulpy.compat import get_module_name
class Authorization(object):
"""
A base class that provides no permissions checking.
"""
def __get__(self, instance, owner):
"""
Makes ``Authorization`` a descriptor of ``ResourceOptions`` and creates
a reference to the ``ResourceOptions`` object that may be used by
methods of ``Authorization``.
"""
self.resource_meta = instance
return self
def apply_limits(self, request, object_list):
"""
Deprecated.
FIXME: REMOVE BEFORE 1.0
"""
raise TastefulpyError("Authorization classes no longer support `apply_limits`. Please update to using `read_list`.")
def read_list(self, object_list, bundle):
"""
Returns a list of all the objects a user is allowed to read.
Should return an empty list if none are allowed.
Returns the entire list by default.
"""
return object_list
def read_detail(self, object_list, bundle):
"""
Returns either ``True`` if the user is allowed to read the object in
question or throw ``Unauthorized`` if they are not.
Returns ``True`` by default.
"""
return True
def create_list(self, object_list, bundle):
"""
Unimplemented, as Tastefulpy never creates entire new lists, but
present for consistency & possible extension.
"""
raise NotImplementedError("Tastefulpy has no way to determine if all objects should be allowed to be created.")
def create_detail(self, object_list, bundle):
"""
Returns either ``True`` if the user is allowed to create the object in
question or throw ``Unauthorized`` if they are not.
Returns ``True`` by default.
"""
return True
def update_list(self, object_list, bundle):
"""
Returns a list of all the objects a user is allowed to update.
Should return an empty list if none are allowed.
Returns the entire list by default.
"""
return object_list
def update_detail(self, object_list, bundle):
"""
Returns either ``True`` if the user is allowed to update the object in
question or throw ``Unauthorized`` if they are not.
Returns ``True`` by default.
"""
return True
def delete_list(self, object_list, bundle):
"""
Returns a list of all the objects a user is allowed to delete.
Should return an empty list if none are allowed.
Returns the entire list by default.
"""
return object_list
def delete_detail(self, object_list, bundle):
"""
Returns either ``True`` if the user is allowed to delete the object in
question or throw ``Unauthorized`` if they are not.
Returns ``True`` by default.
"""
return True
class ReadOnlyAuthorization(Authorization):
"""
    Default Authorization class for ``Resource`` objects.
Only allows ``GET`` requests.
"""
def read_list(self, object_list, bundle):
return object_list
def read_detail(self, object_list, bundle):
return True
def create_list(self, object_list, bundle):
return []
def create_detail(self, object_list, bundle):
raise Unauthorized("You are not allowed to access that resource.")
def update_list(self, object_list, bundle):
return []
def update_detail(self, object_list, bundle):
raise Unauthorized("You are not allowed to access that resource.")
def delete_list(self, object_list, bundle):
return []
def delete_detail(self, object_list, bundle):
raise Unauthorized("You are not allowed to access that resource.")
class DjangoAuthorization(Authorization):
"""
Uses permission checking from ``django.contrib.auth`` to map
``POST / PUT / DELETE / PATCH`` to their equivalent Django auth
permissions.
Both the list & detail variants simply check the model they're based
on, as that's all the more granular Django's permission setup gets.
"""
def base_checks(self, request, model_klass):
# If it doesn't look like a model, we can't check permissions.
if not model_klass or not getattr(model_klass, '_meta', None):
return False
# User must be logged in to check permissions.
if not hasattr(request, 'user'):
return False
return model_klass
def read_list(self, object_list, bundle):
klass = self.base_checks(bundle.request, object_list.model)
if klass is False:
return []
# GET-style methods are always allowed.
return object_list
def read_detail(self, object_list, bundle):
klass = self.base_checks(bundle.request, bundle.obj.__class__)
if klass is False:
raise Unauthorized("You are not allowed to access that resource.")
# GET-style methods are always allowed.
return True
def create_list(self, object_list, bundle):
klass = self.base_checks(bundle.request, object_list.model)
if klass is False:
return []
permission = '%s.add_%s' % (klass._meta.app_label, get_module_name(klass._meta))
if not bundle.request.user.has_perm(permission):
return []
return object_list
def create_detail(self, object_list, bundle):
klass = self.base_checks(bundle.request, bundle.obj.__class__)
if klass is False:
raise Unauthorized("You are not allowed to access that resource.")
permission = '%s.add_%s' % (klass._meta.app_label, get_module_name(klass._meta))
if not bundle.request.user.has_perm(permission):
raise Unauthorized("You are not allowed to access that resource.")
return True
def update_list(self, object_list, bundle):
klass = self.base_checks(bundle.request, object_list.model)
if klass is False:
return []
permission = '%s.change_%s' % (klass._meta.app_label, get_module_name(klass._meta))
if not bundle.request.user.has_perm(permission):
return []
return object_list
def update_detail(self, object_list, bundle):
klass = self.base_checks(bundle.request, bundle.obj.__class__)
if klass is False:
raise Unauthorized("You are not allowed to access that resource.")
permission = '%s.change_%s' % (klass._meta.app_label, get_module_name(klass._meta))
if not bundle.request.user.has_perm(permission):
raise Unauthorized("You are not allowed to access that resource.")
return True
def delete_list(self, object_list, bundle):
klass = self.base_checks(bundle.request, object_list.model)
if klass is False:
return []
permission = '%s.delete_%s' % (klass._meta.app_label, get_module_name(klass._meta))
if not bundle.request.user.has_perm(permission):
return []
return object_list
def delete_detail(self, object_list, bundle):
klass = self.base_checks(bundle.request, bundle.obj.__class__)
if klass is False:
raise Unauthorized("You are not allowed to access that resource.")
permission = '%s.delete_%s' % (klass._meta.app_label, get_module_name(klass._meta))
if not bundle.request.user.has_perm(permission):
raise Unauthorized("You are not allowed to access that resource.")
return True
| 31.365854 | 124 | 0.644116 | 978 | 7,716 | 4.93047 | 0.161554 | 0.076732 | 0.069681 | 0.099544 | 0.730195 | 0.723559 | 0.723559 | 0.676483 | 0.676483 | 0.652426 | 0 | 0.000356 | 0.271384 | 7,716 | 245 | 125 | 31.493878 | 0.857346 | 0.255443 | 0 | 0.850467 | 0 | 0 | 0.129762 | 0 | 0 | 0 | 0 | 0.004082 | 0 | 1 | 0.252336 | false | 0 | 0.028037 | 0.046729 | 0.598131 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
c4f9f7b92d4f1e0b3b850109750bb64364ecd8f2 | 102 | py | Python | misp/models/__init__.py | zhoudaxia233/misp | c0d36e3f1a1eeac417d6bfff015ea5430f1d0de5 | [
"MIT"
] | 2 | 2019-12-21T10:46:57.000Z | 2019-12-22T14:01:23.000Z | misp/models/__init__.py | zhoudaxia233/misp | c0d36e3f1a1eeac417d6bfff015ea5430f1d0de5 | [
"MIT"
] | null | null | null | misp/models/__init__.py | zhoudaxia233/misp | c0d36e3f1a1eeac417d6bfff015ea5430f1d0de5 | [
"MIT"
] | null | null | null | from .cbamresnet import *
from .tailunet import *
from .tailnet import *
from .efficientunet import *
| 20.4 | 28 | 0.764706 | 12 | 102 | 6.5 | 0.5 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156863 | 102 | 4 | 29 | 25.5 | 0.906977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f21e5ce7af50149678fbfc12e3056875c7662d03 | 2,209 | py | Python | tests/test_command_line_tool.py | Shimwell/remove_dagmc_tag | e1b2c124ec1e92f6c7867c9edf08cad81d6e40b3 | [
"MIT"
] | 2 | 2021-04-29T13:30:03.000Z | 2021-05-20T13:34:58.000Z | tests/test_command_line_tool.py | Shimwell/remove_dagmc_tag | e1b2c124ec1e92f6c7867c9edf08cad81d6e40b3 | [
"MIT"
] | 10 | 2021-04-29T20:43:46.000Z | 2021-11-18T15:44:56.000Z | tests/test_command_line_tool.py | svalinn/remove_dagmc_tags | e1b2c124ec1e92f6c7867c9edf08cad81d6e40b3 | [
"MIT"
] | null | null | null |
import os
import unittest
from pathlib import Path
class TestReactor(unittest.TestCase):
def test_removal_of_multiple_tags(self):
# test_removal_of_reflecting_tag
os.system('rm dagmc_output.h5m')
os.system(
'remove-dagmc-tags -i tests/dagmc.h5m -o dagmc_output.h5m -t reflective')
assert Path('dagmc_output.h5m').exists
assert Path('dagmc_output.h5m').stat().st_size < Path(
'tests/dagmc.h5m').stat().st_size
size_with_out_reflective = Path('dagmc_output.h5m').stat().st_size
# test_removal_of_graveyard
os.system('rm dagmc_output.h5m')
os.system(
'remove-dagmc-tags -i tests/dagmc.h5m -o dagmc_output.h5m -t mat:graveyard')
assert Path('dagmc_output.h5m').exists
assert Path('dagmc_output.h5m').stat().st_size < Path(
'tests/dagmc.h5m').stat().st_size
size_with_out_graveyard = Path('dagmc_output.h5m').stat().st_size
os.system('rm dagmc_output.h5m')
os.system(
'remove-dagmc-tags -i tests/dagmc.h5m -o dagmc_output.h5m -t reflective mat:graveyard')
assert Path('dagmc_output.h5m').exists
assert Path('dagmc_output.h5m').stat(
).st_size < size_with_out_graveyard
assert Path('dagmc_output.h5m').stat(
).st_size < size_with_out_reflective
def test_conversion_to_vtk_with_multiple_tag_removal(self):
os.system('rm dagmc_output.vtk')
os.system(
'remove-dagmc-tags -i tests/dagmc.h5m -o dagmc_output.vtk -t reflective')
assert Path('dagmc_output.vtk').exists
# test_conversion_to_vtk_without_graveyard(self):
os.system('rm dagmc_output.vtk')
os.system(
'remove-dagmc-tags -i tests/dagmc.h5m -o dagmc_output.vtk -t mat:graveyard')
assert Path('dagmc_output.vtk').exists
# test_conversion_to_vtk_without_graveyard_or_reflecting_tag(self):
os.system('rm dagmc_output.vtk')
os.system(
'remove-dagmc-tags -i tests/dagmc.h5m -o dagmc_output.vtk -t reflective mat:graveyard')
assert Path('dagmc_output.vtk').exists()
if __name__ == "__main__":
unittest.main()
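A note on the assertions above: writing `Path(...).exists` without parentheses asserts on the bound method object itself, which is always truthy, so such a check can never fail. The assertions are corrected to call `.exists()`. A minimal self-contained illustration of the difference (the file name here is a hypothetical placeholder, assumed not to exist):

```python
from pathlib import Path

# Referencing Path.exists without calling it yields a bound method object,
# which is always truthy -- this assertion passes even for a missing file:
assert Path('definitely-missing-file').exists

# Calling the method performs the real filesystem check:
assert Path('definitely-missing-file').exists() is False
```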
| 36.816667 | 99 | 0.66048 | 304 | 2,209 | 4.526316 | 0.151316 | 0.19186 | 0.152616 | 0.152616 | 0.828488 | 0.828488 | 0.828488 | 0.787791 | 0.72093 | 0.72093 | 0 | 0.013364 | 0.220914 | 2,209 | 59 | 100 | 37.440678 | 0.786171 | 0.076958 | 0 | 0.571429 | 0 | 0.095238 | 0.392523 | 0 | 0 | 0 | 0 | 0 | 0.238095 | 1 | 0.047619 | false | 0 | 0.071429 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1ee7a85f64f0f6d7aed7be2c1edb9cd2561ae63e | 170 | py | Python | example/shop/orders/admin.py | sakkada/django-shopkit | 35e6f8ac73bf6aa40887aa9b1b860d27db8b2975 | [
"BSD-3-Clause"
] | null | null | null | example/shop/orders/admin.py | sakkada/django-shopkit | 35e6f8ac73bf6aa40887aa9b1b860d27db8b2975 | [
"BSD-3-Clause"
] | null | null | null | example/shop/orders/admin.py | sakkada/django-shopkit | 35e6f8ac73bf6aa40887aa9b1b860d27db8b2975 | [
"BSD-3-Clause"
] | null | null | null | from django.contrib import admin
from . import models
admin.site.register(models.Order)
admin.site.register(models.DeliveryGroup)
admin.site.register(models.OrderLine)
| 21.25 | 41 | 0.823529 | 23 | 170 | 6.086957 | 0.478261 | 0.192857 | 0.364286 | 0.492857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076471 | 170 | 7 | 42 | 24.285714 | 0.89172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
486b7cb02d6f71c6ccfb023324889c2a090d0339 | 270 | py | Python | pipelines/admin.py | EddyAnalytics/eddy-backend | bc465996e51b9ebc3e498ad0d6434bac80b173a6 | [
"Apache-2.0"
] | 1 | 2021-09-24T07:52:08.000Z | 2021-09-24T07:52:08.000Z | pipelines/admin.py | EddyAnalytics/eddy-backend | bc465996e51b9ebc3e498ad0d6434bac80b173a6 | [
"Apache-2.0"
] | 2 | 2021-05-25T22:16:18.000Z | 2021-06-09T19:16:24.000Z | pipelines/admin.py | EddyAnalytics/eddy-backend | bc465996e51b9ebc3e498ad0d6434bac80b173a6 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
from pipelines.models import BlockType, Pipeline, Block
from utils.utils import ReadOnlyIdAdmin
admin.site.register(Pipeline, ReadOnlyIdAdmin)
admin.site.register(Block, ReadOnlyIdAdmin)
admin.site.register(BlockType, ReadOnlyIdAdmin)
| 30 | 55 | 0.844444 | 32 | 270 | 7.125 | 0.4375 | 0.263158 | 0.315789 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081481 | 270 | 8 | 56 | 33.75 | 0.919355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
6f7d4b507fe788b6b1c7ceee06e0ef77b92fa43c | 7,978 | py | Python | monitoring/google/cloud/monitoring_v3/proto/group_service_pb2_grpc.py | jo2y/google-cloud-python | 1b76727be16bc4335276f793340bb72d32be7166 | [
"Apache-2.0"
] | 1 | 2018-06-29T17:53:28.000Z | 2018-06-29T17:53:28.000Z | monitoring/google/cloud/monitoring_v3/proto/group_service_pb2_grpc.py | jo2y/google-cloud-python | 1b76727be16bc4335276f793340bb72d32be7166 | [
"Apache-2.0"
] | 1 | 2021-06-25T15:16:57.000Z | 2021-06-25T15:16:57.000Z | monitoring/google/cloud/monitoring_v3/proto/group_service_pb2_grpc.py | jo2y/google-cloud-python | 1b76727be16bc4335276f793340bb72d32be7166 | [
"Apache-2.0"
] | 1 | 2021-06-30T11:44:03.000Z | 2021-06-30T11:44:03.000Z | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc
from google.cloud.monitoring_v3.proto import group_pb2 as google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__pb2
from google.cloud.monitoring_v3.proto import group_service_pb2 as google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2
from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2
class GroupServiceStub(object):
"""The Group API lets you inspect and manage your
[groups](google.monitoring.v3.Group).
A group is a named filter that is used to identify
a collection of monitored resources. Groups are typically used to
mirror the physical and/or logical topology of the environment.
Because group membership is computed dynamically, monitored
resources that are started in the future are automatically placed
in matching groups. By using a group to name monitored resources in,
for example, an alert policy, the target of that alert policy is
updated automatically as monitored resources are added and removed
from the infrastructure.
"""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.ListGroups = channel.unary_unary(
'/google.monitoring.v3.GroupService/ListGroups',
request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.ListGroupsRequest.SerializeToString,
response_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.ListGroupsResponse.FromString,
)
self.GetGroup = channel.unary_unary(
'/google.monitoring.v3.GroupService/GetGroup',
request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.GetGroupRequest.SerializeToString,
response_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__pb2.Group.FromString,
)
self.CreateGroup = channel.unary_unary(
'/google.monitoring.v3.GroupService/CreateGroup',
request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.CreateGroupRequest.SerializeToString,
response_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__pb2.Group.FromString,
)
self.UpdateGroup = channel.unary_unary(
'/google.monitoring.v3.GroupService/UpdateGroup',
request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.UpdateGroupRequest.SerializeToString,
response_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__pb2.Group.FromString,
)
self.DeleteGroup = channel.unary_unary(
'/google.monitoring.v3.GroupService/DeleteGroup',
request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.DeleteGroupRequest.SerializeToString,
response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString,
)
self.ListGroupMembers = channel.unary_unary(
'/google.monitoring.v3.GroupService/ListGroupMembers',
request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.ListGroupMembersRequest.SerializeToString,
response_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.ListGroupMembersResponse.FromString,
)
class GroupServiceServicer(object):
"""The Group API lets you inspect and manage your
[groups](google.monitoring.v3.Group).
A group is a named filter that is used to identify
a collection of monitored resources. Groups are typically used to
mirror the physical and/or logical topology of the environment.
Because group membership is computed dynamically, monitored
resources that are started in the future are automatically placed
in matching groups. By using a group to name monitored resources in,
for example, an alert policy, the target of that alert policy is
updated automatically as monitored resources are added and removed
from the infrastructure.
"""
def ListGroups(self, request, context):
"""Lists the existing groups.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetGroup(self, request, context):
"""Gets a single group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def CreateGroup(self, request, context):
"""Creates a new group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def UpdateGroup(self, request, context):
"""Updates an existing group.
You can change any group attributes except `name`.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def DeleteGroup(self, request, context):
"""Deletes an existing group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def ListGroupMembers(self, request, context):
"""Lists the monitored resources that are members of a group.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_GroupServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'ListGroups': grpc.unary_unary_rpc_method_handler(
servicer.ListGroups,
request_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.ListGroupsRequest.FromString,
response_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.ListGroupsResponse.SerializeToString,
),
'GetGroup': grpc.unary_unary_rpc_method_handler(
servicer.GetGroup,
request_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.GetGroupRequest.FromString,
response_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__pb2.Group.SerializeToString,
),
'CreateGroup': grpc.unary_unary_rpc_method_handler(
servicer.CreateGroup,
request_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.CreateGroupRequest.FromString,
response_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__pb2.Group.SerializeToString,
),
'UpdateGroup': grpc.unary_unary_rpc_method_handler(
servicer.UpdateGroup,
request_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.UpdateGroupRequest.FromString,
response_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__pb2.Group.SerializeToString,
),
'DeleteGroup': grpc.unary_unary_rpc_method_handler(
servicer.DeleteGroup,
request_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.DeleteGroupRequest.FromString,
response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString,
),
'ListGroupMembers': grpc.unary_unary_rpc_method_handler(
servicer.ListGroupMembers,
request_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.ListGroupMembersRequest.FromString,
response_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_group__service__pb2.ListGroupMembersResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'google.monitoring.v3.GroupService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
| 50.815287 | 143 | 0.794184 | 963 | 7,978 | 6.13188 | 0.143302 | 0.071126 | 0.056901 | 0.069094 | 0.822693 | 0.802879 | 0.798137 | 0.700593 | 0.686029 | 0.686029 | 0 | 0.00951 | 0.143269 | 7,978 | 156 | 144 | 51.141026 | 0.854426 | 0.20193 | 0 | 0.309278 | 1 | 0 | 0.10443 | 0.049576 | 0 | 0 | 0 | 0 | 0 | 1 | 0.082474 | false | 0 | 0.041237 | 0 | 0.14433 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6f8841115797d5e01e0790aad3beef3307635412 | 37 | py | Python | pylogo/__init__.py | obo/loev3go | 763c6cf61133add914d231d07bc8c3c29672aba9 | [
"MIT"
] | 1 | 2018-09-05T20:57:40.000Z | 2018-09-05T20:57:40.000Z | pylogo/__init__.py | obo/loev3go | 763c6cf61133add914d231d07bc8c3c29672aba9 | [
"MIT"
] | null | null | null | pylogo/__init__.py | obo/loev3go | 763c6cf61133add914d231d07bc8c3c29672aba9 | [
"MIT"
] | 2 | 2019-10-05T23:02:41.000Z | 2020-06-25T20:21:02.000Z | from pylogo.interpreter import Logo
| 12.333333 | 35 | 0.837838 | 5 | 37 | 6.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 37 | 2 | 36 | 18.5 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6fb95545678fa9a51fecc5398e7d86db2c010e5b | 102 | py | Python | WEEKS/CD_Sata-Structures/_MISC/misc-examples/python3-book-examples/importlib/example/submodule.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | WEEKS/CD_Sata-Structures/_MISC/misc-examples/python3-book-examples/importlib/example/submodule.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | WEEKS/CD_Sata-Structures/_MISC/misc-examples/python3-book-examples/importlib/example/submodule.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# encoding: utf-8
#
#
"""
"""
# end_pymotw_header
print("Importing submodule")
| 10.2 | 28 | 0.656863 | 13 | 102 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011364 | 0.137255 | 102 | 9 | 29 | 11.333333 | 0.727273 | 0.529412 | 0 | 0 | 0 | 0 | 0.527778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
6fd9cb9b94ddb752aed7dc4d3a7f5ae1b078ec1c | 30 | py | Python | datamallet/visualization/__init__.py | bodealamu/datamallet | d736c5624014f6e186211099b6196c4b0aa544cd | [
"MIT"
] | 6 | 2021-12-13T04:28:09.000Z | 2022-03-17T23:37:17.000Z | datamallet/visualization/__init__.py | bodealamu/datamallet | d736c5624014f6e186211099b6196c4b0aa544cd | [
"MIT"
] | null | null | null | datamallet/visualization/__init__.py | bodealamu/datamallet | d736c5624014f6e186211099b6196c4b0aa544cd | [
"MIT"
] | 2 | 2022-02-22T00:05:06.000Z | 2022-03-09T04:35:14.000Z | from .autoplot import AutoPlot | 30 | 30 | 0.866667 | 4 | 30 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
82eb7206b28d605a3ae61f303ead8077d188aeb9 | 31 | py | Python | wsm/commands/__init__.py | Rayologist/windows-sshd-manager | 4f78a0cdaa12fe3c2a785aca31066c3be886878b | [
"Apache-2.0"
] | 9 | 2022-02-09T09:09:43.000Z | 2022-02-09T09:10:06.000Z | wsm/commands/__init__.py | Rayologist/windows-sshd-manager | 4f78a0cdaa12fe3c2a785aca31066c3be886878b | [
"Apache-2.0"
] | null | null | null | wsm/commands/__init__.py | Rayologist/windows-sshd-manager | 4f78a0cdaa12fe3c2a785aca31066c3be886878b | [
"Apache-2.0"
] | null | null | null | from .options import WSMParser
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d20f25370ce16b5d900619c828b6ec9401bf54e8 | 12,840 | py | Python | funds/tests/test_postings_api.py | cimmke/funds | 53157fc1b868be8044b848653833884fd26d0142 | [
"MIT"
] | null | null | null | funds/tests/test_postings_api.py | cimmke/funds | 53157fc1b868be8044b848653833884fd26d0142 | [
"MIT"
] | null | null | null | funds/tests/test_postings_api.py | cimmke/funds | 53157fc1b868be8044b848653833884fd26d0142 | [
"MIT"
] | null | null | null | import datetime
from django.urls import reverse
from rest_framework.test import APITestCase
from rest_framework import status
from funds.models import Postings
class PostingsAPITest(APITestCase):
def setUp(self):
url = reverse('postings-list')
self.posting_num = 1
self.payee = 'Store'
self.note = 'Note'
data = {
'posting_num': self.posting_num,
'posting_type': Postings.STANDARD,
'payee': self.payee,
'note': self.note
}
self.setup_response = self.client.post(url, data, format='json')
def test_create_account(self):
"""
Verify we can create a new posting
Actual creation done in setup so its available for following tests
"""
self.assertEqual(self.setup_response.status_code, status.HTTP_201_CREATED)
posting = Postings.objects.get()
self.assertEqual(Postings.objects.count(), 1)
self.assertEqual(posting.posting_num, self.posting_num)
self.assertEqual(posting.posting_type, Postings.STANDARD)
self.assertEqual(posting.date, datetime.date.today())
self.assertEqual(posting.payee, self.payee)
self.assertFalse(posting.cleared)
self.assertEqual(posting.note, self.note)
def test_update_posting_date(self):
"""
Verify we can patch the date of the posting
"""
posting = Postings.objects.get()
url = reverse('postings-detail', args=[posting.posting_num])
new_posting_date = datetime.date(2020, 12, 11)
data = {'date': new_posting_date}
response = self.client.patch(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_200_OK)
posting = Postings.objects.get()
self.assertEqual(posting.posting_num, self.posting_num)
self.assertEqual(posting.posting_type, Postings.STANDARD)
self.assertEqual(posting.date, new_posting_date)
self.assertEqual(posting.payee, self.payee)
self.assertFalse(posting.cleared)
self.assertEqual(posting.note, self.note)
def test_update_posting_type(self):
"""
Verify we can patch the type of the posting
"""
posting = Postings.objects.get()
url = reverse('postings-detail', args=[posting.posting_num])
data = {'posting_type': Postings.INCOME}
response = self.client.patch(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_200_OK)
posting = Postings.objects.get()
self.assertEqual(Postings.objects.count(), 1)
self.assertEqual(posting.posting_num, self.posting_num)
self.assertEqual(posting.posting_type, Postings.INCOME)
self.assertEqual(posting.date, datetime.date.today())
self.assertEqual(posting.payee, self.payee)
self.assertFalse(posting.cleared)
self.assertEqual(posting.note, self.note)
def test_update_payee(self):
"""
Verify we can patch the payee of the posting
"""
posting = Postings.objects.get()
url = reverse('postings-detail', args=[posting.posting_num])
new_payee = 'New Store'
data = {'payee': new_payee}
response = self.client.patch(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_200_OK)
posting = Postings.objects.get()
self.assertEqual(Postings.objects.count(), 1)
self.assertEqual(posting.posting_num, self.posting_num)
self.assertEqual(posting.posting_type, Postings.STANDARD)
self.assertEqual(posting.date, datetime.date.today())
self.assertEqual(posting.payee, new_payee)
self.assertFalse(posting.cleared)
self.assertEqual(posting.note, self.note)
def test_update_cleared(self):
"""
Verify we can patch if the posting is cleared
"""
posting = Postings.objects.get()
url = reverse('postings-detail', args=[posting.posting_num])
data = {'cleared': True}
response = self.client.patch(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_200_OK)
posting = Postings.objects.get()
self.assertEqual(Postings.objects.count(), 1)
self.assertEqual(posting.posting_num, self.posting_num)
self.assertEqual(posting.posting_type, Postings.STANDARD)
self.assertEqual(posting.date, datetime.date.today())
self.assertEqual(posting.payee, self.payee)
self.assertTrue(posting.cleared)
self.assertEqual(posting.note, self.note)
def test_update_note(self):
"""
Verify we can patch the note of the posting
"""
posting = Postings.objects.get()
url = reverse('postings-detail', args=[posting.posting_num])
new_note = 'New Note'
data = {'note': new_note}
response = self.client.patch(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_200_OK)
posting = Postings.objects.get()
self.assertEqual(Postings.objects.count(), 1)
self.assertEqual(posting.posting_num, self.posting_num)
self.assertEqual(posting.posting_type, Postings.STANDARD)
self.assertEqual(posting.date, datetime.date.today())
self.assertEqual(posting.payee, self.payee)
self.assertFalse(posting.cleared)
self.assertEqual(posting.note, new_note)
def test_update_all_fields(self):
"""
Verify we can update all the fields with a put action
"""
posting = Postings.objects.get()
url = reverse('postings-detail', args=[posting.posting_num])
new_date = datetime.date(2020, 12, 11)
new_payee = 'New Payee'
new_note = 'New Note'
data = {
'posting_num': posting.posting_num,
'posting_type': Postings.TRANSFER,
'date': new_date,
'payee': new_payee,
'cleared': True,
'note': new_note
}
response = self.client.put(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_200_OK)
posting = Postings.objects.get()
self.assertEqual(Postings.objects.count(), 1)
self.assertEqual(posting.posting_num, self.posting_num)
self.assertEqual(posting.posting_type, Postings.TRANSFER)
self.assertEqual(posting.date, new_date)
self.assertEqual(posting.payee, new_payee)
self.assertTrue(posting.cleared)
self.assertEqual(posting.note, new_note)
def test_delete_posting(self):
"""
Verify we can delete the posting
"""
posting = Postings.objects.get()
url = reverse('postings-detail', args=[posting.posting_num])
response = self.client.delete(url)
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
self.assertEqual(Postings.objects.count(), 0)
def test_create_posting_negative_num(self):
"""
Verify if create a posting with a negative int we get a 400 error
"""
url = reverse('postings-list')
data = {
'posting_num': -1,
'posting_type': Postings.STANDARD,
'payee': self.payee,
'note': self.note
}
response = self.client.post(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(Postings.objects.count(), 1)
def test_create_posting_invalid_posting_type(self):
"""
Verify if create a posting with invalid type we get an error
"""
url = reverse('postings-list')
data = {
'posting_num': self.posting_num,
'posting_type': 'Invalid',
'payee': self.payee,
'note': self.note
}
response = self.client.post(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(Postings.objects.count(), 1)
def test_create_posting_payee_too_long(self):
"""
Verify if create an posting with payee to long we get an error
"""
url = reverse('postings-list')
payee = 'a' * 100
data = {
'posting_num': self.posting_num,
'posting_type': Postings.STANDARD,
'payee': payee,
'note': self.note
}
response = self.client.post(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(Postings.objects.count(), 1)
def test_create_posting_payee_blank(self):
"""
Verify can create a posting with a blank payee
"""
url = reverse('postings-list')
data = {
'posting_num': self.posting_num + 1,
'posting_type': Postings.STANDARD,
'note': self.note
}
response = self.client.post(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
posting = Postings.objects.get(pk=2)
self.assertEqual(Postings.objects.count(), 2)
self.assertEqual(posting.posting_num, self.posting_num + 1)
self.assertEqual(posting.posting_type, Postings.STANDARD)
self.assertEqual(posting.date, datetime.date.today())
self.assertEqual(posting.payee, '')
self.assertFalse(posting.cleared)
self.assertEqual(posting.note, self.note)
def test_create_posting_note_blank(self):
"""
Verify can create a posting with a blank note
"""
url = reverse('postings-list')
data = {
'posting_num': self.posting_num + 1,
'posting_type': Postings.STANDARD,
'payee': self.payee
}
response = self.client.post(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
posting = Postings.objects.get(pk=2)
self.assertEqual(Postings.objects.count(), 2)
self.assertEqual(posting.posting_num, self.posting_num + 1)
self.assertEqual(posting.posting_type, Postings.STANDARD)
self.assertEqual(posting.date, datetime.date.today())
self.assertEqual(posting.payee, self.payee)
self.assertFalse(posting.cleared)
self.assertEqual(posting.note, '')
def test_create_posting_cleared_invalid_type(self):
"""
Verify if create a posting with invalid type for cleared get 400 error
"""
url = reverse('postings-list')
cleared = 5
data = {
'posting_num': self.posting_num + 1,
'posting_type': Postings.STANDARD,
'payee': self.payee,
'cleared': cleared,
'note': self.note
}
response = self.client.post(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(Postings.objects.count(), 1)
def test_update_posting_date_blank(self):
"""
Verify if update date with blank value get 400 error
"""
posting = Postings.objects.get()
url = reverse('postings-detail', args=[posting.posting_num])
new_date = ''
data = {'date': new_date}
response = self.client.patch(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_update_posting_posting_type_invalid(self):
"""
Verify if update posting_type with invalid value get 400 error
"""
posting = Postings.objects.get()
url = reverse('postings-detail', args=[posting.posting_num])
data = {'posting_type': 'Invalid'}
response = self.client.patch(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_update_posting_payee_too_long(self):
"""
Verify if update posting with value that is too long get 400 error
"""
posting = Postings.objects.get()
url = reverse('postings-detail', args=[posting.posting_num])
new_payee = 'a' * 100
data = {'payee': new_payee}
response = self.client.patch(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_update_posting_cleared_invalid_type(self):
"""
Verify if update posting with invalid type for cleared get 400 error
"""
posting = Postings.objects.get()
url = reverse('postings-detail', args=[posting.posting_num])
data = {'cleared': 'a'}
response = self.client.patch(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
| 40.504732 | 82 | 0.637072 | 1,493 | 12,840 | 5.330208 | 0.073007 | 0.143252 | 0.124403 | 0.06283 | 0.87899 | 0.8477 | 0.819553 | 0.794421 | 0.778462 | 0.768158 | 0 | 0.01217 | 0.251246 | 12,840 | 316 | 83 | 40.632911 | 0.815581 | 0.079283 | 0 | 0.687764 | 0 | 0 | 0.062198 | 0 | 0 | 0 | 0 | 0 | 0.35865 | 1 | 0.080169 | false | 0 | 0.021097 | 0 | 0.105485 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d253b9e64dfcce097f6efc5cd5cfc623ec51d8f7 | 3,468 | py | Python | im2mesh/encoder/pointnet.py | thunanguyen/occupancy_networks | 12c707a1cb89707cbf70ef7b8e881d611b883790 | [
"MIT"
] | null | null | null | im2mesh/encoder/pointnet.py | thunanguyen/occupancy_networks | 12c707a1cb89707cbf70ef7b8e881d611b883790 | [
"MIT"
] | null | null | null | im2mesh/encoder/pointnet.py | thunanguyen/occupancy_networks | 12c707a1cb89707cbf70ef7b8e881d611b883790 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
from im2mesh.layers import ResnetBlockFC
def maxpool(x, dim=-1, keepdim=False):
out, _ = x.max(dim=dim, keepdim=keepdim)
return out
class SimplePointnet(nn.Module):
''' PointNet-based encoder network.
Args:
c_dim (int): dimension of latent code c
dim (int): input points dimension
hidden_dim (int): hidden dimension of the network
'''
def __init__(self, c_dim=128, dim=3, hidden_dim=128):
super().__init__()
self.c_dim = c_dim
self.fc_pos = nn.Linear(dim, 2*hidden_dim)
self.fc_0 = nn.Linear(2*hidden_dim, hidden_dim)
self.fc_1 = nn.Linear(2*hidden_dim, hidden_dim)
self.fc_2 = nn.Linear(2*hidden_dim, hidden_dim)
self.fc_3 = nn.Linear(2*hidden_dim, hidden_dim)
self.fc_c = nn.Linear(hidden_dim, c_dim)
self.actvn = lambda x: torch.sin(x) #nn.ReLU()
self.pool = maxpool
def forward(self, p):
batch_size, T, D = p.size()
# output size: B x T X F
net = self.fc_pos(p)
net = self.fc_0(self.actvn(net))
pooled = self.pool(net, dim=1, keepdim=True).expand(net.size())
net = torch.cat([net, pooled], dim=2)
net = self.fc_1(self.actvn(net))
pooled = self.pool(net, dim=1, keepdim=True).expand(net.size())
net = torch.cat([net, pooled], dim=2)
net = self.fc_2(self.actvn(net))
pooled = self.pool(net, dim=1, keepdim=True).expand(net.size())
net = torch.cat([net, pooled], dim=2)
net = self.fc_3(self.actvn(net))
# Reduce to B x F
net = self.pool(net, dim=1)
c = self.fc_c(self.actvn(net))
return c
class ResnetPointnet(nn.Module):
''' PointNet-based encoder network with ResNet blocks.
Args:
c_dim (int): dimension of latent code c
dim (int): input points dimension
hidden_dim (int): hidden dimension of the network
'''
def __init__(self, c_dim=128, dim=3, hidden_dim=128):
super().__init__()
self.c_dim = c_dim
self.fc_pos = nn.Linear(dim, 2*hidden_dim)
self.block_0 = ResnetBlockFC(2*hidden_dim, hidden_dim)
self.block_1 = ResnetBlockFC(2*hidden_dim, hidden_dim)
self.block_2 = ResnetBlockFC(2*hidden_dim, hidden_dim)
self.block_3 = ResnetBlockFC(2*hidden_dim, hidden_dim)
self.block_4 = ResnetBlockFC(2*hidden_dim, hidden_dim)
self.fc_c = nn.Linear(hidden_dim, c_dim)
self.actvn = lambda x: torch.sin(x) #nn.ReLU()
self.pool = maxpool
def forward(self, p):
batch_size, T, D = p.size()
# output size: B x T X F
net = self.fc_pos(p)
net = self.block_0(net)
pooled = self.pool(net, dim=1, keepdim=True).expand(net.size())
net = torch.cat([net, pooled], dim=2)
net = self.block_1(net)
pooled = self.pool(net, dim=1, keepdim=True).expand(net.size())
net = torch.cat([net, pooled], dim=2)
net = self.block_2(net)
pooled = self.pool(net, dim=1, keepdim=True).expand(net.size())
net = torch.cat([net, pooled], dim=2)
net = self.block_3(net)
pooled = self.pool(net, dim=1, keepdim=True).expand(net.size())
net = torch.cat([net, pooled], dim=2)
net = self.block_4(net)
# Reduce to B x F
net = self.pool(net, dim=1)
c = self.fc_c(self.actvn(net))
return c
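Both encoders above rely on the same pooling idiom: take a feature-wise max over all points to get a permutation-invariant global feature, then concatenate that global feature back onto every per-point feature (doubling the feature width). A minimal pure-Python sketch of this pattern, written without torch for illustration only:

```python
def pool_and_concat(features):
    """features: list of per-point feature lists, shape (T, F).

    Returns a (T, 2*F) list where each per-point feature is concatenated
    with the feature-wise max over all points (the 'pooled' global feature).
    """
    pooled = [max(col) for col in zip(*features)]   # feature-wise max over points
    return [row + pooled for row in features]       # each row becomes length 2*F

# pool_and_concat([[1, 5], [3, 2]]) -> [[1, 5, 3, 5], [3, 2, 3, 5]]
```

Because `max` is symmetric, the pooled feature is invariant to the ordering of the input points, which is the key property PointNet-style encoders exploit.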
| 30.421053 | 71 | 0.597751 | 535 | 3,468 | 3.723364 | 0.136449 | 0.11747 | 0.055221 | 0.072289 | 0.888554 | 0.888554 | 0.853414 | 0.846888 | 0.764558 | 0.710843 | 0 | 0.023978 | 0.266436 | 3,468 | 113 | 72 | 30.690265 | 0.759041 | 0.13466 | 0 | 0.606061 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075758 | false | 0 | 0.045455 | 0 | 0.19697 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
968c48d958bf700ed378c40b46b8421d6fe6c7ee | 75 | py | Python | katas/kyu_7/gradually_adding_parameters.py | the-zebulan/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | 40 | 2016-03-09T12:26:20.000Z | 2022-03-23T08:44:51.000Z | katas/kyu_7/gradually_adding_parameters.py | akalynych/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | null | null | null | katas/kyu_7/gradually_adding_parameters.py | akalynych/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | 36 | 2016-11-07T19:59:58.000Z | 2022-03-31T11:18:27.000Z | def add(*args):
return sum(i * a for i, a in enumerate(args, start=1))
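`add` weights each positional argument by its 1-based position and sums the products. A quick usage check:

```python
def add(*args):
    # Weight each argument by its 1-based position, then sum.
    return sum(i * a for i, a in enumerate(args, start=1))

# add(1, 2, 3) computes 1*1 + 2*2 + 3*3 = 14
assert add(1, 2, 3) == 14
assert add(100) == 100
```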

# example/services/__init__.py (TheCaptainCat/bolinette, MIT)
from example.services.book import BookService
from example.services.person import PersonService
from example.services.library import LibraryService
from example.services.tag import TagService
from example.services.label import LabelService
from example.services.trace import TraceService

# tests/unit/test_property.py (gussmith23/dwellinglybackend, MIT)
from conftest import newPropertyName, newPropertyAddress

def test_new_property(new_property):
assert new_property.name == newPropertyName
    assert new_property.address == newPropertyAddress
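The test depends on a `new_property` fixture and two constants exported by `conftest.py`, which are not shown here. A hypothetical minimal stand-in (the `Property` model and the constant values are assumptions, not the project's actual code) could look like:

```python
from dataclasses import dataclass

# illustrative stand-ins for what conftest.py presumably exports
newPropertyName = "test property"
newPropertyAddress = "123 Test St"

@dataclass
class Property:
    name: str
    address: str

def new_property():
    # plays the role of the pytest fixture of the same name
    return Property(name=newPropertyName, address=newPropertyAddress)

assert new_property().name == newPropertyName
assert new_property().address == newPropertyAddress
```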

# hail/python/test/hail/experimental/test_dnd_array.py (mitochon/hail, MIT)
import numpy as np
import hail as hl
from hail.utils import new_temp_file
from ..helpers import startTestHailContext, stopTestHailContext, fails_local_backend
setUpModule = startTestHailContext
tearDownModule = stopTestHailContext
def test_range_collect():
n_variants = 10
n_samples = 10
block_size = 3
mt = hl.utils.range_matrix_table(n_variants, n_samples)
mt = mt.select_entries(x=mt.row_idx * mt.col_idx)
da = hl.experimental.dnd.array(mt, 'x', block_size=block_size)
a = np.array(mt.x.collect()).reshape(n_variants, n_samples)
assert np.array_equal(da.collect(), a)
@fails_local_backend()
def test_range_matmul():
n_variants = 10
n_samples = 10
block_size = 3
n_blocks = 16
mt = hl.utils.range_matrix_table(n_variants, n_samples)
mt = mt.select_entries(x=mt.row_idx * mt.col_idx)
da = hl.experimental.dnd.array(mt, 'x', block_size=block_size)
da = (da @ da.T).checkpoint(new_temp_file())
assert da._force_count_blocks() == n_blocks
da_result = da.collect()
a = np.array(mt.x.collect()).reshape(n_variants, n_samples)
a_result = a @ a.T
assert np.array_equal(da_result, a_result)
@fails_local_backend()
def test_small_collect():
n_variants = 10
n_samples = 10
block_size = 3
mt = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt = mt.select_entries(dosage=hl.float(mt.GT.n_alt_alleles()))
da = hl.experimental.dnd.array(mt, 'dosage', block_size=block_size)
a = np.array(mt.dosage.collect()).reshape(n_variants, n_samples)
assert np.array_equal(da.collect(), a)
@fails_local_backend()
def test_medium_collect():
n_variants = 100
n_samples = 100
block_size = 32
mt = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt = mt.select_entries(dosage=hl.float(mt.GT.n_alt_alleles()))
da = hl.experimental.dnd.array(mt, 'dosage', block_size=block_size)
a = np.array(mt.dosage.collect()).reshape(n_variants, n_samples)
assert np.array_equal(da.collect(), a)
@fails_local_backend()
def test_small_matmul():
n_variants = 10
n_samples = 10
block_size = 3
n_blocks = 16
mt = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt = mt.select_entries(dosage=hl.float(mt.GT.n_alt_alleles()))
da = hl.experimental.dnd.array(mt, 'dosage', block_size=block_size)
da = (da @ da.T).checkpoint(new_temp_file())
assert da._force_count_blocks() == n_blocks
da_result = da.collect()
a = np.array(mt.dosage.collect()).reshape(n_variants, n_samples)
a_result = a @ a.T
assert np.array_equal(da_result, a_result)
@fails_local_backend()
def test_medium_matmul():
n_variants = 100
n_samples = 100
block_size = 32
n_blocks = 16
mt = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt = mt.select_entries(dosage=hl.float(mt.GT.n_alt_alleles()))
da = hl.experimental.dnd.array(mt, 'dosage', block_size=block_size)
da = (da @ da.T).checkpoint(new_temp_file())
assert da._force_count_blocks() == n_blocks
da_result = da.collect()
a = np.array(mt.dosage.collect()).reshape(n_variants, n_samples)
a_result = a @ a.T
assert np.array_equal(da_result, a_result)
@fails_local_backend()
def test_matmul_via_inner_product():
n_variants = 10
n_samples = 10
block_size = 3
n_blocks = 16
mt = hl.utils.range_matrix_table(n_variants, n_samples)
mt = mt.select_entries(x=mt.row_idx * mt.col_idx)
da = hl.experimental.dnd.array(mt, 'x', block_size=block_size)
prod = (da @ da.T).checkpoint(new_temp_file())
assert prod._force_count_blocks() == n_blocks
prod_result = prod.collect()
ip_result = da.inner_product(da.T,
lambda l, r: l * r,
lambda l, r: l + r,
hl.float(0.0),
lambda prod: hl.agg.sum(prod)
).collect()
assert np.array_equal(prod_result, ip_result)
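The test above works because `inner_product` with element-wise multiply, element-wise add, and a zero of `0.0` degenerates to ordinary matrix multiplication. A plain NumPy sketch of that generalized product (the function name and signature here are illustrative, not Hail's API):

```python
import numpy as np

def inner_product(a, b, mul, add, zero):
    """Generalized matrix product: combine entries with `mul`, reduce with `add`."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    out = np.empty((n, m))
    for i in range(n):
        for j in range(m):
            acc = zero
            for t in range(k):
                acc = add(acc, mul(a[i, t], b[t, j]))
            out[i, j] = acc
    return out

a = np.arange(6, dtype=float).reshape(2, 3)
result = inner_product(a, a.T, lambda l, r: l * r, lambda l, r: l + r, 0.0)
assert np.array_equal(result, a @ a.T)
```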
@fails_local_backend()
def test_king_homo_estimator():
hl.set_global_seed(1)
mt = hl.balding_nichols_model(2, 5, 5)
mt = mt.select_entries(genotype_score=hl.float(mt.GT.n_alt_alleles()))
da = hl.experimental.dnd.array(mt, 'genotype_score', block_size=3)
def sqr(x):
return x * x
score_difference = da.T.inner_product(
da,
lambda l, r: sqr(l - r),
lambda l, r: l + r,
hl.float(0),
hl.agg.sum
).checkpoint(new_temp_file())
assert np.array_equal(
score_difference.collect(),
np.array([[0., 6., 4., 2., 4.],
[6., 0., 6., 4., 6.],
[4., 6., 0., 6., 0.],
[2., 4., 6., 0., 6.],
[4., 6., 0., 6., 0.]]))
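The expected matrix in this test is the pairwise sum of squared genotype-score differences: entry `(i, j)` is `sum_k (x[k, i] - x[k, j])**2` over variants `k`. That quantity can be computed directly in NumPy (a sketch of the identity, not Hail code):

```python
import numpy as np

def pairwise_sq_diff(x):
    """D[i, j] = sum over rows k of (x[k, i] - x[k, j])**2."""
    g = x.T @ x                  # Gram matrix of the columns
    sq = np.diag(g)              # squared column norms
    return sq[:, None] + sq[None, :] - 2 * g

x = np.array([[0., 1., 2.],
              [2., 0., 1.]])
d = pairwise_sq_diff(x)
brute = np.array([[((x[:, i] - x[:, j]) ** 2).sum() for j in range(3)]
                  for i in range(3)])
assert np.allclose(d, brute)
```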
@fails_local_backend()
def test_dndarray_sum():
n_variants = 10
n_samples = 10
block_size = 3
n_blocks = 16
mt1 = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt1 = mt1.select_entries(dosage=hl.float(mt1.GT.n_alt_alleles()))
mt2 = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt2 = mt2.select_entries(dosage=hl.float(mt2.GT.n_alt_alleles()))
da1 = hl.experimental.dnd.array(mt1, 'dosage', block_size=block_size)
da2 = hl.experimental.dnd.array(mt2, 'dosage', block_size=block_size)
da_sum = (da1 + da2).checkpoint(new_temp_file())
assert da_sum._force_count_blocks() == n_blocks
da_result = da_sum.collect()
a1 = np.array(mt1.dosage.collect()).reshape(n_variants, n_samples)
a2 = np.array(mt2.dosage.collect()).reshape(n_variants, n_samples)
a_result = a1 + a2
assert np.array_equal(da_result, a_result)
@fails_local_backend()
def test_dndarray_sum_scalar():
n_variants = 10
n_samples = 10
block_size = 3
n_blocks = 16
mt1 = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt1 = mt1.select_entries(dosage=hl.float(mt1.GT.n_alt_alleles()))
da1 = hl.experimental.dnd.array(mt1, 'dosage', block_size=block_size)
da_sum = (da1 + 10).checkpoint(new_temp_file())
assert da_sum._force_count_blocks() == n_blocks
da_result = da_sum.collect()
a1 = np.array(mt1.dosage.collect()).reshape(n_variants, n_samples)
a_result = a1 + 10
assert np.array_equal(da_result, a_result)
@fails_local_backend()
def test_dndarray_rsum_scalar():
n_variants = 10
n_samples = 10
block_size = 3
n_blocks = 16
mt1 = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt1 = mt1.select_entries(dosage=hl.float(mt1.GT.n_alt_alleles()))
da1 = hl.experimental.dnd.array(mt1, 'dosage', block_size=block_size)
da_sum = (10 + da1).checkpoint(new_temp_file())
assert da_sum._force_count_blocks() == n_blocks
da_result = da_sum.collect()
a1 = np.array(mt1.dosage.collect()).reshape(n_variants, n_samples)
a_result = 10 + a1
assert np.array_equal(da_result, a_result)
@fails_local_backend()
def test_dndarray_mul_scalar():
n_variants = 10
n_samples = 10
block_size = 3
n_blocks = 16
mt1 = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt1 = mt1.select_entries(dosage=hl.float(mt1.GT.n_alt_alleles()))
da1 = hl.experimental.dnd.array(mt1, 'dosage', block_size=block_size)
da_sum = (da1 * 10).checkpoint(new_temp_file())
assert da_sum._force_count_blocks() == n_blocks
da_result = da_sum.collect()
a1 = np.array(mt1.dosage.collect()).reshape(n_variants, n_samples)
a_result = a1 * 10
assert np.array_equal(da_result, a_result)
@fails_local_backend()
def test_dndarray_rmul_scalar():
n_variants = 10
n_samples = 10
block_size = 3
n_blocks = 16
mt1 = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt1 = mt1.select_entries(dosage=hl.float(mt1.GT.n_alt_alleles()))
da1 = hl.experimental.dnd.array(mt1, 'dosage', block_size=block_size)
da_sum = (10 * da1).checkpoint(new_temp_file())
assert da_sum._force_count_blocks() == n_blocks
da_result = da_sum.collect()
a1 = np.array(mt1.dosage.collect()).reshape(n_variants, n_samples)
a_result = 10 * a1
assert np.array_equal(da_result, a_result)
@fails_local_backend()
def test_dndarray_sub_scalar():
n_variants = 10
n_samples = 10
block_size = 3
n_blocks = 16
mt1 = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt1 = mt1.select_entries(dosage=hl.float(mt1.GT.n_alt_alleles()))
da1 = hl.experimental.dnd.array(mt1, 'dosage', block_size=block_size)
da_sum = (da1 - 10).checkpoint(new_temp_file())
assert da_sum._force_count_blocks() == n_blocks
da_result = da_sum.collect()
a1 = np.array(mt1.dosage.collect()).reshape(n_variants, n_samples)
a_result = a1 - 10
assert np.array_equal(da_result, a_result)
@fails_local_backend()
def test_dndarray_rsub_scalar():
n_variants = 10
n_samples = 10
block_size = 3
n_blocks = 16
mt1 = hl.balding_nichols_model(n_populations=2,
n_variants=n_variants,
n_samples=n_samples)
mt1 = mt1.select_entries(dosage=hl.float(mt1.GT.n_alt_alleles()))
da1 = hl.experimental.dnd.array(mt1, 'dosage', block_size=block_size)
da_sum = (10 - da1).checkpoint(new_temp_file())
assert da_sum._force_count_blocks() == n_blocks
da_result = da_sum.collect()
a1 = np.array(mt1.dosage.collect()).reshape(n_variants, n_samples)
a_result = 10 - a1
assert np.array_equal(da_result, a_result)
def test_dndarray_errors_on_unsorted_columns():
n_variants = 10
n_samples = 10
block_size = 3
mt = hl.utils.range_matrix_table(n_variants, n_samples)
mt = mt.key_cols_by(sampleid=hl.str('zyxwvutsrq')[mt.col_idx])
mt = mt.select_entries(x=mt.row_idx * mt.col_idx)
try:
hl.experimental.dnd.array(mt, 'x', block_size=block_size)
except ValueError as err:
        assert 'columns are not in sorted order' in err.args[0]
else:
assert False
@fails_local_backend()
def test_dndarray_sort_columns():
n_variants = 10
n_samples = 10
block_size = 3
disorder = [0, 9, 8, 7, 1, 2, 3, 4, 6, 5]
order = [x[0]
for x in sorted(enumerate(disorder),
key=lambda x: x[1])]
mt = hl.utils.range_matrix_table(n_variants, n_samples)
mt = mt.key_cols_by(sampleid=hl.literal(disorder)[mt.col_idx])
mt = mt.select_entries(x=mt.row_idx * mt.col_idx)
da = hl.experimental.dnd.array(mt, 'x', block_size=block_size, sort_columns=True)
a = np.array(
[r * order[c] for r in range(n_variants) for c in range(n_samples)]
).reshape((n_variants, n_samples))
assert np.array_equal(da.collect(), a)
result = (da.T @ da).collect()
expected = a.T @ a
assert np.array_equal(result, expected)

# lib/optim/__init__.py (plemeri/UACANet, MIT)
from .losses import *
from .scheduler import *

# egret/models/acopf.py (kdheepak/Egret, BSD-3-Clause)
# ___________________________________________________________________________
#
# EGRET: Electrical Grid Research and Engineering Tools
# Copyright 2019 National Technology & Engineering Solutions of Sandia, LLC
# (NTESS). Under the terms of Contract DE-NA0003525 with NTESS, the U.S.
# Government retains certain rights in this software.
# This software is distributed under the Revised BSD License.
# ___________________________________________________________________________
"""
This module provides functions that create the modules for typical ACOPF formulations.
#TODO: document this with examples
"""
import pyomo.environ as pe
import operator as op
import egret.model_library.transmission.tx_utils as tx_utils
import egret.model_library.transmission.tx_calc as tx_calc
import egret.model_library.transmission.bus as libbus
import egret.model_library.transmission.branch as libbranch
import egret.model_library.transmission.gen as libgen
from egret.model_library.defn import FlowType, CoordinateType
from egret.data.model_data import map_items, zip_items
from math import pi, radians
def _include_feasibility_slack(model, bus_attrs, gen_attrs, bus_p_loads, bus_q_loads, penalty=1000):
import egret.model_library.decl as decl
slack_init = {k: 0 for k in bus_attrs['names']}
slack_bounds = {k: (0, sum(bus_p_loads.values())) for k in bus_attrs['names']}
decl.declare_var('p_slack_pos', model=model, index_set=bus_attrs['names'],
initialize=slack_init, bounds=slack_bounds
)
decl.declare_var('p_slack_neg', model=model, index_set=bus_attrs['names'],
initialize=slack_init, bounds=slack_bounds
)
slack_bounds = {k: (0, sum(bus_q_loads.values())) for k in bus_attrs['names']}
decl.declare_var('q_slack_pos', model=model, index_set=bus_attrs['names'],
initialize=slack_init, bounds=slack_bounds
)
decl.declare_var('q_slack_neg', model=model, index_set=bus_attrs['names'],
initialize=slack_init, bounds=slack_bounds
)
p_rhs_kwargs = {'include_feasibility_slack_pos':'p_slack_pos','include_feasibility_slack_neg':'p_slack_neg'}
q_rhs_kwargs = {'include_feasibility_slack_pos':'q_slack_pos','include_feasibility_slack_neg':'q_slack_neg'}
p_penalty = penalty * (max([gen_attrs['p_cost'][k]['values'][1] for k in gen_attrs['names']]) + 1)
q_penalty = penalty * (max(gen_attrs.get('q_cost', gen_attrs['p_cost'])[k]['values'][1] for k in gen_attrs['names']) + 1)
penalty_expr = sum(p_penalty * (model.p_slack_pos[bus_name] + model.p_slack_neg[bus_name])
+ q_penalty * (model.q_slack_pos[bus_name] + model.q_slack_neg[bus_name])
for bus_name in bus_attrs['names'])
return p_rhs_kwargs, q_rhs_kwargs, penalty_expr
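The penalty coefficients are scaled off the largest linear cost coefficient (`values[1]` in the cost data) so that using slack is always more expensive than dispatching any generator. The arithmetic can be checked in isolation with toy cost data (the `[c0, c1, c2]` layout of `values` is an assumption mirroring the indexing above, not data from any real case):

```python
# toy data shaped like gen_attrs['p_cost'][k]['values']
p_cost = {'g1': {'values': [0.0, 14.0, 0.01]},
          'g2': {'values': [0.0, 30.0, 0.02]}}
penalty = 1000

# mirrors: penalty * (max linear coefficient + 1)
p_penalty = penalty * (max(p_cost[k]['values'][1] for k in p_cost) + 1)
assert p_penalty == 1000 * (30.0 + 1)  # 31000.0
```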
def create_psv_acopf_model(model_data, include_feasibility_slack=False):
md = model_data.clone_in_service()
    tx_utils.scale_ModelData_to_pu(md, inplace=True)
gens = dict(md.elements(element_type='generator'))
buses = dict(md.elements(element_type='bus'))
branches = dict(md.elements(element_type='branch'))
loads = dict(md.elements(element_type='load'))
shunts = dict(md.elements(element_type='shunt'))
gen_attrs = md.attributes(element_type='generator')
bus_attrs = md.attributes(element_type='bus')
branch_attrs = md.attributes(element_type='branch')
load_attrs = md.attributes(element_type='load')
shunt_attrs = md.attributes(element_type='shunt')
inlet_branches_by_bus, outlet_branches_by_bus = \
tx_utils.inlet_outlet_branches_by_bus(branches, buses)
gens_by_bus = tx_utils.gens_by_bus(buses, gens)
model = pe.ConcreteModel()
### declare (and fix) the loads at the buses
bus_p_loads, bus_q_loads = tx_utils.dict_of_bus_loads(buses, loads)
libbus.declare_var_pl(model, bus_attrs['names'], initialize=bus_p_loads)
libbus.declare_var_ql(model, bus_attrs['names'], initialize=bus_q_loads)
model.pl.fix()
model.ql.fix()
### declare the fixed shunts at the buses
bus_bs_fixed_shunts, bus_gs_fixed_shunts = tx_utils.dict_of_bus_fixed_shunts(buses, shunts)
### declare the polar voltages
libbus.declare_var_vm(model, bus_attrs['names'], initialize=bus_attrs['vm'],
bounds=zip_items(bus_attrs['v_min'], bus_attrs['v_max'])
)
va_bounds = {k: (-pi, pi) for k in bus_attrs['va']}
libbus.declare_var_va(model, bus_attrs['names'], initialize=bus_attrs['va'],
bounds=va_bounds
)
### include the feasibility slack for the bus balances
p_rhs_kwargs = {}
q_rhs_kwargs = {}
if include_feasibility_slack:
p_rhs_kwargs, q_rhs_kwargs, penalty_expr = _include_feasibility_slack(model, bus_attrs, gen_attrs, bus_p_loads, bus_q_loads)
### fix the reference bus
ref_bus = md.data['system']['reference_bus']
ref_angle = md.data['system']['reference_bus_angle']
model.va[ref_bus].fix(radians(ref_angle))
### declare the generator real and reactive power
pg_init = {k: (gen_attrs['p_min'][k] + gen_attrs['p_max'][k]) / 2.0 for k in gen_attrs['pg']}
libgen.declare_var_pg(model, gen_attrs['names'], initialize=pg_init,
bounds=zip_items(gen_attrs['p_min'], gen_attrs['p_max'])
)
qg_init = {k: (gen_attrs['q_min'][k] + gen_attrs['q_max'][k]) / 2.0 for k in gen_attrs['qg']}
libgen.declare_var_qg(model, gen_attrs['names'], initialize=qg_init,
bounds=zip_items(gen_attrs['q_min'], gen_attrs['q_max'])
)
### declare the current flows in the branches
vr_init = {k: bus_attrs['vm'][k] * pe.cos(bus_attrs['va'][k]) for k in bus_attrs['vm']}
vj_init = {k: bus_attrs['vm'][k] * pe.sin(bus_attrs['va'][k]) for k in bus_attrs['vm']}
s_max = {k: branches[k]['rating_long_term'] for k in branches.keys()}
s_lbub = dict()
for k in branches.keys():
if s_max[k] is None:
s_lbub[k] = (None, None)
else:
s_lbub[k] = (-s_max[k],s_max[k])
pf_bounds = s_lbub
pt_bounds = s_lbub
qf_bounds = s_lbub
qt_bounds = s_lbub
pf_init = dict()
pt_init = dict()
qf_init = dict()
qt_init = dict()
for branch_name, branch in branches.items():
from_bus = branch['from_bus']
to_bus = branch['to_bus']
y_matrix = tx_calc.calculate_y_matrix_from_branch(branch)
ifr_init = tx_calc.calculate_ifr(vr_init[from_bus], vj_init[from_bus], vr_init[to_bus],
vj_init[to_bus], y_matrix)
ifj_init = tx_calc.calculate_ifj(vr_init[from_bus], vj_init[from_bus], vr_init[to_bus],
vj_init[to_bus], y_matrix)
itr_init = tx_calc.calculate_itr(vr_init[from_bus], vj_init[from_bus], vr_init[to_bus],
vj_init[to_bus], y_matrix)
itj_init = tx_calc.calculate_itj(vr_init[from_bus], vj_init[from_bus], vr_init[to_bus],
vj_init[to_bus], y_matrix)
pf_init[branch_name] = tx_calc.calculate_p(ifr_init, ifj_init, vr_init[from_bus], vj_init[from_bus])
pt_init[branch_name] = tx_calc.calculate_p(itr_init, itj_init, vr_init[to_bus], vj_init[to_bus])
qf_init[branch_name] = tx_calc.calculate_q(ifr_init, ifj_init, vr_init[from_bus], vj_init[from_bus])
qt_init[branch_name] = tx_calc.calculate_q(itr_init, itj_init, vr_init[to_bus], vj_init[to_bus])
libbranch.declare_var_pf(model=model,
index_set=branch_attrs['names'],
initialize=pf_init,
bounds=pf_bounds
)
libbranch.declare_var_pt(model=model,
index_set=branch_attrs['names'],
initialize=pt_init,
bounds=pt_bounds
)
libbranch.declare_var_qf(model=model,
index_set=branch_attrs['names'],
initialize=qf_init,
bounds=qf_bounds
)
libbranch.declare_var_qt(model=model,
index_set=branch_attrs['names'],
initialize=qt_init,
bounds=qt_bounds
)
### declare the branch power flow constraints
libbranch.declare_eq_branch_power(model=model,
index_set=branch_attrs['names'],
branches=branches,
branch_attrs=branch_attrs,
coordinate_type=CoordinateType.POLAR
)
### declare the pq balances
libbus.declare_eq_p_balance(model=model,
index_set=bus_attrs['names'],
bus_p_loads=bus_p_loads,
gens_by_bus=gens_by_bus,
bus_gs_fixed_shunts=bus_gs_fixed_shunts,
inlet_branches_by_bus=inlet_branches_by_bus,
outlet_branches_by_bus=outlet_branches_by_bus,
coordinate_type=CoordinateType.POLAR,
**p_rhs_kwargs
)
libbus.declare_eq_q_balance(model=model,
index_set=bus_attrs['names'],
bus_q_loads=bus_q_loads,
gens_by_bus=gens_by_bus,
bus_bs_fixed_shunts=bus_bs_fixed_shunts,
inlet_branches_by_bus=inlet_branches_by_bus,
outlet_branches_by_bus=outlet_branches_by_bus,
coordinate_type=CoordinateType.POLAR,
**q_rhs_kwargs
)
### declare the thermal limits
libbranch.declare_ineq_s_branch_thermal_limit(model=model,
index_set=branch_attrs['names'],
branches=branches,
s_thermal_limits=s_max,
flow_type=FlowType.POWER
)
### declare the voltage min and max inequalities
libbus.declare_ineq_vm_bus_lbub(model=model,
index_set=bus_attrs['names'],
buses=buses,
coordinate_type=CoordinateType.POLAR
)
### declare angle difference limits on interconnected buses
libbranch.declare_ineq_angle_diff_branch_lbub(model=model,
index_set=branch_attrs['names'],
branches=branches,
coordinate_type=CoordinateType.POLAR
)
### declare the generator cost objective
libgen.declare_expression_pgqg_operating_cost(model=model,
index_set=gen_attrs['names'],
p_costs=gen_attrs['p_cost'],
q_costs=gen_attrs.get('q_cost', None)
)
obj_expr = sum(model.pg_operating_cost[gen_name] for gen_name in model.pg_operating_cost)
if include_feasibility_slack:
obj_expr += penalty_expr
if hasattr(model, 'qg_operating_cost'):
obj_expr += sum(model.qg_operating_cost[gen_name] for gen_name in model.qg_operating_cost)
model.obj = pe.Objective(expr=obj_expr)
return model, md
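The `pf_init`/`qf_init` seeds above come from the standard rectangular-coordinate power formulas, `P = Re(V I*) = vr*ir + vj*ij` and `Q = Im(V I*) = vj*ir - vr*ij`. A self-contained check of those identities (assuming `tx_calc.calculate_p`/`calculate_q` implement them with the `(ir, ij, vr, vj)` argument order used above):

```python
def calculate_p(ir, ij, vr, vj):
    # P = Re(V * conj(I)) with V = vr + 1j*vj, I = ir + 1j*ij
    return vr * ir + vj * ij

def calculate_q(ir, ij, vr, vj):
    # Q = Im(V * conj(I))
    return vj * ir - vr * ij

v = complex(1.02, 0.05)   # bus voltage
i = complex(0.8, -0.1)    # branch current
s = v * i.conjugate()     # complex power S = P + jQ
assert abs(calculate_p(i.real, i.imag, v.real, v.imag) - s.real) < 1e-12
assert abs(calculate_q(i.real, i.imag, v.real, v.imag) - s.imag) < 1e-12
```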
def create_rsv_acopf_model(model_data, include_feasibility_slack=False):
md = model_data.clone_in_service()
    tx_utils.scale_ModelData_to_pu(md, inplace=True)
gens = dict(md.elements(element_type='generator'))
buses = dict(md.elements(element_type='bus'))
branches = dict(md.elements(element_type='branch'))
loads = dict(md.elements(element_type='load'))
shunts = dict(md.elements(element_type='shunt'))
gen_attrs = md.attributes(element_type='generator')
bus_attrs = md.attributes(element_type='bus')
branch_attrs = md.attributes(element_type='branch')
load_attrs = md.attributes(element_type='load')
shunt_attrs = md.attributes(element_type='shunt')
inlet_branches_by_bus, outlet_branches_by_bus = \
tx_utils.inlet_outlet_branches_by_bus(branches, buses)
gens_by_bus = tx_utils.gens_by_bus(buses, gens)
model = pe.ConcreteModel()
### declare (and fix) the loads at the buses
bus_p_loads, bus_q_loads = tx_utils.dict_of_bus_loads(buses, loads)
libbus.declare_var_pl(model, bus_attrs['names'], initialize=bus_p_loads)
libbus.declare_var_ql(model, bus_attrs['names'], initialize=bus_q_loads)
model.pl.fix()
model.ql.fix()
### declare the fixed shunts at the buses
bus_bs_fixed_shunts, bus_gs_fixed_shunts = tx_utils.dict_of_bus_fixed_shunts(buses, shunts)
### declare the rectangular voltages
neg_v_max = map_items(op.neg, bus_attrs['v_max'])
vr_init = {k: bus_attrs['vm'][k] * pe.cos(bus_attrs['va'][k]) for k in bus_attrs['vm']}
libbus.declare_var_vr(model, bus_attrs['names'], initialize=vr_init,
bounds=zip_items(neg_v_max, bus_attrs['v_max'])
)
vj_init = {k: bus_attrs['vm'][k] * pe.sin(bus_attrs['va'][k]) for k in bus_attrs['vm']}
libbus.declare_var_vj(model, bus_attrs['names'], initialize=vj_init,
bounds=zip_items(neg_v_max, bus_attrs['v_max'])
)
### include the feasibility slack for the bus balances
p_rhs_kwargs = {}
q_rhs_kwargs = {}
if include_feasibility_slack:
p_rhs_kwargs, q_rhs_kwargs, penalty_expr = _include_feasibility_slack(model, bus_attrs, gen_attrs, bus_p_loads, bus_q_loads)
### fix the reference bus
ref_bus = md.data['system']['reference_bus']
ref_angle = md.data['system']['reference_bus_angle']
if ref_angle != 0.0:
libbus.declare_eq_ref_bus_nonzero(model, ref_angle, ref_bus)
else:
model.vj[ref_bus].fix(0.0)
model.vr[ref_bus].setlb(0.0)
### declare the generator real and reactive power
pg_init = {k: (gen_attrs['p_min'][k] + gen_attrs['p_max'][k]) / 2.0 for k in gen_attrs['pg']}
libgen.declare_var_pg(model, gen_attrs['names'], initialize=pg_init,
bounds=zip_items(gen_attrs['p_min'], gen_attrs['p_max'])
)
qg_init = {k: (gen_attrs['q_min'][k] + gen_attrs['q_max'][k]) / 2.0 for k in gen_attrs['qg']}
libgen.declare_var_qg(model, gen_attrs['names'], initialize=qg_init,
bounds=zip_items(gen_attrs['q_min'], gen_attrs['q_max'])
)
### declare the current flows in the branches
s_max = {k: branches[k]['rating_long_term'] for k in branches.keys()}
s_lbub = dict()
for k in branches.keys():
if s_max[k] is None:
s_lbub[k] = (None, None)
else:
s_lbub[k] = (-s_max[k],s_max[k])
pf_bounds = s_lbub
pt_bounds = s_lbub
qf_bounds = s_lbub
qt_bounds = s_lbub
pf_init = dict()
pt_init = dict()
qf_init = dict()
qt_init = dict()
for branch_name, branch in branches.items():
from_bus = branch['from_bus']
to_bus = branch['to_bus']
y_matrix = tx_calc.calculate_y_matrix_from_branch(branch)
ifr_init = tx_calc.calculate_ifr(vr_init[from_bus], vj_init[from_bus], vr_init[to_bus],
vj_init[to_bus], y_matrix)
ifj_init = tx_calc.calculate_ifj(vr_init[from_bus], vj_init[from_bus], vr_init[to_bus],
vj_init[to_bus], y_matrix)
itr_init = tx_calc.calculate_itr(vr_init[from_bus], vj_init[from_bus], vr_init[to_bus],
vj_init[to_bus], y_matrix)
itj_init = tx_calc.calculate_itj(vr_init[from_bus], vj_init[from_bus], vr_init[to_bus],
vj_init[to_bus], y_matrix)
pf_init[branch_name] = tx_calc.calculate_p(ifr_init, ifj_init, vr_init[from_bus], vj_init[from_bus])
pt_init[branch_name] = tx_calc.calculate_p(itr_init, itj_init, vr_init[to_bus], vj_init[to_bus])
qf_init[branch_name] = tx_calc.calculate_q(ifr_init, ifj_init, vr_init[from_bus], vj_init[from_bus])
qt_init[branch_name] = tx_calc.calculate_q(itr_init, itj_init, vr_init[to_bus], vj_init[to_bus])
libbranch.declare_var_pf(model=model,
index_set=branch_attrs['names'],
initialize=pf_init,
bounds=pf_bounds
)
libbranch.declare_var_pt(model=model,
index_set=branch_attrs['names'],
initialize=pt_init,
bounds=pt_bounds
)
libbranch.declare_var_qf(model=model,
index_set=branch_attrs['names'],
initialize=qf_init,
bounds=qf_bounds
)
libbranch.declare_var_qt(model=model,
index_set=branch_attrs['names'],
initialize=qt_init,
bounds=qt_bounds
)
### declare the branch power flow constraints
libbranch.declare_eq_branch_power(model=model,
index_set=branch_attrs['names'],
branches=branches,
branch_attrs=branch_attrs,
coordinate_type=CoordinateType.RECTANGULAR
)
### declare the pq balances
libbus.declare_eq_p_balance(model=model,
index_set=bus_attrs['names'],
bus_p_loads=bus_p_loads,
gens_by_bus=gens_by_bus,
bus_gs_fixed_shunts=bus_gs_fixed_shunts,
inlet_branches_by_bus=inlet_branches_by_bus,
outlet_branches_by_bus=outlet_branches_by_bus,
coordinate_type=CoordinateType.RECTANGULAR,
**p_rhs_kwargs
)
libbus.declare_eq_q_balance(model=model,
index_set=bus_attrs['names'],
bus_q_loads=bus_q_loads,
gens_by_bus=gens_by_bus,
bus_bs_fixed_shunts=bus_bs_fixed_shunts,
inlet_branches_by_bus=inlet_branches_by_bus,
outlet_branches_by_bus=outlet_branches_by_bus,
coordinate_type=CoordinateType.RECTANGULAR,
**q_rhs_kwargs
)
### declare the thermal limits
libbranch.declare_ineq_s_branch_thermal_limit(model=model,
index_set=branch_attrs['names'],
branches=branches,
s_thermal_limits=s_max,
flow_type=FlowType.POWER
)
### declare the voltage min and max inequalities
libbus.declare_ineq_vm_bus_lbub(model=model,
index_set=bus_attrs['names'],
buses=buses,
coordinate_type=CoordinateType.RECTANGULAR
)
### declare angle difference limits on interconnected buses
libbranch.declare_ineq_angle_diff_branch_lbub(model=model,
index_set=branch_attrs['names'],
branches=branches,
coordinate_type=CoordinateType.RECTANGULAR
)
### declare the generator cost objective
libgen.declare_expression_pgqg_operating_cost(model=model,
index_set=gen_attrs['names'],
p_costs=gen_attrs['p_cost'],
q_costs=gen_attrs.get('q_cost', None)
)
obj_expr = sum(model.pg_operating_cost[gen_name] for gen_name in model.pg_operating_cost)
if include_feasibility_slack:
obj_expr += penalty_expr
if hasattr(model, 'qg_operating_cost'):
obj_expr += sum(model.qg_operating_cost[gen_name] for gen_name in model.qg_operating_cost)
model.obj = pe.Objective(expr=obj_expr)
return model, md
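Both rectangular formulations seed the voltage variables from the polar data via `vr = vm*cos(va)` and `vj = vm*sin(va)`. As a standalone identity check (pure math, no Egret dependency):

```python
import math

def polar_to_rect(vm, va):
    # vr + 1j*vj = vm * exp(1j*va); mirrors the vr_init/vj_init dictionaries
    return vm * math.cos(va), vm * math.sin(va)

vr, vj = polar_to_rect(1.05, math.radians(10.0))
assert abs(math.hypot(vr, vj) - 1.05) < 1e-12                # magnitude preserved
assert abs(math.atan2(vj, vr) - math.radians(10.0)) < 1e-12  # angle preserved
```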
def create_riv_acopf_model(model_data, include_feasibility_slack=False):
md = model_data.clone_in_service()
    tx_utils.scale_ModelData_to_pu(md, inplace=True)
gens = dict(md.elements(element_type='generator'))
buses = dict(md.elements(element_type='bus'))
branches = dict(md.elements(element_type='branch'))
loads = dict(md.elements(element_type='load'))
shunts = dict(md.elements(element_type='shunt'))
gen_attrs = md.attributes(element_type='generator')
bus_attrs = md.attributes(element_type='bus')
branch_attrs = md.attributes(element_type='branch')
load_attrs = md.attributes(element_type='load')
shunt_attrs = md.attributes(element_type='shunt')
inlet_branches_by_bus, outlet_branches_by_bus = \
tx_utils.inlet_outlet_branches_by_bus(branches, buses)
gens_by_bus = tx_utils.gens_by_bus(buses, gens)
model = pe.ConcreteModel()
### declare (and fix) the loads at the buses
bus_p_loads, bus_q_loads = tx_utils.dict_of_bus_loads(buses, loads)
libbus.declare_var_pl(model, bus_attrs['names'], initialize=bus_p_loads)
libbus.declare_var_ql(model, bus_attrs['names'], initialize=bus_q_loads)
model.pl.fix()
model.ql.fix()
### declare the fixed shunts at the buses
bus_bs_fixed_shunts, bus_gs_fixed_shunts = tx_utils.dict_of_bus_fixed_shunts(buses, shunts)
### declare the rectangular voltages
neg_v_max = map_items(op.neg, bus_attrs['v_max'])
vr_init = {k: bus_attrs['vm'][k] * pe.cos(bus_attrs['va'][k]) for k in bus_attrs['vm']}
libbus.declare_var_vr(model, bus_attrs['names'], initialize=vr_init,
bounds=zip_items(neg_v_max, bus_attrs['v_max'])
)
vj_init = {k: bus_attrs['vm'][k] * pe.sin(bus_attrs['va'][k]) for k in bus_attrs['vm']}
libbus.declare_var_vj(model, bus_attrs['names'], initialize=vj_init,
bounds=zip_items(neg_v_max, bus_attrs['v_max'])
)
### include the feasibility slack for the bus balances
p_rhs_kwargs = {}
q_rhs_kwargs = {}
if include_feasibility_slack:
p_rhs_kwargs, q_rhs_kwargs, penalty_expr = _include_feasibility_slack(model, bus_attrs, gen_attrs, bus_p_loads, bus_q_loads)
### fix the reference bus
ref_bus = md.data['system']['reference_bus']
ref_angle = md.data['system']['reference_bus_angle']
if ref_angle != 0.0:
libbus.declare_eq_ref_bus_nonzero(model, ref_angle, ref_bus)
else:
model.vj[ref_bus].fix(0.0)
model.vr[ref_bus].setlb(0.0)
### declare the generator real and reactive power
pg_init = {k: (gen_attrs['p_min'][k] + gen_attrs['p_max'][k]) / 2.0 for k in gen_attrs['pg']}
libgen.declare_var_pg(model, gen_attrs['names'], initialize=pg_init,
bounds=zip_items(gen_attrs['p_min'], gen_attrs['p_max'])
)
qg_init = {k: (gen_attrs['q_min'][k] + gen_attrs['q_max'][k]) / 2.0 for k in gen_attrs['qg']}
libgen.declare_var_qg(model, gen_attrs['names'], initialize=qg_init,
bounds=zip_items(gen_attrs['q_min'], gen_attrs['q_max'])
)
### declare the current flows in the branches
branch_currents = tx_utils.dict_of_branch_currents(branches, buses)
s_max = {k: branches[k]['rating_long_term'] for k in branches.keys()}
if_bounds = dict()
it_bounds = dict()
ifr_init = dict()
ifj_init = dict()
itr_init = dict()
itj_init = dict()
for branch_name, branch in branches.items():
from_bus = branch['from_bus']
to_bus = branch['to_bus']
y_matrix = tx_calc.calculate_y_matrix_from_branch(branch)
ifr_init[branch_name] = tx_calc.calculate_ifr(vr_init[from_bus], vj_init[from_bus], vr_init[to_bus],
vj_init[to_bus], y_matrix)
ifj_init[branch_name] = tx_calc.calculate_ifj(vr_init[from_bus], vj_init[from_bus], vr_init[to_bus],
vj_init[to_bus], y_matrix)
itr_init[branch_name] = tx_calc.calculate_itr(vr_init[from_bus], vj_init[from_bus], vr_init[to_bus],
vj_init[to_bus], y_matrix)
itj_init[branch_name] = tx_calc.calculate_itj(vr_init[from_bus], vj_init[from_bus], vr_init[to_bus],
vj_init[to_bus], y_matrix)
if s_max[branch_name] is None:
if_bounds[branch_name] = (None, None)
it_bounds[branch_name] = (None, None)
else:
if_max = s_max[branch_name] / buses[branches[branch_name]['from_bus']]['v_min']
it_max = s_max[branch_name] / buses[branches[branch_name]['to_bus']]['v_min']
if_bounds[branch_name] = (-if_max, if_max)
it_bounds[branch_name] = (-it_max, it_max)
libbranch.declare_var_ifr(model=model,
index_set=branch_attrs['names'],
initialize=ifr_init,
bounds=if_bounds
)
libbranch.declare_var_ifj(model=model,
index_set=branch_attrs['names'],
initialize=ifj_init,
bounds=if_bounds
)
libbranch.declare_var_itr(model=model,
index_set=branch_attrs['names'],
initialize=itr_init,
bounds=it_bounds
)
libbranch.declare_var_itj(model=model,
index_set=branch_attrs['names'],
initialize=itj_init,
bounds=it_bounds
)
ir_init = dict()
ij_init = dict()
for bus_name, bus in buses.items():
ir_expr = sum([ifr_init[branch_name] for branch_name in outlet_branches_by_bus[bus_name]])
ir_expr += sum([itr_init[branch_name] for branch_name in inlet_branches_by_bus[bus_name]])
ij_expr = sum([ifj_init[branch_name] for branch_name in outlet_branches_by_bus[bus_name]])
ij_expr += sum([itj_init[branch_name] for branch_name in inlet_branches_by_bus[bus_name]])
if bus_gs_fixed_shunts[bus_name] != 0.0:
ir_expr += bus_gs_fixed_shunts[bus_name] * vr_init[bus_name]
ij_expr += bus_gs_fixed_shunts[bus_name] * vj_init[bus_name]
if bus_bs_fixed_shunts[bus_name] != 0.0:
ir_expr += bus_bs_fixed_shunts[bus_name] * vj_init[bus_name]
ij_expr += bus_bs_fixed_shunts[bus_name] * vr_init[bus_name]
ir_init[bus_name] = ir_expr
ij_init[bus_name] = ij_expr
# TODO: Implement better bounds (?) for these aggregated variables -- note, these are unbounded in old Egret
libbus.declare_var_ir_aggregation_at_bus(model=model,
index_set=bus_attrs['names'],
initialize=ir_init,
bounds=(None,None)
)
libbus.declare_var_ij_aggregation_at_bus(model=model,
index_set=bus_attrs['names'],
initialize=ij_init,
bounds=(None,None)
)
### declare the branch current flow constraints
libbranch.declare_eq_branch_current(model=model,
index_set=branch_attrs['names'],
branches=branches
)
### declare the ir/ij_aggregation constraints
libbus.declare_eq_i_aggregation_at_bus(model=model,
index_set=bus_attrs['names'],
bus_bs_fixed_shunts=bus_bs_fixed_shunts,
bus_gs_fixed_shunts=bus_gs_fixed_shunts,
inlet_branches_by_bus=inlet_branches_by_bus,
outlet_branches_by_bus=outlet_branches_by_bus
)
### declare the pq balances
libbus.declare_eq_p_balance_with_i_aggregation(model=model,
index_set=bus_attrs['names'],
bus_p_loads=bus_p_loads,
gens_by_bus=gens_by_bus,
**p_rhs_kwargs
)
libbus.declare_eq_q_balance_with_i_aggregation(model=model,
index_set=bus_attrs['names'],
bus_q_loads=bus_q_loads,
gens_by_bus=gens_by_bus,
**q_rhs_kwargs
)
### declare the thermal limits
libbranch.declare_ineq_s_branch_thermal_limit(model=model,
index_set=branch_attrs['names'],
branches=branches,
s_thermal_limits=s_max,
flow_type=FlowType.CURRENT
)
### declare the voltage min and max inequalities
libbus.declare_ineq_vm_bus_lbub(model=model,
index_set=bus_attrs['names'],
buses=buses,
coordinate_type=CoordinateType.RECTANGULAR
)
### declare angle difference limits on interconnected buses
libbranch.declare_ineq_angle_diff_branch_lbub(model=model,
index_set=branch_attrs['names'],
branches=branches,
coordinate_type=CoordinateType.RECTANGULAR
)
### declare the generator cost objective
libgen.declare_expression_pgqg_operating_cost(model=model,
index_set=gen_attrs['names'],
p_costs=gen_attrs['p_cost'],
q_costs=gen_attrs.get('q_cost', None)
)
obj_expr = sum(model.pg_operating_cost[gen_name] for gen_name in model.pg_operating_cost)
if include_feasibility_slack:
obj_expr += penalty_expr
if hasattr(model, 'qg_operating_cost'):
obj_expr += sum(model.qg_operating_cost[gen_name] for gen_name in model.qg_operating_cost)
model.obj = pe.Objective(expr=obj_expr)
return model, md
def solve_acopf(model_data,
                solver,
                timelimit=None,
                solver_tee=True,
                symbolic_solver_labels=False,
                options=None,
                acopf_model_generator=create_psv_acopf_model,
                return_model=False,
                return_results=False,
                **kwargs):
    '''
    Create and solve a new acopf model

    Parameters
    ----------
    model_data : egret.data.ModelData
        An egret ModelData object with the appropriate data loaded.
    solver : str or pyomo.opt.base.solvers.OptSolver
        Either a string specifying a pyomo solver name, or an instantiated pyomo solver
    timelimit : float (optional)
        Time limit for the acopf run. Default of None results in no time
        limit being set.
    solver_tee : bool (optional)
        Display solver log. Default is True.
    symbolic_solver_labels : bool (optional)
        Use symbolic solver labels. Useful for debugging; default is False.
    options : dict (optional)
        Other options to pass into the solver. Default is dict().
    acopf_model_generator : function (optional)
        Function for generating the acopf model. Default is
        egret.models.acopf.create_psv_acopf_model
    return_model : bool (optional)
        If True, returns the pyomo model object
    return_results : bool (optional)
        If True, returns the pyomo results object
    kwargs : dictionary (optional)
        Additional arguments for building the model
    '''
    import pyomo.environ as pe
    from pyomo.environ import value
    from egret.common.solver_interface import _solve_model
    from egret.model_library.transmission.tx_utils import \
        scale_ModelData_to_pu, unscale_ModelData_to_pu

    m, md = acopf_model_generator(model_data, **kwargs)

    m.dual = pe.Suffix(direction=pe.Suffix.IMPORT)

    m, results = _solve_model(m, solver, timelimit=timelimit, solver_tee=solver_tee,
                              symbolic_solver_labels=symbolic_solver_labels, options=options)

    # save results data to ModelData object
    gens = dict(md.elements(element_type='generator'))
    buses = dict(md.elements(element_type='bus'))
    branches = dict(md.elements(element_type='branch'))

    md.data['system']['total_cost'] = value(m.obj)

    for g, g_dict in gens.items():
        g_dict['pg'] = value(m.pg[g])
        g_dict['qg'] = value(m.qg[g])

    for b, b_dict in buses.items():
        b_dict['lmp'] = value(m.dual[m.eq_p_balance[b]])
        b_dict['qlmp'] = value(m.dual[m.eq_q_balance[b]])
        b_dict['pl'] = value(m.pl[b])
        if hasattr(m, 'vj'):
            b_dict['vm'] = tx_calc.calculate_vm_from_vj_vr(value(m.vj[b]), value(m.vr[b]))
            b_dict['va'] = tx_calc.calculate_va_from_vj_vr(value(m.vj[b]), value(m.vr[b]))
        else:
            b_dict['vm'] = value(m.vm[b])
            b_dict['va'] = value(m.va[b])

    for k, k_dict in branches.items():
        if hasattr(m, 'pf'):
            k_dict['pf'] = value(m.pf[k])
            k_dict['pt'] = value(m.pt[k])
            k_dict['qf'] = value(m.qf[k])
            k_dict['qt'] = value(m.qt[k])
        if hasattr(m, 'ifr'):  # was hasattr(m, 'irf'), which never matches the declared ifr/ifj/itr/itj variables
            b = k_dict['from_bus']
            k_dict['pf'] = value(tx_calc.calculate_p(value(m.ifr[k]), value(m.ifj[k]), value(m.vr[b]), value(m.vj[b])))
            k_dict['qf'] = value(tx_calc.calculate_q(value(m.ifr[k]), value(m.ifj[k]), value(m.vr[b]), value(m.vj[b])))
            b = k_dict['to_bus']
            k_dict['pt'] = value(tx_calc.calculate_p(value(m.itr[k]), value(m.itj[k]), value(m.vr[b]), value(m.vj[b])))
            k_dict['qt'] = value(tx_calc.calculate_q(value(m.itr[k]), value(m.itj[k]), value(m.vr[b]), value(m.vj[b])))

    unscale_ModelData_to_pu(md, inplace=True)

    if return_model and return_results:
        return md, m, results
    elif return_model:
        return md, m
    elif return_results:
        return md, results
    return md
if __name__ == '__main__':
    import os
    from egret.parsers.matpower_parser import create_ModelData

    path = os.path.dirname(__file__)
    filename = 'pglib_opf_case14_ieee.m'
    matpower_file = os.path.join(path, '../../download/pglib-opf/', filename)
    model_data = create_ModelData(matpower_file)
    kwargs = {'include_feasibility_slack': False}
    md, m, results = solve_acopf(model_data, "ipopt", acopf_model_generator=create_psv_acopf_model,
                                 return_model=True, return_results=True, **kwargs)
    md, m, results = solve_acopf(model_data, "ipopt", acopf_model_generator=create_rsv_acopf_model,
                                 return_model=True, return_results=True, **kwargs)
    md, m, results = solve_acopf(model_data, "ipopt", acopf_model_generator=create_riv_acopf_model,
                                 return_model=True, return_results=True, **kwargs)
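After a solve, the results live in the plain-dict `ModelData` layout that `solve_acopf` writes into (`md.data['system']['total_cost']`, per-generator `'pg'`/`'qg'`). A minimal sketch of reading those values back out; `md_data` here is a hand-built stand-in with illustrative numbers, so no Egret install or Ipopt solve is needed to follow the access pattern:

```python
# Stand-in for md.data after solve_acopf: the nested-dict layout mirrors the
# keys assigned in this file; the numbers are purely illustrative.
md_data = {
    'system': {'total_cost': 5812.6},
    'elements': {
        'generator': {
            '1': {'pg': 1.94, 'qg': -0.05},
            '2': {'pg': 0.37, 'qg': 0.24},
        }
    }
}

# Headline objective value, then the per-generator dispatch.
print('total cost:', md_data['system']['total_cost'])
for name, g in sorted(md_data['elements']['generator'].items()):
    print(name, 'pg=%.2f' % g['pg'], 'qg=%.2f' % g['qg'])
```

The same traversal works on the real object returned by `solve_acopf`, since `ModelData` stores elements under `md.data['elements'][element_type][name]`.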
# === file: bin/wa_workloads_eai_handler.py | repo: gayatrisingh31/grand_central | license: MIT ===
import logging
import sys
import uuid
import splunk.admin as admin
import wa_workloads_eai_handler_schema
import urllib
import re
import json
import errno
import base_eai_handler
import log_helper
import time
from splunk.clilib.bundle_paths import make_splunkhome_path
from well_architected import WellArchitected
from wa_constants import *

# Setup the logger
logger = log_helper.setup(logging.INFO, 'WaWorkloadsEAIHandler', 'wa_workloads_eai_handler.log')


class ExampleEAIHandler(base_eai_handler.BaseEAIHandler):
    def setup(self):
        # Add our supported args
        for arg in wa_workloads_eai_handler_schema.ALL_FIELDS:
            self.supportedArgs.addOptArg(arg)

    def handleList(self, confInfo):
        """
        Called when user invokes the "list" action. Returns the contents of wa_workloads.conf

        Arguments
        confInfo -- The object containing the information about what is being requested.
        """
        logger.info('List requested.')

        # Fetch from wa_workloads conf handler
        wa_workloads_eai_handler_conf_path = self.get_conf_handler_path_name('wa_workloads', self.userName)
        wa_workloads_eai_handler_conf_response_payload = self.simple_request_eai(
            wa_workloads_eai_handler_conf_path, 'list', 'GET', get_args={'count': -1})
        self.set_conf_info_from_eai_payload(confInfo, wa_workloads_eai_handler_conf_response_payload)

    def handleCreate(self, confInfo):
        """
        Called when user invokes the 'create' action.

        Arguments
        confInfo -- The object containing the information about what is being requested.
        """
        logger.info('Create requested.')

        # validate params
        params = self.validate_schema_params()

        # Get aws creds
        grand_central_aws_account_eai_response_payload = self.simple_request_eai(
            params['organization_master_account_link_alternate'],
            'list',
            'GET'
        )
        aws_access_key = grand_central_aws_account_eai_response_payload['entry'][0]['content']['aws_access_key']
        aws_secret_key_link_alternate = grand_central_aws_account_eai_response_payload['entry'][0]['content']['aws_secret_key_link_alternate']

        passwords_conf_payload = self.simple_request_eai(aws_secret_key_link_alternate, 'list', 'GET')
        aws_secret_key = passwords_conf_payload['entry'][0]['content']['clear_password']

        # Extract POST params
        name = params['name']
        workload_name = params['workload_name']
        description = params['description']
        environment = params['environment']
        regions = params['regions'].replace(' ', '').split(',')
        pillar_priorities = params['pillar_priorities'].replace(' ', '').split(',')
        lenses = params['lenses'].replace(' ', '').split(',')
        review_owner = params['review_owner']

        # create wa instance
        try:
            wa = WellArchitected(
                {
                    'service_name': 'wellarchitected',
                    'endpoint_url': 'https://vjsx9j0t83.execute-api.us-west-2.amazonaws.com/Prod',
                    'region_name': 'us-west-2',
                    'aws_access_key_id': aws_access_key,
                    'aws_secret_access_key': aws_secret_key
                }
            )
        except Exception as e:
            logger.error('Error when instantiating wa: {}'.format(e))
            raise admin.InternalException('Error when instantiating wa: {}'.format(e))

        # create workload
        try:
            response = wa.create_workload(
                {
                    'Name': workload_name,
                    'Description': description,
                    'Environment': environment,
                    'AwsRegions': regions,
                    'PillarPriorities': pillar_priorities,
                    'Lenses': lenses,
                    'ReviewOwner': review_owner
                }
            )
        except Exception as e:
            logger.error('Error creating workload: {}'.format(e))
            raise admin.InternalException('Error creating workload: {}'.format(e))

        post_args = {
            'name': name,
            'organization_master_account_link_alternate': params['organization_master_account_link_alternate'],
            'workload_name': workload_name,
            'workload_id': response['Id'],
            'workload_owner': params['review_owner'],
            'workload_arn': response['Arn'],
            'aws_regions': regions
        }

        # Create stanza in wa_workloads.conf
        wa_workloads_eai_response_payload = self.simple_request_eai(
            self.get_conf_handler_path_name('wa_workloads'),
            'create',
            'POST',
            post_args
        )

        # Always populate entry content from request to handler.
        wa_workloads_rest_path = '/servicesNS/{}/{}/wa_workloads/{}'.format(
            'nobody',
            self.appName,
            urllib.quote_plus(name)
        )
        wa_workloads_eai_handler_response_payload = self.simple_request_eai(
            wa_workloads_rest_path,
            'read',
            'GET'
        )
        self.set_conf_info_from_eai_payload(confInfo, wa_workloads_eai_response_payload)

    def handleEdit(self, confInfo):
        """
        Called when user invokes the 'edit' action.

        Arguments
        confInfo -- The object containing the information about what is being requested.
        """
        logger.info('Update requested.')

        params = self.validate_schema_params()
        conf_stanza = urllib.quote_plus(params.get('name'))
        del params['name']

        # Get workload ID
        deployed_stacksets_rest_path = '/servicesNS/{}/{}/wa_workloads/{}'.format(
            'nobody',
            self.appName,
            urllib.quote_plus(conf_stanza)
        )
        wa_workloads_eai_response_payload = self.simple_request_eai(
            deployed_stacksets_rest_path,
            'read',
            'GET'
        )
        workload_id = wa_workloads_eai_response_payload['entry'][0]['content']['workload_id']
        organization_master_account_link_alternate = wa_workloads_eai_response_payload['entry'][0]['content']['organization_master_account_link_alternate']

        # Get aws creds
        grand_central_aws_account_eai_response_payload = self.simple_request_eai(
            organization_master_account_link_alternate,
            'list',
            'GET'
        )
        aws_access_key = grand_central_aws_account_eai_response_payload['entry'][0]['content']['aws_access_key']
        aws_secret_key_link_alternate = grand_central_aws_account_eai_response_payload['entry'][0]['content']['aws_secret_key_link_alternate']

        passwords_conf_payload = self.simple_request_eai(aws_secret_key_link_alternate, 'list', 'GET')
        aws_secret_key = passwords_conf_payload['entry'][0]['content']['clear_password']

        # Extract POST params
        workload_name = params['workload_name']
        description = params['description']
        environment = params['environment']
        regions = params['regions'].replace(' ', '').split(',')
        pillar_priorities = params['pillar_priorities'].replace(' ', '').split(',')
        lenses = params['lenses'].replace(' ', '').split(',')
        review_owner = params['review_owner']

        # create wa instance
        try:
            wa = WellArchitected(
                {
                    'service_name': 'wellarchitected',
                    'endpoint_url': 'https://vjsx9j0t83.execute-api.us-west-2.amazonaws.com/Prod',
                    'region_name': 'us-west-2',
                    'aws_access_key_id': aws_access_key,
                    'aws_secret_access_key': aws_secret_key
                }
            )
        except Exception as e:
            logger.error('Error when instantiating wa: {}'.format(e))
            raise admin.InternalException('Error when instantiating wa: {}'.format(e))

        # edit workload
        try:
            response = wa.update_workload(
                {
                    'Id': workload_id,
                    'Name': workload_name,
                    'Description': description,
                    'Environment': environment,
                    # 'AwsRegions': regions,
                    # 'PillarPriorities': pillar_priorities,
                    # 'ReviewOwner': review_owner
                }
            )
        except Exception as e:
            logger.error('Error editing workload: {}'.format(e))
            raise admin.InternalException('Error editing workload: {}'.format(e))

        post_args = {
            # 'name': conf_stanza,
            'organization_master_account_link_alternate': params['organization_master_account_link_alternate'],
            'workload_name': workload_name,
            'workload_id': response['Workload']['Id'],
            'workload_owner': params['review_owner'],
            'workload_arn': response['Workload']['Arn'],
            'aws_regions': regions
        }

        conf_handler_path = '{}/{}'.format(
            self.get_conf_handler_path_name('wa_workloads', 'nobody'),
            conf_stanza
        )

        # Edit wa_workloads.conf
        wa_workloads_eai_handler_response_payload = self.simple_request_eai(
            conf_handler_path,
            'edit',
            'POST',
            post_args
        )

        # Always populate entry content from request to handler.
        wa_workloads_eai_handler_rest_path = '/servicesNS/{}/{}/wa_workloads/{}'.format(
            'nobody',
            self.appName,
            conf_stanza
        )
        wa_workloads_eai_handler_response_payload = self.simple_request_eai(
            wa_workloads_eai_handler_rest_path,
            'read',
            'GET'
        )
        self.set_conf_info_from_eai_payload(confInfo, wa_workloads_eai_handler_response_payload)

    def handleRemove(self, confInfo):
        """
        Called when user invokes the 'delete' action. Removes the requested stanza from wa_workloads.conf

        Arguments
        confInfo -- The object containing the information about what is being requested.
        """
        logger.info('Conf stanza deletion requested.')

        name = self.callerArgs.id
        conf_stanza = urllib.quote_plus(name)

        # Get workload ID
        deployed_stacksets_rest_path = '/servicesNS/{}/{}/wa_workloads/{}'.format(
            'nobody',
            self.appName,
            urllib.quote_plus(name)
        )
        wa_workloads_eai_response_payload = self.simple_request_eai(
            deployed_stacksets_rest_path,
            'read',
            'GET'
        )
        workload_id = wa_workloads_eai_response_payload['entry'][0]['content']['workload_id']
        organization_master_account_link_alternate = wa_workloads_eai_response_payload['entry'][0]['content']['organization_master_account_link_alternate']

        # Get aws creds
        grand_central_aws_account_eai_response_payload = self.simple_request_eai(
            organization_master_account_link_alternate,
            'list',
            'GET'
        )
        aws_access_key = grand_central_aws_account_eai_response_payload['entry'][0]['content']['aws_access_key']
        aws_secret_key_link_alternate = grand_central_aws_account_eai_response_payload['entry'][0]['content']['aws_secret_key_link_alternate']

        passwords_conf_payload = self.simple_request_eai(aws_secret_key_link_alternate, 'list', 'GET')
        aws_secret_key = passwords_conf_payload['entry'][0]['content']['clear_password']

        # create wa instance
        try:
            wa = WellArchitected(
                {
                    'service_name': 'wellarchitected',
                    'endpoint_url': 'https://vjsx9j0t83.execute-api.us-west-2.amazonaws.com/Prod',
                    'region_name': 'us-west-2',
                    'aws_access_key_id': aws_access_key,
                    'aws_secret_access_key': aws_secret_key
                }
            )
        except Exception as e:
            logger.error('Error when instantiating wa: {}'.format(e))
            raise admin.InternalException('Error when instantiating wa: {}'.format(e))

        # delete workload
        try:
            wa.delete_workload({
                'Id': workload_id
            })
        except Exception as e:
            logger.error('Error when deleting workload: {}'.format(e))
            raise admin.InternalException('Error when deleting workload: {}'.format(e))

        # Delete wa_workloads.conf stanza
        conf_handler_path = '{}/{}'.format(
            self.get_conf_handler_path_name('wa_workloads'),
            conf_stanza
        )
        wa_workloads_eai_handler_response_payload = self.simple_request_eai(
            conf_handler_path,
            'remove',
            'DELETE'
        )

        # Always populate entry content from request to handler.
        wa_workloads_rest_path = '/servicesNS/{}/{}/wa_workloads'.format('nobody', self.appName)
        wa_workloads_eai_handler_response_payload = self.simple_request_eai(
            wa_workloads_rest_path,
            'list',
            'GET',
            get_args={'count': -1}
        )
        self.set_conf_info_from_eai_payload(confInfo, wa_workloads_eai_handler_response_payload)

    def validate_schema_params(self):
        """
        Validates raw request params against the wa_workloads schema
        """
        schema = wa_workloads_eai_handler_schema.example_schema
        params = self.get_params(schema=wa_workloads_eai_handler_schema, filter=wa_workloads_eai_handler_schema.CONF_FIELDS)
        return self.validate_params(schema, params)


admin.init(ExampleEAIHandler, admin.CONTEXT_NONE)
# === file: spark/pyspark/examples/gis/intersecion_gen.py | repo: beautifulpython/GIS | license: Apache-2.0 ===
with open('/tmp/intersection.json', 'w') as file:
file.write('{"left": "POINT(0 0)", "right": "LINESTRING ( 2 0, 0 2 )"}\n')
file.write('{"left": "POINT(0 0)", "right": "LINESTRING ( 0 0, 0 2 )"}\n')
# === file: app/auth/__init__.py | repo: Alexotieno1717/blog-website | license: MIT ===
from flask import Blueprint
auth = Blueprint('auth', __name__, template_folder='templates/auth')
from app.auth import views, forms
# === file: samples/PolynomialDistractor.py | repo: JA-VON/Question-Generator | license: Apache-2.0 ===
def distractor_1(polynomial):
    return str(polynomial.get_degree() + 1)


def distractor_2(polynomial):
    return str(polynomial.get_degree() - 1000)


def distractor_3(polynomial):
    return str(polynomial.get_degree() + 100)


def distractor_4(polynomial):
    if polynomial.get_degree() != 0:  # call the method; the original compared the bound method itself, which is always truthy
        return "0"


def distractor_5(polynomial):
    return "No such thing as a degree"
# === file: kiali_qe/tests/test_istio_objects_crud.py | repo: Hawkular-QE/kiali-qe-python | license: Apache-2.0 ===
import pytest
from openshift.dynamic.exceptions import InternalServerError
from kubernetes.client.rest import ApiException
from kiali_qe.tests import IstioConfigPageTest, ServicesPageTest
from kiali_qe.utils import get_yaml, get_dict
from kiali_qe.utils.path import istio_objects_path
from kiali_qe.components.enums import (
    IstioConfigObjectType,
    IstioConfigPageFilter,
    IstioConfigValidationType,
    AuthPolicyType,
    AuthPolicyActionType,
    MutualTLSMode
)
from kiali_qe.components.error_codes import (
    KIA0202,
    KIA0201,
    KIA0203,
    KIA0209,
    KIA1106,
    KIA1107,
    KIA0101,
    KIA0102,
    KIA1101,
    KIA0004,
    KIA0001,
    KIA1105,
    KIA0104
)
'''
Tests are divided into groups using different services and namespaces. This way the group of tests
can be run in parallel.
'''
BOOKINFO_1 = 'bookinfo'
BOOKINFO_2 = 'bookinfo2'
BOOKINFO_3 = 'bookinfo3'
REVIEWS = 'reviews'
DETAILS = 'details'
RATINGS = 'ratings'
DEST_RULE = 'destination-rule-cb-details.yaml'
DEST_RULE_VS_RATINGS = 'destination-rule-ratings.yaml'
DEST_RULE_VS_REVIEWS = 'destination-rule-reviews.yaml'
DEST_RULE_BROKEN = 'destination-rule-cb-broken.yaml'
DEST_RULE_WARNING = 'dest-rules-svc.yaml'
DEST_RULE_HOST_WARNING = 'destination-rule-host-wrong.yaml'
VIRTUAL_SERVICE = 'virtual-service.yaml'
VIRTUAL_SERVICE_SUBSET = 'virtual-service-subset-duplicate.yaml'
VIRTUAL_SERVICE_BROKEN = 'virtual-service-broken.yaml'
VIRTUAL_SERVICE_BROKEN_WEIGHT = 'virtual-service-broken-weight.yaml'
VIRTUAL_SERVICE_BROKEN_WEIGHT_TEXT = 'virtual-service-broken-weight-text.yaml'
VIRTUAL_SERVICE_SVC = 'virtual-service-svc.yaml'
VIRTUAL_SERVICE_SVC2 = 'virtual-service-svc2.yaml'
GATEWAY = 'gateway.yaml'
GATEWAY_LINK = 'gateway-link.yaml'
SERVICE_ENTRY = 'service-entry.yaml'
SERVICE_MESH_RBAC_CONFIG = 'service-mesh-rbac-config.yaml'
RBAC_CONFIG = 'rbac-config.yaml'
AUTH_POLICY = 'auth-policy.yaml'
SERVICE_ROLE = 'service-role.yaml'
SERVICE_ROLE_BROKEN = 'service-role-broken.yaml'
SERVICE_ROLE_BINDING = 'service-role-binding.yaml'
SERVICE_ROLE_BINDING_BROKEN = 'service-role-binding-broken.yaml'
@pytest.mark.p_smoke
@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group1
def __test_destination_rule(kiali_client, openshift_client, browser):
destination_rule = get_yaml(istio_objects_path.strpath, DEST_RULE)
destination_rule_dict = get_dict(istio_objects_path.strpath, DEST_RULE)
_istio_config_test(kiali_client, openshift_client, browser,
destination_rule_dict,
destination_rule,
[
{'name': IstioConfigPageFilter.ISTIO_TYPE.text,
'value': IstioConfigObjectType.DESTINATION_RULE.text},
{'name': IstioConfigPageFilter.CONFIG.text,
'value': IstioConfigValidationType.VALID.text},
{'name': IstioConfigPageFilter.ISTIO_NAME.text,
'value': destination_rule_dict.metadata.name}
],
namespace=BOOKINFO_1,
kind='DestinationRule',
api_version='networking.istio.io/v1alpha3',
service_name=DETAILS,
check_service_details=True)
@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group1
def test_destination_rule_broken(kiali_client, openshift_client, browser):
destination_rule_broken = get_yaml(istio_objects_path.strpath, DEST_RULE_BROKEN)
destination_rule_broken_dict = get_dict(istio_objects_path.strpath, DEST_RULE_BROKEN)
_istio_config_test(kiali_client, openshift_client, browser,
destination_rule_broken_dict,
destination_rule_broken,
[
{'name': IstioConfigPageFilter.ISTIO_TYPE.text,
'value': IstioConfigObjectType.DESTINATION_RULE.text},
{'name': IstioConfigPageFilter.CONFIG.text,
'value': IstioConfigValidationType.NOT_VALID.text},
{'name': IstioConfigPageFilter.ISTIO_NAME.text,
'value': destination_rule_broken_dict.metadata.name}
],
namespace=BOOKINFO_2,
kind='DestinationRule',
api_version='networking.istio.io/v1alpha3',
service_name=DETAILS,
error_messages=[KIA0203, KIA0209],
check_service_details=False)
@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group1
def test_destination_rule_svc_warning(kiali_client, openshift_client, browser):
destination_rule_warning = get_yaml(istio_objects_path.strpath, DEST_RULE_WARNING)
destination_rule_warning_dict = get_dict(istio_objects_path.strpath, DEST_RULE_WARNING)
_create_dest_rule_vs(openshift_client, DEST_RULE_VS_REVIEWS)
_istio_config_test(kiali_client, openshift_client, browser,
destination_rule_warning_dict,
destination_rule_warning,
[
{'name': IstioConfigPageFilter.ISTIO_TYPE.text,
'value': IstioConfigObjectType.DESTINATION_RULE.text},
{'name': IstioConfigPageFilter.CONFIG.text,
'value': IstioConfigValidationType.WARNING.text},
{'name': IstioConfigPageFilter.ISTIO_NAME.text,
'value': 'reviews-dr2-svc'}
],
namespace=BOOKINFO_2,
kind='DestinationRule',
api_version='networking.istio.io/v1alpha3',
service_name=DETAILS,
error_messages=[KIA0201],
check_service_details=False)
_delete_dest_rule_vs(openshift_client, DEST_RULE_VS_REVIEWS)
@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group1
def test_destination_rule_host_warning(kiali_client, openshift_client, browser):
destination_rule_warning = get_yaml(istio_objects_path.strpath, DEST_RULE_HOST_WARNING)
destination_rule_warning_dict = get_dict(istio_objects_path.strpath, DEST_RULE_HOST_WARNING)
_istio_config_test(kiali_client, openshift_client, browser,
destination_rule_warning_dict,
destination_rule_warning,
[
{'name': IstioConfigPageFilter.ISTIO_TYPE.text,
'value': IstioConfigObjectType.DESTINATION_RULE.text},
{'name': IstioConfigPageFilter.CONFIG.text,
'value': IstioConfigValidationType.WARNING.text},
{'name': IstioConfigPageFilter.ISTIO_NAME.text,
'value': 'reviews-dr2-svc'}
],
namespace=BOOKINFO_1,
kind='DestinationRule',
api_version='networking.istio.io/v1alpha3',
service_name=DETAILS,
error_messages=[KIA0202],
check_service_details=False)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group2
def test_virtual_service(kiali_client, openshift_client, browser):
gateway = get_yaml(istio_objects_path.strpath, GATEWAY_LINK)
gateway_dict = get_dict(istio_objects_path.strpath, GATEWAY_LINK)
_istio_config_create(openshift_client, gateway_dict, gateway,
'Gateway',
'networking.istio.io/v1alpha3',
namespace=BOOKINFO_2)
virtual_service = get_yaml(istio_objects_path.strpath, VIRTUAL_SERVICE)
virtual_service_dict = get_dict(istio_objects_path.strpath, VIRTUAL_SERVICE)
_create_dest_rule_vs(openshift_client, DEST_RULE_VS_REVIEWS)
_istio_config_test(kiali_client, openshift_client, browser,
virtual_service_dict,
virtual_service,
[
{'name': IstioConfigPageFilter.ISTIO_TYPE.text,
'value': IstioConfigObjectType.VIRTUAL_SERVICE.text},
{'name': IstioConfigPageFilter.CONFIG.text,
'value': IstioConfigValidationType.VALID.text},
{'name': IstioConfigPageFilter.ISTIO_NAME.text,
'value': virtual_service_dict.metadata.name}
],
namespace=BOOKINFO_2,
kind='VirtualService',
api_version='networking.istio.io/v1alpha3',
service_name=REVIEWS,
check_service_details=False,
delete_istio_config=False)
_vs_gateway_link_test(kiali_client, openshift_client, browser, gateway_dict,
kind='Gateway',
vs_name=virtual_service_dict.metadata.name,
namespace=BOOKINFO_2)
_delete_dest_rule_vs(openshift_client, DEST_RULE_VS_REVIEWS)
_delete_gateway_vs(openshift_client, GATEWAY_LINK)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group2
def test_virtual_service_svc_warning(kiali_client, openshift_client, browser):
virtual_service = get_yaml(istio_objects_path.strpath, VIRTUAL_SERVICE_SVC)
virtual_service_dict = get_dict(istio_objects_path.strpath, VIRTUAL_SERVICE_SVC)
_istio_config_create(openshift_client, virtual_service_dict, virtual_service,
'VirtualService',
'networking.istio.io/v1alpha3',
namespace=BOOKINFO_2)
virtual_service2 = get_yaml(istio_objects_path.strpath, VIRTUAL_SERVICE_SVC2)
virtual_service_dict2 = get_dict(istio_objects_path.strpath, VIRTUAL_SERVICE_SVC2)
_istio_config_create(openshift_client, virtual_service_dict2, virtual_service2,
'VirtualService',
'networking.istio.io/v1alpha3',
namespace=BOOKINFO_2)
_create_dest_rule_vs(openshift_client, DEST_RULE_VS_REVIEWS)
_istio_config_details_test(kiali_client,
openshift_client,
browser,
virtual_service_dict,
virtual_service,
namespace=BOOKINFO_2,
kind='VirtualService',
api_version='networking.istio.io/v1alpha3',
error_messages=[KIA1106])
_istio_config_details_test(kiali_client,
openshift_client,
browser,
virtual_service_dict2,
virtual_service2,
namespace=BOOKINFO_2,
kind='VirtualService',
api_version='networking.istio.io/v1alpha3',
error_messages=[KIA1106])
_delete_dest_rule_vs(openshift_client, DEST_RULE_VS_REVIEWS)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group2
def test_virtual_service_subset_warning(kiali_client, openshift_client, browser):
gateway = get_yaml(istio_objects_path.strpath, GATEWAY)
gateway_dict = get_dict(istio_objects_path.strpath, GATEWAY)
_istio_config_create(openshift_client, gateway_dict, gateway,
'Gateway',
'networking.istio.io/v1alpha3',
namespace=BOOKINFO_1)
virtual_service = get_yaml(istio_objects_path.strpath, VIRTUAL_SERVICE_SUBSET)
virtual_service_dict = get_dict(istio_objects_path.strpath, VIRTUAL_SERVICE_SUBSET)
_istio_config_create(openshift_client, virtual_service_dict, virtual_service,
'VirtualService',
'networking.istio.io/v1alpha3',
namespace=BOOKINFO_1)
_create_dest_rule_vs(openshift_client, DEST_RULE_VS_REVIEWS)
_istio_config_details_test(kiali_client,
openshift_client,
browser,
virtual_service_dict,
virtual_service,
namespace=BOOKINFO_1,
kind='VirtualService',
api_version='networking.istio.io/v1alpha3',
error_messages=[KIA1105, KIA1105])
_delete_dest_rule_vs(openshift_client, DEST_RULE_VS_REVIEWS)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group2
def test_virtual_service_broken(kiali_client, openshift_client, browser):
virtual_service_broken = get_yaml(istio_objects_path.strpath, VIRTUAL_SERVICE_BROKEN)
virtual_service_broken_dict = get_dict(istio_objects_path.strpath, VIRTUAL_SERVICE_BROKEN)
_create_dest_rule_vs(openshift_client, DEST_RULE_VS_REVIEWS, BOOKINFO_3)
try:
_istio_config_test(kiali_client, openshift_client, browser,
virtual_service_broken_dict,
virtual_service_broken,
[
{'name': IstioConfigPageFilter.ISTIO_TYPE.text,
'value': IstioConfigObjectType.VIRTUAL_SERVICE.text},
{'name': IstioConfigPageFilter.CONFIG.text,
'value': IstioConfigValidationType.NOT_VALID.text},
{'name': IstioConfigPageFilter.ISTIO_NAME.text,
'value': virtual_service_broken_dict.metadata.name}
],
namespace=BOOKINFO_3,
kind='VirtualService',
api_version='networking.istio.io/v1alpha3',
service_name=REVIEWS,
error_messages=[KIA1101, KIA1107],
check_service_details=False)
finally:
_delete_dest_rule_vs(openshift_client, DEST_RULE_VS_REVIEWS)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group2
def test_virtual_service_broken_weight(kiali_client, openshift_client, browser):
virtual_service_broken = get_yaml(istio_objects_path.strpath,
VIRTUAL_SERVICE_BROKEN_WEIGHT)
virtual_service_broken_dict = get_dict(istio_objects_path.strpath,
VIRTUAL_SERVICE_BROKEN_WEIGHT)
try:
_istio_config_create(
openshift_client,
virtual_service_broken_dict,
virtual_service_broken,
namespace=BOOKINFO_1,
kind='VirtualService',
api_version='networking.istio.io/v1alpha3')
assert False, "'Weight sum should be 100' "\
"error should be thrown, or Galley is not configured"
    except (ApiException, InternalServerError):
        # expected: the API server rejects the invalid weight sum
        pass
    finally:
        # clean up in case the broken config was unexpectedly created
        _istio_config_delete(openshift_client, virtual_service_broken_dict.metadata.name,
                             'VirtualService', 'networking.istio.io/v1alpha3',
                             namespace=BOOKINFO_1)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group2
def test_virtual_service_broken_weight_text(kiali_client, openshift_client, browser):
virtual_service_broken = get_yaml(istio_objects_path.strpath,
VIRTUAL_SERVICE_BROKEN_WEIGHT_TEXT)
virtual_service_broken_dict = get_dict(istio_objects_path.strpath,
VIRTUAL_SERVICE_BROKEN_WEIGHT_TEXT)
try:
_create_dest_rule_vs(openshift_client, DEST_RULE_VS_RATINGS)
_istio_config_test(kiali_client, openshift_client, browser,
virtual_service_broken_dict,
virtual_service_broken,
[
{'name': IstioConfigPageFilter.ISTIO_TYPE.text,
'value': IstioConfigObjectType.VIRTUAL_SERVICE.text},
{'name': IstioConfigPageFilter.CONFIG.text,
'value': IstioConfigValidationType.NOT_VALID.text},
{'name': IstioConfigPageFilter.ISTIO_NAME.text,
'value': virtual_service_broken_dict.metadata.name}
],
namespace=BOOKINFO_1,
kind='VirtualService',
api_version='networking.istio.io/v1alpha3',
service_name=RATINGS,
error_messages=['Weight must be a number',
'Weight sum should be 100'],
check_service_details=False)
    except (ApiException, InternalServerError):
        pass
    finally:
        # delete the destination rule even when the steps above raise
        _delete_dest_rule_vs(openshift_client, DEST_RULE_VS_RATINGS)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group3
def test_gateway(kiali_client, openshift_client, browser, pick_namespace):
gateway = get_yaml(istio_objects_path.strpath, GATEWAY)
gateway_dict = get_dict(istio_objects_path.strpath, GATEWAY)
namespace = pick_namespace(BOOKINFO_2)
_istio_config_test(kiali_client, openshift_client, browser,
gateway_dict,
gateway,
[
{'name': IstioConfigPageFilter.ISTIO_TYPE.text,
'value': IstioConfigObjectType.GATEWAY.text},
{'name': IstioConfigPageFilter.ISTIO_NAME.text,
'value': gateway_dict.metadata.name}
],
namespace=namespace,
kind='Gateway',
api_version='networking.istio.io/v1alpha3',
service_name=REVIEWS,
check_service_details=False)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group3
def test_gateway_create(kiali_client, openshift_client, browser, pick_namespace):
namespace = pick_namespace(BOOKINFO_2)
gateway_name = 'gatewaytocreate'
namespaces = [BOOKINFO_1, namespace]
try:
_delete_gateways(openshift_client, gateway_name, namespaces)
tests = IstioConfigPageTest(
kiali_client=kiali_client, openshift_client=openshift_client, browser=browser)
tests.test_gateway_create(gateway_name, 'www.google.com', 'http', '8080',
namespaces=namespaces)
finally:
_delete_gateways(openshift_client, gateway_name, namespaces)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group3
def test_sidecar_create(kiali_client, openshift_client, browser, pick_namespace):
namespace = pick_namespace(BOOKINFO_2)
sidecar_name = 'sidecartocreate'
namespaces = [BOOKINFO_1, namespace]
try:
_delete_sidecars(openshift_client, sidecar_name, namespaces)
tests = IstioConfigPageTest(
kiali_client=kiali_client, openshift_client=openshift_client, browser=browser)
tests.test_sidecar_create(name=sidecar_name, egress_host='bookinfo/*',
labels='<name>=<value>', namespaces=namespaces)
finally:
_delete_sidecars(openshift_client, sidecar_name, namespaces)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group3
def test_authpolicy_deny_all_create(kiali_client, openshift_client, browser, pick_namespace):
namespace = pick_namespace(BOOKINFO_2)
authpolicy_name = 'authorizationpolicies'
namespaces = [BOOKINFO_1, namespace]
try:
_delete_authpolicies(openshift_client, authpolicy_name, namespaces)
tests = IstioConfigPageTest(
kiali_client=kiali_client, openshift_client=openshift_client, browser=browser)
tests.test_authpolicy_create(name=authpolicy_name,
policy=AuthPolicyType.DENY_ALL.text,
namespaces=namespaces)
finally:
_delete_authpolicies(openshift_client, authpolicy_name, namespaces)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group3
def test_authpolicy_allow_all_create(kiali_client, openshift_client, browser, pick_namespace):
namespace = pick_namespace(BOOKINFO_2)
authpolicy_name = 'authpolicyallowallcreate'
namespaces = [BOOKINFO_1, namespace]
try:
_delete_authpolicies(openshift_client, authpolicy_name, namespaces)
tests = IstioConfigPageTest(
kiali_client=kiali_client, openshift_client=openshift_client, browser=browser)
tests.test_authpolicy_create(name=authpolicy_name,
policy=AuthPolicyType.ALLOW_ALL.text,
namespaces=namespaces)
finally:
_delete_authpolicies(openshift_client, authpolicy_name, namespaces)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group3
def test_authpolicy_rules_allow_create(kiali_client, openshift_client, browser, pick_namespace):
namespace = pick_namespace(BOOKINFO_2)
authpolicy_name = 'authpolicyrulesallowtocreate'
namespaces = [BOOKINFO_1, namespace]
try:
_delete_authpolicies(openshift_client, authpolicy_name, namespaces)
tests = IstioConfigPageTest(
kiali_client=kiali_client, openshift_client=openshift_client, browser=browser)
tests.test_authpolicy_create(name=authpolicy_name,
policy=AuthPolicyType.RULES.text,
namespaces=namespaces,
policy_action=AuthPolicyActionType.ALLOW.text)
finally:
_delete_authpolicies(openshift_client, authpolicy_name, namespaces)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group3
def test_authpolicy_rules_deny_disabled(kiali_client, openshift_client, browser, pick_namespace):
namespace = pick_namespace(BOOKINFO_2)
authpolicy_name = 'authpolicyrulesdenydisabled'
namespaces = [BOOKINFO_1, namespace]
tests = IstioConfigPageTest(
kiali_client=kiali_client, openshift_client=openshift_client, browser=browser)
tests.test_authpolicy_create(name=authpolicy_name,
policy=AuthPolicyType.RULES.text,
namespaces=namespaces,
policy_action=AuthPolicyActionType.DENY.text)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group3
def test_peerauth_create(kiali_client, openshift_client, browser, pick_namespace):
namespace = pick_namespace(BOOKINFO_2)
name = 'peerauthtocreate'
namespaces = [BOOKINFO_1, namespace]
try:
_delete_peerauths(openshift_client, name, namespaces)
tests = IstioConfigPageTest(
kiali_client=kiali_client, openshift_client=openshift_client, browser=browser)
tests.test_peerauth_create(name=name, mtls_mode=MutualTLSMode.PERMISSIVE.text,
labels='app=value', namespaces=namespaces,
mtls_ports={'8080': MutualTLSMode.STRICT.text})
finally:
_delete_peerauths(openshift_client, name, namespaces)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group3
def test_peerauth_create_disabled(kiali_client, openshift_client, browser, pick_namespace):
"""
MTLS Ports require Labels
"""
namespace = pick_namespace(BOOKINFO_2)
name = 'peerauthtocreatedisabled'
namespaces = [BOOKINFO_1, namespace]
try:
_delete_peerauths(openshift_client, name, namespaces)
tests = IstioConfigPageTest(
kiali_client=kiali_client, openshift_client=openshift_client, browser=browser)
tests.test_peerauth_create(name=name, mtls_mode=MutualTLSMode.PERMISSIVE.text,
                                   labels='app=value', namespaces=namespaces,
                                   mtls_ports={'8080': MutualTLSMode.STRICT.text})
finally:
_delete_peerauths(openshift_client, name, namespaces)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group4
def test_requestauth_create(kiali_client, openshift_client, browser, pick_namespace):
namespace = pick_namespace(BOOKINFO_2)
name = 'requestauthtocreate'
namespaces = [BOOKINFO_1, namespace]
try:
_delete_requestauths(openshift_client, name, namespaces)
tests = IstioConfigPageTest(
kiali_client=kiali_client, openshift_client=openshift_client, browser=browser)
tests.test_requestauth_create(name=name,
labels='app=value',
namespaces=namespaces,
jwt_rules={'issuer': 'value1'})
finally:
_delete_requestauths(openshift_client, name, namespaces)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group4
def test_requestauth_create_disabled(kiali_client, openshift_client, browser, pick_namespace):
namespace = pick_namespace(BOOKINFO_2)
name = 'requestauthtocreatedisabled'
namespaces = [BOOKINFO_1, namespace]
tests = IstioConfigPageTest(
kiali_client=kiali_client, openshift_client=openshift_client, browser=browser)
tests.test_requestauth_create(name=name,
labels='app=value',
namespaces=namespaces,
jwt_rules={'audiences': 'value1'},
expected_created=False)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group1
def test_service_entry(kiali_client, openshift_client, browser):
yaml = get_yaml(istio_objects_path.strpath, SERVICE_ENTRY)
_dict = get_dict(istio_objects_path.strpath, SERVICE_ENTRY)
_istio_config_test(kiali_client, openshift_client, browser,
_dict,
yaml,
[
{'name': IstioConfigPageFilter.ISTIO_TYPE.text,
'value': IstioConfigObjectType.SERVICE_ENTRY.text},
{'name': IstioConfigPageFilter.ISTIO_NAME.text,
'value': _dict.metadata.name}
],
namespace=BOOKINFO_1,
kind='ServiceEntry',
api_version='networking.istio.io/v1alpha3',
service_name=DETAILS,
check_service_details=False)


@pytest.mark.p_crud_resource
@pytest.mark.p_crud_group4
def test_auth_policy(kiali_client, openshift_client, browser):
yaml = get_yaml(istio_objects_path.strpath, AUTH_POLICY)
_dict = get_dict(istio_objects_path.strpath, AUTH_POLICY)
_istio_config_test(kiali_client, openshift_client, browser,
_dict,
yaml,
[
{'name': IstioConfigPageFilter.ISTIO_TYPE.text,
'value': IstioConfigObjectType.AUTHORIZATION_POLICY.text},
{'name': IstioConfigPageFilter.ISTIO_NAME.text,
'value': _dict.metadata.name}
],
namespace=BOOKINFO_2,
kind='AuthorizationPolicy',
api_version='security.istio.io/v1beta1',
service_name=DETAILS,
check_service_details=False,
error_messages=[KIA0004, KIA0101, KIA0102, KIA0102, KIA0104, KIA0104, KIA0104])


def _istio_config_create(openshift_client, config_dict, config_yaml, kind, api_version,
                         namespace=BOOKINFO_1):
    """Delete any stale copy of the Istio config object, then create it from YAML."""
openshift_client.delete_istio_config(name=config_dict.metadata.name,
namespace=namespace,
kind=kind,
api_version=api_version)
openshift_client.create_istio_config(body=config_yaml,
namespace=namespace,
kind=kind,
api_version=api_version)


def _istio_config_delete(openshift_client, name, kind, api_version, namespace=BOOKINFO_1):
openshift_client.delete_istio_config(name=name,
namespace=namespace,
kind=kind,
api_version=api_version)


def _istio_config_list(kiali_client, name, namespace=BOOKINFO_1):
return kiali_client.istio_config_list(namespaces=[namespace], config_names=[name])


def _ui_istio_config_delete(tests, name, object_type, namespace=BOOKINFO_1):
tests.delete_istio_config(name=name,
namespace=namespace,
object_type=object_type)


def _create_dest_rule_vs(openshift_client, destination_rule_conf, namespace=BOOKINFO_1):
destination_rule = get_yaml(istio_objects_path.strpath, destination_rule_conf)
destination_rule_dict = get_dict(istio_objects_path.strpath, destination_rule_conf)
_istio_config_create(openshift_client, destination_rule_dict, destination_rule,
'DestinationRule',
'networking.istio.io/v1alpha3',
namespace)


def _delete_dest_rule_vs(openshift_client, destination_rule_conf, namespace=BOOKINFO_1):
destination_rule_dict = get_dict(istio_objects_path.strpath, destination_rule_conf)
_istio_config_delete(openshift_client, destination_rule_dict.metadata.name,
'DestinationRule',
'networking.istio.io/v1alpha3',
namespace)


def _create_gateway_vs(openshift_client, gateway_conf, namespace=BOOKINFO_1):
gateway = get_yaml(istio_objects_path.strpath, gateway_conf)
gateway_dict = get_dict(istio_objects_path.strpath, gateway_conf)
_istio_config_create(openshift_client, gateway_dict, gateway,
'Gateway',
'networking.istio.io/v1alpha3',
namespace)


def _delete_gateway_vs(openshift_client, gateway_conf, namespace=BOOKINFO_1):
gateway_dict = get_dict(istio_objects_path.strpath, gateway_conf)
_istio_config_delete(openshift_client, gateway_dict.metadata.name,
'Gateway',
'networking.istio.io/v1alpha3',
namespace)


def _delete_gateways(openshift_client, name, namespaces):
for namespace in namespaces:
_istio_config_delete(openshift_client, name=name, kind='Gateway',
api_version='networking.istio.io/v1alpha3', namespace=namespace)


def _delete_sidecars(openshift_client, name, namespaces):
for namespace in namespaces:
_istio_config_delete(openshift_client, name=name, kind='Sidecar',
api_version='networking.istio.io/v1alpha3', namespace=namespace)


def _delete_authpolicies(openshift_client, name, namespaces):
for namespace in namespaces:
_istio_config_delete(openshift_client, name=name, kind='AuthorizationPolicy',
api_version='security.istio.io/v1beta1', namespace=namespace)


def _delete_peerauths(openshift_client, name, namespaces):
for namespace in namespaces:
_istio_config_delete(openshift_client, name=name, kind='PeerAuthentication',
api_version='security.istio.io/v1beta1', namespace=namespace)


def _delete_requestauths(openshift_client, name, namespaces):
for namespace in namespaces:
_istio_config_delete(openshift_client, name=name, kind='RequestAuthentication',
api_version='security.istio.io/v1beta1', namespace=namespace)


def _istio_config_test(kiali_client, openshift_client, browser, config_dict,
config_yaml, filters, namespace, kind, api_version,
service_name, check_service_details=False,
delete_istio_config=True,
                       error_messages=None):
    # avoid the shared mutable default-argument pitfall
    error_messages = error_messages if error_messages is not None else []
tests = IstioConfigPageTest(
kiali_client=kiali_client, openshift_client=openshift_client, browser=browser)
if not namespace:
namespace = BOOKINFO_1
try:
_istio_config_create(
openshift_client, config_dict, config_yaml, kind, api_version, namespace)
tests.assert_all_items(namespaces=[namespace], filters=filters)
_istio_config_details_test(kiali_client,
openshift_client,
browser,
config_dict,
config_yaml,
kind,
api_version,
namespace,
error_messages=error_messages)
if check_service_details:
_service_details_test(kiali_client,
openshift_client,
browser,
config_dict,
config_yaml,
kind,
api_version,
service_name,
namespace)
if delete_istio_config:
_ui_istio_config_delete(tests, config_dict.metadata.name,
kind, namespace)
assert len(_istio_config_list(kiali_client, config_dict.metadata.name, namespace)) == 0
finally:
if delete_istio_config:
_istio_config_delete(openshift_client, config_dict.metadata.name,
kind, api_version, namespace)


def _istio_config_details_test(kiali_client, openshift_client, browser, config_dict,
config_yaml, kind, api_version, namespace=BOOKINFO_1,
                               error_messages=None):
    # avoid the shared mutable default-argument pitfall
    error_messages = error_messages if error_messages is not None else []
tests = IstioConfigPageTest(
kiali_client=kiali_client, openshift_client=openshift_client, browser=browser)
tests.assert_details(name=config_dict.metadata.name,
object_type=kind,
namespace=namespace,
error_messages=error_messages,
apply_filters=False)


def _service_details_test(kiali_client, openshift_client, browser, config_dict,
config_yaml, kind, api_version, service_name, namespace=BOOKINFO_1):
tests = ServicesPageTest(
kiali_client=kiali_client, openshift_client=openshift_client, browser=browser)
tests.assert_details(name=service_name, namespace=namespace, force_refresh=True)


def _vs_gateway_link_test(kiali_client, openshift_client, browser, config_dict,
kind, vs_name, namespace=BOOKINFO_1):
tests = IstioConfigPageTest(
kiali_client=kiali_client, openshift_client=openshift_client, browser=browser)
# object_type should not be set here
tests.load_details_page(vs_name, namespace, force_refresh=False, load_only=True)
tests.click_on_gateway(config_dict.metadata.name, namespace)
tests.assert_details(name=config_dict.metadata.name,
object_type=kind,
namespace=namespace,
error_messages=[],
apply_filters=False)
| 45.954722 | 102 | 0.626411 | 3,405 | 35,523 | 6.138913 | 0.067548 | 0.094006 | 0.071329 | 0.070899 | 0.859781 | 0.823135 | 0.798402 | 0.751854 | 0.715735 | 0.693202 | 0 | 0.011717 | 0.303268 | 35,523 | 772 | 103 | 46.014249 | 0.832848 | 0.001717 | 0 | 0.637195 | 0 | 0 | 0.071044 | 0.039444 | 0 | 0 | 0 | 0 | 0.009146 | 1 | 0.060976 | false | 0.003049 | 0.012195 | 0.001524 | 0.074695 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
83be5d7f0950afa2c1ed96588406be84f9f4df8c | 2,277 | py | Python | AutoExamSys/SystemModel/migrations/0001_initial.py | Yb-Z/COMP2411_GP | 6b6ae307efbbe5e04c00588f869ae520f9680b0e | [
"MIT"
] | null | null | null | AutoExamSys/SystemModel/migrations/0001_initial.py | Yb-Z/COMP2411_GP | 6b6ae307efbbe5e04c00588f869ae520f9680b0e | [
"MIT"
] | null | null | null | AutoExamSys/SystemModel/migrations/0001_initial.py | Yb-Z/COMP2411_GP | 6b6ae307efbbe5e04c00588f869ae520f9680b0e | [
"MIT"
] | null | null | null | # Generated by Django 3.1.3 on 2020-12-05 18:50
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Student',
fields=[
('id', models.CharField(max_length=20, primary_key=True, serialize=False, verbose_name='Student ID')),
('name', models.CharField(max_length=50, verbose_name='Name')),
('gender', models.CharField(choices=[('Male', 'Male'), ('Female', 'Female')], default=None, max_length=8, verbose_name='Gender')),
('dept', models.CharField(choices=[('Faculty of Engineering', 'Department of Computing')], default=None, max_length=50, verbose_name='Department')),
('major', models.CharField(default=None, max_length=50, verbose_name='Major')),
('password', models.CharField(default='111', max_length=50, verbose_name='Password')),
('email', models.EmailField(default=None, max_length=254, verbose_name='Email')),
],
options={
'verbose_name': 'Student',
'verbose_name_plural': 'Student',
'db_table': 'student',
},
),
migrations.CreateModel(
name='Teacher',
fields=[
('id', models.CharField(max_length=20, primary_key=True, serialize=False, verbose_name='Teacher ID')),
('name', models.CharField(max_length=50, verbose_name='Name')),
('gender', models.CharField(choices=[('Male', 'Male'), ('Female', 'Female')], default=None, max_length=8, verbose_name='Gender')),
('dept', models.CharField(choices=[('Faculty of Engineering', 'Department of Computing')], default=None, max_length=50, verbose_name='Department')),
('password', models.CharField(default='111', max_length=50, verbose_name='Password')),
('email', models.EmailField(default=None, max_length=254, verbose_name='Email')),
],
options={
'verbose_name': 'Teacher',
'verbose_name_plural': 'Teacher',
'db_table': 'teacher',
},
),
]
| 47.4375 | 164 | 0.575318 | 229 | 2,277 | 5.563319 | 0.257642 | 0.146782 | 0.06044 | 0.098901 | 0.743328 | 0.743328 | 0.743328 | 0.717425 | 0.717425 | 0.717425 | 0 | 0.028161 | 0.267018 | 2,277 | 47 | 165 | 48.446809 | 0.735171 | 0.019763 | 0 | 0.5 | 1 | 0 | 0.190135 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.05 | 0.025 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
83e28f3943696d51dca3cd09ec687e02a49598df | 8,975 | py | Python | micropython/examples/interstate75/font_10x14.py | sstobbe/pimoroni-pico | c7ec56d278ee4f3679cd367dad225d44808e24cf | [
"MIT"
] | 2 | 2022-03-19T13:30:10.000Z | 2022-03-19T19:47:43.000Z | micropython/examples/interstate75/font_10x14.py | sstobbe/pimoroni-pico | c7ec56d278ee4f3679cd367dad225d44808e24cf | [
"MIT"
] | null | null | null | micropython/examples/interstate75/font_10x14.py | sstobbe/pimoroni-pico | c7ec56d278ee4f3679cd367dad225d44808e24cf | [
"MIT"
] | null | null | null | letter_width = 10
letter_height = 14
font = [
[0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # " "
[0x0ffc, 0x0a04, 0x0ffc, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "!"
[0x007c, 0x0044, 0x007c, 0x0044, 0x007c, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # """
[0x03f0, 0x02d0, 0x0edc, 0x0804, 0x0edc, 0x0edc, 0x0804, 0x0edc, 0x02d0, 0x03f0], # "#"
[0x0ef8, 0x0b8c, 0x1b76, 0x1002, 0x1b76, 0x0cd4, 0x079c, 0x0000, 0x0000, 0x0000], # "$"
[0x0038, 0x006c, 0x0f54, 0x09ec, 0x0e78, 0x079c, 0x0de4, 0x0abc, 0x0d80, 0x0700], # "%"
[0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "&"
[0x007c, 0x0044, 0x007c, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "'"
[0x03f0, 0x0e1c, 0x19e6, 0x173a, 0x1c0e, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "("
[0x1c0e, 0x173a, 0x19e6, 0x0e1c, 0x03f0, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # ")"
[0x00fc, 0x00b4, 0x00cc, 0x00cc, 0x00b4, 0x00fc, 0x0000, 0x0000, 0x0000, 0x0000], # "*"
[0x01c0, 0x0140, 0x0770, 0x0410, 0x0770, 0x0140, 0x01c0, 0x0000, 0x0000, 0x0000], # "+"
[0x1c00, 0x1700, 0x1900, 0x0f00, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # ","
[0x01c0, 0x0140, 0x0140, 0x0140, 0x0140, 0x0140, 0x01c0, 0x0000, 0x0000, 0x0000], # "-"
[0x0e00, 0x0a00, 0x0e00, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "."
[0x1e00, 0x1380, 0x1ce0, 0x0738, 0x01ce, 0x0072, 0x001e, 0x0000, 0x0000, 0x0000], # "/"
[0x07f8, 0x0c0c, 0x0bf4, 0x0a14, 0x0bf4, 0x0c0c, 0x07f8, 0x0000, 0x0000, 0x0000], # "0"
[0x0e70, 0x0a58, 0x0bec, 0x0804, 0x0bfc, 0x0a00, 0x0e00, 0x0000, 0x0000, 0x0000], # "1"
[0x0e38, 0x0b2c, 0x09b4, 0x0ad4, 0x0b74, 0x0b8c, 0x0ef8, 0x0000, 0x0000, 0x0000], # "2"
[0x0738, 0x0d2c, 0x0b34, 0x0bf4, 0x0b34, 0x0ccc, 0x07f8, 0x0000, 0x0000, 0x0000], # "3"
[0x03c0, 0x0270, 0x0298, 0x0eec, 0x0804, 0x0efc, 0x0380, 0x0000, 0x0000, 0x0000], # "4"
[0x0efc, 0x0a84, 0x0ab4, 0x0ab4, 0x0bb4, 0x0c74, 0x07dc, 0x0000, 0x0000, 0x0000], # "5"
[0x07f8, 0x0c0c, 0x0bb4, 0x0ab4, 0x0bb4, 0x0c74, 0x07dc, 0x0000, 0x0000, 0x0000], # "6"
[0x001c, 0x0014, 0x0f94, 0x08f4, 0x0f34, 0x01c4, 0x007c, 0x0000, 0x0000, 0x0000], # "7"
[0x07f8, 0x0c4c, 0x0bb4, 0x0ab4, 0x0bb4, 0x0c4c, 0x07f8, 0x0000, 0x0000, 0x0000], # "8"
[0x0ef8, 0x0b8c, 0x0b74, 0x0b54, 0x0b74, 0x0c0c, 0x07f8, 0x0000, 0x0000, 0x0000], # "9"
[0x0e1c, 0x0a14, 0x0e1c, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # ":"
[0x1c00, 0x171c, 0x1914, 0x0f1c, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # ";"
[0x0380, 0x06c0, 0x0d60, 0x0ba0, 0x0ee0, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "<"
[0x0ee0, 0x0aa0, 0x0aa0, 0x0aa0, 0x0aa0, 0x0aa0, 0x0aa0, 0x0ee0, 0x0000, 0x0000], # "="
[0x0ee0, 0x0ba0, 0x0d60, 0x06c0, 0x0380, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # ">"
[0x0000, 0x001c, 0x0fd4, 0x0a74, 0x0fb4, 0x00cc, 0x0078, 0x0000, 0x0000, 0x0000], # "?"
[0x0ff0, 0x1818, 0x37ec, 0x2c74, 0x2bb4, 0x2bb4, 0x3c0c, 0x07f8, 0x0000, 0x0000], # "@"
[0x0f80, 0x08f0, 0x0f1c, 0x0164, 0x0f1c, 0x08f0, 0x0f80, 0x0000, 0x0000, 0x0000], # "A"
[0x0ffc, 0x0804, 0x0bb4, 0x0bb4, 0x0c4c, 0x07f8, 0x0000, 0x0000, 0x0000, 0x0000], # "B"
[0x07f8, 0x0c0c, 0x0bf4, 0x0a14, 0x0a14, 0x0e1c, 0x0000, 0x0000, 0x0000, 0x0000], # "C"
[0x0ffc, 0x0804, 0x0bf4, 0x0bf4, 0x0c0c, 0x07f8, 0x0000, 0x0000, 0x0000, 0x0000], # "D"
[0x0ffc, 0x0804, 0x0bb4, 0x0ab4, 0x0ab4, 0x0efc, 0x0000, 0x0000, 0x0000, 0x0000], # "E"
[0x0ffc, 0x0804, 0x0fb4, 0x00b4, 0x00f4, 0x001c, 0x0000, 0x0000, 0x0000, 0x0000], # "F"
[0x07f8, 0x0c0c, 0x0bf4, 0x0bd4, 0x0b54, 0x0c5c, 0x07c0, 0x0000, 0x0000, 0x0000], # "G"
[0x0ffc, 0x0804, 0x0fbc, 0x00a0, 0x0fbc, 0x0804, 0x0ffc, 0x0000, 0x0000, 0x0000], # "H"
[0x0e1c, 0x0a14, 0x0bf4, 0x0804, 0x0bf4, 0x0a14, 0x0e1c, 0x0000, 0x0000, 0x0000], # "I"
[0x0e1c, 0x0a14, 0x0bf4, 0x0c04, 0x07f4, 0x0014, 0x001c, 0x0000, 0x0000, 0x0000], # "J"
[0x0ffc, 0x0804, 0x0fbc, 0x0e5c, 0x09e4, 0x0f3c, 0x0000, 0x0000, 0x0000, 0x0000], # "K"
[0x0ffc, 0x0804, 0x0bfc, 0x0a00, 0x0a00, 0x0e00, 0x0000, 0x0000, 0x0000, 0x0000], # "L"
[0x0ffc, 0x0804, 0x0fec, 0x00d8, 0x00b0, 0x00b0, 0x00d8, 0x0fec, 0x0804, 0x0ffc], # "M"
[0x0ffc, 0x0804, 0x0fcc, 0x0738, 0x0cfc, 0x0804, 0x0ffc, 0x0000, 0x0000, 0x0000], # "N"
[0x07f8, 0x0c0c, 0x0bf4, 0x0a14, 0x0a14, 0x0bf4, 0x0c0c, 0x07f8, 0x0000, 0x0000], # "O"
[0x0ffc, 0x0804, 0x0f74, 0x0174, 0x018c, 0x00f8, 0x0000, 0x0000, 0x0000, 0x0000], # "P"
[0x07f8, 0x0c0c, 0x0bf4, 0x0a14, 0x0a14, 0x1bf4, 0x140c, 0x17f8, 0x1c00, 0x0000], # "Q"
[0x0ffc, 0x0804, 0x0f74, 0x0e74, 0x098c, 0x0ff8, 0x0000, 0x0000, 0x0000, 0x0000], # "R"
[0x0ef8, 0x0b8c, 0x0b74, 0x0b54, 0x0cd4, 0x079c, 0x0000, 0x0000, 0x0000, 0x0000], # "S"
[0x001c, 0x0014, 0x0ff4, 0x0804, 0x0ff4, 0x0014, 0x001c, 0x0000, 0x0000, 0x0000], # "T"
[0x07fc, 0x0c04, 0x0bfc, 0x0a00, 0x0bfc, 0x0c04, 0x07fc, 0x0000, 0x0000, 0x0000], # "U"
[0x01fc, 0x0704, 0x0cfc, 0x0b80, 0x0cfc, 0x0704, 0x01fc, 0x0000, 0x0000, 0x0000], # "V"
[0x01fc, 0x0704, 0x0cfc, 0x0bc0, 0x0c40, 0x0bc0, 0x0cfc, 0x0704, 0x01fc, 0x0000], # "W"
[0x0f3c, 0x09e4, 0x0edc, 0x0330, 0x0edc, 0x09e4, 0x0f3c, 0x0000, 0x0000, 0x0000], # "X"
[0x003c, 0x00e4, 0x0f9c, 0x0870, 0x0f9c, 0x00e4, 0x003c, 0x0000, 0x0000, 0x0000], # "Y"
[0x0f1c, 0x0994, 0x0af4, 0x0b34, 0x0bd4, 0x0a64, 0x0e3c, 0x0000, 0x0000, 0x0000], # "Z"
[0x0ffc, 0x0804, 0x0bf4, 0x0a14, 0x0e1c, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "["
[0x001e, 0x0072, 0x01ce, 0x0738, 0x1ce0, 0x1380, 0x1e00, 0x0000, 0x0000, 0x0000], # "\"
[0x0e1c, 0x0a14, 0x0bf4, 0x0804, 0x0ffc, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "]"
[0x0070, 0x0058, 0x006c, 0x0034, 0x006c, 0x0058, 0x0070, 0x0000, 0x0000, 0x0000], # "^"
[0x1c00, 0x1400, 0x1400, 0x1400, 0x1400, 0x1400, 0x1400, 0x1c00, 0x0000, 0x0000], # "_"
[0x000e, 0x001a, 0x0036, 0x002c, 0x0038, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "`"
[0x07c0, 0x0c60, 0x0ba0, 0x0aa0, 0x0ba0, 0x0c60, 0x0bc0, 0x0e00, 0x0000, 0x0000], # "a"
[0x0ffc, 0x0804, 0x0bbc, 0x0ba0, 0x0c60, 0x07c0, 0x0000, 0x0000, 0x0000, 0x0000], # "b"
[0x07c0, 0x0c60, 0x0ba0, 0x0aa0, 0x0ee0, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "c"
[0x07c0, 0x0c60, 0x0ba0, 0x0bbc, 0x0804, 0x0ffc, 0x0000, 0x0000, 0x0000, 0x0000], # "d"
[0x07c0, 0x0c60, 0x0aa0, 0x0aa0, 0x0b60, 0x0fc0, 0x0000, 0x0000, 0x0000, 0x0000], # "e"
[0x0ff8, 0x080c, 0x0fb4, 0x00f4, 0x001c, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "f"
[0x1fc0, 0x3660, 0x2da0, 0x2da0, 0x2da0, 0x3060, 0x1fc0, 0x0000, 0x0000, 0x0000], # "g"
[0x0ffc, 0x0804, 0x0fbc, 0x0fa0, 0x0860, 0x0fc0, 0x0000, 0x0000, 0x0000, 0x0000], # "h"
[0x0ff8, 0x0828, 0x0ff8, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "i"
[0x1c00, 0x1400, 0x17f8, 0x1828, 0x0ff8, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "j"
[0x0ffc, 0x0804, 0x0efc, 0x0d60, 0x0ba0, 0x0ee0, 0x0000, 0x0000, 0x0000, 0x0000], # "k"
[0x07fc, 0x0c04, 0x0bfc, 0x0a00, 0x0e00, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "l"
[0x0fc0, 0x0860, 0x0fa0, 0x07a0, 0x0460, 0x07a0, 0x0fa0, 0x0860, 0x0fc0, 0x0000], # "m"
[0x0fc0, 0x0860, 0x0fa0, 0x0fa0, 0x0860, 0x0fc0, 0x0000, 0x0000, 0x0000, 0x0000], # "n"
[0x07c0, 0x0c60, 0x0ba0, 0x0aa0, 0x0ba0, 0x0c60, 0x07c0, 0x0000, 0x0000, 0x0000], # "o"
[0x3fe0, 0x2020, 0x3da0, 0x05a0, 0x0660, 0x03c0, 0x0000, 0x0000, 0x0000, 0x0000], # "p"
[0x03c0, 0x0660, 0x05a0, 0x3da0, 0x2020, 0x37e0, 0x1c00, 0x0000, 0x0000, 0x0000], # "q"
[0x0fc0, 0x0860, 0x0fa0, 0x00a0, 0x00e0, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "r"
[0x0fc0, 0x0b60, 0x0aa0, 0x0da0, 0x07e0, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "s"
[0x01c0, 0x0770, 0x0c10, 0x0b70, 0x0bc0, 0x0e00, 0x0000, 0x0000, 0x0000, 0x0000], # "t"
[0x07e0, 0x0c20, 0x0be0, 0x0be0, 0x0c20, 0x07e0, 0x0000, 0x0000, 0x0000, 0x0000], # "u"
[0x01e0, 0x0720, 0x0ce0, 0x0b80, 0x0ce0, 0x0720, 0x01e0, 0x0000, 0x0000, 0x0000], # "v"
[0x01e0, 0x0720, 0x0ce0, 0x0b80, 0x0c80, 0x0b80, 0x0ce0, 0x0720, 0x01e0, 0x0000], # "w"
[0x0ee0, 0x0ba0, 0x0d60, 0x06c0, 0x0d60, 0x0ba0, 0x0ee0, 0x0000, 0x0000, 0x0000], # "x"
[0x1de0, 0x1720, 0x1ae0, 0x0d80, 0x06e0, 0x0320, 0x01e0, 0x0000, 0x0000, 0x0000], # "y"
[0x0ee0, 0x0ba0, 0x09a0, 0x0aa0, 0x0b20, 0x0ba0, 0x0ee0, 0x0000, 0x0000, 0x0000], # "z"
[0x01e0, 0x0f3c, 0x18c6, 0x17fa, 0x1c0e, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "{"
[0x1ffe, 0x1002, 0x1ffe, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "|"
[0x1c0e, 0x17fa, 0x18c6, 0x0f3c, 0x01e0, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # "}"
[0x0380, 0x02c0, 0x0340, 0x0340, 0x02c0, 0x02c0, 0x0340, 0x01c0, 0x0000, 0x0000], # "~"
[0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000], # ""
]
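Each glyph row above is a 16-bit mask. A minimal sketch of turning such masks into pixels (the LSB-first bit order and the 14-pixel width here are assumptions for illustration, not taken from the source font):

```python
def render_row(mask, width=14):
    # Map each bit of the row mask to a pixel, least-significant bit first
    # (bit order and width are assumptions; the real font may differ).
    return ''.join('#' if (mask >> bit) & 1 else '.' for bit in range(width))

def render_glyph(rows, width=14):
    # One text line per row mask; all-zero rows render as blank padding.
    return '\n'.join(render_row(r, width) for r in rows)
```

For example, `render_row(0b101, 4)` yields `'#.#.'`.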
# File: mediathekDownloader/__init__.py (repo: TomMeHo/mediathekDownloader, MIT license)
from .downloadQueueItem import DownloadQueueItem
from .vsMetaInfoGenerator import VsMetaInfoGenerator
# File: losses/bregman_pytorch.py (repo: syelman/DRDM-Count, MIT license)
# -*- coding: utf-8 -*-
"""
Rewrite ot.bregman.sinkhorn in Python Optimal Transport (https://pythonot.github.io/_modules/ot/bregman.html#sinkhorn)
using pytorch operations.
Bregman projections for regularized OT (Sinkhorn distance).
"""
import torch
M_EPS = 1e-16
def sinkhorn(a, b, C, reg=1e-1, method='sinkhorn', maxIter=1000, tau=1e3,
stopThr=1e-9, verbose=True, log=True, warm_start=None, eval_freq=10, print_freq=200, **kwargs):
"""
Solve the entropic regularization optimal transport
The input should be PyTorch tensors
The function solves the following optimization problem:
.. math::
\gamma = arg\min_\gamma <\gamma,C>_F + reg\cdot\Omega(\gamma)
s.t. \gamma 1 = a
\gamma^T 1= b
\gamma\geq 0
where :
- C is the (ns,nt) metric cost matrix
- :math:`\Omega` is the entropic regularization term :math:`\Omega(\gamma)=\sum_{i,j} \gamma_{i,j}\log(\gamma_{i,j})`
- a and b are target and source measures (sum to 1)
The algorithm used for solving the problem is the Sinkhorn-Knopp matrix scaling algorithm as proposed in [1].
Parameters
----------
a : torch.tensor (na,)
samples measure in the target domain
b : torch.tensor (nb,)
samples in the source domain
C : torch.tensor (na,nb)
loss matrix
reg : float
Regularization term > 0
method : str
method used for the solver either 'sinkhorn', 'greenkhorn', 'sinkhorn_stabilized' or
'sinkhorn_epsilon_scaling', see those function for specific parameters
maxIter : int, optional
Max number of iterations
stopThr : float, optional
        Stop threshold on error (> 0)
verbose : bool, optional
Print information along iterations
log : bool, optional
record log if True
Returns
-------
gamma : (na x nb) torch.tensor
Optimal transportation matrix for the given parameters
log : dict
        log dictionary, returned only if log==True in parameters
References
----------
[1] M. Cuturi, Sinkhorn Distances : Lightspeed Computation of Optimal Transport, Advances in Neural Information Processing Systems (NIPS) 26, 2013
See Also
--------
"""
if method.lower() == 'sinkhorn':
return sinkhorn_knopp(a, b, C, reg, maxIter=maxIter,
stopThr=stopThr, verbose=verbose, log=log,
warm_start=warm_start, eval_freq=eval_freq, print_freq=print_freq,
**kwargs)
elif method.lower() == 'sinkhorn_stabilized':
return sinkhorn_stabilized(a, b, C, reg, maxIter=maxIter, tau=tau,
stopThr=stopThr, verbose=verbose, log=log,
warm_start=warm_start, eval_freq=eval_freq, print_freq=print_freq,
**kwargs)
elif method.lower() == 'sinkhorn_epsilon_scaling':
return sinkhorn_epsilon_scaling(a, b, C, reg,
maxIter=maxIter, maxInnerIter=100, tau=tau,
scaling_base=0.75, scaling_coef=None, stopThr=stopThr,
verbose=False, log=log, warm_start=warm_start, eval_freq=eval_freq,
print_freq=print_freq, **kwargs)
else:
raise ValueError("Unknown method '%s'." % method)
def sinkhorn_knopp(a, b, C, reg=1e-1, maxIter=1000, stopThr=1e-9,
verbose=True, log=True, warm_start=None, eval_freq=10, print_freq=200, **kwargs):
"""
Solve the entropic regularization optimal transport
The input should be PyTorch tensors
The function solves the following optimization problem:
.. math::
\gamma = arg\min_\gamma <\gamma,C>_F + reg\cdot\Omega(\gamma)
s.t. \gamma 1 = a
\gamma^T 1= b
\gamma\geq 0
where :
- C is the (ns,nt) metric cost matrix
- :math:`\Omega` is the entropic regularization term :math:`\Omega(\gamma)=\sum_{i,j} \gamma_{i,j}\log(\gamma_{i,j})`
- a and b are target and source measures (sum to 1)
The algorithm used for solving the problem is the Sinkhorn-Knopp matrix scaling algorithm as proposed in [1].
Parameters
----------
a : torch.tensor (na,)
samples measure in the target domain
b : torch.tensor (nb,)
samples in the source domain
C : torch.tensor (na,nb)
loss matrix
reg : float
Regularization term > 0
maxIter : int, optional
Max number of iterations
stopThr : float, optional
        Stop threshold on error (> 0)
verbose : bool, optional
Print information along iterations
log : bool, optional
record log if True
Returns
-------
gamma : (na x nb) torch.tensor
Optimal transportation matrix for the given parameters
log : dict
        log dictionary, returned only if log==True in parameters
References
----------
[1] M. Cuturi, Sinkhorn Distances : Lightspeed Computation of Optimal Transport, Advances in Neural Information Processing Systems (NIPS) 26, 2013
See Also
--------
"""
device = a.device
na, nb = C.shape
assert na >= 1 and nb >= 1, 'C needs to be 2d'
    assert na == a.shape[0] and nb == b.shape[0], "Shape of a or b doesn't match that of C"
assert reg > 0, 'reg should be greater than 0'
assert a.min() >= 0. and b.min() >= 0., 'Elements in a or b less than 0'
# unnecessary check for our special case
if log:
log = {'err': []}
if warm_start is not None:
u = warm_start['u']
v = warm_start['v']
else:
u = torch.ones(na, dtype=a.dtype).to(device) / na
v = torch.ones(nb, dtype=b.dtype).to(device) / nb
K = torch.empty(C.shape, dtype=C.dtype).to(device)
torch.div(C, -reg, out=K)
torch.exp(K, out=K)
b_hat = torch.empty(b.shape, dtype=C.dtype).to(device)
it = 1
err = 1
# allocate memory beforehand
KTu = torch.empty(v.shape, dtype=v.dtype).to(device)
Kv = torch.empty(u.shape, dtype=u.dtype).to(device)
while (err > stopThr and it <= maxIter):
upre, vpre = u, v
torch.matmul(u, K, out=KTu)
v = torch.div(b, KTu + M_EPS)
torch.matmul(K, v, out=Kv)
u = torch.div(a, Kv + M_EPS)
if torch.any(torch.isnan(u)) or torch.any(torch.isnan(v)) or \
torch.any(torch.isinf(u)) or torch.any(torch.isinf(v)):
print('Warning: numerical errors at iteration', it)
u, v = upre, vpre
break
if log and it % eval_freq == 0:
            # we can speed up the process by checking for the error only
            # every eval_freq iterations
            # below is equivalent to:
            # b_hat = torch.sum(u.reshape(-1, 1) * K * v.reshape(1, -1), 0)
            # but is more memory efficient
b_hat = torch.matmul(u, K) * v
err = (b - b_hat).pow(2).sum().item()
# err = (b - b_hat).abs().sum().item()
log['err'].append(err)
if verbose and it % print_freq == 0:
print('iteration {:5d}, constraint error {:5e}'.format(it, err))
it += 1
if log:
log['u'] = u
log['v'] = v
log['alpha'] = reg * torch.log(u + M_EPS)
log['beta'] = reg * torch.log(v + M_EPS)
# transport plan
P = u.reshape(-1, 1) * K * v.reshape(1, -1)
if log:
return P, log
else:
return P
def sinkhorn_stabilized(a, b, C, reg=1e-1, maxIter=1000, tau=1e3, stopThr=1e-9,
verbose=False, log=False, warm_start=None, eval_freq=10, print_freq=200, **kwargs):
"""
Solve the entropic regularization OT problem with log stabilization
The function solves the following optimization problem:
.. math::
\gamma = arg\min_\gamma <\gamma,C>_F + reg\cdot\Omega(\gamma)
s.t. \gamma 1 = a
\gamma^T 1= b
\gamma\geq 0
where :
- C is the (ns,nt) metric cost matrix
- :math:`\Omega` is the entropic regularization term :math:`\Omega(\gamma)=\sum_{i,j} \gamma_{i,j}\log(\gamma_{i,j})`
- a and b are target and source measures (sum to 1)
The algorithm used for solving the problem is the Sinkhorn-Knopp matrix scaling algorithm as proposed in [1]
    but with the log stabilization proposed in [3] and defined in [2] (Algo 3.1)
Parameters
----------
a : torch.tensor (na,)
samples measure in the target domain
b : torch.tensor (nb,)
samples in the source domain
C : torch.tensor (na,nb)
loss matrix
reg : float
Regularization term > 0
tau : float
        threshold for max value in u or v for log scaling
    maxIter : int, optional
        Max number of iterations
    stopThr : float, optional
        Stop threshold on error (> 0)
verbose : bool, optional
Print information along iterations
log : bool, optional
record log if True
Returns
-------
gamma : (na x nb) torch.tensor
Optimal transportation matrix for the given parameters
log : dict
        log dictionary, returned only if log==True in parameters
References
----------
[1] M. Cuturi, Sinkhorn Distances : Lightspeed Computation of Optimal Transport, Advances in Neural Information Processing Systems (NIPS) 26, 2013
[2] Bernhard Schmitzer. Stabilized Sparse Scaling Algorithms for Entropy Regularized Transport Problems. SIAM Journal on Scientific Computing, 2019
[3] Chizat, L., Peyré, G., Schmitzer, B., & Vialard, F. X. (2016). Scaling algorithms for unbalanced transport problems. arXiv preprint arXiv:1607.05816.
See Also
--------
"""
device = a.device
na, nb = C.shape
assert na >= 1 and nb >= 1, 'C needs to be 2d'
    assert na == a.shape[0] and nb == b.shape[0], "Shape of a or b doesn't match that of C"
assert reg > 0, 'reg should be greater than 0'
assert a.min() >= 0. and b.min() >= 0., 'Elements in a or b less than 0'
if log:
log = {'err': []}
if warm_start is not None:
alpha = warm_start['alpha']
beta = warm_start['beta']
else:
alpha = torch.zeros(na, dtype=a.dtype).to(device)
beta = torch.zeros(nb, dtype=b.dtype).to(device)
u = torch.ones(na, dtype=a.dtype).to(device) / na
v = torch.ones(nb, dtype=b.dtype).to(device) / nb
def update_K(alpha, beta):
"""log space computation"""
"""memory efficient"""
torch.add(alpha.reshape(-1, 1), beta.reshape(1, -1), out=K)
torch.add(K, -C, out=K)
torch.div(K, reg, out=K)
torch.exp(K, out=K)
def update_P(alpha, beta, u, v, ab_updated=False):
"""log space P (gamma) computation"""
torch.add(alpha.reshape(-1, 1), beta.reshape(1, -1), out=P)
torch.add(P, -C, out=P)
torch.div(P, reg, out=P)
if not ab_updated:
torch.add(P, torch.log(u + M_EPS).reshape(-1, 1), out=P)
torch.add(P, torch.log(v + M_EPS).reshape(1, -1), out=P)
torch.exp(P, out=P)
K = torch.empty(C.shape, dtype=C.dtype).to(device)
update_K(alpha, beta)
b_hat = torch.empty(b.shape, dtype=C.dtype).to(device)
it = 1
err = 1
ab_updated = False
# allocate memory beforehand
KTu = torch.empty(v.shape, dtype=v.dtype).to(device)
Kv = torch.empty(u.shape, dtype=u.dtype).to(device)
P = torch.empty(C.shape, dtype=C.dtype).to(device)
while (err > stopThr and it <= maxIter):
upre, vpre = u, v
torch.matmul(u, K, out=KTu)
v = torch.div(b, KTu + M_EPS)
torch.matmul(K, v, out=Kv)
u = torch.div(a, Kv + M_EPS)
ab_updated = False
# remove numerical problems and store them in K
if u.abs().sum() > tau or v.abs().sum() > tau:
alpha += reg * torch.log(u + M_EPS)
beta += reg * torch.log(v + M_EPS)
u.fill_(1. / na)
v.fill_(1. / nb)
update_K(alpha, beta)
ab_updated = True
if log and it % eval_freq == 0:
        # we can speed up the process by checking for the error only
        # every eval_freq iterations
update_P(alpha, beta, u, v, ab_updated)
b_hat = torch.sum(P, 0)
err = (b - b_hat).pow(2).sum().item()
log['err'].append(err)
if verbose and it % print_freq == 0:
print('iteration {:5d}, constraint error {:5e}'.format(it, err))
it += 1
if log:
log['u'] = u
log['v'] = v
log['alpha'] = alpha + reg * torch.log(u + M_EPS)
log['beta'] = beta + reg * torch.log(v + M_EPS)
# transport plan
update_P(alpha, beta, u, v, False)
if log:
return P, log
else:
return P
def sinkhorn_epsilon_scaling(a, b, C, reg=1e-1, maxIter=100, maxInnerIter=100, tau=1e3, scaling_base=0.75,
scaling_coef=None, stopThr=1e-9, verbose=False, log=False, warm_start=None, eval_freq=10,
print_freq=200, **kwargs):
"""
Solve the entropic regularization OT problem with log stabilization
The function solves the following optimization problem:
.. math::
\gamma = arg\min_\gamma <\gamma,C>_F + reg\cdot\Omega(\gamma)
s.t. \gamma 1 = a
\gamma^T 1= b
\gamma\geq 0
where :
- C is the (ns,nt) metric cost matrix
- :math:`\Omega` is the entropic regularization term :math:`\Omega(\gamma)=\sum_{i,j} \gamma_{i,j}\log(\gamma_{i,j})`
- a and b are target and source measures (sum to 1)
The algorithm used for solving the problem is the Sinkhorn-Knopp matrix
scaling algorithm as proposed in [1] but with the log stabilization
proposed in [3] and the log scaling proposed in [2] algorithm 3.2
Parameters
----------
a : torch.tensor (na,)
samples measure in the target domain
b : torch.tensor (nb,)
samples in the source domain
C : torch.tensor (na,nb)
loss matrix
reg : float
Regularization term > 0
tau : float
        threshold for max value in u or v for log scaling
    maxIter : int, optional
        Max number of iterations
    stopThr : float, optional
        Stop threshold on error (> 0)
verbose : bool, optional
Print information along iterations
log : bool, optional
record log if True
Returns
-------
gamma : (na x nb) torch.tensor
Optimal transportation matrix for the given parameters
log : dict
        log dictionary, returned only if log==True in parameters
References
----------
[1] M. Cuturi, Sinkhorn Distances : Lightspeed Computation of Optimal Transport, Advances in Neural Information Processing Systems (NIPS) 26, 2013
[2] Bernhard Schmitzer. Stabilized Sparse Scaling Algorithms for Entropy Regularized Transport Problems. SIAM Journal on Scientific Computing, 2019
[3] Chizat, L., Peyré, G., Schmitzer, B., & Vialard, F. X. (2016). Scaling algorithms for unbalanced transport problems. arXiv preprint arXiv:1607.05816.
See Also
--------
"""
na, nb = C.shape
assert na >= 1 and nb >= 1, 'C needs to be 2d'
    assert na == a.shape[0] and nb == b.shape[0], "Shape of a or b doesn't match that of C"
assert reg > 0, 'reg should be greater than 0'
assert a.min() >= 0. and b.min() >= 0., 'Elements in a or b less than 0'
def get_reg(it, reg, pre_reg):
if it == 1:
return scaling_coef
else:
if (pre_reg - reg) * scaling_base < M_EPS:
return reg
else:
return (pre_reg - reg) * scaling_base + reg
if scaling_coef is None:
scaling_coef = C.max() + reg
it = 1
err = 1
running_reg = scaling_coef
if log:
log = {'err': []}
warm_start = None
while (err > stopThr and it <= maxIter):
running_reg = get_reg(it, reg, running_reg)
P, _log = sinkhorn_stabilized(a, b, C, running_reg, maxIter=maxInnerIter, tau=tau,
stopThr=stopThr, verbose=False, log=True,
warm_start=warm_start, eval_freq=eval_freq, print_freq=print_freq,
**kwargs)
warm_start = {}
warm_start['alpha'] = _log['alpha']
warm_start['beta'] = _log['beta']
primal_val = (C * P).sum() + reg * (P * torch.log(P)).sum() - reg * P.sum()
dual_val = (_log['alpha'] * a).sum() + (_log['beta'] * b).sum() - reg * P.sum()
        err = (primal_val - dual_val).item()
        if log:
            log['err'].append(err)
if verbose and it % print_freq == 0:
print('iteration {:5d}, constraint error {:5e}'.format(it, err))
it += 1
if log:
log['alpha'] = _log['alpha']
log['beta'] = _log['beta']
return P, log
else:
return P
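As a self-contained sanity check of the fixed-point iteration used above, here is a plain-Python sketch of the same two Sinkhorn-Knopp updates, independent of PyTorch (`sinkhorn_knopp_plain` is a hypothetical helper for illustration, not part of this module):

```python
import math

def sinkhorn_knopp_plain(a, b, C, reg=0.1, max_iter=500, eps=1e-16):
    """Plain Sinkhorn-Knopp: transport plan P with row sums ~ a, column sums ~ b."""
    na, nb = len(a), len(b)
    # Gibbs kernel K = exp(-C / reg)
    K = [[math.exp(-C[i][j] / reg) for j in range(nb)] for i in range(na)]
    u = [1.0 / na] * na
    v = [1.0 / nb] * nb
    for _ in range(max_iter):
        # v <- b / (K^T u), then u <- a / (K v): the same two updates as above
        KTu = [sum(K[i][j] * u[i] for i in range(na)) for j in range(nb)]
        v = [b[j] / (KTu[j] + eps) for j in range(nb)]
        Kv = [sum(K[i][j] * v[j] for j in range(nb)) for i in range(na)]
        u = [a[i] / (Kv[i] + eps) for i in range(na)]
    # Transport plan P = diag(u) K diag(v)
    return [[u[i] * K[i][j] * v[j] for j in range(nb)] for i in range(na)]

# Toy problem: mass should stay on the cheap diagonal.
P = sinkhorn_knopp_plain([0.5, 0.5], [0.5, 0.5], [[0.0, 1.0], [1.0, 0.0]])
row_sums = [sum(row) for row in P]
```

With the symmetric toy cost above, the row and column sums converge to the prescribed marginals and nearly all mass sits on the diagonal.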
# File: nnunet/experiment_planning/change_batch_size.py (repo: Jiawei-Yang/TumorCP, Apache-2.0 license)
from batchgenerators.utilities.file_and_folder_operations import *
import numpy as np
if __name__ == '__main__':
# input_file = '/home/fabian/data/nnUNet_preprocessed/Task004_Hippocampus/nnUNetPlansv2.1_plans_3D.pkl'
# output_file = '/home/fabian/data/nnUNet_preprocessed/Task004_Hippocampus/nnUNetPlansv2.1_LISA_plans_3D.pkl'
# a = load_pickle(input_file)
# a['plans_per_stage'][0]['batch_size'] = int(np.floor(6 / 9 * a['plans_per_stage'][0]['batch_size']))
# save_pickle(a, output_file)
input_file = '../../data/nnUNet_preprocessed/Task100_LiTSbaseline/nnUNetPlansv2.1_plans_3D.pkl'
output_file = '../../data/nnUNet_preprocessed/Task100_LiTSbaseline/nnUNetPlansv2.1_plans_3D.pkl'
a = load_pickle(input_file)
print(a['plans_per_stage'])
# a['plans_per_stage'][0]['batch_size'] = int(np.floor(6 / 9 * a['plans_per_stage'][0]['batch_size']))
a['plans_per_stage'][0]['patch_size'] = np.array([128, 128, 128])
a['plans_per_stage'][1]['patch_size'] = np.array([128, 128, 128])
a['plans_per_stage'][0]['num_pool_per_axis'] = np.array([5, 5, 5])
a['plans_per_stage'][1]['num_pool_per_axis'] = np.array([5, 5, 5])
a['plans_per_stage'][0]['pool_op_kernel_sizes'] = [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]]
a['plans_per_stage'][1]['pool_op_kernel_sizes'] = [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]]
a['plans_per_stage'][0]['conv_kernel_sizes'] = [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]
a['plans_per_stage'][1]['conv_kernel_sizes'] = [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]
    save_pickle(a, output_file)
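batchgenerators' `load_pickle`/`save_pickle` are thin wrappers around the standard `pickle` module. A minimal stdlib-only sketch of the same edit-and-resave round trip on a toy plans dict (the keys mirror the script above; the temp path and values are made up):

```python
import os
import pickle
import tempfile

# Toy stand-in for an nnU-Net plans file.
plans = {'plans_per_stage': [{'batch_size': 9, 'patch_size': [96, 96, 96]}]}
path = os.path.join(tempfile.mkdtemp(), 'plans.pkl')
with open(path, 'wb') as f:
    pickle.dump(plans, f)

# Load, edit the nested dict in place, and write back to the same path,
# as the script above does with input_file == output_file.
with open(path, 'rb') as f:
    a = pickle.load(f)
a['plans_per_stage'][0]['patch_size'] = [128, 128, 128]
with open(path, 'wb') as f:
    pickle.dump(a, f)

with open(path, 'rb') as f:
    reloaded = pickle.load(f)
```

Untouched keys (here `batch_size`) survive the round trip unchanged.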
# File: Plotters/Results/Plots_Paper_One.py (repo: PouyaREZ/Wastewater_Energy_Optimization, MIT license)
# -*- coding: utf-8 -*-
"""
Created on Fri Feb 4 2020
@Author: PouyaRZ
____________________________________________________
Plots to produce:
1. LCC of equipment for each scenario for all the individuals
2. SCC of equipment for each scenario for all the individuals
3. SCC vs LCC scatter plot
4. SCC vs chiller type
5. SCC vs CHP type
6. LCC vs chiller type
7. LCC vs CHP type
8. Traces of building types across all the runs
____________________________________________________
"""
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
def DF_Filter(filename):
file = np.loadtxt(filename, dtype='float')
inputDF = pd.DataFrame(file)
error_tol = 1.15
# print('GFA stats:')
# print(inputDF.iloc[:,38].describe())
print('+++++ processing %s +++++\n'%(filename))
print('Count duplicates:')
condition1 = inputDF.duplicated()==True
print(inputDF[condition1][38].count())
print('Count under the min GFA:') # Count non-trivial neighborhoods
condition2 = inputDF[38] <= 1/error_tol#<=647497/10
print(inputDF[condition2][38].count())
print('Count over the max GFA:')
condition3 = inputDF[38]>=647497*5*error_tol
print(inputDF[condition3][38].count())
print('Count over the max Site GFA:')
condition4 = inputDF[38]/inputDF[36]>=647497*error_tol
print(inputDF[condition4][38].count())
print('Count valid answers:')
print(len(inputDF) - inputDF[condition1 | condition2 | condition3 | condition4][38].count())
# print('------------------')
# Filtering the inadmissible results
Filtered = ~(condition1 | condition2 | condition3 | condition4)
inputDF = inputDF[Filtered]
inputDF.reset_index(inplace=True, drop=True)
# print('Annual energy demand stats (MWh):')
inputDF[26] /= inputDF[38] # Normalizing LCC ($/m2)
inputDF[27] /= inputDF[38] # Normalizing SCC ($/m2)
inputDF[39] /= inputDF[38] # Normalizing CO2 (Tonnes/m2)
inputDF[40] /= (10**3*inputDF[38]) # Normalizing total energy demand (MWh/m2)
inputDF[41] /= inputDF[38] # Normalizing total wwater treatment demand (L/m2)
    for i in range(29,36): # Converting fractional areas to percentages
inputDF[i] = inputDF[i] * 100
# print(inputDF[40].describe())
return inputDF
### MAIN FUNCTION
print('loading data')
filenames = ['../RQ1_W_CWWTP_ModConsts_Feb17/SDO_LHS_TestRuns288_Constraint_SF_Test.txt',
'../RQ1_WO_CWWTP_ModConsts_Feb17/SDO_LHS_TestRuns288_Constraint_SF_Test.txt']
DFNames = ['CCHP|CWWTP','CCHP+WWT']
DFs = {}
for i in range(2):
DFs[DFNames[i]] = DF_Filter(filenames[i])
plt.style.use('ggplot')
colors_rb = {DFNames[0]:'r', DFNames[1]:'b'}
# =============================================================================
## CHP/Chiller/Solar Types used in the individual neighborhood
CHP_Types = {}
CHP_Types[1] = 'Gas_1'
CHP_Types[2] = 'Gas_2'
CHP_Types[3] = 'Gas_3'
CHP_Types[4] = 'Gas_4'
CHP_Types[5] = 'Gas_5'
CHP_Types[6] = 'Micro_1'
CHP_Types[7] = 'Micro_2'
CHP_Types[8] = 'Micro_3'
CHP_Types[9] = 'Recipro_1'
CHP_Types[10] = 'Recipro_2'
CHP_Types[11] = 'Recipro_3'
CHP_Types[12] = 'Recipro_4'
CHP_Types[13] = 'Recipro_5'
CHP_Types[14] = 'Steam_1'
CHP_Types[15] = 'Steam_2'
CHP_Types[16] = 'Steam_3'
CHP_Types[17] = 'Fuel_Cell_1'
CHP_Types[18] = 'Fuel_Cell_2'
CHP_Types[19] = 'Fuel_Cell_3'
CHP_Types[20] = 'Fuel_Cell_4'
CHP_Types[21] = 'Fuel_Cell_5'
CHP_Types[22] = 'Fuel_Cell_6'
CHP_Types[23] = 'Bio_1'
CHP_Types[24] = 'Bio_2'
CHP_Types[25] = 'Bio_3'
CHP_Types[26] = 'Bio_4'
CHP_Types[27] = 'Bio_5'
CHP_Types[28] = 'Bio_6'
CHP_Types[29] = 'Bio_7'
CHP_Types[30] = 'Bio_8'
CHP_Types[31] = 'Bio_9'
CHP_Types[32] = 'Bio_10'
Chiller_Types = {}
Chiller_Types[1] = 'Electric_1'
Chiller_Types[2] = 'Electric_2'
Chiller_Types[3] = 'Electric_3'
Chiller_Types[4] = 'Electric_4'
Chiller_Types[5] = 'Electric_5'
Chiller_Types[6] = 'Electric_6'
Chiller_Types[7] = 'Electric_7'
Chiller_Types[8] = 'Electric_8'
Chiller_Types[9] = 'Electric_9'
Chiller_Types[10] = 'Absorp_1'
Chiller_Types[11] = 'Absorp_2'
Chiller_Types[12] = 'Absorp_3'
Chiller_Types[13] = 'Absorp_4'
Chiller_Types[14] = 'Absorp_5'
Chiller_Types[15] = 'Absorp_6'
Chiller_Types[16] = 'Absorp_7'
Chiller_Types[17] = 'Absorp_8'
WWT_Types = {}
WWT_Types[1] = "FO_MD"
WWT_Types[2] = "FO_RO"
WWT_Types[3] = "CWWTP"
## CHP, Chiller and WWT name assignments
# CHP = {}
# Chiller = {}
# WWT = {}
for DFName in DFNames:
# CHP[DFName] = np.array([CHP_Types[int(i)] for i in DFs[DFName][21]]) # Making strings of CHP names instead of integers
DFs[DFName][21] = np.array([CHP_Types[int(i)] for i in DFs[DFName][21]]) # Making strings of CHP names instead of integers
# Chiller[DFName] = np.array([Chiller_Types[int(i)] for i in DFs[DFName][22]]) # Making strings of Chiller names instead of integers
DFs[DFName][22] = np.array([Chiller_Types[int(i)] for i in DFs[DFName][22]]) # Making strings of Chiller names instead of integers
# WWT[DFName] = np.array([WWT_Types[int(i)] for i in DFs[DFName][24]]) # Making strings of WWT module names instead of integers
DFs[DFName][24] = np.array([WWT_Types[int(i)] for i in DFs[DFName][24]]) # Making strings of WWT module names instead of integers
# =============================================================================
######################## PLOTS ##########################
#############################################
print('plotting overall LCC and SCC graphs')
# LCC
plt.figure(figsize=(10,5))
for DFName in DFNames:
sortedDF = DFs[DFName].sort_values(by=26, ascending=True).reset_index(drop=True)
plt.scatter(x=sortedDF.index,y=(sortedDF[26]/10**3),label=DFName, s=2, alpha=0.5, c=colors_rb[DFName])
# (DFs[DFName][0][26]/10**6).plot(label=DFName)
plt.xlabel('Rank')
plt.ylabel(r'LCC (k\$/$m^2$)')
# plt.title('LCC')
plt.legend()
plt.savefig('LCC_Ascending.png', dpi=400, bbox_inches='tight')
# SCC
plt.figure(figsize=(10,5))
for DFName in DFNames:
sortedDF = DFs[DFName].sort_values(by=27, ascending=True).reset_index(drop=True)
plt.scatter(x=sortedDF.index,y=(sortedDF[27]/10**3),label=DFName, s=2, alpha=0.5, c=colors_rb[DFName])
# (DFs[DFName][0][26]/10**6).plot(label=DFName)
plt.xlabel('Rank')
plt.ylabel(r'SCC (k\$/$m^2$)')
# plt.title('SCC')
plt.legend()
plt.savefig('SCC_Ascending.png', dpi=400, bbox_inches='tight')
plt.close('all')
#############################################
print('plotting LCC and SCC box plots')
print('\n#############################################')
print('Stats of LCC ($/m2) for Disintegrated Case:\n',(DFs[DFNames[0]][26]).describe())
print('Stats of LCC ($/m2) for Integrated Case:\n',(DFs[DFNames[1]][26]).describe())
print('Stats of SCC ($/m2) for Disintegrated Case:\n',(DFs[DFNames[0]][27]).describe())
print('Stats of SCC ($/m2) for Integrated Case:\n',(DFs[DFNames[1]][27]).describe())
print('#############################################\n')
# =============================================================================
# # LCC
# plt.figure(figsize=(10,5))
# # for DFName in DFNames:
# plt.boxplot(x=[(DFs[DFNames[0]][26]/10**3), (DFs[DFNames[1]][26]/10**3)])
# # (DFs[DFName][0][26]/10**6).plot(label=DFName)
# # plt.xlabel('Rank')
# plt.ylabel(r'LCC (k\$/$m^2$)')
# plt.xticks([1,2],[DFNames[0],DFNames[1]])
# # plt.title('LCC')
# plt.savefig('LCC_Boxplot.png', dpi=400, bbox_inches='tight')
#
#
#
# # SCC
# plt.figure(figsize=(10,5))
# # for DFName in DFNames:
# plt.boxplot(x=[(DFs[DFNames[0]][27]/10**3), (DFs[DFNames[1]][27]/10**3)])
# # (DFs[DFName][0][26]/10**6).plot(label=DFName)
# # plt.xlabel('Rank')
# plt.ylabel(r'SCC (k\$/$m^2$)')
# plt.xticks([1,2],[DFNames[0],DFNames[1]])
# # plt.title('LCC')
# plt.savefig('SCC_Boxplot.png', dpi=400, bbox_inches='tight')
#
# plt.close('all')
# =============================================================================
'''
#############################################
print('plotting LCC/SCC vs total neighborhood energy and ww graphs')
print('\n#############################################')
print('Stats of Total Energy Demand (MWh/m2) for Disintegrated Case:\n',(DFs[DFNames[0]][40]).describe())
print('Stats of Total Energy Demand (MWh/m2) for Integrated Case:\n',(DFs[DFNames[1]][40]).describe())
print('Stats of Total Wastewater Treatment Demand (m3/m2) for Disintegrated Case:\n',(DFs[DFNames[0]][41]/10**3).describe())
print('Stats of Total Wastewater Treatment Demand (m3/m2) for Integrated Case:\n',(DFs[DFNames[1]][41]/10**3).describe())
print('#############################################\n')
# LCC vs Neighborhood's Total Energy Use
plt.figure(figsize=(10,5))
for DFName in DFNames:
sortedDF = DFs[DFName].sort_values(by=40, ascending=True).reset_index(drop=True)
plt.scatter(x=(sortedDF[40]),y=(sortedDF[26]/10**3),label=DFName, s=2, alpha=0.5, c=colors_rb[DFName])
# (DFs[DFName][0][26]/10**6).plot(label=DFName)
plt.xlabel(r'Total Energy Demand (MWh/$m^2$)')
plt.ylabel(r'LCC (k\$/$m^2$)')
# plt.title('LCC')
plt.legend()
plt.savefig('LCC_vs_Energy_Demand.png', dpi=400, bbox_inches='tight')
# LCC vs Neighborhood's Total WWater Demand
plt.figure(figsize=(10,5))
for DFName in DFNames:
sortedDF = DFs[DFName].sort_values(by=41, ascending=True).reset_index(drop=True)
plt.scatter(x=(sortedDF[41]/10**3),y=(sortedDF[26]/10**3),label=DFName, s=2, alpha=0.5, c=colors_rb[DFName])
# (DFs[DFName][0][26]/10**6).plot(label=DFName)
plt.xlabel(r'Total Wastewater Treatment Demand ($m^3$/$m^2$)')
plt.ylabel(r'LCC (k\$/$m^2$)')
# plt.title('LCC')
plt.legend()
plt.savefig('LCC_vs_WWater_Demand.png', dpi=400, bbox_inches='tight')
# SCC vs Neighborhood's Total Energy Use
plt.figure(figsize=(10,5))
for DFName in DFNames:
sortedDF = DFs[DFName].sort_values(by=40, ascending=True).reset_index(drop=True)
plt.scatter(x=(sortedDF[40]),y=(sortedDF[27]/10**3),label=DFName, s=2, alpha=0.5, c=colors_rb[DFName])
# (DFs[DFName][0][26]/10**6).plot(label=DFName)
plt.xlabel(r'Total Energy Demand (MWh/$m^2$)')
plt.ylabel(r'SCC (k\$/$m^2$)')
# plt.title('LCC')
plt.legend()
plt.savefig('SCC_vs_Energy_Demand.png', dpi=400, bbox_inches='tight')
# SCC vs Neighborhood's Total WWater Demand
plt.figure(figsize=(10,5))
for DFName in DFNames:
sortedDF = DFs[DFName].sort_values(by=41, ascending=True).reset_index(drop=True)
plt.scatter(x=(sortedDF[41]/10**3),y=(sortedDF[27]/10**3),label=DFName, s=2, alpha=0.5, c=colors_rb[DFName])
# (DFs[DFName][0][26]/10**6).plot(label=DFName)
plt.xlabel(r'Total Wastewater Treatment Demand ($m^3$/$m^2$)')
plt.ylabel(r'SCC (k\$/$m^2$)')
# plt.title('LCC')
plt.legend()
plt.savefig('SCC_vs_WWater_Demand.png', dpi=400, bbox_inches='tight')
plt.close('all')
#############################################
print('plotting building mix vs neighborhood energy and ww graphs')
# Building Mix vs Neighborhood's Total WWater Demand (integrated)
DFName = 'CCHP+WWT'
bldg_types = ['Res','Off','Com','Ind','Hos','Med','Edu']
colors = ['m','b','c','g','y','orange','r']
columns = list(range(29,36))
plt.figure(figsize=(10,5))
sortedDF = DFs[DFName].sort_values(by=41, ascending=True).reset_index(drop=True)
for i in range(len(bldg_types)):
    plt.scatter(x=(sortedDF[41]/10**3), y=sortedDF.iloc[:, columns[i]],
                s=0.5, label=bldg_types[i], c=colors[i], alpha=0.5)
# (DFs[DFName][0][26]/10**6).plot(label=DFName)
plt.xlabel(r'Total Wastewater Treatment Demand ($m^3$/$m^2$)')
plt.ylabel('Percent of Total GFA (%)')
plt.ylim(0, 100)
plt.xlim(0,11)
# plt.title('LCC')
plt.legend()
plt.savefig('Bldg_Mix_vs_WWater_Demand_Integ.png', dpi=400, bbox_inches='tight')
# Building Mix vs Neighborhood's Total WWater Demand (Disintegrated)
DFName = 'CCHP|CWWTP'
bldg_types = ['Res','Off','Com','Ind','Hos','Med','Edu']
colors = ['m','b','c','g','y','orange','r']
columns = list(range(29,36))
plt.figure(figsize=(10,5))
sortedDF = DFs[DFName].sort_values(by=41, ascending=True).reset_index(drop=True)
for i in range(len(bldg_types)):
    plt.scatter(x=(sortedDF[41]/10**3), y=sortedDF.iloc[:, columns[i]],
                s=0.5, label=bldg_types[i], c=colors[i], alpha=0.5)
# (DFs[DFName][0][26]/10**6).plot(label=DFName)
plt.xlabel(r'Total Wastewater Treatment Demand ($m^3$/$m^2$)')
plt.ylabel('Percent of Total GFA (%)')
# plt.title('LCC')
plt.ylim(0, 100)
plt.xlim(0,11)
plt.legend()
plt.savefig('Bldg_Mix_vs_WWater_Demand_Disinteg.png', dpi=400, bbox_inches='tight')
# Building Mix vs Neighborhood's Total Energy Demand (integrated)
DFName = 'CCHP+WWT'
bldg_types = ['Res','Off','Com','Ind','Hos','Med','Edu']
colors = ['m','b','c','g','y','orange','r']
columns = list(range(29,36))
plt.figure(figsize=(10,5))
sortedDF = DFs[DFName].sort_values(by=40, ascending=True).reset_index(drop=True)
for i in range(len(bldg_types)):
    plt.scatter(x=(sortedDF[40]), y=sortedDF.iloc[:, columns[i]],
                s=0.5, label=bldg_types[i], c=colors[i], alpha=0.5)
# (DFs[DFName][0][26]/10**6).plot(label=DFName)
plt.xlabel(r'Total Energy Demand (MWh/$m^2$)')
plt.ylabel('Percent of Total GFA (%)')
# plt.title('LCC')
plt.ylim(0, 100)
plt.xlim(0,1)
plt.legend()
plt.savefig('Bldg_Mix_vs_Energy_Demand_Integ.png', dpi=400, bbox_inches='tight')
# Building Mix vs Neighborhood's Total Energy Demand (Disintegrated)
DFName = 'CCHP|CWWTP'
bldg_types = ['Res','Off','Com','Ind','Hos','Med','Edu']
colors = ['m','b','c','g','y','orange','r']
columns = list(range(29,36))
plt.figure(figsize=(10,5))
sortedDF = DFs[DFName].sort_values(by=40, ascending=True).reset_index(drop=True)
for i in range(len(bldg_types)):
    plt.scatter(x=(sortedDF[40]), y=sortedDF.iloc[:, columns[i]],
                s=0.5, label=bldg_types[i], c=colors[i], alpha=0.5)
# (DFs[DFName][0][26]/10**6).plot(label=DFName)
plt.xlabel(r'Total Energy Demand (MWh/$m^2$)')
plt.ylabel('Percent of Total GFA (%)')
# plt.title('LCC')
plt.ylim(0, 100)
plt.xlim(0,1)
plt.legend()
plt.savefig('Bldg_Mix_vs_Energy_Demand_Disinteg.png', dpi=400, bbox_inches='tight')
plt.close('all')
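One subtlety in the sort-then-scatter pattern used throughout this script: `plt.scatter` pairs x and y row by row, so both columns must come from the same sorted, index-reset frame — taking x from a sorted copy while y stays in the original row order scrambles the pairs. A minimal self-contained illustration with synthetic data (the integer column labels 41/29 mirror this script's convention):

```python
import pandas as pd

df = pd.DataFrame({41: [3.0, 1.0, 2.0], 29: [30.0, 10.0, 20.0]})
sorted_df = df.sort_values(by=41, ascending=True).reset_index(drop=True)

# Correct pairing: both columns taken from the same (sorted) frame.
good_pairs = list(zip(sorted_df[41], sorted_df[29]))
# Broken pairing: x sorted, y still in the original row order.
bad_pairs = list(zip(sorted_df[41], df[29]))

assert good_pairs == [(1.0, 10.0), (2.0, 20.0), (3.0, 30.0)]
assert bad_pairs != good_pairs
```

For a scatter plot the sort is cosmetic, but mixing sorted and unsorted columns silently plots wrong points.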
#############################################
print('plotting Supply type vs total neighborhood energy and ww graphs')
# Total Energy Demand vs CHP
plt.figure(figsize=(10,5))
for DFName in DFNames:
    sortedDF = DFs[DFName].sort_values(by=40, ascending=True).reset_index(drop=True)
    plt.scatter(x=sortedDF[21], y=(sortedDF[40]), label=DFName, s=2, alpha=0.5, c=colors_rb[DFName])
plt.xlabel(r'CHP Type')
plt.ylabel(r'Total Energy Demand (MWh/$m^2$)')
plt.legend()
plt.savefig('Total_Energy_vs_CHP.png', dpi=400, bbox_inches='tight')
# Total WWater Demand vs CHP
plt.figure(figsize=(10,5))
for DFName in DFNames:
    sortedDF = DFs[DFName].sort_values(by=41, ascending=True).reset_index(drop=True)
    plt.scatter(x=sortedDF[21], y=(sortedDF[41]/10**3), label=DFName, s=2, alpha=0.5, c=colors_rb[DFName])
plt.xlabel(r'CHP Type')
plt.ylabel(r'Total Wastewater Treatment Demand ($m^3$/$m^2$)')
plt.legend()
plt.savefig('Total_WWater_vs_CHP.png', dpi=400, bbox_inches='tight')
# Total Energy Demand vs Chiller
plt.figure(figsize=(10,5))
for DFName in DFNames:
    sortedDF = DFs[DFName].sort_values(by=40, ascending=True).reset_index(drop=True)
    plt.scatter(x=sortedDF[22], y=(sortedDF[40]), label=DFName, s=2, alpha=0.5, c=colors_rb[DFName])
plt.xlabel(r'Chiller Type')
plt.ylabel(r'Total Energy Demand (MWh/$m^2$)')
plt.legend()
plt.savefig('Total_Energy_vs_Chiller.png', dpi=400, bbox_inches='tight')
# Total WWater Demand vs Chiller
plt.figure(figsize=(10,5))
for DFName in DFNames:
    sortedDF = DFs[DFName].sort_values(by=41, ascending=True).reset_index(drop=True)
    plt.scatter(x=sortedDF[22], y=(sortedDF[41]/10**3), label=DFName, s=2, alpha=0.5, c=colors_rb[DFName])
plt.xlabel(r'Chiller Type')
plt.ylabel(r'Total Wastewater Treatment Demand ($m^3$/$m^2$)')
plt.legend()
plt.savefig('Total_WWater_vs_Chiller.png', dpi=400, bbox_inches='tight')
# Total Energy Demand vs WWT (integrated)
plt.figure(figsize=(10,5))
DFName = 'CCHP+WWT'
sortedDF = DFs[DFName].sort_values(by=40, ascending=True).reset_index(drop=True)
plt.scatter(x=sortedDF[24], y=(sortedDF[40]), s=2, c=colors_rb[DFName])
plt.xlabel(r'WWT Type')
plt.ylabel(r'Total Energy Demand (MWh/$m^2$)')
plt.savefig('Total_Energy_vs_WWT_Integ.png', dpi=400, bbox_inches='tight')
# Total WWater Demand vs WWT (integrated)
plt.figure(figsize=(10,5))
DFName = 'CCHP+WWT'
sortedDF = DFs[DFName].sort_values(by=41, ascending=True).reset_index(drop=True)
plt.scatter(x=sortedDF[24], y=(sortedDF[41]/10**3), s=2, c=colors_rb[DFName])
plt.xlabel(r'WWT Type')
plt.ylabel(r'Total Wastewater Treatment Demand ($m^3$/$m^2$)')
plt.savefig('Total_WWater_vs_WWT_Integ.png', dpi=400, bbox_inches='tight')
'''
plt.close('all')
#############################################
print('plotting pareto fronts')
# LCC vs CO2
plt.figure(figsize=(10,5))
for DFName in DFNames:
    plt.scatter(x=DFs[DFName][26]/10**3, y=DFs[DFName][39], label=DFName, s=2, alpha=0.5, c=colors_rb[DFName])
plt.xlabel(r'LCC (k\$/$m^2$)')
plt.ylabel(r'Lifecycle $CO_{2e}$ (T/$m^2$)')
plt.legend()
plt.savefig('CO2_vs_LCC.png', dpi=400, bbox_inches='tight')
#############################################
# LCC vs SCC
plt.figure(figsize=(10,5))
for DFName in DFNames:
    plt.scatter(x=DFs[DFName][26]/10**3, y=DFs[DFName][27]/10**3, label=DFName, s=2, alpha=0.5, c=colors_rb[DFName])
plt.xlabel(r'LCC (k\$/$m^2$)')
plt.ylabel(r'SCC (k\$/$m^2$)')
plt.legend()
plt.savefig('SCC_vs_LCC.png', dpi=400, bbox_inches='tight')
# LCC vs SCC w Generation-based transparency
plt.figure(figsize=(10,5))
for DFName in DFNames:
    alphas = np.linspace(0.1, 1, len(DFs[DFName]))
    rgba_colors = np.zeros((len(DFs[DFName]), 4))
    if DFName == DFNames[0]:
        rgba_colors[:, 0] = 1.0  # red
    else:
        rgba_colors[:, 2] = 1.0  # blue
    rgba_colors[:, 3] = alphas
    plt.scatter(x=DFs[DFName][26]/10**3, y=DFs[DFName][27]/10**3, label=DFName, s=1, c=rgba_colors)
plt.xlabel(r'LCC (k\$/$m^2$)')
plt.ylabel(r'SCC (k\$/$m^2$)')
plt.legend()
plt.savefig('SCC_vs_LCC_Gen_Colorcoded.png', dpi=400, bbox_inches='tight')
# LCC vs SCC w Generation-based transparency and elite-filtered
plt.figure(figsize=(10,5))
for DFName in DFNames:
    DF = DFs[DFName][DFs[DFName][26]/10**3 <= 500]
    DF = DF[DF[27]/10**3 <= 0.1]
    alphas = np.linspace(0.1, 1, len(DF))
    rgba_colors = np.zeros((len(DF), 4))
    if DFName == DFNames[0]:
        rgba_colors[:, 0] = 1.0  # red
    else:
        rgba_colors[:, 2] = 1.0  # blue
    rgba_colors[:, 3] = alphas
    plt.scatter(x=DF[26]/10**3, y=DF[27]/10**3, label=DFName, s=1, c=rgba_colors)
plt.xlabel(r'LCC (k\$/$m^2$)')
plt.ylabel(r'SCC (k\$/$m^2$)')
plt.legend()
plt.savefig('SCC_vs_LCC_Gen_Colorcoded_Filtered.png', dpi=400, bbox_inches='tight')
# =============================================================================
# # LCC vs SCC (integrated)
# plt.figure(figsize=(10,5))
# DFName = 'CCHP+WWT'
# plt.scatter(x=DFs[DFName][26]/10**3,y=DFs[DFName][27]/10**3, s=2)
# plt.xlabel(r'LCC (k\$/$m^2$)')
# plt.ylabel(r'SCC (k\$/$m^2$)')
# plt.savefig('SCC_vs_LCC_Integ.png', dpi=400, bbox_inches='tight')
#
#
# # LCC vs SCC (disintegrated)
# plt.figure(figsize=(10,5))
# DFName = 'CCHP|CWWTP'
# plt.scatter(x=DFs[DFName][26]/10**3,y=DFs[DFName][27]/10**3, s=2)
# # (DFs[DFName][0][26]/10**6).plot(label=DFName)
# plt.xlabel(r'LCC (k\$/$m^2$)')
# plt.ylabel(r'SCC (k\$/$m^2$)')
# # plt.title('LCC')
# plt.savefig('SCC_vs_LCC_Disinteg.png', dpi=400, bbox_inches='tight')
#
# =============================================================================
#############################################
print('plotting Supply type vs opt objectives')
print('\n#############################################')
Disinteg_Grpd_by_CHP_meanLCC = DFs[DFNames[0]].groupby(21)[26].mean()
Disinteg_Grpd_by_CHP_medLCC = DFs[DFNames[0]].groupby(21)[26].median()
Disinteg_Grpd_by_CHP_meanSCC = DFs[DFNames[0]].groupby(21)[27].mean()
Disinteg_Grpd_by_CHP_medSCC = DFs[DFNames[0]].groupby(21)[27].median()
Integ_Grpd_by_CHP_meanLCC = DFs[DFNames[1]].groupby(21)[26].mean()
Integ_Grpd_by_CHP_medLCC = DFs[DFNames[1]].groupby(21)[26].median()
Integ_Grpd_by_CHP_meanSCC = DFs[DFNames[1]].groupby(21)[27].mean()
Integ_Grpd_by_CHP_medSCC = DFs[DFNames[1]].groupby(21)[27].median()
items = [Disinteg_Grpd_by_CHP_meanLCC, Disinteg_Grpd_by_CHP_medLCC, Disinteg_Grpd_by_CHP_meanSCC,
         Disinteg_Grpd_by_CHP_medSCC, Integ_Grpd_by_CHP_meanLCC, Integ_Grpd_by_CHP_medLCC,
         Integ_Grpd_by_CHP_meanSCC, Integ_Grpd_by_CHP_medSCC]
items_names = ['Disinteg_Grpd_by_CHP_meanLCC', 'Disinteg_Grpd_by_CHP_medLCC', 'Disinteg_Grpd_by_CHP_meanSCC',
               'Disinteg_Grpd_by_CHP_medSCC', 'Integ_Grpd_by_CHP_meanLCC', 'Integ_Grpd_by_CHP_medLCC',
               'Integ_Grpd_by_CHP_meanSCC', 'Integ_Grpd_by_CHP_medSCC']
for i in range(len(items)):
    print(items_names[i], items[i])
print('#############################################\n')
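The eight per-CHP-type statistics printed above can also be produced in one pass per scenario with `groupby(...).agg(...)`. A small sketch on synthetic data — the integer column labels 21/26/27 mirror this script's convention (CHP type, LCC, SCC):

```python
import pandas as pd

df = pd.DataFrame({21: ['a', 'a', 'b'],      # CHP type
                   26: [10.0, 30.0, 50.0],   # LCC
                   27: [1.0, 3.0, 5.0]})     # SCC
stats = df.groupby(21)[[26, 27]].agg(['mean', 'median'])

assert stats.loc['a', (26, 'mean')] == 20.0
assert stats.loc['b', (27, 'median')] == 5.0
```

The result is a single table with MultiIndex columns, which prints more compactly than eight named Series.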
# shapes = {DFNames[0]: '+', DFNames[1]: 'x'}
# LCC vs CHP
for DFName in DFNames:
    plt.figure(figsize=(10,5))
    DF = DFs[DFName].sort_values(by=21)
    plt.scatter(x=DF[21], y=DF[26]/10**3, label=DFName, s=2, alpha=0.5)#, c=colors_rb[DFName])#, marker=shapes[DFName])
    plt.xlabel(r'CHP Type')
    plt.xticks(rotation=75)
    plt.ylabel(r'LCC (k\$/$m^2$)')
    plt.ylim(-5, 500)
    # plt.legend()
    if DFName == 'CCHP|CWWTP':
        plt.savefig('LCC_vs_CHP_disinteg.png', dpi=400, bbox_inches='tight')
    else:
        plt.savefig('LCC_vs_CHP_integ.png', dpi=400, bbox_inches='tight')
# SCC vs CHP
for DFName in DFNames:
    plt.figure(figsize=(10,5))
    DF = DFs[DFName].sort_values(by=21)
    plt.scatter(x=DF[21], y=DF[27]/10**3, label=DFName, s=2, alpha=0.5)#, c=colors_rb[DFName])
    plt.xlabel(r'CHP Type')
    plt.xticks(rotation=75)
    plt.ylabel(r'SCC (k\$/$m^2$)')
    plt.ylim(-0.01, 0.1)
    # plt.legend()
    if DFName == 'CCHP|CWWTP':
        plt.savefig('SCC_vs_CHP_disinteg.png', dpi=400, bbox_inches='tight')
    else:
        plt.savefig('SCC_vs_CHP_integ.png', dpi=400, bbox_inches='tight')
# SCC vs CHP with LCC-oriented transparency
for DFName in DFNames:
    plt.figure(figsize=(10,5))
    DF = DFs[DFName].sort_values(by=21)
    DF = DF[(DF[26]<=100) & (DF[27]<=100)]
    print('number of indivs plotted: ', len(DF))
    # Normalized LCCs (lowest LCC: opaque; highest LCC: faint), clipped to the valid alpha range [0, 1]
    alphas = np.clip(1.2 - DF[26]/DF[26].max(), 0, 1)
    # alphas = np.linspace(0.1, 1, len(DFs[DFName]))
    rgba_colors = np.zeros((len(DF), 4))
    rgba_colors[:, 3] = alphas
    plt.scatter(x=DF[21], y=DF[27]/10**3, label=DFName, s=1, c=rgba_colors)
    plt.xlabel(r'CHP Type')
    plt.xticks(rotation=75)
    plt.ylabel(r'SCC (k\$/$m^2$)')
    plt.ylim(-0.01, 0.1)
    # plt.legend()
    if DFName == 'CCHP|CWWTP':
        plt.savefig('SCC_vs_CHP_disinteg_colorCoded.png', dpi=400, bbox_inches='tight')
    else:
        plt.savefig('SCC_vs_CHP_integ_colorCoded.png', dpi=400, bbox_inches='tight')
# =============================================================================
# # LCC vs CHP (integrated)
# plt.figure(figsize=(10,5))
# DFName = 'CCHP+WWT'
# plt.scatter(x=DFs[DFName][21], y=DFs[DFName][26]/10**3, s=2)
# plt.xlabel(r'CHP Type')
# plt.ylabel(r'LCC (k\$/$m^2$)')
# plt.savefig('LCC_vs_CHP_Integ.png', dpi=400, bbox_inches='tight')
#
#
# # LCC vs CHP (disintegrated)
# plt.figure(figsize=(10,5))
# DFName = 'CCHP|CWWTP'
# plt.scatter(x=DFs[DFName][21], y=DFs[DFName][26]/10**3, s=2)
# plt.xlabel(r'CHP Type')
# plt.ylabel(r'LCC (k\$/$m^2$)')
# plt.savefig('LCC_vs_CHP_Disinteg.png', dpi=400, bbox_inches='tight')
# =============================================================================
# LCC vs Chiller
for DFName in DFNames:
    plt.figure(figsize=(10,5))
    DF = DFs[DFName].sort_values(by=22)
    plt.scatter(x=DF[22], y=DF[26]/10**3, label=DFName, s=2, alpha=0.5)#, c=colors_rb[DFName])
    plt.xlabel(r'Chiller Type')
    plt.xticks(rotation=75)
    plt.ylabel(r'LCC (k\$/$m^2$)')
    plt.ylim(-5, 500)
    # plt.legend()
    if DFName == 'CCHP|CWWTP':
        plt.savefig('LCC_vs_Chiller_disinteg.png', dpi=400, bbox_inches='tight')
    else:
        plt.savefig('LCC_vs_Chiller_integ.png', dpi=400, bbox_inches='tight')
# SCC vs Chiller
for DFName in DFNames:
    plt.figure(figsize=(10,5))
    DF = DFs[DFName].sort_values(by=22)
    plt.scatter(x=DF[22], y=DF[27]/10**3, label=DFName, s=2, alpha=0.5)#, c=colors_rb[DFName])
    plt.xlabel(r'Chiller Type')
    plt.xticks(rotation=75)
    plt.ylabel(r'SCC (k\$/$m^2$)')
    plt.ylim(-0.01, 0.1)
    # plt.legend()
    if DFName == 'CCHP|CWWTP':
        plt.savefig('SCC_vs_Chiller_disinteg.png', dpi=400, bbox_inches='tight')
    else:
        plt.savefig('SCC_vs_Chiller_integ.png', dpi=400, bbox_inches='tight')
# SCC vs Chiller with LCC-oriented transparency
for DFName in DFNames:
    plt.figure(figsize=(10,5))
    DF = DFs[DFName].sort_values(by=22)
    DF = DF[(DF[26]<=100) & (DF[27]<=0.5)]
    print('number of indivs plotted: ', len(DF))
    alphas = 1 - DF[26]/DF[26].max() # Normalized LCCs (lowest LCC: 1; highest LCC: 0)
    # alphas = np.linspace(0.1, 1, len(DFs[DFName]))
    rgba_colors = np.zeros((len(DF), 4))
    rgba_colors[:, 3] = alphas
    plt.scatter(x=DF[22], y=DF[27]/10**3, label=DFName, s=1, c=rgba_colors)
    plt.xlabel(r'Chiller Type')
    plt.xticks(rotation=75)
    plt.ylabel(r'SCC (k\$/$m^2$)')
    plt.ylim(-0.01, 0.1)
    # plt.legend()
    if DFName == 'CCHP|CWWTP':
        plt.savefig('SCC_vs_Chiller_disinteg_colorCoded.png', dpi=400, bbox_inches='tight')
    else:
        plt.savefig('SCC_vs_Chiller_integ_colorCoded.png', dpi=400, bbox_inches='tight')
# =============================================================================
# # LCC vs Chiller (integrated)
# plt.figure(figsize=(10,5))
# DFName = 'CCHP+WWT'
# plt.scatter(x=DFs[DFName][22], y=DFs[DFName][26]/10**3, s=2)
# plt.xlabel(r'Chiller Type')
# plt.ylabel(r'LCC (k\$/$m^2$)')
# plt.savefig('LCC_vs_Chiller_Integ.png', dpi=400, bbox_inches='tight')
#
#
# # LCC vs Chiller (disintegrated)
# plt.figure(figsize=(10,5))
# DFName = 'CCHP|CWWTP'
# plt.scatter(x=DFs[DFName][22], y=DFs[DFName][26]/10**3, s=2)
# plt.xlabel(r'Chiller Type')
# plt.ylabel(r'LCC (k\$/$m^2$)')
# plt.savefig('LCC_vs_Chiller_Disinteg.png', dpi=400, bbox_inches='tight')
# =============================================================================
# LCC vs WWT (integrated)
plt.figure(figsize=(10,5))
DFName = 'CCHP+WWT'
DF = DFs[DFName].sort_values(by=24)
plt.scatter(x=DF[24], y=DF[26]/10**3, s=2)#, c=colors_rb[DFName])
plt.xlabel(r'WWT Type')
plt.xticks(rotation=75)
plt.ylabel(r'LCC (k\$/$m^2$)')
plt.ylim(-5, 500)
plt.savefig('LCC_vs_WWT_Integ.png', dpi=400, bbox_inches='tight')
# SCC vs WWT (integrated)
plt.figure(figsize=(10,5))
DFName = 'CCHP+WWT'
DF = DFs[DFName].sort_values(by=24)
plt.scatter(x=DF[24], y=DF[27]/10**3, s=2)#, c=colors_rb[DFName])
plt.xlabel(r'WWT Type')
plt.xticks(rotation=75)
plt.ylabel(r'SCC (k\$/$m^2$)')
plt.ylim(-0.01, 0.1)
plt.savefig('SCC_vs_WWT_Integ.png', dpi=400, bbox_inches='tight')
# SCC vs WWT with LCC-oriented transparency (integrated)
plt.figure(figsize=(10,5))
DFName = 'CCHP+WWT'
DF = DFs[DFName].sort_values(by=24)
DF = DF[(DF[26]<=100) & (DF[27]<=0.5)]
print('number of indivs plotted: ', len(DF))
alphas = 1 - DF[26]/DF[26].max() # Normalized LCCs (lowest LCC: 1; highest LCC: 0)
# alphas = np.linspace(0.1, 1, len(DFs[DFName]))
rgba_colors = np.zeros((len(DF),4))
rgba_colors[:,3] = alphas
plt.scatter(x=DF[24],y=DF[27]/10**3,s=1, c=rgba_colors)
plt.xlabel(r'WWT Type')
plt.xticks(rotation=75)
plt.ylabel(r'SCC (k\$/$m^2$)')
plt.ylim(-0.01, 0.1)
plt.savefig('SCC_vs_WWT_Integ_colorCoded.png', dpi=400, bbox_inches='tight')
plt.close('all')
#############################################
'''
print('plotting building mix traces')
# Building Mix trace plots
DFName = 'CCHP+WWT'
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
Num_Individuals = len(DFs[DFName])
cm = plt.get_cmap('rainbow')
ax.set_prop_cycle(color=[cm(1.*i/Num_Individuals) for i in range(Num_Individuals)])#ax.set_color_cycle([cm(1.*i/Num_Individuals) for i in range(Num_Individuals)])
for i in range(Num_Individuals):
    ax.plot(['Res','Off','Com','Ind','Hos','Med','Edu'],
            DFs[DFName].iloc[i, 29:36], linewidth=0.2, alpha=0.5)
ax.set_xlabel('Building-Use')
ax.set_ylabel('Percent of Total GFA (%)')
plt.ylim(0, 100)
fig.savefig('Uses_Integ.png', dpi=400, bbox_inches='tight')
DFName = 'CCHP|CWWTP'
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
Num_Individuals = len(DFs[DFName])
cm = plt.get_cmap('rainbow')
ax.set_prop_cycle(color=[cm(1.*i/Num_Individuals) for i in range(Num_Individuals)])#ax.set_color_cycle([cm(1.*i/Num_Individuals) for i in range(Num_Individuals)])
y_array = np.array(DFs[DFName].iloc[:,29:36])
for i in range(Num_Individuals):
    ax.plot(['Res','Off','Com','Ind','Hos','Med','Edu'],
            DFs[DFName].iloc[i, 29:36], linewidth=0.2, alpha=0.5)
ax.set_xlabel('Building-Use')
ax.set_ylabel('Percent of Total GFA (%)')
plt.ylim(0, 100)
fig.savefig('Uses_Disinteg.png', dpi=400, bbox_inches='tight')
plt.close('all')
'''
# Baekjoon/Python/1110.py
n = int(input())
count = 1
_n = int(str(n % 10) + str((n // 10 + n % 10) % 10))
while _n != n:
    count += 1
    _n = int(str(_n % 10) + str((_n // 10 + _n % 10) % 10))
print(count)
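The loop above (Baekjoon 1110, "addition cycles") counts how many digit-addition steps it takes to return to the starting number. Wrapping the same logic in a function makes it testable without stdin; 26 → 68 → 84 → 42 → 26 gives a cycle length of 4, matching the problem's sample:

```python
def cycle_length(n):
    """Length of the cycle produced by the digit-addition rule of Baekjoon 1110."""
    count = 1
    _n = int(str(n % 10) + str((n // 10 + n % 10) % 10))
    while _n != n:
        count += 1
        _n = int(str(_n % 10) + str((_n // 10 + _n % 10) % 10))
    return count

assert cycle_length(26) == 4   # 26 -> 68 -> 84 -> 42 -> 26
assert cycle_length(55) == 3   # 55 -> 50 -> 5 -> 55
```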
# 2_from_go_to_python/trysum.py
from newmath import sum
print(sum(2, 40))
# gala/potential/potential/__init__.py
from .core import *
from .cpotential import *
from .ccompositepotential import *
from .builtin import *
from .io import *
from .util import *
#!flask/bin/python
# refresh.py
from app.views import update_all_sheets
update_all_sheets()
# util/test/tests/D3D12/D3D12_Buffer_Truncation.py
import rdtest
import renderdoc as rd
class D3D12_Buffer_Truncation(rdtest.Buffer_Truncation):
    demos_test_name = 'D3D12_Buffer_Truncation'
    internal = False
# ensembles.py
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from scipy.optimize import minimize_scalar
import time
class RandomForestMSE:
    def __init__(self, n_estimators, max_depth=None, feature_subsample_size=None,
                 **trees_parameters):
        """
        n_estimators : int
            The number of trees in the forest.
        max_depth : int
            The maximum depth of the tree. If None then there is no limit.
        feature_subsample_size : int
            The size of the feature set for each tree.
            If None, feature_subsample_size = n_features.
        """
        self.feature_subsample_size = feature_subsample_size
        self.trees = [DecisionTreeRegressor(max_depth=max_depth, **trees_parameters)
                      for i in range(n_estimators)]

    def fit(self, X, y):
        """
        X : numpy ndarray
            Array of size n_objects, n_features
        y : numpy ndarray
            Array of size n_objects
        """
        np.random.seed(42)
        self.feature_list = []
        if self.feature_subsample_size is None:
            self.feature_subsample_size = X.shape[1]
        start_time = time.time()
        for tree in self.trees:
            feature_indexes = self.get_feature_indexes(X.shape[1])
            sample_indexes = self.get_sample_indexes(X.shape[0])
            self.feature_list.append(feature_indexes)
            tree.fit(X[sample_indexes, :][:, feature_indexes], y[sample_indexes])
        return time.time() - start_time

    def predict(self, X):
        """
        X : numpy ndarray
            Array of size n_objects, n_features

        Returns
        -------
        y : numpy ndarray
            Array of size n_objects
        """
        predict_list = [tree.predict(X[:, self.feature_list[i]])
                        for i, tree in enumerate(self.trees)]
        return np.mean(predict_list, axis=0)

    def get_sample_indexes(self, size):
        """
        size : int
            The 0-dimension size of the matrix,
            <= X_train.shape[0]

        Returns
        -------
        random_indexes : numpy ndarray
            Array of object (row) indexes, drawn with replacement
        """
        return np.random.choice(size, np.random.randint(size // 24, size // 2), replace=True)

    def get_feature_indexes(self, size):
        """
        size : int
            The 1-dimension size of the matrix,
            X_train.shape[1]

        Returns
        -------
        random_indexes : numpy ndarray
            Array of size feature_subsample_size, drawn without replacement
        """
        return np.random.choice(size, self.feature_subsample_size, replace=False)
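The two index helpers above differ in one key way: row sampling draws a subsample *with* replacement (bootstrap-style, and here deliberately smaller than half the data), while feature sampling draws *without* replacement so no feature repeats within a tree. A standalone numpy check of exactly those properties, using the same `size // 24` and `size // 2` bounds as the class:

```python
import numpy as np

size = 48
np.random.seed(0)
rows = np.random.choice(size, np.random.randint(size // 24, size // 2), replace=True)
feats = np.random.choice(size, 10, replace=False)

assert rows.size < size // 2                 # row subsample is strictly smaller than half
assert np.unique(feats).size == feats.size   # no repeated features
assert rows.max() < size and feats.max() < size
```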
class GradientBoostingMSE:
    def __init__(self, n_estimators, learning_rate=0.1, max_depth=5, feature_subsample_size=None,
                 **trees_parameters):
        """
        n_estimators : int
            The number of trees in the forest.
        learning_rate : float
            F_m = F_m-1 + learning_rate * c_m * f_m
        max_depth : int
            The maximum depth of the tree. If None then there is no limit.
        feature_subsample_size : int
            The size of the feature set for each tree.
            If None, feature_subsample_size = n_features.
        """
        self.feature_subsample_size = feature_subsample_size
        self.learning_rate = learning_rate
        self.trees = [DecisionTreeRegressor(max_depth=max_depth, **trees_parameters)
                      for i in range(n_estimators)]

    def fit(self, X, y):
        """
        X : numpy ndarray
            Array of size n_objects, n_features
        y : numpy ndarray
            Array of size n_objects

        F_m = F_m_old + learning_rate * c_m * f_m
        """
        if self.feature_subsample_size is None:
            self.feature_subsample_size = X.shape[1]
        np.random.seed(42)
        F_m = 0
        self.coef = []
        self.feature_list = []
        start_time = time.time()
        for tree in self.trees:
            feature_indexes = self.get_feature_indexes(X.shape[1])
            sample_indexes = self.get_sample_indexes(X.shape[0])
            self.feature_list.append(feature_indexes)
            # Fit the new tree to the current residuals.
            tree.fit(X[sample_indexes, :][:, feature_indexes], (y - F_m)[sample_indexes])
            f_m = tree.predict(X[:, feature_indexes])
            # One-dimensional line search for the optimal step size c_m.
            best_coef = minimize_scalar(lambda c: self.mean_squared_error(y, F_m + c * f_m))
            self.coef.append(best_coef.x)
            F_m += self.learning_rate * best_coef.x * f_m
        return time.time() - start_time

    def predict(self, X):
        """
        X : numpy ndarray
            Array of size n_objects, n_features

        Returns
        -------
        y : numpy ndarray
            Array of size n_objects
        """
        out = 0
        for i, tree in enumerate(self.trees):
            out += self.learning_rate * self.coef[i] * tree.predict(X[:, self.feature_list[i]])
        return out

    def mean_squared_error(self, y, ens):
        return np.mean((y - ens) ** 2)

    def get_sample_indexes(self, size):
        """
        size : int
            The 0-dimension size of the matrix,
            <= X_train.shape[0]

        Returns
        -------
        random_indexes : numpy ndarray
            Array of object (row) indexes, drawn with replacement
        """
        return np.random.choice(size, np.random.randint(size // 24, size // 2), replace=True)

    def get_feature_indexes(self, size):
        """
        size : int
            The 1-dimension size of the matrix,
            X_train.shape[1]

        Returns
        -------
        random_indexes : numpy ndarray
            Array of size feature_subsample_size, drawn without replacement
        """
        return np.random.choice(size, self.feature_subsample_size, replace=False)
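For squared error the inner line search in `fit` has a closed form — the minimizer of mean((y − F_m − c·f_m)²) is c* = ⟨y − F_m, f_m⟩ / ⟨f_m, f_m⟩ — so `minimize_scalar` is a convenience rather than a necessity. A numpy sanity check on synthetic data, comparing the closed form against a dense grid search:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=200)
F_m = rng.normal(size=200)   # stand-in for the current ensemble prediction
f_m = rng.normal(size=200)   # stand-in for the new tree's prediction

r = y - F_m                  # residuals the step size is fit against
c_closed = r.dot(f_m) / f_m.dot(f_m)

grid = np.linspace(-5, 5, 10001)
losses = ((r[None, :] - grid[:, None] * f_m[None, :]) ** 2).mean(axis=1)
c_grid = grid[losses.argmin()]

assert abs(c_closed - c_grid) < 1e-3
```

Using the closed form would remove the scipy dependency from this step; the class as written keeps `minimize_scalar`, which generalizes to other losses.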
54068a33362fbeb2e13a3098d9ee4c07057cfbe6 | 287 | py | Python | platform/hwconf_data/mgm13/modules/WTIMER0/__init__.py | lenloe1/v2.7 | 9ac9c4a7bb37987af382c80647f42d84db5f2e1d | [
"Zlib"
] | null | null | null | platform/hwconf_data/mgm13/modules/WTIMER0/__init__.py | lenloe1/v2.7 | 9ac9c4a7bb37987af382c80647f42d84db5f2e1d | [
"Zlib"
] | 1 | 2020-08-25T02:36:22.000Z | 2020-08-25T02:36:22.000Z | platform/hwconf_data/mgm13/modules/WTIMER0/__init__.py | lenloe1/v2.7 | 9ac9c4a7bb37987af382c80647f42d84db5f2e1d | [
"Zlib"
] | 1 | 2020-08-25T01:56:04.000Z | 2020-08-25T01:56:04.000Z | import mgm13.halconfig.halconfig_types as halconfig_types
import mgm13.halconfig.halconfig_dependency as halconfig_dependency
import mgm13.PythonSnippet.ExporterModel as ExporterModel
import mgm13.PythonSnippet.RuntimeModel as RuntimeModel
import mgm13.PythonSnippet.Metadata as Metadata | 57.4 | 67 | 0.898955 | 34 | 287 | 7.470588 | 0.294118 | 0.216535 | 0.283465 | 0.228346 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037313 | 0.066202 | 287 | 5 | 68 | 57.4 | 0.910448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
541b2efca9b87b6577aff73178340c2d042e526d | 2,721 | py | Python | Petal-Width.py | darrenkeenan/Iris-Flower-Data-Set | 7c6e58d7fb2da04f3abc420b0e7bcd4330c8052e | [
"Apache-2.0"
] | null | null | null | Petal-Width.py | darrenkeenan/Iris-Flower-Data-Set | 7c6e58d7fb2da04f3abc420b0e7bcd4330c8052e | [
"Apache-2.0"
] | null | null | null | Petal-Width.py | darrenkeenan/Iris-Flower-Data-Set | 7c6e58d7fb2da04f3abc420b0e7bcd4330c8052e | [
"Apache-2.0"
] | null | null | null | # Calculating min & max petal width per species of iris data set
# Darren Keenan - finalized 2018-04-29
columnname = "Petal Width", "Species/Class" # naming two columns
print(columnname) # printing named columns
with open("data/iris.csv") as iris: # opening file iris.csv saved on computer
for cmandname in iris:
i = cmandname.split(",") # aligning data
print(i[3],i[4]) # listing column 4 in file, petal width & column 5, spcies/class of iri
listset = (0.2,0.2,0.2,0.2,0.2,0.4,0.3,0.2,0.2,0.1,0.2,0.2,0.1,0.1,0.2,0.4,0.4,0.3,0.3,0.3,0.2,0.4,0.2,0.5,0.2,0.2,0.4,0.2,0.2,0.2,0.2,0.4,0.1,0.2,0.1,0.2,0.2,0.1,0.2,0.2,0.3,0.3,0.2,0.6,0.4,0.3,0.2,0.2,0.2,0.2)
print("Min Iris-Setosa Petal Width is:", min(listset)) # printing minimum value of listset
# Min Iris-Setosa Petal Width is: 0.1
listset = (0.2,0.2,0.2,0.2,0.2,0.4,0.3,0.2,0.2,0.1,0.2,0.2,0.1,0.1,0.2,0.4,0.4,0.3,0.3,0.3,0.2,0.4,0.2,0.5,0.2,0.2,0.4,0.2,0.2,0.2,0.2,0.4,0.1,0.2,0.1,0.2,0.2,0.1,0.2,0.2,0.3,0.3,0.2,0.6,0.4,0.3,0.2,0.2,0.2,0.2)
print("Max Iris-Setosa Petal Width is:", max(listset)) # printing maximum value of listset
# Max Iris-Setosa Petal Width is: 0.6
listset = (1.4,1.5,1.5,1.3,1.5,1.3,1.6,1,1.3,1.4,1,1.5,1,1.4,1.3,1.4,1.5,1,1.5,1.1,1.8,1.3,1.5,1.2,1.3,1.4,1.4,1.7,1.5,1,1.1,1,1.2,1.6,1.5,1.6,1.5,1.3,1.3,1.3,1.2,1.4,1.2,1,1.3,1.2,1.3,1.3,1.1,1.3)
print("Min Iris-Versicolor Petal Width is:", min(listset)) # printing minimum value of listset
# Min Iris-Versicolor Petal Width is: 1
listset = (1.4,1.5,1.5,1.3,1.5,1.3,1.6,1,1.3,1.4,1,1.5,1,1.4,1.3,1.4,1.5,1,1.5,1.1,1.8,1.3,1.5,1.2,1.3,1.4,1.4,1.7,1.5,1,1.1,1,1.2,1.6,1.5,1.6,1.5,1.3,1.3,1.3,1.2,1.4,1.2,1,1.3,1.2,1.3,1.3,1.1,1.3)
print("Max Iris-Versicolor Petal Width is:", max(listset)) # printing maximum value of listset
# Max Iris-Versicolor Petal Width is: 1.8
listset = (2.5,1.9,2.1,1.8,2.2,2.1,1.7,1.8,1.8,2.5,2,1.9,2.1,2,2.4,2.3,1.8,2.2,2.3,1.5,2.3,2,2,1.8,2.1,1.8,1.8,1.8,2.1,1.6,1.9,2,2.2,1.5,1.4,2.3,2.4,1.8,1.8,2.1,2.4,2.3,1.9,2.3,2.5,2.3,1.9,2,2.3,1.8)
print("Min Iris-Virginica Petal Width is:", min(listset)) # printing minimum value of listset
# Min Iris-Virginica Petal Width is: 1.4
listset = (2.5,1.9,2.1,1.8,2.2,2.1,1.7,1.8,1.8,2.5,2,1.9,2.1,2,2.4,2.3,1.8,2.2,2.3,1.5,2.3,2,2,1.8,2.1,1.8,1.8,1.8,2.1,1.6,1.9,2,2.2,1.5,1.4,2.3,2.4,1.8,1.8,2.1,2.4,2.3,1.9,2.3,2.5,2.3,1.9,2,2.3,1.8)
print("Max Iris-Virginica Petal Width is:", max(listset)) # printing maximum value of listset
# Max Iris-Virginica Petal Width is: 2.5
| 77.742857 | 211 | 0.568173 | 782 | 2,721 | 1.976982 | 0.079284 | 0.072445 | 0.104787 | 0.07762 | 0.783959 | 0.783959 | 0.64295 | 0.64295 | 0.64295 | 0.64295 | 0 | 0.263926 | 0.168688 | 2,721 | 34 | 212 | 80.029412 | 0.41954 | 0.257626 | 0 | 0.333333 | 0 | 0 | 0.119 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.444444 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
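The Petal-Width.py script above takes its min/max from hand-copied value lists; the same results can be computed straight from the CSV. A minimal sketch of that approach, with a small inline sample standing in for `data/iris.csv` (the real file has 150 rows; column 4 is petal width, column 5 is species/class):

```python
import csv
import io
from collections import defaultdict

# Inline sample standing in for data/iris.csv.
SAMPLE = """5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.1,Iris-setosa
7.0,3.2,4.7,1.4,Iris-versicolor
5.9,3.0,4.2,1.8,Iris-versicolor
6.3,3.3,6.0,2.5,Iris-virginica
5.8,2.7,5.1,1.4,Iris-virginica"""

# Group petal widths (column index 3) by species/class (column index 4).
widths = defaultdict(list)
for row in csv.reader(io.StringIO(SAMPLE)):
    widths[row[4]].append(float(row[3]))

for species, values in sorted(widths.items()):
    print(species, "min:", min(values), "max:", max(values))
```

This avoids the copy-paste errors the hard-coded lists invite and extends to any number of species without new code.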
5422825db49084c9c805b86d89e9a090c5b5e953 | 1,073 | py | Python | tests/resources/test_commissions.py | cuencandres/cuenca-python | eb5047bc7a990890316413abf04499facef3c8e7 | [
"MIT"
] | null | null | null | tests/resources/test_commissions.py | cuencandres/cuenca-python | eb5047bc7a990890316413abf04499facef3c8e7 | [
"MIT"
] | null | null | null | tests/resources/test_commissions.py | cuencandres/cuenca-python | eb5047bc7a990890316413abf04499facef3c8e7 | [
"MIT"
] | 1 | 2020-10-16T20:18:15.000Z | 2020-10-16T20:18:15.000Z | import pytest
from cuenca import Commission, Deposit, Transfer
@pytest.mark.vcr
def test_commission_retrieve():
id_commission = 'COXXX'
commission: Commission = Commission.retrieve(id_commission)
assert commission.id == id_commission
assert not commission.related_transaction
@pytest.mark.vcr
def test_commission_retrieve_with_cash_deposit():
id_commission = 'COXXX'
commission: Commission = Commission.retrieve(id_commission)
assert commission.id == id_commission
related_transaction = commission.related_transaction
assert related_transaction
assert type(related_transaction) == Deposit
assert related_transaction.network == 'cash'
@pytest.mark.vcr
def test_commission_retrieve_with_cash_transfer():
id_commission = 'COXXX'
commission: Commission = Commission.retrieve(id_commission)
assert commission.id == id_commission
related_transaction = commission.related_transaction
assert related_transaction
assert type(related_transaction) == Transfer
assert related_transaction.network == 'spei'
| 31.558824 | 63 | 0.78192 | 118 | 1,073 | 6.838983 | 0.194915 | 0.245353 | 0.173482 | 0.148699 | 0.76456 | 0.76456 | 0.76456 | 0.717472 | 0.717472 | 0.60347 | 0 | 0 | 0.147251 | 1,073 | 33 | 64 | 32.515152 | 0.881967 | 0 | 0 | 0.615385 | 0 | 0 | 0.021435 | 0 | 0 | 0 | 0 | 0 | 0.384615 | 1 | 0.115385 | false | 0 | 0.076923 | 0 | 0.192308 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
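The tests above check classes with `type(related_transaction) == Deposit`, which rejects subclasses; `isinstance` is the more common idiom when any subclass would also be acceptable. A small illustration of the difference, using hypothetical stand-in classes rather than the real cuenca resources:

```python
class Transaction:
    """Hypothetical base class standing in for a resource type."""

class Deposit(Transaction):
    """Hypothetical subclass."""

d = Deposit()

# type() equality matches only the exact class:
assert type(d) == Deposit
assert type(d) != Transaction

# isinstance() also accepts subclasses, which is usually the intent
# when asserting "this is some kind of Transaction":
assert isinstance(d, Deposit)
assert isinstance(d, Transaction)
```

The strict `type(...) ==` form is the right choice only when the test deliberately distinguishes a `Deposit` from a `Transfer` and must not accept subclasses.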
5876e96238ad1db0337dc2c91bb3150f8b9a1eef | 12,974 | py | Python | tests/changes/models/test_job.py | vault-the/changes | 37e23c3141b75e4785cf398d015e3dbca41bdd56 | [
"Apache-2.0"
] | 443 | 2015-01-03T16:28:39.000Z | 2021-04-26T16:39:46.000Z | tests/changes/models/test_job.py | vault-the/changes | 37e23c3141b75e4785cf398d015e3dbca41bdd56 | [
"Apache-2.0"
] | 12 | 2015-07-30T19:07:16.000Z | 2016-11-07T23:11:21.000Z | tests/changes/models/test_job.py | vault-the/changes | 37e23c3141b75e4785cf398d015e3dbca41bdd56 | [
"Apache-2.0"
] | 47 | 2015-01-09T10:04:00.000Z | 2020-11-18T17:58:19.000Z | from __future__ import absolute_import
import mock
from flask import current_app
from changes.models.command import CommandType
from changes.models.jobplan import JobPlan
from changes.testutils import TestCase
from changes.vcs.base import Vcs
class AutogeneratedJobTest(TestCase):
def _create_job_and_jobplan(self):
current_app.config['APT_SPEC'] = 'deb http://example.com/debian distribution component1'
current_app.config['ENCAP_RSYNC_URL'] = 'rsync://example.com/encap/'
current_app.config['BAZEL_APT_PKGS'] = ['bazel']
current_app.config['BAZEL_ROOT_PATH'] = '/bazel/root/path'
project = self.create_project()
plan = self.create_plan(project)
option = self.create_option(
item_id=plan.id,
name='bazel.autogenerate',
value='1',
)
build = self.create_build(project)
job = self.create_job(build)
jobplan = self.create_job_plan(job, plan)
return job
@mock.patch('changes.models.project.Project.get_config')
def test_autogenerated_commands(self, get_config):
get_config.return_value = {
'bazel.additional-test-flags': [],
'bazel.targets': [
'//aa/bb/cc/...',
'//aa/abc/...',
],
'bazel.dependencies': {
'encap': [
'package1',
'pkg-2',
]
},
'bazel.exclude-tags': [],
'bazel.cpus': 4, # Default
'bazel.mem': 8192, # Default
'bazel.max-executors': 1, # Default
}
mock_vcs = mock.Mock(spec=Vcs)
mock_vcs.get_buildstep_checkout_revision.return_value = 'git checkout master'
mock_vcs.get_buildstep_checkout_parent_revision.return_value = 'git checkout master^'
mock_vcs.get_buildstep_changed_files.return_value = 'git diff --name-only master^..master'
job = self._create_job_and_jobplan()
with mock.patch.object(job.project.repository, "get_vcs") as mock_get_vcs:
mock_get_vcs.return_value = mock_vcs
_, implementation = JobPlan.get_build_step_for_job(job.id)
bazel_setup_expected = """#!/bin/bash -eux
sudo apt-get install -y --force-yes bazel
""".strip()
sync_encap_expected = """
sudo mkdir -p /usr/local/encap/
sudo /usr/bin/rsync -a --delete rsync://example.com/encap/package1 /usr/local/encap/
sudo /usr/bin/rsync -a --delete rsync://example.com/encap/pkg-2 /usr/local/encap/
""".strip()
collect_targets_expected = """#!/bin/bash -eu
sudo apt-get install -y --force-yes bazel python >/dev/null 2>&1
"/var/changes/input/collect-targets" --output-user-root="/bazel/root/path" --target-patterns=//aa/bb/cc/... --target-patterns=//aa/abc/... --test-flags=--spawn_strategy=sandboxed --test-flags=--genrule_strategy=sandboxed --test-flags=--keep_going --jobs="8" --selective-testing-skip-list={} 2> /dev/null
""".strip().format(job.project.get_config_path())
extra_setup_expected = """#!/bin/bash -eux
exit 0
""".strip()
assert len(implementation.commands) == 4
assert implementation.max_executors == 1
assert implementation.artifacts == []
assert implementation.artifact_suffix == '.bazel'
assert implementation.commands[0].type == CommandType.setup
assert implementation.commands[0].script == bazel_setup_expected
assert implementation.commands[1].type == CommandType.setup
assert implementation.commands[1].script == sync_encap_expected
assert implementation.commands[2].type == CommandType.setup
assert implementation.commands[2].script == extra_setup_expected
assert implementation.commands[3].type == CommandType.collect_bazel_targets
assert implementation.commands[3].script == collect_targets_expected
assert implementation.commands[3].env['VCS_CHECKOUT_TARGET_REVISION_CMD'] == 'git checkout master'
assert implementation.commands[3].env['VCS_CHECKOUT_PARENT_REVISION_CMD'] == 'git checkout master^'
assert implementation.commands[3].env['VCS_GET_CHANGED_FILES_CMD'] == 'git diff --name-only master^..master'
@mock.patch('changes.models.project.Project.get_config')
def test_autogenerated_commands_with_exclusions(self, get_config):
get_config.return_value = {
'bazel.additional-test-flags': [],
'bazel.targets': [
'//foo/bar/baz/...',
'//bar/bax/...',
],
'bazel.exclude-tags': [
'flaky',
'another_tag',
],
'bazel.cpus': 2,
'bazel.mem': 1234,
'bazel.max-executors': 3,
}
mock_vcs = mock.Mock(spec=Vcs)
mock_vcs.get_buildstep_checkout_revision.return_value = 'git checkout master'
mock_vcs.get_buildstep_checkout_parent_revision.return_value = 'git checkout master^'
mock_vcs.get_buildstep_changed_files.return_value = 'git diff --name-only master^..master'
job = self._create_job_and_jobplan()
with mock.patch.object(job.project.repository, "get_vcs") as mock_get_vcs:
mock_get_vcs.return_value = mock_vcs
_, implementation = JobPlan.get_build_step_for_job(job.id)
collect_tests_expected = """#!/bin/bash -eu
sudo apt-get install -y --force-yes bazel python >/dev/null 2>&1
"/var/changes/input/collect-targets" --output-user-root="/bazel/root/path" --target-patterns=//foo/bar/baz/... --target-patterns=//bar/bax/... --exclude-tags=flaky --exclude-tags=another_tag --test-flags=--spawn_strategy=sandboxed --test-flags=--genrule_strategy=sandboxed --test-flags=--keep_going --jobs="4" --selective-testing-skip-list={} 2> /dev/null
""".strip().format(job.project.get_config_path())
assert implementation.max_executors == 3
assert implementation.artifacts == []
assert implementation.artifact_suffix == '.bazel'
assert implementation.resources['cpus'] == 2
assert implementation.resources['mem'] == 1234
assert len(implementation.commands) == 4
assert implementation.commands[0].type == CommandType.setup
assert implementation.commands[1].type == CommandType.setup
assert implementation.commands[2].type == CommandType.setup
assert implementation.commands[3].type == CommandType.collect_bazel_targets
assert implementation.commands[3].script == collect_tests_expected
@mock.patch('changes.models.project.Project.get_config')
def test_autogenerated_commands_with_additional_test_flags(self, get_config):
get_config.return_value = {
'bazel.additional-test-flags': ['--test_env=testing=123', '--test_env=testing=123'],
'bazel.targets': [
'//foo/bar/baz/...',
'//bar/bax/...',
],
'bazel.exclude-tags': [],
'bazel.cpus': 2,
'bazel.mem': 1234,
'bazel.max-executors': 3,
}
mock_vcs = mock.Mock(spec=Vcs)
mock_vcs.get_buildstep_checkout_revision.return_value = 'git checkout master'
mock_vcs.get_buildstep_checkout_parent_revision.return_value = 'git checkout master^'
mock_vcs.get_buildstep_changed_files.return_value = 'git diff --name-only master^..master'
job = self._create_job_and_jobplan()
with mock.patch.object(job.project.repository, "get_vcs") as mock_get_vcs:
mock_get_vcs.return_value = mock_vcs
_, implementation = JobPlan.get_build_step_for_job(job.id)
collect_tests_expected = """#!/bin/bash -eu
sudo apt-get install -y --force-yes bazel python >/dev/null 2>&1
"/var/changes/input/collect-targets" --output-user-root="/bazel/root/path" --target-patterns=//foo/bar/baz/... --target-patterns=//bar/bax/... --test-flags=--spawn_strategy=sandboxed --test-flags=--genrule_strategy=sandboxed --test-flags=--keep_going --test-flags=--test_env=testing=123 --jobs="4" --selective-testing-skip-list={} 2> /dev/null
""".strip().format(job.project.get_config_path())
assert implementation.max_executors == 3
assert implementation.artifacts == []
assert implementation.artifact_suffix == '.bazel'
assert implementation.resources['cpus'] == 2
assert implementation.resources['mem'] == 1234
assert len(implementation.commands) == 4
assert implementation.commands[0].type == CommandType.setup
assert implementation.commands[1].type == CommandType.setup
assert implementation.commands[2].type == CommandType.setup
assert implementation.commands[3].type == CommandType.collect_bazel_targets
assert implementation.commands[3].script == collect_tests_expected
@mock.patch('changes.models.project.Project.get_config')
def test_autogenerated_commands_with_additional_test_flags_invalid(self, get_config):
get_config.return_value = {
'bazel.additional-test-flags': ['--keep_going'], # not in whitelist
'bazel.targets': [
'//foo/bar/baz/...',
'//bar/bax/...',
],
'bazel.exclude-tags': [],
'bazel.cpus': 2,
'bazel.mem': 1234,
'bazel.max-executors': 3,
}
_, implementation = JobPlan.get_build_step_for_job(self._create_job_and_jobplan().id)
assert implementation is None
@mock.patch('changes.models.project.Project.get_config')
def test_invalid_cpus(self, get_config):
get_config.return_value = {
'bazel.additional-test-flags': [],
'bazel.targets': [
'//aa/bb/cc/...',
'//aa/abc/...',
],
'bazel.exclude-tags': [],
'bazel.cpus': 0, # 0 CPUs is not valid
'bazel.mem': 8192,
'bazel.max-executors': 1,
}
_, implementation = JobPlan.get_build_step_for_job(self._create_job_and_jobplan().id)
assert implementation is None
get_config.return_value = {
'bazel.targets': [
'//aa/bb/cc/...',
'//aa/abc/...',
],
'bazel.exclude-tags': [],
'bazel.cpus': 2, # Too many
'bazel.mem': 8192,
'bazel.max-executors': 1,
}
current_app.config['MAX_CPUS_PER_EXECUTOR'] = 1
_, implementation = JobPlan.get_build_step_for_job(self._create_job_and_jobplan().id)
assert implementation is None
@mock.patch('changes.models.project.Project.get_config')
def test_invalid_mems(self, get_config):
get_config.return_value = {
'bazel.additional-test-flags': [],
'bazel.targets': [
'//aa/bb/cc/...',
'//aa/abc/...',
],
'bazel.exclude-tags': [],
'bazel.cpus': 1,
'bazel.mem': 1025, # Too high
'bazel.max-executors': 1,
}
current_app.config['MIN_MEM_MB_PER_EXECUTOR'] = 1
current_app.config['MAX_MEM_MB_PER_EXECUTOR'] = 10
_, implementation = JobPlan.get_build_step_for_job(self._create_job_and_jobplan().id)
assert implementation is None
get_config.return_value = {
'bazel.targets': [
'//aa/bb/cc/...',
'//aa/abc/...',
],
'bazel.exclude-tags': [],
'bazel.cpus': 1,
'bazel.mem': 1025, # Too low
'bazel.max-executors': 1,
}
current_app.config['MIN_MEM_MB_PER_EXECUTOR'] = 2000
current_app.config['MAX_MEM_MB_PER_EXECUTOR'] = 3000
_, implementation = JobPlan.get_build_step_for_job(self._create_job_and_jobplan().id)
assert implementation is None
@mock.patch('changes.models.project.Project.get_config')
def test_invalid_num_executors(self, get_config):
get_config.return_value = {
'bazel.additional-test-flags': [],
'bazel.targets': [
'//aa/bb/cc/...',
'//aa/abc/...',
],
'bazel.exclude-tags': [],
'bazel.cpus': 1,
'bazel.mem': 1234,
'bazel.max-executors': 0, # invalid
}
_, implementation = JobPlan.get_build_step_for_job(self._create_job_and_jobplan().id)
assert implementation is None
get_config.return_value = {
'bazel.targets': [
'//aa/bb/cc/...',
'//aa/abc/...',
],
'bazel.exclude-tags': [],
'bazel.cpus': 1,
'bazel.mem': 1234,
'bazel.max-executors': 11, # too high
}
_, implementation = JobPlan.get_build_step_for_job(self._create_job_and_jobplan().id)
assert implementation is None
| 41.318471 | 355 | 0.615076 | 1,481 | 12,974 | 5.155976 | 0.123565 | 0.107386 | 0.077004 | 0.02737 | 0.849005 | 0.83473 | 0.825432 | 0.80186 | 0.788633 | 0.788633 | 0 | 0.014239 | 0.247572 | 12,974 | 313 | 356 | 41.450479 | 0.767978 | 0.007939 | 0 | 0.684825 | 0 | 0.019455 | 0.296034 | 0.132037 | 0 | 0 | 0 | 0 | 0.171206 | 1 | 0.031128 | false | 0 | 0.027237 | 0 | 0.066148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
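The test_job.py tests above lean on `mock.patch.object` to swap a repository's `get_vcs` method for a stub inside a `with` block. A minimal self-contained sketch of that pattern (the `Repository` class here is a hypothetical stand-in, not the real changes model):

```python
from unittest import mock

class Repository:
    """Hypothetical stand-in for the project's repository object."""
    def get_vcs(self):
        raise RuntimeError("would contact a real VCS")

repo = Repository()

fake_vcs = mock.Mock()
fake_vcs.get_buildstep_checkout_revision.return_value = "git checkout master"

# Inside the with-block, get_vcs is replaced by a MagicMock whose
# return value we control; on exit the original method is restored.
with mock.patch.object(repo, "get_vcs") as get_vcs:
    get_vcs.return_value = fake_vcs
    vcs = repo.get_vcs()
    assert vcs.get_buildstep_checkout_revision() == "git checkout master"
```

Using `mock.Mock(spec=Vcs)` as the tests do goes one step further: attribute access on the stub is then restricted to names the real `Vcs` class defines, so typos in the test fail loudly.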
5441a86abda37b4c280e4971dd9a785b653637fd | 33 | py | Python | test3.py | yaojinsong9998/renamecode | 5f06cd828c3d5af4a075a7ef71dd55516063b8dc | [
"MIT"
] | null | null | null | test3.py | yaojinsong9998/renamecode | 5f06cd828c3d5af4a075a7ef71dd55516063b8dc | [
"MIT"
] | null | null | null | test3.py | yaojinsong9998/renamecode | 5f06cd828c3d5af4a075a7ef71dd55516063b8dc | [
"MIT"
] | null | null | null | s = "13232 ** "
print (s) | 16.5 | 23 | 0.363636 | 4 | 33 | 3 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.263158 | 0.424242 | 33 | 2 | 24 | 16.5 | 0.368421 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
54c16f5675af51b36cd759aff1e36edd03f25437 | 42 | py | Python | pkg/slack/__init__.py | ks6088ts/sandbox-cloud-python | 1d4c3b768dd1c6a4130ae8d61c56dab5ebb8f2b4 | [
"MIT"
] | null | null | null | pkg/slack/__init__.py | ks6088ts/sandbox-cloud-python | 1d4c3b768dd1c6a4130ae8d61c56dab5ebb8f2b4 | [
"MIT"
] | 13 | 2021-07-23T22:33:02.000Z | 2021-08-09T06:30:09.000Z | pkg/slack/__init__.py | ks6088ts/sandbox-cloud-python | 1d4c3b768dd1c6a4130ae8d61c56dab5ebb8f2b4 | [
"MIT"
] | null | null | null | """pkg.slack"""
from .apis import Handler
| 14 | 25 | 0.690476 | 6 | 42 | 4.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 42 | 2 | 26 | 21 | 0.783784 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
49c83561dbfcfcb48541940317db15c18b7cf7c0 | 203 | py | Python | src/utils/pythonSrc/watchFaceParser/models/parameterFlags.py | chm-dev/amazfitGTSwatchfaceBundle | 4cb04be5215de16628418e9b38152a35d5372d3e | [
"MIT"
] | 49 | 2019-12-18T11:24:56.000Z | 2022-03-28T09:56:27.000Z | src/utils/pythonSrc/watchFaceParser/models/parameterFlags.py | chm-dev/amazfitGTSwatchfaceBundle | 4cb04be5215de16628418e9b38152a35d5372d3e | [
"MIT"
] | 6 | 2020-01-08T21:31:15.000Z | 2022-03-25T19:13:26.000Z | src/utils/pythonSrc/watchFaceParser/models/parameterFlags.py | chm-dev/amazfitGTSwatchfaceBundle | 4cb04be5215de16628418e9b38152a35d5372d3e | [
"MIT"
] | 6 | 2019-12-27T17:30:24.000Z | 2021-09-30T08:11:01.000Z | class ParameterFlags:
Unknown = 1
hasChildren = 2
Unknown2 = 4
def __init__(self, flag):
self._flag = flag
def hasFlag(self, flag):
return (self._flag & flag) != 0
| 16.916667 | 39 | 0.581281 | 24 | 203 | 4.666667 | 0.625 | 0.285714 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036232 | 0.320197 | 203 | 11 | 40 | 18.454545 | 0.775362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0.125 | 0.875 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
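The hand-rolled bit-flag class above can also be expressed with the standard library's `enum.IntFlag`, which provides the same bitwise semantics without a custom `hasFlag` helper. A sketch under that assumption:

```python
import enum

class ParameterFlags(enum.IntFlag):
    Unknown = 1
    hasChildren = 2
    Unknown2 = 4

flags = ParameterFlags(1 | 2)

# Bitwise tests read the same as the hand-rolled hasFlag():
assert flags & ParameterFlags.hasChildren
assert not (flags & ParameterFlags.Unknown2)
```

`IntFlag` members still behave as ints, so they interoperate with code that passes raw flag values like the parser above.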
3fbece05dbf8163776aa07eb81b6f43c028b6ef4 | 28 | py | Python | app/settings_test.py | fatboystring/csv_generator | 881b41d93aeb496ea6a25c687db3eab3e0eae571 | [
"MIT"
] | 3 | 2016-03-22T19:37:00.000Z | 2018-05-22T22:14:36.000Z | app/settings_test.py | fatboystring/csv_generator | 881b41d93aeb496ea6a25c687db3eab3e0eae571 | [
"MIT"
] | 2 | 2016-08-15T10:37:29.000Z | 2016-11-24T12:37:40.000Z | app/settings_test.py | fatboystring/csv_generator | 881b41d93aeb496ea6a25c687db3eab3e0eae571 | [
"MIT"
] | 1 | 2021-06-28T10:30:37.000Z | 2021-06-28T10:30:37.000Z | from app.settings import *
| 9.333333 | 26 | 0.75 | 4 | 28 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 28 | 2 | 27 | 14 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3fe57b420fdbb1de35f454769c9d79b5ea8d808b | 148 | py | Python | zvt/domain/quotes/index/__init__.py | stone64/zvt | 19360b3f29992bc759709adfa90e32843147a807 | [
"MIT"
] | 2 | 2020-09-04T03:24:03.000Z | 2020-11-27T20:57:55.000Z | zvt/domain/quotes/index/__init__.py | stone64/zvt | 19360b3f29992bc759709adfa90e32843147a807 | [
"MIT"
] | null | null | null | zvt/domain/quotes/index/__init__.py | stone64/zvt | 19360b3f29992bc759709adfa90e32843147a807 | [
"MIT"
] | 1 | 2021-01-24T15:44:53.000Z | 2021-01-24T15:44:53.000Z | # -*- coding: utf-8 -*-
# this file is generated by gen_kdata_schema function, dont't change it
from zvt.domain.quotes.index.index_1d_kdata import * | 49.333333 | 71 | 0.756757 | 25 | 148 | 4.32 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015504 | 0.128378 | 148 | 3 | 72 | 49.333333 | 0.821705 | 0.614865 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b7df5b14228574c8d72899c1ce635766c23a8c67 | 47 | py | Python | test03.py | kangwonlee/test-reposetman-fetch-and-reset-00 | 19db5b4a604bafadc2e77b024a6443f312a77de6 | [
"BSD-3-Clause"
] | null | null | null | test03.py | kangwonlee/test-reposetman-fetch-and-reset-00 | 19db5b4a604bafadc2e77b024a6443f312a77de6 | [
"BSD-3-Clause"
] | null | null | null | test03.py | kangwonlee/test-reposetman-fetch-and-reset-00 | 19db5b4a604bafadc2e77b024a6443f312a77de6 | [
"BSD-3-Clause"
] | null | null | null | print('this was supposed to be another test')
| 15.666667 | 45 | 0.744681 | 8 | 47 | 4.375 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 2 | 46 | 23.5 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
4d0d2fc1bc3661f95faefcd17accbdd9604f0802 | 29 | py | Python | fountain/data/__init__.py | oaao/fountain-of-knuth | 626be5ef98370e59c273a93a10614a4438e1fb22 | [
"MIT"
] | null | null | null | fountain/data/__init__.py | oaao/fountain-of-knuth | 626be5ef98370e59c273a93a10614a4438e1fb22 | [
"MIT"
] | null | null | null | fountain/data/__init__.py | oaao/fountain-of-knuth | 626be5ef98370e59c273a93a10614a4438e1fb22 | [
"MIT"
] | null | null | null | from . import mayzner, norvig | 29 | 29 | 0.793103 | 4 | 29 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4d1aa1f0a57826763663cfec607c67bef2493c08 | 535 | py | Python | picqer_client_python/resources/shipping_provider.py | jordyvanraalte/piqcer-client-python | 494e0428bacb0089850984fb9be9ef247d24f5cb | [
"MIT"
] | 3 | 2021-01-05T05:10:52.000Z | 2021-01-06T17:54:22.000Z | picqer_client_python/resources/shipping_provider.py | jordyvanraalte/piqcer-client-python | 494e0428bacb0089850984fb9be9ef247d24f5cb | [
"MIT"
] | null | null | null | picqer_client_python/resources/shipping_provider.py | jordyvanraalte/piqcer-client-python | 494e0428bacb0089850984fb9be9ef247d24f5cb | [
"MIT"
] | null | null | null | from ..resources.resource import Resource
class ShippingProviders(Resource):
def __init__(self):
super().__init__("stockhistory")
def get(self, id):
raise NotImplementedError("Not possible to post a warehouse")
def post(self, object):
raise NotImplementedError("Not possible to post a warehouse")
def put(self, id, object):
raise NotImplementedError("Not possible to post a warehouse")
def delete(self, id):
raise NotImplementedError("Not possible to post a warehouse") | 29.722222 | 69 | 0.693458 | 63 | 535 | 5.761905 | 0.380952 | 0.264463 | 0.297521 | 0.385675 | 0.652893 | 0.652893 | 0.652893 | 0.652893 | 0.652893 | 0.644628 | 0 | 0 | 0.214953 | 535 | 18 | 70 | 29.722222 | 0.864286 | 0 | 0 | 0.333333 | 0 | 0 | 0.261194 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.416667 | false | 0 | 0.083333 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
4d6cc884ac90cd3ebdc5375075a845a6dbd9ece7 | 27,925 | py | Python | libcloud/test/dns/test_durabledns.py | atsaki/libcloud | ae85479e835494e196e2f6e79aae9a475603d8ac | [
"Apache-2.0"
] | 3 | 2016-06-03T03:40:18.000Z | 2018-09-24T05:28:47.000Z | libcloud/test/dns/test_durabledns.py | atsaki/libcloud | ae85479e835494e196e2f6e79aae9a475603d8ac | [
"Apache-2.0"
] | 1 | 2015-10-26T21:29:56.000Z | 2015-10-27T17:29:20.000Z | libcloud/test/dns/test_durabledns.py | atsaki/libcloud | ae85479e835494e196e2f6e79aae9a475603d8ac | [
"Apache-2.0"
] | 2 | 2018-09-24T05:28:42.000Z | 2020-12-31T05:11:04.000Z | # Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
import sys
import unittest
from mock import MagicMock
from libcloud.dns.base import Record, Zone
from libcloud.dns.types import RecordType
from libcloud.dns.types import ZoneDoesNotExistError, ZoneAlreadyExistsError
from libcloud.dns.types import RecordDoesNotExistError
from libcloud.test import LibcloudTestCase, MockHttpTestCase
from libcloud.test.file_fixtures import DNSFileFixtures
from libcloud.test.secrets import DNS_PARAMS_DURABLEDNS
from libcloud.utils.py3 import httplib
from libcloud.dns.drivers.durabledns import DurableDNSDriver
from libcloud.dns.drivers.durabledns import ZONE_EXTRA_PARAMS_DEFAULT_VALUES
from libcloud.dns.drivers.durabledns import DEFAULT_TTL
from libcloud.dns.drivers.durabledns import RECORD_EXTRA_PARAMS_DEFAULT_VALUES
class DurableDNSTests(LibcloudTestCase):
def setUp(self):
DurableDNSDriver.connectionCls.conn_classes = \
(None, DurableDNSMockHttp)
DurableDNSMockHttp.type = None
self.driver = DurableDNSDriver(*DNS_PARAMS_DURABLEDNS)
def assertHasKeys(self, dictionary, keys):
for key in keys:
self.assertTrue(key in dictionary, 'key "%s" not in dictionary' %
(key))
def test_list_record_types(self):
record_types = self.driver.list_record_types()
self.assertEqual(len(record_types), 10)
self.assertTrue(RecordType.A in record_types)
self.assertTrue(RecordType.AAAA in record_types)
self.assertTrue(RecordType.CNAME in record_types)
self.assertTrue(RecordType.HINFO in record_types)
self.assertTrue(RecordType.MX in record_types)
self.assertTrue(RecordType.NS in record_types)
self.assertTrue(RecordType.PTR in record_types)
self.assertTrue(RecordType.RP in record_types)
self.assertTrue(RecordType.SRV in record_types)
self.assertTrue(RecordType.TXT in record_types)
def test_list_zones(self):
extra = {'ns': 'ns1.durabledns.com.', 'mbox': 'mail.myzone.com',
'serial': '1437473456', 'refresh': '13000', 'retry': 7200,
'expire': 1300, 'minimum': 13, 'xfer': '127.0.0.1',
'update_acl': '127.0.0.1'}
zone = Zone(id='myzone.com.', domain='myzone.com.', type='master',
ttl=1300, driver=self.driver, extra=extra)
self.driver.get_zone = MagicMock(return_value=zone)
zones = self.driver.list_zones()
self.assertEqual(len(zones), 2)
zone = zones[0]
self.assertEqual(zone.id, 'myzone.com.')
self.assertEqual(zone.domain, 'myzone.com.')
self.assertEqual(zone.ttl, 1300)
self.assertEqual(zone.extra['ns'], 'ns1.durabledns.com.')
self.assertEqual(zone.extra['mbox'], 'mail.myzone.com')
self.assertEqual(zone.extra['serial'], '1437473456')
self.assertEqual(zone.extra['refresh'], '13000')
self.assertEqual(zone.extra['retry'], 7200)
self.assertEqual(zone.extra['expire'], 1300)
self.assertEqual(zone.extra['minimum'], 13)
self.assertEqual(zone.extra['xfer'], '127.0.0.1')
self.assertEqual(zone.extra['update_acl'], '127.0.0.1')
self.assertEqual(len(zone.extra.keys()), 9)
def test_list_records(self):
z_extra = {'ns': 'ns1.durabledns.com.', 'mbox': 'mail.myzone.com',
'refresh': '13000', 'retry': 7200, 'expire': 1300,
'minimum': 13, 'xfer': '127.0.0.1',
'update_acl': '127.0.0.1'}
zone = Zone(id='myzone.com.', domain='myzone.com.', type='master',
ttl=1300, driver=self.driver, extra=z_extra)
extra = {'aux': 1, 'ttl': 3600}
record = Record(id='353286987', type='A', zone=zone,
name='record1', data='192.168.0.1',
driver=self, extra=extra)
self.driver.get_record = MagicMock(return_value=record)
records = self.driver.list_records(zone=zone)
self.assertEqual(len(records), 2)
self.assertEqual(record.id, '353286987')
self.assertEqual(record.name, 'record1')
self.assertEqual(record.type, 'A')
self.assertEqual(record.data, '192.168.0.1')
self.assertEqual(record.zone, zone)
self.assertEqual(record.extra['aux'], 1)
self.assertEqual(record.extra['ttl'], 3600)
def test_list_records_zone_does_not_exist(self):
z_extra = {'ns': 'ns1.durabledns.com.', 'mbox': 'mail.myzone.com',
'refresh': '13000', 'retry': 7200, 'expire': 1300,
'minimum': 13, 'xfer': '127.0.0.1',
'update_acl': '127.0.0.1'}
zone = Zone(id='myzone.com.', domain='myzone.com.', type='master',
ttl=1300, driver=self.driver, extra=z_extra)
DurableDNSMockHttp.type = 'ZONE_DOES_NOT_EXIST'
try:
self.driver.list_records(zone=zone)
except ZoneDoesNotExistError:
e = sys.exc_info()[1]
self.assertEqual(e.zone_id, zone.id)
else:
self.fail('Exception was not thrown')
def test_get_zone(self):
zone = self.driver.get_zone(zone_id='myzone.com.')
self.assertEqual(zone.id, 'myzone.com.')
self.assertEqual(zone.domain, 'myzone.com.')
self.assertEqual(zone.ttl, 1300)
self.assertEqual(zone.extra['ns'], 'ns1.durabledns.com.')
self.assertEqual(zone.extra['mbox'], 'mail.myzone.com')
self.assertEqual(zone.extra['serial'], '1437473456')
self.assertEqual(zone.extra['refresh'], '13000')
self.assertEqual(zone.extra['retry'], 7200)
self.assertEqual(zone.extra['expire'], 1300)
self.assertEqual(zone.extra['minimum'], 13)
self.assertEqual(zone.extra['xfer'], '127.0.0.1/32')
self.assertEqual(zone.extra['update_acl'],
'127.0.0.1/32,127.0.0.100/32')
self.assertEqual(len(zone.extra.keys()), 9)
def test_get_zone_does_not_exist(self):
DurableDNSMockHttp.type = 'ZONE_DOES_NOT_EXIST'
try:
self.driver.get_zone(zone_id='nonexistentzone.com.')
except ZoneDoesNotExistError:
e = sys.exc_info()[1]
self.assertEqual(e.zone_id, 'nonexistentzone.com.')
else:
self.fail('Exception was not thrown')
def test_get_record(self):
record = self.driver.get_record(zone_id='myzone.com.',
record_id='record1')
self.assertEqual(record.id, '353286987')
self.assertEqual(record.name, 'record1')
self.assertEqual(record.type, 'A')
self.assertEqual(record.data, '192.168.0.1')
self.assertEqual(record.extra['aux'], 1)
self.assertEqual(record.extra['ttl'], 3600)
def test_get_record_zone_does_not_exist(self):
DurableDNSMockHttp.type = 'ZONE_DOES_NOT_EXIST'
try:
self.driver.get_record(zone_id='nonexistentzone.com.',
record_id='record1')
except ZoneDoesNotExistError:
pass
else:
self.fail('Exception was not thrown')
def test_get_record_record_does_not_exist(self):
DurableDNSMockHttp.type = 'RECORD_DOES_NOT_EXIST'
try:
self.driver.get_record(zone_id='', record_id='')
except RecordDoesNotExistError:
pass
else:
self.fail('Exception was not thrown')
def test_create_zone_with_extra_param(self):
DurableDNSMockHttp.type = 'WITH_EXTRA_PARAMS'
zone = self.driver.create_zone(domain='myzone.com.', ttl=4000,
extra={'mbox': 'mail.myzone.com',
'minimum': 50000})
extra = ZONE_EXTRA_PARAMS_DEFAULT_VALUES
self.assertEqual(zone.id, 'myzone.com.')
self.assertEqual(zone.domain, 'myzone.com.')
self.assertEqual(zone.ttl, 4000)
self.assertEqual(zone.extra['ns'], extra['ns'])
self.assertEqual(zone.extra['mbox'], 'mail.myzone.com')
self.assertEqual(zone.extra['serial'], '1437473456')
self.assertEqual(zone.extra['refresh'], extra['refresh'])
self.assertEqual(zone.extra['retry'], extra['retry'])
self.assertEqual(zone.extra['expire'], extra['expire'])
self.assertEqual(zone.extra['minimum'], 50000)
self.assertEqual(zone.extra['xfer'], extra['xfer'])
self.assertEqual(zone.extra['update_acl'], extra['update_acl'])
self.assertEqual(len(zone.extra.keys()), 9)
def test_create_zone_no_extra_param(self):
DurableDNSMockHttp.type = 'NO_EXTRA_PARAMS'
zone = self.driver.create_zone(domain='myzone.com.')
extra = ZONE_EXTRA_PARAMS_DEFAULT_VALUES
self.assertEqual(zone.id, 'myzone.com.')
self.assertEqual(zone.domain, 'myzone.com.')
self.assertEqual(zone.ttl, DEFAULT_TTL)
self.assertEqual(zone.extra['ns'], extra['ns'])
self.assertEqual(zone.extra['mbox'], extra['mbox'])
self.assertEqual(zone.extra['serial'], '1437473456')
self.assertEqual(zone.extra['refresh'], extra['refresh'])
self.assertEqual(zone.extra['retry'], extra['retry'])
self.assertEqual(zone.extra['expire'], extra['expire'])
self.assertEqual(zone.extra['minimum'], extra['minimum'])
self.assertEqual(zone.extra['xfer'], extra['xfer'])
self.assertEqual(zone.extra['update_acl'], extra['update_acl'])
self.assertEqual(len(zone.extra.keys()), 9)
def test_create_zone_zone_already_exist(self):
DurableDNSMockHttp.type = 'ZONE_ALREADY_EXIST'
try:
self.driver.create_zone(domain='myzone.com.')
except ZoneAlreadyExistsError:
pass
else:
self.fail('Exception was not thrown')
def test_create_record_no_extra_param(self):
zone = self.driver.list_zones()[0]
DurableDNSMockHttp.type = 'NO_EXTRA_PARAMS'
record = self.driver.create_record(name='record1', zone=zone,
type=RecordType.A, data='1.2.3.4')
self.assertEqual(record.id, '353367855')
self.assertEqual(record.name, 'record1')
self.assertEqual(record.zone, zone)
self.assertEqual(record.type, RecordType.A)
self.assertEqual(record.data, '1.2.3.4')
self.assertEqual(record.extra.get('aux'),
RECORD_EXTRA_PARAMS_DEFAULT_VALUES.get('aux'))
self.assertEqual(record.extra.get('ttl'),
RECORD_EXTRA_PARAMS_DEFAULT_VALUES.get('ttl'))
def test_create_record_with_extra_param(self):
zone = self.driver.list_zones()[0]
DurableDNSMockHttp.type = 'WITH_EXTRA_PARAMS'
record = self.driver.create_record(name='record1', zone=zone,
type=RecordType.A, data='1.2.3.4',
extra={'ttl': 4000})
self.assertEqual(record.id, '353367855')
self.assertEqual(record.name, 'record1')
self.assertEqual(record.zone, zone)
self.assertEqual(record.type, RecordType.A)
self.assertEqual(record.data, '1.2.3.4')
self.assertEqual(record.extra.get('aux'),
RECORD_EXTRA_PARAMS_DEFAULT_VALUES.get('aux'))
self.assertEqual(record.extra.get('ttl'), 4000)
def test_create_record_zone_does_not_exist(self):
DurableDNSMockHttp.type = 'ZONE_DOES_NOT_EXIST'
z_extra = {'ns': 'ns1.durabledns.com.', 'mbox': 'mail.myzone.com',
'refresh': '13000', 'retry': 7200, 'expire': 1300,
'minimum': 13, 'xfer': '127.0.0.1',
'update_acl': '127.0.0.1'}
zone = Zone(id='deletedzone.com.', domain='deletedzone.com.',
type='master', ttl=1300, driver=self.driver, extra=z_extra)
try:
self.driver.create_record(name='record1', zone=zone,
type=RecordType.A, data='1.2.3.4')
except ZoneDoesNotExistError:
pass
else:
self.fail('Exception was not thrown')
def test_update_zone(self):
# We'll assume that this zone was created before, so it will have a
# serial number in its extra attributes. After the update we check that
# the serial number has changed to the new one.
DurableDNSMockHttp.type = 'UPDATE_ZONE'
z_extra = {'ns': 'ns1.durabledns.com.', 'mbox': 'mail.myzone.com',
'refresh': '13000', 'retry': 7200, 'expire': 1300,
'minimum': 13, 'xfer': '127.0.0.1', 'serial': '1437473456',
'update_acl': '127.0.0.1'}
zone = Zone(id='deletedzone.com.', domain='deletedzone.com.',
type='master', ttl=1300, driver=self.driver, extra=z_extra)
new_extra = {'minimum': 5000, 'expire': 8000}
updated_zone = self.driver.update_zone(zone, zone.domain,
type=zone.type, ttl=4000,
extra=new_extra)
self.assertEqual(updated_zone.id, 'myzone.com.')
self.assertEqual(updated_zone.domain, 'myzone.com.')
self.assertEqual(updated_zone.ttl, 4000)
self.assertEqual(updated_zone.extra['ns'], z_extra['ns'])
self.assertEqual(updated_zone.extra['mbox'], z_extra['mbox'])
self.assertEqual(updated_zone.extra['serial'], '1437475078')
self.assertEqual(updated_zone.extra['refresh'], z_extra['refresh'])
self.assertEqual(updated_zone.extra['retry'], z_extra['retry'])
self.assertEqual(updated_zone.extra['expire'], 8000)
self.assertEqual(updated_zone.extra['minimum'], 5000)
self.assertEqual(updated_zone.extra['xfer'], z_extra['xfer'])
self.assertEqual(updated_zone.extra['update_acl'],
z_extra['update_acl'])
self.assertEqual(len(updated_zone.extra.keys()), 9)
def test_update_zone_zone_does_not_exist(self):
DurableDNSMockHttp.type = 'ZONE_DOES_NOT_EXIST'
z_extra = {'ns': 'ns1.durabledns.com.', 'mbox': 'mail.myzone.com',
'refresh': '13000', 'retry': 7200, 'expire': 1300,
'minimum': 13, 'xfer': '127.0.0.1', 'serial': '1437473456',
'update_acl': '127.0.0.1'}
zone = Zone(id='deletedzone.com.', domain='deletedzone.com.',
type='master', ttl=1300, driver=self.driver, extra=z_extra)
try:
self.driver.update_zone(zone, zone.domain)
except ZoneDoesNotExistError:
pass
else:
self.fail('Exception was not thrown')
def test_update_record(self):
z_extra = {'ns': 'ns1.durabledns.com.', 'mbox': 'mail.myzone.com',
'refresh': '13000', 'retry': 7200, 'expire': 1300,
'minimum': 13, 'xfer': '127.0.0.1',
'update_acl': '127.0.0.1'}
zone = Zone(id='myzone.com.', domain='myzone.com.', type='master',
ttl=1300, driver=self.driver, extra=z_extra)
extra = {'aux': 1, 'ttl': 3600}
record = Record(id='353286987', type='A', zone=zone,
name='record1', data='192.168.0.1',
driver=self.driver, extra=extra)
new_extra = {'aux': 0, 'ttl': 4500}
updated_record = self.driver.update_record(record, record.name,
record.type, record.data,
extra=new_extra)
self.assertEqual(updated_record.data, '192.168.0.1')
self.assertEqual(updated_record.id, '353286987')
self.assertEqual(updated_record.name, 'record1')
self.assertEqual(updated_record.zone, record.zone)
self.assertEqual(updated_record.type, RecordType.A)
self.assertEqual(updated_record.extra.get('aux'), 0)
self.assertEqual(updated_record.extra.get('ttl'), 4500)
def test_update_record_zone_does_not_exist(self):
z_extra = {'ns': 'ns1.durabledns.com.', 'mbox': 'mail.myzone.com',
'refresh': '13000', 'retry': 7200, 'expire': 1300,
'minimum': 13, 'xfer': '127.0.0.1',
'update_acl': '127.0.0.1'}
zone = Zone(id='myzone.com.', domain='myzone.com.', type='master',
ttl=1300, driver=self.driver, extra=z_extra)
extra = {'aux': 1, 'ttl': 3600}
record = Record(id='353286987', type='A', zone=zone,
name='record1', data='192.168.0.1',
driver=self.driver, extra=extra)
DurableDNSMockHttp.type = 'ZONE_DOES_NOT_EXIST'
try:
self.driver.update_record(record, record.name, record.type,
record.data)
except ZoneDoesNotExistError:
pass
else:
self.fail('Exception was not thrown')
def test_delete_zone(self):
z_extra = {'ns': 'ns1.durabledns.com.', 'mbox': 'mail.myzone.com',
'refresh': '13000', 'retry': 7200, 'expire': 1300,
'minimum': 13, 'xfer': '127.0.0.1',
'update_acl': '127.0.0.1'}
zone = Zone(id='myzone.com.', domain='myzone.com.', type='master',
ttl=1300, driver=self.driver, extra=z_extra)
status = self.driver.delete_zone(zone=zone)
self.assertTrue(status)
def test_delete_zone_zone_does_not_exist(self):
z_extra = {'ns': 'ns1.durabledns.com.', 'mbox': 'mail.myzone.com',
'refresh': '13000', 'retry': 7200, 'expire': 1300,
'minimum': 13, 'xfer': '127.0.0.1',
'update_acl': '127.0.0.1'}
zone = Zone(id='myzone.com.', domain='myzone.com.', type='master',
ttl=1300, driver=self.driver, extra=z_extra)
DurableDNSMockHttp.type = 'ZONE_DOES_NOT_EXIST'
try:
self.driver.delete_zone(zone=zone)
except ZoneDoesNotExistError:
pass
else:
self.fail('Exception was not thrown')
def test_delete_record(self):
z_extra = {'ns': 'ns1.durabledns.com.', 'mbox': 'mail.myzone.com',
'refresh': '13000', 'retry': 7200, 'expire': 1300,
'minimum': 13, 'xfer': '127.0.0.1',
'update_acl': '127.0.0.1'}
zone = Zone(id='myzone.com.', domain='myzone.com.', type='master',
ttl=1300, driver=self.driver, extra=z_extra)
extra = {'aux': 1, 'ttl': 3600}
record = Record(id='353286987', type='A', zone=zone,
name='record1', data='192.168.0.1',
driver=self.driver, extra=extra)
status = self.driver.delete_record(record=record)
self.assertTrue(status)
def test_delete_record_record_does_not_exist(self):
z_extra = {'ns': 'ns1.durabledns.com.', 'mbox': 'mail.myzone.com',
'refresh': '13000', 'retry': 7200, 'expire': 1300,
'minimum': 13, 'xfer': '127.0.0.1',
'update_acl': '127.0.0.1'}
zone = Zone(id='myzone.com.', domain='myzone.com.', type='master',
ttl=1300, driver=self.driver, extra=z_extra)
extra = {'aux': 1, 'ttl': 3600}
record = Record(id='353286987', type='A', zone=zone,
name='record1', data='192.168.0.1',
driver=self.driver, extra=extra)
DurableDNSMockHttp.type = 'RECORD_DOES_NOT_EXIST'
try:
self.driver.delete_record(record=record)
except RecordDoesNotExistError:
pass
else:
self.fail('Exception was not thrown')
def test_delete_record_zone_does_not_exist(self):
zone = self.driver.list_zones()[0]
record = self.driver.list_records(zone=zone)[0]
DurableDNSMockHttp.type = 'ZONE_DOES_NOT_EXIST'
try:
self.driver.delete_record(record=record)
except ZoneDoesNotExistError:
pass
else:
self.fail('Exception was not thrown')
class DurableDNSMockHttp(MockHttpTestCase):
fixtures = DNSFileFixtures('durabledns')
def _services_dns_listZones_php(self, method, url, body, headers):
body = self.fixtures.load('list_zones.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_listRecords_php(self, method, url, body, headers):
body = self.fixtures.load('list_records.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_listRecords_php_ZONE_DOES_NOT_EXIST(self, method, url,
body, headers):
body = self.fixtures.load('list_records_ZONE_DOES_NOT_EXIST.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_getZone_php(self, method, url, body, headers):
body = self.fixtures.load('get_zone.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_getZone_php_ZONE_DOES_NOT_EXIST(self, method, url,
body, headers):
body = self.fixtures.load('get_zone_ZONE_DOES_NOT_EXIST.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_getRecord_php(self, method, url, body, headers):
body = self.fixtures.load('get_record.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_getRecord_php_ZONE_DOES_NOT_EXIST(self, method, url,
body, headers):
body = self.fixtures.load('get_record_ZONE_DOES_NOT_EXIST.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_getRecord_php_RECORD_DOES_NOT_EXIST(self, method, url,
body, headers):
body = self.fixtures.load('get_record_RECORD_DOES_NOT_EXIST.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_createZone_php_WITH_EXTRA_PARAMS(self, method, url, body,
headers):
body = self.fixtures.load('create_zone.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_getZone_php_WITH_EXTRA_PARAMS(self, method, url, body,
headers):
body = self.fixtures.load('get_zone_WITH_EXTRA_PARAMS.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_createZone_php_NO_EXTRA_PARAMS(self, method, url, body,
headers):
body = self.fixtures.load('create_zone.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_getZone_php_NO_EXTRA_PARAMS(self, method, url, body,
headers):
body = self.fixtures.load('get_zone_NO_EXTRA_PARAMS.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_createZone_php_ZONE_ALREADY_EXIST(self, method, url,
body, headers):
body = self.fixtures.load('create_zone_ZONE_ALREADY_EXIST.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_createRecord_php_NO_EXTRA_PARAMS(self, method, url, body,
headers):
body = self.fixtures.load('create_record_NO_EXTRA_PARAMS.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_createRecord_php_WITH_EXTRA_PARAMS(self, method, url,
body, headers):
body = self.fixtures.load('create_record_WITH_EXTRA_PARAMS.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_createRecord_php_ZONE_DOES_NOT_EXIST(self, method, url,
body, headers):
body = self.fixtures.load('create_record_ZONE_DOES_NOT_EXIST.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_updateZone_php_UPDATE_ZONE(self, method, url,
body, headers):
body = self.fixtures.load('update_zone_UPDATE_ZONE.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_getZone_php_UPDATE_ZONE(self, method, url,
body, headers):
body = self.fixtures.load('get_zone_UPDATE_ZONE.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_updateZone_php_ZONE_DOES_NOT_EXIST(self, method, url,
body, headers):
body = self.fixtures.load('update_zone_ZONE_DOES_NOT_EXIST.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_updateRecord_php(self, method, url, body, headers):
body = self.fixtures.load('update_record.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_updateRecord_php_ZONE_DOES_NOT_EXIST(self, method, url,
body, headers):
body = self.fixtures.load('update_record_ZONE_DOES_NOT_EXIST.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_deleteZone_php(self, method, url, body, headers):
body = self.fixtures.load('delete_zone.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_deleteZone_php_ZONE_DOES_NOT_EXIST(self, method, url,
body, headers):
body = self.fixtures.load('delete_zone_ZONE_DOES_NOT_EXIST.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_deleteRecord_php(self, method, url, body, headers):
body = self.fixtures.load('delete_record.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_deleteRecord_php_RECORD_DOES_NOT_EXIST(self, method, url,
body, headers):
body = self.fixtures.load('delete_record_RECORD_DOES_NOT_EXIST.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _services_dns_deleteRecord_php_ZONE_DOES_NOT_EXIST(self, method, url,
body, headers):
body = self.fixtures.load('delete_record_ZONE_DOES_NOT_EXIST.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if __name__ == '__main__':
sys.exit(unittest.main())
| 49.512411 | 79 | 0.602149 | 3,273 | 27,925 | 4.96242 | 0.076383 | 0.096047 | 0.056151 | 0.053195 | 0.843677 | 0.800579 | 0.727866 | 0.719493 | 0.716968 | 0.70761 | 0 | 0.041412 | 0.267574 | 27,925 | 563 | 80 | 49.600355 | 0.752701 | 0.03291 | 0 | 0.60166 | 0 | 0 | 0.154507 | 0.023639 | 0 | 0 | 0 | 0 | 0.244813 | 1 | 0.107884 | false | 0.018672 | 0.03112 | 0 | 0.19917 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4d8d831a14be5c12e1efd1ee031af6689f133b11 | 103 | py | Python | tfkit/__init__.py | voidful/TFk | 5ba46a5ef12bdbbd0215a2eadb036b443c382f3f | [
"Apache-2.0"
] | null | null | null | tfkit/__init__.py | voidful/TFk | 5ba46a5ef12bdbbd0215a2eadb036b443c382f3f | [
"Apache-2.0"
] | null | null | null | tfkit/__init__.py | voidful/TFk | 5ba46a5ef12bdbbd0215a2eadb036b443c382f3f | [
"Apache-2.0"
] | null | null | null | from tfkit.model import *
import tfkit.utility
import tfkit.dump
import tfkit.train
import tfkit.eval
| 14.714286 | 25 | 0.815534 | 16 | 103 | 5.25 | 0.5 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126214 | 103 | 6 | 26 | 17.166667 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4de29893e0b798b9afb6cfdffcedd5f1cdcf8952 | 45 | py | Python | clay/markdown_ext/__init__.py | TuxCoder/Clay | 04f15b4d742b14d09df9049dd91cfa4386cba66e | [
"MIT"
] | null | null | null | clay/markdown_ext/__init__.py | TuxCoder/Clay | 04f15b4d742b14d09df9049dd91cfa4386cba66e | [
"MIT"
] | null | null | null | clay/markdown_ext/__init__.py | TuxCoder/Clay | 04f15b4d742b14d09df9049dd91cfa4386cba66e | [
"MIT"
] | null | null | null | from .jinja import MarkdownExtension # noqa
| 22.5 | 44 | 0.8 | 5 | 45 | 7.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155556 | 45 | 1 | 45 | 45 | 0.947368 | 0.088889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1275a7c43036f133045d4c680f06a6c63c0b9259 | 32 | py | Python | kdlutils/__init__.py | kaustubh-sadekar/dlutils | 91b98f7701f4d682ae2790e4cf41b9daa5e3cf77 | [
"MIT"
] | 3 | 2020-03-12T09:21:24.000Z | 2021-12-27T14:06:20.000Z | kdlutils/__init__.py | kaustubh-sadekar/githubActions | 2b8117898c4f9ca52d8800bdb95121154c7bf1e9 | [
"MIT"
] | null | null | null | kdlutils/__init__.py | kaustubh-sadekar/githubActions | 2b8117898c4f9ca52d8800bdb95121154c7bf1e9 | [
"MIT"
] | null | null | null | from kdlutils.kdlutils import *
| 16 | 31 | 0.8125 | 4 | 32 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
12d7beb8abca942b4e75e58d1a4a3c60a64f3faf | 141 | py | Python | ly2xml/__init__.py | bradley-gersh/ly2xml | 8e240e79e1e34349096378191aa58f1fbd65df93 | [
"MIT"
] | null | null | null | ly2xml/__init__.py | bradley-gersh/ly2xml | 8e240e79e1e34349096378191aa58f1fbd65df93 | [
"MIT"
] | null | null | null | ly2xml/__init__.py | bradley-gersh/ly2xml | 8e240e79e1e34349096378191aa58f1fbd65df93 | [
"MIT"
] | null | null | null | from .LilypondLexer import LilypondLexer
from .LilypondParser import LilypondParser
from .LilypondErrorListener import LilypondErrorListener
| 35.25 | 56 | 0.893617 | 12 | 141 | 10.5 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 141 | 3 | 57 | 47 | 0.976744 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
12fdbd3f29d55f6b5398f9cf144a9c16e87b5ae3 | 140 | py | Python | search/__init__.py | shyammarjit/missionaries-and-cannibals | 2a9563b292b9e3db009dbc495d464597b7f4b00d | [
"MIT"
] | 17 | 2019-01-30T16:24:00.000Z | 2022-02-28T09:50:05.000Z | search/__init__.py | shyammarjit/missionaries-and-cannibals | 2a9563b292b9e3db009dbc495d464597b7f4b00d | [
"MIT"
] | null | null | null | search/__init__.py | shyammarjit/missionaries-and-cannibals | 2a9563b292b9e3db009dbc495d464597b7f4b00d | [
"MIT"
] | 8 | 2018-04-28T17:17:24.000Z | 2022-03-16T16:40:07.000Z | from .node import Node
from .problem import Problem
from .search import depth_limited_search
from .search import iterative_deepening_search
| 28 | 46 | 0.857143 | 20 | 140 | 5.8 | 0.45 | 0.172414 | 0.275862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 140 | 4 | 47 | 35 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
42162064522d312d65e024f47d9098bca2048d4d | 43 | py | Python | src/stoat/core/structure/ref/__init__.py | saarkatz/guppy-struct | b9099353312c365cfd788dbd2d168a9c844765be | [
"Apache-2.0"
] | 1 | 2021-12-07T11:59:11.000Z | 2021-12-07T11:59:11.000Z | src/stoat/core/structure/ref/__init__.py | saarkatz/stoat-struct | b9099353312c365cfd788dbd2d168a9c844765be | [
"Apache-2.0"
] | null | null | null | src/stoat/core/structure/ref/__init__.py | saarkatz/stoat-struct | b9099353312c365cfd788dbd2d168a9c844765be | [
"Apache-2.0"
] | null | null | null | from .ref import Ref
from .this import This | 21.5 | 22 | 0.790698 | 8 | 43 | 4.25 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162791 | 43 | 2 | 22 | 21.5 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
424e52fdb6ac7691d9640b94cfb4ec29a287a1ec | 48 | py | Python | python/pyMeta/__init__.py | ProkopHapala/SimpleSimulationEngine | 240f9b7e85b3a6eda7a27dc15fe3f7b8c08774c5 | [
"MIT"
] | 26 | 2016-12-04T04:45:12.000Z | 2022-03-24T09:39:28.000Z | python/pyMeta/__init__.py | Aki78/FlightAI | 9c5480f2392c9c89b9fee4902db0c4cde5323a6c | [
"MIT"
] | null | null | null | python/pyMeta/__init__.py | Aki78/FlightAI | 9c5480f2392c9c89b9fee4902db0c4cde5323a6c | [
"MIT"
] | 2 | 2019-02-09T12:31:06.000Z | 2019-04-28T02:24:50.000Z | #!/usr/bin/python
from metaprograming import *
| 12 | 28 | 0.75 | 6 | 48 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 48 | 3 | 29 | 16 | 0.857143 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
427837110c96172832b3beaa0d7d99fa1f1948a8 | 33 | py | Python | tasks/UDEMY/100_days/temp.py | AleksNeStu/projects | 1a4c68dfbdcb77228f0f3617e58fd18fcb1f5dbb | [
"Apache-2.0"
] | 2 | 2022-01-19T18:01:35.000Z | 2022-02-06T06:54:38.000Z | tasks/UDEMY/100_days/temp.py | AleksNeStu/projects | 1a4c68dfbdcb77228f0f3617e58fd18fcb1f5dbb | [
"Apache-2.0"
] | null | null | null | tasks/UDEMY/100_days/temp.py | AleksNeStu/projects | 1a4c68dfbdcb77228f0f3617e58fd18fcb1f5dbb | [
"Apache-2.0"
] | null | null | null | from markupsafe import escape
| 6.6 | 29 | 0.787879 | 4 | 33 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212121 | 33 | 4 | 30 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c450e628cbb6835ffe91e1f34fa8bb74846ded09 | 19 | py | Python | vpac/__init__.py | ForrestCKoch/VPAC | 029935ba5732a323509d99dda4dcd56f3fdbbdeb | [
"MIT"
] | null | null | null | vpac/__init__.py | ForrestCKoch/VPAC | 029935ba5732a323509d99dda4dcd56f3fdbbdeb | [
"MIT"
] | null | null | null | vpac/__init__.py | ForrestCKoch/VPAC | 029935ba5732a323509d99dda4dcd56f3fdbbdeb | [
"MIT"
] | null | null | null | from vpac import *
| 9.5 | 18 | 0.736842 | 3 | 19 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 19 | 1 | 19 | 19 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c4754f492a906ccd7d8b66ddbfdcee74953243e0 | 142 | py | Python | quiz_bot/quiz/interfaces/__init__.py | livestreamx/quiz-bot | e08e9161d908ce9cb851cd6c689f04703db1928f | [
"MIT"
] | 1 | 2022-03-05T13:42:08.000Z | 2022-03-05T13:42:08.000Z | quiz_bot/quiz/interfaces/__init__.py | livestreamx/quiz-bot | e08e9161d908ce9cb851cd6c689f04703db1928f | [
"MIT"
] | null | null | null | quiz_bot/quiz/interfaces/__init__.py | livestreamx/quiz-bot | e08e9161d908ce9cb851cd6c689f04703db1928f | [
"MIT"
] | 2 | 2021-06-20T10:40:25.000Z | 2022-02-15T04:26:58.000Z | # flake8: noqa
from .abstract_interface import IInterface
from .chat_interface import ChatInterface
from .quiz_interface import QuizInterface
| 28.4 | 42 | 0.859155 | 17 | 142 | 7 | 0.647059 | 0.378151 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007874 | 0.105634 | 142 | 4 | 43 | 35.5 | 0.929134 | 0.084507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
67178fc2f10057b0e36ac8459456e665cfa69b70 | 30 | py | Python | datanorth/__init__.py | data-north/datanorth-python | ce4e64c6615dc8f618401313862d7c650b4ed3c8 | [
"MIT"
] | null | null | null | datanorth/__init__.py | data-north/datanorth-python | ce4e64c6615dc8f618401313862d7c650b4ed3c8 | [
"MIT"
] | null | null | null | datanorth/__init__.py | data-north/datanorth-python | ce4e64c6615dc8f618401313862d7c650b4ed3c8 | [
"MIT"
] | null | null | null | from .datanorth import Project | 30 | 30 | 0.866667 | 4 | 30 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
675dcfb7423bee36a99f2c0f6622eb55b6d41da6 | 3,498 | py | Python | tests/misc/compare_uuid_generators.py | openpermissions/identity-srv | 97696413accfea052f4037a4291d8d4eed560cfb | [
"Apache-2.0"
] | 2 | 2016-05-03T20:10:04.000Z | 2019-05-20T01:41:33.000Z | tests/misc/compare_uuid_generators.py | openpermissions/identity-srv | 97696413accfea052f4037a4291d8d4eed560cfb | [
"Apache-2.0"
] | 1 | 2016-05-05T11:04:39.000Z | 2016-05-05T11:04:39.000Z | tests/misc/compare_uuid_generators.py | openpermissions/identity-srv | 97696413accfea052f4037a4291d8d4eed560cfb | [
"Apache-2.0"
] | 1 | 2019-05-20T01:41:23.000Z | 2019-05-20T01:41:23.000Z | # -*- coding: utf-8 -*-
# Copyright 2016 Open Permissions Platform Coalition
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software distributed under the License is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import uuid
import time
def benchmark(name, gen, counts=(10000, 50000, 100000)):
    # Time `gen` for each sample size and report the total and per-call average.
    for n in counts:
        start = time.time()
        for _ in range(n):
            gen()
        diff = time.time() - start
        print("It took {} to generate {} {}. "
              "Avg time to gen one UUID: {}".format(diff, n, name, diff / n))


benchmark('UUID5 based on UUID1 and UUID4.urn',
          lambda: uuid.uuid5(uuid.uuid1(), uuid.uuid4().urn))
benchmark('UUID3 based on UUID1 and UUID4.urn',
          lambda: uuid.uuid3(uuid.uuid1(), uuid.uuid4().urn))
benchmark('UUID1', uuid.uuid1)
benchmark('UUID4', uuid.uuid4)
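For a more reliable measurement than manual `time.time()` deltas, the standard library's `timeit` module could be used instead; it picks the best available timer and runs the loop for you. This is a minimal sketch, not part of the original script:

```python
import timeit
import uuid

# timeit.timeit calls uuid.uuid4 `number` times and returns total seconds.
total = timeit.timeit(uuid.uuid4, number=10000)
print("Avg time to gen one UUID4: {}".format(total / 10000))
```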
| 29.15 | 107 | 0.642367 | 585 | 3,498 | 3.841026 | 0.158974 | 0.085447 | 0.069426 | 0.074766 | 0.781486 | 0.781486 | 0.781486 | 0.781486 | 0.781486 | 0.781486 | 0 | 0.043447 | 0.203831 | 3,498 | 119 | 108 | 29.394958 | 0.763375 | 0.168668 | 0 | 0.979592 | 0 | 0 | 0.308382 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.020408 | 0 | 0.020408 | 0.122449 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
67707bc53dd0975419d6dfe9086bb2494401009c | 48,484 | py | Python | data/transcoder_evaluation_gfg/python/CHECK_SUMS_TH_ROW_TH_COLUMN_MATRIX.py | mxl1n/CodeGen | e5101dd5c5e9c3720c70c80f78b18f13e118335a | [
"MIT"
] | 241 | 2021-07-20T08:35:20.000Z | 2022-03-31T02:39:08.000Z | data/transcoder_evaluation_gfg/python/CHECK_SUMS_TH_ROW_TH_COLUMN_MATRIX.py | mxl1n/CodeGen | e5101dd5c5e9c3720c70c80f78b18f13e118335a | [
"MIT"
] | 49 | 2021-07-22T23:18:42.000Z | 2022-03-24T09:15:26.000Z | data/transcoder_evaluation_gfg/python/CHECK_SUMS_TH_ROW_TH_COLUMN_MATRIX.py | mxl1n/CodeGen | e5101dd5c5e9c3720c70c80f78b18f13e118335a | [
"MIT"
] | 71 | 2021-07-21T05:17:52.000Z | 2022-03-29T23:49:28.000Z | # Copyright (c) 2019-present, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
#
def f_gold ( a , n , m ) :
sum1 = 0
sum2 = 0
for i in range ( 0 , n ) :
sum1 = 0
sum2 = 0
for j in range ( 0 , m ) :
sum1 += a [ i ] [ j ]
sum2 += a [ j ] [ i ]
if ( sum1 == sum2 ) :
return 1
return 0
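An idiomatic sketch of the same check, assuming a square matrix (the original indexes `a[j][i]` with `j` ranging over the column count, which is only safe when the matrix is square). The helper name `row_col_sum_match` is my own, not part of the evaluation harness:

```python
def row_col_sum_match(a):
    # Return True if any index i has equal i-th row sum and i-th column sum.
    n = len(a)
    for i in range(n):
        row_sum = sum(a[i])
        col_sum = sum(a[j][i] for j in range(n))
        if row_sum == col_sum:
            return True
    return False
```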
#TOFILL
if __name__ == '__main__':
param = [
([[7, 9, 13, 14, 17, 17, 17, 19, 21, 21, 23, 25, 25, 25, 28, 29, 34, 35, 37, 39, 40, 41, 42, 49, 54, 56, 61, 64, 65, 68, 70, 74, 78, 81, 81, 83, 86, 88, 88, 88, 91, 93, 95, 98], [5, 6, 7, 8, 11, 12, 16, 19, 20, 24, 24, 28, 34, 36, 39, 41, 45, 49, 53, 56, 57, 57, 59, 62, 62, 64, 69, 71, 72, 74, 74, 75, 75, 80, 82, 86, 88, 89, 90, 91, 93, 94, 95, 99], [1, 1, 3, 8, 9, 11, 18, 19, 20, 20, 22, 23, 27, 29, 29, 31, 34, 36, 42, 46, 47, 49, 51, 51, 58, 59, 63, 65, 66, 67, 70, 70, 71, 72, 74, 76, 77, 78, 79, 83, 84, 90, 96, 99], [1, 5, 6, 7, 8, 11, 17, 28, 32, 34, 37, 39, 40, 43, 44, 44, 45, 60, 61, 63, 64, 69, 71, 71, 73, 74, 75, 78, 82, 83, 84, 84, 86, 87, 87, 89, 93, 93, 95, 95, 97, 97, 99, 99], [2, 6, 8, 9, 12, 18, 19, 21, 24, 26, 30, 36, 36, 36, 38, 39, 40, 42, 46, 47, 48, 49, 53, 54, 54, 55, 58, 62, 65, 65, 66, 67, 70, 71, 71, 73, 79, 81, 81, 87, 89, 95, 97, 99], [1, 5, 7, 10, 11, 11, 12, 14, 19, 20, 25, 28, 29, 38, 44, 45, 48, 53, 56, 63, 65, 67, 68, 71, 71, 76, 76, 80, 81, 82, 82, 85, 87, 88, 89, 90, 90, 91, 93, 94, 95, 98, 99, 99], [1, 2, 6, 7, 7, 7, 18, 26, 27, 30, 31, 33, 33, 36, 36, 37, 41, 46, 51, 52, 56, 57, 57, 61, 63, 72, 72, 73, 73, 77, 79, 82, 84, 84, 87, 88, 88, 89, 89, 92, 95, 96, 98, 98], [5, 7, 7, 13, 14, 18, 18, 18, 21, 24, 26, 27, 28, 32, 36, 37, 39, 41, 41, 42, 44, 46, 50, 51, 52, 54, 54, 57, 60, 60, 62, 64, 66, 68, 69, 72, 77, 79, 82, 89, 90, 92, 93, 94], [2, 4, 6, 10, 11, 12, 13, 16, 16, 19, 21, 23, 26, 27, 28, 31, 33, 35, 38, 40, 41, 41, 42, 45, 46, 47, 47, 48, 49, 49, 52, 53, 55, 56, 60, 64, 66, 67, 73, 75, 78, 84, 91, 96], [3, 5, 10, 11, 13, 20, 22, 23, 23, 24, 24, 28, 31, 34, 35, 35, 35, 36, 37, 41, 41, 43, 43, 45, 49, 53, 53, 57, 61, 64, 64, 64, 67, 69, 75, 76, 79, 81, 82, 83, 89, 93, 98, 99], [1, 1, 4, 9, 13, 13, 13, 15, 16, 23, 23, 25, 26, 28, 29, 30, 34, 36, 39, 40, 41, 46, 48, 54, 55, 57, 58, 62, 62, 63, 63, 65, 67, 70, 71, 71, 76, 81, 83, 84, 85, 96, 96, 96], [2, 3, 7, 11, 13, 17, 20, 27, 27, 27, 28, 30, 34, 35, 42, 44, 44, 44, 46, 49, 
51, 52, 57, 61, 61, 62, 62, 63, 71, 77, 78, 78, 78, 79, 79, 83, 89, 89, 90, 93, 93, 94, 96, 98], [1, 3, 5, 6, 8, 8, 10, 10, 11, 16, 17, 18, 18, 18, 18, 21, 27, 32, 36, 38, 39, 41, 43, 43, 45, 49, 55, 57, 61, 61, 65, 71, 73, 74, 74, 76, 78, 80, 88, 90, 94, 95, 97, 98], [6, 7, 7, 7, 10, 14, 14, 19, 20, 24, 25, 26, 28, 29, 30, 30, 30, 33, 34, 36, 37, 38, 50, 53, 55, 56, 58, 62, 64, 66, 67, 74, 74, 76, 77, 81, 84, 88, 92, 93, 94, 95, 95, 97], [1, 2, 4, 13, 16, 18, 23, 25, 26, 27, 28, 29, 31, 33, 39, 44, 47, 48, 53, 54, 57, 62, 63, 64, 64, 64, 66, 68, 73, 73, 75, 77, 78, 81, 85, 94, 95, 96, 96, 96, 97, 98, 99, 99], [2, 4, 6, 13, 14, 15, 15, 26, 29, 30, 30, 31, 32, 33, 35, 37, 37, 44, 45, 45, 45, 47, 51, 54, 55, 56, 57, 58, 59, 63, 63, 64, 66, 72, 73, 75, 78, 80, 86, 92, 93, 94, 96, 97], [1, 1, 2, 3, 6, 9, 11, 12, 14, 22, 27, 31, 35, 36, 36, 38, 41, 42, 42, 45, 47, 47, 47, 49, 49, 57, 58, 59, 62, 63, 70, 70, 73, 76, 78, 82, 83, 83, 85, 91, 93, 96, 99, 99], [1, 2, 2, 6, 7, 8, 8, 11, 13, 14, 18, 19, 23, 25, 25, 29, 30, 35, 35, 39, 40, 46, 55, 55, 56, 58, 59, 61, 62, 70, 70, 71, 72, 74, 74, 75, 80, 85, 86, 89, 91, 92, 93, 97], [4, 10, 17, 17, 18, 19, 20, 24, 26, 26, 28, 35, 37, 39, 40, 41, 43, 45, 52, 53, 54, 54, 61, 63, 65, 65, 73, 73, 76, 77, 77, 78, 78, 79, 82, 83, 92, 94, 95, 96, 96, 96, 97, 99], [6, 8, 8, 9, 11, 13, 13, 17, 18, 18, 19, 21, 22, 27, 29, 29, 32, 33, 36, 38, 40, 42, 47, 47, 56, 57, 58, 66, 68, 71, 76, 77, 78, 78, 79, 81, 83, 84, 87, 91, 91, 95, 98, 99], [1, 5, 5, 6, 7, 8, 10, 10, 12, 14, 15, 17, 21, 23, 29, 33, 36, 36, 38, 40, 42, 44, 44, 46, 51, 52, 52, 54, 57, 57, 58, 59, 59, 60, 72, 76, 80, 85, 86, 88, 91, 95, 97, 99], [5, 5, 6, 8, 9, 10, 11, 14, 16, 18, 22, 24, 24, 34, 34, 40, 40, 42, 43, 52, 53, 55, 56, 57, 58, 60, 67, 67, 72, 72, 75, 76, 76, 76, 79, 80, 81, 86, 87, 88, 91, 91, 92, 98], [1, 5, 19, 19, 20, 20, 21, 23, 25, 28, 30, 31, 33, 33, 33, 33, 38, 41, 45, 48, 52, 59, 60, 64, 64, 65, 65, 66, 69, 73, 78, 79, 80, 83, 85, 85, 90, 91, 91, 92, 92, 98, 
98, 99], [2, 4, 6, 10, 13, 14, 17, 21, 24, 25, 27, 29, 29, 32, 37, 39, 43, 46, 53, 53, 55, 55, 56, 56, 61, 63, 64, 66, 67, 69, 69, 70, 70, 71, 74, 74, 76, 81, 82, 85, 86, 90, 95, 98], [1, 2, 3, 4, 6, 6, 9, 10, 12, 17, 18, 19, 19, 25, 26, 30, 36, 36, 41, 49, 50, 54, 56, 59, 59, 60, 61, 62, 65, 66, 66, 70, 71, 74, 75, 76, 77, 83, 86, 90, 91, 96, 96, 97], [2, 2, 2, 4, 5, 6, 7, 8, 10, 11, 12, 18, 21, 28, 31, 36, 37, 40, 43, 44, 45, 51, 53, 54, 55, 55, 56, 56, 57, 65, 66, 67, 71, 73, 78, 80, 82, 83, 86, 86, 91, 91, 92, 95], [1, 3, 7, 10, 13, 15, 16, 18, 21, 22, 23, 26, 26, 27, 32, 36, 38, 43, 43, 47, 48, 49, 50, 51, 56, 57, 60, 62, 63, 64, 67, 68, 72, 77, 81, 81, 82, 82, 82, 85, 91, 91, 93, 95], [3, 3, 3, 9, 9, 11, 12, 13, 17, 18, 19, 20, 23, 23, 24, 25, 26, 27, 28, 29, 29, 29, 30, 32, 33, 34, 35, 37, 39, 50, 51, 53, 54, 57, 60, 64, 71, 83, 88, 89, 92, 94, 96, 98], [1, 2, 3, 3, 4, 5, 7, 9, 11, 12, 12, 18, 22, 23, 25, 33, 41, 42, 43, 46, 46, 47, 53, 57, 58, 59, 60, 63, 63, 64, 68, 68, 69, 69, 70, 73, 75, 83, 86, 90, 92, 93, 95, 96], [4, 5, 5, 11, 13, 15, 16, 16, 17, 18, 24, 25, 25, 33, 33, 35, 38, 42, 42, 45, 47, 49, 49, 50, 52, 56, 57, 59, 60, 61, 64, 64, 65, 68, 70, 70, 73, 75, 79, 84, 85, 88, 95, 96], [3, 5, 6, 6, 13, 13, 16, 23, 24, 26, 27, 28, 30, 32, 33, 33, 34, 34, 47, 48, 48, 55, 56, 57, 63, 64, 67, 68, 68, 68, 70, 75, 79, 82, 82, 87, 89, 91, 91, 93, 93, 96, 97, 99], [1, 3, 12, 15, 15, 20, 21, 22, 24, 24, 28, 28, 31, 32, 33, 36, 40, 41, 41, 42, 43, 46, 46, 49, 50, 56, 58, 61, 63, 66, 67, 68, 70, 75, 75, 75, 82, 83, 87, 96, 96, 97, 97, 99], [2, 11, 12, 12, 13, 14, 16, 17, 18, 21, 23, 27, 29, 35, 35, 36, 36, 37, 43, 48, 48, 58, 63, 65, 66, 68, 69, 70, 73, 74, 74, 76, 77, 83, 83, 86, 88, 89, 89, 91, 96, 98, 99, 99], [1, 2, 4, 5, 5, 5, 7, 15, 16, 18, 22, 28, 30, 30, 34, 34, 36, 40, 41, 41, 43, 44, 48, 50, 51, 55, 57, 59, 60, 60, 62, 63, 69, 69, 70, 78, 85, 86, 86, 88, 90, 95, 95, 99], [4, 5, 10, 11, 11, 11, 11, 12, 16, 16, 18, 19, 20, 20, 28, 28, 31, 32, 36, 36, 37, 
40, 42, 45, 49, 50, 51, 55, 60, 63, 64, 72, 75, 76, 78, 78, 78, 81, 83, 84, 85, 91, 93, 93], [3, 5, 7, 8, 12, 13, 15, 15, 15, 16, 18, 20, 20, 23, 27, 28, 30, 31, 33, 34, 36, 37, 45, 47, 49, 53, 58, 58, 60, 60, 62, 62, 63, 63, 64, 71, 72, 76, 78, 79, 80, 83, 89, 98], [3, 6, 7, 12, 13, 13, 13, 17, 17, 19, 20, 20, 24, 24, 28, 31, 37, 40, 41, 43, 44, 44, 45, 49, 52, 53, 57, 58, 64, 64, 67, 71, 76, 76, 77, 79, 80, 81, 86, 91, 91, 93, 95, 99], [1, 1, 5, 5, 6, 9, 10, 10, 12, 12, 15, 16, 22, 22, 24, 30, 30, 37, 38, 38, 45, 47, 48, 49, 56, 59, 61, 63, 63, 69, 72, 74, 76, 80, 83, 83, 84, 88, 88, 89, 93, 94, 95, 98], [3, 4, 4, 9, 10, 11, 11, 17, 18, 19, 22, 22, 24, 24, 24, 25, 25, 26, 26, 28, 31, 35, 37, 38, 41, 52, 52, 54, 61, 61, 64, 69, 72, 81, 81, 83, 88, 93, 93, 95, 95, 96, 98, 99], [1, 2, 3, 6, 8, 8, 10, 10, 11, 12, 17, 20, 23, 26, 27, 29, 33, 34, 36, 41, 42, 44, 57, 57, 57, 64, 65, 67, 68, 73, 78, 78, 79, 79, 82, 86, 87, 88, 88, 93, 95, 98, 99, 99], [3, 4, 5, 8, 8, 11, 13, 16, 16, 23, 33, 34, 38, 38, 38, 39, 40, 48, 50, 51, 56, 56, 59, 66, 71, 72, 72, 74, 74, 74, 78, 78, 79, 79, 81, 83, 86, 87, 88, 88, 90, 93, 95, 99], [3, 3, 12, 16, 18, 19, 21, 24, 24, 25, 30, 31, 31, 35, 41, 42, 45, 45, 45, 46, 49, 49, 52, 53, 55, 58, 59, 61, 67, 75, 75, 76, 78, 83, 83, 84, 85, 87, 88, 93, 94, 94, 95, 97], [2, 3, 6, 10, 11, 11, 12, 12, 16, 16, 20, 21, 24, 26, 29, 32, 37, 38, 41, 45, 46, 46, 47, 48, 52, 56, 56, 57, 58, 59, 63, 63, 67, 75, 80, 82, 84, 89, 89, 95, 95, 98, 98, 99], [2, 2, 2, 3, 3, 5, 16, 16, 16, 18, 19, 20, 22, 29, 29, 31, 32, 33, 34, 34, 37, 48, 51, 51, 52, 53, 58, 68, 71, 72, 75, 75, 78, 83, 83, 84, 85, 87, 89, 90, 95, 96, 97, 97]],31,34,),
([[80, 22, -16, 10, 44, 26, -88, -6, -42, 68, 60, -60, -86, 34, 94, -28, -68, 62, 14, -60, -46, -24, -38, 30, -60, -62, -52, -86, 2, 86, 24, -46, -4, 18], [-68, 54, 30, -12, -60, -60, 70, 78, -70, -50, -92, -26, -50, -98, 0, 50, -60, 2, 50, 74, -4, 52, 20, 76, 68, 56, 28, -98, -80, -86, 22, -58, -42, -44], [-60, 64, 24, -2, 82, -58, -32, 26, 86, 88, 22, 54, -98, 98, 60, 94, 84, -16, -64, 42, -92, -96, 58, 78, 74, -66, -32, -52, -26, -64, -34, 0, 32, 94], [22, 46, -54, -58, 58, 58, 92, 62, 98, 12, 78, -98, 58, 36, 68, 54, 54, -46, 2, -20, 2, -30, -56, -98, 92, 20, -14, -38, -46, -50, 98, -90, 26, -8], [56, 18, -38, 98, -62, -38, 50, 18, -84, 74, -36, 26, 80, 74, 12, 68, 94, -34, -42, -40, -76, 40, 80, 14, -44, 96, -70, 38, -18, 68, -46, -2, -6, 48], [-84, 30, 22, -44, 36, -44, 40, 86, 76, -4, 30, -4, 12, 78, 28, 58, -30, -8, 36, 14, 84, -2, -22, 38, 56, 14, 56, 12, 48, 22, -8, 6, -48, -10], [-42, -36, 52, -14, 46, -16, -90, -68, 74, 10, 14, 38, -24, 62, -80, 76, -70, -36, -28, 28, 70, 2, -32, 18, 88, -80, -34, 32, -94, -48, -44, 80, -22, 68], [26, 58, -52, 60, -62, -34, 8, -42, 12, -40, -36, 76, -76, 38, 60, -20, -86, -98, -76, -50, 72, 46, -14, -30, -66, -54, 80, 90, 40, 32, -32, 86, -38, 90], [-60, -20, -86, 74, 86, -24, 84, -72, -88, -38, 38, -38, 12, 90, 90, -66, -28, -6, -22, 94, -42, -46, -8, 94, -92, 76, -4, -4, -68, 98, -34, -86, 60, -28], [20, 76, 74, -16, 44, -42, 66, 58, -98, -82, 22, -50, -10, 72, -36, 44, -60, 10, 16, 40, 14, -66, 88, 42, -24, 48, -52, 68, 66, 78, -26, 30, 40, -46], [-54, -18, 28, 16, 46, -94, -66, -34, 16, -12, 46, -40, 2, 10, -8, 90, -26, 70, 16, -72, 52, -74, 16, -8, 40, 12, -12, -82, -56, 86, -42, 92, 94, -30], [82, 26, -72, 30, -40, 18, -94, -62, 30, 6, -82, 50, 6, 62, 66, 72, 24, 16, -46, 58, 62, 36, 94, -56, -94, 14, -44, -4, 28, -22, -38, -80, 24, 2], [32, -14, 2, 0, 28, -36, 92, 14, 52, 26, -36, 56, -44, 60, 48, 64, -24, 46, -22, 46, 80, -10, -26, 76, -40, 40, -38, -30, -46, 44, -94, -20, -64, 12], [-56, -62, 86, 72, 18, 34, 
-54, 24, 70, 66, 62, 62, -26, 32, 8, 26, 74, -80, -24, 56, -24, 72, 8, -18, -80, 42, 20, -18, -30, 80, -54, 74, -68, -28], [88, 10, -12, 54, 52, 64, -52, -18, 70, 6, -20, -62, -90, -54, 96, -40, -96, 62, -48, -46, 26, 8, -16, -44, 0, 32, -36, -42, 50, 12, 98, -12, 6, -46], [-40, -64, -82, 28, 84, 78, 18, -78, -8, -14, 94, -40, -60, 22, 48, -72, -32, -30, -28, 80, -48, 44, -78, -56, -54, 42, 36, -22, -32, -24, 6, -90, 16, -50], [10, 36, -4, -64, 92, 64, 70, 96, -10, -88, 64, 32, 4, -20, -16, -66, -98, 30, 78, -38, -90, 34, 58, -12, -46, 10, -62, 72, 74, 22, 38, -14, -38, -96], [72, -44, -16, 84, 86, 4, -92, -58, -40, 74, 4, 32, -64, 18, -90, -22, 6, -72, -66, -8, -32, 54, 48, -86, 46, 64, 88, -72, -72, -20, 12, 24, 18, -36], [88, 0, 76, 44, 10, -74, -46, -38, -42, 0, -36, -50, 12, 32, 76, -46, 28, -36, 80, 84, -62, 38, -68, -60, -50, 92, -88, -44, 58, -40, 30, -70, -62, 52], [-52, 96, 68, -74, -8, -42, 40, -38, 22, 58, -8, 36, -68, -68, -70, 84, 74, -22, 38, -34, 38, 0, 98, 80, -12, -44, -50, 24, -44, -96, -30, -94, 2, 34], [92, 50, 46, -12, 88, 44, 84, 86, -50, -62, -60, -84, -42, -50, -60, -76, -98, 54, -26, 10, 32, 26, -70, -38, -58, 8, -64, -44, 34, -10, -28, -26, 26, -52], [-26, 88, 16, 62, -64, -52, 34, 94, -38, -54, 0, 24, -22, 94, -96, 8, 80, -20, 78, -78, -42, 46, 10, 50, -30, 88, -66, -54, 74, 50, 4, 86, 66, 96], [-42, 34, -84, 86, 4, -56, -50, 86, -60, 78, 14, -34, -60, -12, 78, -90, -88, 14, -8, -62, -66, 58, -24, -14, -76, 28, -38, 54, 6, -52, 98, -40, 18, 22], [18, -84, -6, 4, -82, -40, 8, 86, 22, 36, -24, 66, -2, 74, -20, -4, 38, -52, -98, -42, 44, 0, 68, 72, -16, 86, 14, -80, -20, -74, -12, -52, 74, 10], [-8, 48, -10, -88, 90, -58, 22, -28, 6, -14, 64, 6, -10, 34, 78, 8, 84, -96, -70, 82, 58, 58, 94, -32, -54, 52, -36, -64, 62, 50, 98, -56, -90, 78], [-78, 24, 86, -92, -18, 68, 56, -26, 6, 34, 22, -76, 78, -8, 38, -66, -74, -56, 10, -30, -82, -50, -20, -68, -14, 60, -72, 64, 48, 26, 76, 28, 4, 82], [10, -26, -20, 80, 64, 42, -46, 86, 82, 34, -34, 
-66, -48, -80, -90, -92, 56, -24, -96, 62, -96, 64, -10, 58, 38, -18, 12, 22, 36, 14, 4, -30, 12, 6], [42, 24, -16, 24, 82, -48, 34, 58, -8, -48, -32, 58, 68, -72, 60, 62, 46, -4, -88, 62, 68, -60, 80, 32, 72, 52, -16, -72, -74, -10, -8, -46, 66, -48], [68, -86, 2, 0, -10, 28, -82, 8, -80, -52, 4, -34, 82, 24, 64, -8, -40, -52, -50, 14, -20, 30, 72, 18, 96, -6, 24, -94, 58, -58, -6, -56, -84, -94], [-4, 36, -48, -46, -26, -8, -76, -60, 12, 56, -10, -78, 0, 64, 72, -8, -76, 8, 28, -12, -20, 78, -40, 76, -38, -86, -86, -88, -38, 66, -74, 14, 8, 46], [98, 60, 14, 64, 8, 92, 38, -10, 32, -52, 20, 52, 86, -4, -82, 26, 74, -2, 72, 84, -46, 74, -18, -76, 14, -54, -60, 60, 18, -72, 26, -36, 70, 42], [46, -42, -80, 28, -82, -80, -20, 94, -32, 58, -54, 76, 56, 98, -42, -64, 40, -72, 62, 42, 2, -26, -96, -68, 50, -50, -70, 52, 48, -20, 78, 40, -68, 80], [-84, 18, -24, 60, -14, 72, 2, -54, -46, 64, -46, -30, 6, 6, -4, 10, 94, -4, 38, 62, -68, 40, -4, -42, 86, -48, -10, 74, -36, -4, 22, 88, -24, 26], [86, 60, 98, 82, 14, -72, 96, 50, 68, 38, 42, 36, -30, 48, 94, -72, 94, 82, 18, -14, 76, 96, 4, -52, -96, 26, -88, 8, -46, -2, -56, 14, 44, 60]],27,19,),
([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]],15,22,),
([[79, 55, 54, 46, 90, 46, 22, 40, 22, 68, 78, 62, 54, 48, 21, 27, 32, 18, 36, 36, 9, 74, 80, 79, 75, 73, 61, 72, 22, 75, 40, 51, 21, 51, 53, 41, 39, 32, 13, 6, 19, 53, 78, 49, 43, 65], [19, 56, 2, 39, 48, 40, 43, 52, 74, 52, 96, 34, 6, 96, 19, 4, 29, 85, 87, 90, 85, 9, 61, 63, 84, 58, 48, 69, 95, 47, 15, 37, 51, 34, 63, 38, 53, 6, 85, 9, 63, 60, 43, 94, 55, 84], [21, 33, 91, 82, 31, 62, 31, 87, 25, 66, 81, 81, 66, 15, 18, 26, 6, 39, 37, 61, 64, 8, 2, 45, 71, 34, 94, 89, 16, 22, 80, 59, 22, 2, 38, 53, 82, 72, 11, 15, 77, 63, 71, 52, 3, 5], [71, 6, 63, 75, 88, 59, 93, 73, 43, 49, 93, 72, 8, 74, 72, 42, 58, 57, 72, 82, 96, 72, 33, 5, 9, 47, 13, 60, 5, 1, 73, 18, 94, 19, 6, 70, 96, 82, 35, 76, 11, 55, 25, 43, 43, 69], [12, 58, 24, 76, 11, 91, 89, 84, 14, 48, 81, 40, 96, 80, 35, 95, 72, 35, 14, 50, 6, 19, 23, 69, 32, 48, 13, 77, 70, 59, 37, 9, 15, 31, 9, 59, 45, 84, 41, 66, 10, 50, 89, 4, 84, 93], [52, 94, 97, 92, 5, 17, 90, 63, 26, 12, 98, 40, 52, 97, 10, 85, 14, 62, 25, 82, 15, 5, 67, 59, 15, 17, 74, 70, 46, 2, 22, 58, 11, 8, 44, 16, 34, 49, 60, 52, 49, 91, 12, 35, 96, 81], [91, 66, 62, 88, 2, 90, 37, 99, 33, 93, 79, 84, 41, 59, 48, 99, 70, 35, 98, 15, 11, 18, 81, 27, 52, 64, 90, 88, 23, 83, 35, 62, 76, 2, 30, 85, 3, 24, 45, 92, 12, 32, 53, 84, 10, 82], [65, 41, 34, 67, 22, 37, 4, 15, 25, 43, 34, 78, 46, 81, 33, 63, 65, 90, 36, 8, 66, 47, 53, 80, 21, 87, 83, 8, 96, 15, 75, 93, 31, 2, 52, 16, 67, 53, 26, 85, 23, 12, 75, 41, 8, 4], [62, 1, 25, 10, 75, 92, 97, 27, 58, 14, 50, 37, 81, 18, 18, 98, 80, 57, 12, 42, 43, 55, 33, 16, 88, 63, 95, 96, 22, 68, 17, 84, 87, 95, 95, 54, 65, 90, 14, 53, 65, 86, 83, 95, 74, 30], [87, 49, 20, 15, 27, 43, 71, 88, 33, 75, 84, 1, 89, 52, 35, 22, 90, 19, 87, 54, 4, 85, 59, 35, 98, 49, 59, 73, 37, 83, 85, 78, 94, 66, 47, 30, 67, 73, 82, 14, 4, 22, 83, 19, 43, 87], [6, 24, 11, 15, 33, 53, 45, 98, 21, 30, 79, 5, 21, 86, 50, 30, 55, 97, 29, 9, 3, 90, 50, 80, 60, 53, 18, 12, 52, 73, 54, 44, 15, 39, 54, 38, 33, 66, 51, 86, 67, 59, 88, 4, 51, 
57], [6, 74, 35, 43, 5, 89, 14, 5, 98, 27, 21, 61, 58, 47, 17, 19, 50, 37, 86, 56, 98, 85, 93, 43, 94, 65, 16, 21, 30, 64, 99, 29, 16, 88, 46, 36, 2, 7, 35, 25, 30, 34, 43, 71, 60, 12], [72, 27, 4, 79, 45, 74, 17, 9, 92, 54, 41, 35, 95, 39, 90, 26, 36, 83, 83, 63, 19, 78, 1, 22, 27, 90, 20, 14, 25, 55, 8, 1, 96, 64, 38, 52, 48, 86, 28, 61, 11, 68, 38, 17, 63, 36], [74, 50, 68, 16, 85, 96, 17, 23, 58, 30, 55, 50, 75, 40, 68, 39, 32, 15, 67, 42, 65, 79, 8, 85, 97, 48, 51, 13, 20, 59, 43, 61, 78, 53, 30, 31, 97, 32, 51, 22, 56, 91, 61, 16, 90, 57], [14, 48, 38, 6, 3, 26, 62, 88, 44, 19, 99, 74, 26, 52, 30, 58, 82, 67, 88, 89, 98, 36, 38, 75, 42, 60, 64, 73, 32, 5, 1, 16, 58, 47, 28, 41, 16, 8, 50, 63, 11, 49, 70, 31, 14, 97], [53, 31, 76, 82, 34, 57, 44, 18, 87, 41, 97, 55, 52, 8, 92, 52, 2, 22, 39, 90, 6, 60, 54, 95, 26, 20, 19, 36, 37, 22, 97, 51, 1, 42, 21, 93, 92, 58, 81, 31, 12, 88, 1, 13, 34, 6], [17, 85, 2, 45, 17, 17, 85, 55, 42, 98, 27, 5, 43, 27, 82, 49, 36, 63, 15, 47, 17, 65, 31, 54, 91, 14, 91, 52, 8, 4, 91, 46, 37, 77, 79, 72, 63, 83, 4, 83, 67, 80, 42, 79, 73, 92], [96, 87, 4, 89, 46, 84, 69, 67, 51, 4, 8, 24, 30, 14, 7, 61, 27, 65, 71, 57, 42, 28, 20, 75, 68, 43, 69, 50, 74, 97, 19, 82, 93, 56, 84, 17, 87, 50, 34, 7, 92, 92, 36, 48, 2, 69], [7, 13, 16, 32, 93, 10, 49, 23, 5, 14, 94, 35, 14, 75, 92, 73, 64, 70, 51, 7, 70, 19, 30, 94, 44, 36, 97, 80, 36, 90, 11, 8, 15, 21, 63, 40, 9, 97, 75, 31, 45, 89, 25, 53, 93, 29], [73, 4, 5, 72, 70, 88, 46, 59, 57, 25, 23, 12, 44, 1, 48, 3, 64, 98, 19, 44, 24, 79, 13, 6, 58, 88, 65, 25, 68, 78, 79, 38, 71, 89, 91, 67, 60, 31, 6, 63, 12, 76, 63, 6, 53, 19], [1, 38, 33, 56, 3, 71, 50, 93, 36, 16, 5, 21, 89, 48, 97, 92, 14, 54, 75, 58, 20, 5, 22, 67, 42, 50, 82, 75, 3, 9, 87, 79, 14, 88, 11, 52, 59, 21, 71, 43, 4, 39, 41, 55, 90, 41], [98, 49, 84, 29, 78, 96, 32, 4, 51, 6, 99, 19, 78, 72, 44, 66, 2, 6, 6, 68, 44, 36, 87, 61, 10, 55, 73, 47, 94, 63, 89, 28, 35, 73, 59, 78, 65, 23, 87, 94, 20, 52, 53, 64, 58, 12], [89, 
12, 59, 4, 32, 34, 83, 37, 6, 76, 84, 76, 98, 45, 20, 17, 73, 94, 99, 54, 65, 37, 72, 89, 64, 4, 19, 68, 99, 86, 9, 36, 18, 73, 15, 84, 94, 37, 77, 53, 40, 11, 66, 26, 23, 78], [29, 44, 83, 89, 99, 69, 45, 98, 42, 58, 62, 87, 75, 53, 98, 6, 23, 10, 37, 78, 17, 21, 44, 86, 77, 13, 84, 4, 63, 35, 97, 89, 38, 29, 45, 60, 58, 64, 19, 11, 5, 40, 88, 22, 38, 36], [79, 30, 1, 29, 64, 84, 39, 51, 73, 75, 60, 28, 59, 31, 84, 39, 32, 93, 90, 93, 5, 30, 21, 37, 92, 67, 72, 4, 65, 67, 61, 26, 40, 64, 25, 93, 7, 20, 46, 26, 98, 56, 97, 88, 52, 13], [48, 63, 48, 51, 50, 31, 20, 9, 1, 74, 21, 99, 21, 37, 71, 8, 13, 44, 90, 72, 35, 22, 96, 13, 34, 25, 1, 46, 25, 59, 14, 88, 9, 27, 78, 38, 8, 58, 35, 24, 7, 84, 84, 78, 72, 17], [51, 80, 98, 62, 23, 4, 44, 60, 96, 48, 79, 24, 19, 70, 24, 78, 70, 59, 63, 39, 33, 69, 56, 84, 4, 56, 65, 23, 46, 54, 63, 29, 77, 66, 27, 84, 79, 94, 20, 32, 20, 26, 87, 24, 71, 44], [99, 31, 66, 46, 11, 32, 56, 27, 18, 40, 5, 23, 73, 2, 72, 32, 91, 4, 30, 96, 72, 47, 15, 38, 40, 61, 88, 84, 8, 48, 52, 97, 1, 57, 30, 9, 16, 32, 57, 44, 4, 70, 83, 93, 93, 62], [11, 29, 4, 44, 79, 15, 27, 36, 49, 63, 49, 28, 17, 96, 93, 53, 99, 41, 9, 44, 68, 32, 79, 91, 57, 18, 6, 67, 98, 66, 59, 93, 30, 27, 76, 68, 92, 33, 90, 59, 2, 66, 19, 33, 24, 56], [78, 93, 43, 35, 99, 64, 52, 11, 91, 97, 80, 79, 30, 29, 38, 34, 21, 25, 82, 61, 48, 72, 47, 74, 37, 29, 64, 90, 75, 65, 83, 70, 34, 66, 36, 78, 97, 27, 22, 60, 82, 34, 58, 48, 55, 92], [7, 22, 33, 35, 82, 1, 61, 55, 59, 4, 16, 70, 30, 26, 26, 5, 52, 24, 17, 91, 22, 14, 19, 54, 11, 76, 76, 56, 42, 36, 65, 57, 94, 67, 75, 87, 62, 53, 92, 87, 5, 7, 42, 13, 20, 28], [60, 20, 60, 69, 25, 37, 86, 56, 69, 20, 51, 69, 17, 23, 24, 24, 17, 73, 15, 41, 86, 97, 55, 43, 9, 86, 36, 59, 80, 87, 7, 99, 77, 72, 1, 94, 17, 95, 78, 16, 39, 47, 93, 22, 88, 40], [83, 91, 70, 47, 42, 13, 57, 17, 9, 90, 29, 24, 59, 87, 80, 53, 64, 99, 67, 4, 8, 67, 23, 13, 22, 70, 46, 60, 55, 76, 15, 10, 68, 77, 36, 90, 99, 42, 85, 60, 31, 17, 36, 52, 6, 58], 
[5, 61, 35, 13, 95, 22, 40, 76, 66, 47, 15, 79, 61, 58, 72, 17, 65, 97, 35, 32, 84, 53, 81, 41, 31, 69, 3, 10, 79, 41, 17, 50, 31, 99, 24, 33, 76, 79, 63, 56, 32, 80, 76, 91, 16, 19], [87, 60, 85, 63, 93, 5, 2, 59, 28, 57, 1, 62, 58, 89, 75, 83, 82, 16, 88, 74, 45, 84, 10, 10, 21, 70, 77, 25, 17, 41, 65, 29, 71, 38, 65, 85, 67, 18, 17, 83, 6, 88, 57, 96, 52, 62], [74, 80, 61, 41, 76, 87, 32, 63, 15, 35, 24, 72, 9, 39, 9, 26, 8, 75, 90, 69, 71, 53, 96, 41, 16, 27, 19, 22, 13, 27, 86, 23, 17, 80, 98, 53, 63, 99, 53, 69, 10, 90, 51, 12, 21, 79], [8, 85, 72, 54, 76, 13, 37, 48, 90, 82, 85, 90, 68, 77, 55, 72, 26, 26, 10, 28, 87, 88, 68, 42, 37, 66, 14, 22, 78, 46, 33, 39, 42, 21, 26, 88, 38, 1, 39, 24, 34, 29, 31, 28, 42, 98], [16, 50, 73, 65, 77, 52, 54, 82, 41, 41, 71, 28, 73, 7, 4, 83, 94, 14, 8, 71, 12, 60, 24, 32, 43, 64, 2, 8, 99, 2, 38, 38, 98, 74, 44, 80, 50, 4, 67, 77, 19, 13, 84, 95, 3, 17], [24, 7, 31, 44, 60, 97, 65, 6, 24, 8, 20, 90, 1, 30, 94, 27, 59, 6, 66, 1, 59, 59, 66, 64, 97, 90, 88, 88, 90, 97, 19, 48, 44, 39, 40, 70, 80, 42, 79, 36, 31, 40, 57, 14, 17, 48], [33, 13, 74, 49, 40, 66, 15, 16, 96, 74, 71, 97, 30, 89, 22, 13, 24, 68, 11, 94, 93, 78, 24, 98, 92, 23, 51, 1, 23, 22, 43, 17, 59, 74, 56, 86, 18, 89, 11, 24, 53, 92, 13, 35, 21, 45], [9, 43, 85, 60, 97, 86, 77, 75, 37, 52, 19, 15, 17, 74, 31, 51, 45, 80, 99, 25, 32, 41, 81, 42, 42, 55, 26, 21, 16, 8, 71, 13, 81, 40, 88, 47, 3, 35, 78, 72, 52, 7, 63, 35, 40, 91], [17, 45, 22, 66, 69, 65, 10, 20, 3, 70, 6, 60, 94, 84, 38, 62, 78, 2, 25, 69, 84, 19, 1, 41, 29, 90, 78, 84, 42, 49, 53, 58, 44, 80, 96, 98, 62, 65, 56, 31, 38, 93, 60, 2, 93, 75], [81, 35, 56, 41, 61, 85, 14, 63, 47, 98, 10, 51, 22, 3, 79, 80, 16, 48, 67, 28, 66, 50, 94, 29, 22, 48, 85, 66, 25, 62, 19, 58, 89, 1, 79, 8, 35, 97, 64, 42, 68, 49, 18, 94, 38, 84], [95, 35, 38, 92, 67, 6, 64, 70, 83, 88, 13, 59, 72, 68, 99, 17, 44, 41, 31, 90, 74, 90, 14, 14, 76, 47, 38, 84, 92, 50, 13, 61, 26, 45, 49, 13, 14, 80, 42, 97, 27, 11, 57, 89, 
94, 19], [84, 20, 33, 54, 90, 35, 35, 47, 42, 54, 22, 25, 6, 74, 16, 35, 38, 78, 6, 46, 86, 70, 76, 70, 26, 24, 73, 80, 8, 99, 60, 37, 2, 72, 95, 64, 73, 72, 15, 13, 31, 15, 26, 92, 78, 83], [17, 76, 86, 79, 87, 96, 79, 49, 34, 83, 71, 97, 80, 1, 88, 31, 63, 65, 49, 36, 51, 93, 15, 38, 73, 55, 90, 71, 81, 39, 7, 12, 17, 79, 82, 34, 98, 58, 92, 83, 39, 76, 80, 13, 98, 86]],41,39,),
([[-18, -4, 34, 62, 78, 90, 90, 96], [-80, -80, -72, -50, -20, -2, 52, 62], [-98, -40, -38, -12, -4, 4, 70, 78], [-86, -72, -44, -38, 20, 76, 78, 82], [-82, -38, -8, 28, 44, 44, 50, 84], [-6, 0, 10, 30, 34, 36, 42, 62], [-54, -18, -12, -8, -6, 4, 56, 76], [-68, -10, -8, -4, 0, 26, 80, 92]],7,5,),
([[0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1], [1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0], [1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1], [0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1], [1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1], [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0], [0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0], [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1], [0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0], [0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1], [1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0], [1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0], [0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0], [0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1], [1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0], [1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1], [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1], [0, 0, 1, 0, 0, 
0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1], [0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1], [1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0], [0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0], [1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0], [0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1], [0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1], [0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0], [1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1], [0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0], [0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0], [1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1], [0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0], [0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1], [0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0], [1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1], [0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1], [0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]],27,27,),
([[2, 2, 3, 4, 4, 5, 5, 10, 10, 12, 15, 18, 22, 23, 23, 31, 36, 37, 43, 44, 46, 49, 53, 54, 55, 56, 62, 62, 64, 65, 65, 67, 68, 75, 81, 82, 84, 84, 87, 92, 93, 94, 95, 95, 98], [6, 8, 12, 14, 14, 16, 16, 16, 17, 18, 22, 23, 24, 29, 31, 36, 36, 37, 38, 40, 48, 48, 52, 54, 56, 59, 59, 59, 62, 65, 66, 67, 70, 71, 71, 72, 79, 82, 85, 88, 92, 95, 96, 97, 97], [3, 9, 11, 12, 13, 13, 13, 15, 16, 17, 28, 28, 30, 31, 31, 35, 36, 39, 41, 47, 49, 49, 50, 51, 54, 54, 59, 62, 63, 64, 65, 66, 67, 68, 70, 71, 74, 75, 78, 79, 79, 84, 87, 93, 99], [3, 5, 7, 10, 11, 12, 13, 15, 15, 18, 19, 19, 26, 28, 28, 29, 33, 35, 35, 36, 39, 40, 42, 43, 53, 54, 54, 64, 69, 70, 74, 75, 76, 78, 80, 87, 89, 90, 93, 96, 96, 97, 98, 99, 99], [2, 2, 9, 9, 11, 14, 14, 16, 19, 19, 22, 23, 26, 27, 35, 37, 40, 43, 43, 44, 45, 45, 45, 46, 46, 51, 55, 56, 59, 61, 63, 65, 66, 68, 68, 75, 76, 77, 78, 84, 87, 89, 93, 94, 99], [2, 8, 9, 10, 12, 14, 19, 27, 28, 30, 30, 36, 38, 41, 41, 44, 46, 46, 47, 52, 54, 57, 57, 60, 62, 66, 66, 68, 68, 69, 69, 70, 70, 71, 73, 79, 80, 84, 88, 88, 89, 92, 93, 93, 96], [4, 9, 11, 12, 13, 21, 23, 25, 31, 31, 32, 34, 35, 37, 39, 41, 41, 42, 52, 53, 53, 55, 56, 57, 58, 59, 69, 71, 72, 75, 75, 77, 83, 84, 85, 87, 88, 89, 90, 90, 90, 95, 95, 98, 99], [14, 15, 17, 19, 19, 20, 21, 24, 25, 26, 29, 31, 35, 35, 39, 42, 43, 46, 46, 48, 54, 55, 55, 56, 62, 63, 63, 66, 67, 68, 69, 70, 72, 75, 76, 80, 82, 82, 84, 85, 87, 87, 97, 98, 98], [1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 14, 18, 19, 21, 22, 26, 28, 32, 38, 40, 40, 42, 42, 43, 44, 45, 48, 48, 50, 53, 62, 75, 77, 77, 82, 84, 85, 86, 87, 92, 93, 96, 97], [1, 7, 9, 10, 11, 17, 19, 22, 23, 28, 33, 36, 37, 37, 38, 39, 43, 45, 46, 51, 52, 53, 54, 58, 58, 60, 60, 61, 61, 62, 64, 66, 69, 71, 73, 75, 77, 79, 83, 84, 90, 90, 94, 96, 98], [7, 10, 13, 14, 16, 25, 28, 29, 29, 30, 31, 31, 31, 37, 37, 37, 38, 38, 41, 41, 48, 49, 50, 52, 54, 54, 57, 59, 59, 61, 64, 65, 66, 67, 67, 70, 71, 71, 72, 72, 74, 82, 86, 89, 97], [2, 15, 18, 19, 19, 23, 25, 
26, 26, 26, 29, 38, 40, 44, 47, 49, 51, 52, 53, 54, 56, 61, 61, 65, 67, 69, 69, 69, 70, 72, 73, 78, 78, 78, 83, 85, 88, 90, 91, 92, 92, 95, 95, 98, 99], [1, 3, 6, 13, 14, 14, 14, 15, 18, 19, 21, 25, 27, 28, 31, 32, 36, 38, 40, 43, 45, 50, 57, 58, 61, 62, 62, 69, 71, 71, 72, 73, 76, 77, 79, 80, 82, 83, 86, 92, 93, 93, 96, 97, 99], [1, 2, 4, 4, 8, 14, 15, 18, 21, 21, 22, 25, 26, 27, 27, 28, 32, 33, 34, 37, 38, 39, 43, 46, 47, 48, 53, 55, 57, 63, 66, 67, 70, 73, 75, 76, 76, 77, 79, 82, 87, 90, 91, 95, 99], [2, 5, 11, 13, 15, 16, 18, 22, 23, 24, 27, 27, 32, 32, 33, 34, 37, 41, 43, 45, 46, 53, 58, 60, 62, 63, 64, 65, 66, 68, 68, 71, 71, 72, 73, 74, 77, 81, 84, 89, 91, 94, 96, 96, 97], [1, 2, 3, 14, 16, 18, 22, 26, 27, 27, 30, 33, 43, 43, 44, 44, 44, 45, 47, 50, 51, 51, 55, 56, 56, 57, 58, 59, 63, 67, 68, 69, 74, 75, 79, 79, 80, 84, 87, 87, 94, 94, 95, 97, 97], [3, 8, 9, 13, 14, 15, 18, 19, 20, 21, 21, 21, 24, 25, 26, 28, 31, 35, 35, 45, 46, 46, 46, 46, 49, 52, 58, 64, 65, 68, 69, 70, 71, 74, 75, 76, 78, 80, 81, 83, 83, 85, 88, 92, 95], [3, 3, 10, 12, 12, 12, 13, 14, 14, 16, 18, 19, 22, 23, 24, 28, 30, 32, 34, 36, 37, 37, 38, 38, 45, 46, 51, 55, 56, 61, 61, 65, 67, 69, 70, 71, 74, 74, 78, 84, 87, 89, 89, 91, 98], [2, 4, 4, 5, 7, 13, 14, 17, 18, 19, 20, 21, 23, 23, 23, 27, 30, 32, 39, 40, 41, 44, 48, 49, 51, 55, 57, 59, 64, 70, 71, 71, 72, 73, 75, 77, 78, 79, 79, 80, 81, 92, 94, 96, 98], [1, 4, 4, 4, 8, 10, 10, 10, 12, 14, 16, 17, 24, 25, 25, 27, 28, 29, 33, 34, 35, 36, 39, 40, 40, 43, 44, 44, 46, 53, 54, 59, 60, 62, 63, 66, 70, 74, 77, 79, 83, 84, 90, 96, 97], [2, 10, 12, 12, 13, 15, 16, 18, 21, 29, 34, 36, 38, 41, 47, 47, 47, 48, 49, 51, 55, 56, 58, 60, 61, 63, 64, 68, 68, 71, 74, 74, 76, 77, 84, 84, 87, 87, 88, 89, 93, 93, 96, 97, 99], [1, 2, 4, 13, 14, 15, 23, 27, 30, 35, 36, 36, 38, 40, 42, 44, 46, 48, 51, 53, 54, 57, 57, 57, 61, 66, 67, 69, 69, 69, 73, 76, 76, 77, 77, 80, 82, 82, 85, 87, 88, 90, 93, 94, 96], [1, 5, 5, 8, 14, 20, 21, 29, 29, 34, 34, 34, 34, 35, 37, 
37, 41, 45, 48, 48, 51, 52, 53, 54, 55, 60, 65, 65, 65, 66, 67, 68, 70, 74, 74, 76, 77, 77, 78, 83, 83, 86, 93, 94, 96], [1, 2, 2, 5, 6, 6, 7, 11, 11, 13, 13, 17, 18, 20, 22, 23, 23, 25, 32, 34, 38, 40, 45, 49, 49, 50, 57, 60, 65, 66, 67, 71, 71, 71, 76, 76, 80, 85, 89, 90, 90, 91, 92, 96, 99], [1, 2, 2, 5, 6, 8, 9, 12, 14, 17, 23, 25, 29, 32, 33, 37, 39, 39, 40, 45, 54, 54, 56, 57, 58, 60, 63, 63, 63, 66, 67, 68, 69, 70, 75, 77, 79, 80, 83, 91, 92, 93, 95, 95, 97], [5, 9, 11, 11, 15, 24, 27, 27, 27, 29, 31, 32, 35, 36, 38, 43, 44, 46, 49, 51, 53, 54, 56, 56, 58, 58, 59, 59, 59, 60, 62, 66, 67, 69, 69, 73, 77, 82, 85, 87, 89, 89, 96, 97, 99], [2, 3, 5, 5, 11, 12, 17, 20, 21, 22, 22, 23, 29, 31, 35, 35, 36, 37, 42, 43, 47, 52, 55, 57, 57, 64, 65, 67, 69, 69, 70, 70, 72, 74, 76, 79, 80, 80, 81, 84, 91, 92, 92, 93, 97], [1, 3, 3, 8, 9, 9, 12, 13, 14, 16, 16, 21, 23, 24, 27, 28, 29, 31, 33, 33, 35, 37, 45, 52, 53, 54, 55, 55, 59, 61, 61, 62, 63, 63, 69, 71, 78, 78, 80, 84, 84, 85, 86, 93, 96], [1, 4, 8, 8, 8, 9, 14, 21, 21, 22, 24, 26, 26, 27, 27, 34, 36, 37, 37, 37, 41, 42, 53, 53, 56, 60, 61, 61, 65, 65, 66, 67, 69, 70, 71, 72, 72, 81, 87, 89, 89, 91, 94, 95, 97], [3, 6, 9, 17, 18, 20, 21, 23, 24, 32, 33, 34, 34, 36, 38, 39, 39, 41, 42, 43, 45, 48, 49, 51, 52, 53, 56, 61, 63, 63, 64, 65, 66, 69, 72, 79, 79, 83, 84, 86, 88, 89, 94, 96, 99], [1, 6, 9, 13, 16, 18, 21, 21, 21, 23, 25, 30, 32, 36, 38, 44, 45, 46, 46, 48, 50, 51, 55, 60, 60, 62, 62, 65, 66, 68, 69, 70, 72, 78, 78, 84, 85, 85, 86, 87, 94, 94, 97, 98, 99], [1, 2, 5, 6, 11, 16, 20, 21, 25, 25, 28, 30, 33, 33, 36, 40, 42, 44, 44, 47, 49, 50, 52, 52, 56, 56, 57, 57, 63, 68, 71, 72, 78, 83, 87, 88, 90, 91, 94, 94, 98, 98, 98, 99, 99], [2, 2, 2, 4, 5, 7, 8, 8, 9, 10, 13, 18, 20, 22, 24, 25, 28, 28, 29, 30, 34, 35, 37, 40, 41, 45, 45, 46, 46, 46, 50, 53, 60, 71, 74, 75, 75, 81, 81, 87, 89, 93, 95, 97, 98], [3, 6, 8, 10, 14, 17, 18, 20, 21, 22, 24, 30, 31, 31, 32, 37, 39, 41, 47, 48, 51, 53, 55, 60, 62, 65, 66, 67, 
67, 68, 69, 70, 71, 71, 74, 74, 75, 76, 83, 85, 88, 90, 93, 96, 99], [1, 2, 5, 13, 17, 22, 30, 33, 34, 36, 37, 39, 42, 43, 43, 43, 45, 45, 46, 48, 49, 51, 51, 53, 56, 56, 57, 64, 64, 67, 68, 68, 69, 70, 72, 74, 76, 77, 77, 88, 90, 92, 92, 95, 97], [4, 7, 7, 10, 11, 15, 18, 20, 21, 25, 26, 26, 27, 28, 28, 30, 35, 36, 40, 40, 49, 54, 59, 61, 65, 65, 70, 71, 72, 74, 75, 76, 78, 83, 83, 84, 87, 87, 87, 92, 94, 94, 96, 97, 98], [5, 7, 11, 12, 13, 16, 20, 21, 26, 30, 31, 32, 33, 37, 39, 40, 40, 42, 46, 51, 51, 51, 52, 52, 53, 55, 55, 57, 61, 64, 70, 75, 78, 78, 84, 85, 87, 90, 92, 93, 96, 96, 97, 98, 99], [1, 1, 2, 4, 5, 11, 12, 16, 17, 17, 18, 20, 21, 25, 28, 30, 32, 32, 33, 39, 39, 41, 42, 45, 45, 52, 53, 54, 56, 56, 56, 61, 61, 65, 67, 69, 69, 71, 72, 74, 78, 78, 81, 85, 88], [1, 2, 5, 5, 12, 13, 20, 20, 26, 27, 30, 33, 34, 34, 39, 42, 42, 45, 46, 46, 49, 53, 55, 56, 56, 57, 60, 60, 65, 65, 69, 72, 72, 73, 78, 79, 83, 85, 85, 86, 87, 89, 91, 95, 98], [3, 3, 8, 10, 13, 18, 22, 24, 25, 26, 26, 29, 30, 36, 38, 38, 40, 41, 42, 42, 45, 49, 50, 52, 52, 54, 55, 59, 59, 61, 64, 66, 69, 74, 77, 83, 84, 85, 86, 86, 87, 90, 90, 92, 92], [2, 3, 8, 8, 9, 10, 11, 12, 12, 19, 25, 26, 29, 30, 31, 34, 34, 38, 45, 47, 48, 49, 49, 52, 56, 59, 60, 66, 67, 69, 70, 72, 72, 73, 74, 74, 77, 79, 79, 83, 85, 87, 89, 95, 95], [4, 4, 5, 10, 15, 17, 20, 21, 21, 22, 23, 24, 31, 36, 37, 39, 40, 42, 44, 52, 53, 53, 56, 56, 61, 61, 63, 74, 77, 78, 79, 79, 80, 82, 83, 84, 85, 86, 86, 88, 94, 97, 97, 98, 99], [1, 1, 1, 3, 4, 8, 9, 11, 13, 17, 18, 21, 23, 24, 26, 30, 32, 34, 34, 34, 37, 39, 42, 46, 46, 46, 47, 50, 54, 54, 61, 71, 74, 76, 79, 81, 81, 83, 84, 86, 87, 88, 90, 97, 97], [2, 10, 10, 14, 15, 15, 18, 22, 23, 25, 29, 34, 35, 36, 37, 42, 42, 43, 45, 46, 49, 49, 53, 57, 58, 61, 62, 64, 67, 67, 69, 73, 77, 77, 80, 81, 83, 87, 90, 91, 92, 92, 93, 95, 95], [2, 9, 10, 10, 17, 19, 23, 25, 26, 26, 32, 36, 36, 39, 40, 41, 43, 44, 44, 45, 45, 50, 50, 52, 55, 57, 58, 60, 60, 61, 67, 68, 72, 75, 75, 78, 83, 
85, 92, 92, 95, 97, 98, 99, 99]],26,27,),
([[-54, 56, -84, 24, -78, 86, -60, -40, 20, 96, 8, 42, 32, 74, -6, -44, 26, -66, -56], [-48, 96, 76, -98, 44, -80, 70, 78, -12, 68, 56, -88, 44, -18, 56, -80, -76, -90, -84], [40, 18, 88, 96, -96, -16, -10, -50, -14, -8, -36, -22, -48, -14, -34, -64, -76, 0, -66], [-80, 24, -62, -12, 78, -80, -84, 24, 74, 20, -12, -66, 46, -38, 6, -64, -38, 84, 66], [48, 16, 16, -96, 66, -10, -84, 78, 60, -30, -92, -76, 8, -6, -36, 94, -52, 24, -2], [-78, 52, 92, -54, -36, 32, 88, -12, 54, 44, -24, 60, 32, -86, 4, 22, 24, 94, 76], [46, -80, -44, -16, -52, 22, -14, 88, -82, 22, 16, 28, -32, 64, 46, 66, -26, 42, -26], [-44, 62, 78, -76, -92, -82, 72, -96, 18, -32, -94, -56, 26, -30, -42, -30, 96, -26, 84], [6, 14, -36, -94, 90, 44, -4, 78, -56, -96, -88, 8, -26, 48, -80, 12, 70, 26, -8], [-18, -38, 36, -76, -8, -26, -4, 88, 24, -42, -46, 86, 62, 8, -2, 24, -82, -48, -22], [-22, -82, 78, 86, -20, 74, -30, 2, 98, 8, -46, 62, 52, -2, 14, -32, 2, 36, -56], [58, 18, 34, -92, 98, 80, -68, 86, -22, -12, -24, 34, -36, -66, 0, -96, -2, -24, -60], [-34, -26, -32, -76, -96, -30, -82, -20, 0, -70, 30, 8, -84, 46, 10, 24, -16, 80, 16], [-54, 16, 4, 76, -80, 6, -26, -76, 94, 92, 20, 12, 94, 94, 2, -80, -38, 8, -76], [-70, -46, 90, -48, 76, 72, -34, 30, -26, -30, 92, -92, 60, 12, 80, -86, 76, -84, 10], [42, -80, 80, 92, -2, 56, 88, -8, 96, 16, -26, -54, 4, 52, -82, -10, -46, 80, 78], [14, 46, 22, -52, 84, 58, 42, 14, 14, 96, 34, -38, 56, 44, 90, -76, 56, 48, 22], [82, 38, -78, -48, 16, -98, -12, 78, -18, -10, 44, 88, -52, 96, -28, -42, -60, 46, 40], [2, 44, -6, 80, 98, -70, -68, 64, -10, 46, 72, -84, 32, 50, 68, 64, -90, 96, 28]],14,13,),
([[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]],25,24,),
([[31, 48, 68, 49, 98, 47, 55, 79, 5, 20, 11, 9, 59, 53, 29, 43, 1, 50, 69, 37, 60, 89, 28, 44, 50, 90, 43, 31, 7, 79, 85, 6, 34, 25, 53, 17, 61, 5, 83, 52, 85], [70, 28, 50, 31, 1, 77, 22, 53, 9, 6, 6, 41, 39, 78, 40, 14, 38, 54, 41, 8, 33, 56, 49, 71, 36, 29, 20, 86, 79, 98, 79, 27, 92, 24, 95, 95, 25, 43, 60, 38, 8], [79, 27, 53, 17, 66, 13, 90, 3, 7, 6, 80, 2, 81, 47, 16, 65, 31, 71, 30, 63, 10, 62, 48, 45, 16, 13, 57, 39, 95, 36, 97, 59, 84, 28, 38, 27, 87, 62, 66, 46, 17], [21, 20, 37, 59, 72, 41, 27, 12, 90, 97, 40, 25, 44, 89, 44, 63, 51, 37, 82, 72, 99, 33, 54, 7, 48, 43, 56, 77, 21, 2, 21, 34, 14, 12, 13, 99, 76, 92, 19, 12, 7], [90, 44, 32, 15, 9, 55, 31, 24, 57, 39, 65, 75, 83, 45, 32, 46, 40, 28, 82, 22, 29, 66, 89, 8, 98, 80, 8, 12, 92, 38, 78, 66, 8, 58, 76, 9, 64, 61, 55, 99, 48], [51, 17, 26, 84, 15, 29, 10, 73, 12, 75, 80, 67, 86, 42, 6, 55, 79, 27, 22, 26, 96, 84, 73, 99, 29, 56, 73, 61, 12, 57, 20, 67, 34, 76, 60, 69, 47, 3, 9, 80, 40], [90, 99, 92, 54, 85, 5, 52, 80, 85, 3, 42, 28, 23, 85, 53, 44, 4, 60, 90, 80, 71, 96, 53, 8, 5, 99, 11, 66, 94, 77, 52, 50, 89, 69, 3, 57, 13, 22, 26, 15, 23], [82, 5, 43, 1, 28, 65, 42, 69, 37, 56, 77, 87, 1, 90, 35, 44, 95, 75, 3, 34, 62, 84, 42, 85, 50, 83, 22, 74, 91, 75, 41, 17, 49, 31, 16, 72, 46, 67, 78, 24, 64], [46, 82, 93, 52, 26, 92, 40, 77, 35, 72, 19, 76, 33, 89, 16, 51, 58, 68, 25, 58, 97, 97, 27, 39, 29, 69, 82, 18, 41, 44, 56, 82, 50, 14, 53, 99, 66, 33, 97, 56, 40], [82, 51, 57, 17, 43, 90, 99, 61, 40, 2, 87, 88, 88, 23, 53, 90, 45, 73, 30, 87, 17, 38, 54, 79, 81, 17, 11, 96, 15, 14, 40, 12, 73, 78, 6, 49, 19, 44, 34, 40, 28], [91, 76, 52, 21, 1, 23, 3, 66, 19, 85, 80, 9, 50, 13, 13, 74, 95, 52, 57, 55, 83, 61, 59, 71, 87, 82, 66, 35, 27, 25, 35, 24, 31, 99, 84, 7, 28, 15, 28, 54, 73], [70, 25, 50, 21, 2, 28, 82, 15, 59, 88, 77, 97, 28, 43, 53, 7, 5, 92, 59, 40, 3, 21, 31, 68, 57, 70, 26, 90, 51, 3, 52, 84, 73, 10, 39, 87, 32, 2, 38, 41, 50], [15, 30, 65, 19, 16, 95, 84, 25, 51, 82, 30, 90, 69, 
19, 28, 91, 66, 6, 95, 4, 93, 52, 56, 11, 33, 7, 17, 5, 26, 42, 10, 30, 93, 38, 58, 78, 53, 15, 18, 72, 18], [11, 98, 6, 29, 26, 92, 76, 16, 20, 9, 85, 41, 79, 93, 36, 95, 60, 65, 56, 19, 59, 70, 12, 66, 40, 11, 11, 29, 24, 68, 49, 60, 88, 7, 93, 71, 6, 14, 73, 53, 34], [30, 43, 25, 96, 81, 79, 67, 27, 6, 23, 1, 29, 73, 43, 36, 43, 34, 22, 33, 29, 73, 92, 43, 39, 58, 67, 45, 50, 45, 53, 61, 94, 59, 83, 86, 23, 57, 6, 13, 85, 11], [61, 97, 61, 30, 8, 37, 41, 59, 61, 98, 87, 95, 19, 24, 51, 62, 4, 45, 90, 53, 58, 99, 33, 35, 88, 36, 14, 27, 40, 29, 99, 44, 33, 95, 65, 79, 21, 75, 47, 3, 21], [61, 97, 11, 85, 24, 99, 49, 40, 75, 37, 76, 50, 19, 65, 46, 92, 58, 17, 39, 87, 69, 51, 83, 53, 78, 1, 88, 32, 76, 52, 29, 65, 52, 7, 53, 93, 82, 97, 41, 10, 25], [66, 33, 93, 9, 83, 79, 54, 99, 80, 99, 62, 62, 78, 92, 80, 8, 32, 76, 81, 42, 63, 37, 90, 89, 17, 90, 46, 63, 45, 26, 38, 50, 8, 85, 84, 16, 76, 74, 53, 24, 22], [64, 59, 45, 43, 50, 87, 1, 65, 45, 28, 48, 5, 19, 6, 9, 91, 10, 19, 26, 65, 56, 86, 53, 64, 81, 38, 67, 6, 44, 27, 20, 40, 40, 84, 15, 82, 28, 29, 41, 57, 30], [64, 68, 41, 29, 43, 96, 96, 70, 43, 50, 38, 1, 42, 68, 10, 92, 74, 36, 80, 77, 32, 62, 39, 66, 51, 40, 30, 41, 34, 83, 13, 34, 43, 86, 42, 68, 29, 46, 67, 20, 34], [88, 50, 80, 30, 75, 87, 49, 29, 83, 98, 89, 7, 45, 85, 24, 48, 68, 10, 50, 73, 95, 75, 67, 43, 27, 74, 55, 24, 70, 37, 32, 20, 11, 16, 64, 35, 15, 72, 7, 39, 44], [27, 25, 57, 29, 4, 52, 9, 34, 73, 23, 42, 87, 48, 54, 71, 49, 42, 60, 58, 25, 11, 40, 54, 66, 57, 66, 30, 47, 90, 58, 34, 15, 9, 4, 26, 67, 4, 89, 4, 32, 86], [89, 16, 52, 90, 67, 58, 27, 16, 54, 7, 16, 18, 48, 22, 65, 89, 20, 59, 18, 1, 90, 42, 47, 80, 24, 75, 19, 26, 30, 97, 20, 58, 31, 3, 61, 65, 40, 79, 98, 74, 1], [8, 74, 8, 84, 80, 50, 63, 17, 76, 65, 30, 59, 42, 53, 27, 22, 64, 51, 49, 84, 33, 41, 43, 82, 58, 71, 99, 23, 75, 73, 23, 74, 75, 49, 13, 64, 36, 35, 87, 9, 84], [73, 48, 61, 39, 10, 27, 93, 72, 22, 89, 88, 82, 42, 37, 56, 43, 74, 25, 68, 62, 94, 63, 23, 20, 58, 
24, 37, 92, 23, 11, 51, 66, 24, 52, 20, 32, 93, 85, 77, 93, 15], [66, 11, 45, 39, 5, 17, 81, 17, 58, 15, 34, 54, 90, 7, 22, 75, 66, 1, 78, 98, 76, 86, 51, 51, 70, 68, 87, 34, 95, 7, 31, 94, 59, 79, 23, 93, 91, 40, 78, 78, 91], [50, 6, 65, 92, 8, 11, 57, 39, 98, 59, 48, 89, 71, 88, 5, 54, 2, 94, 73, 35, 35, 58, 41, 20, 81, 61, 74, 68, 5, 33, 66, 13, 77, 65, 80, 52, 89, 83, 30, 33, 99], [45, 18, 39, 40, 85, 61, 55, 15, 41, 18, 83, 50, 42, 48, 74, 80, 23, 32, 87, 48, 78, 79, 76, 68, 67, 64, 37, 48, 6, 65, 40, 24, 68, 51, 10, 69, 20, 36, 32, 40, 60], [62, 42, 68, 48, 14, 19, 82, 70, 48, 37, 41, 77, 58, 36, 38, 65, 46, 50, 27, 14, 68, 67, 6, 57, 77, 14, 94, 25, 40, 48, 39, 16, 69, 48, 36, 45, 25, 41, 84, 15, 19], [24, 34, 63, 26, 30, 96, 39, 70, 47, 55, 83, 4, 18, 14, 8, 32, 15, 69, 5, 21, 12, 99, 41, 23, 28, 6, 41, 58, 88, 70, 47, 30, 81, 37, 42, 43, 4, 53, 7, 86, 52], [15, 6, 54, 2, 17, 63, 36, 56, 48, 90, 64, 88, 22, 30, 10, 16, 34, 47, 26, 4, 4, 24, 5, 1, 27, 79, 34, 82, 65, 70, 24, 26, 35, 62, 77, 67, 1, 77, 98, 31, 69], [11, 38, 9, 33, 36, 75, 12, 18, 64, 64, 71, 85, 1, 43, 74, 65, 51, 3, 28, 6, 60, 52, 8, 1, 77, 8, 12, 86, 18, 48, 89, 73, 68, 2, 41, 75, 55, 5, 24, 98, 92], [13, 76, 45, 26, 19, 90, 71, 32, 38, 26, 1, 29, 8, 74, 78, 44, 5, 58, 97, 58, 58, 93, 21, 35, 15, 8, 70, 54, 10, 36, 7, 17, 53, 8, 66, 5, 40, 77, 78, 4, 88], [74, 15, 46, 99, 53, 57, 59, 57, 43, 32, 88, 49, 45, 36, 75, 54, 83, 45, 90, 87, 13, 31, 30, 38, 20, 31, 32, 4, 46, 23, 95, 2, 82, 73, 48, 44, 45, 83, 26, 12, 81], [44, 49, 24, 76, 76, 98, 8, 15, 69, 2, 86, 38, 40, 50, 44, 53, 21, 28, 80, 50, 45, 4, 54, 38, 39, 9, 98, 30, 48, 54, 44, 90, 33, 87, 92, 80, 18, 53, 4, 32, 21], [67, 67, 7, 39, 3, 82, 1, 53, 48, 87, 86, 72, 8, 79, 79, 49, 24, 94, 42, 5, 40, 29, 44, 57, 23, 36, 62, 73, 14, 94, 64, 15, 70, 10, 35, 23, 79, 50, 98, 92, 60], [57, 99, 23, 49, 48, 12, 99, 3, 69, 84, 26, 65, 91, 44, 34, 30, 32, 4, 71, 73, 5, 29, 79, 9, 4, 53, 71, 32, 56, 75, 98, 74, 34, 92, 43, 76, 40, 71, 21, 68, 39], 
[40, 33, 53, 35, 87, 85, 80, 23, 79, 96, 44, 95, 9, 96, 15, 21, 95, 70, 88, 75, 37, 15, 28, 29, 63, 56, 78, 78, 88, 71, 35, 26, 68, 28, 21, 13, 2, 45, 20, 92, 82], [38, 63, 56, 59, 92, 18, 14, 52, 6, 46, 31, 64, 96, 70, 25, 41, 20, 52, 89, 79, 89, 50, 79, 8, 57, 23, 67, 75, 83, 92, 65, 45, 75, 73, 92, 76, 98, 34, 54, 10, 24], [21, 52, 25, 66, 13, 68, 25, 54, 23, 60, 26, 14, 36, 92, 70, 70, 13, 41, 45, 67, 76, 33, 47, 69, 23, 22, 43, 96, 77, 46, 94, 6, 86, 88, 33, 24, 84, 80, 78, 98, 76], [38, 83, 14, 92, 35, 2, 21, 41, 94, 49, 13, 53, 58, 63, 68, 17, 56, 48, 44, 59, 36, 11, 8, 46, 92, 83, 26, 68, 24, 43, 35, 56, 41, 81, 98, 5, 53, 50, 1, 96, 97]],40,35,)
]
n_success = 0
for i, parameters_set in enumerate(param):
    if f_filled(*parameters_set) == f_gold(*parameters_set):
        n_success += 1
print("#Results: %i, %i" % (n_success, len(param))) | 1,212.1 | 8,377 | 0.443074 | 12,683 | 48,484 | 1.692423 | 0.011827 | 0.115723 | 0.141719 | 0.16082 | 0.206103 | 0.167016 | 0.15495 | 0.15113 | 0.149918 | 0.149779 | 0 | 0.59251 | 0.265263 | 48,484 | 40 | 8,378 | 1,212.1 | 0.01005 | 0.003816 | 0 | 0.133333 | 0 | 0 | 0.000497 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0 | 0 | 0.1 | 0.033333 | 0 | 0 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
67e858404ab76962b0e9a30e8f12d2c9947ebb14 | 28 | py | Python | miranda/hq/__init__.py | Ouranosinc/miranda | 5c54767a4e6e6c3c1f638ca0fe22673ea98e2746 | [
"Apache-2.0"
] | 4 | 2019-11-07T17:45:26.000Z | 2021-09-22T18:22:01.000Z | miranda/hq/__init__.py | Ouranosinc/miranda | 5c54767a4e6e6c3c1f638ca0fe22673ea98e2746 | [
"Apache-2.0"
] | 12 | 2019-09-19T17:05:39.000Z | 2022-03-31T20:26:16.000Z | miranda/hq/__init__.py | Ouranosinc/miranda | 5c54767a4e6e6c3c1f638ca0fe22673ea98e2746 | [
"Apache-2.0"
] | 1 | 2020-02-01T01:01:22.000Z | 2020-02-01T01:01:22.000Z | from .daily import open_csv
| 14 | 27 | 0.821429 | 5 | 28 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e1d95f979f297810f0d7a299c1c2bb34d7d47ace | 98 | py | Python | test/commands.py | kochelmonster/runas | 52146a47f6e882a26b5ddec98b1924e8bcbc1515 | [
"MIT"
] | null | null | null | test/commands.py | kochelmonster/runas | 52146a47f6e882a26b5ddec98b1924e8bcbc1515 | [
"MIT"
] | null | null | null | test/commands.py | kochelmonster/runas | 52146a47f6e882a26b5ddec98b1924e8bcbc1515 | [
"MIT"
] | null | null | null | from runas import has_root
class SudoCommands:
    def is_root(self):
        return has_root()
| 14 | 26 | 0.693878 | 14 | 98 | 4.642857 | 0.785714 | 0.215385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.244898 | 98 | 6 | 27 | 16.333333 | 0.878378 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
e1f3c14fb0b997d6919ce864a41152ae389efb19 | 111 | py | Python | guild/tests/samples/projects/compare/op1.py | wheatdog/guildai | 817cf179d0b6910d3d4fca522045a8139aef6c9e | [
"Apache-2.0"
] | 694 | 2018-11-30T01:06:30.000Z | 2022-03-31T14:46:26.000Z | guild/tests/samples/projects/compare/op1.py | wheatdog/guildai | 817cf179d0b6910d3d4fca522045a8139aef6c9e | [
"Apache-2.0"
] | 323 | 2018-11-05T17:44:34.000Z | 2022-03-31T16:56:41.000Z | guild/tests/samples/projects/compare/op1.py | wheatdog/guildai | 817cf179d0b6910d3d4fca522045a8139aef6c9e | [
"Apache-2.0"
] | 68 | 2019-04-01T04:24:47.000Z | 2022-02-24T17:22:04.000Z | a = 1
b = 2
print("x: %s" % (a + 1))
print("y: %s" % (b + 2))
print("z/ab: %s" % (a + b))
print("sys/x: 123")
| 13.875 | 27 | 0.405405 | 24 | 111 | 1.875 | 0.5 | 0.088889 | 0.311111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0.243243 | 111 | 7 | 28 | 15.857143 | 0.452381 | 0 | 0 | 0 | 0 | 0 | 0.252252 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | 1 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
e1f4ca6cdf2be93b39b00e1afab7bae3c345d4c9 | 29 | py | Python | exercises/binary/binary.py | RJTK/python | f9678d629735f75354bbd543eb7f10220a498dae | [
"MIT"
] | 1 | 2021-05-15T19:59:04.000Z | 2021-05-15T19:59:04.000Z | exercises/binary/binary.py | RJTK/python | f9678d629735f75354bbd543eb7f10220a498dae | [
"MIT"
] | null | null | null | exercises/binary/binary.py | RJTK/python | f9678d629735f75354bbd543eb7f10220a498dae | [
"MIT"
] | 2 | 2018-03-03T08:32:12.000Z | 2019-08-22T11:55:53.000Z | def parse_binary():
    pass
| 9.666667 | 19 | 0.655172 | 4 | 29 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.241379 | 29 | 2 | 20 | 14.5 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |