hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d4a7a0c0e985bc9cd6761844cb3bff84eac950c1 | 9,012 | gyp | Python | third_party/WebKit/Source/bindings/modules/generated.gyp | Wzzzx/chromium-crosswalk | 768dde8efa71169f1c1113ca6ef322f1e8c9e7de | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 2 | 2019-01-28T08:09:58.000Z | 2021-11-15T15:32:10.000Z | third_party/WebKit/Source/bindings/modules/generated.gyp | maidiHaitai/haitaibrowser | a232a56bcfb177913a14210e7733e0ea83a6b18d | [
"BSD-3-Clause"
] | null | null | null | third_party/WebKit/Source/bindings/modules/generated.gyp | maidiHaitai/haitaibrowser | a232a56bcfb177913a14210e7733e0ea83a6b18d | [
"BSD-3-Clause"
] | 6 | 2020-09-23T08:56:12.000Z | 2021-11-18T03:40:49.000Z | # Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# Generate IDL interfaces info for modules, used to generate bindings
#
# Design doc: http://www.chromium.org/developers/design-documents/idl-build
{
'includes': [
# ../.. == Source
'../../bindings/core/core.gypi',
'../../bindings/scripts/scripts.gypi',
'../../build/scripts/scripts.gypi', # FIXME: Needed for event files, should be in modules, not bindings_modules http://crbug.com/358074
'../../modules/modules.gypi',
'generated.gypi',
'idl.gypi',
'modules.gypi',
],
'targets': [
################################################################################
{
# GN version: //third_party/WebKit/Source/bindings/modules:bindings_modules_generated
# FIXME: Should be in modules, not bindings_modules http://crbug.com/358074
'target_name': 'modules_event_generated',
'type': 'none',
'actions': [
{
# GN version: //third_party/WebKit/Source/bindings/modules:modules_bindings_generated_event_interfaces
'action_name': 'event_interfaces',
'variables': {
'event_idl_files': [
'<@(modules_event_idl_files)',
],
'event_idl_files_list':
'<|(event_idl_files_list.tmp <@(event_idl_files))',
},
'inputs': [
'<(bindings_scripts_dir)/generate_event_interfaces.py',
'<(bindings_scripts_dir)/utilities.py',
'<(event_idl_files_list)',
'<@(event_idl_files)',
],
'outputs': [
'<(blink_modules_output_dir)/EventModulesInterfaces.in',
],
'action': [
'python',
'<(bindings_scripts_dir)/generate_event_interfaces.py',
'--event-idl-files-list',
'<(event_idl_files_list)',
'--event-interfaces-file',
'<(blink_modules_output_dir)/EventModulesInterfaces.in',
'--write-file-only-if-changed',
'<(write_file_only_if_changed)',
'--suffix',
'Modules',
],
},
{
# GN version: //third_party/WebKit/Source/bindings/modules:bindings_modules_generated_event_modules_factory
'action_name': 'EventModulesFactory',
'inputs': [
'<@(make_event_factory_files)',
'<(blink_modules_output_dir)/EventModulesInterfaces.in',
],
'outputs': [
'<(blink_modules_output_dir)/EventModules.cpp',
'<(blink_modules_output_dir)/EventModulesHeaders.h',
],
'action': [
'python',
'../../build/scripts/make_event_factory.py',
'<(blink_modules_output_dir)/EventModulesInterfaces.in',
'--output_dir',
'<(blink_modules_output_dir)',
],
},
{
# GN version: //third_party/WebKit/Source/bindings/modules:bindings_modules_generated_event_modules_names
'action_name': 'EventModulesNames',
'inputs': [
'<@(make_names_files)',
'<(blink_modules_output_dir)/EventModulesInterfaces.in',
],
'outputs': [
'<(blink_modules_output_dir)/EventModulesNames.cpp',
'<(blink_modules_output_dir)/EventModulesNames.h',
],
'action': [
'python',
'../../build/scripts/make_names.py',
'<(blink_modules_output_dir)/EventModulesInterfaces.in',
'--output_dir',
'<(blink_modules_output_dir)',
],
},
{
# GN version: //third_party/WebKit/Source/bindings/modules:bindings_modules_generated_event_target_modules_names
'action_name': 'EventTargetModulesNames',
'inputs': [
'<@(make_names_files)',
'../../modules/EventTargetModulesFactory.in',
],
'outputs': [
'<(blink_modules_output_dir)/EventTargetModulesNames.cpp',
'<(blink_modules_output_dir)/EventTargetModulesNames.h',
],
'action': [
'python',
'../../build/scripts/make_names.py',
'../../modules/EventTargetModulesFactory.in',
'--output_dir',
'<(blink_modules_output_dir)',
],
},
],
},
################################################################################
{
'target_name': 'modules_global_objects',
'dependencies': [
'../core/generated.gyp:core_global_objects',
],
'variables': {
'idl_files': '<(modules_idl_files)',
'input_files': [
'<(bindings_core_output_dir)/GlobalObjectsCore.pickle',
],
'output_file':
'<(bindings_modules_output_dir)/GlobalObjectsModules.pickle',
},
'includes': ['../../bindings/scripts/global_objects.gypi'],
},
################################################################################
{
# Global constructors for global objects in modules (ServiceWorker)
# but interfaces in core.
'target_name': 'modules_core_global_constructors_idls',
'dependencies': [
'modules_global_objects',
],
'variables': {
'idl_files': [
'<@(core_idl_files)',
'<@(core_idl_with_modules_dependency_files)',
],
'global_objects_file':
'<(bindings_modules_output_dir)/GlobalObjectsModules.pickle',
'global_names_idl_files': [
'CompositorWorkerGlobalScope',
'<(blink_modules_output_dir)/CompositorWorkerGlobalScopeCoreConstructors.idl',
'PaintWorkletGlobalScope',
'<(blink_modules_output_dir)/PaintWorkletGlobalScopeCoreConstructors.idl',
'ServiceWorkerGlobalScope',
'<(blink_modules_output_dir)/ServiceWorkerGlobalScopeCoreConstructors.idl',
],
'outputs': [
'<@(modules_core_global_constructors_generated_idl_files)',
'<@(modules_core_global_constructors_generated_header_files)',
],
},
'includes': ['../../bindings/scripts/global_constructors.gypi'],
},
################################################################################
{
'target_name': 'modules_global_constructors_idls',
'dependencies': [
'modules_global_objects',
],
'variables': {
'idl_files': '<(modules_idl_files)',
'global_objects_file':
'<(bindings_modules_output_dir)/GlobalObjectsModules.pickle',
'global_names_idl_files': [
'Window',
'<(blink_modules_output_dir)/WindowModulesConstructors.idl',
'CompositorWorkerGlobalScope',
'<(blink_modules_output_dir)/CompositorWorkerGlobalScopeModulesConstructors.idl',
'DedicatedWorkerGlobalScope',
'<(blink_modules_output_dir)/DedicatedWorkerGlobalScopeModulesConstructors.idl',
'PaintWorkletGlobalScope',
'<(blink_modules_output_dir)/PaintWorkletGlobalScopeModulesConstructors.idl',
'ServiceWorkerGlobalScope',
'<(blink_modules_output_dir)/ServiceWorkerGlobalScopeModulesConstructors.idl',
'SharedWorkerGlobalScope',
'<(blink_modules_output_dir)/SharedWorkerGlobalScopeModulesConstructors.idl',
],
'outputs': [
'<@(modules_global_constructors_generated_idl_files)',
'<@(modules_global_constructors_generated_header_files)',
],
},
'includes': ['../../bindings/scripts/global_constructors.gypi'],
},
################################################################################
{
'target_name': 'interfaces_info_individual_modules',
'dependencies': [
'<(bindings_scripts_dir)/scripts.gyp:cached_lex_yacc_tables',
'modules_core_global_constructors_idls',
'modules_global_constructors_idls',
],
'variables': {
'cache_directory': '<(bindings_modules_output_dir)/../scripts',
'static_idl_files': '<(modules_static_idl_files)',
'generated_idl_files': '<(modules_generated_idl_files)',
'interfaces_info_file':
'<(bindings_modules_output_dir)/InterfacesInfoOverallIndividual.pickle',
'component_info_file':
'<(bindings_modules_output_dir)/ComponentInfoModules.pickle',
},
'includes': ['../../bindings/scripts/interfaces_info_individual.gypi'],
},
################################################################################
{
# GN version: //third_party/WebKit/Source/bindings/modules:interfaces_info
'target_name': 'interfaces_info',
'dependencies': [
'../core/generated.gyp:interfaces_info_individual_core',
'interfaces_info_individual_modules',
],
'variables': {
'input_files': [
'<(bindings_core_output_dir)/InterfacesInfoCoreIndividual.pickle',
'<(bindings_modules_output_dir)/InterfacesInfoOverallIndividual.pickle',
],
'output_file':
'<(bindings_modules_output_dir)/InterfacesInfoOverall.pickle',
},
'includes': ['../../bindings/scripts/interfaces_info_overall.gypi'],
},
################################################################################
], # targets
}
| 38.025316 | 140 | 0.597204 | 745 | 9,012 | 6.816107 | 0.189262 | 0.065577 | 0.100827 | 0.099252 | 0.564395 | 0.492911 | 0.349744 | 0.306026 | 0.243206 | 0.243206 | 0 | 0.002241 | 0.207723 | 9,012 | 236 | 141 | 38.186441 | 0.708964 | 0.128939 | 0 | 0.495238 | 1 | 0 | 0.663595 | 0.534727 | 0 | 0 | 0 | 0.004237 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
d4ab9c482ff6c78331e8db7ef24fc3bc6aabb7a4 | 106 | py | Python | Souped Up Salmon 2/it_exercises/makeit/makeit5.py | TheRealNutellaSwan/Souped-Up-Salmon-2 | af0fe063b899a1f139b40972f8fedb16b9114a61 | [
"MIT"
] | null | null | null | Souped Up Salmon 2/it_exercises/makeit/makeit5.py | TheRealNutellaSwan/Souped-Up-Salmon-2 | af0fe063b899a1f139b40972f8fedb16b9114a61 | [
"MIT"
] | null | null | null | Souped Up Salmon 2/it_exercises/makeit/makeit5.py | TheRealNutellaSwan/Souped-Up-Salmon-2 | af0fe063b899a1f139b40972f8fedb16b9114a61 | [
"MIT"
] | null | null | null | """
Make a program that asks the user for a series of numbers (10 in total) and calculates their average.
""" | 35.333333 | 97 | 0.716981 | 18 | 106 | 4.222222 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023529 | 0.198113 | 106 | 3 | 98 | 35.333333 | 0.870588 | 0.90566 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
d4bebc044f8e9b83aa543834e77b7a1244425fe9 | 180 | py | Python | ecommerce/apps.py | OkothPius/Masoko-Ecommerce | 9b2ad6ab0ede56ae000fdb355075ccd1dc280363 | [
"MIT"
] | 4 | 2020-03-12T09:32:33.000Z | 2021-01-20T13:01:15.000Z | ecommerce/apps.py | OkothPius/Masoko-Ecommerce | 9b2ad6ab0ede56ae000fdb355075ccd1dc280363 | [
"MIT"
] | 1 | 2022-01-13T15:27:13.000Z | 2022-01-13T15:27:13.000Z | ecommerce/apps.py | OkothPius/Masoko-Ecommerce | 9b2ad6ab0ede56ae000fdb355075ccd1dc280363 | [
"MIT"
] | 3 | 2020-03-22T10:51:49.000Z | 2021-01-20T13:02:02.000Z | from django.apps import AppConfig
class EcommerceConfig(AppConfig):
name = 'ecommerce'
def ready(self):
#import signal handlers
import ecommerce.signals
| 18 | 33 | 0.694444 | 19 | 180 | 6.578947 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238889 | 180 | 9 | 34 | 20 | 0.912409 | 0.122222 | 0 | 0 | 0 | 0 | 0.057325 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
d4d03207ad0b4b525f5a5461b0937f7410e94ec5 | 930 | py | Python | stocksdata/exchanges.py | LatticeData/StockDataPythonClient | cb6a6fe348af7efc9f396a8b8a36e3452eb3630a | [
"Apache-2.0"
] | 4 | 2021-08-09T03:37:41.000Z | 2021-08-17T21:42:54.000Z | stocksdata/exchanges.py | LatticeData/StockDataPythonClient | cb6a6fe348af7efc9f396a8b8a36e3452eb3630a | [
"Apache-2.0"
] | null | null | null | stocksdata/exchanges.py | LatticeData/StockDataPythonClient | cb6a6fe348af7efc9f396a8b8a36e3452eb3630a | [
"Apache-2.0"
] | 1 | 2021-08-24T22:50:42.000Z | 2021-08-24T22:50:42.000Z | from stocksdata.util.ttlcache import daily_cache
from stocksdata.util.get import *
## exchanges #####################################################################################################################################
@daily_cache
def all_public_companies():
return get_json("/market/all-public-companies").get("stocks")
@daily_cache
def nasdaq_exchange_composition():
return get_json("/market/exchange/nasdaq").get("stocks")
@daily_cache
def nyse_composition():
return get_json("/market/exchange/nyse").get("stocks")
@daily_cache
def shanghai_exchange_composition():
return get_json("/market/exchange/shanghai-stock-exchange").get("stocks")
@daily_cache
def hong_kong_exchange_composition():
return get_json("/market/exchange/hong-kong-stock-exchange").get("stocks")
@daily_cache
def london_exchange_composition():
return get_json("/market/exchange/london-stock-exchange").get("stocks") | 33.214286 | 146 | 0.66129 | 105 | 930 | 5.619048 | 0.247619 | 0.118644 | 0.132203 | 0.19322 | 0.60678 | 0.494915 | 0.430508 | 0 | 0 | 0 | 0 | 0 | 0.076344 | 930 | 28 | 147 | 33.214286 | 0.686845 | 0.009677 | 0 | 0.3 | 0 | 0 | 0.289172 | 0.243312 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | true | 0 | 0.1 | 0.3 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 4 |
d4dbec990dbfb89b152c17508faef0f7e75b982e | 163 | py | Python | selections/tournament.py | aleksanderujek/knapsack-problem | 42ab0887e84979ebcc18d80a1f9269ce46088346 | [
"MIT"
] | null | null | null | selections/tournament.py | aleksanderujek/knapsack-problem | 42ab0887e84979ebcc18d80a1f9269ce46088346 | [
"MIT"
] | null | null | null | selections/tournament.py | aleksanderujek/knapsack-problem | 42ab0887e84979ebcc18d80a1f9269ce46088346 | [
"MIT"
] | null | null | null | import numpy as np
def tournament(population, k = 3):
elements = np.random.choice(population.individuals, k)
return max(elements, key=lambda x: x.fitness) | 32.6 | 58 | 0.730061 | 24 | 163 | 4.958333 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007299 | 0.159509 | 163 | 5 | 59 | 32.6 | 0.861314 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 4 |
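The `tournament` record above expects a `population` object exposing an `individuals` sequence whose elements carry a `fitness` attribute; neither class appears in the record, so the `Individual`/`Population` stand-ins below are assumptions used only to demonstrate the call:

```python
import numpy as np


class Individual:
    def __init__(self, name, fitness):
        self.name = name
        self.fitness = fitness


class Population:
    def __init__(self, individuals):
        self.individuals = individuals


def tournament(population, k=3):
    # Sample k individuals (with replacement) and keep the fittest one.
    elements = np.random.choice(population.individuals, k)
    return max(elements, key=lambda x: x.fitness)


pop = Population([Individual(f"ind{i}", fitness=i) for i in range(10)])
winner = tournament(pop, k=3)
print(winner.name, winner.fitness)
```

Note that `np.random.choice` requires a 1-D input; a list of plain objects (as here) converts to a 1-D object array, whereas tuple-like individuals would not.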
d4eeeb03772f7eda0e88ed660e770a47135031eb | 278 | py | Python | tests/test_history.py | sharpertool/teamgen | 272c3a8cc8bd296a34fd944026227cdf75b3d8e4 | [
"MIT"
] | null | null | null | tests/test_history.py | sharpertool/teamgen | 272c3a8cc8bd296a34fd944026227cdf75b3d8e4 | [
"MIT"
] | null | null | null | tests/test_history.py | sharpertool/teamgen | 272c3a8cc8bd296a34fd944026227cdf75b3d8e4 | [
"MIT"
] | null | null | null | from unittest import TestCase
from teamgen import (Meeting, MatchRound, Team, Player)
class TestBalancedRound:
@classmethod
def setUpTestData(cls):
pass
def setUp(self):
pass
def test_get_history(self):
"""
"""
pass | 14.631579 | 55 | 0.607914 | 28 | 278 | 5.964286 | 0.75 | 0.083832 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.31295 | 278 | 19 | 56 | 14.631579 | 0.874346 | 0 | 0 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0.3 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 4 |
be0312651f414d045928cbdba7bb9f2a2bf84926 | 8,125 | py | Python | tests/orchestrator/test_sequential_orchestrator.py | pblocz/azure-functions-durable-python | 59e77089d4d9934aff4c884b55e3df6f350d03fe | [
"MIT"
] | null | null | null | tests/orchestrator/test_sequential_orchestrator.py | pblocz/azure-functions-durable-python | 59e77089d4d9934aff4c884b55e3df6f350d03fe | [
"MIT"
] | null | null | null | tests/orchestrator/test_sequential_orchestrator.py | pblocz/azure-functions-durable-python | 59e77089d4d9934aff4c884b55e3df6f350d03fe | [
"MIT"
] | null | null | null | from .orchestrator_test_utils \
import assert_orchestration_state_equals, get_orchestration_state_result, assert_valid_schema
from tests.test_utils.ContextBuilder import ContextBuilder
from azure.durable_functions.models.OrchestratorState import OrchestratorState
from azure.durable_functions.models.actions.CallActivityAction \
import CallActivityAction
from tests.test_utils.testClasses import SerializableClass
def generator_function(context):
outputs = []
task1 = yield context.call_activity("Hello", "Tokyo")
task2 = yield context.call_activity("Hello", "Seattle")
task3 = yield context.call_activity("Hello", "London")
outputs.append(task1)
outputs.append(task2)
outputs.append(task3)
return outputs
def generator_function_rasing_ex(context):
outputs = []
task1 = yield context.call_activity("Hello", "Tokyo")
task2 = yield context.call_activity("Hello", "Seattle")
task3 = yield context.call_activity("Hello", "London")
outputs.append(task1)
outputs.append(task2)
outputs.append(task3)
raise ValueError("Oops!")
def generator_function_with_serialization(context):
"""Ochestrator to test sequential activity calls with a serializable input arguments."""
outputs = []
task1 = yield context.call_activity("Hello", SerializableClass("Tokyo"))
task2 = yield context.call_activity("Hello", SerializableClass("Seattle"))
task3 = yield context.call_activity("Hello", SerializableClass("London"))
outputs.append(task1)
outputs.append(task2)
outputs.append(task3)
return outputs
def base_expected_state(output=None) -> OrchestratorState:
return OrchestratorState(is_done=False, actions=[], output=output)
def add_hello_action(state: OrchestratorState, input_: str):
action = CallActivityAction(function_name='Hello', input_=input_)
state.actions.append([action])
def add_hello_completed_events(
context_builder: ContextBuilder, id_: int, result: str):
context_builder.add_task_scheduled_event(name='Hello', id_=id_)
context_builder.add_orchestrator_completed_event()
context_builder.add_orchestrator_started_event()
context_builder.add_task_completed_event(id_=id_, result=result)
def add_hello_failed_events(
context_builder: ContextBuilder, id_: int, reason: str, details: str):
context_builder.add_task_scheduled_event(name='Hello', id_=id_)
context_builder.add_orchestrator_completed_event()
context_builder.add_orchestrator_started_event()
context_builder.add_task_failed_event(
id_=id_, reason=reason, details=details)
def test_initial_orchestration_state():
context_builder = ContextBuilder('test_simple_function')
result = get_orchestration_state_result(
context_builder, generator_function)
expected_state = base_expected_state()
add_hello_action(expected_state, 'Tokyo')
expected = expected_state.to_json()
assert_valid_schema(result)
assert_orchestration_state_equals(expected, result)
def test_tokyo_state():
context_builder = ContextBuilder('test_simple_function')
add_hello_completed_events(context_builder, 0, "\"Hello Tokyo!\"")
result = get_orchestration_state_result(
context_builder, generator_function)
expected_state = base_expected_state()
add_hello_action(expected_state, 'Tokyo')
add_hello_action(expected_state, 'Seattle')
expected = expected_state.to_json()
assert_valid_schema(result)
assert_orchestration_state_equals(expected, result)
def test_failed_tokyo_state():
failed_reason = 'Reasons'
failed_details = 'Stuff and Things'
context_builder = ContextBuilder('test_simple_function')
add_hello_failed_events(
context_builder, 0, failed_reason, failed_details)
try:
result = get_orchestration_state_result(
context_builder, generator_function)
# expected an exception
assert False
except Exception as e:
error_label = "\n\n$OutOfProcData$:"
error_str = str(e)
expected_state = base_expected_state()
add_hello_action(expected_state, 'Tokyo')
error_msg = f'{failed_reason} \n {failed_details}'
expected_state._error = error_msg
state_str = expected_state.to_json_string()
expected_error_str = f"{error_msg}{error_label}{state_str}"
assert expected_error_str == error_str
def test_user_code_raises_exception():
context_builder = ContextBuilder('test_simple_function')
add_hello_completed_events(context_builder, 0, "\"Hello Tokyo!\"")
add_hello_completed_events(context_builder, 1, "\"Hello Seattle!\"")
add_hello_completed_events(context_builder, 2, "\"Hello London!\"")
try:
result = get_orchestration_state_result(
context_builder, generator_function_rasing_ex)
# expected an exception
assert False
except Exception as e:
error_label = "\n\n$OutOfProcData$:"
error_str = str(e)
expected_state = base_expected_state()
add_hello_action(expected_state, 'Tokyo')
add_hello_action(expected_state, 'Seattle')
add_hello_action(expected_state, 'London')
error_msg = 'Oops!'
expected_state._error = error_msg
state_str = expected_state.to_json_string()
expected_error_str = f"{error_msg}{error_label}{state_str}"
assert expected_error_str == error_str
def test_tokyo_and_seattle_state():
context_builder = ContextBuilder('test_simple_function')
add_hello_completed_events(context_builder, 0, "\"Hello Tokyo!\"")
add_hello_completed_events(context_builder, 1, "\"Hello Seattle!\"")
result = get_orchestration_state_result(
context_builder, generator_function)
expected_state = base_expected_state()
add_hello_action(expected_state, 'Tokyo')
add_hello_action(expected_state, 'Seattle')
add_hello_action(expected_state, 'London')
expected = expected_state.to_json()
assert_valid_schema(result)
assert_orchestration_state_equals(expected, result)
def test_tokyo_and_seattle_and_london_state():
context_builder = ContextBuilder('test_simple_function')
add_hello_completed_events(context_builder, 0, "\"Hello Tokyo!\"")
add_hello_completed_events(context_builder, 1, "\"Hello Seattle!\"")
add_hello_completed_events(context_builder, 2, "\"Hello London!\"")
result = get_orchestration_state_result(
context_builder, generator_function)
expected_state = base_expected_state(
['Hello Tokyo!', 'Hello Seattle!', 'Hello London!'])
add_hello_action(expected_state, 'Tokyo')
add_hello_action(expected_state, 'Seattle')
add_hello_action(expected_state, 'London')
expected_state._is_done = True
expected = expected_state.to_json()
assert_valid_schema(result)
assert_orchestration_state_equals(expected, result)
def test_tokyo_and_seattle_and_london_with_serialization_state():
"""Tests the sequential function pattern with custom object serialization.
This simple test validates that a sequential function pattern returns
the expected state when the input to activities is a user-provided
serializable class.
"""
context_builder = ContextBuilder('test_simple_function')
add_hello_completed_events(context_builder, 0, "\"Hello Tokyo!\"")
add_hello_completed_events(context_builder, 1, "\"Hello Seattle!\"")
add_hello_completed_events(context_builder, 2, "\"Hello London!\"")
result = get_orchestration_state_result(
context_builder, generator_function_with_serialization)
expected_state = base_expected_state(
['Hello Tokyo!', 'Hello Seattle!', 'Hello London!'])
add_hello_action(expected_state, SerializableClass("Tokyo"))
add_hello_action(expected_state, SerializableClass("Seattle"))
add_hello_action(expected_state, SerializableClass("London"))
expected_state._is_done = True
expected = expected_state.to_json()
assert_valid_schema(result)
assert_orchestration_state_equals(expected, result)
| 36.434978 | 97 | 0.745477 | 952 | 8,125 | 5.960084 | 0.12395 | 0.09852 | 0.041946 | 0.062037 | 0.784808 | 0.771237 | 0.717836 | 0.687522 | 0.678005 | 0.678005 | 0 | 0.004556 | 0.162585 | 8,125 | 222 | 98 | 36.599099 | 0.829365 | 0.043938 | 0 | 0.721519 | 0 | 0 | 0.079638 | 0.00905 | 0 | 0 | 0 | 0 | 0.094937 | 1 | 0.088608 | false | 0 | 0.031646 | 0.006329 | 0.139241 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
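The test file above drives a generator-based orchestrator through simulated event histories. The core replay idea — re-run the generator from the top, feeding back recorded results until a task with no recorded result is reached — can be sketched framework-free. This is an illustration of the pattern, not the actual Durable Functions implementation:

```python
def replay(generator_fn, history):
    """Drive an orchestrator generator against a mapping of completed
    task results; stop at the first task without a recorded result."""
    gen = generator_fn()
    scheduled = []
    try:
        task = next(gen)
        while True:
            scheduled.append(task)
            if task not in history:
                # Task not yet completed: record the action and pause here.
                return {"done": False, "actions": scheduled, "output": None}
            task = gen.send(history[task])
    except StopIteration as stop:
        # The generator returned: the orchestration is complete.
        return {"done": True, "actions": scheduled, "output": stop.value}


def orchestrator():
    a = yield ("Hello", "Tokyo")
    b = yield ("Hello", "Seattle")
    return [a, b]


partial = replay(orchestrator, {("Hello", "Tokyo"): "Hello Tokyo!"})
full = replay(orchestrator, {("Hello", "Tokyo"): "Hello Tokyo!",
                             ("Hello", "Seattle"): "Hello Seattle!"})
print(partial["done"], full["output"])
```

This mirrors the shape of the tests above: each `add_hello_completed_events` call extends the history, and the asserted state grows by one scheduled action per replay.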
be05d0714193a02ed6e9b19c1d1d20b118368716 | 131 | py | Python | mysite/polls/apps.py | Gikeda2016/Gitflow-Django | e02e6afc53a947cc7a6f128f565bd2ad6dca646c | [
"MIT"
] | null | null | null | mysite/polls/apps.py | Gikeda2016/Gitflow-Django | e02e6afc53a947cc7a6f128f565bd2ad6dca646c | [
"MIT"
] | null | null | null | mysite/polls/apps.py | Gikeda2016/Gitflow-Django | e02e6afc53a947cc7a6f128f565bd2ad6dca646c | [
"MIT"
] | null | null | null | from django.apps import AppConfig
class PollsConfig(AppConfig): ## referenced in settings under INSTALLED_APPS
name = 'polls'
| 21.833333 | 75 | 0.748092 | 17 | 131 | 5.705882 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183206 | 131 | 5 | 76 | 26.2 | 0.906542 | 0.305344 | 0 | 0 | 0 | 0 | 0.056818 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
07922819649fc8e6a92e76cf60b55884d2c1ba0f | 1,196 | py | Python | OpenGLWrapper_JE/venv/Lib/site-packages/OpenGL/raw/GLES2/IMG/multisampled_render_to_texture.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | OpenGLWrapper_JE/venv/Lib/site-packages/OpenGL/raw/GLES2/IMG/multisampled_render_to_texture.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | OpenGLWrapper_JE/venv/Lib/site-packages/OpenGL/raw/GLES2/IMG/multisampled_render_to_texture.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | '''Autogenerated by xml_generate script, do not edit!'''
from OpenGL import platform as _p, arrays
# Code generation uses this
from OpenGL.raw.GLES2 import _types as _cs
# End users want this...
from OpenGL.raw.GLES2._types import *
from OpenGL.raw.GLES2 import _errors
from OpenGL.constant import Constant as _C
import ctypes
_EXTENSION_NAME = 'GLES2_IMG_multisampled_render_to_texture'
def _f( function ):
return _p.createFunction( function,_p.PLATFORM.GLES2,'GLES2_IMG_multisampled_render_to_texture',error_checker=_errors._error_checker)
GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE_IMG=_C('GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE_IMG',0x9134)
GL_MAX_SAMPLES_IMG=_C('GL_MAX_SAMPLES_IMG',0x9135)
GL_RENDERBUFFER_SAMPLES_IMG=_C('GL_RENDERBUFFER_SAMPLES_IMG',0x9133)
GL_TEXTURE_SAMPLES_IMG=_C('GL_TEXTURE_SAMPLES_IMG',0x9136)
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,_cs.GLenum,_cs.GLuint,_cs.GLint,_cs.GLsizei)
def glFramebufferTexture2DMultisampleIMG(target,attachment,textarget,texture,level,samples):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLsizei,_cs.GLenum,_cs.GLsizei,_cs.GLsizei)
def glRenderbufferStorageMultisampleIMG(target,samples,internalformat,width,height):pass
# File: OpenGLWrapper_JE/venv/Lib/site-packages/OpenGL/GL/KHR/blend_equation_advanced.py (JE-Chen/je_old_repo, MIT)
'''OpenGL extension KHR.blend_equation_advanced
This module customises the behaviour of the
OpenGL.raw.GL.KHR.blend_equation_advanced to provide a more
Python-friendly API
Overview (from the spec)
This extension adds a number of "advanced" blending equations that can be
used to perform new color blending operations, many of which are more
complex than the standard blend modes provided by unextended OpenGL. This
extension provides two different extension string entries:
- KHR_blend_equation_advanced: Provides the new blending equations, but
guarantees defined results only if each sample is touched no more than
once in any single rendering pass. The command BlendBarrierKHR() is
provided to indicate a boundary between passes.
- KHR_blend_equation_advanced_coherent: Provides the new blending
equations, and guarantees that blending is done coherently and in API
primitive order. An enable is provided to allow implementations to opt
out of fully coherent blending and instead behave as though only
KHR_blend_equation_advanced were supported.
Some implementations may support KHR_blend_equation_advanced without
supporting KHR_blend_equation_advanced_coherent.
In unextended OpenGL, the set of blending equations is limited, and can be
expressed very simply. The MIN and MAX blend equations simply compute
component-wise minimums or maximums of source and destination color
components. The FUNC_ADD, FUNC_SUBTRACT, and FUNC_REVERSE_SUBTRACT
multiply the source and destination colors by source and destination
factors and either add the two products together or subtract one from the
other. This limited set of operations supports many common blending
operations but precludes the use of more sophisticated transparency and
blending operations commonly available in many dedicated imaging APIs.
This extension provides a number of new "advanced" blending equations.
Unlike traditional blending operations using the FUNC_ADD equation, these
blending equations do not use source and destination factors specified by
BlendFunc. Instead, each blend equation specifies a complete equation
based on the source and destination colors. These new blend equations are
used for both RGB and alpha components; they may not be used to perform
separate RGB and alpha blending (via functions like
BlendEquationSeparate).
These blending operations are performed using premultiplied source and
destination colors, where RGB colors produced by the fragment shader and
stored in the framebuffer are considered to be multiplied by alpha
(coverage). Many of these advanced blending equations are formulated
where the result of blending source and destination colors with partial
coverage have three separate contributions: from the portions covered by
both the source and the destination, from the portion covered only by the
source, and from the portion covered only by the destination. Such
equations are defined assuming that the source and destination coverage
have no spatial correlation within the pixel.
In addition to the coherency issues on implementations not supporting
KHR_blend_equation_advanced_coherent, this extension has several
limitations worth noting. First, the new blend equations are not
supported while rendering to more than one color buffer at once; an
INVALID_OPERATION will be generated if an application attempts to render
any primitives in this unsupported configuration. Additionally, blending
precision may be limited to 16-bit floating-point, which could result in a
loss of precision and dynamic range for framebuffer formats with 32-bit
floating-point components, and in a loss of precision for formats with 12-
and 16-bit signed or unsigned normalized integer components.
The official definition of this extension is available here:
http://www.opengl.org/registry/specs/KHR/blend_equation_advanced.txt
'''
from OpenGL import platform, constant, arrays
from OpenGL import extensions, wrapper
import ctypes
from OpenGL.raw.GL import _types, _glgets
from OpenGL.raw.GL.KHR.blend_equation_advanced import *
from OpenGL.raw.GL.KHR.blend_equation_advanced import _EXTENSION_NAME
def glInitBlendEquationAdvancedKHR():
'''Return boolean indicating whether this extension is available'''
from OpenGL import extensions
return extensions.hasGLExtension( _EXTENSION_NAME )
### END AUTOGENERATED SECTION
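The coverage-partition rule described in the docstring above can be sketched numerically. This is an illustrative calculation, not PyOpenGL API: for an advanced blend equation f over unpremultiplied colors, the (premultiplied) result sums a contribution from the region covered by both source and destination, one from the source-only region, and one from the destination-only region, assuming the two coverages are spatially uncorrelated.

```python
def advanced_blend(f, cs, a_s, cd, a_d):
    # cs/cd: unpremultiplied source/destination color components
    # a_s/a_d: source/destination coverage (alpha)
    p_both = a_s * a_d           # covered by both: blended with f
    p_src = a_s * (1.0 - a_d)    # covered by source only: source passes through
    p_dst = a_d * (1.0 - a_s)    # covered by destination only: dest passes through
    color = f(cs, cd) * p_both + cs * p_src + cd * p_dst  # premultiplied output
    alpha = p_both + p_src + p_dst
    return color, alpha

def multiply(cs, cd):            # one of the advanced equations (MULTIPLY)
    return cs * cd

# with full coverage on both sides the result reduces to a plain multiply
c, a = advanced_blend(multiply, 0.5, 1.0, 0.4, 1.0)
```

With `a_d = 0` the destination region vanishes and the source color passes through unchanged, which is the intent of the partitioned formulation.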
# File: alphalens/__init__.py (liudengfeng/alphalens, Apache-2.0)
from . import performance
from . import plotting
from . import tears
from . import utils
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
__all__ = ['performance', 'plotting', 'tears', 'utils']
# File: ejercicios_python/Clase08/practica8-2.py (hcgalvan/UNSAM-Python-programming, MIT)
# -*- coding: utf-8 -*-
"""
Created on Thu Oct 21 18:04:04 2021
@author: User
"""
| 10.5 | 35 | 0.559524 | 14 | 84 | 3.357143 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19697 | 0.214286 | 84 | 7 | 36 | 12 | 0.515152 | 0.869048 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
ed07559c5a2fb64653fea206c153b114dd809094 | 238 | py | Python | areahood/admin.py | naiyoma/Hood | 7920b077ce3118332b4477b7d55d3ced2c7702c8 | [
"Unlicense"
] | null | null | null | areahood/admin.py | naiyoma/Hood | 7920b077ce3118332b4477b7d55d3ced2c7702c8 | [
"Unlicense"
] | null | null | null | areahood/admin.py | naiyoma/Hood | 7920b077ce3118332b4477b7d55d3ced2c7702c8 | [
"Unlicense"
] | null | null | null | from django.contrib import admin
from .models import Profile,Post,Neighbourhood,Location
# Register your models here.
admin.site.register(Profile)
admin.site.register(Post)
admin.site.register(Neighbourhood)
admin.site.register(Location)
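For readers unfamiliar with what `admin.site.register` does: the site object is essentially a registry keyed by model class that refuses duplicates. Below is a simplified stand-in, not Django's actual implementation; the `Profile` class here is a hypothetical placeholder, not the model imported above.

```python
class AlreadyRegistered(Exception):
    pass

class AdminSite:
    # minimal stand-in for django.contrib.admin.site (illustrative only)
    def __init__(self):
        self._registry = {}

    def register(self, model, admin_class=None):
        if model in self._registry:
            raise AlreadyRegistered('%s is already registered' % model.__name__)
        self._registry[model] = admin_class

site = AdminSite()

class Profile:      # hypothetical placeholder model
    pass

site.register(Profile)
```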
# File: linkml_runtime/dumpers/__init__.py (deepakunni3/linkml-runtime, CC0-1.0)
from linkml_runtime.dumpers.json_dumper import JSONDumper
from linkml_runtime.dumpers.rdf_dumper import RDFDumper
from linkml_runtime.dumpers.yaml_dumper import YAMLDumper
from linkml_runtime.dumpers.csv_dumper import CSVDumper
json_dumper = JSONDumper()
rdf_dumper = RDFDumper()
yaml_dumper = YAMLDumper()
csv_dumper = CSVDumper()
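The module exposes one ready-made instance per dumper class so callers can write `json_dumper.dumps(obj)` without instantiating anything. A generic sketch of that pattern follows; these classes are illustrative, not linkml-runtime's real implementations.

```python
import json

class Dumper:
    def dumps(self, element):
        raise NotImplementedError   # each subclass picks a serialization

class JSONDumper(Dumper):
    def dumps(self, element):
        return json.dumps(element, indent=2)

json_dumper = JSONDumper()          # module-level singleton, mirroring the pattern above

serialized = json_dumper.dumps({'name': 'example', 'count': 3})
```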
# File: contractor/lib/config_test.py (T3kton/contractor, Apache-2.0)
import pytest
from datetime import datetime, timedelta, timezone
from django.core.exceptions import ValidationError
from contractor.Site.models import Site
from contractor.BluePrint.models import StructureBluePrint, FoundationBluePrint
from contractor.Building.models import Foundation, Structure
from contractor.lib.config import _updateConfig, mergeValues, getConfig, renderTemplate
def _strip_base( value ):
for i in ( '__contractor_host', '__pxe_location', '__pxe_template_location', '__last_modified', '__timestamp', '_structure_config_uuid' ):
try:
del value[ i ]
except KeyError:
pass
return value
def test_string():
config = {}
_updateConfig( {}, [], config )
assert config == {}
_updateConfig( { 'bob': '1 adsf' }, [], config )
assert config == { 'bob': '1 adsf' }
_updateConfig( { '<bob': '1 ' }, [], config )
assert config == { 'bob': '1 1 adsf' }
_updateConfig( { '>bob': ' 8' }, [], config )
assert config == { 'bob': '1 1 adsf 8' }
_updateConfig( { '>bob': [ 9 ] }, [], config )
assert config == { 'bob': '1 1 adsf 8[9]' }
_updateConfig( { '>bob': { 'a': 99 } }, [], config )
assert config == { 'bob': "1 1 adsf 8[9]{'a': 99}" }
_updateConfig( { '-bob': "{'a': 99}" }, [], config )
assert config == { 'bob': '1 1 adsf 8[9]' }
_updateConfig( { '-bob': '1 ' }, [], config )
assert config == { 'bob': '1 adsf 8[9]' }
_updateConfig( { '-bob': 'zzz' }, [], config )
assert config == { 'bob': '1 adsf 8[9]' }
_updateConfig( { 'bob': 'zzz' }, [], config )
assert config == { 'bob': 'zzz' }
_updateConfig( { '~bob': 'zzz' }, [], config )
assert config == {}
_updateConfig( { '~bob': 'zzz' }, [], config )
assert config == {}
_updateConfig( { '<bob': '1' }, [], config )
assert config == { 'bob': '1' }
_updateConfig( { '~bob': 'zzz' }, [], config )
assert config == {}
_updateConfig( { '<bob': '2' }, [], config )
assert config == { 'bob': '2' }
def test_list():
config = {}
_updateConfig( {}, [], config )
assert config == {}
_updateConfig( { 'bob': [ 1, 2, 2 ] }, [], config )
assert config == { 'bob': [ 1, 2, 2 ] }
_updateConfig( { '<bob': [ 0 ] }, [], config )
assert config == { 'bob': [ 0, 1, 2, 2 ] }
_updateConfig( { '<bob': 0 }, [], config )
assert config == { 'bob': [ 0, 0, 1, 2, 2 ] }
_updateConfig( { '>bob': [ 7, 8 ] }, [], config )
assert config == { 'bob': [ 0, 0, 1, 2, 2, 7, 8 ] }
_updateConfig( { '>bob': 9 }, [], config )
assert config == { 'bob': [ 0, 0, 1, 2, 2, 7, 8, 9 ] }
_updateConfig( { '<bob': { 's': '***' } }, [], config )
assert config == { 'bob': [ { 's': '***' }, 0, 0, 1, 2, 2, 7, 8, 9 ] }
_updateConfig( { '-bob': { 's': '***' } }, [], config )
assert config == { 'bob': [ 0, 0, 1, 2, 2, 7, 8, 9 ] }
_updateConfig( { '-bob': [ 7 ] }, [], config )
assert config == { 'bob': [ 0, 0, 1, 2, 2, 8, 9 ] }
_updateConfig( { '-bob': [ 1, 2 ] }, [], config )
assert config == { 'bob': [ 0, 0, 2, 8, 9 ] }
_updateConfig( { '-bob': [ 1, 2 ] }, [], config )
assert config == { 'bob': [ 0, 0, 8, 9 ] }
_updateConfig( { '-bob': 9 }, [], config )
assert config == { 'bob': [ 0, 0, 8 ] }
_updateConfig( { '-bob': 9 }, [], config )
assert config == { 'bob': [ 0, 0, 8 ] }
_updateConfig( { 'bob': [ 'zzz' ] }, [], config )
assert config == { 'bob': [ 'zzz' ] }
_updateConfig( { '~bob': [ 'zzz' ] }, [], config )
assert config == {}
_updateConfig( { '~bob': [ 'zzz' ] }, [], config )
assert config == {}
_updateConfig( { '<bob': [ 'a', 'b' ] }, [], config )
assert config == { 'bob': [ 'a', 'b' ] }
_updateConfig( { '~bob': [ 'zzz' ] }, [], config )
assert config == {}
_updateConfig( { '>bob': [ 'y', 'z' ] }, [], config )
assert config == { 'bob': [ 'y', 'z' ] }
def test_dict():
config = {}
_updateConfig( {}, [], config )
assert config == {}
_updateConfig( { 'bob': { 'a': 1, 'b': 2 } }, [], config )
assert config == { 'bob': { 'a': 1, 'b': 2 } }
_updateConfig( { '<bob': { 'c': '3' } }, [], config )
assert config == { 'bob': { 'a': 1, 'b': 2, 'c': '3' } }
with pytest.raises( ValueError ):
_updateConfig( { '<bob': 'asdf' }, [], config )
with pytest.raises( ValueError ):
_updateConfig( { '<bob': [ 'asdf' ] }, [], config )
_updateConfig( { '>bob': { 'd': '4' } }, [], config )
assert config == { 'bob': { 'a': 1, 'b': 2, 'c': '3', 'd': '4' } }
with pytest.raises( ValueError ):
_updateConfig( { '>bob': 'asdf' }, [], config )
with pytest.raises( ValueError ):
_updateConfig( { '>bob': [ 'asdf' ] }, [], config )
_updateConfig( { '-bob': 'd' }, [], config )
assert config == { 'bob': { 'a': 1, 'b': 2, 'c': '3' } }
_updateConfig( { '-bob': 'd' }, [], config )
assert config == { 'bob': { 'a': 1, 'b': 2, 'c': '3' } }
_updateConfig( { '-bob': [ 'b', 'c' ] }, [], config )
assert config == { 'bob': { 'a': 1 } }
_updateConfig( { '-bob': [ 'b', 'c' ] }, [], config )
assert config == { 'bob': { 'a': 1 } }
_updateConfig( { '~bob': { 'z': 'zzz' } }, [], config )
assert config == {}
_updateConfig( { '~bob': { 'z': 'zzz' } }, [], config )
assert config == {}
_updateConfig( { '<bob': { 'a': 'aaa' } }, [], config )
assert config == { 'bob': { 'a': 'aaa' } }
_updateConfig( { '~bob': { 'z': 'zzz' } }, [], config )
assert config == {}
_updateConfig( { '<bob': { 'b': 'bbb' } }, [], config )
assert config == { 'bob': { 'b': 'bbb' } }
def test_class():
config = {}
_updateConfig( {}, [], config )
assert config == {}
_updateConfig( { 'bob': 'adsf' }, [], config )
assert config == { 'bob': 'adsf' }
_updateConfig( { 'jane:joe': 'qwert' }, [], config )
assert config == { 'bob': 'adsf' }
_updateConfig( { 'jane:joe': 'qwert' }, [ 'joe' ], config )
assert config == { 'bob': 'adsf', 'jane': 'qwert' }
_updateConfig( { '<bob:joe': '1' }, [], config )
assert config == { 'bob': 'adsf', 'jane': 'qwert' }
_updateConfig( { '<bob:joe': '1' }, [ 'joe' ], config )
assert config == { 'bob': '1adsf', 'jane': 'qwert' }
_updateConfig( { 'bob:joe': '1' }, [ 'joe' ], config )
assert config == { 'bob': '1', 'jane': 'qwert' }
with pytest.raises( ValueError ):
_updateConfig( { 'bob:jo-e': '2' }, [ 'jo-e' ], config )
_updateConfig( { 'bo-b:joe': '3' }, [ 'joe' ], config )
assert config == { 'bo-b': '3', 'bob': '1', 'jane': 'qwert' }
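The operator semantics these tests pin down (`<` prepend, `>` append, `-` subtract, `~` delete, and `:class` qualifiers) can be summarized with a small reimplementation. This is a sketch inferred from the assertions above, not the real `contractor.lib.config._updateConfig`, which also validates key and class names and raises on malformed input.

```python
def update_config(overlay, class_list, config):
    for raw_key, value in overlay.items():
        key = raw_key
        if ':' in key:                      # class-qualified key, e.g. 'jane:joe'
            key, klass = key.split(':', 1)
            if klass not in class_list:
                continue                    # qualifier does not apply here
        op = ''
        if key and key[0] in '<>-~':
            op, key = key[0], key[1:]
        if op == '~':                       # delete the key outright
            config.pop(key, None)
            continue
        if op == '' or key not in config:
            if op != '-':                   # plain set, or prepend/append to nothing
                config[key] = value
            continue
        current = config[key]
        if isinstance(current, str):
            sval = value if isinstance(value, str) else str(value)
            if op == '<':
                config[key] = sval + current
            elif op == '>':
                config[key] = current + sval
            else:                           # '-': drop first occurrence of substring
                config[key] = current.replace(sval, '', 1)
        elif isinstance(current, list):
            items = value if isinstance(value, list) else [value]
            if op == '<':
                config[key] = items + current
            elif op == '>':
                config[key] = current + items
            else:                           # '-': remove first occurrence of each item
                for item in items:
                    if item in current:
                        current.remove(item)
        elif isinstance(current, dict):
            if op == '-':
                for k in (value if isinstance(value, list) else [value]):
                    current.pop(k, None)
            elif not isinstance(value, dict):
                raise ValueError('can only merge a dict into a dict')
            elif op == '<':                 # '<': existing values win
                merged = dict(value)
                merged.update(current)
                config[key] = merged
            else:                           # '>': overlay values win
                current.update(value)
```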
def test_mergeValues():
values = { 'a': 'c' }
assert { 'a': 'c' } == mergeValues( values )
assert { 'a': 'c' } == values
values = { 'a': 3 }
assert { 'a': 3 } == mergeValues( values )
assert { 'a': 3 } == values
values = { 'a': 3, 'b': { 'c': 3 } }
assert { 'a': 3, 'b': { 'c': 3 } } == mergeValues( values )
assert { 'a': 3, 'b': { 'c': 3 } } == values
values = { 'a': 3, 'b': [ 1, '3', 2 ] }
assert { 'a': 3, 'b': [ 1, '3', 2 ] } == mergeValues( values )
assert { 'a': 3, 'b': [ 1, '3', 2 ] } == values
values = { 'a': 'c', 'd': '{{a}}' }
assert { 'a': 'c', 'd': 'c' } == mergeValues( values )
assert { 'a': 'c', 'd': '{{a}}' } == values
values = { 'a': 1, 'd': '{{a}}', 'e': '"{{a}}"' }
assert { 'a': 1, 'd': '1', 'e': '"1"' } == mergeValues( values )
assert { 'a': 1, 'd': '{{a}}', 'e': '"{{a}}"' } == values
values = { 'a': 'c', 'd': '{{ a }}' }
assert { 'a': 'c', 'd': 'c' } == mergeValues( values )
assert { 'a': 'c', 'd': '{{ a }}' } == values
values = { 'a': 'c', 'd': '{{z|default(\'v\')}}' }
assert { 'a': 'c', 'd': 'v' } == mergeValues( values )
assert { 'a': 'c', 'd': '{{z|default(\'v\')}}' } == values
values = { 'a': 'c', 'd': '{{a|default(\'v\')}}' }
assert { 'a': 'c', 'd': 'c' } == mergeValues( values )
assert { 'a': 'c', 'd': '{{a|default(\'v\')}}' } == values
values = { 'a': 'c', 'd': { 'bob': 'sdf', 'sally': '{{a}}' } }
assert { 'a': 'c', 'd': { 'bob': 'sdf', 'sally': 'c' } } == mergeValues( values )
assert { 'a': 'c', 'd': { 'bob': 'sdf', 'sally': '{{a}}' } } == values
values = { 'a': 'c', 'b': 'f', 'd': [ '{{a}}', '{{b}}', '{{a}}' ] }
assert { 'a': 'c', 'b': 'f', 'd': [ 'c', 'f', 'c' ] } == mergeValues( values )
assert { 'a': 'c', 'b': 'f', 'd': [ '{{a}}', '{{b}}', '{{a}}' ] } == values
values = { 'zone': 'local', 'dns_search': [ 'bob.{{zone}}', '{{zone}}' ], 'fqdn': 'mybox.{{zone}}' }
assert { 'zone': 'local', 'dns_search': [ 'bob.local', 'local' ], 'fqdn': 'mybox.local' } == mergeValues( values )
assert { 'zone': 'local', 'dns_search': [ 'bob.{{zone}}', '{{zone}}' ], 'fqdn': 'mybox.{{zone}}' } == values
  # make sure we are not crossing item boundaries
values = { 'a': '{', 'b': '{a', 'c': '{', 'd': '{', 'e': '}}' }
assert { 'a': '{', 'b': '{a', 'c': '{', 'd': '{', 'e': '}}' } == mergeValues( values )
assert { 'a': '{', 'b': '{a', 'c': '{', 'd': '{', 'e': '}}' } == values
values = { 'a': 'c', 'b': 'a', 'd': '{{ "{{" }}{{b}}}}' }
assert { 'a': 'c', 'b': 'a', 'd': 'c' } == mergeValues( values )
assert { 'a': 'c', 'b': 'a', 'd': '{{ "{{" }}{{b}}}}' } == values
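`mergeValues` resolves `{{name}}` references against the mapping itself while leaving the input untouched. The real implementation uses Jinja templates (hence the `default(...)` filter and `{{ "{{" }}` escaping in the tests above); the stdlib sketch below is an approximation covering only the bare-variable cases.

```python
import re

_VAR = re.compile(r'\{\{\s*([A-Za-z_][A-Za-z0-9_]*)\s*\}\}')

def merge_values(values):
    # substitute {{name}} from the same mapping, without mutating the input
    def subst(text):
        return _VAR.sub(lambda m: str(values.get(m.group(1), m.group(0))), text)

    def walk(item):
        if isinstance(item, str):
            prev = None
            while prev != item:          # one substitution may expose another
                prev, item = item, subst(item)
            return item
        if isinstance(item, list):
            return [walk(i) for i in item]
        if isinstance(item, dict):
            return {k: walk(v) for k, v in item.items()}
        return item

    return {k: walk(v) for k, v in values.items()}
```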
def test_render():
assert renderTemplate( 'This is a test', {} ) == 'This is a test'
config = { 'i': 'is only' }
assert renderTemplate( 'This {{i}} {{j|default(\'a\')}} test', config ) == 'This is only a test'
assert renderTemplate( 'This {{i}} {{j}} test', config ) == 'This is only test'
assert renderTemplate( 'This does not {{exist|tojson}}', {} ) == 'This does not ""'
assert renderTemplate( 'This {{i|tojson}}', { 'i': [ 1, "sdf", [ 2, 3 ], { 'a': 'sdf' }, None, datetime.min ] } ) == 'This [1, "sdf", [2, 3], {"a": "sdf"}, null, "0001-01-01T00:00:00"]'
@pytest.mark.django_db
def test_valid_names():
s1 = Site( name='site1', description='test site 1' )
s1.full_clean()
s1.save()
s1.config_values[ 'valid' ] = 'value 1'
s1.full_clean()
s1.config_values[ '_nope' ] = 'more value 1'
with pytest.raises( ValidationError ):
s1.full_clean()
s1 = Site.objects.get( name='site1' )
s1.full_clean()
s1.config_values[ '__bad' ] = 'bad bad bad 1'
with pytest.raises( ValidationError ):
s1.full_clean()
del s1.config_values[ '__bad' ]
fb1 = FoundationBluePrint( name='fdnb1', description='Foundation BluePrint 1' )
fb1.foundation_type_list = [ 'Unknown' ]
fb1.full_clean()
fb1.save()
fb1.config_values[ 'valid' ] = 'value 2'
fb1.full_clean()
fb1.config_values[ '_nope' ] = 'more value 2'
with pytest.raises( ValidationError ):
fb1.full_clean()
fb1 = FoundationBluePrint.objects.get( pk='fdnb1' )
fb1.full_clean()
fb1.config_values[ '__bad' ] = 'bad bad bad 2'
with pytest.raises( ValidationError ):
fb1.full_clean()
del fb1.config_values[ '__bad' ]
fb1 = FoundationBluePrint.objects.get( pk='fdnb1' )
fb1.full_clean()
sb1 = StructureBluePrint( name='strb1', description='Structure BluePrint 1' )
sb1.full_clean()
sb1.save()
sb1.foundation_blueprint_list.add( fb1 )
sb1.config_values[ 'valid' ] = 'value 3'
sb1.full_clean()
sb1.config_values[ '_nope' ] = 'more value 3'
with pytest.raises( ValidationError ):
sb1.full_clean()
sb1 = StructureBluePrint.objects.get( name='strb1' )
sb1.full_clean()
sb1.config_values[ '__bad' ] = 'bad bad bad 3'
with pytest.raises( ValidationError ):
sb1.full_clean()
del sb1.config_values[ '__bad' ]
f1 = Foundation( site=s1, locator='fdn1', blueprint=fb1 )
f1.full_clean()
f1.save()
str1 = Structure( foundation=f1, site=s1, hostname='struct1', blueprint=sb1 )
str1.full_clean()
str1.save()
str1.config_values[ 'valid' ] = 'value 4'
str1.full_clean()
str1.config_values[ '_nope' ] = 'more value 4'
with pytest.raises( ValidationError ):
str1.full_clean()
str1 = Structure.objects.get( hostname='struct1' )
str1.full_clean()
str1.config_values[ '__bad' ] = 'bad bad bad 4'
with pytest.raises( ValidationError ):
str1.full_clean()
def test_getconfig():
with pytest.raises( ValueError ):
getConfig( object() )
@pytest.mark.django_db
def test_site():
s1 = Site( name='site1', description='test site 1' )
s1.config_values = {}
s1.full_clean()
s1.save()
now = datetime.now( timezone.utc )
tmp = getConfig( s1 )
assert now - tmp[ '__last_modified' ] < timedelta( seconds=5 )
assert now - tmp[ '__timestamp' ] < timedelta( seconds=5 )
del tmp[ '__last_modified' ]
del tmp[ '__timestamp' ]
del tmp[ '__pxe_template_location' ]
assert tmp == {
'__contractor_host': 'http://contractor/',
'__pxe_location': 'http://static/pxe/',
'_site': 'site1'
}
assert _strip_base( getConfig( s1 ) ) == {
'_site': 'site1'
}
s2 = Site( name='site2', description='test site 2', parent=s1 )
s2.config_values = { 'myval': 'this is a test', 'stuff': 'this is only a test' }
s2.full_clean()
s2.save()
assert _strip_base( getConfig( s2 ) ) == {
'_site': 'site2',
'myval': 'this is a test',
'stuff': 'this is only a test'
}
s1.config_values = { 'under': 'or over' }
s1.full_clean()
s1.save()
assert _strip_base( getConfig( s2 ) ) == {
'_site': 'site2',
'myval': 'this is a test',
'stuff': 'this is only a test',
'under': 'or over'
}
s2.config_values[ '>under' ] = ' here'
s2.config_values[ '<under' ] = 'going '
s2.full_clean()
s2.save()
assert _strip_base( getConfig( s2 ) ) == {
'_site': 'site2',
'myval': 'this is a test',
'stuff': 'this is only a test',
'under': 'going or over here'
}
@pytest.mark.django_db
def test_blueprint():
fb1 = FoundationBluePrint( name='fdnb1', description='Foundation BluePrint 1' )
fb1.foundation_type_list = [ 'Unknown' ]
fb1.full_clean()
fb1.save()
now = datetime.now( timezone.utc )
tmp = getConfig( fb1 )
assert now - tmp[ '__last_modified' ] < timedelta( seconds=5 )
assert now - tmp[ '__timestamp' ] < timedelta( seconds=5 )
del tmp[ '__last_modified' ]
del tmp[ '__timestamp' ]
del tmp[ '__pxe_template_location' ]
assert tmp == {
'__contractor_host': 'http://contractor/',
'__pxe_location': 'http://static/pxe/',
'_blueprint': 'fdnb1'
}
assert _strip_base( getConfig( fb1 ) ) == {
'_blueprint': 'fdnb1'
}
fb2 = FoundationBluePrint( name='fdnb2', description='Foundation BluePrint 2' )
fb2.foundation_type_list = [ 'Unknown' ]
fb2.config_values = { 'the': 'Nice value' }
fb2.full_clean()
fb2.save()
fb2.parent_list.add( fb1 )
assert _strip_base( getConfig( fb2 ) ) == {
'_blueprint': 'fdnb2',
'the': 'Nice value'
}
fb1.config_values = { 'from': 'inside the house' }
fb1.full_clean()
fb1.save()
assert _strip_base( getConfig( fb2 ) ) == {
'_blueprint': 'fdnb2',
'the': 'Nice value',
'from': 'inside the house'
}
sb1 = StructureBluePrint( name='strb1', description='Structure BluePrint 1' )
sb1.full_clean()
sb1.save()
now = datetime.now( timezone.utc )
tmp = getConfig( sb1 )
assert now - tmp[ '__last_modified' ] < timedelta( seconds=5 )
assert now - tmp[ '__timestamp' ] < timedelta( seconds=5 )
del tmp[ '__last_modified' ]
del tmp[ '__timestamp' ]
del tmp[ '__pxe_template_location' ]
assert tmp == {
'__contractor_host': 'http://contractor/',
'__pxe_location': 'http://static/pxe/',
'_blueprint': 'strb1'
}
assert _strip_base( getConfig( sb1 ) ) == {
'_blueprint': 'strb1'
}
sb2 = StructureBluePrint( name='strb2', description='Structure BluePrint 2' )
sb2.full_clean()
sb2.save()
sb2.parent_list.add( sb1 )
assert _strip_base( getConfig( sb2 ) ) == {
'_blueprint': 'strb2'
}
sb2.foundation_blueprint_list.add( fb2 )
assert _strip_base( getConfig( sb2 ) ) == {
'_blueprint': 'strb2'
}
@pytest.mark.django_db
def test_foundation():
s1 = Site( name='site1', description='test site 1' )
s1.config_values = {}
s1.full_clean()
s1.save()
fb1 = FoundationBluePrint( name='fdnb1', description='Foundation BluePrint 1' )
fb1.foundation_type_list = [ 'Unknown' ]
fb1.full_clean()
fb1.save()
f1 = Foundation( site=s1, locator='fdn1', blueprint=fb1 )
f1.full_clean()
f1.save()
now = datetime.now( timezone.utc )
tmp = getConfig( f1 )
assert now - tmp[ '__last_modified' ] < timedelta( seconds=5 )
assert now - tmp[ '__timestamp' ] < timedelta( seconds=5 )
del tmp[ '__last_modified' ]
del tmp[ '__timestamp' ]
del tmp[ '__pxe_template_location' ]
assert tmp == {
'__contractor_host': 'http://contractor/',
'__pxe_location': 'http://static/pxe/',
'_foundation_class_list': [],
'_foundation_id': 'fdn1',
'_foundation_id_map': None,
'_foundation_interface_list': [],
'_foundation_locator': 'fdn1',
'_foundation_state': 'planned',
'_foundation_type': 'Unknown',
'_provisioning_interface': None,
'_provisioning_interface_mac': None,
'_blueprint': 'fdnb1',
'_site': 'site1'
}
assert _strip_base( getConfig( f1 ) ) == {
'_foundation_class_list': [],
'_foundation_id': 'fdn1',
'_foundation_id_map': None,
'_foundation_interface_list': [],
'_foundation_locator': 'fdn1',
'_foundation_state': 'planned',
'_foundation_type': 'Unknown',
'_provisioning_interface': None,
'_provisioning_interface_mac': None,
'_blueprint': 'fdnb1',
'_site': 'site1'
}
fb1.config_values[ 'lucky' ] = 'blueprint'
fb1.full_clean()
fb1.save()
assert _strip_base( getConfig( f1 ) ) == {
'_foundation_class_list': [],
'_foundation_id': 'fdn1',
'_foundation_id_map': None,
'_foundation_interface_list': [],
'_foundation_locator': 'fdn1',
'_foundation_state': 'planned',
'_foundation_type': 'Unknown',
'_provisioning_interface': None,
'_provisioning_interface_mac': None,
'_blueprint': 'fdnb1',
'_site': 'site1',
'lucky': 'blueprint'
}
s1.config_values[ 'lucky' ] = 'site'
s1.full_clean()
s1.save()
assert _strip_base( getConfig( f1 ) ) == {
'_foundation_class_list': [],
'_foundation_id': 'fdn1',
'_foundation_id_map': None,
'_foundation_interface_list': [],
'_foundation_locator': 'fdn1',
'_foundation_state': 'planned',
'_foundation_type': 'Unknown',
'_provisioning_interface': None,
'_provisioning_interface_mac': None,
'_blueprint': 'fdnb1',
'_site': 'site1',
'lucky': 'site'
}
@pytest.mark.django_db
def test_structure():
s1 = Site( name='site1', description='test site 1' )
s1.full_clean()
s1.save()
fb1 = FoundationBluePrint( name='fdnb1', description='Foundation BluePrint 1' )
fb1.foundation_type_list = [ 'Unknown' ]
fb1.full_clean()
fb1.save()
f1 = Foundation( site=s1, locator='fdn1', blueprint=fb1 )
f1.full_clean()
f1.save()
sb1 = StructureBluePrint( name='strb1', description='Structure BluePrint 1' )
sb1.full_clean()
sb1.save()
sb1.foundation_blueprint_list.add( fb1 )
str1 = Structure( foundation=f1, site=s1, hostname='struct1', blueprint=sb1 )
str1.full_clean()
str1.save()
now = datetime.now( timezone.utc )
tmp = getConfig( str1 )
assert now - tmp[ '__last_modified' ] < timedelta( seconds=5 )
assert now - tmp[ '__timestamp' ] < timedelta( seconds=5 )
del tmp[ '__last_modified' ]
del tmp[ '__timestamp' ]
del tmp[ '_structure_config_uuid' ]
del tmp[ '__pxe_template_location' ]
assert tmp == {
'__contractor_host': 'http://contractor/',
'__pxe_location': 'http://static/pxe/',
'_foundation_class_list': [],
'_foundation_id': 'fdn1',
'_foundation_id_map': None,
'_foundation_interface_list': [],
'_foundation_locator': 'fdn1',
'_foundation_state': 'planned',
'_foundation_type': 'Unknown',
'_provisioning_interface': None,
'_provisioning_interface_mac': None,
'_provisioning_address': None,
'_primary_interface': None,
'_primary_interface_mac': None,
'_primary_address': None,
'_structure_id': str1.pk,
'_structure_state': 'planned',
'_fqdn': 'struct1',
'_hostname': 'struct1',
'_domain_name': None,
'_interface_map': {},
'_blueprint': 'strb1',
'_site': 'site1'
}
assert _strip_base( getConfig( str1 ) ) == {
'_foundation_class_list': [],
'_foundation_id': 'fdn1',
'_foundation_id_map': None,
'_foundation_interface_list': [],
'_foundation_locator': 'fdn1',
'_foundation_state': 'planned',
'_foundation_type': 'Unknown',
'_provisioning_interface': None,
'_provisioning_interface_mac': None,
'_provisioning_address': None,
'_primary_interface': None,
'_primary_interface_mac': None,
'_primary_address': None,
'_structure_id': str1.pk,
'_structure_state': 'planned',
'_fqdn': 'struct1',
'_hostname': 'struct1',
'_domain_name': None,
'_interface_map': {},
'_blueprint': 'strb1',
'_site': 'site1'
}
fb1.config_values[ 'bob' ] = 'foundation blueprint'
fb1.full_clean()
fb1.save()
assert _strip_base( getConfig( str1 ) ) == {
'_foundation_class_list': [],
'_foundation_id': 'fdn1',
'_foundation_id_map': None,
'_foundation_interface_list': [],
'_foundation_locator': 'fdn1',
'_foundation_state': 'planned',
'_foundation_type': 'Unknown',
'_provisioning_interface': None,
'_provisioning_interface_mac': None,
'_provisioning_address': None,
'_primary_interface': None,
'_primary_interface_mac': None,
'_primary_address': None,
'_structure_id': str1.pk,
'_structure_state': 'planned',
'_fqdn': 'struct1',
'_hostname': 'struct1',
'_domain_name': None,
'_interface_map': {},
'_blueprint': 'strb1',
'_site': 'site1'
}
sb1.config_values[ 'bob' ] = 'structure blueprint'
sb1.full_clean()
sb1.save()
assert _strip_base( getConfig( str1 ) ) == {
'_foundation_class_list': [],
'_foundation_id': 'fdn1',
'_foundation_id_map': None,
'_foundation_interface_list': [],
'_foundation_locator': 'fdn1',
'_foundation_state': 'planned',
'_foundation_type': 'Unknown',
'_provisioning_interface': None,
'_provisioning_interface_mac': None,
'_provisioning_address': None,
'_primary_interface': None,
'_primary_interface_mac': None,
'_primary_address': None,
'_structure_id': str1.pk,
'_structure_state': 'planned',
'_fqdn': 'struct1',
'_hostname': 'struct1',
'_domain_name': None,
'_interface_map': {},
'_blueprint': 'strb1',
'_site': 'site1',
'bob': 'structure blueprint'
}
str1.config_values[ 'bob' ] = 'structure'
str1.full_clean()
str1.save()
assert _strip_base( getConfig( str1 ) ) == {
'_foundation_class_list': [],
'_foundation_id': 'fdn1',
'_foundation_id_map': None,
'_foundation_interface_list': [],
'_foundation_locator': 'fdn1',
'_foundation_state': 'planned',
'_foundation_type': 'Unknown',
'_provisioning_interface': None,
'_provisioning_interface_mac': None,
'_provisioning_address': None,
'_primary_interface': None,
'_primary_interface_mac': None,
'_primary_address': None,
'_structure_id': str1.pk,
'_structure_state': 'planned',
'_fqdn': 'struct1',
'_hostname': 'struct1',
'_domain_name': None,
'_interface_map': {},
'_blueprint': 'strb1',
'_site': 'site1',
'bob': 'structure'
}
# File: kaztron/cog/modnotes/__init__.py (Laogeodritt/KazTron, MIT)
from .modnotes import ModNotes, ModNotesConfig
from .controller import init_db
def setup(bot):
init_db()
bot.add_cog(ModNotes(bot))
| 17.75 | 46 | 0.746479 | 20 | 142 | 5.15 | 0.6 | 0.116505 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161972 | 142 | 7 | 47 | 20.285714 | 0.865546 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
ed148a01b7e8b651bcc824ea3e058bc61cf18a44 | 132 | py | Python | dev/tools/rpc/rpctest.py | CrankySupertoon01/Toontown-2 | 60893d104528a8e7eb4aced5d0015f22e203466d | [
"MIT"
] | 1 | 2021-02-13T22:40:50.000Z | 2021-02-13T22:40:50.000Z | dev/tools/rpc/rpctest.py | CrankySupertoonArchive/Toontown-2 | 60893d104528a8e7eb4aced5d0015f22e203466d | [
"MIT"
] | 1 | 2018-07-28T20:07:04.000Z | 2018-07-30T18:28:34.000Z | dev/tools/rpc/rpctest.py | CrankySupertoonArchive/Toontown-2 | 60893d104528a8e7eb4aced5d0015f22e203466d | [
"MIT"
] | 2 | 2019-12-02T01:39:10.000Z | 2021-02-13T22:41:00.000Z | import pyjsonrpc
http_client = pyjsonrpc.HttpClient(url = 'http://localhost:8080')
print('Connected')
print(http_client.ping('test')) | 26.4 | 65 | 0.780303 | 17 | 132 | 5.941176 | 0.705882 | 0.19802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033058 | 0.083333 | 132 | 5 | 66 | 26.4 | 0.801653 | 0 | 0 | 0 | 0 | 0 | 0.255639 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.25 | null | null | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 4 |
ed2abfa948cd8cf531f21d69f0666950a15f4824 | 55 | py | Python | rx_scheduler/__init__.py | ryazantseff/rxpy-scheduler | afa75ce7db8b6dbe5e6127e87119d8fb33ade8dd | [
"MIT"
] | null | null | null | rx_scheduler/__init__.py | ryazantseff/rxpy-scheduler | afa75ce7db8b6dbe5e6127e87119d8fb33ade8dd | [
"MIT"
] | null | null | null | rx_scheduler/__init__.py | ryazantseff/rxpy-scheduler | afa75ce7db8b6dbe5e6127e87119d8fb33ade8dd | [
"MIT"
] | null | null | null | from .scheduler import Scheduler
from .task import Task | 27.5 | 32 | 0.836364 | 8 | 55 | 5.75 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127273 | 55 | 2 | 33 | 27.5 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
ed2cbef4b40bca393e92f6bbf9c1ee270c3c2da4 | 27 | py | Python | jexia_sdk/__init__.py | jexia/jexia-sdk-python | 6e934222f06ac7600f3f86940480899f570b226f | [
"MIT"
] | 3 | 2020-03-05T19:29:38.000Z | 2021-04-06T11:27:52.000Z | jexia_sdk/__init__.py | jexia/jexia-sdk-python | 6e934222f06ac7600f3f86940480899f570b226f | [
"MIT"
] | null | null | null | jexia_sdk/__init__.py | jexia/jexia-sdk-python | 6e934222f06ac7600f3f86940480899f570b226f | [
"MIT"
] | 1 | 2021-05-03T15:02:38.000Z | 2021-05-03T15:02:38.000Z | __version__ = '0.0.1.dev6'
| 13.5 | 26 | 0.666667 | 5 | 27 | 2.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 0.111111 | 27 | 1 | 27 | 27 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
ed2e70ed8fd15f5fc56a147ccd1f629f9e7af6d5 | 9,212 | py | Python | monitoring/google/cloud/monitoring_v3/proto/uptime_service_pb2_grpc.py | DaveCheez/google-cloud-python | fc03d4d41f13e9d13db7206438163b3a471fdabd | [
"Apache-2.0"
] | 2 | 2021-11-26T07:08:43.000Z | 2022-03-07T20:20:04.000Z | monitoring/google/cloud/monitoring_v3/proto/uptime_service_pb2_grpc.py | DaveCheez/google-cloud-python | fc03d4d41f13e9d13db7206438163b3a471fdabd | [
"Apache-2.0"
] | 6 | 2019-05-27T22:05:58.000Z | 2019-08-05T16:46:16.000Z | monitoring/google/cloud/monitoring_v3/proto/uptime_service_pb2_grpc.py | DaveCheez/google-cloud-python | fc03d4d41f13e9d13db7206438163b3a471fdabd | [
"Apache-2.0"
] | 1 | 2020-04-14T10:47:41.000Z | 2020-04-14T10:47:41.000Z | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc
from google.cloud.monitoring_v3.proto import (
uptime_pb2 as google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__pb2,
)
from google.cloud.monitoring_v3.proto import (
uptime_service_pb2 as google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2,
)
from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2
class UptimeCheckServiceStub(object):
"""The UptimeCheckService API is used to manage (list, create, delete, edit)
uptime check configurations in the Stackdriver Monitoring product. An uptime
check is a piece of configuration that determines which resources and
services to monitor for availability. These configurations can also be
configured interactively by navigating to the [Cloud Console]
(http://console.cloud.google.com), selecting the appropriate project,
clicking on "Monitoring" on the left-hand side to navigate to Stackdriver,
and then clicking on "Uptime".
"""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.ListUptimeCheckConfigs = channel.unary_unary(
"/google.monitoring.v3.UptimeCheckService/ListUptimeCheckConfigs",
request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.ListUptimeCheckConfigsRequest.SerializeToString,
response_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.ListUptimeCheckConfigsResponse.FromString,
)
self.GetUptimeCheckConfig = channel.unary_unary(
"/google.monitoring.v3.UptimeCheckService/GetUptimeCheckConfig",
request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.GetUptimeCheckConfigRequest.SerializeToString,
response_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__pb2.UptimeCheckConfig.FromString,
)
self.CreateUptimeCheckConfig = channel.unary_unary(
"/google.monitoring.v3.UptimeCheckService/CreateUptimeCheckConfig",
request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.CreateUptimeCheckConfigRequest.SerializeToString,
response_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__pb2.UptimeCheckConfig.FromString,
)
self.UpdateUptimeCheckConfig = channel.unary_unary(
"/google.monitoring.v3.UptimeCheckService/UpdateUptimeCheckConfig",
request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.UpdateUptimeCheckConfigRequest.SerializeToString,
response_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__pb2.UptimeCheckConfig.FromString,
)
self.DeleteUptimeCheckConfig = channel.unary_unary(
"/google.monitoring.v3.UptimeCheckService/DeleteUptimeCheckConfig",
request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.DeleteUptimeCheckConfigRequest.SerializeToString,
response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString,
)
self.ListUptimeCheckIps = channel.unary_unary(
"/google.monitoring.v3.UptimeCheckService/ListUptimeCheckIps",
request_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.ListUptimeCheckIpsRequest.SerializeToString,
response_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.ListUptimeCheckIpsResponse.FromString,
)
class UptimeCheckServiceServicer(object):
"""The UptimeCheckService API is used to manage (list, create, delete, edit)
uptime check configurations in the Stackdriver Monitoring product. An uptime
check is a piece of configuration that determines which resources and
services to monitor for availability. These configurations can also be
configured interactively by navigating to the [Cloud Console]
(http://console.cloud.google.com), selecting the appropriate project,
clicking on "Monitoring" on the left-hand side to navigate to Stackdriver,
and then clicking on "Uptime".
"""
def ListUptimeCheckConfigs(self, request, context):
"""Lists the existing valid uptime check configurations for the project,
leaving out any invalid configurations.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details("Method not implemented!")
raise NotImplementedError("Method not implemented!")
def GetUptimeCheckConfig(self, request, context):
"""Gets a single uptime check configuration.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details("Method not implemented!")
raise NotImplementedError("Method not implemented!")
def CreateUptimeCheckConfig(self, request, context):
"""Creates a new uptime check configuration.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details("Method not implemented!")
raise NotImplementedError("Method not implemented!")
def UpdateUptimeCheckConfig(self, request, context):
"""Updates an uptime check configuration. You can either replace the entire
configuration with a new one or replace only certain fields in the current
configuration by specifying the fields to be updated via `"updateMask"`.
Returns the updated configuration.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details("Method not implemented!")
raise NotImplementedError("Method not implemented!")
def DeleteUptimeCheckConfig(self, request, context):
"""Deletes an uptime check configuration. Note that this method will fail
if the uptime check configuration is referenced by an alert policy or
other dependent configs that would be rendered invalid by the deletion.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details("Method not implemented!")
raise NotImplementedError("Method not implemented!")
def ListUptimeCheckIps(self, request, context):
"""Returns the list of IPs that checkers run from
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details("Method not implemented!")
raise NotImplementedError("Method not implemented!")
def add_UptimeCheckServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
"ListUptimeCheckConfigs": grpc.unary_unary_rpc_method_handler(
servicer.ListUptimeCheckConfigs,
request_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.ListUptimeCheckConfigsRequest.FromString,
response_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.ListUptimeCheckConfigsResponse.SerializeToString,
),
"GetUptimeCheckConfig": grpc.unary_unary_rpc_method_handler(
servicer.GetUptimeCheckConfig,
request_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.GetUptimeCheckConfigRequest.FromString,
response_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__pb2.UptimeCheckConfig.SerializeToString,
),
"CreateUptimeCheckConfig": grpc.unary_unary_rpc_method_handler(
servicer.CreateUptimeCheckConfig,
request_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.CreateUptimeCheckConfigRequest.FromString,
response_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__pb2.UptimeCheckConfig.SerializeToString,
),
"UpdateUptimeCheckConfig": grpc.unary_unary_rpc_method_handler(
servicer.UpdateUptimeCheckConfig,
request_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.UpdateUptimeCheckConfigRequest.FromString,
response_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__pb2.UptimeCheckConfig.SerializeToString,
),
"DeleteUptimeCheckConfig": grpc.unary_unary_rpc_method_handler(
servicer.DeleteUptimeCheckConfig,
request_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.DeleteUptimeCheckConfigRequest.FromString,
response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString,
),
"ListUptimeCheckIps": grpc.unary_unary_rpc_method_handler(
servicer.ListUptimeCheckIps,
request_deserializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.ListUptimeCheckIpsRequest.FromString,
response_serializer=google_dot_cloud_dot_monitoring__v3_dot_proto_dot_uptime__service__pb2.ListUptimeCheckIpsResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
"google.monitoring.v3.UptimeCheckService", rpc_method_handlers
)
server.add_generic_rpc_handlers((generic_handler,))
| 57.937107 | 152 | 0.782566 | 989 | 9,212 | 6.853387 | 0.169869 | 0.058424 | 0.049572 | 0.060195 | 0.735173 | 0.725583 | 0.721452 | 0.63116 | 0.618176 | 0.618176 | 0 | 0.008144 | 0.160226 | 9,212 | 158 | 153 | 58.303797 | 0.86802 | 0.206253 | 0 | 0.313725 | 1 | 0 | 0.114067 | 0.070334 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078431 | false | 0 | 0.039216 | 0 | 0.137255 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
ed64e4ca72742e8a1d2fac5c5d26503ed29f5b0e | 80 | py | Python | a2e/processing/__init__.py | maechler/a2e | c28f546ca5fc3fdb9c740ea5f0f85d2aca044a00 | [
"MIT"
] | 1 | 2021-03-19T09:09:41.000Z | 2021-03-19T09:09:41.000Z | a2e/processing/__init__.py | maechler/a2e | c28f546ca5fc3fdb9c740ea5f0f85d2aca044a00 | [
"MIT"
] | null | null | null | a2e/processing/__init__.py | maechler/a2e | c28f546ca5fc3fdb9c740ea5f0f85d2aca044a00 | [
"MIT"
] | null | null | null | """
The module `a2e.processing` provides pre- and post-processing utilities
"""
| 20 | 71 | 0.7375 | 10 | 80 | 5.9 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014286 | 0.125 | 80 | 3 | 72 | 26.666667 | 0.828571 | 0.8875 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
ed7170a7c08a2646bba4e0bc8e96080c06dd5249 | 361 | py | Python | src/manage.py | numb95/Sabato | 0d5e196671c3fa7053157cb049619a18e22b4646 | [
"MIT"
] | 1 | 2020-09-03T18:13:14.000Z | 2020-09-03T18:13:14.000Z | src/manage.py | numb95/Sabato | 0d5e196671c3fa7053157cb049619a18e22b4646 | [
"MIT"
] | null | null | null | src/manage.py | numb95/Sabato | 0d5e196671c3fa7053157cb049619a18e22b4646 | [
"MIT"
] | null | null | null | from flask import Flask, jsonify
import config
import requests
from healthcheck import nodes
app = Flask(__name__)
@app.route('/healthcheck/slave')
def slave_health():
return jsonify(nodes.slave_status())
@app.route('/healthcheck/master')
def master_health():
return jsonify(nodes.master_status())
if __name__ == "__main__":
app.run(debug=True) | 20.055556 | 41 | 0.745152 | 47 | 361 | 5.382979 | 0.468085 | 0.063241 | 0.150198 | 0.189723 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130194 | 361 | 18 | 42 | 20.055556 | 0.805732 | 0 | 0 | 0 | 0 | 0 | 0.124309 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.307692 | 0.153846 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 4 |
ed73f85deca341a82daf6746727758b5210e7a7e | 2,412 | py | Python | NetworkRouting/app.py | josephmoyer/WALKOFF-Apps | 26ecc045563b1edb1e6803cfd943173dfcd35579 | [
"CC0-1.0"
] | 96 | 2018-05-13T05:28:12.000Z | 2022-03-15T22:49:47.000Z | NetworkRouting/app.py | josephmoyer/WALKOFF-Apps | 26ecc045563b1edb1e6803cfd943173dfcd35579 | [
"CC0-1.0"
] | 8 | 2018-10-24T14:15:31.000Z | 2019-05-10T17:59:36.000Z | NetworkRouting/app.py | josephmoyer/WALKOFF-Apps | 26ecc045563b1edb1e6803cfd943173dfcd35579 | [
"CC0-1.0"
] | 34 | 2018-08-15T20:25:52.000Z | 2022-02-16T07:38:01.000Z | import logging
import socket
import time
import subprocess
from apps import App, action
logger = logging.getLogger("apps")
@action
def log_traffic(logFromIP, protocol):
print("logging:")
ips = logFromIP.split(" ")
for ip in ips:
print(ip)
args = ['iptables', '-A', 'INPUT', '-p', protocol, '-s', ip, '-j', 'LOG', '--log-prefix', 'INPUT:DROP:', '--log-level', '6']
runCommand(args)
return True
@action
def drop_traffic(dropFromIP, protocol):
print("dropping:")
ips = dropFromIP.split(" ")
for ip in ips:
print(ip)
args = ['iptables', '-A', 'INPUT', '-p', protocol, '-s', ip, '-j', 'DROP']
runCommand(args)
return True
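As a side sketch (not part of the original app), the iptables argument lists the actions assemble can be built and inspected without executing anything, which makes the rule shape easy to unit-test; `drop_rule_args` here is a hypothetical helper, not a function from the app:

```python
# Hypothetical helper: build the same DROP rule argument list as drop_traffic,
# without invoking iptables (so no root privileges are needed to inspect it).
def drop_rule_args(ip, protocol):
    return ['iptables', '-A', 'INPUT', '-p', protocol, '-s', ip, '-j', 'DROP']

for ip in '10.0.0.1 10.0.0.2'.split(' '):
    print(drop_rule_args(ip, 'tcp'))
```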
#@action
def redirect_traffic(dropFrom, redirectTo, protocol):
args = ['echo', '"1"', '>', '/proc/sys/net/ipv4/ip_forward']
runCommand(args)
args = ['iptables', '-t', 'nat', '-A', 'PREROUTING', '-s', dropFrom, '-j', 'DNAT', '--to-destination', redirectTo]
runCommand(args)
# redirect ************** Not working, doesn't show up in IPTables or do anything
args = ['iptables', '-t', 'nat', '-A', 'POSTROUTING', '-j', 'MASQUERADE']
runCommand(args)
#args = ['iptables', '-t', 'nat', '-A', 'PREROUTING', '-s', redirectFrom, '-j', 'DNAT', '--to-destination', redirectTo]
#runCommand(args)
# redirect ************** Not working, doesn't show up in IPTables or do anything
#args = ['iptables', '-t', 'nat', '-A', 'POSTROUTING', '-j', 'MASQUERADE']
#runCommand(args)
return True
@action
def delete_iptable_drop_rule(dropFromIP, protocol):
print("dropping:")
ips = dropFromIP.split(" ")
for ip in ips:
args = ['iptables', '-D', 'INPUT', '-p', protocol, '-s', ip, '-j', 'DROP']
runCommand(args)
return True
@action
def delete_iptable_log_rule(logFromIP, protocol):
print("logging:")
ips = logFromIP.split(" ")
for ip in ips:
args = ['iptables', '-D', 'INPUT', '-p', protocol, '-s', ip, '-j', 'LOG', '--log-prefix', 'INPUT:DROP:', '--log-level', '6']
runCommand(args)
return True
def runCommand(args):
print(args)
p = subprocess.Popen(args)
    p.wait()  # block until the command completes instead of busy-waiting on poll()
    return
#redirect_traffic('172.217.7.206', '2.2.2.2', 'icmp')
#drop_traffic('172.217.7.206', 'icmp')
| 26.8 | 132 | 0.56592 | 289 | 2,412 | 4.681661 | 0.283737 | 0.103474 | 0.07391 | 0.088692 | 0.740577 | 0.715447 | 0.708795 | 0.708795 | 0.675536 | 0.623799 | 0 | 0.01507 | 0.229685 | 2,412 | 89 | 133 | 27.101124 | 0.713132 | 0.19859 | 0 | 0.545455 | 0 | 0 | 0.173777 | 0.015088 | 0 | 0 | 0 | 0 | 0 | 1 | 0.109091 | false | 0 | 0.090909 | 0 | 0.309091 | 0.127273 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
9c520240923b7c5b3123f5b1295020bd66ea94bc | 22 | py | Python | lib/twest/__init__.py | erkyrath/tworld | 9f5237771196b03753d027277ffc296e25fd7425 | [
"MIT"
] | 38 | 2015-01-03T16:59:20.000Z | 2021-10-13T09:15:53.000Z | lib/twest/__init__.py | Oreolek/tworld | 9f5237771196b03753d027277ffc296e25fd7425 | [
"MIT"
] | 32 | 2015-01-04T01:59:34.000Z | 2016-05-20T16:29:26.000Z | lib/twest/__init__.py | Oreolek/tworld | 9f5237771196b03753d027277ffc296e25fd7425 | [
"MIT"
] | 7 | 2015-10-08T21:01:20.000Z | 2020-05-21T17:42:54.000Z | """
Unit testing.
"""
| 5.5 | 13 | 0.5 | 2 | 22 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 3 | 14 | 7.333333 | 0.611111 | 0.590909 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
92fb85696970a80e307b9aaf5a6c60ff4a659b81 | 208 | py | Python | twilbert/__init__.py | jogonba2/TWilBert | 1653350f0b85ebdb36a62859f43906debfcbd0cb | [
"Apache-2.0"
] | 9 | 2020-06-28T23:53:32.000Z | 2021-12-16T01:32:19.000Z | twilbert/__init__.py | jogonba2/TWilBert | 1653350f0b85ebdb36a62859f43906debfcbd0cb | [
"Apache-2.0"
] | 1 | 2020-11-18T19:09:47.000Z | 2021-06-03T14:12:27.000Z | twilbert/__init__.py | jogonba2/TWilBert | 1653350f0b85ebdb36a62859f43906debfcbd0cb | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
from . import layers
from . import applications
from . import models
from . import optimization
from . import utils
from . import preprocessing
__version__ = '1.0.0'
| 17.333333 | 38 | 0.783654 | 27 | 208 | 5.703704 | 0.481481 | 0.38961 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017241 | 0.163462 | 208 | 11 | 39 | 18.909091 | 0.867816 | 0 | 0 | 0 | 0 | 0 | 0.024038 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.875 | 0 | 0.875 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
133077de5b18bff780f2127dcb81425561fb61bc | 777 | py | Python | src/python/pants/goal/pantsd_stats.py | gshuflin/pants | cf483ead6d4d4a4cc4fc4ae18e3b5b633509d933 | [
"Apache-2.0"
] | null | null | null | src/python/pants/goal/pantsd_stats.py | gshuflin/pants | cf483ead6d4d4a4cc4fc4ae18e3b5b633509d933 | [
"Apache-2.0"
] | null | null | null | src/python/pants/goal/pantsd_stats.py | gshuflin/pants | cf483ead6d4d4a4cc4fc4ae18e3b5b633509d933 | [
"Apache-2.0"
] | null | null | null | # Copyright 2018 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
class PantsDaemonStats:
"""Tracks various stats about the daemon."""
def __init__(self):
self.scheduler_metrics = {}
def set_scheduler_metrics(self, scheduler_metrics) -> None:
self.scheduler_metrics = scheduler_metrics
def set_target_root_size(self, size):
self.scheduler_metrics["target_root_size"] = size
def set_affected_targets_size(self, size):
self.scheduler_metrics["affected_targets_size"] = size
def get_all(self):
for key in ["target_root_size", "affected_targets_size"]:
self.scheduler_metrics.setdefault(key, 0)
return self.scheduler_metrics
| 32.375 | 66 | 0.711712 | 96 | 777 | 5.458333 | 0.427083 | 0.274809 | 0.267176 | 0.137405 | 0.122137 | 0.122137 | 0 | 0 | 0 | 0 | 0 | 0.011218 | 0.196911 | 777 | 23 | 67 | 33.782609 | 0.828526 | 0.213642 | 0 | 0 | 0 | 0 | 0.122517 | 0.069536 | 0 | 0 | 0 | 0 | 0 | 1 | 0.384615 | false | 0 | 0 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 4 |
133acfaccd3ab8c37ed90d0d6925c68752d2c498 | 160 | py | Python | Quiz/resources.py | MathewAaron/portal-for-competitive-exams-for-blind-students | 3ad24be54830d83cda01514102e185a44686b752 | [
"MIT"
] | null | null | null | Quiz/resources.py | MathewAaron/portal-for-competitive-exams-for-blind-students | 3ad24be54830d83cda01514102e185a44686b752 | [
"MIT"
] | null | null | null | Quiz/resources.py | MathewAaron/portal-for-competitive-exams-for-blind-students | 3ad24be54830d83cda01514102e185a44686b752 | [
"MIT"
] | null | null | null | from import_export import resources
from .models import Question
class QuestionResource(resources.ModelResource):
    class Meta:
model = Question | 26.666667 | 49 | 0.75625 | 17 | 160 | 7.058824 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 160 | 6 | 50 | 26.666667 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
13c4edaf85af51aec8e5c2817ee4e778bcf12e41 | 539 | py | Python | 02-array-seq/deque_demo.py | niyunsheng/got_it-python | 26d91e56a58c7423253066f65ffb03b90d822e86 | [
"MIT"
] | 1 | 2020-12-18T07:34:58.000Z | 2020-12-18T07:34:58.000Z | 02-array-seq/deque_demo.py | niyunsheng/got_it-python | 26d91e56a58c7423253066f65ffb03b90d822e86 | [
"MIT"
] | null | null | null | 02-array-seq/deque_demo.py | niyunsheng/got_it-python | 26d91e56a58c7423253066f65ffb03b90d822e86 | [
"MIT"
] | null | null | null | from collections import deque
dq = deque(range(10), maxlen=10)
print(dq) # deque([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], maxlen=10)
dq.rotate(3)
print(dq) # deque([7, 8, 9, 0, 1, 2, 3, 4, 5, 6], maxlen=10)
dq.rotate(-4)
print(dq) # deque([1, 2, 3, 4, 5, 6, 7, 8, 9, 0], maxlen=10)
dq.appendleft(-1)
print(dq) # deque([-1, 1, 2, 3, 4, 5, 6, 7, 8, 9], maxlen=10)
dq.extend([11, 22, 33])
print(dq) # deque([3, 4, 5, 6, 7, 8, 9, 11, 22, 33], maxlen=10)
dq.extendleft([10, 20, 30, 40])
print(dq) # deque([40, 30, 20, 10, 3, 4, 5, 6, 7, 8], maxlen=10) | 38.5 | 64 | 0.556586 | 120 | 539 | 2.5 | 0.241667 | 0.163333 | 0.24 | 0.08 | 0.226667 | 0.226667 | 0.206667 | 0.156667 | 0.156667 | 0.126667 | 0 | 0.226244 | 0.179963 | 539 | 14 | 64 | 38.5 | 0.452489 | 0.558442 | 0 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.461538 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 4 |
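A companion sketch (not part of the original demo): when `maxlen` is set, single-element appends also evict silently from the opposite end, which is the same mechanism behind the `extend`/`extendleft` results above:

```python
from collections import deque

dq = deque(range(10), maxlen=10)
dq.append(10)       # evicts 0 from the left end
print(dq)           # deque([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], maxlen=10)
dq.appendleft(0)    # evicts 10 from the right end
print(dq)           # deque([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], maxlen=10)
```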
13ee8c6fe538445df9d6dc3be7721fe1b6f94808 | 29 | py | Python | HacoWeb/haco/hacoweb/tests.py | DeanORourke1996/haco | fc04d763735ca376c51e82e1f1be20b092ce751c | [
"MIT"
] | null | null | null | HacoWeb/haco/hacoweb/tests.py | DeanORourke1996/haco | fc04d763735ca376c51e82e1f1be20b092ce751c | [
"MIT"
] | null | null | null | HacoWeb/haco/hacoweb/tests.py | DeanORourke1996/haco | fc04d763735ca376c51e82e1f1be20b092ce751c | [
"MIT"
] | null | null | null |
# Create your tests here.
| 5.8 | 25 | 0.655172 | 4 | 29 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.275862 | 29 | 4 | 26 | 7.25 | 0.904762 | 0.793103 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
b921b8b0ac46f30c77b22bc515d786ac2cf761c9 | 70 | py | Python | handypackages/textphrase/__init__.py | roundium/handypackages | b8a0e4952644144b31168f9a4ac8e743933d87c7 | [
"MIT"
] | 1 | 2019-07-31T11:40:06.000Z | 2019-07-31T11:40:06.000Z | handypackages/textphrase/__init__.py | roundium/handypackages | b8a0e4952644144b31168f9a4ac8e743933d87c7 | [
"MIT"
] | 10 | 2020-02-12T01:16:25.000Z | 2021-06-10T18:42:24.000Z | handypackages/textphrase/__init__.py | roundium/handypackages | b8a0e4952644144b31168f9a4ac8e743933d87c7 | [
"MIT"
] | 1 | 2019-07-31T11:40:18.000Z | 2019-07-31T11:40:18.000Z | default_app_config = 'handypackages.textphrase.apps.TextPhraseConfig'
| 35 | 69 | 0.871429 | 7 | 70 | 8.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042857 | 70 | 1 | 70 | 70 | 0.880597 | 0 | 0 | 0 | 0 | 0 | 0.657143 | 0.657143 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
b92947707d136869f00165114998236cd437d4c8 | 579 | py | Python | virtual/journal/pdf/build.py | PyUnchained/books | 656d375765300accb1767a8ab1bf4b96c884539f | [
"MIT"
] | null | null | null | virtual/journal/pdf/build.py | PyUnchained/books | 656d375765300accb1767a8ab1bf4b96c884539f | [
"MIT"
] | 14 | 2019-10-23T00:01:18.000Z | 2022-03-11T23:38:52.000Z | virtual/journal/pdf/build.py | PyUnchained/books | 656d375765300accb1767a8ab1bf4b96c884539f | [
"MIT"
] | null | null | null | from .presets import trial_balance_preset, cash_book_preset, balace_sheet_preset, profit_and_loss_preset
def build_journal(virtual_journal):
return None
def build_journal_from_preset(virtual_journal, test = False):
if virtual_journal.rule.preset == 'TB':
return trial_balance_preset(virtual_journal)
if virtual_journal.rule.preset == 'CB':
return cash_book_preset(virtual_journal)
if virtual_journal.rule.preset == 'BS':
return balace_sheet_preset(virtual_journal)
if virtual_journal.rule.preset == 'PL':
return profit_and_loss_preset(virtual_journal)
return None | 38.6 | 104 | 0.818653 | 83 | 579 | 5.325301 | 0.325301 | 0.316742 | 0.226244 | 0.180995 | 0.371041 | 0.312217 | 0.312217 | 0.312217 | 0 | 0 | 0 | 0 | 0.098446 | 579 | 15 | 105 | 38.6 | 0.846743 | 0 | 0 | 0.153846 | 0 | 0 | 0.013793 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.076923 | 0.076923 | 0.692308 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 4 |
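The chain of `if` checks on `rule.preset` can also be expressed as a dict dispatch, which keeps the preset-code-to-builder mapping in one place. This is a hedged alternative sketch, and the builder functions below are hypothetical stand-ins, not the real preset builders:

```python
# Hypothetical stand-ins for the preset builders, to show the dispatch shape.
def trial_balance(journal):
    return ('TB', journal)

def cash_book(journal):
    return ('CB', journal)

PRESET_BUILDERS = {'TB': trial_balance, 'CB': cash_book}

def build_from_preset(preset, journal):
    # Unknown preset codes fall through to None, like the original function.
    builder = PRESET_BUILDERS.get(preset)
    return builder(journal) if builder else None

print(build_from_preset('TB', 'march'))  # ('TB', 'march')
print(build_from_preset('XX', 'march'))  # None
```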
b94de150022d169bb2d30cbdf724bb654276c67a | 477 | py | Python | default_value_in_class.py | Aaftab-Alam/PythonPractice | 9167fef1ac87f1560803418aff0c328884ca5df7 | [
"Apache-2.0"
] | null | null | null | default_value_in_class.py | Aaftab-Alam/PythonPractice | 9167fef1ac87f1560803418aff0c328884ca5df7 | [
"Apache-2.0"
] | null | null | null | default_value_in_class.py | Aaftab-Alam/PythonPractice | 9167fef1ac87f1560803418aff0c328884ca5df7 | [
"Apache-2.0"
] | null | null | null | class Vehicle:
def __init__(self, name, max_speed, mileage):
self.name = name
self.max_speed = max_speed
self.mileage = mileage
def seating_capacity(self, capacity):
return f"The seating capacity of a {self.name} is {capacity} passengers"
class Bus(Vehicle):
def seating_capacit(self, capacity=40):
return super().seating_capacity(capacity=50)
School_bus = Bus("School Volvo", 180, 12)
print(School_bus.seating_capacit())
| 29.8125 | 80 | 0.689727 | 64 | 477 | 4.9375 | 0.4375 | 0.075949 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023747 | 0.205451 | 477 | 15 | 81 | 31.8 | 0.810026 | 0 | 0 | 0 | 0 | 0 | 0.155136 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.083333 | 0 | 0.166667 | 0.583333 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 4 |
b95e7d37eeeaff87e6a0ae58438833d8de0eeaf1 | 522 | py | Python | src/ansible_navigator/ui_framework/__init__.py | didib/ansible-navigator | 62fdbd05f25fb2d79133b3ab207f53ac2f2d6d36 | [
"Apache-2.0"
] | null | null | null | src/ansible_navigator/ui_framework/__init__.py | didib/ansible-navigator | 62fdbd05f25fb2d79133b3ab207f53ac2f2d6d36 | [
"Apache-2.0"
] | 1 | 2022-02-04T02:38:15.000Z | 2022-02-04T02:38:15.000Z | src/ansible_navigator/ui_framework/__init__.py | ganeshrn/ansible-navigator | 1580b5e4a4d715fa4bb844bfeeb40f1ac8e628f6 | [
"Apache-2.0",
"MIT"
] | 1 | 2021-11-17T09:45:18.000Z | 2021-11-17T09:45:18.000Z | """ ui_framework
"""
from .curses_defs import CursesLine
from .curses_defs import CursesLinePart
from .curses_defs import CursesLines
from .form import Form
from .form_utils import dict_to_form
from .form_utils import form_to_dict
from .form_utils import error_notification
from .form_utils import nonblocking_notification
from .form_utils import warning_notification
from .ui import Action
from .ui import Content
from .ui import Interaction
from .ui import Menu
from .ui import UIConfig
from .ui import UserInterface
| 24.857143 | 48 | 0.833333 | 77 | 522 | 5.441558 | 0.298701 | 0.114558 | 0.171838 | 0.22673 | 0.257757 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126437 | 522 | 20 | 49 | 26.1 | 0.91886 | 0.022989 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
b98262b23df97c0d3c1623d92fc8175b32cc4b17 | 2,854 | py | Python | tests/python/gaia-ui-tests/gaiatest/apps/system/regions/cards_view.py | wilsonpage/gaia | d0beeca2401ff90564dfcbe45ee5237d46605ab8 | [
"Apache-2.0"
] | 1 | 2019-09-07T18:44:40.000Z | 2019-09-07T18:44:40.000Z | tests/python/gaia-ui-tests/gaiatest/apps/system/regions/cards_view.py | wilsonpage/gaia | d0beeca2401ff90564dfcbe45ee5237d46605ab8 | [
"Apache-2.0"
] | 3 | 2016-09-10T15:41:48.000Z | 2016-09-10T15:42:41.000Z | tests/python/gaia-ui-tests/gaiatest/apps/system/regions/cards_view.py | wilsonpage/gaia | d0beeca2401ff90564dfcbe45ee5237d46605ab8 | [
"Apache-2.0"
] | 1 | 2016-01-25T22:23:32.000Z | 2016-01-25T22:23:32.000Z | # This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import time
from marionette.by import By
from marionette.marionette import Actions
from gaiatest.apps.base import Base
class CardsView(Base):
# Home/Cards view locators
_cards_view_locator = (By.ID, 'cards-view')
# Check that the origin contains the current app name, origin is in the format:
# app://clock.gaiamobile.org
_apps_cards_locator = (By.CSS_SELECTOR, '#cards-view li[data-origin*="%s"]')
_close_buttons_locator = (By.CSS_SELECTOR, '#cards-view li[data-origin*="%s"] .close-card')
def _app_card_locator(self, app):
return (self._apps_cards_locator[0], self._apps_cards_locator[1] % app.lower())
def _close_button_locator(self, app):
return (self._close_buttons_locator[0], self._close_buttons_locator[1] % app.lower())
@property
def is_cards_view_displayed(self):
return self.is_element_displayed(*self._cards_view_locator)
def wait_for_card_ready(self, app):
current_frame = self.apps.displayed_app.frame
card = self.marionette.find_element(*self._app_card_locator(app))
self.wait_for_condition(lambda m: current_frame.size['width'] - card.size['width'] == 2 * card.location['x'])
# TODO: Remove sleep when we find a better wait
time.sleep(0.2)
def is_app_displayed(self, app):
return self.is_element_displayed(*self._app_card_locator(app))
def is_app_present(self, app):
return self.is_element_present(*self._app_card_locator(app))
def tap_app(self, app):
self.wait_for_condition(lambda m: self.is_app_displayed(app))
self.marionette.find_element(*self._app_card_locator(app)).tap()
def close_app(self, app):
self.wait_for_card_ready(app)
self.marionette.find_element(*self._close_button_locator(app)).tap()
self.wait_for_element_not_present(*self._app_card_locator(app))
def wait_for_cards_view(self):
self.wait_for_condition(lambda m: m.find_element(*self._cards_view_locator).get_attribute('class') == 'active')
def wait_for_cards_view_not_displayed(self):
self.wait_for_element_not_displayed(*self._cards_view_locator)
def swipe_to_previous_app(self):
current_frame = self.apps.displayed_app.frame
final_x_position = current_frame.size['width']
# start swipe from center of window
start_x_position = final_x_position // 2
start_y_position = current_frame.size['height'] // 2
# swipe forward to get previous app card
Actions(self.marionette).flick(
current_frame, start_x_position, start_y_position, final_x_position, start_y_position).perform()
| 40.771429 | 119 | 0.716188 | 422 | 2,854 | 4.528436 | 0.265403 | 0.043956 | 0.043956 | 0.047096 | 0.422815 | 0.333333 | 0.198325 | 0.097331 | 0.097331 | 0.049189 | 0 | 0.005534 | 0.176945 | 2,854 | 69 | 120 | 41.362319 | 0.808003 | 0.15452 | 0 | 0.047619 | 0 | 0 | 0.050354 | 0.017478 | 0 | 0 | 0 | 0.014493 | 0 | 1 | 0.261905 | false | 0 | 0.095238 | 0.119048 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 4 |
b991c28d9180a065752a2a5fc4f7265386bb7637 | 109 | py | Python | station/controller/handlers/handler.py | GLO3013-E4/COViRondelle2021 | f8d23903d0a906e93a7698a555d90ebecdf83969 | [
"MIT"
] | null | null | null | station/controller/handlers/handler.py | GLO3013-E4/COViRondelle2021 | f8d23903d0a906e93a7698a555d90ebecdf83969 | [
"MIT"
] | null | null | null | station/controller/handlers/handler.py | GLO3013-E4/COViRondelle2021 | f8d23903d0a906e93a7698a555d90ebecdf83969 | [
"MIT"
] | null | null | null | class Handler:
def handle(self, handled_data=None):
pass
def unregister(self):
pass
| 15.571429 | 40 | 0.605505 | 13 | 109 | 5 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.311927 | 109 | 6 | 41 | 18.166667 | 0.866667 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.4 | 0 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 4 |
b9990dea67ffae5d1d2b367ee754c6789d275848 | 2,276 | py | Python | py/datacentric/schema/declaration/handler_declare_item.py | datacentricorg/datacentric-py | 40113ddfb68e62d98b880b3c7427db5cc9fbd8cd | [
"Apache-2.0"
] | 1 | 2020-02-03T18:32:42.000Z | 2020-02-03T18:32:42.000Z | py/datacentric/schema/declaration/handler_declare_item.py | datacentricorg/datacentric-py | 40113ddfb68e62d98b880b3c7427db5cc9fbd8cd | [
"Apache-2.0"
] | null | null | null | py/datacentric/schema/declaration/handler_declare_item.py | datacentricorg/datacentric-py | 40113ddfb68e62d98b880b3c7427db5cc9fbd8cd | [
"Apache-2.0"
] | null | null | null | # Copyright (C) 2013-present The DataCentric Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import attr
from typing import List
from datacentric.storage.data import Data
from datacentric.schema.declaration.param_decl import ParamDecl
from datacentric.schema.declaration.handler_type import HandlerType
from datacentric.schema.declaration.yes_no import YesNo
@attr.s(slots=True, auto_attribs=True)
class HandlerDeclareItem(Data):
"""
Represents a single item within handler declaration block.
Every declared handler must be implemented in this
class or its non-abstract descendant, error message
otherwise.
"""
name: str = attr.ib(default=None, kw_only=True, metadata={'optional': True})
"""Handler name."""
label: str = attr.ib(default=None, kw_only=True, metadata={'optional': True})
"""Handler label."""
comment: str = attr.ib(default=None, kw_only=True, metadata={'optional': True})
"""Handler comment."""
type: HandlerType = attr.ib(default=None, kw_only=True, metadata={'optional': True})
"""
Handler type.
TODO - rename both type and element name to HandlerKind
"""
params: List[ParamDecl] = attr.ib(default=None, kw_only=True, repr=False, metadata={'optional': True})
"""Handler parameters."""
static: YesNo = attr.ib(default=None, kw_only=True, metadata={'optional': True})
"""If this flag is set, handler will be static, otherwise it will be non-static."""
hidden: YesNo = attr.ib(default=None, kw_only=True, metadata={'optional': True})
"""
If this flag is set, handler will be hidden in the user interface
except in developer mode.
"""
category: str = attr.ib(default=None, kw_only=True, metadata={'optional': True})
"""Category."""
| 36.126984 | 106 | 0.715729 | 315 | 2,276 | 5.133333 | 0.44127 | 0.029685 | 0.064317 | 0.084106 | 0.285714 | 0.285714 | 0.285714 | 0.269017 | 0.269017 | 0.269017 | 0 | 0.004242 | 0.171353 | 2,276 | 62 | 107 | 36.709677 | 0.853128 | 0.328207 | 0 | 0 | 0 | 0 | 0.057971 | 0 | 0 | 0 | 0 | 0.016129 | 0 | 1 | 0 | true | 0 | 0.375 | 0 | 0.9375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
b9c1dec83dc3ab78ccc14464817ac9898924c6fe | 224 | py | Python | Snippets/intro-1/how_many_words.py | ursaMaj0r/python-csc-125 | 1d0968ad144112e24ae331c75aad58b74041593a | [
"MIT"
] | null | null | null | Snippets/intro-1/how_many_words.py | ursaMaj0r/python-csc-125 | 1d0968ad144112e24ae331c75aad58b74041593a | [
"MIT"
] | null | null | null | Snippets/intro-1/how_many_words.py | ursaMaj0r/python-csc-125 | 1d0968ad144112e24ae331c75aad58b74041593a | [
"MIT"
] | null | null | null | # input
input_words = []
input_word = input("Word: ")
# response
while input_word != '':
input_words.append(input_word)
input_word = input("Word: ")
print("You know {} unique word(s)!".format(len(set(input_words))))
| 22.4 | 66 | 0.665179 | 31 | 224 | 4.580645 | 0.451613 | 0.380282 | 0.394366 | 0.380282 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151786 | 224 | 9 | 67 | 24.888889 | 0.747368 | 0.058036 | 0 | 0.333333 | 0 | 0 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
b9c48951e17925978dc7c85993cc704f2a0fc969 | 553 | py | Python | doping_test/TestCaseSelection.py | Biewer/Doping-Tests-for-Cyber-Physical-Systems-Tool | eb359ba618f0022dcd403edc99904f3ef2940e65 | [
"MIT"
] | null | null | null | doping_test/TestCaseSelection.py | Biewer/Doping-Tests-for-Cyber-Physical-Systems-Tool | eb359ba618f0022dcd403edc99904f3ef2940e65 | [
"MIT"
] | null | null | null | doping_test/TestCaseSelection.py | Biewer/Doping-Tests-for-Cyber-Physical-Systems-Tool | eb359ba618f0022dcd403edc99904f3ef2940e65 | [
"MIT"
] | null | null | null | class TestCaseSelection(object):
"""This class representents concrete implementations for test case selection startegies."""
# Three options for the three cases (as in Algorithm 1 from the paper)
OPTION_PASS = 1
OPTION_INPUT = 2
OPTION_OUTPUT = 3
def __init__(self):
super(TestCaseSelection, self).__init__()
def get_next_option(self, history):
# Omega_case
raise NotImplementedError('Abstract method not implemented!')
def get_next_input(self, history):
# Omega_In
raise NotImplementedError('Abstract method not implemented!') | 27.65 | 92 | 0.766727 | 70 | 553 | 5.814286 | 0.6 | 0.029484 | 0.04914 | 0.186732 | 0.255528 | 0.255528 | 0 | 0 | 0 | 0 | 0 | 0.008529 | 0.151899 | 553 | 20 | 93 | 27.65 | 0.859275 | 0.316456 | 0 | 0.2 | 0 | 0 | 0.172507 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0.1 | 0 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 4 |
b9e0b92fec508ded3fa8ee23d921647f34786ded | 151 | py | Python | src/pretalx/mail/apps.py | lili668668/pretalx | 5ba2185ffd7c5f95254aafe25ad3de340a86eadb | [
"Apache-2.0"
] | 418 | 2017-10-05T05:52:49.000Z | 2022-03-24T09:50:06.000Z | src/pretalx/mail/apps.py | lili668668/pretalx | 5ba2185ffd7c5f95254aafe25ad3de340a86eadb | [
"Apache-2.0"
] | 1,049 | 2017-09-16T09:34:55.000Z | 2022-03-23T16:13:04.000Z | src/pretalx/mail/apps.py | lili668668/pretalx | 5ba2185ffd7c5f95254aafe25ad3de340a86eadb | [
"Apache-2.0"
] | 155 | 2017-10-16T18:32:01.000Z | 2022-03-15T12:48:33.000Z | from django.apps import AppConfig
class MailConfig(AppConfig):
name = "pretalx.mail"
def ready(self):
from . import signals # noqa
| 16.777778 | 37 | 0.668874 | 18 | 151 | 5.611111 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.245033 | 151 | 8 | 38 | 18.875 | 0.885965 | 0.02649 | 0 | 0 | 0 | 0 | 0.082759 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
b9e6cc3be84ed684ec6984b1a7cfe7b673a72c8d | 217 | py | Python | fastai/utils/show_install.py | feras-oughali/fastai | 4052f7adb441ab8a00eaa807b444a4e583b6bcc7 | [
"Apache-2.0"
] | 59 | 2020-08-18T03:41:35.000Z | 2022-03-23T03:51:55.000Z | fastai/utils/show_install.py | feras-oughali/fastai | 4052f7adb441ab8a00eaa807b444a4e583b6bcc7 | [
"Apache-2.0"
] | 17 | 2020-08-25T14:15:32.000Z | 2022-03-27T02:12:19.000Z | fastai/utils/show_install.py | feras-oughali/fastai | 4052f7adb441ab8a00eaa807b444a4e583b6bcc7 | [
"Apache-2.0"
] | 89 | 2020-08-17T23:45:42.000Z | 2022-03-27T20:53:43.000Z | from ..script import *
from .collect_env import *
# Temporary POC for module-based script
@call_parse
def main(show_nvidia_smi:Param(opt=False, nargs='?', type=bool)=False):
return show_install(show_nvidia_smi)
| 24.111111 | 71 | 0.760369 | 33 | 217 | 4.787879 | 0.757576 | 0.126582 | 0.164557 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124424 | 217 | 8 | 72 | 27.125 | 0.831579 | 0.170507 | 0 | 0 | 0 | 0 | 0.00565 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0.2 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 4 |
b9ee79014967c29d50919f2d9a2432e9bf02e128 | 145 | py | Python | clock-web/settings.py | somenkovnikita/clock-web | 6959a138fcab5b8085e9176196e4393d34504db8 | [
"MIT"
] | null | null | null | clock-web/settings.py | somenkovnikita/clock-web | 6959a138fcab5b8085e9176196e4393d34504db8 | [
"MIT"
] | null | null | null | clock-web/settings.py | somenkovnikita/clock-web | 6959a138fcab5b8085e9176196e4393d34504db8 | [
"MIT"
] | null | null | null | import os
dirname = os.path.dirname(__file__)
STATIC_PATH = os.path.join(dirname, 'static')
TEMPLATE_PATH = os.path.join(dirname, 'templates')
| 20.714286 | 50 | 0.751724 | 21 | 145 | 4.904762 | 0.428571 | 0.174757 | 0.194175 | 0.271845 | 0.407767 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 145 | 6 | 51 | 24.166667 | 0.792308 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
b9f68edb2f571e7bfaad9472d36a2d357b5574b2 | 23 | py | Python | venv/lib/python3.9/site-packages/crispy_forms/__init__.py | yahyasaadi/ig_clone | ea4dc40819bc2d3af78c49d85cd9828e1de95f60 | [
"MIT"
] | 5,916 | 2015-01-01T00:05:10.000Z | 2022-03-31T16:33:22.000Z | venv/lib/python3.9/site-packages/crispy_forms/__init__.py | yahyasaadi/ig_clone | ea4dc40819bc2d3af78c49d85cd9828e1de95f60 | [
"MIT"
] | 1,335 | 2015-01-04T20:39:30.000Z | 2022-03-31T22:48:53.000Z | venv/lib/python3.9/site-packages/crispy_forms/__init__.py | yahyasaadi/ig_clone | ea4dc40819bc2d3af78c49d85cd9828e1de95f60 | [
"MIT"
] | 1,183 | 2015-01-04T12:48:36.000Z | 2022-03-29T16:54:53.000Z | __version__ = "1.14.0"
| 11.5 | 22 | 0.652174 | 4 | 23 | 2.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0.130435 | 23 | 1 | 23 | 23 | 0.35 | 0 | 0 | 0 | 0 | 0 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
b9f7435cd446905ab216f006fc8e7de8f193e83d | 245 | py | Python | src/ihtt/schemas.py | dekoza/i-hate-time-tracking | adb6018b56c836317535f2e2346dfb8d9cce3aac | [
"Apache-2.0"
] | null | null | null | src/ihtt/schemas.py | dekoza/i-hate-time-tracking | adb6018b56c836317535f2e2346dfb8d9cce3aac | [
"Apache-2.0"
] | null | null | null | src/ihtt/schemas.py | dekoza/i-hate-time-tracking | adb6018b56c836317535f2e2346dfb8d9cce3aac | [
"Apache-2.0"
] | null | null | null | from tortoise.contrib.pydantic import pydantic_model_creator
from ihtt.models import Session, Reporter, Entry
PydSession = pydantic_model_creator(Session)
PydReporter = pydantic_model_creator(Reporter)
PydEntry = pydantic_model_creator(Entry)
| 30.625 | 60 | 0.857143 | 30 | 245 | 6.733333 | 0.5 | 0.257426 | 0.39604 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 245 | 7 | 61 | 35 | 0.901786 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
b9fe013cf32d822961499c61710ec40b23844b61 | 7,203 | py | Python | plenum/test/audit_ledger/test_audit_ledger_handler_catchup.py | andkononykhin/plenum | 28dc1719f4b7e80d31dafbadb38cfec4da949886 | [
"Apache-2.0"
] | null | null | null | plenum/test/audit_ledger/test_audit_ledger_handler_catchup.py | andkononykhin/plenum | 28dc1719f4b7e80d31dafbadb38cfec4da949886 | [
"Apache-2.0"
] | 1 | 2019-03-20T14:57:22.000Z | 2019-03-20T15:01:55.000Z | plenum/test/audit_ledger/test_audit_ledger_handler_catchup.py | andkononykhin/plenum | 28dc1719f4b7e80d31dafbadb38cfec4da949886 | [
"Apache-2.0"
] | null | null | null | from plenum.common.constants import DOMAIN_LEDGER_ID, POOL_LEDGER_ID
from plenum.server.batch_handlers.three_pc_batch import ThreePcBatch
from plenum.test.audit_ledger.helper import check_audit_txn, do_apply_audit_txn, add_txns, DEFAULT_PRIMARIES
# BOTH TESTS NEED TO BE RUN TOGETHER AS THEY SHARE COMMITTED STATE
from plenum.test.testing_utils import FakeSomething
def test_revert_works_after_catchup(alh, db_manager,
initial_domain_size, initial_pool_size, initial_config_size,
initial_seq_no):
size_before = alh.ledger.size
# apply and commit batch
do_apply_audit_txn(alh,
txns_count=7, ledger_id=DOMAIN_LEDGER_ID,
view_no=3, pp_sq_no=35, txn_time=11111)
txn_root_hash = db_manager.get_ledger(DOMAIN_LEDGER_ID).uncommitted_root_hash
state_root_hash = db_manager.get_state(DOMAIN_LEDGER_ID).headHash
alh.commit_batch(FakeSomething())
# add txns to audit ledger emulating catchup
caughtup_txns = 5
txns_per_batch = 2
add_txns_to_audit(alh,
count=caughtup_txns,
ledger_id=POOL_LEDGER_ID,
txns_per_batch=txns_per_batch,
view_no=3,
initial_pp_seq_no=36,
pp_time=11222)
alh.on_catchup_finished()
txn_root_hash_1 = db_manager.get_ledger(POOL_LEDGER_ID).uncommitted_root_hash
state_root_hash_1 = db_manager.get_state(POOL_LEDGER_ID).headHash
# apply two new batches and revert them both
do_apply_audit_txn(alh,
txns_count=3, ledger_id=DOMAIN_LEDGER_ID,
view_no=3, pp_sq_no=45, txn_time=21111)
txn_root_hash_2 = db_manager.get_ledger(DOMAIN_LEDGER_ID).uncommitted_root_hash
state_root_hash_2 = db_manager.get_state(DOMAIN_LEDGER_ID).headHash
do_apply_audit_txn(alh,
txns_count=6, ledger_id=DOMAIN_LEDGER_ID,
view_no=3, pp_sq_no=46, txn_time=21112)
assert alh.ledger.uncommitted_size == alh.ledger.size + 2
alh.post_batch_rejected(DOMAIN_LEDGER_ID)
assert alh.ledger.uncommitted_size == alh.ledger.size + 1
assert alh.ledger.size == size_before + 1 + caughtup_txns
check_audit_txn(txn=alh.ledger.get_last_txn(),
view_no=3, pp_seq_no=45,
seq_no=initial_seq_no + 1 + caughtup_txns + 1, txn_time=21111,
txn_roots={DOMAIN_LEDGER_ID: txn_root_hash_2},
state_roots={DOMAIN_LEDGER_ID: state_root_hash_2},
pool_size=initial_pool_size + txns_per_batch * caughtup_txns,
domain_size=initial_domain_size + 7 + 3,
config_size=initial_config_size,
last_pool_seqno=initial_seq_no + 1 + caughtup_txns,
last_domain_seqno=None,
last_config_seqno=None,
primaries=caughtup_txns + 1)
alh.post_batch_rejected(DOMAIN_LEDGER_ID)
assert alh.ledger.uncommitted_size == alh.ledger.size
assert alh.ledger.size == size_before + 1 + caughtup_txns
check_audit_txn(txn=alh.ledger.get_last_txn(),
view_no=3, pp_seq_no=40,
seq_no=initial_seq_no + caughtup_txns + 1, txn_time=11222,
txn_roots={POOL_LEDGER_ID: txn_root_hash_1},
state_roots={POOL_LEDGER_ID: state_root_hash_1},
pool_size=initial_pool_size + txns_per_batch * caughtup_txns,
domain_size=initial_domain_size + 7,
config_size=initial_config_size,
last_pool_seqno=None,
last_domain_seqno=initial_seq_no + 1,
last_config_seqno=None,
primaries=caughtup_txns)
def test_commit_works_after_catchup(alh, db_manager,
initial_domain_size, initial_pool_size, initial_config_size,
initial_seq_no):
size_before = alh.ledger.size
# apply and commit batch
do_apply_audit_txn(alh,
txns_count=7, ledger_id=DOMAIN_LEDGER_ID,
view_no=3, pp_sq_no=35, txn_time=11111)
txn_root_hash = db_manager.get_ledger(DOMAIN_LEDGER_ID).uncommitted_root_hash
state_root_hash = db_manager.get_state(DOMAIN_LEDGER_ID).headHash
alh.commit_batch(FakeSomething())
# add txns to audit ledger emulating catchup
caughtup_txns = 5
txns_per_batch = 2
add_txns_to_audit(alh,
count=caughtup_txns,
ledger_id=POOL_LEDGER_ID,
txns_per_batch=txns_per_batch,
view_no=3,
initial_pp_seq_no=36,
pp_time=11222)
alh.on_catchup_finished()
# apply and commit new batch
do_apply_audit_txn(alh,
txns_count=3, ledger_id=DOMAIN_LEDGER_ID,
view_no=3, pp_sq_no=45, txn_time=21111)
assert alh.ledger.uncommitted_size == alh.ledger.size + 1
txn_root_hash = db_manager.get_ledger(DOMAIN_LEDGER_ID).uncommitted_root_hash
state_root_hash = db_manager.get_state(DOMAIN_LEDGER_ID).headHash
alh.commit_batch(FakeSomething())
assert alh.ledger.uncommitted_size == alh.ledger.size
assert alh.ledger.size == size_before + 2 + caughtup_txns
check_audit_txn(txn=alh.ledger.get_last_committed_txn(),
view_no=3, pp_seq_no=45,
seq_no=initial_seq_no + 2 + caughtup_txns, txn_time=21111,
txn_roots={
DOMAIN_LEDGER_ID: txn_root_hash
},
state_roots={
DOMAIN_LEDGER_ID: state_root_hash
},
pool_size=initial_pool_size + txns_per_batch * caughtup_txns,
domain_size=initial_domain_size + 7 + 3,
config_size=initial_config_size,
last_pool_seqno=initial_seq_no + 1 + caughtup_txns,
last_domain_seqno=None,
last_config_seqno=None,
primaries=2 * (caughtup_txns + 1))
def add_txns_to_audit(alh, count, ledger_id, txns_per_batch, view_no, initial_pp_seq_no, pp_time):
db_manager = alh.database_manager
for i in range(count):
add_txns(db_manager, ledger_id, txns_per_batch, pp_time)
three_pc_batch = ThreePcBatch(ledger_id=ledger_id,
inst_id=0,
view_no=view_no,
pp_seq_no=initial_pp_seq_no + i,
pp_time=pp_time,
state_root=db_manager.get_state(ledger_id).headHash,
txn_root=db_manager.get_ledger(ledger_id).uncommitted_root_hash,
primaries=DEFAULT_PRIMARIES,
valid_digests=[])
alh._add_to_ledger(three_pc_batch)
alh.ledger.commitTxns(count)
| 46.470968 | 108 | 0.614605 | 927 | 7,203 | 4.31931 | 0.116505 | 0.07992 | 0.06993 | 0.017982 | 0.797453 | 0.742757 | 0.723027 | 0.703297 | 0.643107 | 0.631868 | 0 | 0.024325 | 0.320839 | 7,203 | 154 | 109 | 46.772727 | 0.794154 | 0.036929 | 0 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064 | 1 | 0.024 | false | 0 | 0.032 | 0 | 0.056 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
6a19e316a9f38c2fd484790215fc3afd7735b18a | 102 | py | Python | homeassistant/components/songpal/const.py | itewk/home-assistant | 769cf19052f8c9ef374d8ba8ae7705ccc7bf4cf4 | [
"Apache-2.0"
] | 23 | 2017-11-15T21:03:53.000Z | 2021-03-29T21:33:48.000Z | homeassistant/components/songpal/const.py | itewk/home-assistant | 769cf19052f8c9ef374d8ba8ae7705ccc7bf4cf4 | [
"Apache-2.0"
] | 9 | 2022-01-27T06:32:10.000Z | 2022-03-31T07:07:51.000Z | homeassistant/components/songpal/const.py | itewk/home-assistant | 769cf19052f8c9ef374d8ba8ae7705ccc7bf4cf4 | [
"Apache-2.0"
] | 10 | 2018-01-01T00:12:51.000Z | 2021-12-21T23:08:05.000Z | """Constants for the Songpal component."""
DOMAIN = "songpal"
SET_SOUND_SETTING = "set_sound_setting"
| 25.5 | 42 | 0.764706 | 13 | 102 | 5.692308 | 0.692308 | 0.216216 | 0.405405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107843 | 102 | 3 | 43 | 34 | 0.813187 | 0.352941 | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
6a248c9a8529f948caa07491d1a5f6568b830528 | 1,141 | py | Python | scripts/decks/most_common_words/utils.py | frekwencja/multilingual-word-frequency | 95bafb84c89b41d90bc3ad582f15c0efea086781 | [
"MIT"
] | 2 | 2021-11-24T20:12:26.000Z | 2021-11-30T18:47:43.000Z | scripts/decks/most_common_words/utils.py | frekwencja/language-learning-flashcards | 95bafb84c89b41d90bc3ad582f15c0efea086781 | [
"MIT"
] | 4 | 2021-11-24T16:18:44.000Z | 2021-12-31T21:59:12.000Z | scripts/decks/most_common_words/utils.py | frekwencja/multilingual-word-frequency | 95bafb84c89b41d90bc3ad582f15c0efea086781 | [
"MIT"
] | 1 | 2021-11-24T16:10:23.000Z | 2021-11-24T16:10:23.000Z | # ISO-639-1
LANGUAGES_SHORTCODES = ( "aa", "ab", "ae", "af", "ak", "am", "an", "ar", "as", "av", "ay", "az", "ba", "be", "bg", "bh", "bi", "bm", "bn", "bo", "br", "bs", "ca", "ce", "ch", "co", "cr", "cs", "cu", "cv", "cy", "da", "de", "dv", "dz", "ee", "el", "en", "eo", "es", "et", "eu", "fa", "ff", "fi", "fj", "fo", "fr", "fy", "ga", "gd", "gl", "gn", "gu", "gv", "ha", "he", "hi", "ho", "hr", "ht", "hu", "hy", "hz", "ia", "id", "ie", "ig", "ii", "ik", "io", "is", "it", "iu", "ja", "jv", "ka", "kg", "ki", "kj", "kk", "kl", "km", "kn", "ko", "kr", "ks", "ku", "kv", "kw", "ky", "la", "lb", "lg", "li", "ln", "lo", "lt", "lu", "lv", "mg", "mh", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "na", "nb", "nd", "ne", "ng", "nl", "nn", "no", "nr", "nv", "ny", "oc", "oj", "om", "or", "os", "pa", "pi", "pl", "ps", "pt", "qu", "rm", "rn", "ro", "ru", "rw", "sa", "sc", "sd", "se", "sg", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "ss", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "ti", "tk", "tl", "tn", "to", "tr", "ts", "tt", "tw", "ty", "ug", "uk", "ur", "uz", "ve", "vi", "vo", "wa", "wo", "xh", "yi", "yo", "za", "zh", "zu",) | 570.5 | 1,129 | 0.346188 | 189 | 1,141 | 2.084656 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004197 | 0.164768 | 1,141 | 2 | 1,129 | 570.5 | 0.409234 | 0.007888 | 0 | 0 | 0 | 0 | 0.325376 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
6a288ecca0d785d3bf2c30aa9fc65f3ce08a919d | 104 | py | Python | PYTHON/Aulas/ex001.py | FR7/Meus-Projetos | 1c8e1a91eaf143cccdc10f0e7edd013d910de474 | [
"MIT"
] | null | null | null | PYTHON/Aulas/ex001.py | FR7/Meus-Projetos | 1c8e1a91eaf143cccdc10f0e7edd013d910de474 | [
"MIT"
] | null | null | null | PYTHON/Aulas/ex001.py | FR7/Meus-Projetos | 1c8e1a91eaf143cccdc10f0e7edd013d910de474 | [
"MIT"
] | null | null | null | # Desafio 001
#Crie um programa ques escreva "Olá, Mundo!" na tela.
msg = "Olá, Mundo!!"
print(msg);
| 13 | 53 | 0.653846 | 16 | 104 | 4.25 | 0.8125 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 0.192308 | 104 | 7 | 54 | 14.857143 | 0.77381 | 0.605769 | 0 | 0 | 0 | 0 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 4 |
6a2db308a94d86a9f2ae97cd8204e89e841f5b78 | 65 | py | Python | crispy_forms_uikit/templates/uikit-3.4.0/layout/__init__.py | rakenodiax/crispy-forms-uikit | eebe20414ef1d00cbdae2fcf1b262c7326539c38 | [
"MIT"
] | 3 | 2020-04-12T09:09:06.000Z | 2020-07-24T11:15:56.000Z | crispy_forms_uikit/templates/uikit-3.4.0/layout/__init__.py | rakenodiax/crispy-forms-uikit | eebe20414ef1d00cbdae2fcf1b262c7326539c38 | [
"MIT"
] | null | null | null | crispy_forms_uikit/templates/uikit-3.4.0/layout/__init__.py | rakenodiax/crispy-forms-uikit | eebe20414ef1d00cbdae2fcf1b262c7326539c38 | [
"MIT"
] | 1 | 2021-04-12T15:58:41.000Z | 2021-04-12T15:58:41.000Z | from .base import Layout
from .grid import Row, RowFluid, Column
| 21.666667 | 39 | 0.784615 | 10 | 65 | 5.1 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 65 | 2 | 40 | 32.5 | 0.927273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
6a346e049b47ae3c878b622dd638231fcad95477 | 145 | py | Python | Ekeopara_Praise/Phase 2/LIST/Day39 Tasks/Task1.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 6 | 2020-05-23T19:53:25.000Z | 2021-05-08T20:21:30.000Z | Ekeopara_Praise/Phase 2/LIST/Day39 Tasks/Task1.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 8 | 2020-05-14T18:53:12.000Z | 2020-07-03T00:06:20.000Z | Ekeopara_Praise/Phase 2/LIST/Day39 Tasks/Task1.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 39 | 2020-05-10T20:55:02.000Z | 2020-09-12T17:40:59.000Z | '''1. Write a Python program to sum all the items in a list. '''
def sum_list(lst1):
return sum(lst1)
print(sum_list([1, 2, 3, 4, 5, 6, 7])) | 29 | 64 | 0.627586 | 30 | 145 | 2.966667 | 0.733333 | 0.157303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086207 | 0.2 | 145 | 5 | 65 | 29 | 0.681034 | 0.393103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 4 |
dbe05bcf62edf9e14d9e868bd84be81729119d03 | 45 | py | Python | python/testData/intentions/PyAnnotateVariableTypeIntentionTest/annotationNotPossibleForNestedStructuralType.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2018-12-29T09:53:39.000Z | 2018-12-29T09:53:42.000Z | python/testData/intentions/PyAnnotateVariableTypeIntentionTest/annotationNotPossibleForNestedStructuralType.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/intentions/PyAnnotateVariableTypeIntentionTest/annotationNotPossibleForNestedStructuralType.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | def func(x):
x.foo()
va<caret>r = [x] | 15 | 20 | 0.466667 | 9 | 45 | 2.333333 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.288889 | 45 | 3 | 20 | 15 | 0.65625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
e0010c165bdd0377179909884c61481772e72455 | 26 | py | Python | j1939/version.py | benkfra/J1939 | e69e56be977fe2b0cdf6b2749ee5a98c8b09a97e | [
"MIT"
] | 60 | 2018-05-06T16:40:00.000Z | 2022-02-25T23:32:08.000Z | j1939/version.py | benkfra/J1939 | e69e56be977fe2b0cdf6b2749ee5a98c8b09a97e | [
"MIT"
] | 5 | 2019-01-28T17:31:27.000Z | 2020-12-10T14:11:05.000Z | j1939/version.py | benkfra/J1939 | e69e56be977fe2b0cdf6b2749ee5a98c8b09a97e | [
"MIT"
] | 20 | 2018-10-31T08:47:44.000Z | 2021-11-28T19:33:34.000Z | __version__ = "0.1.0.dev1" | 26 | 26 | 0.692308 | 5 | 26 | 2.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 0.076923 | 26 | 1 | 26 | 26 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
e00321bb2973fa4dfec1fa9b85877133cf187d52 | 118 | py | Python | reconstruction/__init__.py | EJShim/spiralnet_plus | ff108aa9f70ff602432bd8086ff2a70f0dce37b7 | [
"MIT"
] | 78 | 2019-11-05T07:54:24.000Z | 2022-03-26T07:35:06.000Z | reconstruction/__init__.py | EJShim/spiralnet_plus | ff108aa9f70ff602432bd8086ff2a70f0dce37b7 | [
"MIT"
] | 8 | 2019-11-29T18:08:43.000Z | 2022-01-27T23:27:38.000Z | reconstruction/__init__.py | EJShim/spiralnet_plus | ff108aa9f70ff602432bd8086ff2a70f0dce37b7 | [
"MIT"
] | 11 | 2020-03-27T21:37:12.000Z | 2022-02-24T12:28:45.000Z | from .network import AE
from .train_eval import run, eval_error
__all__ = [
'AE',
'run',
'eval_error',
]
| 13.111111 | 39 | 0.627119 | 16 | 118 | 4.1875 | 0.5625 | 0.208955 | 0.358209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.245763 | 118 | 8 | 40 | 14.75 | 0.752809 | 0 | 0 | 0 | 0 | 0 | 0.127119 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
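The `__init__.py` above uses `__all__` to define the package's public surface. A small self-contained sketch showing that a star import pulls in only the names listed in `__all__` (the module name `demo_pkg` and the unexported `helper` are purely illustrative, not part of spiralnet_plus):

```python
import sys
import types

# Build a throwaway module that mirrors the shape of reconstruction/__init__.py:
# three names listed in __all__, plus one name deliberately left out.
mod = types.ModuleType("demo_pkg")
exec(
    "def run(): return 'run'\n"
    "def eval_error(): return 0.0\n"
    "class AE: pass\n"
    "helper = 42\n"
    "__all__ = ['AE', 'run', 'eval_error']\n",
    mod.__dict__,
)
sys.modules["demo_pkg"] = mod

# A star import honors __all__: 'helper' is not brought into the namespace.
ns = {}
exec("from demo_pkg import *", ns)
print(sorted(n for n in ns if not n.startswith('__')))  # ['AE', 'eval_error', 'run']
```

Without `__all__`, the star import would instead pick up every name not starting with an underscore, including `helper`.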
e008172ad98471b2a777207665ed61b64342bfb0 | 2,331 | py | Python | sympy/polys/tests/test_constructor.py | pernici/sympy | 5e6e3b71da777f5b85b8ca2d16f33ed020cf8a41 | [
"BSD-3-Clause"
] | null | null | null | sympy/polys/tests/test_constructor.py | pernici/sympy | 5e6e3b71da777f5b85b8ca2d16f33ed020cf8a41 | [
"BSD-3-Clause"
] | null | null | null | sympy/polys/tests/test_constructor.py | pernici/sympy | 5e6e3b71da777f5b85b8ca2d16f33ed020cf8a41 | [
"BSD-3-Clause"
] | null | null | null | """Tests for tools for constructing domains for expressions. """
from sympy.polys.constructor import construct_domain
from sympy.polys.domains import ZZ, QQ, RR, EX
from sympy import S, sqrt
from sympy.abc import x, y
def test_construct_domain():
assert construct_domain([1, 2, 3]) == (ZZ, [ZZ(1), ZZ(2), ZZ(3)])
assert construct_domain([1, 2, 3], field=True) == (QQ, [QQ(1), QQ(2), QQ(3)])
assert construct_domain([S(1), S(2), S(3)]) == (ZZ, [ZZ(1), ZZ(2), ZZ(3)])
assert construct_domain([S(1), S(2), S(3)], field=True) == (QQ, [QQ(1), QQ(2), QQ(3)])
assert construct_domain([S(1)/2, S(2)]) == (QQ, [QQ(1,2), QQ(2)])
assert construct_domain([3.14, 1, S(1)/2]) == (RR, [RR(3.14), RR(1.0), RR(0.5)])
assert construct_domain([3.14, sqrt(2)], extension=None) == (EX, [EX(3.14), EX(sqrt(2))])
assert construct_domain([3.14, sqrt(2)], extension=True) == (EX, [EX(3.14), EX(sqrt(2))])
assert construct_domain([1, sqrt(2)], extension=None) == (EX, [EX(1), EX(sqrt(2))])
alg = QQ.algebraic_field(sqrt(2))
assert construct_domain([7, S(1)/2, sqrt(2)], extension=True) == \
(alg, [alg.convert(7), alg.convert(S(1)/2), alg.convert(sqrt(2))])
alg = QQ.algebraic_field(sqrt(2)+sqrt(3))
assert construct_domain([7, sqrt(2), sqrt(3)], extension=True) == \
(alg, [alg.convert(7), alg.convert(sqrt(2)), alg.convert(sqrt(3))])
dom = ZZ[x]
assert construct_domain([2*x, 3]) == \
(dom, [dom.convert(2*x), dom.convert(3)])
dom = ZZ[x,y]
assert construct_domain([2*x, 3*y]) == \
(dom, [dom.convert(2*x), dom.convert(3*y)])
dom = QQ[x]
assert construct_domain([x/2, 3]) == \
(dom, [dom.convert(x/2), dom.convert(3)])
dom = QQ[x,y]
assert construct_domain([x/2, 3*y]) == \
(dom, [dom.convert(x/2), dom.convert(3*y)])
dom = RR[x]
assert construct_domain([x/2, 3.5]) == \
(dom, [dom.convert(x/2), dom.convert(3.5)])
dom = RR[x,y]
assert construct_domain([x/2, 3.5*y]) == \
(dom, [dom.convert(x/2), dom.convert(3.5*y)])
dom = ZZ.frac_field(x)
assert construct_domain([2/x, 3]) == \
(dom, [dom.convert(2/x), dom.convert(3)])
dom = ZZ.frac_field(x,y)
assert construct_domain([2/x, 3*y]) == \
(dom, [dom.convert(2/x), dom.convert(3*y)])
| 31.931507 | 93 | 0.571858 | 396 | 2,331 | 3.300505 | 0.103535 | 0.24101 | 0.305279 | 0.084162 | 0.762051 | 0.709258 | 0.651109 | 0.611324 | 0.408569 | 0.34583 | 0 | 0.061637 | 0.192621 | 2,331 | 72 | 94 | 32.375 | 0.632837 | 0.024453 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.431818 | 1 | 0.022727 | false | 0 | 0.090909 | 0 | 0.113636 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
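The assertions above walk through `construct_domain`'s selection rules for numeric coefficients: integers give ZZ, rationals force QQ, floats force RR, and `field=True` upgrades ZZ to QQ. A rough pure-Python sketch of just that numeric triage (the `choose_domain` name and the string return values are illustrative stand-ins, not sympy's actual API):

```python
from fractions import Fraction

def choose_domain(coeffs, field=False):
    """Toy version of the numeric part of sympy's domain selection:
    any float forces RR, any rational forces QQ, otherwise ZZ
    (or QQ when a field is explicitly requested)."""
    if any(isinstance(c, float) for c in coeffs):
        return 'RR'
    if any(isinstance(c, Fraction) for c in coeffs):
        return 'QQ'
    return 'QQ' if field else 'ZZ'

print(choose_domain([1, 2, 3]))                  # ZZ
print(choose_domain([1, 2, 3], field=True))      # QQ
print(choose_domain([Fraction(1, 2), 2]))        # QQ
print(choose_domain([3.14, 1, Fraction(1, 2)]))  # RR
```

The real `construct_domain` additionally handles algebraic extensions (`extension=True`), polynomial rings like `ZZ[x]`, and fraction fields, as exercised by the later assertions.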
e00eb27582115bc442ef2022e5d9b6a48d3c4343 | 19,851 | py | Python | minerva/backend/models/pytorch/callbacks.py | minerva-ml/minerva-training-materials | bd6a75f93dc3609c346bf29baa397ef0459ccfb0 | [
"MIT"
] | 19 | 2018-05-25T03:51:14.000Z | 2021-12-27T12:43:47.000Z | minerva/backend/models/pytorch/callbacks.py | neptune-ml/minerva | a46f10e692a5f55c9c3e26add3cb197b6d00038c | [
"MIT"
] | 37 | 2017-12-21T12:17:29.000Z | 2018-03-29T10:49:44.000Z | minerva/backend/models/pytorch/callbacks.py | minerva-ml/minerva-training-materials | bd6a75f93dc3609c346bf29baa397ef0459ccfb0 | [
"MIT"
] | 5 | 2018-06-03T19:31:07.000Z | 2021-04-30T21:47:22.000Z | import os
import shutil
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from torch.optim.lr_scheduler import ExponentialLR
from minerva.backend.models.pytorch.utils import overlay_box, overlay_keypoints, Averager, save_model
from minerva.backend.models.pytorch.validation import score_model_multi_output, predict_on_batch_multi_output
from minerva.backend.utils import get_unique_channel_name
from minerva.utils import get_logger
from deepsense import neptune
logger = get_logger()
class Callback:
def __init__(self):
self.epoch_id = None
self.batch_id = None
self.model = None
self.optimizer = None
self.loss_function = None
self.validation_datagen = None
self.lr_scheduler = None
def set_params(self, transformer, validation_datagen):
self.model = transformer.model
self.optimizer = transformer.optimizer
self.loss_function = transformer.loss_function
self.validation_datagen = validation_datagen
def on_train_begin(self, *args, **kwargs):
self.epoch_id = 0
self.batch_id = 0
def on_train_end(self, *args, **kwargs):
pass
def on_epoch_begin(self, *args, **kwargs):
pass
def on_epoch_end(self, *args, **kwargs):
self.epoch_id += 1
self.batch_id = 0
def on_batch_begin(self, *args, **kwargs):
pass
def on_batch_end(self, *args, **kwargs):
self.batch_id += 1
class CallbackList:
def __init__(self, callbacks=None):
if callbacks is None:
self.callbacks = []
elif isinstance(callbacks, Callback):
self.callbacks = [callbacks]
else:
self.callbacks = callbacks
def __len__(self):
return len(self.callbacks)
def set_params(self, *args, **kwargs):
for callback in self.callbacks:
callback.set_params(*args, **kwargs)
def on_train_begin(self, *args, **kwargs):
for callback in self.callbacks:
callback.on_train_begin(*args, **kwargs)
def on_train_end(self, *args, **kwargs):
for callback in self.callbacks:
callback.on_train_end(*args, **kwargs)
def on_epoch_begin(self, *args, **kwargs):
for callback in self.callbacks:
callback.on_epoch_begin(*args, **kwargs)
def on_epoch_end(self, *args, **kwargs):
for callback in self.callbacks:
callback.on_epoch_end(*args, **kwargs)
def on_batch_begin(self, *args, **kwargs):
for callback in self.callbacks:
callback.on_batch_begin(*args, **kwargs)
def on_batch_end(self, *args, **kwargs):
for callback in self.callbacks:
callback.on_batch_end(*args, **kwargs)
class TrainingMonitor(Callback):
def __init__(self, epoch_every=None, batch_every=None):
super().__init__()
self.epoch_loss_averager = Averager()
self.epoch_acc_averager = Averager()
if epoch_every == 0:
self.epoch_every = False
else:
self.epoch_every = epoch_every
if batch_every == 0:
self.batch_every = False
else:
self.batch_every = batch_every
def on_train_begin(self, *args, **kwargs):
self.epoch_loss_averager.reset()
self.epoch_acc_averager.reset()
self.epoch_id = 0
self.batch_id = 0
def on_epoch_end(self, *args, **kwargs):
epoch_avg_loss = self.epoch_loss_averager.value
epoch_avg_acc = self.epoch_acc_averager.value
self.epoch_loss_averager.reset()
self.epoch_acc_averager.reset()
if self.epoch_every and ((self.epoch_id % self.epoch_every) == 0):
logger.info('epoch {0} loss: {1:.5f}'.format(self.epoch_id, epoch_avg_loss))
logger.info('epoch {0} accuracy: {1:.5f}'.format(self.epoch_id, epoch_avg_acc))
self.epoch_id += 1
self.batch_id = 0
def on_batch_end(self, metrics, *args, **kwargs):
batch_loss = metrics['batch_loss']
batch_acc = metrics['batch_acc']
self.epoch_loss_averager.send(batch_loss)
self.epoch_acc_averager.send(batch_acc)
if self.batch_every and ((self.batch_id % self.batch_every) == 0):
logger.info('epoch {0} batch {1} loss: {2:.5f}'.format(self.epoch_id, self.batch_id, batch_loss))
logger.info('epoch {0} batch {1} accuracy: {2:.5f}'.format(self.epoch_id, self.batch_id, batch_acc))
self.batch_id += 1
class ValidationMonitor(Callback):
def __init__(self, epoch_every=None, batch_every=None):
super().__init__()
if epoch_every == 0:
self.epoch_every = False
else:
self.epoch_every = epoch_every
if batch_every == 0:
self.batch_every = False
else:
self.batch_every = batch_every
def on_epoch_end(self, *args, **kwargs):
if self.epoch_every and ((self.epoch_id % self.epoch_every) == 0):
self.model.eval()
val_loss, val_acc = score_model_multi_output(self.model, self.loss_function, self.validation_datagen)
self.model.train()
logger.info('epoch {0} validation loss: {1:.5f}'.format(self.epoch_id, val_loss))
logger.info('epoch {0} validation accuracy: {1:.5f}'.format(self.epoch_id, val_acc))
self.epoch_id += 1
self.batch_id = 0
def on_batch_end(self, metrics, *args, **kwargs):
if self.batch_every and ((self.batch_id % self.batch_every) == 0):
self.model.eval()
val_loss, val_acc = score_model_multi_output(self.model, self.loss_function, self.validation_datagen)
self.model.train()
logger.info('epoch {0} batch {1} validation loss: {2:.5f}'.format(self.epoch_id,
self.batch_id,
val_loss))
logger.info('epoch {0} batch {1} validation accuracy: {2:.5f}'.format(self.epoch_id,
self.batch_id,
val_acc))
self.batch_id += 1
class PlotBoundingBoxPredictions(Callback):
def __init__(self, img_dir, bins_nr, epoch_every=1, batch_every=None):
super().__init__()
self.img_dir = img_dir
self.bins_nr = bins_nr
if epoch_every == 0:
self.epoch_every = False
else:
self.epoch_every = epoch_every
if batch_every == 0:
self.batch_every = False
else:
self.batch_every = batch_every
self._create_dir()
def on_batch_end(self, *args, **kwargs):
if self.batch_every and ((self.batch_id % self.batch_every) == 0):
self._save_images()
self.batch_id += 1
def on_epoch_end(self, *args, **kwargs):
if self.epoch_every and ((self.epoch_id % self.epoch_every) == 0):
self._save_images()
self.epoch_id += 1
self.batch_id = 0
def _create_dir(self):
try:
shutil.rmtree(self.img_dir)
except Exception:
pass
os.makedirs(self.img_dir, exist_ok=True)
def _save_images(self):
for i, (image, box_coord, true_box) in enumerate(
predict_on_batch_multi_output(self.model, self.validation_datagen)):
image_with_box = overlay_box(image, box_coord, true_box, self.bins_nr)
filepath = os.path.join(self.img_dir,
'epoch{}_batch{}_idx{}.png'.format(self.epoch_id, self.batch_id, i))
plt.imsave(filepath, image_with_box)
class PlotKeyPointsPredictions(Callback):
def __init__(self, img_dir, bins_nr, epoch_every=1, batch_every=None):
super().__init__()
self.img_dir = img_dir
self.bins_nr = bins_nr
self.batch_every = batch_every
self.epoch_every = epoch_every
self._create_dir()
def on_batch_end(self, *args, **kwargs):
if self.batch_every and self.batch_id % self.batch_every == 0 and self.batch_id > 0:
self._save_images()
self.batch_id += 1
def on_epoch_end(self, *args, **kwargs):
if self.epoch_every and self.epoch_id % self.epoch_every == 0 and self.epoch_id > 0:
self._save_images()
self.epoch_id += 1
self.batch_id = 0
def _create_dir(self):
try:
shutil.rmtree(self.img_dir)
except Exception:
pass
os.makedirs(self.img_dir, exist_ok=True)
def _save_images(self):
for i, (image, keypoints_pred, keypoints_true) in enumerate(
predict_on_batch_multi_output(self.model, self.validation_datagen)):
image_with_keypoints = overlay_keypoints(image, keypoints_pred, keypoints_true, self.bins_nr)
filepath = os.path.join(self.img_dir,
'epoch{}_batch{}_idx{}.png'.format(self.epoch_id, self.batch_id, i))
plt.imsave(filepath, image_with_keypoints)
class ExponentialLRScheduler(Callback):
def __init__(self, gamma, epoch_every=1, batch_every=None):
super().__init__()
self.gamma = gamma
if epoch_every == 0:
self.epoch_every = False
else:
self.epoch_every = epoch_every
if batch_every == 0:
self.batch_every = False
else:
self.batch_every = batch_every
def set_params(self, transformer, validation_datagen):
self.validation_datagen = validation_datagen
self.model = transformer.model
self.optimizer = transformer.optimizer
self.loss_function = transformer.loss_function
self.lr_scheduler = ExponentialLR(self.optimizer, self.gamma, last_epoch=-1)
def on_train_begin(self, *args, **kwargs):
self.epoch_id = 0
self.batch_id = 0
logger.info('initial lr: {0}'.format(self.optimizer.state_dict()['param_groups'][0]['initial_lr']))
def on_epoch_end(self, *args, **kwargs):
if self.epoch_every and (((self.epoch_id + 1) % self.epoch_every) == 0):
self.lr_scheduler.step()
logger.info('epoch {0} current lr: {1}'.format(self.epoch_id + 1,
self.optimizer.state_dict()['param_groups'][0]['lr']))
self.epoch_id += 1
self.batch_id = 0
def on_batch_end(self, *args, **kwargs):
if self.batch_every and ((self.batch_id % self.batch_every) == 0):
self.lr_scheduler.step()
logger.info('epoch {0} batch {1} current lr: {2}'.format(
self.epoch_id + 1, self.batch_id + 1, self.optimizer.state_dict()['param_groups'][0]['lr']))
self.batch_id += 1
class ModelCheckpoint(Callback):
def __init__(self, checkpoint_dir, best_only=False, epoch_every=1, batch_every=None):
super().__init__()
self.checkpoint_dir = checkpoint_dir
if best_only:
raise NotImplementedError("best_only checkpointing is not implemented yet")
if epoch_every == 0:
self.epoch_every = False
else:
self.epoch_every = epoch_every
if batch_every == 0:
self.batch_every = False
else:
self.batch_every = batch_every
def on_train_begin(self, *args, **kwargs):
self.epoch_id = 0
self.batch_id = 0
os.makedirs(self.checkpoint_dir, exist_ok=True)
def on_epoch_end(self, *args, **kwargs):
if self.epoch_every and ((self.epoch_id % self.epoch_every) == 0):
full_path = os.path.join(self.checkpoint_dir, 'model_epoch{0}.torch'.format(self.epoch_id))
save_model(self.model, full_path)
logger.info('epoch {0} model saved to {1}'.format(self.epoch_id, full_path))
self.epoch_id += 1
self.batch_id = 0
def on_batch_end(self, *args, **kwargs):
if self.batch_every and ((self.batch_id % self.batch_every) == 0):
full_path = os.path.join(self.checkpoint_dir,
'model_epoch{0}_batch{1}.torch'.format(self.epoch_id, self.batch_id))
save_model(self.model, full_path)
logger.info('epoch {0} batch {1} model saved to {2}'.format(self.epoch_id, self.batch_id, full_path))
self.batch_id += 1
class NeptuneMonitor(Callback):
def __init__(self, name=None):
super().__init__()
self.ctx = neptune.Context()
self.name = name
self.epoch_loss_averager = Averager()
self.epoch_acc_averager = Averager()
def _get_channel_name(self, base_name):
if base_name not in self._channel_names:
self._channel_names[base_name] = get_unique_channel_name(self.ctx, base_name, suffix=self.name)
return self._channel_names[base_name]
def on_train_begin(self, *args, **kwargs):
self.epoch_loss_averager.reset()
self.epoch_acc_averager.reset()
self.epoch_id = 0
self.batch_id = 0
self._channel_names = {}
def on_batch_end(self, metrics, *args, **kwargs):
batch_loss = metrics['batch_loss']
batch_acc = metrics['batch_acc']
self.epoch_loss_averager.send(batch_loss)
self.epoch_acc_averager.send(batch_acc)
logs = {'epoch_id': self.epoch_id, 'batch_id': self.batch_id, 'batch_loss': batch_loss,
'batch_acc': batch_acc}
self.ctx.channel_send(self._get_channel_name('batch_loss'), x=logs['batch_id'], y=logs['batch_loss'])
self.ctx.channel_send(self._get_channel_name('batch_acc'), x=logs['batch_id'], y=logs['batch_acc'])
self.batch_id += 1
def on_epoch_end(self, *args, **kwargs):
epoch_avg_loss = self.epoch_loss_averager.value
epoch_avg_acc = self.epoch_acc_averager.value
self.epoch_loss_averager.reset()
self.epoch_acc_averager.reset()
self.model.eval()
val_loss, val_acc = score_model_multi_output(self.model, self.loss_function, self.validation_datagen)
self.model.train()
logs = {'epoch_id': self.epoch_id, 'batch_id': self.batch_id,
'epoch_loss': epoch_avg_loss,
'epoch_acc': epoch_avg_acc, 'epoch_val_loss': val_loss, 'epoch_val_acc': val_acc}
self._send_numeric_channels(logs)
self.epoch_id += 1
def _send_numeric_channels(self, logs):
self.ctx.channel_send(self._get_channel_name('epoch_loss'), x=logs['epoch_id'], y=logs['epoch_loss'])
self.ctx.channel_send(self._get_channel_name('epoch_acc'), x=logs['epoch_id'], y=logs['epoch_acc'])
self.ctx.channel_send(self._get_channel_name('epoch_val_loss'), x=logs['epoch_id'], y=logs['epoch_val_loss'])
self.ctx.channel_send(self._get_channel_name('epoch_val_acc'), x=logs['epoch_id'], y=logs['epoch_val_acc'])
class NeptuneMonitorLocalizer(NeptuneMonitor):
def __init__(self, bins_nr, img_nr, name=None):
super().__init__(name=name)
self.bins_nr = bins_nr
self.img_nr = img_nr
def on_epoch_end(self, *args, **kwargs):
epoch_avg_loss = self.epoch_loss_averager.value
epoch_avg_acc = self.epoch_acc_averager.value
self.epoch_loss_averager.reset()
self.epoch_acc_averager.reset()
self.model.eval()
val_loss, val_acc = score_model_multi_output(self.model, self.loss_function, self.validation_datagen)
self.model.train()
logs = {'epoch_id': self.epoch_id, 'batch_id': self.batch_id,
'epoch_loss': epoch_avg_loss,
'epoch_acc': epoch_avg_acc, 'epoch_val_loss': val_loss, 'epoch_val_acc': val_acc}
self._send_numeric_channels(logs)
for i, (image, y_pred, y_true) in enumerate(
predict_on_batch_multi_output(self.model, self.validation_datagen)):
image_with_box = overlay_box(image, y_pred, y_true, self.bins_nr)
pill_image = Image.fromarray((image_with_box * 255.).astype(np.uint8))
self.ctx.channel_send(self._get_channel_name("plotted bbox"), neptune.Image(
name='epoch{}_batch{}_idx{}'.format(self.epoch_id, self.batch_id, i),
description="true and prediction bbox",
data=pill_image))
if i == self.img_nr:
break
self.epoch_id += 1
class NeptuneMonitorKeypoints(NeptuneMonitor):
def __init__(self, bins_nr, img_nr, name=None):
super().__init__(name=name)
self.bins_nr = bins_nr
self.img_nr = img_nr
def on_epoch_end(self, *args, **kwargs):
epoch_avg_loss = self.epoch_loss_averager.value
epoch_avg_acc = self.epoch_acc_averager.value
self.epoch_loss_averager.reset()
self.epoch_acc_averager.reset()
self.model.eval()
val_loss, val_acc = score_model_multi_output(self.model, self.loss_function, self.validation_datagen)
self.model.train()
logs = {'epoch_id': self.epoch_id, 'batch_id': self.batch_id,
'epoch_loss': epoch_avg_loss,
'epoch_acc': epoch_avg_acc, 'epoch_val_loss': val_loss, 'epoch_val_acc': val_acc}
self._send_numeric_channels(logs)
for i, (image, y_pred, y_true) in enumerate(
predict_on_batch_multi_output(self.model, self.validation_datagen)):
image_with_keypoints = overlay_keypoints(image, y_pred, y_true, self.bins_nr)
pill_image = Image.fromarray((image_with_keypoints * 255.).astype(np.uint8))
self.ctx.channel_send(self._get_channel_name("plotted key points"), neptune.Image(
name='epoch{}_batch{}_idx{}'.format(self.epoch_id, self.batch_id, i),
description="true and prediction key points",
data=pill_image))
if i == self.img_nr:
break
self.epoch_id += 1
class ExperimentTiming(Callback):
def __init__(self):
super().__init__()
self.batch_start = None
self.epoch_start = None
self.current_sum = None
self.current_mean = None
def on_train_begin(self, *args, **kwargs):
self.epoch_id = 0
self.batch_id = 0
logger.info('starting training...')
def on_train_end(self, *args, **kwargs):
logger.info('training finished...')
def on_epoch_begin(self, *args, **kwargs):
if self.epoch_id > 0:
epoch_time = datetime.now() - self.epoch_start
logger.info('epoch {0} time {1}'.format(self.epoch_id - 1, str(epoch_time)[:-7]))
self.epoch_start = datetime.now()
self.current_sum = timedelta()
self.current_mean = timedelta()
logger.info('epoch {0} ...'.format(self.epoch_id))
def on_batch_begin(self, *args, **kwargs):
if self.batch_id > 0:
current_delta = datetime.now() - self.batch_start
self.current_sum += current_delta
self.current_mean = self.current_sum / self.batch_id
if self.batch_id > 0 and (((self.batch_id - 1) % 10) == 0):
logger.info('epoch {0} average batch time: {1}'.format(self.epoch_id, str(self.current_mean)[:-5]))
if self.batch_id == 0 or self.batch_id % 10 == 0:
logger.info('epoch {0} batch {1} ...'.format(self.epoch_id, self.batch_id))
self.batch_start = datetime.now()
class CallbackReduceLROnPlateau(Callback): # thank you keras
def __init__(self):
super().__init__()
pass
class CallbackEarlyStopping(Callback): # thank you keras
def __init__(self):
super().__init__()
pass
| 38.771484 | 117 | 0.620573 | 2,614 | 19,851 | 4.389059 | 0.073068 | 0.083936 | 0.048897 | 0.032598 | 0.815828 | 0.769023 | 0.749673 | 0.690839 | 0.670269 | 0.647869 | 0 | 0.009884 | 0.266082 | 19,851 | 511 | 118 | 38.847358 | 0.77761 | 0.001562 | 0 | 0.689157 | 0 | 0 | 0.064591 | 0.006106 | 0 | 0 | 0 | 0 | 0 | 1 | 0.144578 | false | 0.016867 | 0.028916 | 0.00241 | 0.212048 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
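The `CallbackList` in the file above is a plain fan-out: every hook loops over the registered callbacks and forwards its arguments unchanged. A minimal self-contained sketch of the same dispatch pattern (`RecordingCallback` and `MiniCallbackList` are hypothetical stand-ins, not part of the minerva codebase):

```python
class RecordingCallback:
    """Toy callback that records which hooks fired and with what metrics."""
    def __init__(self):
        self.events = []

    def on_train_begin(self, **kwargs):
        self.events.append('train_begin')

    def on_batch_end(self, metrics, **kwargs):
        self.events.append(('batch_end', metrics['batch_loss']))


class MiniCallbackList:
    """Fans each hook out to every registered callback, in order."""
    def __init__(self, callbacks):
        self.callbacks = callbacks

    def on_train_begin(self, **kwargs):
        for cb in self.callbacks:
            cb.on_train_begin(**kwargs)

    def on_batch_end(self, metrics, **kwargs):
        for cb in self.callbacks:
            cb.on_batch_end(metrics, **kwargs)


cb = RecordingCallback()
cbs = MiniCallbackList([cb])
cbs.on_train_begin()
cbs.on_batch_end({'batch_loss': 0.5, 'batch_acc': 0.9})
print(cb.events)  # ['train_begin', ('batch_end', 0.5)]
```

The training loop only ever talks to the list, so monitors, checkpointers, and LR schedulers can be mixed and matched without the loop knowing about any of them.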
e015eab62c5138691ff3e8e80a8dd07b67260465 | 354 | py | Python | mycroft-file/main.py | atticus112/python-things | 117b6c0c778a39ae9f6f02afa0abe81b607c0c8e | [
"Unlicense"
] | null | null | null | mycroft-file/main.py | atticus112/python-things | 117b6c0c778a39ae9f6f02afa0abe81b607c0c8e | [
"Unlicense"
] | 1 | 2017-11-04T23:38:18.000Z | 2017-11-05T23:03:05.000Z | mycroft-file/main.py | atticus112/python-things | 117b6c0c778a39ae9f6f02afa0abe81b607c0c8e | [
"Unlicense"
] | 2 | 2017-11-05T22:39:43.000Z | 2018-08-24T02:43:20.000Z | from mycroft import MycroftSkill, intent_file_handler
class Skill(MycroftSkill):
"""The skill class"""
def __init__(self):
"""Init"""
MycroftSkill.__init__(self)
@intent_file_handler('check.email.intent')
def handle_email(self, message):
"""intent handle"""
pass
def create_skill():
return Skill()
| 23.6 | 53 | 0.646893 | 39 | 354 | 5.512821 | 0.512821 | 0.093023 | 0.15814 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.228814 | 354 | 14 | 54 | 25.285714 | 0.787546 | 0.096045 | 0 | 0 | 0 | 0 | 0.059211 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.111111 | 0.111111 | 0.111111 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 4 |
e039d16e451587cf9a744f3cea32ebf50dad309a | 25,986 | py | Python | ekab/tbox.py | stawo/ekabPlanner | 63f78d4932daa1c17b1fdff30b074c8ad1a741d3 | [
"MIT"
] | null | null | null | ekab/tbox.py | stawo/ekabPlanner | 63f78d4932daa1c17b1fdff30b074c8ad1a741d3 | [
"MIT"
] | null | null | null | ekab/tbox.py | stawo/ekabPlanner | 63f78d4932daa1c17b1fdff30b074c8ad1a741d3 | [
"MIT"
] | null | null | null | from pyparsing import *
#~ Local libraries
from syntax.syntax import Syntax
class Concept:
def __init__(self, name):
if not isinstance(name, str):
raise Exception("A concept name must be provided as a string.")
self.__name = name
def __repr__(self):
return self.__name
def __eq__(self, other):
if isinstance(other, Concept) and str(other) == self.__name:
return True
return False
def __hash__(self):
return hash(self.__name)
class Role:
def __init__(self, name):
if not isinstance(name, str):
raise Exception("A role name must be provided as a string.")
self.__name = name
def __repr__(self):
return self.__name
def __eq__(self, other):
if isinstance(other, Role) and str(other) == self.__name:
return True
return False
def __hash__(self):
return hash(self.__name)
class Axiom:
def __init__(self, axiomToParse, conceptList, roleList):
#~ The function __init__ takes as input:
#~ - axiomToParse: the axiom that needs to be parsed.
#~ - conceptList: a (possibly empty) list (or set, or tuple) of concepts. It is used to validate the assertion.
#~ - roleList: a (possibly empty) list (or set, or tuple) of roles. It is used to validate the assertion.
self.__leftTerm = None
self.__leftTermExists = False
self.__leftTermInverse = False
self.__rightTerm = None
self.__rightTermExists = False
self.__rightTermInverse = False
self.__disjoint = False
self.__functionality = False
#~ Check the input
self.__checkInput(axiomToParse, conceptList, roleList)
#~ Parse the input
self.__parseAxiom(axiomToParse, conceptList, roleList)
def __checkInput(self, axiomToParse, conceptList, roleList):
if not isinstance(axiomToParse, (str, ParseResults, tuple, list)):
#~ Invalid format
raise Exception("The provided axiom is not in a valid format: " + str(type(axiomToParse)) + "\n" + str(axiomToParse))
#~ conceptList and roleList, if not None, must be lists of strings
if not (isinstance(conceptList, (list, tuple, set)) and \
all(isinstance(concept, str) for concept in conceptList)):
raise Exception("The concept list provided is not valid. It must be a list of strings.")
if not (isinstance(roleList, (list, tuple, set)) and \
all(isinstance(role, str) for role in roleList)):
raise Exception("The role list provided is not valid. It must be a list of strings.")
def __checkConcept(self, term, conceptList):
#~ Checks if term is a valid concept.
#~ term must not be a reserved keyword,
#~ and must be in conceptList
#~ Returns True if it is valid, False otherwise
if not isinstance(term, str):
raise Exception("The term passed to the function __checkConcept must be a string: " + str(type(term)))
syntax = Syntax()
#~ The term can't be a reserved keyword
if term in syntax.keywords:
return False
if not term in conceptList:
return False
return True
def __checkRole(self, term, roleList):
#~ Checks if term is a valid role.
#~ term must not be a reserved keyword,
#~ and must be in roleList
#~ Returns True if it is valid, False otherwise
if not isinstance(term, str):
raise Exception("The term passed to the function __checkRole must be a string: " + str(type(term)))
syntax = Syntax()
#~ The term can't be a reserved keyword
if term in syntax.keywords:
return False
if not term in roleList:
return False
return True
def __parseAxiom(self, axiomToParse, conceptList, roleList):
#~ (isA above :topC )
#~ (isA above (not (exists (inverse next )) ) )
#~ (isA (exists next ) (not (exists reachable-floor) ) )
#~ (isA next (not reachable-floor))
#~ (funct (inverse next ))
syntax = Syntax()
#~ If axiomToParse is a string, we parse it by considering the parenthesis that close every expression
#~ and analyse the resulting pyparsing.ParseResults.
#~ If axiomToParse is already a pyparsing.ParseResults (or a list, or a tuple), then we analyse it directly.
result = None
if isinstance(axiomToParse, str):
raise Exception("Sorry, at the moment we can't parse axioms that are passed as a string.\n" + axiomToParse)
elif isinstance(axiomToParse, (ParseResults, tuple, list)):
result = axiomToParse
if isinstance(result[0], str) and result[0] == syntax.isA:
#~ We have either a positive or negative axiom, for sure not a functional one
#~ Parse the first term
if isinstance(result[1], str):
#~ The first term is either an atomic concept A or atomic role P
if not (self.__checkConcept(result[1], conceptList) or self.__checkRole(result[1], roleList)):
raise Exception("The term \"" + result[1] + "\" is not valid, it can't be used to build the following axiom:\n" + str(result))
self.__leftTerm = result[1]
elif isinstance(result[1], (ParseResults, tuple, list)):
if isinstance(result[1][0], str) and result[1][0] == syntax.exists:
if isinstance(result[1][1], str):
#~ It's the term: exists P
if not self.__checkRole(result[1][1], roleList):
#~ The term is in neither lists
raise Exception("The term \"" + result[1][1] + "\" used for the following axiom is not a valid role.\n" + str(result))
self.__leftTerm = result[1][1]
self.__leftTermExists = True
elif isinstance(result[1][1], (ParseResults, tuple, list)) and \
isinstance(result[1][1][0], str) and result[1][1][0] == syntax.inverse and \
isinstance(result[1][1][1], str):
#~ It's the term: exists P^-
if not self.__checkRole(result[1][1][1], roleList):
#~ The term is in neither lists
raise Exception("The term \"" + result[1][1][1] + "\" used for the following axiom is not a valid role.\n" + str(result))
self.__leftTerm = result[1][1][1]
self.__leftTermExists = True
self.__leftTermInverse = True
else:
#~ Something is wrong.
raise Exception("The left term of the following axiom was not recognized as a valid one.\n" + str(result))
elif isinstance(result[1][0], str) and result[1][0] == syntax.inverse and \
isinstance(result[1][1], str):
#~ It's the term: P^-
if not self.__checkRole(result[1][1], roleList):
#~ The term is in neither lists
raise Exception("The term \"" + result[1][1] + "\" used for the following axiom is not a valid role.\n" + str(result))
self.__leftTerm = result[1][1]
self.__leftTermInverse = True
else:
#~ Something is wrong.
raise Exception("The left term of the following axiom was not recognized as a valid one.\n" + str(result))
else:
#~ Something is wrong.
raise Exception("The left term of the following axiom was not recognized as a valid one.\n" + str(result))
#~ Parse the second term
if isinstance(result[2], str):
#~ The second term is either an atomic concept A or atomic role P
if not (self.__checkConcept(result[2], conceptList) or self.__checkRole(result[2], roleList)):
raise Exception("The term \"" + result[2] + "\" is not valid, it can't be used to build the following axiom:\n" + str(result))
if self.__checkConcept(self.__leftTerm, conceptList) and self.__checkRole(result[2], roleList):
raise Exception("The term \"" + result[2] + "\" is a role, while \"" + self.__leftTerm + "\" is a concept.\nIt can't be used to build the following axiom:\n" + str(result))
if not self.__leftTermExists and \
self.__checkRole(self.__leftTerm, roleList) and \
not self.__checkRole(result[2], roleList):
raise Exception("The term \"" + result[2] + "\" is not a role, while \"" + self.__leftTerm + "\" is.\nIt can't be used to build the following axiom:\n" + str(result))
self.__rightTerm = result[2]
elif isinstance(result[2], (ParseResults, tuple, list)):
if isinstance(result[2][0], str) and result[2][0] == syntax.exists:
if isinstance(result[2][1], str):
#~ It's the term: exists P
if not self.__checkRole(result[2][1], roleList):
raise Exception("The term \"" + result[2][1] + "\" is not a valid role, it can't be used to build the following axiom:\n" + str(result))
#~ If the first term is either P or P^-, then we raise an Exception
if not self.__leftTermExists and self.__checkRole(self.__leftTerm, roleList):
raise Exception("The term \"" + result[2][1] + "\" is not a concept, while \"" + self.__leftTerm + "\" is.\nIt can't be used to build the following axiom:\n" + str(result))
self.__rightTerm = result[2][1]
self.__rightTermExists = True
elif isinstance(result[2][1], (ParseResults, tuple, list)) and \
isinstance(result[2][1][0], str) and result[2][1][0] == syntax.inverse and \
isinstance(result[2][1][1], str):
#~ It's the term: exists P^-
if not self.__checkRole(result[2][1][1], roleList):
raise Exception("The term \"" + result[2][1][1] + "\" is not a valid role, it can't be used to build the following axiom:\n" + str(result))
#~ If the first term is either P or P^-, then we raise an Exception
if not self.__leftTermExists and self.__checkRole(self.__leftTerm, roleList):
raise Exception("The term \"" + result[2][1][1] + "\" is not a concept, while \"" + self.__leftTerm + "\" is.\nIt can't be used to build the following axiom:\n" + str(result))
self.__rightTerm = result[2][1][1]
self.__rightTermExists = True
self.__rightTermInverse = True
else:
#~ Something is wrong.
raise Exception("The right term of the following axiom was not recognized as a valid one.\n" + str(result))
elif isinstance(result[2][0], str) and result[2][0] == syntax.inverse and \
isinstance(result[2][1], str):
#~ It's the term: P^-
if not self.__checkRole(result[2][1], roleList):
raise Exception("The term \"" + result[2][1] + "\" is not a valid role, it can't be used to build the following axiom:\n" + str(result))
#~ If the first term is either P or P^-, then we raise an Exception
if not self.__leftTermExists and self.__checkRole(self.__leftTerm, roleList):
raise Exception("The term \"" + result[2][1] + "\" is not a concept, while \"" + self.__leftTerm + "\" is.\nIt can't be used to build the following axiom:\n" + str(result))
self.__rightTerm = result[2][1]
self.__rightTermInverse = True
elif isinstance(result[2][0], str) and result[2][0] == syntax.neg:
#~ The axiom is a disjunction
self.__disjoint = True
if isinstance(result[2][1], str):
#~ The second term is either an atomic concept A or atomic role P
if not (self.__checkConcept(result[2][1], conceptList) or self.__checkRole(result[2][1], roleList)):
raise Exception("The term \"" + result[2][1] + "\" is not valid, it can't be used to build the following axiom:\n" + str(result))
if (self.__checkConcept(self.__leftTerm, conceptList) or \
(self.__leftTermExists and self.__checkRole(self.__leftTerm, roleList))) and \
not self.__checkConcept(result[2][1], conceptList):
raise Exception("The term \"" + result[2][1] + "\" is not a concept, while \"" + self.__leftTerm + "\" is.\nIt can't be used to build the following axiom:\n" + str(result))
if self.__checkRole(self.__leftTerm, roleList) and not self.__checkRole(result[2][1], roleList):
raise Exception("The term \"" + result[2][1] + "\" is not a role, while \"" + self.__leftTerm + "\" is.\nIt can't be used to build the following axiom:\n" + str(result))
self.__rightTerm = result[2][1]
elif isinstance(result[2][1], (ParseResults, tuple, list)):
if isinstance(result[2][1][0], str) and result[2][1][0] == syntax.exists:
if isinstance(result[2][1][1], str):
#~ It's the term: exists P
if not self.__checkRole(result[2][1][1], roleList):
raise Exception("The term \"" + result[2][1][1] + "\" is not a valid role, it can't be used to build the following axiom:\n" + str(result))
#~ If the first term is either P or P^-, then we raise an Exception
if not self.__leftTermExists and self.__checkRole(self.__leftTerm, roleList):
raise Exception("The term \"" + result[2][1][1] + "\" is not a concept, while \"" + self.__leftTerm + "\" is.\nIt can't be used to build the following axiom:\n" + str(result))
self.__rightTerm = result[2][1][1]
self.__rightTermExists = True
elif isinstance(result[2][1][1], (ParseResults, tuple, list)) and \
isinstance(result[2][1][1][0], str) and result[2][1][1][0] == syntax.inverse and \
isinstance(result[2][1][1][1], str):
#~ It's the term: exists P^-
if not self.__checkRole(result[2][1][1][1], roleList):
raise Exception("The term \"" + result[2][1][1][1] + "\" is not a valid role, it can't be used to build the following axiom:\n" + str(result))
#~ If the first term is either P or P^-, then we raise an Exception
if not self.__leftTermExists and self.__checkRole(self.__leftTerm, roleList):
raise Exception("The term \"" + result[2][1][1][1] + "\" is not a concept, while \"" + self.__leftTerm + "\" is.\nIt can't be used to build the following axiom:\n" + str(result))
self.__rightTerm = result[2][1][1][1]
self.__rightTermExists = True
self.__rightTermInverse = True
else:
#~ Something is wrong.
raise Exception("The right term of the following axiom was not recognized as a valid one.\n" + str(result))
elif isinstance(result[2][1][0], str) and result[2][1][0] == syntax.inverse and \
isinstance(result[2][1][1], str):
#~ It's the term: P^-
if not self.__checkRole(result[2][1][1], roleList):
raise Exception("The term \"" + result[2][1][1] + "\" is not a valid role, it can't be used to build the following axiom:\n" + str(result))
#~ If the first term is either P or P^-, then we raise an Exception
if not self.__leftTermExists and self.__checkRole(self.__leftTerm, roleList):
raise Exception("The term \"" + result[2][1][1] + "\" is not a concept, while \"" + self.__leftTerm + "\" is.\nIt can't be used to build the following axiom:\n" + str(result))
self.__rightTerm = result[2][1][1]
self.__rightTermInverse = True
else:
#~ Something is wrong.
raise Exception("The right term of the following axiom was not recognized as a valid one.\n" + str(result))
else:
#~ Something is wrong.
raise Exception("The right term of the following axiom was not recognized as a valid one.\n" + str(result))
else:
#~ Something is wrong.
raise Exception("The right term of the following axiom was not recognized as a valid one.\n" + str(result))
elif isinstance(result[0], str) and result[0] == syntax.funct:
#~ It's a functionality axiom
self.__functionality = True
#~ Parse the first term
if isinstance(result[1], str):
#~ The first term is an atomic role P
if not self.__checkRole(result[1], roleList):
raise Exception("The term \"" + result[1] + "\" is not a valid role, it can't be used to build the following axiom:\n" + str(result))
self.__leftTerm = result[1]
elif isinstance(result[1], (ParseResults, tuple, list)) and \
isinstance(result[1][0], str) and result[1][0] == syntax.inverse and \
isinstance(result[1][1], str):
#~ The first term is: P^-
if not self.__checkRole(result[1][1], roleList):
raise Exception("The term \"" + result[1][1] + "\" is not a valid role, it can't be used to build the following axiom:\n" + str(result))
self.__leftTerm = result[1][1]
self.__leftTermInverse = True
else:
#~ Something is wrong.
raise Exception("The following functionality axiom was not recognized as a valid one.\n" + str(result))
else:
#~ Something is wrong.
raise Exception("The axiom was not recognized as a valid one.\n" + str(result))
def leftTerm(self):
return self.__leftTerm
def leftTermExists(self):
return self.__leftTermExists
def leftTermInverse(self):
return self.__leftTermInverse
def rightTerm(self):
return self.__rightTerm
def rightTermExists(self):
return self.__rightTermExists
def rightTermInverse(self):
return self.__rightTermInverse
def disjoint(self):
return self.__disjoint
def functionality(self):
return self.__functionality
def __repr__(self):
axiomStr = ""
if self.__functionality:
if self.__leftTermInverse:
axiomStr = "(funct (inverse " + self.__leftTerm + ") )"
else:
axiomStr = "(funct " + self.__leftTerm + ")"
else:
axiomStr = "(isA "
if self.__leftTermExists and self.__leftTermInverse:
axiomStr += "(exists (inverse " + self.__leftTerm + ") )"
elif self.__leftTermExists:
axiomStr += "(exists " + self.__leftTerm + ")"
elif self.__leftTermInverse:
axiomStr += "(inverse " + self.__leftTerm + ")"
else:
axiomStr += self.__leftTerm
axiomStr += " "
if self.__disjoint:
axiomStr += "(not "
if self.__rightTermExists and self.__rightTermInverse:
axiomStr += "(exists (inverse " + self.__rightTerm + ") )"
elif self.__rightTermExists:
axiomStr += "(exists " + self.__rightTerm + ")"
elif self.__rightTermInverse:
axiomStr += "(inverse " + self.__rightTerm + ")"
else:
axiomStr += self.__rightTerm
axiomStr += ")"
else:
if self.__rightTermExists and self.__rightTermInverse:
axiomStr += "(exists (inverse " + self.__rightTerm + ") )"
elif self.__rightTermExists:
axiomStr += "(exists " + self.__rightTerm + ")"
elif self.__rightTermInverse:
axiomStr += "(inverse " + self.__rightTerm + ")"
else:
axiomStr += self.__rightTerm
axiomStr += ")"
return axiomStr
class Assertion:
#~ Represent a membership assertion in a ABox
#~ Possible type of assertions:
#~ (concept val1)
#~ (role val1 val2)
def __init__(self, assertionToParse, conceptList, roleList, individualList):
#~ The function __init__ takes as input:
#~ - assertionToParse: the assertion that needs to be parsed.
#~ - conceptList: a (possibly empty) list of concepts. It is used to validate the assertion.
#~ - roleList: a (possibly empty) list of roles. It is used to validate the assertion.
#~ - individualList: a (possibly empty) list of individuals. It is used to validate the assertion.
self.__term = None
self.__individual1 = None
self.__individual2 = None
#~ Check the input
self.__checkInput(assertionToParse, conceptList, roleList, individualList)
#~ Parse the input
self.__parseAssertion(assertionToParse, conceptList, roleList, individualList)
def __checkInput(self, assertionToParse, conceptList, roleList, individualList):
if not isinstance(assertionToParse, (str, ParseResults, tuple, list)):
#~ Invalid format
raise Exception("The provided assertion is not in a valid format: " + str(type(assertionToParse)) + "\n" + str(assertionToParse))
#~ conceptList, roleList, and individualList, if not None, must be lists of strings
if not (isinstance(conceptList, (list, tuple, set)) and \
all(isinstance(concept, str) for concept in conceptList)):
raise Exception("The concept list provided is not valid. It must be a list of strings.")
if not (isinstance(roleList, (list, tuple, set)) and \
all(isinstance(role, str) for role in roleList)):
raise Exception("The role list provided is not valid. It must be a list of strings.")
if not (isinstance(individualList, (list, tuple, set)) and \
all(isinstance(individual, str) for individual in individualList)):
raise Exception("The individual list provided is not valid. It must be a list of strings.")
def __checkConcept(self, term, conceptList):
#~ Checks if term is a valid concept.
#~ term must not be a reserved keyword,
#~ and must be in conceptList
#~ Returns True if it is valid, False otherwise
if not isinstance(term, str):
raise Exception("The term passed to the function __checkConcept must be a string: " + str(type(term)))
syntax = Syntax()
#~ The term can't be a reserved keyword
if term in syntax.keywords:
return False
if not term in conceptList:
return False
return True
def __checkRole(self, term, roleList):
#~ Checks if term is a valid role.
#~ term must not be a reserved keyword,
#~ and must be in roleList
#~ Returns True if it is valid, False otherwise
if not isinstance(term, str):
raise Exception("The term passed to the function __checkRole must be a string: " + str(type(term)))
syntax = Syntax()
#~ The term can't be a reserved keyword
if term in syntax.keywords:
return False
if not term in roleList:
return False
return True
def __checkIndividual(self, term, individualList):
#~ Checks if term is a valid individual.
#~ term must not be a reserved keyword,
#~ and must be in individualList
#~ Returns True if it is valid, False otherwise
if not isinstance(term, str):
raise Exception("The term passed to the function __checkIndividual must be a string: " + str(type(term)))
syntax = Syntax()
#~ The term can't be a reserved keyword
if term in syntax.keywords:
return False
if not term in individualList:
return False
return True
def __parseAssertion(self, assertionToParse, conceptList, roleList, individualList):
syntax = Syntax()
#~ If assertionToParse is a string, we parse it by considering the parentheses that enclose every expression
#~ and analyse the resulting pyparsing.ParseResults.
#~ If assertionToParse is already a pyparsing.ParseResults (or a list, or a tuple), then we analyse it directly.
if isinstance(assertionToParse, str):
raise Exception("Sorry, at the moment we can't parse assertions that are passed as a string.\n" + assertionToParse)
#~ If assertionToParse is either a ParseResults, tuple, or list, then it must have
#~ 2 or 3 elements, and all of them must be strings.
#~ Also, if it has 2 elements, then the first element must be in conceptList (if specified),
#~ while if it has 3 elements, it must be in roleList (if specified)
if len(assertionToParse) < 2 or len(assertionToParse) > 3:
raise Exception("An assertion must be formed of 2 elements (in case of an assertion involving a concept), or 3 (in case of a role).\n" + \
"The following assertion has instead " + str(len(assertionToParse)) + " elements:\n" + \
str(assertionToParse))
if not all(isinstance(element, str) for element in assertionToParse):
raise Exception("The elements composing an assertion to parse must all be strings.\n" + str(assertionToParse))
if len(assertionToParse) == 2:
if not self.__checkConcept(assertionToParse[0], conceptList):
raise Exception("The term used in the assertion is not a valid concept.\n" + str(assertionToParse))
if not self.__checkIndividual(assertionToParse[1], individualList):
raise Exception("The individual used in the assertion is not a valid individual.\n" + str(assertionToParse))
#~ Save the elements
self.__term = assertionToParse[0]
self.__individual1 = assertionToParse[1]
elif len(assertionToParse) == 3:
if not self.__checkRole(assertionToParse[0], roleList):
raise Exception("The term used in the assertion is not a valid role.\n" + str(assertionToParse))
if not self.__checkIndividual(assertionToParse[1], individualList):
raise Exception("The individual used in the assertion is not a valid individual.\n" + str(assertionToParse))
if not self.__checkIndividual(assertionToParse[2], individualList):
raise Exception("The individual used in the assertion is not a valid individual.\n" + str(assertionToParse))
#~ Save the elements
self.__term = assertionToParse[0]
self.__individual1 = assertionToParse[1]
self.__individual2 = assertionToParse[2]
else:
#~ Something is wrong.
raise Exception("Something went wrong with the analysis of the following assertion.\n" + str(assertionToParse))
def term(self):
return self.__term
def individual1(self):
return self.__individual1
def individual2(self):
return self.__individual2
def __eq__(self, other):
if not isinstance(other, Assertion):
return False
if other.term() != self.__term:
return False
if other.individual1() != self.__individual1:
return False
if other.individual2() != self.__individual2:
return False
return True
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
if self.__individual2 is None:
return hash(str(type(self)) + self.__term + self.__individual1)
else:
return hash(str(type(self)) + self.__term + self.__individual1 + self.__individual2)
def __repr__(self):
return self.toADL()
def toADL(self, indentLevel = 0):
if not isinstance(indentLevel, int) or indentLevel < 0:
raise Exception("The parameter indentLevel must be a non-negative integer.")
def __indent(indentLevel):
syntax = Syntax()
return syntax.indent*indentLevel
assertionString = __indent(indentLevel) + "(" + str(self.__term) + " " + str(self.__individual1)
if not self.__individual2 is None:
assertionString += " " + str(self.__individual2)
assertionString += ")\n"
return assertionString
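
The Assertion class above dispatches on length: a 2-element input is a concept assertion, a 3-element input is a role assertion. A minimal standalone sketch of that dispatch logic, independent of the Syntax and pyparsing machinery (the function name and return tuples are illustrative, not part of the class):

```python
def classify_assertion(assertion, concepts, roles, individuals):
    # Mirror of __parseAssertion's length-based dispatch:
    # 2 elements -> (concept individual), 3 elements -> (role ind1 ind2)
    if not all(isinstance(el, str) for el in assertion):
        raise ValueError("all elements must be strings")
    if len(assertion) == 2:
        concept, ind1 = assertion
        if concept not in concepts or ind1 not in individuals:
            raise ValueError("invalid concept assertion")
        return ("concept", concept, ind1)
    if len(assertion) == 3:
        role, ind1, ind2 = assertion
        if role not in roles or ind1 not in individuals \
                or ind2 not in individuals:
            raise ValueError("invalid role assertion")
        return ("role", role, ind1, ind2)
    raise ValueError("an assertion must have 2 or 3 elements")

print(classify_assertion(("Student", "john"),
                         ["Student"], ["knows"], ["john", "mary"]))
# → ('concept', 'Student', 'john')
print(classify_assertion(("knows", "john", "mary"),
                         ["Student"], ["knows"], ["john", "mary"]))
# → ('role', 'knows', 'john', 'mary')
```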
| 37.229226 | 187 | 0.660317 | 3,519 | 25,986 | 4.746519 | 0.05655 | 0.02766 | 0.053942 | 0.038975 | 0.762677 | 0.732084 | 0.717296 | 0.708615 | 0.688858 | 0.662576 | 0 | 0.013589 | 0.224044 | 25,986 | 697 | 188 | 37.28264 | 0.814769 | 0.169861 | 0 | 0.518617 | 0 | 0.00266 | 0.237889 | 0 | 0 | 0 | 0 | 0 | 0.103723 | 1 | 0.098404 | false | 0.018617 | 0.005319 | 0.045213 | 0.236702 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
e03f92d0b6c1fac329ba044f63cbbb3c443e0e1e | 227 | py | Python | backend/mainContent/serializers.py | mariaeid/Examensarbete | 3b04d404f399ecd5e9e0011bbbe81e715d45fa62 | [
"MIT"
] | null | null | null | backend/mainContent/serializers.py | mariaeid/Examensarbete | 3b04d404f399ecd5e9e0011bbbe81e715d45fa62 | [
"MIT"
] | 14 | 2020-06-06T00:21:31.000Z | 2022-03-12T00:10:08.000Z | backend/mainContent/serializers.py | mariaeid/Examensarbete | 3b04d404f399ecd5e9e0011bbbe81e715d45fa62 | [
"MIT"
] | null | null | null | from rest_framework import serializers
from .models import MainContent
# Render the Python class to JSON
class MainContentSerializer(serializers.ModelSerializer):
class Meta:
model = MainContent
fields = ('__all__')
| 25.222222 | 57 | 0.784141 | 25 | 227 | 6.92 | 0.76 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15859 | 227 | 8 | 58 | 28.375 | 0.905759 | 0.136564 | 0 | 0 | 0 | 0 | 0.036082 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
e054274943ad39793c8a70825a0c4129dc557fa9 | 5,878 | py | Python | src/helpers/foremastbrainhelper_test.py | sandeepbhojwani/foremast-brain | b083ea08c0506517ede8501b9ad44408e89afdc6 | [
"Apache-2.0"
] | null | null | null | src/helpers/foremastbrainhelper_test.py | sandeepbhojwani/foremast-brain | b083ea08c0506517ede8501b9ad44408e89afdc6 | [
"Apache-2.0"
] | null | null | null | src/helpers/foremastbrainhelper_test.py | sandeepbhojwani/foremast-brain | b083ea08c0506517ede8501b9ad44408e89afdc6 | [
"Apache-2.0"
] | null | null | null | #import sys
#sys.path.append('../')
import json
def getBaselinejson():
json = {
"status": "success",
"data": {
"resultType": "matrix",
"result": [
{
"metric": {
"__name__": "namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate",
"container_name": "sample-metrics-app",
"namespace": "default",
"pod_name": "sample-metrics-app-5f67fcbc57-wvf22"
},
"values": [
[
1539120304.467,
"0.00006977266768511138"
],
[
1539120318.467,
"0.00006977266768511138"
],
[
1539120332.467,
"0.00010295167661433635"
]
]
},
{
"metric": {
"__name__": "namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate",
"container_name": "sample-metrics-app",
"namespace": "default",
"pod_name": "sample-metrics-app-5f67fcbc57-wvf23"
},
"values": [
[
1539120304.467,
"0.00006977266768511138"
],
[
1539120318.467,
"0.00006977266768511138"
],
[
1539120332.467,
"0.00010295167661433635"
]
]
}
]
}
}
return json
def getCurrentjson():
json = {
"status": "success",
"data": {
"resultType": "matrix",
"result": [
{
"metric": {
"__name__": "namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate",
"container_name": "sample-metrics-app",
"namespace": "default",
"pod_name": "sample-metrics-app-5f67fcbc57-wvf22"
},
"values": [
[
1539120304.467,
"3.00006977266768511138"
],
[
1539120318.467,
"2.00006977266768511138"
],
[
1539120332.467,
"5.00010295167661433635"
]
]
},
{
"metric": {
"__name__": "namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate",
"container_name": "sample-metrics-app",
"namespace": "default",
"pod_name": "sample-metrics-app-5f67fcbc57-wvf23"
},
"values": [
[
1539120304.467,
"3.00006977266768511138"
],
[
1539120318.467,
"2.00006977266768511138"
],
[
1539120332.467,
"5.00010295167661433635"
]
]
}
]
}
}
return json
def getHistoricaljson():
json = {
"status": "success",
"data": {
"resultType": "matrix",
"result": [
{
"metric": {
"__name__": "namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate",
"container_name": "sample-metrics-app",
"namespace": "default",
"pod_name": "sample-metrics-app-5f67fcbc57-wvf22"
},
"values": [
[
1539120304.467,
"0.0"
],
[
1539120318.467,
"0.10006977266768511138"
],
[
1539120332.467,
"0.10010295167661433635"
],
[
1539120346.467,
"0.10010295167661433635"
],
[
1539120360.467,
"0.10012437188560052037"
],
[
1539120374.467,
"0.10012437188560052037"
],
[
1539120388.467,
"0.10014702001258933535"
],
[
1539120402.467,
"0.10014702001258933535"
],
[
1539120416.467,
"0.10017783671051131245"
],
[
1539120430.467,
"0.30017783671051131245"
],
[
1539120444.467,
"0.00020147554258429564"
],
[
1539120458.467,
"0.10020147554258429564"
],
[
1539120472.467,
"0.1002256311930242375"
],
[
1539120486.467,
"0.1002256311930242375"
],
[
1539120500.467,
"0.10025112402336448713"
],
[
1539120514.467,
"0.10025112402336448713"
],
[
1539120528.467,
"0.10025112402336448713"
],
[
1539120542.467,
"0.10027547633703703785"
],
[
1539120556.467,
"0.10027547633703703785"
],
[
1539120570.467,
"0.1002703406037037029"
],
[
1539120584.467,
"0.1002703406037037029"
]
]
}
]
}
}
return json
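
The helpers above return Prometheus-style range-query payloads, where each series in `data.result` carries `(timestamp, value-as-string)` pairs. A small sketch of pulling the numeric samples out of such a payload (the function name and the shortened payload below are ours, not part of the test module):

```python
def extract_samples(payload):
    # Flatten a Prometheus "matrix" result into {pod_name: [floats]}.
    out = {}
    for series in payload["data"]["result"]:
        pod = series["metric"].get("pod_name", "unknown")
        out[pod] = [float(value) for _, value in series["values"]]
    return out

payload = {
    "status": "success",
    "data": {
        "resultType": "matrix",
        "result": [
            {
                "metric": {"pod_name": "sample-metrics-app-5f67fcbc57-wvf22"},
                "values": [[1539120304.467, "0.5"], [1539120318.467, "1.5"]],
            }
        ],
    },
}
print(extract_samples(payload))
# → {'sample-metrics-app-5f67fcbc57-wvf22': [0.5, 1.5]}
```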
| 26.477477 | 105 | 0.381082 | 317 | 5,878 | 6.829653 | 0.236593 | 0.049885 | 0.078522 | 0.092379 | 0.633256 | 0.633256 | 0.633256 | 0.633256 | 0.633256 | 0.633256 | 0 | 0.402699 | 0.520925 | 5,878 | 221 | 106 | 26.597285 | 0.366122 | 0.005444 | 0 | 0.553991 | 0 | 0 | 0.300308 | 0.21475 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014085 | false | 0 | 0.004695 | 0 | 0.032864 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
e05bce374e420be51c3ba5b8f5abac3704006823 | 135 | py | Python | insertion.py | shiwanibiradar/10day_of_DS | 4b4caa679c680a46c5a1c216214d26052ab3aa60 | [
"MIT"
] | null | null | null | insertion.py | shiwanibiradar/10day_of_DS | 4b4caa679c680a46c5a1c216214d26052ab3aa60 | [
"MIT"
] | null | null | null | insertion.py | shiwanibiradar/10day_of_DS | 4b4caa679c680a46c5a1c216214d26052ab3aa60 | [
"MIT"
] | null | null | null | n=int(input("Enter the no of element"))
i=1
a=[]
for i in range(1,n+1):
b=input("Enter the element")
a.append(b)
print(sorted(a))
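
The script delegates the sorting to the built-in sorted(). An explicit insertion sort, which the filename suggests, could be sketched as follows (a generic helper of our own, not tied to the input loop above):

```python
def insertion_sort(values):
    # Sort a list in place by inserting each item into the sorted prefix.
    for i in range(1, len(values)):
        key = values[i]
        j = i - 1
        while j >= 0 and values[j] > key:
            values[j + 1] = values[j]
            j -= 1
        values[j + 1] = key
    return values

print(insertion_sort([5, 2, 9, 1, 5]))
# → [1, 2, 5, 5, 9]
```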
| 13.5 | 39 | 0.637037 | 29 | 135 | 2.965517 | 0.62069 | 0.232558 | 0.302326 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 0.155556 | 135 | 9 | 40 | 15 | 0.72807 | 0 | 0 | 0 | 0 | 0 | 0.300752 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
e06279f1277fb08454521b62ff04ef3859e15675 | 182 | py | Python | main.py | raihaan123/An_RL_Thing | 9a48ac3ccb21641e9901dd9c97a9f5374b2f54fe | [
"MIT"
] | null | null | null | main.py | raihaan123/An_RL_Thing | 9a48ac3ccb21641e9901dd9c97a9f5374b2f54fe | [
"MIT"
] | null | null | null | main.py | raihaan123/An_RL_Thing | 9a48ac3ccb21641e9901dd9c97a9f5374b2f54fe | [
"MIT"
] | null | null | null | import numpy as np
from agent import Agent
from environment import Environment
environment = Environment()
environment.create_agent()
for i in range(10):
environment.update() | 16.545455 | 35 | 0.78022 | 24 | 182 | 5.875 | 0.583333 | 0.468085 | 0.468085 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012987 | 0.153846 | 182 | 11 | 36 | 16.545455 | 0.902597 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
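
main.py assumes an `environment` module exposing an Environment with create_agent() and update(). A minimal self-contained sketch of that shape (the class bodies here are our assumption; the real agent and environment modules may differ):

```python
class Agent:
    def __init__(self):
        self.steps = 0

    def act(self):
        self.steps += 1


class Environment:
    def __init__(self):
        self.agents = []

    def create_agent(self):
        self.agents.append(Agent())

    def update(self):
        # Advance every registered agent by one step.
        for agent in self.agents:
            agent.act()


env = Environment()
env.create_agent()
for _ in range(10):
    env.update()
print(env.agents[0].steps)
# → 10
```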
0eb69eb5d06604fab82598ce36cbff091e1cd1d9 | 52,258 | py | Python | src/make_plot.py | KevinVolkSTScI/matplotlib-gui-code | a6fa1a091b36d6c61a6de3c67b9c5e7a3fd23059 | [
"MIT"
] | null | null | null | src/make_plot.py | KevinVolkSTScI/matplotlib-gui-code | a6fa1a091b36d6c61a6de3c67b9c5e7a3fd23059 | [
"MIT"
] | null | null | null | src/make_plot.py | KevinVolkSTScI/matplotlib-gui-code | a6fa1a091b36d6c61a6de3c67b9c5e7a3fd23059 | [
"MIT"
] | null | null | null | """
This routine is the main plotting routine. The routine needs to be passed
the matplotlib_user_interface variable ("plotgui") in which the data
values and options (plus the plot variables) are stored.
Routines:
make_plot The main plottine routine, which reads many options and applies
them as needed.
"""
import numpy
import matplotlib
from matplotlib.collections import PatchCollection
from matplotlib.patches import Rectangle, Ellipse, FancyArrow
from matplotlib.ticker import MultipleLocator
import general_utilities
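
make_plot later calls general_utilities.hybrid_transform to map data onto a "hybrid log" axis. Its implementation is not shown in this file; one common convention (an assumption on our part, not necessarily what general_utilities does) keeps small values linear and compresses larger magnitudes logarithmically while preserving sign:

```python
import numpy

def hybrid_log(values, threshold=10.0):
    # Hypothetical hybrid-log mapping: linear inside +/- threshold,
    # sign-preserving log10 compression outside it.  This is a guess at
    # the behaviour of general_utilities.hybrid_transform, not its code.
    values = numpy.asarray(values, dtype=float)
    out = numpy.copy(values)
    mask = numpy.abs(values) > threshold
    out[mask] = numpy.sign(values[mask]) * threshold * (
        1.0 + numpy.log10(numpy.abs(values[mask]) / threshold))
    return out

print(hybrid_log([1.0, 100.0, -1000.0]))
```

With threshold 10, a value of 100 maps to 20 and -1000 maps to -30, so wide dynamic ranges stay readable on one axis.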
def make_plot(plotgui):
"""
Read the parameters and the data sets, and produce the plot.
This is the main plotting routine where the graphs are produced. It
looks at the (many) parameters to determine how the plot should be
made.
Parameters
----------
plotgui: a matplotlib_user_interface object for the plot
Returns
-------
None
"""
myfont = {'family': plotgui.fontname[plotgui.current_plot-1],
'weight': plotgui.fontweight[plotgui.current_plot-1],
'size': plotgui.fontsize[plotgui.current_plot-1]}
matplotlib.rc('font', **myfont)
plotgui.subplot[plotgui.current_plot-1].clear()
if plotgui.hide_subplot[plotgui.current_plot-1]:
plotgui.subplot[plotgui.current_plot-1].axis('off')
logxflag = plotgui.xparameters[plotgui.current_plot-1]['logarithmic']
logyflag = plotgui.yparameters[plotgui.current_plot-1]['logarithmic']
hlogxflag = plotgui.xparameters[plotgui.current_plot-1]['hybridlog']
hlogyflag = plotgui.yparameters[plotgui.current_plot-1]['hybridlog']
invertxflag = plotgui.xparameters[plotgui.current_plot-1]['invert']
invertyflag = plotgui.yparameters[plotgui.current_plot-1]['invert']
hidexflag = plotgui.xparameters[plotgui.current_plot-1]['hide']
hideyflag = plotgui.yparameters[plotgui.current_plot-1]['hide']
hidexticksflag = plotgui.xparameters[plotgui.current_plot-1]['hideticks']
hideyticksflag = plotgui.yparameters[plotgui.current_plot-1]['hideticks']
hidexlabelsflag = plotgui.xparameters[plotgui.current_plot-1]['hidelabels']
hideylabelsflag = plotgui.yparameters[plotgui.current_plot-1]['hidelabels']
inversexticksflag = \
plotgui.xparameters[plotgui.current_plot-1]['inverseticks']
inverseyticksflag = \
plotgui.yparameters[plotgui.current_plot-1]['inverseticks']
bothxticksflag = plotgui.xparameters[plotgui.current_plot-1]['bothticks']
bothyticksflag = plotgui.yparameters[plotgui.current_plot-1]['bothticks']
oppositexflag = plotgui.xparameters[plotgui.current_plot-1]['oppositeaxis']
oppositeyflag = plotgui.yparameters[plotgui.current_plot-1]['oppositeaxis']
majorxgridlinesflag = plotgui.xparameters[plotgui.current_plot-1][
'majorgridlines']
minorxgridlinesflag = plotgui.xparameters[plotgui.current_plot-1][
'minorgridlines']
majorygridlinesflag = plotgui.yparameters[plotgui.current_plot-1][
'majorgridlines']
minorygridlinesflag = plotgui.yparameters[plotgui.current_plot-1][
'minorgridlines']
gridoptionx = minorxgridlinesflag + 2*majorxgridlinesflag
gridoptiony = minorygridlinesflag + 2*majorygridlinesflag
gridflags1 = [[False, 'both', 'x'], [True, 'minor', 'x'],
[True, 'major', 'x'], [True, 'both', 'x']]
gridflags2 = [[False, 'both', 'y'], [True, 'minor', 'y'],
[True, 'major', 'y'], [True, 'both', 'y']]
try:
xminorticks = \
float(plotgui.xparameters[plotgui.current_plot-1]['minorticks'])
except ValueError:
xminorticks = 0.0
try:
yminorticks = \
float(plotgui.yparameters[plotgui.current_plot-1]['minorticks'])
except ValueError:
yminorticks = 0.0
for loop in range(plotgui.nsets):
if (plotgui.set_properties[loop]['display']) \
and (plotgui.set_properties[loop]['plot'] == plotgui.current_plot):
if plotgui.set_properties[loop]['errors']:
xerrors = numpy.zeros((2, len(plotgui.xdata[loop]['values'])),
dtype=numpy.float32)
xerrors[0, :] = numpy.copy(plotgui.xdata[loop]['lowerror'])
xerrors[1, :] = numpy.copy(plotgui.xdata[loop]['higherror'])
yerrors = numpy.zeros((2, len(plotgui.xdata[loop]['values'])),
dtype=numpy.float32)
yerrors[0, :] = numpy.copy(plotgui.ydata[loop]['lowerror'])
yerrors[1, :] = numpy.copy(plotgui.ydata[loop]['higherror'])
try:
if plotgui.set_properties[loop]['symbol'] == 'histogram':
barwidth = plotgui.xdata[loop]['values'][1] - \
plotgui.xdata[loop]['values'][0]
xoff = plotgui.xdata[loop]['values'][1:] - \
plotgui.xdata[loop]['values'][0:-1]
xoff = numpy.append(xoff, xoff[-1])
plotgui.subplot[plotgui.current_plot-1].bar(
plotgui.xdata[loop]['values'] - xoff/2.,
plotgui.ydata[loop]['values'],
width=barwidth,
color='white',
edgecolor=plotgui.set_properties[loop]['colour'],
linewidth=1)
elif logyflag == 0 and logxflag == 0 and hlogxflag == 0 \
and hlogyflag == 0:
if plotgui.set_properties[loop]['symbol'] is None:
plotgui.subplot[plotgui.current_plot-1].plot(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
elif plotgui.set_properties[loop]['linestyle'] is None:
plotgui.subplot[plotgui.current_plot-1].plot(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
marker=plotgui.set_properties[loop]['symbol'],
linestyle='none', markersize=
plotgui.set_properties[loop]['symbolsize'])
else:
plotgui.subplot[plotgui.current_plot-1].plot(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
marker=plotgui.set_properties[loop]['symbol'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
markersize=
plotgui.set_properties[loop]['symbolsize'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
if plotgui.set_properties[loop]['errors']:
plotgui.subplot[plotgui.current_plot-1].errorbar(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'], yerrors, xerrors,
fmt='None',
ecolor=plotgui.set_properties[loop]['colour'],
elinewidth=
plotgui.set_properties[loop]['linewidth'])
if xminorticks > 0:
plotgui.subplot[plotgui.current_plot-1].xaxis.\
set_minor_locator(MultipleLocator(xminorticks))
xtl = int(
plotgui.xparameters[
plotgui.current_plot-1]['ticklength'] // 2)
plotgui.subplot[plotgui.current_plot-1].tick_params(
axis='x', which='minor', length=xtl)
if yminorticks > 0:
plotgui.subplot[plotgui.current_plot-1].yaxis.\
set_minor_locator(MultipleLocator(yminorticks))
ytl = int(
plotgui.yparameters[
plotgui.current_plot-1]['ticklength'] // 2)
plotgui.subplot[plotgui.current_plot-1].tick_params(
axis='y', which='minor', length=ytl)
elif hlogyflag == 0 and hlogxflag == 1:
newxvalues = general_utilities.hybrid_transform(
plotgui.xdata[loop]['values'])
if logyflag == 0:
if plotgui.set_properties[loop]['symbol'] is None:
plotgui.subplot[plotgui.current_plot-1].plot(
newxvalues, plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
elif plotgui.set_properties[loop]['linestyle'] is \
None:
plotgui.subplot[plotgui.current_plot-1].plot(
newxvalues, plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
marker=
plotgui.set_properties[loop]['symbol'],
linestyle='none',
markersize=
plotgui.set_properties[loop]['symbolsize'])
else:
plotgui.subplot[plotgui.current_plot-1].plot(
newxvalues, plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
marker=
plotgui.set_properties[loop]['symbol'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
markersize=
plotgui.set_properties[loop]['symbolsize'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
if yminorticks > 0:
plotgui.subplot[plotgui.current_plot-1].yaxis.\
set_minor_locator(
MultipleLocator(yminorticks))
ytl = int(
plotgui.yparameters[plotgui.current_plot-1]
['ticklength'] // 2)
plotgui.subplot[plotgui.current_plot-1].\
tick_params(axis='y', which='minor',
length=ytl)
else:
if plotgui.set_properties[loop]['symbol'] is None:
plotgui.subplot[plotgui.current_plot-1].semilogy(
newxvalues, plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
elif plotgui.set_properties[loop]['linestyle'] is \
None:
plotgui.subplot[
plotgui.current_plot-1].semilogy(
newxvalues, plotgui.ydata[loop]['values'],
color=
plotgui.set_properties[loop]['colour'],
marker=
plotgui.set_properties[loop]['symbol'],
linestyle='none', markersize=
plotgui.set_properties[loop]['symbolsize']
)
else:
plotgui.subplot[plotgui.current_plot-1].semilogy(
newxvalues, plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
marker=
plotgui.set_properties[loop]['symbol'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
markersize=
plotgui.set_properties[loop]['symbolsize'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
elif hlogyflag == 1 and hlogxflag == 0:
newyvalues = general_utilities.hybrid_transform(
plotgui.ydata[loop]['values'])
if logxflag == 0:
if plotgui.set_properties[loop]['symbol'] is None:
plotgui.subplot[plotgui.current_plot-1].plot(
plotgui.xdata[loop]['values'], newyvalues,
color=plotgui.set_properties[loop]['colour'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
elif plotgui.set_properties[loop]['linestyle'] is \
None:
plotgui.subplot[plotgui.current_plot-1].plot(
plotgui.xdata[loop]['values'], newyvalues,
color=plotgui.set_properties[loop]['colour'],
marker=
plotgui.set_properties[loop]['symbol'],
linestyle='none',
markersize=
plotgui.set_properties[loop]['symbolsize'])
else:
plotgui.subplot[plotgui.current_plot-1].plot(
plotgui.xdata[loop]['values'], newyvalues,
color=plotgui.set_properties[loop]['colour'],
marker=
plotgui.set_properties[loop]['symbol'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
markersize=
plotgui.set_properties[loop]['symbolsize'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
if xminorticks > 0:
plotgui.subplot[plotgui.current_plot-1].xaxis.\
set_minor_locator(MultipleLocator(
xminorticks))
xtl = int(
plotgui.xparameters[
plotgui.current_plot-1]['ticklength']
// 2)
plotgui.subplot[plotgui.current_plot-1].tick_params(
axis='x', which='minor', length=xtl)
else:
if plotgui.set_properties[loop]['symbol'] is None:
plotgui.subplot[plotgui.current_plot-1].semilogx(
plotgui.xdata[loop]['values'],
newyvalues,
color=plotgui.set_properties[loop]['colour'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
elif plotgui.set_properties[loop]['linestyle'] is \
None:
plotgui.subplot[plotgui.current_plot-1].semilogx(
plotgui.xdata[loop]['values'],
newyvalues,
color=plotgui.set_properties[loop]['colour'],
marker=plotgui.set_properties[loop]['symbol'],
linestyle='none',
markersize=
plotgui.set_properties[loop]['symbolsize'])
else:
plotgui.subplot[plotgui.current_plot-1].semilogx(
plotgui.xdata[loop]['values'],
newyvalues,
color=plotgui.set_properties[loop]['colour'],
marker=
plotgui.set_properties[loop]['symbol'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
markersize=
plotgui.set_properties[loop]['symbolsize'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
elif hlogyflag == 1 and hlogxflag == 1:
newxvalues = general_utilities.hybrid_transform(
plotgui.xdata[loop]['values'])
newyvalues = general_utilities.hybrid_transform(
plotgui.ydata[loop]['values'])
if plotgui.set_properties[loop]['symbol'] is None:
plotgui.subplot[plotgui.current_plot-1].plot(
newxvalues, newyvalues,
color=plotgui.set_properties[loop]['colour'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
elif plotgui.set_properties[loop]['linestyle'] is None:
plotgui.subplot[plotgui.current_plot-1].plot(
newxvalues, newyvalues,
color=plotgui.set_properties[loop]['colour'],
marker=plotgui.set_properties[loop]['symbol'],
linestyle='none',
markersize=
plotgui.set_properties[loop]['symbolsize'])
else:
plotgui.subplot[plotgui.current_plot-1].plot(
newxvalues, newyvalues,
color=plotgui.set_properties[loop]['colour'],
marker=plotgui.set_properties[loop]['symbol'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
markersize=
plotgui.set_properties[loop]['symbolsize'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
elif logyflag == 0 and logxflag == 1 and hlogxflag == 0:
if plotgui.set_properties[loop]['symbol'] is None:
plotgui.subplot[plotgui.current_plot-1].semilogx(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
elif plotgui.set_properties[loop]['linestyle'] is None:
plotgui.subplot[plotgui.current_plot-1].semilogx(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
marker=plotgui.set_properties[loop]['symbol'],
linestyle='None',
markersize=
plotgui.set_properties[loop]['symbolsize'])
else:
plotgui.subplot[plotgui.current_plot-1].semilogx(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
marker=plotgui.set_properties[loop]['symbol'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
markersize=
plotgui.set_properties[loop]['symbolsize'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
if plotgui.set_properties[loop]['errors']:
plotgui.subplot[plotgui.current_plot-1].errorbar(
plotgui.xdata[loop]['values'],
                    plotgui.ydata[loop]['values'], yerrors, xerrors,
                    fmt='none', ecolor=
plotgui.set_properties[loop]['colour'],
elinewidth=
plotgui.set_properties[loop]['linewidth'])
if yminorticks > 0:
plotgui.subplot[plotgui.current_plot-1].\
yaxis.set_minor_locator(
MultipleLocator(yminorticks))
ytl = int(
                    plotgui.yparameters[plotgui.current_plot-1][
'ticklength'] // 2)
plotgui.subplot[plotgui.current_plot-1].tick_params(
axis='y', which='minor', length=ytl)
elif logxflag == 0 and logyflag == 1 and hlogyflag == 0:
if plotgui.set_properties[loop]['symbol'] is None:
plotgui.subplot[plotgui.current_plot-1].semilogy(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
elif plotgui.set_properties[loop]['linestyle'] is None:
plotgui.subplot[plotgui.current_plot-1].semilogy(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
marker=plotgui.set_properties[loop]['symbol'],
linestyle='None',
markersize=
plotgui.set_properties[loop]['symbolsize'])
else:
plotgui.subplot[plotgui.current_plot-1].semilogy(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
marker=plotgui.set_properties[loop]['symbol'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
markersize=
plotgui.set_properties[loop]['symbolsize'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
if plotgui.set_properties[loop]['errors']:
plotgui.subplot[plotgui.current_plot-1].errorbar(
plotgui.xdata[loop]['values'],
                    plotgui.ydata[loop]['values'], yerrors, xerrors,
                    fmt='none',
ecolor=plotgui.set_properties[loop]['colour'],
elinewidth=
plotgui.set_properties[loop]['linewidth'])
if xminorticks > 0:
plotgui.subplot[plotgui.current_plot-1].xaxis.\
set_minor_locator(MultipleLocator(xminorticks))
xtl = int(
plotgui.xparameters[
plotgui.current_plot-1]['ticklength'] // 2)
plotgui.subplot[plotgui.current_plot-1].tick_params(
axis='x', which='minor', length=xtl)
elif logxflag == 1 and logyflag == 1:
if plotgui.set_properties[loop]['symbol'] is None:
plotgui.subplot[plotgui.current_plot-1].loglog(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=
plotgui.set_properties[loop]['colour'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
elif plotgui.set_properties[loop]['linestyle'] is None:
plotgui.subplot[plotgui.current_plot-1].loglog(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
marker=plotgui.set_properties[loop]['symbol'],
linestyle='None',
markersize=
plotgui.set_properties[loop]['symbolsize'])
else:
plotgui.subplot[plotgui.current_plot-1].loglog(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
marker=plotgui.set_properties[loop]['symbol'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
markersize=
plotgui.set_properties[loop]['symbolsize'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
if plotgui.set_properties[loop]['errors']:
plotgui.subplot[plotgui.current_plot-1].errorbar(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
                    yerrors, xerrors, fmt='none',
ecolor=plotgui.set_properties[loop]['colour'],
elinewidth=
plotgui.set_properties[loop]['linewidth'])
else:
                # One should not get here, but if the flags do not
                # correspond to the expected options, use a linear
                # plot as the default.
if plotgui.set_properties[loop]['symbol'] is None:
plotgui.subplot[plotgui.current_plot-1].plot(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
elif plotgui.set_properties[loop]['linestyle'] is None:
plotgui.subplot[plotgui.current_plot-1].plot(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
marker=plotgui.set_properties[loop]['symbol'],
linestyle='none',
markersize=
plotgui.set_properties[loop]['symbolsize'])
else:
plotgui.subplot[plotgui.current_plot-1].plot(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'],
color=plotgui.set_properties[loop]['colour'],
marker=plotgui.set_properties[loop]['symbol'],
linestyle=
plotgui.set_properties[loop]['linestyle'],
markersize=
plotgui.set_properties[loop]['symbolsize'],
linewidth=
plotgui.set_properties[loop]['linewidth'])
if plotgui.set_properties[loop]['errors']:
plotgui.subplot[plotgui.current_plot-1].errorbar(
plotgui.xdata[loop]['values'],
plotgui.ydata[loop]['values'], yerrors, xerrors,
                    fmt='none',
ecolor=plotgui.set_properties[loop]['colour'],
elinewidth=
plotgui.set_properties[loop]['linewidth'])
if xminorticks > 0:
plotgui.subplot[plotgui.current_plot-1].xaxis.\
set_minor_locator(MultipleLocator(xminorticks))
xtl = int(plotgui.xparameters[
plotgui.current_plot-1]['ticklength'] // 2)
plotgui.subplot[plotgui.current_plot-1].tick_params(
axis='x', which='minor', length=xtl)
if yminorticks > 0:
plotgui.subplot[
plotgui.current_plot-1].yaxis.set_minor_locator(
MultipleLocator(yminorticks))
ytl = int(plotgui.yparameters[
plotgui.current_plot-1]['ticklength'] // 2)
plotgui.subplot[plotgui.current_plot-1].tick_params(
axis='y', which='minor', length=ytl)
except Exception:
pass
if bothyticksflag == 1:
plotgui.subplot[plotgui.current_plot-1].tick_params(
left=True, right=True, which='both')
else:
plotgui.subplot[plotgui.current_plot-1].tick_params(
left=True, right=False, which='both')
if bothxticksflag == 1:
plotgui.subplot[plotgui.current_plot-1].tick_params(
bottom=True, top=True, which='both')
else:
plotgui.subplot[plotgui.current_plot-1].tick_params(
bottom=True, top=False, which='both')
for n1 in range(plotgui.number_of_lines):
if plotgui.plot_lines[n1]['plot'] == plotgui.current_plot:
xlvalues = numpy.asarray([plotgui.plot_lines[n1]['xstart'],
plotgui.plot_lines[n1]['xend']])
if hlogxflag == 1:
xlvalues[0] = general_utilities.hybrid_transform(
xlvalues[0])
xlvalues[1] = general_utilities.hybrid_transform(
xlvalues[1])
ylvalues = numpy.asarray([plotgui.plot_lines[n1]['ystart'],
plotgui.plot_lines[n1]['yend']])
if hlogyflag == 1:
ylvalues[0] = general_utilities.hybrid_transform(
ylvalues[0])
ylvalues[1] = general_utilities.hybrid_transform(
ylvalues[1])
plotgui.subplot[plotgui.current_plot-1].plot(
xlvalues, ylvalues,
color=plotgui.plot_lines[n1]['line_colour'],
linestyle=plotgui.plot_lines[n1]['line_type'],
linewidth=plotgui.plot_lines[n1]['line_thickness'])
patches = []
for n1 in range(plotgui.number_of_boxes):
if plotgui.plot_boxes[n1]['plot'] == plotgui.current_plot:
xcorner = plotgui.plot_boxes[n1]['xstart']
ycorner = plotgui.plot_boxes[n1]['ystart']
delx = plotgui.plot_boxes[n1]['xend'] \
- plotgui.plot_boxes[n1]['xstart']
dely = plotgui.plot_boxes[n1]['yend'] \
- plotgui.plot_boxes[n1]['ystart']
if delx < 0.:
xcorner = xcorner + delx
delx = -delx
if dely < 0.:
ycorner = ycorner + dely
dely = -dely
if hlogxflag == 1:
xc1 = xcorner
xc2 = xcorner+delx
xcorner = general_utilities.hybrid_transform(xc1)
delx = general_utilities.hybrid_transform(xc2) - xcorner
if hlogyflag == 1:
yc1 = ycorner
yc2 = ycorner+dely
ycorner = general_utilities.hybrid_transform(yc1)
dely = general_utilities.hybrid_transform(yc2) - ycorner
rect = Rectangle(
(xcorner, ycorner), delx, dely,
angle=plotgui.plot_boxes[n1]['rotation'],
edgecolor=plotgui.plot_boxes[n1]['line_colour'],
linestyle=plotgui.plot_boxes[n1]['line_type'],
linewidth=plotgui.plot_boxes[n1]['line_thickness'],
facecolor=plotgui.plot_boxes[n1]['fill_colour'])
plotgui.subplot[plotgui.current_plot-1].add_artist(rect)
patches.append(rect)
if len(patches) > 0:
        pc = PatchCollection(patches, match_original=True)
plotgui.subplot[plotgui.current_plot-1].add_collection(pc)
patches = []
for n1 in range(plotgui.number_of_ellipses):
if plotgui.plot_ellipses[n1]['plot'] == plotgui.current_plot:
xcenter = plotgui.plot_ellipses[n1]['xposition']
ycenter = plotgui.plot_ellipses[n1]['yposition']
delx = plotgui.plot_ellipses[n1]['major']
dely = plotgui.plot_ellipses[n1]['minor']
if hlogxflag == 1:
                xc1 = xcenter
                xc2 = xcenter+delx
                xcenter = general_utilities.hybrid_transform(xc1)
                delx = general_utilities.hybrid_transform(xc2) - xcenter
if hlogyflag == 1:
                yc1 = ycenter
                yc2 = ycenter+dely
                ycenter = general_utilities.hybrid_transform(yc1)
                dely = general_utilities.hybrid_transform(yc2) - ycenter
ellip = Ellipse(
(xcenter, ycenter), delx, dely,
angle=plotgui.plot_ellipses[n1]['rotation'],
edgecolor=plotgui.plot_ellipses[n1]['line_colour'],
linestyle=plotgui.plot_ellipses[n1]['line_type'],
linewidth=plotgui.plot_ellipses[n1]['line_thickness'],
facecolor=plotgui.plot_ellipses[n1]['fill_colour'])
plotgui.subplot[plotgui.current_plot-1].add_artist(ellip)
patches.append(ellip)
if len(patches) > 0:
        pc = PatchCollection(patches, match_original=True)
plotgui.subplot[plotgui.current_plot-1].add_collection(pc)
patches = []
for n1 in range(plotgui.number_of_vectors):
if plotgui.plot_vectors[n1]['plot'] == plotgui.current_plot:
x1 = plotgui.plot_vectors[n1]['xstart']
y1 = plotgui.plot_vectors[n1]['ystart']
xlength = plotgui.plot_vectors[n1]['xend'] \
- plotgui.plot_vectors[n1]['xstart']
ylength = plotgui.plot_vectors[n1]['yend'] \
- plotgui.plot_vectors[n1]['ystart']
delx = plotgui.plot_vectors[n1]['delx']
dely = plotgui.plot_vectors[n1]['dely']
if hlogxflag == 1:
xs1 = general_utilities.hybrid_transform(x1)
xs2 = general_utilities.hybrid_transform(
plotgui.plot_vectors[n1]['xend'])
xlength = xs2 - xs1
delx = xs2 - general_utilities.hybrid_transform(
plotgui.plot_vectors[n1]['xend']-delx)
if hlogyflag == 1:
ys1 = general_utilities.hybrid_transform(y1)
ys2 = general_utilities.hybrid_transform(
plotgui.plot_vectors[n1]['yend'])
ylength = ys2 - ys1
                dely = ys2 - general_utilities.hybrid_transform(
                    plotgui.plot_vectors[n1]['yend']-dely)
arrow = FancyArrow(
x1, y1, xlength, ylength, head_width=delx,
head_length=dely, fill=plotgui.plot_vectors[n1]['fill'],
edgecolor=plotgui.plot_vectors[n1]['line_colour'],
linestyle=plotgui.plot_vectors[n1]['line_type'],
linewidth=plotgui.plot_vectors[n1]['line_thickness'],
facecolor=plotgui.plot_vectors[n1]['fill_colour'])
plotgui.subplot[plotgui.current_plot-1].add_artist(arrow)
patches.append(arrow)
if len(patches) > 0:
        pc = PatchCollection(patches, match_original=True)
plotgui.subplot[plotgui.current_plot-1].add_collection(pc)
if plotgui.plot_frame[plotgui.current_plot-1] > 0.:
plotgui.subplot[plotgui.current_plot-1].spines['bottom'].set_linewidth(
plotgui.plot_frame[plotgui.current_plot-1])
plotgui.subplot[plotgui.current_plot-1].spines['top'].set_linewidth(
plotgui.plot_frame[plotgui.current_plot-1])
plotgui.subplot[plotgui.current_plot-1].spines['left'].set_linewidth(
plotgui.plot_frame[plotgui.current_plot-1])
plotgui.subplot[plotgui.current_plot-1].spines['right'].set_linewidth(
plotgui.plot_frame[plotgui.current_plot-1])
else:
plotgui.subplot[plotgui.current_plot-1].spines['bottom'].set_linewidth(
0.5)
plotgui.subplot[plotgui.current_plot-1].spines['top'].set_linewidth(
0.5)
plotgui.subplot[plotgui.current_plot-1].spines['left'].set_linewidth(
0.5)
plotgui.subplot[plotgui.current_plot-1].spines['right'].set_linewidth(
0.5)
xisinverted = plotgui.subplot[plotgui.current_plot-1].xaxis_inverted()
yisinverted = plotgui.subplot[plotgui.current_plot-1].yaxis_inverted()
if (invertxflag == 1 and (not xisinverted)) or \
(invertxflag == 0 and xisinverted):
plotgui.subplot[plotgui.current_plot-1].invert_xaxis()
if (invertyflag == 1 and (not yisinverted)) or \
(invertyflag == 0 and yisinverted):
plotgui.subplot[plotgui.current_plot-1].invert_yaxis()
if plotgui.matplotlib_rounding.get():
xmin1, xmax1 = plotgui.subplot[plotgui.current_plot-1].get_xbound()
ymin1, ymax1 = plotgui.subplot[plotgui.current_plot-1].get_ybound()
if plotgui.original_range[plotgui.current_plot-1]:
plotgui.plot_range[plotgui.current_plot-1][0] = xmin1
plotgui.plot_range[plotgui.current_plot-1][1] = xmax1
plotgui.plot_range[plotgui.current_plot-1][2] = ymin1
plotgui.plot_range[plotgui.current_plot-1][3] = ymax1
plotgui.original_range[plotgui.current_plot-1] = False
if hlogxflag == 0:
plotgui.subplot[plotgui.current_plot-1].set_xbound(
plotgui.plot_range[plotgui.current_plot-1][0],
plotgui.plot_range[plotgui.current_plot-1][1])
else:
xrange = numpy.asarray([plotgui.plot_range[plotgui.current_plot-1][0],
plotgui.plot_range[plotgui.current_plot-1][1]],
dtype=numpy.float32)
xrange1 = general_utilities.hybrid_transform(xrange)
plotgui.subplot[plotgui.current_plot-1].set_xbound(xrange1[0],
xrange1[1])
tickmarks, ticklabels = general_utilities.hybrid_labels(xrange1)
plotgui.subplot[plotgui.current_plot-1].set_xticks(tickmarks)
plotgui.subplot[plotgui.current_plot-1].set_xticklabels(ticklabels)
if hlogyflag == 0:
plotgui.subplot[plotgui.current_plot-1].set_ybound(
plotgui.plot_range[plotgui.current_plot-1][2],
plotgui.plot_range[plotgui.current_plot-1][3])
else:
yrange = numpy.asarray([plotgui.plot_range[plotgui.current_plot-1][2],
plotgui.plot_range[plotgui.current_plot-1][3]],
dtype=numpy.float32)
yrange1 = general_utilities.hybrid_transform(yrange)
plotgui.subplot[plotgui.current_plot-1].set_ybound(yrange1[0],
yrange1[1])
tickmarks, ticklabels = general_utilities.hybrid_labels(yrange1)
plotgui.subplot[plotgui.current_plot-1].set_yticks(tickmarks)
plotgui.subplot[plotgui.current_plot-1].set_yticklabels(ticklabels)
plotgui.subplot[plotgui.current_plot-1].set_xlabel(
plotgui.xparameters[plotgui.current_plot-1]['label'],
family=plotgui.fontname[plotgui.current_plot-1],
size=plotgui.fontsize[plotgui.current_plot-1],
weight=plotgui.fontweight[plotgui.current_plot-1])
plotgui.subplot[plotgui.current_plot-1].set_ylabel(
plotgui.yparameters[plotgui.current_plot-1]['label'],
family=plotgui.fontname[plotgui.current_plot-1],
size=plotgui.fontsize[plotgui.current_plot-1],
weight=plotgui.fontweight[plotgui.current_plot-1])
try:
plotgui.subplot[plotgui.current_plot-1].set_title(
plotgui.title[plotgui.current_plot-1],
family=plotgui.fontname[plotgui.current_plot-1],
size=plotgui.fontsize[plotgui.current_plot-1],
weight=plotgui.fontweight[plotgui.current_plot-1])
except Exception:
pass
# Adjust the margins if the plot_margin value is set.
left = plotgui.bounding_box[plotgui.current_plot-1][0] + plotgui.plot_margin
right = plotgui.bounding_box[plotgui.current_plot-1][0] \
+ plotgui.bounding_box[plotgui.current_plot-1][2] - plotgui.plot_margin
bottom = plotgui.bounding_box[plotgui.current_plot-1][1] + plotgui.plot_margin
top = plotgui.bounding_box[plotgui.current_plot-1][1] \
+ plotgui.bounding_box[plotgui.current_plot-1][3] - plotgui.plot_margin
plotgui.subplot[plotgui.current_plot-1].set_position(
[left, bottom, right-left, top-bottom], which='both')
if plotgui.number_of_labels > 0:
for loop in range(plotgui.number_of_labels):
if plotgui.current_plot == plotgui.plot_labels[loop]['plot']:
xpos1 = plotgui.plot_labels[loop]['xposition']
if plotgui.xparameters[plotgui.current_plot-1]['hybridlog'] == 1:
xpos1 = general_utilities.hybrid_transform(xpos1)
ypos1 = plotgui.plot_labels[loop]['yposition']
                if plotgui.yparameters[plotgui.current_plot-1]['hybridlog'] == 1:
ypos1 = general_utilities.hybrid_transform(ypos1)
plotgui.subplot[plotgui.current_plot-1].text(
xpos1, ypos1,
plotgui.plot_labels[loop]['labelstring'],
{'color': plotgui.plot_labels[loop]['colour'],
'fontsize': plotgui.plot_labels[loop]['size'],
'fontfamily': plotgui.plot_labels[loop]['font'],
'fontweight': plotgui.plot_labels[loop]['fontweight']})
try:
legend_flag = plotgui.legend_variable[plotgui.current_plot-1].get()
except Exception:
legend_flag = 0
if legend_flag:
plotgui.generate_legend(None)
legend_option = plotgui.legend_options[plotgui.current_plot-1].get()
legend_position = None
if legend_option == 'user':
try:
str1 = plotgui.legend_position_field.get()
values = str1.split()
if len(values) == 2:
xlpos = float(values[0])
ylpos = float(values[1])
legend_position = [xlpos, ylpos]
plotgui.legend_user_position[plotgui.current_plot-1] = \
legend_position
except ValueError:
legend_option = 'best'
else:
legend_option = plotgui.legend_position[plotgui.current_plot-1]
if legend_option is None:
legend_position = 'best'
try:
legend_frame = plotgui.legend_frame[plotgui.current_plot-1].get()
except Exception:
legend_frame = 0
if legend_position is None:
plotgui.subplot[plotgui.current_plot-1].legend(
handles=plotgui.legend_handles[plotgui.current_plot-1],
labels=plotgui.legend_labels[plotgui.current_plot-1],
loc=legend_option, frameon=legend_frame)
else:
plotgui.subplot[plotgui.current_plot-1].legend(
handles=plotgui.legend_handles[plotgui.current_plot-1],
labels=plotgui.legend_labels[plotgui.current_plot-1],
loc=legend_position, frameon=legend_frame)
if oppositexflag == 1:
plotgui.subplot[plotgui.current_plot-1].get_xaxis().set_ticks_position(
"top")
plotgui.subplot[plotgui.current_plot-1].get_xaxis().set_label_position(
"top")
if oppositeyflag == 1:
plotgui.subplot[plotgui.current_plot-1].get_yaxis().set_ticks_position(
"right")
plotgui.subplot[plotgui.current_plot-1].get_yaxis().set_label_position(
"right")
if inversexticksflag == 1:
plotgui.subplot[plotgui.current_plot-1].tick_params(
axis='x', direction='in', length=
plotgui.xparameters[plotgui.current_plot-1]['ticklength'])
plotgui.subplot[plotgui.current_plot-1].tick_params(
axis='x', direction='in', which='minor')
else:
plotgui.subplot[plotgui.current_plot-1].tick_params(
axis='x', direction='out', length=
plotgui.xparameters[plotgui.current_plot-1]['ticklength'])
plotgui.subplot[plotgui.current_plot-1].tick_params(
axis='x', direction='out', which='minor')
if inverseyticksflag == 1:
plotgui.subplot[plotgui.current_plot-1].tick_params(
axis='y', direction='in', length=
plotgui.yparameters[plotgui.current_plot-1]['ticklength'])
plotgui.subplot[plotgui.current_plot-1].tick_params(axis='y',
direction='in',
which='minor')
else:
plotgui.subplot[plotgui.current_plot-1].tick_params(
axis='y', direction='out', length=
plotgui.yparameters[plotgui.current_plot-1]['ticklength'])
plotgui.subplot[plotgui.current_plot-1].tick_params(axis='y',
direction='out',
which='minor')
if plotgui.grid_linetype[plotgui.current_plot-1] is None:
stylestring = 'None'
else:
stylestring = plotgui.grid_linetype[plotgui.current_plot-1]
if plotgui.grid_colour[plotgui.current_plot-1] is None:
colourvalue = 'black'
else:
colourvalue = plotgui.grid_colour[plotgui.current_plot-1]
if (gridoptionx == 0) and (gridoptiony == 0):
        plotgui.subplot[plotgui.current_plot-1].grid(visible=False, axis='both',
                                                     which='both')
else:
if gridoptionx == 0:
            plotgui.subplot[plotgui.current_plot-1].grid(visible=False, axis='x',
                                                         which='both')
else:
plotgui.subplot[plotgui.current_plot-1].grid(
                visible=gridflags1[gridoptionx][0],
which=gridflags1[gridoptionx][1],
axis=gridflags1[gridoptionx][2], linestyle=stylestring,
color=colourvalue)
if gridoptiony == 0:
            plotgui.subplot[plotgui.current_plot-1].grid(visible=False, axis='y',
                                                         which='both')
else:
plotgui.subplot[plotgui.current_plot-1].grid(
                visible=gridflags2[gridoptiony][0],
which=gridflags2[gridoptiony][1],
axis=gridflags2[gridoptiony][2], linestyle=stylestring,
color=colourvalue)
if hidexticksflag == 1:
plotgui.subplot[plotgui.current_plot-1].get_xaxis().set_ticks([])
if hideyticksflag == 1:
plotgui.subplot[plotgui.current_plot-1].get_yaxis().set_ticks([])
if hidexlabelsflag == 1:
plotgui.subplot[plotgui.current_plot-1].get_xaxis().set_ticklabels([])
if hideylabelsflag == 1:
plotgui.subplot[plotgui.current_plot-1].get_yaxis().set_ticklabels([])
if hidexflag == 1:
plotgui.subplot[plotgui.current_plot-1].get_xaxis().set_visible(False)
else:
plotgui.subplot[plotgui.current_plot-1].get_xaxis().set_visible(True)
if hideyflag == 1:
plotgui.subplot[plotgui.current_plot-1].get_yaxis().set_visible(False)
else:
plotgui.subplot[plotgui.current_plot-1].get_yaxis().set_visible(True)
if plotgui.equal_aspect[plotgui.current_plot-1]:
plotgui.figure.gca().set_aspect('equal', adjustable='box')
else:
plotgui.figure.gca().set_aspect('auto')
plotgui.canvas.draw()
    # The following is duplicate information, but it may be used to set
    # the tick intervals.
if not plotgui.matplotlib_rounding.get():
plotgui.xparameters[plotgui.current_plot-1]['minimum'] = \
plotgui.plot_range[plotgui.current_plot-1][0]
plotgui.xparameters[plotgui.current_plot-1]['maximum'] = \
plotgui.plot_range[plotgui.current_plot-1][1]
plotgui.yparameters[plotgui.current_plot-1]['minimum'] = \
plotgui.plot_range[plotgui.current_plot-1][2]
plotgui.yparameters[plotgui.current_plot-1]['maximum'] = \
plotgui.plot_range[plotgui.current_plot-1][3]
else:
xr1, xr2 = plotgui.subplot[plotgui.current_plot-1].get_xbound()
yr1, yr2 = plotgui.subplot[plotgui.current_plot-1].get_ybound()
plotgui.xparameters[plotgui.current_plot-1]['minimum'] = xr1
plotgui.xparameters[plotgui.current_plot-1]['maximum'] = xr2
plotgui.yparameters[plotgui.current_plot-1]['minimum'] = yr1
plotgui.yparameters[plotgui.current_plot-1]['maximum'] = yr2
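The plotting branches above repeat the same three-way choice (line only, symbols only, or both) for every combination of the axis flags. That selection could be collapsed into one small helper that builds the keyword arguments for the `plot`/`semilogx`/`semilogy`/`loglog` call; this is a sketch, not part of the original code, and the `line_kwargs` name and the shape of the `set_properties` entry are assumptions inferred from the calls above:

```python
def line_kwargs(properties):
    """Build keyword arguments for Axes.plot/semilogx/semilogy/loglog
    from one set_properties entry, collapsing the repeated three-way
    symbol/linestyle branching used in the code above."""
    kwargs = {'color': properties['colour']}
    if properties['symbol'] is None:
        # line only
        kwargs['linestyle'] = properties['linestyle']
        kwargs['linewidth'] = properties['linewidth']
    elif properties['linestyle'] is None:
        # symbols only
        kwargs['marker'] = properties['symbol']
        kwargs['linestyle'] = 'none'
        kwargs['markersize'] = properties['symbolsize']
    else:
        # both line and symbols
        kwargs['marker'] = properties['symbol']
        kwargs['linestyle'] = properties['linestyle']
        kwargs['markersize'] = properties['symbolsize']
        kwargs['linewidth'] = properties['linewidth']
    return kwargs
```

Each call site would then reduce to something like `plotgui.subplot[plotgui.current_plot-1].semilogy(xvalues, yvalues, **line_kwargs(plotgui.set_properties[loop]))`.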
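A note on the `PatchCollection` usage above: by default a `PatchCollection` discards each patch's own colours and line styles in favour of the collection defaults, so the styled `Rectangle`/`Ellipse`/`FancyArrow` patches come back with matplotlib's defaults unless `match_original=True` is passed. A minimal sketch (the colours here are illustrative, not from the original code):

```python
from matplotlib.patches import Rectangle
from matplotlib.collections import PatchCollection

rect = Rectangle((0.1, 0.1), 0.5, 0.3, edgecolor='red', facecolor='none')

# Default construction: the collection applies its own default styling.
pc_default = PatchCollection([rect])

# match_original=True keeps each patch's own edge/face colours.
pc_matched = PatchCollection([rect], match_original=True)
print(pc_matched.get_edgecolor()[0])  # RGBA row for 'red'
```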
0ec9074f9f8cd0068885ab0e36d8ac04d386927b | 26 | py | Python | 基础教程/A1-Python与基础知识/算法第一步/ExampleCodes/chapter07/7-1.py | microsoft/ai-edu | 2f59fa4d3cf19f14e0b291e907d89664bcdc8df3 | [
"Apache-2.0"
] | 11,094 | 2019-05-07T02:48:50.000Z | 2022-03-31T08:49:42.000Z | 基础教程/A1-Python与基础知识/算法第一步/ExampleCodes/chapter07/7-1.py | microsoft/ai-edu | 2f59fa4d3cf19f14e0b291e907d89664bcdc8df3 | [
"Apache-2.0"
] | 157 | 2019-05-13T15:07:19.000Z | 2022-03-23T08:52:32.000Z | 基础教程/A1-Python与基础知识/算法第一步/ExampleCodes/chapter07/7-1.py | microsoft/ai-edu | 2f59fa4d3cf19f14e0b291e907d89664bcdc8df3 | [
"Apache-2.0"
] | 2,412 | 2019-05-07T02:55:15.000Z | 2022-03-30T06:56:52.000Z | a = "1"
a = a + 2
print(a) | 8.666667 | 9 | 0.423077 | 7 | 26 | 1.571429 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0.307692 | 26 | 3 | 10 | 8.666667 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
0eccaa23ccf8d7480b60ee8db7d1eab4116e870b | 257 | py | Python | gitless/cli/gl_untrack.py | goldstar611/gitless | 8c3041c69995a10f6bdf933405a9f01cdd21e969 | [
"MIT"
] | 9 | 2022-02-22T09:08:26.000Z | 2022-02-24T16:10:39.000Z | gitless/cli/gl_untrack.py | goldstar611/gitless | 8c3041c69995a10f6bdf933405a9f01cdd21e969 | [
"MIT"
] | 30 | 2022-02-20T17:21:37.000Z | 2022-03-09T16:16:09.000Z | gitless/cli/gl_untrack.py | goldstar611/gitless | 8c3041c69995a10f6bdf933405a9f01cdd21e969 | [
"MIT"
] | 1 | 2022-02-22T09:31:01.000Z | 2022-02-22T09:31:01.000Z | # -*- coding: utf-8 -*-
# Gitless - a version control system built on top of Git
# Licensed under MIT
"""gl untrack - Stop tracking changes to files."""
from . import file_cmd
parser = file_cmd.parser('stop tracking changes to files', 'untrack', ['un'])
| 25.7 | 77 | 0.688716 | 38 | 257 | 4.605263 | 0.763158 | 0.137143 | 0.217143 | 0.24 | 0.297143 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004717 | 0.175097 | 257 | 9 | 78 | 28.555556 | 0.820755 | 0.548638 | 0 | 0 | 0 | 0 | 0.361111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
0eccd2f7adb0e8e0e69eb510cbc8b00ced45374d | 230 | py | Python | spotty/commands/writers/output_writrer.py | Inculus/spotty | 56863012668a6c13ad13c2a04f900047e229fbe6 | [
"MIT"
] | 1 | 2020-07-17T07:02:09.000Z | 2020-07-17T07:02:09.000Z | spotty/commands/writers/output_writrer.py | Inculus/spotty | 56863012668a6c13ad13c2a04f900047e229fbe6 | [
"MIT"
] | null | null | null | spotty/commands/writers/output_writrer.py | Inculus/spotty | 56863012668a6c13ad13c2a04f900047e229fbe6 | [
"MIT"
] | null | null | null | from spotty.commands.writers.abstract_output_writrer import AbstractOutputWriter
class OutputWriter(AbstractOutputWriter):
def _write(self, msg: str):
"""Prints messages to STDOUT."""
print(msg, flush=True)
| 25.555556 | 80 | 0.734783 | 25 | 230 | 6.64 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169565 | 230 | 8 | 81 | 28.75 | 0.86911 | 0.113043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 4 |
0ee0ca4400485476c022297728a45b9de4309513 | 500 | py | Python | councilmatic/subscriptions/management/commands/syncsubscribers.py | isabella232/councilmatic | a6c560d9018062774310e3f7acc474c635934f17 | [
"BSD-Source-Code"
] | 17 | 2015-01-17T23:42:37.000Z | 2021-12-15T15:42:28.000Z | councilmatic/subscriptions/management/commands/syncsubscribers.py | codeforamerica/councilmatic | a6c560d9018062774310e3f7acc474c635934f17 | [
"BSD-Source-Code"
] | 2 | 2015-07-16T13:24:58.000Z | 2019-03-13T19:27:05.000Z | councilmatic/subscriptions/management/commands/syncsubscribers.py | isabella232/councilmatic | a6c560d9018062774310e3f7acc474c635934f17 | [
"BSD-Source-Code"
] | 6 | 2015-07-27T13:02:08.000Z | 2021-04-16T10:25:04.000Z | from django.core.management.base import BaseCommand, CommandError
from councilmatic.subscriptions.feeds import import_all_feeds
from councilmatic.subscriptions.feeds import SubscriptionEmailer
from councilmatic.subscriptions.models import Subscriber, User
class Command(BaseCommand):
help = "Create subscribers for all those users that don't have one."
def handle(self, *args, **options):
for user in User.objects.all():
Subscriber.objects.get_or_create_for_user(user)
| 38.461538 | 72 | 0.784 | 63 | 500 | 6.126984 | 0.603175 | 0.124352 | 0.225389 | 0.176166 | 0.207254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144 | 500 | 12 | 73 | 41.666667 | 0.901869 | 0 | 0 | 0 | 0 | 0 | 0.118 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.444444 | 0 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
0efef48f74c8fd90c24f672f9d5dd1f57e5c6e0f | 276 | py | Python | src/sensitivity/linear_sensitivity_analysis.py | superdurszlak/ModelParameterIdentifier | bc39005eefbe5d667d887e8bfc67495f6f8befa5 | [
"MIT"
] | 1 | 2020-04-28T12:43:13.000Z | 2020-04-28T12:43:13.000Z | src/sensitivity/linear_sensitivity_analysis.py | superdurszlak/ModelParameterIdentifier | bc39005eefbe5d667d887e8bfc67495f6f8befa5 | [
"MIT"
] | null | null | null | src/sensitivity/linear_sensitivity_analysis.py | superdurszlak/ModelParameterIdentifier | bc39005eefbe5d667d887e8bfc67495f6f8befa5 | [
"MIT"
] | null | null | null | import numpy as np
from src.sensitivity.base_sensitivity_analysis import BaseSensitivityAnalysis
class LinearSensitivityAnalysis(BaseSensitivityAnalysis):
def _get_deviation_steps(self):
return (x for x in np.linspace(1.0, 0.0, self._samples, endpoint=False))
# File: ligature/simulators/__init__.py (CorsoSource/ligature, Apache-2.0)
"""
Simulators to generate streams of data.
"""
from ligature.scanners.drunk import DrunkenWalk
__all__ = ['DrunkenWalk']
# File: learn2learn/gym/__init__.py (Brikwerk/learn2learn, MIT)
#!/usr/bin/env python3
r"""
Environment, models, and other utilities related to reinforcement learning and OpenAI Gym.
"""
from . import envs
from .envs.meta_env import MetaEnv
from .async_vec_env import AsyncVectorEnv
| 22.1 | 90 | 0.782805 | 32 | 221 | 5.3125 | 0.75 | 0.105882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005236 | 0.135747 | 221 | 9 | 91 | 24.555556 | 0.884817 | 0.506787 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
1615c0c8862f997707fdd9faba9e6e13c37f6739 | 108 | py | Python | elkconfdparser/__main__.py | iaroslavscript/elk-confd-parser | 6930afa981468ddf24c80a33feba5e10ca870b9d | [
"MIT"
] | null | null | null | elkconfdparser/__main__.py | iaroslavscript/elk-confd-parser | 6930afa981468ddf24c80a33feba5e10ca870b9d | [
"MIT"
] | 18 | 2021-07-29T18:24:28.000Z | 2021-08-01T07:44:16.000Z | elkconfdparser/__main__.py | iaroslavscript/elk-confd-parser | 6930afa981468ddf24c80a33feba5e10ca870b9d | [
"MIT"
] | null | null | null | import sys
from elkconfdparser.cli import main
if __name__ == "__main__":  # noqa
    main(sys.argv[1:])
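`sys.argv[1:]` drops the interpreter-supplied program name so `main` sees only the real arguments. A small illustration (the flag names below are made up, not elkconfdparser's actual CLI):

```python
# Simulated argv as the interpreter would populate it.
argv = ["__main__.py", "--in", "conf.d", "--out", "out.yml"]
args = argv[1:]  # drop the program name
print(args)  # ['--in', 'conf.d', '--out', 'out.yml']
```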
| 13.5 | 35 | 0.685185 | 15 | 108 | 4.4 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011494 | 0.194444 | 108 | 7 | 36 | 15.428571 | 0.747126 | 0.037037 | 0 | 0 | 0 | 0 | 0.078431 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
161884b0ed8516bc561620a0fcf7d840e0555bac | 166 | py | Python | problem0119.py | kmarcini/Project-Euler-Python | d644e8e1ec4fac70a9ab407ad5e1f0a75547c8d3 | [
"BSD-3-Clause"
] | null | null | null | problem0119.py | kmarcini/Project-Euler-Python | d644e8e1ec4fac70a9ab407ad5e1f0a75547c8d3 | [
"BSD-3-Clause"
] | null | null | null | problem0119.py | kmarcini/Project-Euler-Python | d644e8e1ec4fac70a9ab407ad5e1f0a75547c8d3 | [
"BSD-3-Clause"
] | null | null | null | ###########################
#
# #119 Digit power sum - Project Euler
# https://projecteuler.net/problem=119
#
# Code by Kevin Marciniak
#
###########################
| 18.444444 | 38 | 0.46988 | 15 | 166 | 5.2 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041096 | 0.120482 | 166 | 8 | 39 | 20.75 | 0.493151 | 0.578313 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
16284a4ded58f35ef3c4f3c771fd8c9a0268c229 | 215 | py | Python | bookmarks/images/admin.py | sharafx2qeshta/Django-3-by-example | f846ef46371a1199e52e285e0dd6f00fa42889bd | [
"MIT"
] | null | null | null | bookmarks/images/admin.py | sharafx2qeshta/Django-3-by-example | f846ef46371a1199e52e285e0dd6f00fa42889bd | [
"MIT"
] | null | null | null | bookmarks/images/admin.py | sharafx2qeshta/Django-3-by-example | f846ef46371a1199e52e285e0dd6f00fa42889bd | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Image
@admin.register(Image)
class ImageAdmin(admin.ModelAdmin):
    list_display = ['title', 'slug', 'image', 'created']
    list_filter = ['created']
| 23.888889 | 57 | 0.688372 | 25 | 215 | 5.84 | 0.68 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176744 | 215 | 8 | 58 | 26.875 | 0.824859 | 0 | 0 | 0 | 0 | 0 | 0.135266 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
16337288cd7ce611b8eeff9dd81a0cf9b482c4d4 | 47 | py | Python | Flask-todolist-Sqlite3-master/venv/lib/python3.6/base64.py | IncredibleDraco/MyScholar | 272aafa33f7227d1bc0d937d046788cbabede453 | [
"Apache-2.0"
] | null | null | null | Flask-todolist-Sqlite3-master/venv/lib/python3.6/base64.py | IncredibleDraco/MyScholar | 272aafa33f7227d1bc0d937d046788cbabede453 | [
"Apache-2.0"
] | null | null | null | Flask-todolist-Sqlite3-master/venv/lib/python3.6/base64.py | IncredibleDraco/MyScholar | 272aafa33f7227d1bc0d937d046788cbabede453 | [
"Apache-2.0"
] | 1 | 2019-11-25T10:25:21.000Z | 2019-11-25T10:25:21.000Z | /home/sheldon/anaconda3/lib/python3.6/base64.py | 47 | 47 | 0.829787 | 8 | 47 | 4.875 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106383 | 0 | 47 | 1 | 47 | 47 | 0.723404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
# File: models/scene_seg_PAConv/__init__.py (SamGalanakis/FlowCompare, MIT)
from .model.pointnet2.pointnet2_paconv_seg import PointNet2SSGSeg
# File: simple-kashflow-sample.py (sampart/kashflow-python-samples, Unlicense)
# uses SUDS SOAP client: https://fedorahosted.org/suds/
from suds.client import Client
import logging
client = Client("https://securedwebapp.com/api/service.asmx?WSDL")
result = client.service.GetCustomers(UserName="username", Password="username")
print(result)
# File: ibmsecurity/isam/web/reverse_proxy/transaction_logging.py (svetterIO/backup_ibmsecurity, Apache-2.0)
import logging
import os.path

from ibmsecurity.utilities.tools import json_sort

logger = logging.getLogger(__name__)

uri = "/wga/reverseproxy"
requires_modules = "wga"
requires_version = None
def get(isamAppliance, instance_id, check_mode=False, force=False):
    """
    Retrieving all transaction logging components and their details
    """
    return isamAppliance.invoke_get("Retrieving all transaction logging components and their details",
                                    "{0}/{1}/transaction_logging".format(uri, instance_id),
                                    requires_modules=requires_modules, requires_version=requires_version)
def get_files(isamAppliance, instance_id, component_id, check_mode=False, force=False):
    """
    Retrieving all transaction log files for a component
    """
    return isamAppliance.invoke_get("Retrieving all transaction log files for a component",
                                    "{0}/{1}/transaction_logging/{2}/translog_files".format(uri, instance_id,
                                                                                           component_id),
                                    requires_modules=requires_modules, requires_version=requires_version)
def export_file(isamAppliance, instance_id, component_id, file_id, filepath, check_mode=False, force=False):
    """
    Exporting the transaction logging data file or rollover transaction logging data file for a component
    """
    if os.path.exists(filepath) is True:
        logger.info("File '{0}' already exists. Skipping export.".format(filepath))
        warnings = ["File '{0}' already exists. Skipping export.".format(filepath)]
        return isamAppliance.create_return_object(warnings=warnings)

    if check_mode is True:
        return isamAppliance.create_return_object(changed=True)
    else:
        return isamAppliance.invoke_get_file(
            "Exporting the transaction logging data file or rollover transaction logging data file for a component",
            "{0}/{1}/transaction_logging/{2}/translog_files/{3}?export".format(uri, instance_id, component_id, file_id),
            filepath
        )

    return isamAppliance.create_return_object()
def update(isamAppliance, instance_id, component_id, status, rollover_size, max_rollover_files, compress,
           check_mode=False, force=False):
    """
    Modifying the status and rollover size for a component
    """
    if force is True or _check_update(isamAppliance, instance_id=instance_id, component_id=component_id, status=status,
                                     rollover_size=rollover_size, max_rollover_files=max_rollover_files,
                                     compress=compress) is False:
        if check_mode is True:
            return isamAppliance.create_return_object(changed=True)
        else:
            return isamAppliance.invoke_put(
                "Modifying the status and rollover size for a component",
                "{0}/{1}/transaction_logging/{2}".format(uri, instance_id, component_id),
                {
                    'id': component_id,
                    'status': status,
                    'rollover_size': rollover_size,
                    'max_rollover_files': max_rollover_files,
                    'compress': compress
                },
                requires_modules=requires_modules, requires_version=requires_version
            )

    return isamAppliance.create_return_object()
def rollover(isamAppliance, instance_id, component_id, check_mode=False, force=False):
    """
    Rolling over the transaction logging data file for a component
    """
    if force is True or _check_enabled(isamAppliance, instance_id, component_id) is True:
        if check_mode is True:
            return isamAppliance.create_return_object(changed=True)
        else:
            return isamAppliance.invoke_put(
                "Rolling over the transaction logging data file for a component",
                "{0}/{1}/transaction_logging/{2}".format(uri, instance_id, component_id),
                {
                    'rollover': "yes"
                },
                requires_modules=requires_modules, requires_version=requires_version
            )

    return isamAppliance.create_return_object()
def delete_unused(isamAppliance, instance_id, component_id, check_mode=False, force=False):
    """
    Deleting the unused transaction logging data file and rollover files for a component
    """
    ret_obj = get_files(isamAppliance, instance_id=instance_id, component_id=component_id)

    if force is True or ret_obj['data'] != []:
        if check_mode is True:
            return isamAppliance.create_return_object(changed=True)
        else:
            return isamAppliance.invoke_delete(
                "Deleting the unused transaction logging data file and rollover files for a component",
                "{0}/{1}/transaction_logging/{2}/translog_files".format(uri, instance_id, component_id))

    return isamAppliance.create_return_object()
def delete(isamAppliance, instance_id, component_id, file_id, check_mode=False, force=False):
    """
    Deleting the transaction logging data file or rollover file for a component
    """
    if force is True or _check_file(isamAppliance, instance_id, component_id, file_id) is True:
        if check_mode is True:
            return isamAppliance.create_return_object(changed=True)
        else:
            return isamAppliance.invoke_delete(
                "Deleting the transaction logging data file or rollover file for a component",
                "{0}/{1}/transaction_logging/{2}/translog_files/{3}".format(uri, instance_id, component_id, file_id))

    return isamAppliance.create_return_object()
def delete_multiple_files(isamAppliance, instance_id, component_id, files, check_mode=False, force=False):
    """
    Deleting multiple transaction logging data file and rollover files for a component
    """
    delete_required = False
    files_to_delete = []
    ret_obj = get_files(isamAppliance, instance_id, component_id)
    for obj1 in files:
        for obj2 in ret_obj['data']:
            if obj1['name'] == obj2['id']:
                files_to_delete.append(obj1)
                delete_required = True

    if len(files_to_delete) == 1:
        return delete(isamAppliance, instance_id, component_id, files_to_delete[0]['name'])

    if force is True or delete_required is True:
        if check_mode is True:
            return isamAppliance.create_return_object(changed=True)
        else:
            return isamAppliance.invoke_put(
                "Deleting multiple transaction logging data file and rollover files for a component",
                "{0}/{1}/transaction_logging/{2}/translog_files?action=delete".format(uri, instance_id, component_id),
                {
                    'files': files
                }
            )

    return isamAppliance.create_return_object()
def _check_update(isamAppliance, instance_id, component_id, status, rollover_size, max_rollover_files, compress):
    """
    Check to see if the component already matches the desired settings
    """
    ret_obj = get(isamAppliance, instance_id)
    new_obj = {
        'id': component_id,
        'status': status,
        'rollover_size': rollover_size,
        'max_rollover_files': max_rollover_files,
        'compress': compress
    }
    sorted_new_obj = json_sort(new_obj)

    for obj in ret_obj['data']:
        if obj['id'] == component_id:
            if obj['status'] == "Off" and status == "Off":
                return True
            else:
                del obj['file_size']
                sorted_obj = json_sort(obj)
                if sorted_obj == sorted_new_obj:
                    return True

    return False
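`_check_update` compares the desired settings against the appliance's current state with `json_sort` from `ibmsecurity.utilities.tools`, so key order cannot cause a spurious mismatch. A stdlib-only sketch of the same idea (the real `json_sort` may differ in detail):

```python
import json


def json_sorted(obj):
    # Serializing with sorted keys makes two dicts with the same
    # content compare equal regardless of key insertion order.
    return json.dumps(obj, sort_keys=True)


current = {'status': 'On', 'id': 'http', 'rollover_size': 2000000}
desired = {'id': 'http', 'rollover_size': 2000000, 'status': 'On'}
print(json_sorted(current) == json_sorted(desired))  # True
```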
def _check_file(isamAppliance, instance_id, component_id, file_id):
    """
    Check to see if the file_id exists or not
    """
    ret_obj = get_files(isamAppliance, instance_id, component_id)

    for obj in ret_obj['data']:
        if obj['id'] == file_id:
            return True

    return False
def _check(isamAppliance, instance_id, component_id):
    """
    Check to see if the component_id exists or not
    """
    ret_obj = get(isamAppliance, instance_id)

    for obj in ret_obj['data']:
        if obj['id'] == component_id:
            return True

    return False
def _check_enabled(isamAppliance, instance_id, component_id):
    """
    Check to see if the component is enabled or not
    """
    ret_obj = get(isamAppliance, instance_id)

    for obj in ret_obj['data']:
        if obj['id'] == component_id and obj['status'] == "On":
            return True

    return False
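Every mutating function in this module follows the same idempotency contract: `force=True` bypasses the state check, and `check_mode=True` reports that a change would happen without issuing it. A stripped-down sketch of that control flow, with plain values standing in for the appliance state:

```python
def apply_change(current, desired, check_mode=False, force=False):
    """Return (changed, new_state) following the force/check_mode contract."""
    if force or current != desired:
        if check_mode:
            return True, current   # a change is pending; touch nothing
        return True, desired       # actually apply the change
    return False, current          # already in the desired state


print(apply_change("Off", "On"))                   # (True, 'On')
print(apply_change("On", "On"))                    # (False, 'On')
print(apply_change("Off", "On", check_mode=True))  # (True, 'Off')
```

This is the pattern that makes the module safe to drive from configuration-management tools such as Ansible, where dry runs and repeated runs are routine.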
# File: electrode/clients/lib/efield/dipole_fit.py (krbjila/labrad_tools, MIT)
D_POLY_FIT = [0, 0.0561, -0.00337, 1.03812e-4, -1.42283e-6, 5.34778e-9]
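`D_POLY_FIT` reads most naturally as polynomial coefficients in ascending order (constant term first); that ordering is an assumption, since the file carries no docstring. Under it, the fit can be evaluated with Horner's method (note that `numpy.polyval` would expect the reversed, descending order):

```python
D_POLY_FIT = [0, 0.0561, -0.00337, 1.03812e-4, -1.42283e-6, 5.34778e-9]


def eval_poly(coeffs, x):
    # Horner's method over ascending-order coefficients:
    # c0 + x*(c1 + x*(c2 + ...))
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result


print(eval_poly(D_POLY_FIT, 0.0))  # 0.0
```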
# File: tests/protocol/test_imds.py (ce-bu/WALinuxAgent, Apache-2.0)
# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
import json
import os
import unittest
import azurelinuxagent.common.protocol.imds as imds
from azurelinuxagent.common.datacontract import set_properties
from azurelinuxagent.common.exception import HttpError, ResourceGoneError
from azurelinuxagent.common.future import ustr, httpclient
from azurelinuxagent.common.utils import restutil
from tests.ga.test_update import ResponseMock
from tests.tools import AgentTestCase, data_dir, MagicMock, Mock, patch
def get_mock_compute_response():
    return ResponseMock(response='''{
        "location": "westcentralus",
        "name": "unit_test",
        "offer": "UnitOffer",
        "osType": "Linux",
        "placementGroupId": "",
        "platformFaultDomain": "0",
        "platformUpdateDomain": "0",
        "publisher": "UnitPublisher",
        "resourceGroupName": "UnitResourceGroupName",
        "sku": "UnitSku",
        "subscriptionId": "e4402c6c-2804-4a0a-9dee-d61918fc4d28",
        "tags": "Key1:Value1;Key2:Value2",
        "vmId": "f62f23fb-69e2-4df0-a20b-cb5c201a3e7a",
        "version": "UnitVersion",
        "vmSize": "Standard_D1_v2"
    }'''.encode('utf-8'))
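The mocked IMDS payload above is plain UTF-8 JSON; `get_compute` ultimately decodes such a response and maps its fields onto `ComputeInfo`. The decode step alone, with stdlib only and a trimmed-down payload:

```python
import json

payload = b'{"vmId": "f62f23fb-69e2-4df0-a20b-cb5c201a3e7a", "osType": "Linux"}'
data = json.loads(payload.decode('utf-8'))
print(data["osType"])  # Linux
```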
class TestImds(AgentTestCase):
    @patch("azurelinuxagent.ga.update.restutil.http_get")
    def test_get(self, mock_http_get):
        mock_http_get.return_value = get_mock_compute_response()

        test_subject = imds.ImdsClient(restutil.KNOWN_WIRESERVER_IP)
        test_subject.get_compute()

        self.assertEqual(1, mock_http_get.call_count)
        positional_args, kw_args = mock_http_get.call_args

        self.assertEqual('http://169.254.169.254/metadata/instance/compute?api-version=2018-02-01', positional_args[0])
        self.assertTrue('User-Agent' in kw_args['headers'])
        self.assertTrue('Metadata' in kw_args['headers'])
        self.assertEqual(True, kw_args['headers']['Metadata'])

    @patch("azurelinuxagent.ga.update.restutil.http_get")
    def test_get_bad_request(self, mock_http_get):
        mock_http_get.return_value = ResponseMock(status=restutil.httpclient.BAD_REQUEST)

        test_subject = imds.ImdsClient(restutil.KNOWN_WIRESERVER_IP)
        self.assertRaises(HttpError, test_subject.get_compute)

    @patch("azurelinuxagent.ga.update.restutil.http_get")
    def test_get_internal_service_error(self, mock_http_get):
        mock_http_get.return_value = ResponseMock(status=restutil.httpclient.INTERNAL_SERVER_ERROR)

        test_subject = imds.ImdsClient(restutil.KNOWN_WIRESERVER_IP)
        self.assertRaises(HttpError, test_subject.get_compute)

    @patch("azurelinuxagent.ga.update.restutil.http_get")
    def test_get_empty_response(self, mock_http_get):
        mock_http_get.return_value = ResponseMock(response=''.encode('utf-8'))

        test_subject = imds.ImdsClient(restutil.KNOWN_WIRESERVER_IP)
        self.assertRaises(ValueError, test_subject.get_compute)
    def test_deserialize_ComputeInfo(self):
        s = '''{
            "location": "westcentralus",
            "name": "unit_test",
            "offer": "UnitOffer",
            "osType": "Linux",
            "placementGroupId": "",
            "platformFaultDomain": "0",
            "platformUpdateDomain": "0",
            "publisher": "UnitPublisher",
            "resourceGroupName": "UnitResourceGroupName",
            "sku": "UnitSku",
            "subscriptionId": "e4402c6c-2804-4a0a-9dee-d61918fc4d28",
            "tags": "Key1:Value1;Key2:Value2",
            "vmId": "f62f23fb-69e2-4df0-a20b-cb5c201a3e7a",
            "version": "UnitVersion",
            "vmSize": "Standard_D1_v2",
            "vmScaleSetName": "MyScaleSet",
            "zone": "In"
        }'''

        data = json.loads(s)
        compute_info = imds.ComputeInfo()
        set_properties("compute", compute_info, data)

        self.assertEqual('westcentralus', compute_info.location)
        self.assertEqual('unit_test', compute_info.name)
        self.assertEqual('UnitOffer', compute_info.offer)
        self.assertEqual('Linux', compute_info.osType)
        self.assertEqual('', compute_info.placementGroupId)
        self.assertEqual('0', compute_info.platformFaultDomain)
        self.assertEqual('0', compute_info.platformUpdateDomain)
        self.assertEqual('UnitPublisher', compute_info.publisher)
        self.assertEqual('UnitResourceGroupName', compute_info.resourceGroupName)
        self.assertEqual('UnitSku', compute_info.sku)
        self.assertEqual('e4402c6c-2804-4a0a-9dee-d61918fc4d28', compute_info.subscriptionId)
        self.assertEqual('Key1:Value1;Key2:Value2', compute_info.tags)
        self.assertEqual('f62f23fb-69e2-4df0-a20b-cb5c201a3e7a', compute_info.vmId)
        self.assertEqual('UnitVersion', compute_info.version)
        self.assertEqual('Standard_D1_v2', compute_info.vmSize)
        self.assertEqual('MyScaleSet', compute_info.vmScaleSetName)
        self.assertEqual('In', compute_info.zone)

        self.assertEqual('UnitPublisher:UnitOffer:UnitSku:UnitVersion', compute_info.image_info)
    def test_is_custom_image(self):
        image_origin = self._setup_image_origin_assert("", "", "", "")
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_CUSTOM, image_origin)

    def test_is_endorsed_CentOS(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.5", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.6", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.7", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.8", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.9", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.0", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7-LVM", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7-RAW", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "6.5", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "6.8", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "7.1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "7.4", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.1", ""))

    def test_is_endorsed_CoreOS(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("CoreOS", "CoreOS", "stable", "494.4.0"))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("CoreOS", "CoreOS", "stable", "899.17.0"))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("CoreOS", "CoreOS", "stable", "1688.5.3"))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("CoreOS", "CoreOS", "stable", "494.3.0"))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("CoreOS", "CoreOS", "alpha", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("CoreOS", "CoreOS", "beta", ""))
    def test_is_endorsed_Debian(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("credativ", "Debian", "7", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("credativ", "Debian", "8", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("credativ", "Debian", "9", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("credativ", "Debian", "9-DAILY", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("credativ", "Debian", "10-DAILY", ""))

    def test_is_endorsed_Rhel(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "6.7", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "6.8", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "6.9", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.0", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7-LVM", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7-RAW", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-HANA", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-HANA", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-HANA", "7.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP", "7.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-APPS", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-APPS", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-APPS", "7.4", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("RedHat", "RHEL", "6.6", ""))
    def test_is_endorsed_SuSE(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "11-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "11-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP5", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP5", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP5", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("SuSE", "SLES", "11-SP3", ""))

    def test_is_endorsed_UbuntuServer(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.0-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.1-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.2-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.3-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.4-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.5-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.6-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.7-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.8-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "16.04-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "18.04-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "20.04-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "22.04-LTS", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("Canonical", "UbuntuServer", "12.04-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("Canonical", "UbuntuServer", "17.10", ""))
self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("Canonical", "UbuntuServer", "18.04-DAILY-LTS", ""))
@staticmethod
def _setup_image_origin_assert(publisher, offer, sku, version):
        data = {
            "publisher": publisher,
            "offer": offer,
            "sku": sku,
            "version": version,
        }
compute_info = imds.ComputeInfo()
set_properties("compute", compute_info, data)
return compute_info.image_origin
def test_response_validation(self):
# invalid json or empty response
self._assert_validation(http_status_code=200,
http_response='',
expected_valid=False,
expected_response='JSON parsing failed')
self._assert_validation(http_status_code=200,
http_response=None,
expected_valid=False,
expected_response='JSON parsing failed')
self._assert_validation(http_status_code=200,
http_response='{ bad json ',
expected_valid=False,
expected_response='JSON parsing failed')
# 500 response
self._assert_validation(http_status_code=500,
http_response='error response',
expected_valid=False,
expected_response='IMDS error in /metadata/instance: [HTTP Failed] [500: reason] error response')
# 429 response - throttling does not mean service is unhealthy
self._assert_validation(http_status_code=429,
http_response='server busy',
expected_valid=True,
expected_response='[HTTP Failed] [429: reason] server busy')
# 404 response - error responses do not mean service is unhealthy
self._assert_validation(http_status_code=404,
http_response='not found',
expected_valid=True,
expected_response='[HTTP Failed] [404: reason] not found')
# valid json
self._assert_validation(http_status_code=200,
http_response=self._imds_response('valid'),
expected_valid=True,
expected_response='')
# unicode
self._assert_validation(http_status_code=200,
http_response=self._imds_response('unicode'),
expected_valid=True,
expected_response='')
def test_field_validation(self):
# TODO: compute fields (#1249)
self._assert_field('network', 'interface', 'ipv4', 'ipAddress', 'privateIpAddress')
self._assert_field('network', 'interface', 'ipv4', 'ipAddress')
self._assert_field('network', 'interface', 'ipv4')
self._assert_field('network', 'interface', 'macAddress')
self._assert_field('network')
def _assert_field(self, *fields):
response = self._imds_response('valid')
response_obj = json.loads(ustr(response, encoding="utf-8"))
# assert empty value
self._update_field(response_obj, fields, '')
altered_response = json.dumps(response_obj).encode()
self._assert_validation(http_status_code=200,
http_response=altered_response,
expected_valid=False,
expected_response='Empty field: [{0}]'.format(fields[-1]))
# assert missing value
self._update_field(response_obj, fields, None)
altered_response = json.dumps(response_obj).encode()
self._assert_validation(http_status_code=200,
http_response=altered_response,
expected_valid=False,
expected_response='Missing field: [{0}]'.format(fields[-1]))
def _update_field(self, obj, fields, val):
if isinstance(obj, list):
self._update_field(obj[0], fields, val)
else:
f = fields[0]
if len(fields) == 1:
if val is None:
del obj[f]
else:
obj[f] = val
else:
self._update_field(obj[f], fields[1:], val)
@staticmethod
def _imds_response(f):
path = os.path.join(data_dir, "imds", "{0}.json".format(f))
with open(path, "rb") as fh:
return fh.read()
def _assert_validation(self, http_status_code, http_response, expected_valid, expected_response):
test_subject = imds.ImdsClient(restutil.KNOWN_WIRESERVER_IP)
with patch("azurelinuxagent.common.utils.restutil.http_get") as mock_http_get:
mock_http_get.return_value = ResponseMock(status=http_status_code,
reason='reason',
response=http_response)
validate_response = test_subject.validate()
self.assertEqual(1, mock_http_get.call_count)
positional_args, kw_args = mock_http_get.call_args
self.assertTrue('User-Agent' in kw_args['headers'])
self.assertEqual(restutil.HTTP_USER_AGENT_HEALTH, kw_args['headers']['User-Agent'])
self.assertTrue('Metadata' in kw_args['headers'])
self.assertEqual(True, kw_args['headers']['Metadata'])
self.assertEqual('http://169.254.169.254/metadata/instance?api-version=2018-02-01',
positional_args[0])
self.assertEqual(expected_valid, validate_response[0])
self.assertTrue(expected_response in validate_response[1],
"Expected: '{0}', Actual: '{1}'"
.format(expected_response, validate_response[1]))
def test_endpoint_fallback(self):
        # HTTP error status codes are tested in test_response_validation; none of them
        # should trigger a fallback. This is confirmed because _assert_validation counts
        # HTTP GET calls, enforces a single GET call (a fallback would cause 2), and
        # checks the URL called.
test_subject = imds.ImdsClient("foo.bar")
# ensure user-agent gets set correctly
for is_health, expected_useragent in [(False, restutil.HTTP_USER_AGENT), (True, restutil.HTTP_USER_AGENT_HEALTH)]:
            # Use a different resource path for the health query to make debugging the unit test easier
resource_path = 'something/health' if is_health else 'something'
for has_primary_ioerror in (False, True):
# secondary endpoint unreachable
test_subject._http_get = Mock(side_effect=self._mock_http_get)
self._mock_imds_setup(primary_ioerror=has_primary_ioerror, secondary_ioerror=True)
result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
                if has_primary_ioerror:
                    self.assertFalse(result.success)
                else:
                    self.assertTrue(result.success)
self.assertFalse(result.service_error)
if has_primary_ioerror:
self.assertEqual('IMDS error in /metadata/{0}: Unable to connect to endpoint'.format(resource_path), result.response)
else:
self.assertEqual('Mock success response', result.response)
for _, kwargs in test_subject._http_get.call_args_list:
self.assertTrue('User-Agent' in kwargs['headers'])
self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
self.assertEqual(2 if has_primary_ioerror else 1, test_subject._http_get.call_count)
# IMDS success
test_subject._http_get = Mock(side_effect=self._mock_http_get)
self._mock_imds_setup(primary_ioerror=has_primary_ioerror)
result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
self.assertTrue(result.success)
self.assertFalse(result.service_error)
self.assertEqual('Mock success response', result.response)
for _, kwargs in test_subject._http_get.call_args_list:
self.assertTrue('User-Agent' in kwargs['headers'])
self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
self.assertEqual(2 if has_primary_ioerror else 1, test_subject._http_get.call_count)
# IMDS throttled
test_subject._http_get = Mock(side_effect=self._mock_http_get)
self._mock_imds_setup(primary_ioerror=has_primary_ioerror, throttled=True)
result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
self.assertFalse(result.success)
self.assertFalse(result.service_error)
self.assertEqual('IMDS error in /metadata/{0}: Throttled'.format(resource_path), result.response)
for _, kwargs in test_subject._http_get.call_args_list:
self.assertTrue('User-Agent' in kwargs['headers'])
self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
self.assertEqual(2 if has_primary_ioerror else 1, test_subject._http_get.call_count)
# IMDS gone error
test_subject._http_get = Mock(side_effect=self._mock_http_get)
self._mock_imds_setup(primary_ioerror=has_primary_ioerror, gone_error=True)
result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
self.assertFalse(result.success)
self.assertTrue(result.service_error)
self.assertEqual('IMDS error in /metadata/{0}: HTTP Failed with Status Code 410: Gone'.format(resource_path), result.response)
for _, kwargs in test_subject._http_get.call_args_list:
self.assertTrue('User-Agent' in kwargs['headers'])
self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
self.assertEqual(2 if has_primary_ioerror else 1, test_subject._http_get.call_count)
# IMDS bad request
test_subject._http_get = Mock(side_effect=self._mock_http_get)
self._mock_imds_setup(primary_ioerror=has_primary_ioerror, bad_request=True)
result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
self.assertFalse(result.success)
self.assertFalse(result.service_error)
self.assertEqual('IMDS error in /metadata/{0}: [HTTP Failed] [404: reason] Mock not found'.format(resource_path), result.response)
for _, kwargs in test_subject._http_get.call_args_list:
self.assertTrue('User-Agent' in kwargs['headers'])
self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
self.assertEqual(2 if has_primary_ioerror else 1, test_subject._http_get.call_count)
def _mock_imds_setup(self, primary_ioerror=False, secondary_ioerror=False, gone_error=False, throttled=False, bad_request=False):
self._mock_imds_expect_fallback = primary_ioerror # pylint: disable=attribute-defined-outside-init
self._mock_imds_primary_ioerror = primary_ioerror # pylint: disable=attribute-defined-outside-init
self._mock_imds_secondary_ioerror = secondary_ioerror # pylint: disable=attribute-defined-outside-init
self._mock_imds_gone_error = gone_error # pylint: disable=attribute-defined-outside-init
self._mock_imds_throttled = throttled # pylint: disable=attribute-defined-outside-init
self._mock_imds_bad_request = bad_request # pylint: disable=attribute-defined-outside-init
def _mock_http_get(self, *_, **kwargs):
if "foo.bar" == kwargs['endpoint'] and not self._mock_imds_expect_fallback:
raise Exception("Unexpected endpoint called")
if self._mock_imds_primary_ioerror and "169.254.169.254" == kwargs['endpoint']:
raise HttpError("[HTTP Failed] GET http://{0}/metadata/{1} -- IOError timed out -- 6 attempts made"
.format(kwargs['endpoint'], kwargs['resource_path']))
if self._mock_imds_secondary_ioerror and "foo.bar" == kwargs['endpoint']:
raise HttpError("[HTTP Failed] GET http://{0}/metadata/{1} -- IOError timed out -- 6 attempts made"
.format(kwargs['endpoint'], kwargs['resource_path']))
if self._mock_imds_gone_error:
raise ResourceGoneError("Resource is gone")
if self._mock_imds_throttled:
raise HttpError("[HTTP Retry] GET http://{0}/metadata/{1} -- Status Code 429 -- 25 attempts made"
.format(kwargs['endpoint'], kwargs['resource_path']))
resp = MagicMock()
resp.reason = 'reason'
if self._mock_imds_bad_request:
resp.status = httpclient.NOT_FOUND
resp.read.return_value = 'Mock not found'
else:
resp.status = httpclient.OK
resp.read.return_value = 'Mock success response'
return resp
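The recursive `_update_field` helper above walks a list of keys, descending into the first element of any intermediate list, and either overwrites or deletes the leaf. A standalone sketch of the same traversal (hypothetical name, not part of the test class):

```python
import json

def update_nested_field(obj, fields, val):
    """Walk `fields` into `obj`; set the leaf to `val`, or delete it if val is None."""
    if isinstance(obj, list):
        # IMDS responses nest objects inside lists; descend into the first item.
        update_nested_field(obj[0], fields, val)
    elif len(fields) == 1:
        if val is None:
            del obj[fields[0]]
        else:
            obj[fields[0]] = val
    else:
        update_nested_field(obj[fields[0]], fields[1:], val)

# Blank out a nested field, the same way the "empty field" test case does.
response = json.loads('{"network": {"interface": [{"macAddress": "00:0d"}]}}')
update_nested_field(response, ["network", "interface", "macAddress"], "")
```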
if __name__ == '__main__':
unittest.main()
| 61.546371 | 151 | 0.676614 | 3,616 | 30,527 | 5.385785 | 0.102323 | 0.100539 | 0.08878 | 0.09941 | 0.753941 | 0.718408 | 0.71009 | 0.692837 | 0.686316 | 0.655199 | 0 | 0.022383 | 0.200904 | 30,527 | 495 | 152 | 61.670707 | 0.77597 | 0.054607 | 0 | 0.344648 | 0 | 0.013055 | 0.181266 | 0.025092 | 0 | 0 | 0 | 0.00202 | 0.446475 | 1 | 0.060052 | false | 0 | 0.02611 | 0.002611 | 0.099217 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
169b2560b243aa8f298be9ffa81703f7c86feada | 467 | py | Python | classification/libs/readers/__init__.py | 96lives/matrixlstm | 83332111a459dd3fbca944898fffd935faac8820 | [
"Apache-2.0"
] | 19 | 2020-08-11T09:18:28.000Z | 2022-03-10T13:53:13.000Z | N-ROD/evrepr/thirdparty/matrixlstm/classification/libs/readers/__init__.py | Chiaraplizz/home | 18cc93a795ce132e05b886aa34565a102915b1c6 | [
"MIT"
] | 4 | 2021-01-04T11:55:50.000Z | 2021-09-18T14:00:50.000Z | N-ROD/evrepr/thirdparty/matrixlstm/classification/libs/readers/__init__.py | Chiaraplizz/home | 18cc93a795ce132e05b886aa34565a102915b1c6 | [
"MIT"
] | 4 | 2020-09-03T07:12:55.000Z | 2021-08-19T11:37:55.000Z | from .eventdataset import EventDataset, EventDetectionDataset
from .eventchunkdataset import EventChunkDataset, EventDetectionChunkDataset
from .filereader import BinFileReader, AerFileReader, PropheseeReader
from .utils import get_file_format, get_filereader, get_dataset, get_loader, get_splits, \
random_split_source, format_transform, format_batch_size, format_usechunks, \
chunkparams
from .transforms import get_transforms
| 58.375 | 96 | 0.800857 | 46 | 467 | 7.847826 | 0.565217 | 0.049862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156317 | 467 | 7 | 97 | 66.714286 | 0.916244 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.714286 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
16b40aa78ed717b00578d7359671464fd0585cda | 125 | py | Python | EduSim/SimOS/__init__.py | bigdata-ustc/EduSim | 849eed229c24615e5f2c3045036311e83c22ea68 | [
"MIT"
] | 18 | 2019-11-11T03:45:35.000Z | 2022-02-09T15:31:51.000Z | EduSim/SimOS/__init__.py | ghzhao78506/EduSim | cb10e952eb212d8a9344143f889207b5cd48ba9d | [
"MIT"
] | 3 | 2020-10-23T01:05:57.000Z | 2021-03-16T12:12:24.000Z | EduSim/SimOS/__init__.py | bigdata-ustc/EduSim | 849eed229c24615e5f2c3045036311e83c22ea68 | [
"MIT"
] | 6 | 2020-06-09T21:32:00.000Z | 2022-03-12T00:25:18.000Z | # coding: utf-8
# 2020/4/30 @ tongshiwei
from .SimOS import train_eval, MetaAgent, RandomAgent
from .config import as_level
| 20.833333 | 53 | 0.768 | 19 | 125 | 4.947368 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074766 | 0.144 | 125 | 5 | 54 | 25 | 0.803738 | 0.288 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
16bed5d29f5a3d80173e077110354f53221ba870 | 253 | py | Python | oscar/lib/python2.7/site-packages/prompt_toolkit/clipboard/__init__.py | sainjusajan/django-oscar | 466e8edc807be689b0a28c9e525c8323cc48b8e1 | [
"BSD-3-Clause"
] | null | null | null | oscar/lib/python2.7/site-packages/prompt_toolkit/clipboard/__init__.py | sainjusajan/django-oscar | 466e8edc807be689b0a28c9e525c8323cc48b8e1 | [
"BSD-3-Clause"
] | null | null | null | oscar/lib/python2.7/site-packages/prompt_toolkit/clipboard/__init__.py | sainjusajan/django-oscar | 466e8edc807be689b0a28c9e525c8323cc48b8e1 | [
"BSD-3-Clause"
] | null | null | null | from .base import Clipboard, ClipboardData
from .in_memory import InMemoryClipboard
# We are not importing `PyperclipClipboard` here, because it would require the
# `pyperclip` module to be present.
#from .pyperclip import PyperclipClipboard
| 28.111111 | 79 | 0.782609 | 30 | 253 | 6.566667 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166008 | 253 | 8 | 80 | 31.625 | 0.933649 | 0.596838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
16c80fd227f49791f1b2794545a7467a34000add | 158 | py | Python | tests/test_index.py | sushinoya/ggpy | 455f75edaa55b2f9bce7f24cddf81096be5362d5 | [
"BSD-2-Clause"
] | 31 | 2018-10-17T01:19:07.000Z | 2022-01-23T13:16:49.000Z | tests/test_index.py | sushinoya/ggpy | 455f75edaa55b2f9bce7f24cddf81096be5362d5 | [
"BSD-2-Clause"
] | 1 | 2019-07-11T17:43:52.000Z | 2019-07-11T17:43:52.000Z | tests/test_index.py | sushinoya/ggpy | 455f75edaa55b2f9bce7f24cddf81096be5362d5 | [
"BSD-2-Clause"
] | 9 | 2018-10-12T16:44:43.000Z | 2020-07-02T15:52:37.000Z | from __future__ import print_function
from ggplot import *
meat = meat.set_index(['date'])
print(ggplot(meat, aes(x='__index__', y='beef')) + geom_point())
| 22.571429 | 64 | 0.721519 | 23 | 158 | 4.478261 | 0.695652 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113924 | 158 | 6 | 65 | 26.333333 | 0.735714 | 0 | 0 | 0 | 0 | 0 | 0.107595 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 4 |
16cc1967738a7463528538265f67d18015c05004 | 200 | py | Python | src/square_list.py | smorenburg/python | 74b1e72944dfd244f0169e8a7adb9e29ed1a7d27 | [
"MIT"
] | null | null | null | src/square_list.py | smorenburg/python | 74b1e72944dfd244f0169e8a7adb9e29ed1a7d27 | [
"MIT"
] | null | null | null | src/square_list.py | smorenburg/python | 74b1e72944dfd244f0169e8a7adb9e29ed1a7d27 | [
"MIT"
] | null | null | null | def square_list(nums):
"""
>>> square_list([1, 2, 3])
[1, 4, 9]
"""
idx = 0
while idx < len(nums):
nums[idx] = nums[idx] * nums[idx]
idx += 1
return nums
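For contrast, the in-place loop above can be written as a non-mutating comprehension (a hypothetical variant, not part of the original module):

```python
def squared_copy(nums):
    """Return a new list of squares, leaving `nums` unchanged."""
    return [n * n for n in nums]
```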
| 15.384615 | 41 | 0.445 | 28 | 200 | 3.107143 | 0.5 | 0.241379 | 0.252874 | 0.321839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063492 | 0.37 | 200 | 12 | 42 | 16.666667 | 0.626984 | 0.18 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
16d2e44e42fe63aaa148de2346d9c7aa0b000a36 | 3,682 | py | Python | mikaponics/foundation/mixins.py | mikaponics/mikaponics-back | 98e1ff8bab7dda3492e5ff637bf5aafd111c840c | [
"BSD-3-Clause"
] | 2 | 2019-04-30T23:51:41.000Z | 2019-05-04T00:35:52.000Z | mikaponics/foundation/mixins.py | mikaponics/mikaponics-back | 98e1ff8bab7dda3492e5ff637bf5aafd111c840c | [
"BSD-3-Clause"
] | 27 | 2019-04-30T20:22:28.000Z | 2022-02-10T08:10:32.000Z | mikaponics/foundation/mixins.py | mikaponics/mikaponics-back | 98e1ff8bab7dda3492e5ff637bf5aafd111c840c | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from django.core.exceptions import PermissionDenied
from django.urls import reverse
from django.utils.translation import ugettext_lazy as _
from django.views.generic import DetailView, ListView, TemplateView
class ExtraRequestProcessingMixin(object):
"""
Mixin used to get extract the filter parameters from the request GET
variables.
"""
def get_param_urls(self, skip_parameters_array):
parameters = ""
for i, param in enumerate(self.request.GET):
if str(param) not in skip_parameters_array:
# Figure out whether to use '?' or '&' operation.
operation = '?'
if i > 0:
operation = '&'
# Generate the URL.
parameter = operation + param + "=" + self.request.GET.get(param, None)
parameters += parameter
return parameters
def get_params_dict(self, skip_parameters_array):
parameters = {}
for param in self.request.GET:
if str(param) not in skip_parameters_array:
parameters[param] = self.request.GET.get(param, None)
return parameters
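The two helpers above build, respectively, a querystring suffix and a plain dict from `request.GET`, skipping the named parameters. A framework-free sketch of the same filtering (assumed names, operating on a plain dict instead of a Django request):

```python
def build_param_url(get_params, skip):
    """Rebuild a '?a=1&b=2' querystring from get_params, omitting keys in skip."""
    parts = ["{0}={1}".format(k, v) for k, v in get_params.items() if k not in skip]
    return "?" + "&".join(parts) if parts else ""

def build_params_dict(get_params, skip):
    """Keep only the parameters whose names are not listed in skip."""
    return {k: v for k, v in get_params.items() if k not in skip}
```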
class MikaponicsListView(ListView, ExtraRequestProcessingMixin):
"""
An opinionated modification on the class-based "ListView" view to
have a few enhancements suited for our `mikaponics` app.
"""
menu_id = None # Required for navigation
skip_parameters_array = []
def get_context_data(self, **kwargs):
"""
        Override the 'get_context_data' function to add our enhancements.
"""
base_context = super().get_context_data(**kwargs)
# Attach a "menu_id" object to the view.
base_context['menu_id'] = self.menu_id
# DEVELOPERS NOTE:
# - This class based view will have URL parameters for filtering and
# searching records.
# - We will extract the URL parameters and save them into our context
# so we can use this to help the pagination.
base_context['filter_parameters'] = self.get_param_urls(self.skip_parameters_array)
base_context['parameters'] = self.get_params_dict(self.skip_parameters_array)
# Return our custom context based on our `mikaponics` app.
return base_context
def get_paginate_by(self, queryset):
"""
Paginate by specified value in querystring, or use default class property value.
"""
return self.request.GET.get('paginate_by', self.paginate_by)
class MikaponicsDetailView(DetailView, ExtraRequestProcessingMixin):
"""
    An opinionated modification on the class-based "DetailView" view to
have a few enhancements suited for our `mikaponics` app.
"""
menu_id = None # Required for navigation
skip_parameters_array = []
def get_context_data(self, **kwargs):
"""
        Override the 'get_context_data' function to add our enhancements.
"""
base_context = super().get_context_data(**kwargs)
# Attach a "menu_id" object to the view.
base_context['menu_id'] = self.menu_id
# DEVELOPERS NOTE:
# - This class based view will have URL parameters for filtering and
# searching records.
# - We will extract the URL parameters and save them into our context
# so we can use this to help the pagination.
base_context['filter_parameters'] = self.get_param_urls(self.skip_parameters_array)
base_context['parameters'] = self.get_params_dict(self.skip_parameters_array)
# Return our custom context based on our `mikaponics` app.
return base_context
| 36.82 | 91 | 0.655894 | 442 | 3,682 | 5.30543 | 0.260181 | 0.059701 | 0.081023 | 0.058849 | 0.711727 | 0.711727 | 0.70064 | 0.643923 | 0.643923 | 0.643923 | 0 | 0.000738 | 0.264259 | 3,682 | 99 | 92 | 37.191919 | 0.864895 | 0.353884 | 0 | 0.487805 | 0 | 0 | 0.036705 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121951 | false | 0 | 0.097561 | 0 | 0.512195 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 4 |
16e5c52f17f8d4fb701e483bbf7db4fe0295bc21 | 16,269 | py | Python | nautobot/dcim/migrations/0004_initial_part_4.py | psmware-ltd/nautobot | ac516287fb8edcc3482bd011839de837c6bbf0df | [
"Apache-2.0"
] | 384 | 2021-02-24T01:40:40.000Z | 2022-03-30T10:30:59.000Z | nautobot/dcim/migrations/0004_initial_part_4.py | psmware-ltd/nautobot | ac516287fb8edcc3482bd011839de837c6bbf0df | [
"Apache-2.0"
] | 1,067 | 2021-02-24T00:58:08.000Z | 2022-03-31T23:38:23.000Z | nautobot/dcim/migrations/0004_initial_part_4.py | psmware-ltd/nautobot | ac516287fb8edcc3482bd011839de837c6bbf0df | [
"Apache-2.0"
] | 128 | 2021-02-24T02:45:16.000Z | 2022-03-20T18:48:36.000Z | # Generated by Django 3.1.7 on 2021-04-01 06:35
from django.db import migrations, models
import django.db.models.deletion
import nautobot.extras.models.statuses
import nautobot.extras.utils
import taggit.managers
class Migration(migrations.Migration):
initial = True
dependencies = [
("contenttypes", "0002_remove_content_type_name"),
("tenancy", "0001_initial"),
("virtualization", "0001_initial"),
("dcim", "0003_initial_part_3"),
("extras", "0002_initial_part_2"),
("ipam", "0001_initial_part_1"),
]
operations = [
migrations.AddField(
model_name="device",
name="cluster",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="devices",
to="virtualization.cluster",
),
),
migrations.AddField(
model_name="device",
name="device_role",
field=models.ForeignKey(
on_delete=django.db.models.deletion.PROTECT, related_name="devices", to="dcim.devicerole"
),
),
migrations.AddField(
model_name="device",
name="device_type",
field=models.ForeignKey(
on_delete=django.db.models.deletion.PROTECT, related_name="instances", to="dcim.devicetype"
),
),
migrations.AddField(
model_name="device",
name="local_context_data_owner_content_type",
field=models.ForeignKey(
blank=True,
default=None,
limit_choices_to=nautobot.extras.utils.FeatureQuery("config_context_owners"),
null=True,
on_delete=django.db.models.deletion.CASCADE,
to="contenttypes.contenttype",
),
),
migrations.AddField(
model_name="device",
name="platform",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="devices",
to="dcim.platform",
),
),
migrations.AddField(
model_name="device",
name="primary_ip4",
field=models.OneToOneField(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="primary_ip4_for",
to="ipam.ipaddress",
),
),
migrations.AddField(
model_name="device",
name="primary_ip6",
field=models.OneToOneField(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="primary_ip6_for",
to="ipam.ipaddress",
),
),
migrations.AddField(
model_name="device",
name="rack",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="devices",
to="dcim.rack",
),
),
migrations.AddField(
model_name="device",
name="site",
field=models.ForeignKey(
on_delete=django.db.models.deletion.PROTECT, related_name="devices", to="dcim.site"
),
),
migrations.AddField(
model_name="device",
name="status",
field=nautobot.extras.models.statuses.StatusField(
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="dcim_device_related",
to="extras.status",
),
),
migrations.AddField(
model_name="device",
name="tags",
field=taggit.managers.TaggableManager(through="extras.TaggedItem", to="extras.Tag"),
),
migrations.AddField(
model_name="device",
name="tenant",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="devices",
to="tenancy.tenant",
),
),
migrations.AddField(
model_name="device",
name="virtual_chassis",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="members",
to="dcim.virtualchassis",
),
),
migrations.AddField(
model_name="consoleserverporttemplate",
name="device_type",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="consoleserverporttemplates",
to="dcim.devicetype",
),
),
migrations.AddField(
model_name="consoleserverport",
name="_cable_peer_type",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="+",
to="contenttypes.contenttype",
),
),
migrations.AddField(
model_name="consoleserverport",
name="_path",
field=models.ForeignKey(
blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to="dcim.cablepath"
),
),
migrations.AddField(
model_name="consoleserverport",
name="cable",
field=models.ForeignKey(
blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="+", to="dcim.cable"
),
),
migrations.AddField(
model_name="consoleserverport",
name="device",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, related_name="consoleserverports", to="dcim.device"
),
),
migrations.AddField(
model_name="consoleserverport",
name="tags",
field=taggit.managers.TaggableManager(through="extras.TaggedItem", to="extras.Tag"),
),
migrations.AddField(
model_name="consoleporttemplate",
name="device_type",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, related_name="consoleporttemplates", to="dcim.devicetype"
),
),
migrations.AddField(
model_name="consoleport",
name="_cable_peer_type",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="+",
to="contenttypes.contenttype",
),
),
migrations.AddField(
model_name="consoleport",
name="_path",
field=models.ForeignKey(
blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to="dcim.cablepath"
),
),
migrations.AddField(
model_name="consoleport",
name="cable",
field=models.ForeignKey(
blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="+", to="dcim.cable"
),
),
migrations.AddField(
model_name="consoleport",
name="device",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, related_name="consoleports", to="dcim.device"
),
),
migrations.AddField(
model_name="consoleport",
name="tags",
field=taggit.managers.TaggableManager(through="extras.TaggedItem", to="extras.Tag"),
),
migrations.AddField(
model_name="cablepath",
name="destination_type",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="+",
to="contenttypes.contenttype",
),
),
migrations.AddField(
model_name="cablepath",
name="origin_type",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, related_name="+", to="contenttypes.contenttype"
),
),
migrations.AddField(
model_name="cable",
name="_termination_a_device",
field=models.ForeignKey(
blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name="+", to="dcim.device"
),
),
migrations.AddField(
model_name="cable",
name="_termination_b_device",
field=models.ForeignKey(
blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name="+", to="dcim.device"
),
),
migrations.AddField(
model_name="cable",
name="status",
field=nautobot.extras.models.statuses.StatusField(
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="dcim_cable_related",
to="extras.status",
),
),
migrations.AddField(
model_name="cable",
name="tags",
field=taggit.managers.TaggableManager(through="extras.TaggedItem", to="extras.Tag"),
),
migrations.AddField(
model_name="cable",
name="termination_a_type",
field=models.ForeignKey(
limit_choices_to=models.Q(
models.Q(
models.Q(("app_label", "circuits"), ("model__in", ("circuittermination",))),
models.Q(
("app_label", "dcim"),
(
"model__in",
(
"consoleport",
"consoleserverport",
"frontport",
"interface",
"powerfeed",
"poweroutlet",
"powerport",
"rearport",
),
),
),
_connector="OR",
)
),
on_delete=django.db.models.deletion.PROTECT,
related_name="+",
to="contenttypes.contenttype",
),
),
migrations.AddField(
model_name="cable",
name="termination_b_type",
field=models.ForeignKey(
limit_choices_to=models.Q(
models.Q(
models.Q(("app_label", "circuits"), ("model__in", ("circuittermination",))),
models.Q(
("app_label", "dcim"),
(
"model__in",
(
"consoleport",
"consoleserverport",
"frontport",
"interface",
"powerfeed",
"poweroutlet",
"powerport",
"rearport",
),
),
),
_connector="OR",
)
),
on_delete=django.db.models.deletion.PROTECT,
related_name="+",
to="contenttypes.contenttype",
),
),
migrations.AlterUniqueTogether(
name="rearporttemplate",
unique_together={("device_type", "name")},
),
migrations.AlterUniqueTogether(
name="rearport",
unique_together={("device", "name")},
),
migrations.AlterUniqueTogether(
name="rackgroup",
unique_together={("site", "slug"), ("site", "name")},
),
migrations.AlterUniqueTogether(
name="rack",
unique_together={("group", "facility_id"), ("group", "name")},
),
migrations.AlterUniqueTogether(
name="powerporttemplate",
unique_together={("device_type", "name")},
),
migrations.AlterUniqueTogether(
name="powerport",
unique_together={("device", "name")},
),
migrations.AlterUniqueTogether(
name="powerpanel",
unique_together={("site", "name")},
),
migrations.AlterUniqueTogether(
name="poweroutlettemplate",
unique_together={("device_type", "name")},
),
migrations.AlterUniqueTogether(
name="poweroutlet",
unique_together={("device", "name")},
),
migrations.AlterUniqueTogether(
name="powerfeed",
unique_together={("power_panel", "name")},
),
migrations.AlterUniqueTogether(
name="inventoryitem",
unique_together={("device", "parent", "name")},
),
migrations.AlterUniqueTogether(
name="interfacetemplate",
unique_together={("device_type", "name")},
),
migrations.AlterUniqueTogether(
name="interface",
unique_together={("device", "name")},
),
migrations.AlterUniqueTogether(
name="frontporttemplate",
unique_together={("device_type", "name"), ("rear_port", "rear_port_position")},
),
migrations.AlterUniqueTogether(
name="frontport",
unique_together={("rear_port", "rear_port_position"), ("device", "name")},
),
migrations.AlterUniqueTogether(
name="devicetype",
unique_together={("manufacturer", "model"), ("manufacturer", "slug")},
),
migrations.AlterUniqueTogether(
name="devicebaytemplate",
unique_together={("device_type", "name")},
),
migrations.AlterUniqueTogether(
name="devicebay",
unique_together={("device", "name")},
),
migrations.AlterUniqueTogether(
name="device",
unique_together={
("rack", "position", "face"),
("virtual_chassis", "vc_position"),
("site", "tenant", "name"),
},
),
migrations.AlterUniqueTogether(
name="consoleserverporttemplate",
unique_together={("device_type", "name")},
),
migrations.AlterUniqueTogether(
name="consoleserverport",
unique_together={("device", "name")},
),
migrations.AlterUniqueTogether(
name="consoleporttemplate",
unique_together={("device_type", "name")},
),
migrations.AlterUniqueTogether(
name="consoleport",
unique_together={("device", "name")},
),
migrations.AlterUniqueTogether(
name="cablepath",
unique_together={("origin_type", "origin_id")},
),
migrations.AlterUniqueTogether(
name="cable",
unique_together={("termination_b_type", "termination_b_id"), ("termination_a_type", "termination_a_id")},
),
]
| 35.913907 | 118 | 0.489643 | 1,232 | 16,269 | 6.280032 | 0.124188 | 0.076774 | 0.0981 | 0.115161 | 0.774848 | 0.748352 | 0.705829 | 0.597131 | 0.52501 | 0.519193 | 0 | 0.004666 | 0.394001 | 16,269 | 452 | 119 | 35.993363 | 0.780099 | 0.002766 | 0 | 0.779775 | 1 | 0 | 0.168228 | 0.02435 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.011236 | 0 | 0.020225 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
bc499cc723099ff90cbc289c86ce1471407eaf1e | 135 | py | Python | back-end/python-scripts/calltest1.py | matteomitico/vtc-senior-project | 778b59537edae54fbcb284a862bad587469b96af | [
"MIT"
] | null | null | null | back-end/python-scripts/calltest1.py | matteomitico/vtc-senior-project | 778b59537edae54fbcb284a862bad587469b96af | [
"MIT"
] | null | null | null | back-end/python-scripts/calltest1.py | matteomitico/vtc-senior-project | 778b59537edae54fbcb284a862bad587469b96af | [
"MIT"
] | null | null | null | #!/usr/bin/python
import json
d = {"one": "1", "two": 2}
with open("text.txt", "w") as f:
    json.dump(d, f)
with open("text.txt") as f:
    d2 = json.load(f)
print(d2)
| 13.5 | 34 | 0.592593 | 25 | 135 | 3.2 | 0.72 | 0.2 | 0.275 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033898 | 0.125926 | 135 | 9 | 35 | 15 | 0.644068 | 0.118519 | 0 | 0 | 0 | 0 | 0.205128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.2 | null | null | 0.2 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
bc56bf3916b229c0e841d5b8c671d42fd63fe193 | 98 | py | Python | Modulo 2/lista07/11.py | BelfortJoao/Programacao-1 | 2d463744379ad3e4b0f5882ad923aae7ff80197a | [
"MIT"
] | 2 | 2021-08-17T14:02:13.000Z | 2021-08-19T02:37:28.000Z | Modulo 2/lista07/11.py | BelfortJoao/Programacao-1 | 2d463744379ad3e4b0f5882ad923aae7ff80197a | [
"MIT"
] | null | null | null | Modulo 2/lista07/11.py | BelfortJoao/Programacao-1 | 2d463744379ad3e4b0f5882ad923aae7ff80197a | [
"MIT"
] | 1 | 2021-09-05T20:18:45.000Z | 2021-09-05T20:18:45.000Z | x = [int(input()) for _ in range(15)]
for i in range(15):
    if x[i] < 0:
        x[i] = -1
print(x) | 19.6 | 37 | 0.5 | 22 | 98 | 2.227273 | 0.545455 | 0.285714 | 0.367347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084507 | 0.27551 | 98 | 5 | 38 | 19.6 | 0.605634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
bc5bff62093c53ba0517abdd41b4112f16cd86cb | 446 | py | Python | pyshinobicctvapi/users.py | xannor/yshinobicctvapi | 6f96ef103ea10e9cfe4897b690430ea5480d98ba | [
"Unlicense"
] | null | null | null | pyshinobicctvapi/users.py | xannor/yshinobicctvapi | 6f96ef103ea10e9cfe4897b690430ea5480d98ba | [
"Unlicense"
] | null | null | null | pyshinobicctvapi/users.py | xannor/yshinobicctvapi | 6f96ef103ea10e9cfe4897b690430ea5480d98ba | [
"Unlicense"
] | null | null | null | from typing import Optional
from .connection import Connection
class User:
"""
Shinobi User API
"""
    def __init__(self, user: Optional[dict] = None):
if user is None:
user = {}
self._user = user
# self._id = user['uid']
@property
def email(self) -> Optional[str]:
return self._user.get("mail")
@property
def group(self) -> Optional[str]:
return self._user.get("ke")
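As a usage sketch for the wrapper above (a standalone copy of the class for illustration only; the `"mail"`/`"ke"` keys come from the getters, the sample values are assumptions):

```python
from typing import Optional


class User:
    """Standalone copy of the Shinobi User wrapper above, for illustration."""

    def __init__(self, user: Optional[dict] = None):
        if user is None:
            user = {}
        self._user = user

    @property
    def email(self) -> Optional[str]:
        return self._user.get("mail")

    @property
    def group(self) -> Optional[str]:
        return self._user.get("ke")


# Missing keys simply fall through to None via dict.get().
user = User({"mail": "admin@example.com", "ke": "group1"})
empty = User()
```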
| 17.84 | 42 | 0.56278 | 53 | 446 | 4.584906 | 0.490566 | 0.131687 | 0.123457 | 0.17284 | 0.263374 | 0.263374 | 0.263374 | 0 | 0 | 0 | 0 | 0 | 0.313901 | 446 | 24 | 43 | 18.583333 | 0.794118 | 0.089686 | 0 | 0.153846 | 0 | 0 | 0.015385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.153846 | 0.153846 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 4 |
bc86b7b6cdc30314dc42a3730dc0ee84eef82337 | 161 | py | Python | susi/__init__.py | FBlandfort/Subset-Simulation-Interpolation | e0c1433d6769c8a8c41943ed824007c2511a0c2e | [
"MIT"
] | 5 | 2021-04-26T07:42:04.000Z | 2022-03-20T02:59:00.000Z | susi/__init__.py | FBlandfort/Subset-Simulation-Interpolation | e0c1433d6769c8a8c41943ed824007c2511a0c2e | [
"MIT"
] | null | null | null | susi/__init__.py | FBlandfort/Subset-Simulation-Interpolation | e0c1433d6769c8a8c41943ed824007c2511a0c2e | [
"MIT"
] | 3 | 2020-12-28T12:00:37.000Z | 2022-03-31T13:36:12.000Z |
from . import main
from . import props
from . import sampling
from . import examples
'''
import susi
susi.props.attr(name="test",rtype="ln",mx=1,vx=2)
'''
| 10.733333 | 49 | 0.677019 | 25 | 161 | 4.36 | 0.64 | 0.366972 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014925 | 0.167702 | 161 | 14 | 50 | 11.5 | 0.798507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
bca4c712774eb00f205508ccc01f8de535821d0c | 54 | py | Python | democracy/plugins/__init__.py | tomijarvi/kerrokantasi | c36cedf44739a95dce14e141d2b2960688ad4eeb | [
"MIT"
] | null | null | null | democracy/plugins/__init__.py | tomijarvi/kerrokantasi | c36cedf44739a95dce14e141d2b2960688ad4eeb | [
"MIT"
] | 4 | 2021-09-01T09:49:04.000Z | 2022-03-12T01:09:43.000Z | democracy/plugins/__init__.py | tomijarvi/kerrokantasi | c36cedf44739a95dce14e141d2b2960688ad4eeb | [
"MIT"
] | 1 | 2020-01-31T07:41:04.000Z | 2020-01-31T07:41:04.000Z | from ._base import Plugin, get_implementation # noqa
| 27 | 53 | 0.796296 | 7 | 54 | 5.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 54 | 1 | 54 | 54 | 0.891304 | 0.074074 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
bcefd0d452bf0f7c234174f18ab420066009dc51 | 47 | py | Python | tests/fixes/issue_41/test_module/__init__.py | allrod5/injectable | 74e640f0911480fb06fa97c1a468c3863541c0fd | [
"MIT"
] | 71 | 2018-02-05T04:12:27.000Z | 2022-02-15T23:08:16.000Z | tests/fixes/issue_41/test_module/__init__.py | Euraxluo/injectable | 74e640f0911480fb06fa97c1a468c3863541c0fd | [
"MIT"
] | 104 | 2018-02-06T23:37:36.000Z | 2021-08-25T04:50:15.000Z | tests/fixes/issue_41/test_module/__init__.py | Euraxluo/injectable | 74e640f0911480fb06fa97c1a468c3863541c0fd | [
"MIT"
] | 13 | 2019-02-10T18:52:50.000Z | 2022-01-26T17:12:35.000Z | from .foo_module import Foo
__all__ = ["Foo"]
| 11.75 | 27 | 0.702128 | 7 | 47 | 4 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 3 | 28 | 15.666667 | 0.717949 | 0 | 0 | 0 | 0 | 0 | 0.06383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
4c20574ce10e9613503077617f154c4759ac156e | 93 | py | Python | Source/Backend/core/supplier/admin.py | csgn/Price-Tracker | 67507b85dac9b0b9c8c975342678ebec3be2b7c0 | [
"MIT"
] | null | null | null | Source/Backend/core/supplier/admin.py | csgn/Price-Tracker | 67507b85dac9b0b9c8c975342678ebec3be2b7c0 | [
"MIT"
] | 4 | 2021-12-12T10:31:23.000Z | 2021-12-12T10:33:46.000Z | Source/Backend/core/supplier/admin.py | csgn/Price-Tracker | 67507b85dac9b0b9c8c975342678ebec3be2b7c0 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Supplier
admin.site.register(Supplier)
| 18.6 | 32 | 0.827957 | 13 | 93 | 5.923077 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107527 | 93 | 4 | 33 | 23.25 | 0.927711 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
4c2707be3aa8f78393f5c7be07c3d468223a4f77 | 131 | py | Python | src/analytics/admin.py | atharvajakkanwar/webapp-ecommerce-python | 54e5cdae1a7f7398a30f64bceb0b550584e84e6b | [
"MIT"
] | 1 | 2019-05-30T01:48:54.000Z | 2019-05-30T01:48:54.000Z | src/analytics/admin.py | atharvajakkanwar/webapp-ecommerce-python | 54e5cdae1a7f7398a30f64bceb0b550584e84e6b | [
"MIT"
] | 11 | 2020-06-05T17:45:00.000Z | 2022-03-12T00:34:17.000Z | src/analytics/admin.py | sairam95/onestopshop | c3aa6fa0bdc11e85ef54b85c0ded8806ed5b8422 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import ObjectViewed
# Register your models here.
admin.site.register(ObjectViewed)
| 18.714286 | 33 | 0.816794 | 17 | 131 | 6.294118 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122137 | 131 | 6 | 34 | 21.833333 | 0.930435 | 0.198473 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
4c38003e527d6cf68366fb4e71ce12aff759f403 | 3,511 | py | Python | tests/tools/test_listtostring.py | eric-s-s/share-with-z | 60a4c9788085ba55c6214114447e1c16bc49f7ce | [
"MIT"
] | 5 | 2016-07-29T18:28:04.000Z | 2019-06-17T19:49:11.000Z | tests/tools/test_listtostring.py | eric-s-s/dice-tables | 60a4c9788085ba55c6214114447e1c16bc49f7ce | [
"MIT"
] | 16 | 2015-11-12T02:13:30.000Z | 2020-12-05T21:35:00.000Z | tests/tools/test_listtostring.py | eric-s-s/dice-tables | 60a4c9788085ba55c6214114447e1c16bc49f7ce | [
"MIT"
] | null | null | null | from __future__ import absolute_import
import unittest
import dicetables.tools.listtostring as l2s
class TestListToString(unittest.TestCase):
def test_format_for_sequence_string_adds_commas(self):
self.assertEqual(l2s.format_for_sequence_str(10), '10')
self.assertEqual(l2s.format_for_sequence_str(1000000), '1,000,000')
def test_format_for_sequence_string_negative_numbers(self):
self.assertEqual(l2s.format_for_sequence_str(-2), '(-2)')
self.assertEqual(l2s.format_for_sequence_str(-2000), '(-2,000)')
def test_format_one_sequence_single_number(self):
self.assertEqual(l2s.format_one_sequence([1]), '1')
def test_format_one_sequence_first_and_last_same_number(self):
self.assertEqual(l2s.format_one_sequence([1, 2, 3, 1]), '1')
def test_format_one_sequence_first_and_last_different(self):
self.assertEqual(l2s.format_one_sequence([-1, 3, 2]), '(-1)-2')
def test_gap_is_larger_than_one_true(self):
self.assertTrue(l2s.gap_is_larger_than_one([[1, 2, 3], [4, 5]], 7))
def test_gap_is_larger_than_one_false(self):
self.assertFalse(l2s.gap_is_larger_than_one([[1, 2, 3], [4, 5]], 5))
self.assertFalse(l2s.gap_is_larger_than_one([[1, 2, 3], [4, 5]], 6))
def test_split_at_gaps_larger_than_one_empty_list(self):
self.assertEqual(l2s.split_at_gaps_larger_than_one([]), [])
def test_split_at_gaps_larger_than_one_one_element(self):
self.assertEqual(l2s.split_at_gaps_larger_than_one([1]), [[1]])
def test_split_at_gaps_larger_than_one_same_element(self):
self.assertEqual(l2s.split_at_gaps_larger_than_one([1, 1, 1, 1]), [[1, 1, 1, 1]])
def test_split_at_gaps_larger_than_one_not_large_gaps(self):
self.assertEqual(l2s.split_at_gaps_larger_than_one([1, 1, 2, 2, 3]), [[1, 1, 2, 2, 3]])
def test_split_at_gaps_larger_than_one_all_large_gaps(self):
self.assertEqual(l2s.split_at_gaps_larger_than_one([1, 3, 9]), [[1], [3], [9]])
def test_split_at_gaps_larger_than_one_mixed_case(self):
self.assertEqual(l2s.split_at_gaps_larger_than_one([1, 1, 2, 5, 6, 9]), [[1, 1, 2], [5, 6], [9]])
def test_split_at_gaps_larger_than_one_mixed_case_negative_pos_zero(self):
self.assertEqual(l2s.split_at_gaps_larger_than_one([-2, 0, 1, 2, 5, 6, 9]), [[-2], [0, 1, 2], [5, 6], [9]])
def test_get_string_from_list_returns_single_number(self):
self.assertEqual(l2s.get_string_from_list_of_ints([1, 1, 1]), '1')
def test_get_string_from_list_puts_parentheses_around_negative_numbers(self):
self.assertEqual(l2s.get_string_from_list_of_ints([-1]), '(-1)')
def test_get_string_from_list_returns_proper_output_for_a_run_of_numbers(self):
self.assertEqual(l2s.get_string_from_list_of_ints([1, 1, 2, 3, 4, 5]), '1-5')
def test_get_string_from_list_returns_values_separated_by_commas(self):
the_list = [-5, -4, -3, -1, 0, 1, 2, 3, 5]
self.assertEqual(l2s.get_string_from_list_of_ints(the_list), '(-5)-(-3), (-1)-3, 5')
def test_get_string_from_list_returns_numbers_with_commas(self):
the_list = [-1234567, 1234567]
self.assertEqual(l2s.get_string_from_list_of_ints(the_list), '(-1,234,567), 1,234,567')
def test_get_string_from_list_returns_number_out_of_order(self):
the_list = [5, 3, 7, 1, 3, 6, 8, 9, 10, 2]
self.assertEqual(l2s.get_string_from_list_of_ints(the_list), '1-3, 5-10')
if __name__ == '__main__':
unittest.main()
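The tests above pin down the behavior of `split_at_gaps_larger_than_one` precisely; for readers without `dicetables` installed, a hypothetical standalone reimplementation matching the tested behavior (not the library's actual code) could look like:

```python
def split_at_gaps_larger_than_one(values):
    """Split a sorted list of ints into runs wherever adjacent values differ by more than one."""
    groups = []
    for value in values:
        if groups and value - groups[-1][-1] <= 1:
            groups[-1].append(value)  # continue the current run (duplicates and +1 steps)
        else:
            groups.append([value])  # first value, or a gap larger than one: start a new run
    return groups
```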
| 45.012821 | 115 | 0.714326 | 583 | 3,511 | 3.833619 | 0.15952 | 0.06264 | 0.161074 | 0.147651 | 0.794183 | 0.778523 | 0.731544 | 0.634452 | 0.486353 | 0.445638 | 0 | 0.068758 | 0.146682 | 3,511 | 77 | 116 | 45.597403 | 0.677236 | 0 | 0 | 0 | 0 | 0 | 0.028197 | 0 | 0 | 0 | 0 | 0 | 0.442308 | 1 | 0.384615 | false | 0 | 0.057692 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
4c4efd7abc23ada191f590168c06fdec129ec0ac | 1,664 | py | Python | hail/python/hail/fs/fs.py | MariusDanner/hail | 5ca0305f8243b5888931b1afaa1fbfb617dee097 | [
"MIT"
] | null | null | null | hail/python/hail/fs/fs.py | MariusDanner/hail | 5ca0305f8243b5888931b1afaa1fbfb617dee097 | [
"MIT"
] | null | null | null | hail/python/hail/fs/fs.py | MariusDanner/hail | 5ca0305f8243b5888931b1afaa1fbfb617dee097 | [
"MIT"
] | null | null | null | import abc
import sys
import os
from typing import Dict, List
from hail.utils.java import Env, info
from hail.utils import local_path_uri
class FS(abc.ABC):
@abc.abstractmethod
def open(self, path: str, mode: str = 'r', buffer_size: int = 8192):
pass
@abc.abstractmethod
def copy(self, src: str, dest: str):
pass
@abc.abstractmethod
def exists(self, path: str) -> bool:
pass
@abc.abstractmethod
def is_file(self, path: str) -> bool:
pass
@abc.abstractmethod
def is_dir(self, path: str) -> bool:
pass
@abc.abstractmethod
def stat(self, path: str) -> Dict:
pass
@abc.abstractmethod
def ls(self, path: str) -> List[Dict]:
pass
@abc.abstractmethod
def mkdir(self, path: str):
"""Ensure files can be created whose dirname is `path`.
Warning
-------
On file systems without a notion of directories, this function will do nothing. For example,
on Google Cloud Storage, this operation does nothing.
"""
pass
@abc.abstractmethod
def remove(self, path: str):
pass
@abc.abstractmethod
def rmtree(self, path: str):
pass
def copy_log(self, path: str) -> None:
log = Env.hc()._log
try:
if self.is_dir(path):
_, tail = os.path.split(log)
path = os.path.join(path, tail)
info(f"copying log to {repr(path)}...")
self.copy(local_path_uri(Env.hc()._log), path)
except Exception as e:
sys.stderr.write(f'Could not copy log: encountered error:\n {e}')
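The abstract base class above defines the filesystem contract; as an illustration, a hypothetical standalone local-filesystem implementation of the same method set might look like the sketch below (the `LocalFS` name and the `stat` dict keys are assumptions, not Hail's actual API):

```python
import os
import shutil
from typing import Dict, List


class LocalFS:
    """Hypothetical local-disk implementation of the FS method set, for illustration."""

    def open(self, path: str, mode: str = 'r', buffer_size: int = 8192):
        return open(path, mode, buffering=buffer_size)

    def copy(self, src: str, dest: str):
        shutil.copy(src, dest)

    def exists(self, path: str) -> bool:
        return os.path.exists(path)

    def is_file(self, path: str) -> bool:
        return os.path.isfile(path)

    def is_dir(self, path: str) -> bool:
        return os.path.isdir(path)

    def stat(self, path: str) -> Dict:
        st = os.stat(path)
        return {'path': path, 'size_bytes': st.st_size, 'is_dir': os.path.isdir(path)}

    def ls(self, path: str) -> List[Dict]:
        return [self.stat(os.path.join(path, name)) for name in os.listdir(path)]

    def mkdir(self, path: str):
        # Unlike object stores, a local filesystem has real directories to create.
        os.makedirs(path, exist_ok=True)

    def remove(self, path: str):
        os.remove(path)

    def rmtree(self, path: str):
        shutil.rmtree(path)
```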
| 23.771429 | 100 | 0.584736 | 219 | 1,664 | 4.388128 | 0.424658 | 0.176899 | 0.208117 | 0.224766 | 0.240375 | 0.125911 | 0.125911 | 0.125911 | 0.085328 | 0 | 0 | 0.003457 | 0.304688 | 1,664 | 69 | 101 | 24.115942 | 0.827139 | 0.130409 | 0 | 0.425532 | 0 | 0 | 0.054325 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.234043 | false | 0.212766 | 0.12766 | 0 | 0.382979 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 4 |
4c4f110c0ae6ee0ab0bc260ba3837f5037c2041f | 26,908 | py | Python | icon_file.py | InternalCode/SQLAutoBackupGUI | e8bdb34746fdcd84c2301a57f47ad5e7d80837ef | [
"MIT"
] | null | null | null | icon_file.py | InternalCode/SQLAutoBackupGUI | e8bdb34746fdcd84c2301a57f47ad5e7d80837ef | [
"MIT"
] | null | null | null | icon_file.py | InternalCode/SQLAutoBackupGUI | e8bdb34746fdcd84c2301a57f47ad5e7d80837ef | [
"MIT"
] | null | null | null | ico = '''R0lGODlhlgCWAOf/AAYMDwEQEAITGA0TDwQWHwoWFgkXGxEVFwYaJwQdLgEhNxQdIxgeHx4c
IRsfHBEhLAAlQBwhGQomPQUpSg4nRBInOQAsUiMmHiMpLSYpJikoLB4rOAgwVwI1YDIrMgA6
ais0PzQyNzI1LTkyODI1MgA/djA2Og9AeAZEggJLjjpBSD5COj5CQkRBR0dBQABRmipLSgBW
pRRXp0tOSVJMTwdbsklPUk9NWFVJdgBgvU5XQgBlyABr1VhaY1lbXBhmy1FeW4BaZn5ZfXhd
fm9hfRJ23n1edF9qXHpiW2ZneXZjdG1oXltsdGhpanlkamBsanFna2tpcmdvUnpslml1c3Bz
c3ZxeIdta4BuhYpseo9sa4VveSSE54Bxe2p3hnN0i4NxdHh2Zn10bH1yhoVybXZ1hG54f3J5
aHV5Xnx1dHd3b2h9ZYR1XmR9dXB6b2V8gYpwioF2ZXp7WnB8eXd6en14f3CGYIp/Z5Z6eoR/
hB+U/4N/ineEgH6CgjKR75R8h5J+eo99lYOFZTOR/op/lI6BcIKCk3yGe49/iW2KhJF/goKG
a4SEfIuCeoaEdXyFjnGKfHSKc3+HdYqDgm6LjziW7iab/4WIiDaX/IqHjn+LiCyh/jud/Eia
+5aLlYeSio6Qin6WhJmPfZeNoY2Tg36XkoOVnJCUfo+Vdz6j+4yTnJmRhJ2PjY6TlXWblJeR
jnaanUqg+oeWlpKSnpWSloeZeqmNnoWcckWl956TnZaXko6akr6Praedipyii42mlaKgkqid
o5qjkqedq6udnY2moYinq5qiq4CqrZakpKWgnaOgpZGlrJ2ipJmknKCinaGhrISrpJSynoq2
qqysrLqorbipuYy3uY+3sayxmaixqrSsuressaWytZy1s7OvpqK0qq2wupq2uqi0orO0tLe6
pLy9yMa7wJ7HxsC+tbjBsM25wr6+v6vExrXCurXCxaXIvLTS1KvV07XTyczN0dXL09DOzMXS
zMjSxcLT1dHQx9nM4L7Yx9fY3dnZ1v///yH5BAEKAP8ALAAAAACWAJYAAAj+AKEssXKkShkx
RNwsMfNGzBk6T6y0obNkUZclTRydYfNEzBwnbuiUKXNoTpUqkswcktTlEJkvkyLh4aNp0p5J
eS4dMgTp0CVNoJC5AgKjqNGjSGEAQVbs0aM8iyTtYTSnS6Y6cyYBisRozKM5abBkQRPGSdU5
ZqRUcXNGzRkjTaqY+UJGTSQzUKpEcXKmSyMxbtJY2VKljpUmTM4sMuPFzRgrStoMCWMHShIq
c4hUQTOlShYjacSIeZQxTBRIVr4IEpOmjhu0UZrkSaOkz6E+XySRouRmkpcpjEDhZMSK1J5H
xZi5IhEBgPPn0J8PuNACljFdqzKB6gOoCiNFfU7+MepDJ1IgRnkAzZkD5ohJMJKSmCHl8U6X
OmPooFmCRU0dNF+IcQUOWMyxBSRVzOEFI2WkkYUacomhRhRKeMEHGU1AwUQXSTDhhhJWkFHQ
E0+04QYVHSVhRRpxHNLFGXXsQQcVtBWWhBFedMGSGJKgB8gnnzgSSSSEPDLJInlokkkmn+hy
zRMhNBBAdFQ6F4ELlCBjiiZ5/IGIJpJMoqSLcyjCyCGhlDeHY3k4UsYbkuRRhSMSenHEHm5F
cQYVQtBpyCJJHAHIREfYhxARZuxliBpdhKLEElF04UYUVjRChxhJuCEGFGI8wUcdSkDhSBp5
uNGFFWUcEckcbeRxiBv+B51Rhmt5LHhVHY2k4UgfjZDiCCOAnLKHcEBdAqQr1/ggggMDVBld
AA6QkMg1xZDCRyCXTNIHI1w6RQojopiRCSlnvGHFJGrkQUceS6xBRCKNxJHEFnV0QUQbdRg0
hhpWzKFGE06c1EQYcliRRBxigEhFEj084Z4jWUTRhhdP1CEGHmJYcYYZZITBoRegphHFIlhQ
oQQQRIQxhmJzMBKFGmmQgYoZiDjyCSGaMPJII1No8sorjoDyySqfLKmdLMZcYwMGBzj7LAMk
+GCMMayA4gkfn0yiSCh5dK3JJegx8gohhzCSSRh8RPIHHV3A8WAeZomxBB9UwFGHF2ks4oT+
E46s0UUba0IxxxVLULHEV2s0AZwbYLjhRg89ZJLGEYzwEQYdSehpBBRPwLGGI2bsscYRMy5B
RiJ9dNEFGGOQ0gYjjexRxyN1IMKIJJB8EgojiIDSyCS5IPKJNEUT4skszCAzgwZNTwlAAM5P
Cf3zDJgwwzSt/OIJl5MQkskepExSCiHDNsIIKYfcNIkkh1i+RS1o9EXFFF2YYYURRLh4CRVU
vEHEGoxiBOfGMIc6oGYOUICDHOhQiDWUwQxUOMIZmrAwK5SsC3R4yCTS4IUliIYMkXCEGxyh
BizcZw502EMk8sW+lcSuC2WYhC4awQojZcIToDiFJ2ihi1aEghD+rwBFK5pxDFewIAMHmJ70
ogedA2iABkyhBS2W5AlDCIcVf8DJHiDBu08UIhOTeITs+PCIRKykfmY4SBrUAAo1sGoJVSlD
fvhAhy0koSRVWEISmuAQM+DlDGD4QqKUEAU+JEENaniDCNFwBAqSgRFxoEMTEGaGwECBEXRw
g1ve0AdFmIF9ZNggKIZFiDA64ic4nAUrkmELUOwCFEKU4i7EQYkWMKAAS4SeLqf3vAJgwAaw
YEYrnKELUNBiF7vQxSdU8YpLtGIV2dIEIrRYB0P0YRJ86EMaUkiHOtQrEYHpVx8Y5QhHAGwN
aaBCI7AAokyOUDRmCYIY2mAFMFSBDVH+oAIT4PWEKjzBRYpAFCOCcIn6QeEWdCADHiIBCSzg
IRMaPEQmUKGJRhSiD0BhhSGA8gpP7OKGiPDoK1jhjFMkIxriSAQLbrnLlu4SAAVgQAtKgQ1V
5GIVxszFLlixC1HQIhekeIUmPoEIMGYNEI+ARCD6cIk2iGYLRNhDSFaIlTocYgtU6EIe3JiF
OTyBDF0YQhoUYQVIpcEJSiDRIZSwhTjwIQhQkNQc4mCFIMAoCVkAxBqj8MA2pCEQcMjEG/6w
1UUw4qE5CYUm0gAJ7QACFKogRB4Q8QpRtGIXxxCGLGShi1GQohnOuAYQTHDL5+1SAAKAXmph
GtMWzOEa2ND+xS5UoQtn4PAVn9gFIUiRiUt4AhGKuAQr+iCJVSDiFuhxwzpHBocyYIEQGVtD
PSERhvUcQQheUAIW3tAGSVThDXU4wx62YAVI1GEJT8hCJZmQBiikwQ1N4ANIvrAEKPjRIHxo
RCfpkAadxQmTXmiEcBBxiO0hoguPMF4tajGKXbxCGbugxSt4KNtmLGMXw+jGNGZAWpimNgCo
DXGIA1CAmEYNHclYxjNoQYxZjGIWx1MFT1mBCGWAAhWvCCMrJvGH+oExE10YwyHGkIYuJMIM
fMibGsgg1isQxAta6I0X1NCHJJRBCUQoQxcKtgQ3rIEMfADEIcRAByvE4QxksMP+GSrFMSuU
TQ2ByQQj2qAGPlyiCmNIz3EUMYlL/IwVN6HFL3SRC1Ucs7bOoIUwpKGLZDgDGuDohjhuQFoB
GCC1Is60pQXAgBD04BrgIEYwnJELW0DjFbp4xi42u4tWwBgURpPoI0ChiT7kIRCnKMUeLlEH
wUykDrK53xnOkIZarCESZaCDEqqQhoeoYQiCowIE3eCExMQBM1hYQg/+cIY2FGILaSAEt7Jg
CCqAARJ0MAIZ3jDkU6hEE7cgBB9AsWNVmCITyfSELILRDF2wAhi0IOkzxtGMYDzDGc3ohjFo
gIEFGODSms70paF2A2+IgxrCiIajlUHqZwQCGq1QBiv+dEGLT7TCE5kgRve+hp40FDcSl1BD
IronBjaYgVRKEMMh5HAGU11BCx2kghii8L+EpKEKfMjCGSIhBo/ZoQrlgpkXMlYFL2TBCnmw
wi0WcQsySEITX9hDHkYRCuEw6RKg0IUoijlMVX6i0LoIRjCUQQtlCCMZyWgGMZ7RDHS4ggYa
eDjEIx7ihy/gl9HoRjKGoQxqKMPgztAGv1thC18o4xWtCJ8pJMqKQOyiJqH4Qyam4KpFmaGM
2/zCEayAnzjgCRFNsELT5yCGLkCCDEqAoEb8ZoYzgKQqTbiPGcQwJ5DNoQyAKAMfyKyJQ2jC
FJ+gRSk+MQtQGCITrVAFyln+kQuSA2MZzXjG3pehDG0o4xjU0EY3HE+NdpRiBhh4OOEl/nAN
2MAU4iAHOJ4xDGrwHRra8Axxdwy6IAy6IFsn11GgMAtFIwlGEhyXIAumAApfIU3tAwaxlwZj
cB9fUAfyFAl3VAZV0ASv8QVjUDfO9RHlpAaHkAbJ9kB9QAZpYBuD0QVzgAWQMAvRYArGgAyX
pQq0tQufcDypJgzBIAzPAAzOcAwHpwzJwHHPQA3EAGnfQA3oMAcqwADyN38i9nAYwAJz0A7a
IA7cQA3rp2LO4AzjwHcrplPLgGr+Fg2/03yKsFE6qCXRsAt9sAeMhWVNAG74sgYO0QV2sAWN
AAb+UVBOrMEIRqAGdmAGVcEVSdAFDmMFlsgHjBAKoPNJGFULmlAKyAALN3ADsIANyQAKrxCE
u0AKwCAMwkAL2jAKyiB+z6AN1DAO36AN4EAOtmCG1BAO6BAFJKCFg8eFlmYADMACT9AOu0gO
x1ANs0gN1FCFyjAMxaAMy9B9pAYMO/UKrwQIgdAKU2MMTQAEshANsFAKjXALeENkUXAHdWAH
T+AIIvE6kiAJ3QFnQ4BPIWIIVUAFTgAGZOAGsNMHtKcgiUAJQKcjoQCKrqADLEACNPAGyBCK
s/AKLtZ95BcMtqgMjacN4PcNZVgN3/AN5NAN4fAO13AEJuBwxciFhmf+Ak0gDu/gDuxADdww
DFVIDZGXC87Qf8GwDNDgDK3ACs2AW62AkbpQDNeQCDMQAiuAJaEoCZ+wB2ZQLyuDbEj2IU9A
BVkwCU/gBl/QBVAQBf5UBUMHK145Bm5QBZwyCXOgBJ7IB8dxCaAICzNAAg4ANS3wBMhwDcbw
DNWYYrPIeOUHfus3DOSQDoupDoxJDuzwDuLQBC25hcZ4jAtgAj6ADd6gDdygDd8gDdIYDtQQ
DOn3Dc7wDMuwDM7whq0ADQ6mCr8QDdhABS4wjBlgPZQQDZ9gCKpTUb9CB1FQCg2xN3TiBnfg
BnzgR1hnBSdyCZJgB02gTa6RBoIUI6yxTZn+YAqhaAPDeAAHwABf6AOu4A3HwAwq5pHE4JHL
QA7KUJKfyQ3kAJneAA7UwA7h0A7F4AOV+ZLzF5MzYAzowA7sYA4mWQ7koH7kMA7Q0A3HkA2O
ZwvOYAv8Fg3HcAyAOQMukAEMMAADwJeuoCXoA2RlgDFhwBi68ihwcAiAIAZNEAWnQAmTsCJG
QAdrIBdjQE+NEAX2FAZrEAagAAZ5sAvIUAouEALM4hwlpgEkcANvIA7G0A3Q8A3HsAy/6Azp
twzpEA7kII3uEA7doA4p2Q36CX8ORwBoilpouqZsiqYGcHgzYArokA/zYJPaQA7boA7gUIXD
sA3U0Azh8A3D8Az+fHcMztAN10AJN0ACGVAAzQIAA+AAIaACISoLv6BYTsAIYFA/UtAIRBAF
8uEIXXAJgIAafKAGYwAJGFEFawAFZakSVuAXe2AFe6AJmtAKQpGXSeocU3IAktoCPRAN6LB+
0iiN38ClkLkN2mCTBFqg7vAO8RAPb7A0C9Cm1nqtBPCFVCAP8/AO7EAO+sAOjFmG1DCfxWqa
5kcNx+ANxvAEHMYAzeMcHuoAJsACWbILsvMISiCCBpkGYNAWQUAFjWEGYBA/VqAnWeUGznkn
h/AILiEJuPIIlxWKT+kAB/CozxEA4akBLEADiSAO6BAO46ANdyqS4WCT5qAN7GCT+dD+Du8w
D+iADPz5AASAADZ7szabADi7swSQmT4grOgQD+9gDvMJmepgi/PJDcTQpd0wDBYHCz6wAhmw
q9FxAA1AAi7gCtHgCrPANnFpCtoECTRiBYygBG0ZB1VwCE5gbrOiBVSABmvQBst5CIXwCXxg
CIhwlzMgAgtwsRnbRE5kPVWADfDQDc1wktbgDpBpDs9qDu0QD/JwDcxwGD7QAhvwAAiQAJq7
uQmgAJrruZyruTaLASpAA1BgBcwgDvIwtO1wD/gADt+aDtWgrOMAD+BwDXxAAybQqBhLJb4q
AixQnrBwCJ8QYIXABg1CB4jgopHQCGEwBEzAFw9xImHQCGf+8Cp98Aih4AaI8AhvcJcuYAIN
UABOw6sxFQI2AAXC+g7hsKXsoA7cEJmQiw7RkAZNYAM2oAIgcLkK0L/+qwAQEMACLMD/278V
sAEmoAIq4APTyQztgA7dyg7LSqDq1w3ocA1UQGkWm0Tl66smoAOwUKRfUGZMxweFsBUH0Qh5
YQaL8CEiFAlpEAl0cAlJEF5JQpd5wJR/tyweCh1MBB0xpQEt4AOPgA4uW5PskA/5gA7ecAlN
0AMtAAL7uwEVIAEAHMATkMVavMVcPAEDLAEHvAEgoAI2wMCX0A754A426a3m8A7oYApPwAKk
1TTlKx0MkAErQAnXcG988BqQAAj+aqBzZ6WwUbAEaqAJZPAIJnJRadDIn/AIarAHh6AKQmED
HtAAzeI8z8KrzxFTuekDdCAOQtuy8SAOc9AD+WsCVFzFEkABW2wBHBDLstwBtNwBsizLFmAB
WkwBEgDGYpzAPhAFmiDKLssO8AAPmuADLIAB8Po8zlzHvkoC9ooMssA1WFAHktAHdPkIVpYE
VUAHh2AGYZA6YvAF5GE/XcAdc/AIsGAKNBAldFy+utRLMdVpM9AEpiAP6CAOjwAFPqC/VNzL
FODKEzDLtPwBCJ3QCr3QtIzLXszLvgzMTaAJj3vBUQAEIcDMuCQ9z0wljxoAkcqxwaQJjtAG
L9gIabv+B03QBE8QCioyiZIyKmkAB0rAPlKxC1C7UuRbx7ykSwdgeEI8kXNwvwBdxRQAAVkc
yweN0CXQ1CeAAlAd1VJ9AifQ1And0Bywy748xsH8iU3QAqR1aRvd0z7s0QXgAC1AiqFwc9cU
GHB5BI8QB2oACYUEiVbAX4jYXY9AXHtQCtFgBkfEwfLMq1MSYgTwphiQwCpgAvtr1EnNAR2Q
0CXw1FGdApZ92ZiN2VNdAlcdy1nMywc8xvmrAg3ncCJG1tCcAS1ABcYgC6qgBt2UK4ywBexx
BkdgBFRwBlWQBEpgB1YABWWQB14hCYhACq4gC9NwBIwaz1USPapl2Gj6AA/+sAHULdATAMtL
PdlQjdkv0N0vEAMxIAPgLd4x4N3dfdlQXdUIbcscoMsQ4MvULd1rOmIvpclVEqkmYAPRIA2K
tTt00AZCqgah4AUOZMhr4ImHQF90kE0H8giZUBO7cA2w4AIiwADSscmnBd01iwAP0LmtTAHY
zdSUbdndDd7hLQM1kOIqvuIpTt4y4N2Wnd6c3dC6DNEJgLk2m6b0nbFLBB0hPQPFcAwOywh7
ECZWEJB00IFzwARQcKKJwIeHgAXdVF6M8BNhpAmugA2UwALMw8lenuFquuGf28pIDdkITdUo
QOLfLQMonuI58OZvvgNyPudw/uYqHt7l/QIxjgL+nP0BtmwBAw3GoosA841aLmXfz3MAGeAC
b7DHmhAK1gII2TwGYsd6tQcJgDEbWQEFeRApzdYHeDBrB8gMT9ICXe7chA1iYS7m/YvFSv0B
I17ibO7mOTDncv4DO8ADur7rO4Drtm7nKQ7e5w3VM27LWQwBoKuzhE4A9M1LPhxTLPBpx+AJ
RMMIf+ArOsMW7KNNwacxgUAHgaBfacAH40ILe+AIsyAKu1AM3hANPkACDRDPLQUA0J2znQvA
123m2p0Csk7rc77rRcAFflAJBF/wBM8FXFAEvC7nwC7ew67ef+7F/Su6hf5hiA4A0XwDxlAM
syALrTAGcHALaAIHpGD+BY5QCm6gCXjACjfXgVvVCF/wKdqCdpKAebPQCseADrDwzs2sRM+j
4Zl7764cy2ee5vwe3jUQ572u61xQCamQCriAC08/9VQ/9VHvB1yg63SeAzXg8HrO537u2QEM
ujnO7IbuUuGZATOgCaYIhEsiCo2ACCu0QXtwB9YLBlYgCW4AQY2gBGEBChJ1JpnACh21C7OA
d9MwBy5w6mC+pkGP7wUd2U2d5mue9LWe6zzQ9FD/9JzQ+Z1vCZxgCZjQCZ7f+ZuQCpwg9ZWQ
8DzA8FzP5l8P9hGP7BSv4xbvPIq+AlRwDcdAWx4VRnCgE3jQB24ACfSCQWdQCGkwCXQwB1b+
AAehkAaHwFshRQqe8Aqy8ITPcA2ixeVjLWJoau/4HuIlQPngbflLXwR+IPWovwmhbwmijwl6
gAn0X//2X/pWv/o8gOt2/uJfP+MAwcHCBAgKEiRAQEChAIYBHAI4kGHEkmvgmnmiBQrUJESN
Mkk6lUfSo1dnxOw5c6jMFkCHGNXJlKlRoEmncr2atUrULF3Dinkr5sMEgwIAHDJUiADBQQUQ
Jljg0OHDCRQpXsSoUSPHjh08ilTClSrVJk6YzOpBi3ZQ2kFt0+rBBNfSXE6bYuGqxIUH1xxZ
Y7xIgaLEhw8CCRpEqJAAQwEBChRgQOKGsWvOmrVq9WnWI1B/NGn+eoSF0SErdQ5dsdPnURIy
eebwQRTqk6dPmUC1UqbL2ahgz45pQ6fJRgiijAUQUMrUKVSpKKq+kKGVq9dKYmOVtYRpbVvu
3b2r1bMdLiZLdVPhLbJ3R9+/KQIPNlzwYMLFDB8z0EAj0bVuzpwJa2WXXXQBJZNLEMnklNES
AaSMSPiIQo091MhjD9Mm6YOVSTLxBBRdcnnGP2XCAUccdKiwQQMGDEAKuaWaeoqDD0qoKoUY
otvqBx78wIUTH8mLKzzuhBTPu+2+G6+8Tc6rRL2+ZGhPsA86GEi+xOozYAEMWGgCHXG0aUYX
MT95xZNXWCHlkkY02aOPLtygo48w0HD+YwtDGiGlC0YaIWQUUC5pZRRachHTGWWUoSYcca5p
ggUNDDAgqReXk5FGq7CSjgcuwkqFk+yEBM+tt9I6C7xQhzQrO07E0mu9HG4ETMr4DFJKIUgZ
4HIad6CBBpgQmxnQNlA8uUQTQkaLZI8pyiCDjz3m+GQLRzgrY5I/SXnlFWdeWeaZZYj5hhpu
3JknKBZWTCoBBRSIccbncNxBRz9Sua6sIIkkdTzy5spuLrP+ZWvItczyUSw/nKwhSviqpDUh
AyLzARZm2hmGGm2WMXQWYY5RZBZEQGlkjy5CqSOSPFyzIo9FMnkEtEN2yaOOWUjRhRbdLKvm
m4q1Ieed4Gb+MGEB5NRVgIIJorL0Kul0rM7TUkFdS9+yPK3Lx6p/zK7UI1EtT6xKivhhvYRj
XdjKhBYwwQYz0GGnG2qoCWaYZmTZhdBPhr0ElE9EYeSRQ06uIw86DlFkjzz66MijVl7J5Rhd
llkmmd+WsVidd9pBp1EMHpj06OasgveHIuj9McioSSWv4Ov84KJ111v3oxOxSr83PNtTXTWV
Ivga+70OOJiAgoYxsGEJdNohpxttklFmnGBoceYZWnbpMI9JJolEkTMkOYT7MRzZg5EuGAwF
Q1ZaEVP6Zbi1mGdq1EnnnXiw8YGFDRKowKmopqoKq612GJ3V4nIkUinJR6zjQQL+FbjABcKu
U/ZSi6jkwom7pEdsMgDMCaYEPPk8wAQ0QEY73NENcGiDGsMYhjOCQYwzBQgzoDgEIhTBEj4w
ooZeuAQf/uAIQyjiQJ4AxiueIYxlQKMZy0AUNcjhjm+Qgx38kMcj7JcACUAAKu6y0dJG16mz
fIdgnYiFH9KzQK6U0Yw6WmAR/CBA070FEwXDxe7EFoP3FMYCEJBABTbAAj7Igx3bUMc2yPE2
bSgDGq2wzDMWBwpEPOIT1mOEIzIRuEnMARRm4EyFhqWLVswiF8p4hjaO8Y1vcIMc5DCHO9hx
jxOpYANFOxoWoVODeG0RgrZDS6o2sQkujJEHYeNLDoT+OcxhmlGBReCC1T7FlgmKBWzriZIG
gUcBCWxABZSQB/K20cRtcAOUuoCGLpoxM1X46WOeYEQW8vCGvx0CEhtphSFuQ4tZ0OIXRExh
Mk7YDCeS4xzmkJ84fODKKu6vBKDTio5m9ykJ6qFrvUxgGYWZFYrKwKLRoahW/odGTZGli6BK
3Xl2x55YbZCa1nyCOODhjnSEgxw8gwYpbeG4D3FSI6DQBCrGMBpKiIEOjwDEJVQhikPoohS7
eMUxoJELYiwDGIPkBjW+cQ4n5qMdfbDBBipgNBlRRWlb4UHT4jJAt7yxLmMM20RrAKUY/OUF
b4XrVdqKUWHGK4FFYKN40NL+tSbNMYNTsgA1QWADTcjjHexghzq+kY23dSNyYRLGK6jnCURc
Ig+XuIQZHIEgUogEEXmojSxWoQtlOKMXyvgG5cgBP3KEKx/x8MZANyABz1nKf13hUSqyc7oJ
cuJgEVUrlN7qHuIWl7hvbWtW6qpALjyQt7nkBC4ONkffAS+PXLpGPNhhDm64jRrKiJwyPMnJ
jGhkEpcQRSVJdohJkEIToviEIjzBilGgDxrOoIZquaGNbSSWHfRARxWyKgEKWEAqJ7gULUXX
I0/hi2Cp0AvCLDrcwDgHBSfAcIYtXCPkhi6Bzd2EJSLYljduAhdc4MpaS0olwfbADO1ALDm6
u4z+bzgDicHQhfQERYiN5AEUYyCEJsqwBzEwYiOskAUrstWKZyQDHI41YTrSkdht0GMeyWgC
QSnQ1f5lChchFrF4/sVL4JI0VoIpwWAIs2Y2p5kqz1Faprzio92CdFWxABtJZTWBaqogCuJ4
hzvgR0hrPOMZzTDkh3zxiUnM4g6aUAQkDkEHOJSCEZUFxS4Q0YogBqM3wVjGNkwYDmvcwx7x
mAeKQLBVA3/gXWDlUVlGDJdVjfGCUUIzYTrwOw702tdRkQph3nwpeN2Vi6YbhFk6wYm+5gCD
gQEsNU3gg1bMwxzsGOQ4xpGManRrGccAEChmEYi8ASIPXzjF9R6xh5b+kWIX6HNG46jRVHGd
UhvqQGw7upHl2XrOq7ctwnXqHLU36g64vQuMBjfY64FMwOEPj1Gvg22pBP/PKx4dsVlMzIUf
[... base64-encoded binary payload omitted ...]'''
| 73.720548 | 82 | 0.955255 | 806 | 26,908 | 31.890819 | 0.990074 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112079 | 0.027129 | 26,908 | 364 | 83 | 73.923077 | 0.869814 | 0 | 0 | 0 | 0 | 0 | 0.99951 | 0.985835 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
4c56f8b9e6b1feab8f0d24ca17b288d7aa8868b2 | 261 | py | Python | tests/test_components.py | ArjixWasTaken/domonic | b80433af8c3172b8a15cb62eeca4e6c5add8cb7e | [
"MIT"
] | null | null | null | tests/test_components.py | ArjixWasTaken/domonic | b80433af8c3172b8a15cb62eeca4e6c5add8cb7e | [
"MIT"
] | null | null | null | tests/test_components.py | ArjixWasTaken/domonic | b80433af8c3172b8a15cb62eeca4e6c5add8cb7e | [
"MIT"
] | null | null | null | """
test_components
~~~~~~~~~~~~~~~
tests for components
"""
import unittest
from domonic.html import *
from domonic.JSON import *
# class TestCase(unittest.TestCase):
# def test_json(self):
# if __name__ == '__main__':
# unittest.main()
| 13.736842 | 36 | 0.628352 | 28 | 261 | 5.5 | 0.607143 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214559 | 261 | 18 | 37 | 14.5 | 0.75122 | 0.601533 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
4c62d876b499701c6491761aaff0e2065a6b77de | 10,691 | py | Python | 2021/d15.py | MichaelSquires/AdventOfCode | e6ffbacc000f1479d9dbb8a902adb3e08464929b | [
"MIT"
] | null | null | null | 2021/d15.py | MichaelSquires/AdventOfCode | e6ffbacc000f1479d9dbb8a902adb3e08464929b | [
"MIT"
] | null | null | null | 2021/d15.py | MichaelSquires/AdventOfCode | e6ffbacc000f1479d9dbb8a902adb3e08464929b | [
"MIT"
] | null | null | null | '''
## --- Day 15: Chiton ---
You've almost reached the exit of the cave, but the walls are getting closer
together. Your submarine can barely still fit, though; the main problem is that
the walls of the cave are covered in chitons, and it would be best not to bump
any of them.
The cavern is large, but has a very low ceiling, restricting your motion to two
dimensions. The shape of the cavern resembles a square; a quick scan of chiton
density produces a map of risk level throughout the cave (your puzzle input).
For example:
```
1163751742
1381373672
2136511328
3694931569
7463417111
1319128137
1359912421
3125421639
1293138521
2311944581
```
You start in the top left position, your destination is the bottom right
position, and you cannot move diagonally. The number at each position is its
risk level; to determine the total risk of an entire path, add up the risk
levels of each position you enter (that is, don't count the risk level of your
starting position unless you enter it; leaving it adds no risk to your total).
Your goal is to find a path with the lowest total risk. In this example, a path
with the lowest total risk is highlighted here:
```
1163751742
1381373672
2136511328
3694931569
7463417111
1319128137
1359912421
3125421639
1293138521
2311944581
```
The total risk of this path is 40 (the starting position is never entered, so
its risk is not counted).
What is the lowest total risk of any path from the top left to the bottom right?
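As a concrete illustration of the scoring rule above, here is a minimal sketch (using a hypothetical 3x3 corner of the sample, not the full map — `path_risk` is an illustrative helper, not part of the solution below):

```
# Total risk of a path is the sum of the risk levels of every position
# entered, excluding the starting position.
grid = [list(map(int, row)) for row in "116\n138\n213".split()]

def path_risk(grid, path):
    """Sum risk along a path of (row, col) coords; the start square is free."""
    return sum(grid[r][c] for r, c in path[1:])

# Move right, right, down, down through the 3x3 corner above.
path = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
print(path_risk(grid, path))  # 1 + 6 + 8 + 3 = 18
```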
## --- Part Two ---
Now that you know how to find low-risk paths in the cave, you can try to find
your way out.
The entire cave is actually five times larger in both dimensions than you
thought; the area you originally scanned is just one tile in a 5x5 tile area
that forms the full map. Your original map tile repeats to the right and
downward; each time the tile repeats to the right or downward, all of its risk
levels are 1 higher than the tile immediately up or left of it. However, risk
levels above 9 wrap back around to 1. So, if your original map had some position
with a risk level of 8, then that same position on each of the 25 total tiles
would be as follows:
```
8 9 1 2 3
9 1 2 3 4
1 2 3 4 5
2 3 4 5 6
3 4 5 6 7
```
Each single digit above corresponds to the example position with a value of 8 on
the top-left tile. Because the full map is actually five times larger in both
dimensions, that position appears a total of 25 times, once in each duplicated
tile, with the values shown above.
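The wrap-around rule has a simple closed form: each tile step right or down adds 1, and values cycle through 1..9. A minimal sketch (the `tiled_risk` helper is hypothetical, for illustration only — the parsing code later in this file builds a lookup table instead):

```
def tiled_risk(base, tile_row, tile_col):
    """Risk at a position in tile (tile_row, tile_col), given the base risk.
    Each tile step right or down adds 1; values above 9 wrap back to 1."""
    return (base + tile_row + tile_col - 1) % 9 + 1

# Reproduces the 5x5 table above for a base risk of 8.
for tr in range(5):
    print(" ".join(str(tiled_risk(8, tr, tc)) for tc in range(5)))
```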
Here is the full five-times-as-large version of the first example above, with
the original map in the top left corner highlighted:
```
11637517422274862853338597396444961841755517295286
13813736722492484783351359589446246169155735727126
21365113283247622439435873354154698446526571955763
36949315694715142671582625378269373648937148475914
74634171118574528222968563933317967414442817852555
13191281372421239248353234135946434524615754563572
13599124212461123532357223464346833457545794456865
31254216394236532741534764385264587549637569865174
12931385212314249632342535174345364628545647573965
23119445813422155692453326671356443778246755488935
22748628533385973964449618417555172952866628316397
24924847833513595894462461691557357271266846838237
32476224394358733541546984465265719557637682166874
47151426715826253782693736489371484759148259586125
85745282229685639333179674144428178525553928963666
24212392483532341359464345246157545635726865674683
24611235323572234643468334575457944568656815567976
42365327415347643852645875496375698651748671976285
23142496323425351743453646285456475739656758684176
34221556924533266713564437782467554889357866599146
33859739644496184175551729528666283163977739427418
35135958944624616915573572712668468382377957949348
43587335415469844652657195576376821668748793277985
58262537826937364893714847591482595861259361697236
96856393331796741444281785255539289636664139174777
35323413594643452461575456357268656746837976785794
35722346434683345754579445686568155679767926678187
53476438526458754963756986517486719762859782187396
34253517434536462854564757396567586841767869795287
45332667135644377824675548893578665991468977611257
44961841755517295286662831639777394274188841538529
46246169155735727126684683823779579493488168151459
54698446526571955763768216687487932779859814388196
69373648937148475914825958612593616972361472718347
17967414442817852555392896366641391747775241285888
46434524615754563572686567468379767857948187896815
46833457545794456865681556797679266781878137789298
64587549637569865174867197628597821873961893298417
45364628545647573965675868417678697952878971816398
56443778246755488935786659914689776112579188722368
55172952866628316397773942741888415385299952649631
57357271266846838237795794934881681514599279262561
65719557637682166874879327798598143881961925499217
71484759148259586125936169723614727183472583829458
28178525553928963666413917477752412858886352396999
57545635726865674683797678579481878968159298917926
57944568656815567976792667818781377892989248891319
75698651748671976285978218739618932984172914319528
56475739656758684176786979528789718163989182927419
67554889357866599146897761125791887223681299833479
```
Equipped with the full map, you can now find a path from the top left corner to
the bottom right corner with the lowest total risk:
```
11637517422274862853338597396444961841755517295286
13813736722492484783351359589446246169155735727126
21365113283247622439435873354154698446526571955763
36949315694715142671582625378269373648937148475914
74634171118574528222968563933317967414442817852555
13191281372421239248353234135946434524615754563572
13599124212461123532357223464346833457545794456865
31254216394236532741534764385264587549637569865174
12931385212314249632342535174345364628545647573965
23119445813422155692453326671356443778246755488935
22748628533385973964449618417555172952866628316397
24924847833513595894462461691557357271266846838237
32476224394358733541546984465265719557637682166874
47151426715826253782693736489371484759148259586125
85745282229685639333179674144428178525553928963666
24212392483532341359464345246157545635726865674683
24611235323572234643468334575457944568656815567976
42365327415347643852645875496375698651748671976285
23142496323425351743453646285456475739656758684176
34221556924533266713564437782467554889357866599146
33859739644496184175551729528666283163977739427418
35135958944624616915573572712668468382377957949348
43587335415469844652657195576376821668748793277985
58262537826937364893714847591482595861259361697236
96856393331796741444281785255539289636664139174777
35323413594643452461575456357268656746837976785794
35722346434683345754579445686568155679767926678187
53476438526458754963756986517486719762859782187396
34253517434536462854564757396567586841767869795287
45332667135644377824675548893578665991468977611257
44961841755517295286662831639777394274188841538529
46246169155735727126684683823779579493488168151459
54698446526571955763768216687487932779859814388196
69373648937148475914825958612593616972361472718347
17967414442817852555392896366641391747775241285888
46434524615754563572686567468379767857948187896815
46833457545794456865681556797679266781878137789298
64587549637569865174867197628597821873961893298417
45364628545647573965675868417678697952878971816398
56443778246755488935786659914689776112579188722368
55172952866628316397773942741888415385299952649631
57357271266846838237795794934881681514599279262561
65719557637682166874879327798598143881961925499217
71484759148259586125936169723614727183472583829458
28178525553928963666413917477752412858886352396999
57545635726865674683797678579481878968159298917926
57944568656815567976792667818781377892989248891319
75698651748671976285978218739618932984172914319528
56475739656758684176786979528789718163989182927419
67554889357866599146897761125791887223681299833479
```
The total risk of this path is 315 (the starting position is still never
entered, so its risk is not counted).
Using the full map, what is the lowest total risk of any path from the top left
to the bottom right?
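For reference, the example answer of 40 for the small map can be reproduced with a compact heap-based Dijkstra, independent of the `Grid` class defined below (a sketch; `min_total_risk` is a hypothetical name, and the cost of entering a cell is that cell's risk level):

```
import heapq

SAMPLE_ROWS = """1163751742
1381373672
2136511328
3694931569
7463417111
1319128137
1359912421
3125421639
1293138521
2311944581""".splitlines()

def min_total_risk(rows):
    """Dijkstra over the grid; entering a cell costs that cell's risk."""
    grid = [[int(c) for c in row] for row in rows]
    h, w = len(grid), len(grid[0])
    dist = {(0, 0): 0}
    pq = [(0, 0, 0)]  # (risk so far, row, col)
    while pq:
        d, r, c = heapq.heappop(pq)
        if (r, c) == (h - 1, w - 1):
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, nr, nc))

print(min_total_risk(SAMPLE_ROWS))  # 40 for the sample above
```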
'''
import copy
import queue
import collections
import utils
import pyrust
SAMPLE = '''\
1163751742
1381373672
2136511328
3694931569
7463417111
1319128137
1359912421
3125421639
1293138521
2311944581
'''
INFINITY = 2**32
class Grid(utils.Grid):
def adjacent(self, x, y):
up = utils.up(x, y)
down = utils.down(x, y)
left = utils.left(x, y)
right = utils.right(x, y)
ret = []
for x, y in (up, down, left, right):
if x < 0 or x > self.width - 1:
continue
if y < 0 or y > self.height - 1:
continue
ret.append((x,y))
return ret
def cost(self, coords):
return sum([self[xy] for xy in coords])
def dijkstra(self):
return pyrust.dijkstra(self) # pylint: disable=no-member
def pydijkstra(self):
'''
https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
'''
source = (0, 0)
target = (self.height - 1, self.width - 1)
Q = queue.PriorityQueue()
dist = collections.defaultdict(lambda: INFINITY)
dist[source] = 0
prev = collections.defaultdict(lambda: None)
Q.put((0, source))
while not Q.empty():
_, u = Q.get()
for v in self.adjacent(*u):
alt = dist[u] + self[v]
if alt < dist[v]:
dist[v] = alt
prev[v] = u
Q.put((alt, v))
S = []
u = target
while u is not None:
S.insert(0, u)
u = prev[u]
return S
def lowest(dict_):
ret = ((0,0), INFINITY)
for xy, val in dict_.items():
_, m = ret
if val < m:
ret = (xy, val)
return ret[0]
def parse(data):
data1 = [list(map(int, k)) for k in data.splitlines()]
grid1 = Grid.init_with_data(data1)
data2 = [list(map(int, k)) for k in data.splitlines()]
data3 = copy.deepcopy(data2)
overflow = list(range(10))
overflow.extend(range(1, 10))
for row in range(len(data2)):
for ii in range(1, 5):
data3[row].extend([overflow[k+ii] for k in data2[row]])
for ii in range(1, 5):
for row in range(len(data2)):
data3.append([overflow[k+ii] for k in data3[row]])
grid2 = Grid.init_with_data(data3)
return grid1, grid2
def part1(data):
grid = data[0]
grid[0,0] = 0
path = grid.dijkstra()
return grid.cost(path)
def part2(data):
grid = data[1]
grid[0,0] = 0
path = grid.dijkstra()
return grid.cost(path)
| 32.495441 | 80 | 0.810214 | 999 | 10,691 | 8.661662 | 0.328328 | 0.008321 | 0.006934 | 0.010401 | 0.694095 | 0.684618 | 0.672599 | 0.660811 | 0.643707 | 0.636542 | 0 | 0.592101 | 0.147414 | 10,691 | 328 | 81 | 32.594512 | 0.357213 | 0.762323 | 0 | 0.133333 | 0 | 0 | 0.044639 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0.088889 | false | 0 | 0.055556 | 0.022222 | 0.244444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
4c683e41585302089a9f512ef478e01221d76fcc | 23 | py | Python | pykg2vec/_version.py | aungthu17593/pykg2vec | 007f860a81fbd43b468055cca3d5d6c1633f485a | [
"MIT"
] | null | null | null | pykg2vec/_version.py | aungthu17593/pykg2vec | 007f860a81fbd43b468055cca3d5d6c1633f485a | [
"MIT"
] | null | null | null | pykg2vec/_version.py | aungthu17593/pykg2vec | 007f860a81fbd43b468055cca3d5d6c1633f485a | [
"MIT"
] | 1 | 2021-07-21T14:40:55.000Z | 2021-07-21T14:40:55.000Z | __version__ = "0.0.48"
| 11.5 | 22 | 0.652174 | 4 | 23 | 2.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0.130435 | 23 | 1 | 23 | 23 | 0.35 | 0 | 0 | 0 | 0 | 0 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
4c72412b4fbba7f1f855b09f604ecebba0728915 | 27,201 | py | Python | MACS2/OptValidator.py | mr-c/MACS | 8ac898614aa1da24d811f315bd0d069f1d4fcd4e | [
"BSD-3-Clause"
] | 1 | 2020-03-16T02:31:57.000Z | 2020-03-16T02:31:57.000Z | MACS2/OptValidator.py | mr-c/MACS | 8ac898614aa1da24d811f315bd0d069f1d4fcd4e | [
"BSD-3-Clause"
] | null | null | null | MACS2/OptValidator.py | mr-c/MACS | 8ac898614aa1da24d811f315bd0d069f1d4fcd4e | [
"BSD-3-Clause"
] | 3 | 2019-11-11T18:49:58.000Z | 2021-10-03T19:16:08.000Z | # Time-stamp: <2016-02-15 14:57:27 Tao Liu>
"""Module Description
Copyright (c) 2010,2011 Tao Liu <taoliu@jimmy.harvard.edu>
This code is free software; you can redistribute it and/or modify it
under the terms of the BSD License (see the file COPYING included with
the distribution).
@status: experimental
@version: $Revision$
@author: Tao Liu
@contact: taoliu@jimmy.harvard.edu
"""
# ------------------------------------
# python modules
# ------------------------------------
import sys
import os
import re
import logging
from argparse import ArgumentError
from subprocess import Popen, PIPE
from math import log
from MACS2.IO.Parser import BEDParser, ELANDResultParser, ELANDMultiParser, \
ELANDExportParser, SAMParser, BAMParser, BAMPEParser,\
BEDPEParser, BowtieParser, guess_parser
# ------------------------------------
# constants
# ------------------------------------
efgsize = {"hs":2.7e9,
"mm":1.87e9,
"ce":9e7,
"dm":1.2e8}
# ------------------------------------
# Misc functions
# ------------------------------------
def opt_validate ( options ):
"""Validate options from a OptParser object.
Ret: Validated options object.
"""
# gsize
try:
options.gsize = efgsize[options.gsize]
except:
try:
options.gsize = float(options.gsize)
except:
logging.error("Error when interpreting --gsize option: %s" % options.gsize)
logging.error("Available shortcuts of effective genome sizes are %s" % ",".join(efgsize.keys()))
sys.exit(1)
# format
options.gzip_flag = False # if the input is gzip file
options.format = options.format.upper()
if options.format == "ELAND":
options.parser = ELANDResultParser
elif options.format == "BED":
options.parser = BEDParser
elif options.format == "ELANDMULTI":
options.parser = ELANDMultiParser
elif options.format == "ELANDEXPORT":
options.parser = ELANDExportParser
elif options.format == "SAM":
options.parser = SAMParser
elif options.format == "BAM":
options.parser = BAMParser
options.gzip_flag = True
elif options.format == "BAMPE":
options.parser = BAMPEParser
options.gzip_flag = True
options.nomodel = True
elif options.format == "BEDPE":
options.parser = BEDPEParser
options.nomodel = True
elif options.format == "BOWTIE":
options.parser = BowtieParser
elif options.format == "AUTO":
options.parser = guess_parser
else:
logging.error("Format \"%s\" cannot be recognized!" % (options.format))
sys.exit(1)
# duplicate reads
if options.keepduplicates != "auto" and options.keepduplicates != "all":
if not options.keepduplicates.isdigit():
logging.error("--keep-dup should be 'auto', 'all' or an integer!")
sys.exit(1)
# shiftsize>0
#if options.shiftsize: # only if --shiftsize is set, it's true
# options.extsize = 2 * options.shiftsize
#else: # if --shiftsize is not set
# options.shiftsize = options.extsize / 2
if options.extsize < 1 :
logging.error("--extsize must >= 1!")
sys.exit(1)
# refine_peaks, call_summits can't be combined with --broad
#if options.broad and (options.refine_peaks or options.call_summits):
# logging.error("--broad can't be combined with --refine-peaks or --call-summits!")
# sys.exit(1)
if options.broad and options.call_summits:
logging.error("--broad can't be combined with --call-summits!")
sys.exit(1)
if options.pvalue:
# if set, ignore qvalue cutoff
options.log_qvalue = None
options.log_pvalue = log(options.pvalue,10)*-1
else:
options.log_qvalue = log(options.qvalue,10)*-1
options.log_pvalue = None
if options.broad:
options.log_broadcutoff = log(options.broadcutoff,10)*-1
# uppercase the format string
options.format = options.format.upper()
# upper and lower mfold
options.lmfold = options.mfold[0]
options.umfold = options.mfold[1]
if options.lmfold > options.umfold:
logging.error("Upper limit of mfold should be greater than lower limit!" % options.mfold)
sys.exit(1)
# output filenames
options.peakxls = os.path.join( options.outdir, options.name+"_peaks.xls" )
options.peakbed = os.path.join( options.outdir, options.name+"_peaks.bed" )
options.peakNarrowPeak = os.path.join( options.outdir, options.name+"_peaks.narrowPeak" )
options.peakBroadPeak = os.path.join( options.outdir, options.name+"_peaks.broadPeak" )
options.peakGappedPeak = os.path.join( options.outdir, options.name+"_peaks.gappedPeak" )
options.summitbed = os.path.join( options.outdir, options.name+"_summits.bed" )
options.bdg_treat = os.path.join( options.outdir, options.name+"_treat_pileup.bdg" )
options.bdg_control= os.path.join( options.outdir, options.name+"_control_lambda.bdg" )
if options.cutoff_analysis:
options.cutoff_analysis_file = os.path.join( options.outdir, options.name+"_cutoff_analysis.txt" )
else:
options.cutoff_analysis_file = None
#options.negxls = os.path.join( options.name+"_negative_peaks.xls" )
#options.diagxls = os.path.join( options.name+"_diag.xls" )
options.modelR = os.path.join( options.outdir, options.name+"_model.r" )
#options.pqtable = os.path.join( options.outdir, options.name+"_pq_table.txt" )
# logging object
logging.basicConfig(level=(4-options.verbose)*10,
format='%(levelname)-5s @ %(asctime)s: %(message)s ',
datefmt='%a, %d %b %Y %H:%M:%S',
stream=sys.stderr,
filemode="w"
)
options.error = logging.critical # function alias
options.warn = logging.warning
options.debug = logging.debug
options.info = logging.info
options.argtxt = "\n".join((
"# Command line: %s" % " ".join(sys.argv[1:]),\
"# ARGUMENTS LIST:",\
"# name = %s" % (options.name),\
"# format = %s" % (options.format),\
"# ChIP-seq file = %s" % (options.tfile),\
"# control file = %s" % (options.cfile),\
"# effective genome size = %.2e" % (options.gsize),\
#"# tag size = %d" % (options.tsize),\
"# band width = %d" % (options.bw),\
"# model fold = %s\n" % (options.mfold),\
))
if options.pvalue:
if options.broad:
options.argtxt += "# pvalue cutoff for narrow/strong regions = %.2e\n" % (options.pvalue)
options.argtxt += "# pvalue cutoff for broad/weak regions = %.2e\n" % (options.broadcutoff)
options.argtxt += "# qvalue will not be calculated and will be reported as -1 in the final output.\n"
else:
options.argtxt += "# pvalue cutoff = %.2e\n" % (options.pvalue)
options.argtxt += "# qvalue will not be calculated and will be reported as -1 in the final output.\n"
else:
if options.broad:
options.argtxt += "# qvalue cutoff for narrow/strong regions = %.2e\n" % (options.qvalue)
options.argtxt += "# qvalue cutoff for broad/weak regions = %.2e\n" % (options.broadcutoff)
else:
options.argtxt += "# qvalue cutoff = %.2e\n" % (options.qvalue)
if options.downsample:
options.argtxt += "# Larger dataset will be randomly sampled towards smaller dataset.\n"
if options.seed >= 0:
options.argtxt += "# Random seed has been set as: %d\n" % options.seed
else:
if options.tolarge:
options.argtxt += "# Smaller dataset will be scaled towards larger dataset.\n"
else:
options.argtxt += "# Larger dataset will be scaled towards smaller dataset.\n"
if options.ratio != 1.0:
options.argtxt += "# Using a custom scaling factor: %.2e\n" % (options.ratio)
if options.cfile:
options.argtxt += "# Range for calculating regional lambda is: %d bps and %d bps\n" % (options.smalllocal,options.largelocal)
else:
options.argtxt += "# Range for calculating regional lambda is: %d bps\n" % (options.largelocal)
if options.broad:
options.argtxt += "# Broad region calling is on\n"
else:
options.argtxt += "# Broad region calling is off\n"
if options.fecutoff != 1.0:
options.argtxt += "# Additional cutoff on fold-enrichment is: %.2f\n" % (options.fecutoff)
if options.format in ["BAMPE", "BEDPE"]:
# neutralize SHIFT
options.shift = 0
options.argtxt += "# Paired-End mode is on\n"
else:
options.argtxt += "# Paired-End mode is off\n"
#if options.refine_peaks:
# options.argtxt += "# Refining peak for read balance is on\n"
if options.call_summits:
options.argtxt += "# Searching for subpeak summits is on\n"
if options.do_SPMR and options.store_bdg:
options.argtxt += "# MACS will save fragment pileup signal per million reads\n"
return options
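Each validator above wires up logging with level `(4 - options.verbose) * 10`, so the --verbose count maps directly onto the stdlib level constants. A standalone sketch of that mapping (illustrative, not part of MACS itself):

```python
import logging

def verbosity_to_level(verbose):
    """Map a --verbose count (0-3) to a stdlib logging level.

    Mirrors the (4 - verbose) * 10 expression used by the validators:
    0 -> ERROR, 1 -> WARNING, 2 -> INFO, 3 -> DEBUG.
    """
    return (4 - verbose) * 10

# The stdlib constants line up exactly with the computed values.
assert verbosity_to_level(0) == logging.ERROR  # 40
assert verbosity_to_level(2) == logging.INFO   # 20
```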
def diff_opt_validate ( options ):
"""Validate options from an OptParser object.
Ret: Validated options object.
"""
# format
options.gzip_flag = False # if the input is gzip file
# options.format = options.format.upper()
# fix this stuff
# if True: pass
# elif options.format == "AUTO":
# options.parser = guess_parser
# else:
# logging.error("Format \"%s\" cannot be recognized!" % (options.format))
# sys.exit(1)
if options.peaks_pvalue:
# if set, ignore qvalue cutoff
options.peaks_log_qvalue = None
options.peaks_log_pvalue = log(options.peaks_pvalue,10)*-1
options.track_score_method = 'p'
else:
options.peaks_log_qvalue = log(options.peaks_qvalue,10)*-1
options.peaks_log_pvalue = None
options.track_score_method = 'q'
if options.diff_pvalue:
# if set, ignore qvalue cutoff
options.log_qvalue = None
options.log_pvalue = log(options.diff_pvalue,10)*-1
options.score_method = 'p'
else:
options.log_qvalue = log(options.diff_qvalue,10)*-1
options.log_pvalue = None
options.score_method = 'q'
# output filenames
options.peakxls = options.name+"_diffpeaks.xls"
options.peakbed = options.name+"_diffpeaks.bed"
options.peak1xls = options.name+"_diffpeaks_by_peaks1.xls"
options.peak2xls = options.name+"_diffpeaks_by_peaks2.xls"
options.bdglogLR = options.name+"_logLR.bdg"
options.bdgpvalue = options.name+"_pvalue.bdg"
options.bdglogFC = options.name+"_logFC.bdg"
options.call_peaks = True
if options.peaks1 != '' or options.peaks2 != '':
if options.peaks1 == '':
raise ArgumentError('peaks1', 'Must specify both peaks1 and peaks2, or neither (to call peaks again)')
elif options.peaks2 == '':
raise ArgumentError('peaks2', 'Must specify both peaks1 and peaks2, or neither (to call peaks again)')
options.call_peaks = False
options.argtxt = "\n".join((
"# ARGUMENTS LIST:",\
"# name = %s" % (options.name),\
# "# format = %s" % (options.format),\
"# ChIP-seq file 1 = %s" % (options.t1bdg),\
"# control file 1 = %s" % (options.c1bdg),\
"# ChIP-seq file 2 = %s" % (options.t2bdg),\
"# control file 2 = %s" % (options.c2bdg),\
"# Peaks, condition 1 = %s" % (options.peaks1),\
"# Peaks, condition 2 = %s" % (options.peaks2),\
""
))
else:
options.argtxt = "\n".join((
"# ARGUMENTS LIST:",\
"# name = %s" % (options.name),\
# "# format = %s" % (options.format),\
"# ChIP-seq file 1 = %s" % (options.t1bdg),\
"# control file 1 = %s" % (options.c1bdg),\
"# ChIP-seq file 2 = %s" % (options.t2bdg),\
"# control file 2 = %s" % (options.c2bdg),\
""
))
if options.peaks_pvalue:
options.argtxt += "# treat/control -log10(pvalue) cutoff = %.2e\n" % (options.peaks_log_pvalue)
options.argtxt += "# treat/control -log10(qvalue) will not be calculated and will be reported as -1 in the final output.\n"
else:
options.argtxt += "# treat/control -log10(qvalue) cutoff = %.2e\n" % (options.peaks_log_qvalue)
if options.diff_pvalue:
options.argtxt += "# differential pvalue cutoff = %.2e\n" % (options.log_pvalue)
options.argtxt += "# differential qvalue will not be calculated and will be reported as -1 in the final output.\n"
else:
options.argtxt += "# differential qvalue cutoff = %.2e\n" % (options.log_qvalue)
# logging object
logging.basicConfig(level=(4-options.verbose)*10,
format='%(levelname)-5s @ %(asctime)s: %(message)s ',
datefmt='%a, %d %b %Y %H:%M:%S',
stream=sys.stderr,
filemode="w"
)
options.error = logging.critical # function alias
options.warn = logging.warning
options.debug = logging.debug
options.info = logging.info
return options
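Both `opt_validate` and `diff_opt_validate` turn raw p/q-value cutoffs into -log10 scores via `log(cutoff, 10) * -1`. A minimal sketch of that conversion (assuming `log` is `math.log`, as this module imports it):

```python
from math import log

def to_neg_log10(cutoff):
    """Convert a raw p/q-value cutoff (e.g. 0.05) into a -log10 score,
    as the validators do for options.log_pvalue / options.log_qvalue."""
    return log(cutoff, 10) * -1

# A cutoff of 0.01 becomes a score of 2.0; 1e-5 becomes 5.0.
assert abs(to_neg_log10(0.01) - 2.0) < 1e-9
assert abs(to_neg_log10(1e-5) - 5.0) < 1e-9
```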
def opt_validate_filterdup ( options ):
"""Validate options from an OptParser object.
Ret: Validated options object.
"""
# gsize
try:
options.gsize = efgsize[options.gsize]
except KeyError:
try:
options.gsize = float(options.gsize)
except ValueError:
logging.error("Error when interpreting --gsize option: %s" % options.gsize)
logging.error("Available shortcuts of effective genome sizes are %s" % ",".join(efgsize.keys()))
sys.exit(1)
# format
options.gzip_flag = False # if the input is gzip file
options.format = options.format.upper()
if options.format == "ELAND":
options.parser = ELANDResultParser
elif options.format == "BED":
options.parser = BEDParser
elif options.format == "ELANDMULTI":
options.parser = ELANDMultiParser
elif options.format == "ELANDEXPORT":
options.parser = ELANDExportParser
elif options.format == "SAM":
options.parser = SAMParser
elif options.format == "BAM":
options.parser = BAMParser
options.gzip_flag = True
elif options.format == "BOWTIE":
options.parser = BowtieParser
elif options.format == "AUTO":
options.parser = guess_parser
else:
logging.error("Format \"%s\" cannot be recognized!" % (options.format))
sys.exit(1)
# duplicate reads
if options.keepduplicates != "auto" and options.keepduplicates != "all":
if not options.keepduplicates.isdigit():
logging.error("--keep-dup should be 'auto', 'all' or an integer!")
sys.exit(1)
# logging object
logging.basicConfig(level=(4-options.verbose)*10,
format='%(levelname)-5s @ %(asctime)s: %(message)s ',
datefmt='%a, %d %b %Y %H:%M:%S',
stream=sys.stderr,
filemode="w"
)
options.error = logging.critical # function alias
options.warn = logging.warning
options.debug = logging.debug
options.info = logging.info
return options
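The --gsize handling in `opt_validate_filterdup` (and again in `opt_validate_predictd`) is a two-stage lookup: try the shortcut table first, then fall back to parsing a float. A self-contained sketch of the same pattern; the `efgsize` entries here are illustrative placeholders, not MACS's actual table:

```python
# Illustrative shortcut table; MACS defines its own efgsize elsewhere.
efgsize = {"hs": 2.7e9, "mm": 1.87e9}

def resolve_gsize(value):
    """Resolve a --gsize argument: shortcut name first, then a float."""
    try:
        return efgsize[value]          # e.g. "hs" -> 2.7e9
    except KeyError:
        try:
            return float(value)        # e.g. "1.0e8" -> 100000000.0
        except ValueError:
            raise ValueError("Cannot interpret --gsize %r; shortcuts: %s"
                             % (value, ",".join(efgsize)))

assert resolve_gsize("hs") == 2.7e9
assert resolve_gsize("1.0e8") == 1.0e8
```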
def opt_validate_randsample ( options ):
"""Validate options from an OptParser object.
Ret: Validated options object.
"""
# format
options.gzip_flag = False # if the input is gzip file
options.format = options.format.upper()
if options.format == "ELAND":
options.parser = ELANDResultParser
elif options.format == "BED":
options.parser = BEDParser
elif options.format == "ELANDMULTI":
options.parser = ELANDMultiParser
elif options.format == "ELANDEXPORT":
options.parser = ELANDExportParser
elif options.format == "SAM":
options.parser = SAMParser
elif options.format == "BAM":
options.parser = BAMParser
options.gzip_flag = True
elif options.format == "BOWTIE":
options.parser = BowtieParser
elif options.format == "AUTO":
options.parser = guess_parser
else:
logging.error("Format \"%s\" cannot be recognized!" % (options.format))
sys.exit(1)
# percentage or number
if options.percentage:
if options.percentage > 100.0:
logging.error("Percentage can't be greater than 100.0. Please check your options and retry!")
sys.exit(1)
elif options.number:
if options.number <= 0:
logging.error("Number of tags can't be smaller than or equal to 0. Please check your options and retry!")
sys.exit(1)
# logging object
logging.basicConfig(level=(4-options.verbose)*10,
format='%(levelname)-5s @ %(asctime)s: %(message)s ',
datefmt='%a, %d %b %Y %H:%M:%S',
stream=sys.stderr,
filemode="w"
)
options.error = logging.critical # function alias
options.warn = logging.warning
options.debug = logging.debug
options.info = logging.info
return options
def opt_validate_refinepeak ( options ):
"""Validate options from an OptParser object.
Ret: Validated options object.
"""
# format
options.gzip_flag = False # if the input is gzip file
options.format = options.format.upper()
if options.format == "ELAND":
options.parser = ELANDResultParser
elif options.format == "BED":
options.parser = BEDParser
elif options.format == "ELANDMULTI":
options.parser = ELANDMultiParser
elif options.format == "ELANDEXPORT":
options.parser = ELANDExportParser
elif options.format == "SAM":
options.parser = SAMParser
elif options.format == "BAM":
options.parser = BAMParser
options.gzip_flag = True
elif options.format == "BOWTIE":
options.parser = BowtieParser
elif options.format == "AUTO":
options.parser = guess_parser
else:
logging.error("Format \"%s\" cannot be recognized!" % (options.format))
sys.exit(1)
# logging object
logging.basicConfig(level=(4-options.verbose)*10,
format='%(levelname)-5s @ %(asctime)s: %(message)s ',
datefmt='%a, %d %b %Y %H:%M:%S',
stream=sys.stderr,
filemode="w"
)
options.error = logging.critical # function alias
options.warn = logging.warning
options.debug = logging.debug
options.info = logging.info
return options
def opt_validate_predictd ( options ):
"""Validate options from an OptParser object.
Ret: Validated options object.
"""
# gsize
try:
options.gsize = efgsize[options.gsize]
except KeyError:
try:
options.gsize = float(options.gsize)
except ValueError:
logging.error("Error when interpreting --gsize option: %s" % options.gsize)
logging.error("Available shortcuts of effective genome sizes are %s" % ",".join(efgsize.keys()))
sys.exit(1)
# format
options.gzip_flag = False # if the input is gzip file
options.format = options.format.upper()
if options.format == "ELAND":
options.parser = ELANDResultParser
elif options.format == "BED":
options.parser = BEDParser
elif options.format == "ELANDMULTI":
options.parser = ELANDMultiParser
elif options.format == "ELANDEXPORT":
options.parser = ELANDExportParser
elif options.format == "SAM":
options.parser = SAMParser
elif options.format == "BAM":
options.parser = BAMParser
options.gzip_flag = True
elif options.format == "BAMPE":
options.parser = BAMPEParser
options.gzip_flag = True
options.nomodel = True
elif options.format == "BEDPE":
options.parser = BEDPEParser
options.nomodel = True
elif options.format == "BOWTIE":
options.parser = BowtieParser
elif options.format == "AUTO":
options.parser = guess_parser
else:
logging.error("Format \"%s\" cannot be recognized!" % (options.format))
sys.exit(1)
# upper and lower mfold
options.lmfold = options.mfold[0]
options.umfold = options.mfold[1]
if options.lmfold > options.umfold:
logging.error("Upper limit of mfold should be greater than lower limit! Invalid mfold: %s" % options.mfold)
sys.exit(1)
options.modelR = os.path.join( options.outdir, options.rfile )
# logging object
logging.basicConfig(level=(4-options.verbose)*10,
format='%(levelname)-5s @ %(asctime)s: %(message)s ',
datefmt='%a, %d %b %Y %H:%M:%S',
stream=sys.stderr,
filemode="w"
)
options.error = logging.critical # function alias
options.warn = logging.warning
options.debug = logging.debug
options.info = logging.info
return options
def opt_validate_pileup ( options ):
"""Validate options from an OptParser object.
Ret: Validated options object.
"""
# format
options.gzip_flag = False # if the input is gzip file
options.format = options.format.upper()
if options.format == "ELAND":
options.parser = ELANDResultParser
elif options.format == "BED":
options.parser = BEDParser
elif options.format == "ELANDMULTI":
options.parser = ELANDMultiParser
elif options.format == "ELANDEXPORT":
options.parser = ELANDExportParser
elif options.format == "SAM":
options.parser = SAMParser
elif options.format == "BAM":
options.parser = BAMParser
options.gzip_flag = True
elif options.format == "BOWTIE":
options.parser = BowtieParser
elif options.format == "AUTO":
options.parser = guess_parser
else:
logging.error("Format \"%s\" cannot be recognized!" % (options.format))
sys.exit(1)
# logging object
logging.basicConfig(level=(4-options.verbose)*10,
format='%(levelname)-5s @ %(asctime)s: %(message)s ',
datefmt='%a, %d %b %Y %H:%M:%S',
stream=sys.stderr,
filemode="w"
)
options.error = logging.critical # function alias
options.warn = logging.warning
options.debug = logging.debug
options.info = logging.info
# extsize
if options.extsize <= 0 :
logging.error("--extsize must be > 0!")
sys.exit(1)
return options
def opt_validate_bdgcmp ( options ):
"""Validate options from an OptParser object.
Ret: Validated options object.
"""
# logging object
logging.basicConfig(level=20,
format='%(levelname)-5s @ %(asctime)s: %(message)s ',
datefmt='%a, %d %b %Y %H:%M:%S',
stream=sys.stderr,
filemode="w"
)
options.error = logging.critical # function alias
options.warn = logging.warning
options.debug = logging.debug
options.info = logging.info
# methods should be valid:
for method in set(options.method):
if method not in [ 'ppois', 'qpois', 'subtract', 'logFE', 'FE', 'logLR', 'slogLR', 'max' ]:
logging.error( "Invalid method: %s" % method )
sys.exit( 1 )
# # of --ofile must == # of -m
if options.ofile:
if len(options.method) != len(options.ofile):
logging.error("The number and the order of arguments for --ofile must be the same as for -m.")
sys.exit(1)
return options
def opt_validate_cmbreps ( options ):
"""Validate options from an OptParser object.
Ret: Validated options object.
"""
# logging object
logging.basicConfig(level=20,
format='%(levelname)-5s @ %(asctime)s: %(message)s ',
datefmt='%a, %d %b %Y %H:%M:%S',
stream=sys.stderr,
filemode="w"
)
options.error = logging.critical # function alias
options.warn = logging.warning
options.debug = logging.debug
options.info = logging.info
# methods should be valid:
if options.method not in [ 'fisher', 'max', 'mean']:
logging.error( "Invalid method: %s" % options.method )
sys.exit( 1 )
if len( options.ifile ) != 2:
logging.error("Only two replicates are supported!")
sys.exit( 1 )
# # of -i must == # of -w
# if not options.weights:
# options.weights = [ 1.0 ] * len( options.ifile )
# if len( options.ifile ) != len( options.weights ):
# logging.error("Must provide same number of weights as number of input files.")
# sys.exit( 1 )
# if options.method == "fisher" and len( options.ifile ) > 3:
# logging.error("NOT IMPLEMENTED! Can't combine more than 3 replicates using Fisher's method.")
# sys.exit( 1 )
return options
def opt_validate_bdgopt ( options ):
"""Validate options from an OptParser object.
Ret: Validated options object.
"""
# logging object
logging.basicConfig(level=20,
format='%(levelname)-5s @ %(asctime)s: %(message)s ',
datefmt='%a, %d %b %Y %H:%M:%S',
stream=sys.stderr,
filemode="w"
)
options.error = logging.critical # function alias
options.warn = logging.warning
options.debug = logging.debug
options.info = logging.info
# methods should be valid:
if options.method.lower() not in [ 'multiply', 'add', 'p2q', 'max', 'min']:
logging.error( "Invalid method: %s" % options.method )
sys.exit( 1 )
if options.method.lower() in [ 'multiply', 'add' ] and not options.extraparam:
logging.error( "Need EXTRAPARAM for method multiply or add!")
sys.exit( 1 )
return options