hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
bc648c2976801da1917ecc20a8032cce85084bff | 72 | py | Python | vnegmas/backend/src/dynetx/readwrite/__init__.py | YueNing/vnegmas | e95adc56ee9aab8d6cd6f28cce04383e199dc2b8 | [
"MIT"
] | 3 | 2019-06-29T11:40:29.000Z | 2019-09-07T02:15:09.000Z | vnegmas/backend/src/dynetx/readwrite/__init__.py | YueNing/vnegmas | e95adc56ee9aab8d6cd6f28cce04383e199dc2b8 | [
"MIT"
] | null | null | null | vnegmas/backend/src/dynetx/readwrite/__init__.py | YueNing/vnegmas | e95adc56ee9aab8d6cd6f28cce04383e199dc2b8 | [
"MIT"
] | null | null | null | from ..readwrite.edgelist import *
from ..readwrite.json_graph import *
| 24 | 36 | 0.777778 | 9 | 72 | 6.111111 | 0.666667 | 0.472727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 72 | 2 | 37 | 36 | 0.859375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bcb4cbe870458bcf42356dd082ee6eaea4680eac | 43 | py | Python | backend/todo_app/serializers/__init__.py | nitinmehra/TodoApp | e1e8938330df6b59b8b064ac1a2dde61744d8392 | [
"MIT"
] | null | null | null | backend/todo_app/serializers/__init__.py | nitinmehra/TodoApp | e1e8938330df6b59b8b064ac1a2dde61744d8392 | [
"MIT"
] | null | null | null | backend/todo_app/serializers/__init__.py | nitinmehra/TodoApp | e1e8938330df6b59b8b064ac1a2dde61744d8392 | [
"MIT"
] | null | null | null | from .todo_serializer import TodoSerializer | 43 | 43 | 0.906977 | 5 | 43 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 43 | 1 | 43 | 43 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bcd94de810d9b5962438f44413d0ccf7a0a6c05f | 1,437 | py | Python | tests/yaml_cached_view_test.py | connorworley/service_configuration_lib | a25f0fa56119907ccd756817caf0275e8d73df0b | [
"Apache-2.0"
] | 6 | 2017-01-25T05:44:06.000Z | 2022-01-03T17:17:06.000Z | tests/yaml_cached_view_test.py | connorworley/service_configuration_lib | a25f0fa56119907ccd756817caf0275e8d73df0b | [
"Apache-2.0"
] | 27 | 2015-11-09T17:54:30.000Z | 2022-03-30T19:39:43.000Z | tests/yaml_cached_view_test.py | connorworley/service_configuration_lib | a25f0fa56119907ccd756817caf0275e8d73df0b | [
"Apache-2.0"
] | 14 | 2016-02-10T02:27:00.000Z | 2021-03-24T18:01:56.000Z | def test_yaml_configs_file_watcher_creation(yaml_configs_file_watcher):
assert len(yaml_configs_file_watcher.configs_view.configs['foo']) == 2
assert yaml_configs_file_watcher.configs_view.configs['foo']['smartstack'] == {'main.fake': 42}
assert yaml_configs_file_watcher.configs_view.configs['foo']['authorization'] == \
{'authorization': {'enabled': True}}
def test_yaml_configs_file_watcher_delete_not_presented_files(yaml_configs_file_watcher, mock_soa_dir):
# Not present in cache, excluded by filter
yaml_configs_file_watcher._maybe_remove_path_from_cache(mock_soa_dir.join('foo', 'smartstackX.yaml'))
assert len(yaml_configs_file_watcher.configs_view.configs['foo']) == 2
# Not present in cache, not excluded by filter
assert len(yaml_configs_file_watcher.configs_view.configs['bar']) == 0
yaml_configs_file_watcher._maybe_remove_path_from_cache(mock_soa_dir.join('bar', 'smartstack.yaml'))
assert len(yaml_configs_file_watcher.configs_view.configs['foo']) == 2
assert len(yaml_configs_file_watcher.configs_view.configs['bar']) == 0
def test_yaml_configs_file_watcher_delete(yaml_configs_file_watcher, mock_soa_dir):
assert len(yaml_configs_file_watcher.configs_view.configs['foo']) == 2
yaml_configs_file_watcher._maybe_remove_path_from_cache(mock_soa_dir.join('foo', 'smartstack.yaml'))
assert len(yaml_configs_file_watcher.configs_view.configs['foo']) == 1
| 57.48 | 105 | 0.789144 | 208 | 1,437 | 4.985577 | 0.197115 | 0.190935 | 0.260366 | 0.381871 | 0.818708 | 0.818708 | 0.790743 | 0.661524 | 0.661524 | 0.572806 | 0 | 0.006944 | 0.098121 | 1,437 | 24 | 106 | 59.875 | 0.79321 | 0.059151 | 0 | 0.375 | 0 | 0 | 0.099333 | 0 | 0 | 0 | 0 | 0 | 0.5625 | 1 | 0.1875 | false | 0 | 0 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bcdb3577158d523bcb302d0df0cc1c204ef64249 | 33 | py | Python | Week 1: Integers, I-O, simple string operations/08.py | MLunov/Python-programming-basics-HSE | 7df8bba105db84d6b932c454fdc39193a648254e | [
"MIT"
] | null | null | null | Week 1: Integers, I-O, simple string operations/08.py | MLunov/Python-programming-basics-HSE | 7df8bba105db84d6b932c454fdc39193a648254e | [
"MIT"
] | null | null | null | Week 1: Integers, I-O, simple string operations/08.py | MLunov/Python-programming-basics-HSE | 7df8bba105db84d6b932c454fdc39193a648254e | [
"MIT"
] | null | null | null | print((int(input()) // 10) % 10)
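# (prints the tens digit of the entered integer)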
| 16.5 | 32 | 0.515152 | 5 | 33 | 3.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0.151515 | 33 | 1 | 33 | 33 | 0.464286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
4c1675ca4910a9c71cb9af45310593328acbc528 | 111 | py | Python | multipeak/__init__.py | graphenestandards/raman | 045ba19dedb269b133c65ce8b01d8ce8c25335b7 | [
"MIT"
] | 2 | 2018-10-01T14:16:42.000Z | 2019-03-26T02:27:27.000Z | multipeak/__init__.py | graphenestandards/raman | 045ba19dedb269b133c65ce8b01d8ce8c25335b7 | [
"MIT"
] | null | null | null | multipeak/__init__.py | graphenestandards/raman | 045ba19dedb269b133c65ce8b01d8ce8c25335b7 | [
"MIT"
] | 1 | 2020-01-04T11:30:13.000Z | 2020-01-04T11:30:13.000Z | from .multipeak import printmd, Dataset, MultiPseudoVoigtModel
from .grapheneRaman import GrapheneModelResults
| 37 | 62 | 0.873874 | 10 | 111 | 9.7 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09009 | 111 | 2 | 63 | 55.5 | 0.960396 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
4c43212b160294ce98d9839a02e9d07df72d7ad6 | 3,303 | py | Python | acr_module/acr/preprocessing/word_embeddings.py | 13520505/bigdataproj | 09202c7e13366726415b1111cc93d3083d102cb3 | [
"MIT"
] | null | null | null | acr_module/acr/preprocessing/word_embeddings.py | 13520505/bigdataproj | 09202c7e13366726415b1111cc93d3083d102cb3 | [
"MIT"
] | 9 | 2020-01-28T23:07:43.000Z | 2022-02-10T00:36:23.000Z | acr_module/acr/preprocessing/word_embeddings.py | 13520505/bigdataproj | 09202c7e13366726415b1111cc93d3083d102cb3 | [
"MIT"
] | null | null | null | import numpy as np
from gensim.models.keyedvectors import KeyedVectors
from ..utils import serialize
from ..acr_commons import PAD_TOKEN, UNK_TOKEN
def load_word_embeddings(path, binary=True):
w2v_model = KeyedVectors.load_word2vec_format(path, binary=binary)
return w2v_model
# for the Vietnamese model: same loader as above but with unicode_errors='ignore'; kept on purpose despite the overlap
def load_word_embeddings_vietnamese(path, binary=True):
w2v_model = KeyedVectors.load_word2vec_format(path, binary=binary, unicode_errors='ignore')
return w2v_model
def process_word_embedding_for_corpus_vocab(w2v_model, words_freq,
keep_most_frequent_words=100000):
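    # Keep only the most frequent corpus tokens that also exist in the w2v
    # vocabulary, assign them ids after the two reserved tokens (PAD=0, UNK=1),
    # and stack their vectors (plus random PAD/UNK vectors) into one matrix.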
print('Tokens vocab. from articles: {}'.format(len(words_freq)))
most_freq_words = set(list(map(lambda x: x[0], words_freq.most_common(keep_most_frequent_words))))
print('Most common tokens vocab. from articles: {}'.format(len(most_freq_words)))
RESERVED_TOKENS_IN_VOCAB=2
embedding_size = w2v_model.vector_size
new_embeddings_list = []
new_vocab = {}
last_token_id = RESERVED_TOKENS_IN_VOCAB
w2v_vocab = set(w2v_model.wv.index2word)
for word in most_freq_words:
if word in w2v_vocab:
new_vocab[word] = last_token_id
last_token_id += 1
new_embeddings_list.append(w2v_model[word])
#Inserting the 2 reserved tokens
new_vocab[PAD_TOKEN] = 0
new_vocab[UNK_TOKEN] = 1
np.random.seed(10)
unk_vector = np.random.uniform(low=-0.04, high=0.04, size=embedding_size)
pad_vector = np.random.uniform(low=-0.04, high=0.04, size=embedding_size)
new_embeddings_matrix = np.vstack([unk_vector, pad_vector] + new_embeddings_list)
print('Most common tokens with word embeddings: {}'.format(new_embeddings_matrix.shape[0]))
return new_vocab, new_embeddings_matrix
def process_word_embedding_for_corpus_vocab_second_time(w2v_model, words_freq,word_vocab,
keep_most_frequent_words=100000):
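    # Second-pass variant of the function above: new token ids continue from
    # the highest id already present in the given word_vocab instead of
    # starting at 2.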
print('Tokens vocab. from articles: {}'.format(len(words_freq)))
most_freq_words = set(list(map(lambda x: x[0], words_freq.most_common(keep_most_frequent_words))))
print('Most common tokens vocab. from articles: {}'.format(len(most_freq_words)))
RESERVED_TOKENS_IN_VOCAB= max(word_vocab.values())
embedding_size = w2v_model.vector_size
new_embeddings_list = []
new_vocab = {}
last_token_id = RESERVED_TOKENS_IN_VOCAB
w2v_vocab = set(w2v_model.wv.index2word)
for word in most_freq_words:
if word in w2v_vocab:
new_vocab[word] = last_token_id
last_token_id += 1
new_embeddings_list.append(w2v_model[word])
np.random.seed(10)
unk_vector = np.random.uniform(low=-0.04, high=0.04, size=embedding_size)
pad_vector = np.random.uniform(low=-0.04, high=0.04, size=embedding_size)
new_embeddings_matrix = np.vstack([unk_vector, pad_vector] + new_embeddings_list)
print('Most common tokens with word embeddings: {}'.format(new_embeddings_matrix.shape[0]))
return new_vocab, new_embeddings_matrix
def save_word_vocab_embeddings(output_path, word_vocab, word_embeddings_matrix):
to_serialize = (word_vocab, word_embeddings_matrix)
serialize(output_path, to_serialize) | 37.534091 | 102 | 0.718438 | 468 | 3,303 | 4.724359 | 0.188034 | 0.043419 | 0.035278 | 0.037992 | 0.791949 | 0.765717 | 0.765717 | 0.733605 | 0.733605 | 0.733605 | 0 | 0.02609 | 0.187708 | 3,303 | 88 | 103 | 37.534091 | 0.797987 | 0.021193 | 0 | 0.711864 | 0 | 0 | 0.07428 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084746 | false | 0 | 0.067797 | 0 | 0.220339 | 0.101695 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4c70d474146e85ddf0c2c965050a44e6d6c12bb5 | 9,216 | py | Python | applications/structural_application/custom_problemtype/write_cnd.py | AndreaVoltan/MyKratos7.0 | e977752722e8ef1b606f25618c4bf8fd04c434cc | [
"BSD-4-Clause"
] | 2 | 2020-04-30T19:13:08.000Z | 2021-04-14T19:40:47.000Z | applications/structural_application/custom_problemtype/write_cnd.py | AndreaVoltan/MyKratos7.0 | e977752722e8ef1b606f25618c4bf8fd04c434cc | [
"BSD-4-Clause"
] | 1 | 2020-04-30T19:19:09.000Z | 2020-05-02T14:22:36.000Z | applications/structural_application/custom_problemtype/write_cnd.py | AndreaVoltan/MyKratos7.0 | e977752722e8ef1b606f25618c4bf8fd04c434cc | [
"BSD-4-Clause"
] | 1 | 2020-06-12T08:51:24.000Z | 2020-06-12T08:51:24.000Z | from __future__ import print_function, absolute_import, division #makes KratosMultiphysics backward compatible with python 2.6 and 2.7
import string;
import basicfunctions;
#
def WriteScalarCond(name, condtype, meshtype, projectname, domaintype):
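    # Emit a scalar CONDITION block for each geometry class requested in
    # condtype ('p'=point, 'l'=line, 's'=surface, 'v'=volume) and append
    # the result to the project's .cnd file.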
newsection = ''
if (string.count(condtype, 'p') >= 1):
newsection = newsection + ScalarCond(name, 'point', meshtype, domaintype)
if (string.count(condtype, 'l') >= 1):
newsection = newsection + ScalarCond(name, 'line', meshtype, domaintype)
if (string.count(condtype, 's') >= 1):
newsection = newsection + ScalarCond(name, 'surface', meshtype, domaintype)
if (string.count(condtype, 'v') >= 1):
newsection = newsection + ScalarCond(name, 'volume', meshtype, domaintype)
AddBCToCndFile(newsection, projectname, domaintype)
def WriteVectorCond(name, condtype, meshtype, projectname, domaintype):
newsection = ''
if (string.count(condtype, 'p') >= 1):
newsection = newsection + VectorCond(name, 'point', meshtype, domaintype)
if (string.count(condtype, 'l') >= 1):
newsection = newsection + VectorCond(name, 'line', meshtype, domaintype)
if (string.count(condtype, 's') >= 1):
newsection = newsection + VectorCond(name, 'surface', meshtype, domaintype)
if (string.count(condtype, 'v') >= 1):
newsection = newsection + VectorCond(name, 'volume', meshtype, domaintype)
AddBCToCndFile(newsection, projectname, domaintype)
def WriteElemCond(name, elemtype, projectname, domaintype):
newsection = ''
if (string.count(elemtype, 'p') >= 1):
newsection = newsection + ElemCond(name, 'point')
if (string.count(elemtype, 'l') >= 1):
newsection = newsection + ElemCond(name, 'line')
if (string.count(elemtype, 's') >= 1):
newsection = newsection + ElemCond(name, 'surface')
if (string.count(elemtype, 'v') >= 1):
newsection = newsection + ElemCond(name, 'volume')
AddElemToCndFile(newsection, projectname, domaintype)
#
def ScalarCond(name, condtype, meshtype, domaintype):
cond = 'CONDITION: ' + condtype + '_' + name + '_('+domaintype+')\n'
condt = 'CONDTYPE: over ' + condtype + 's \n'
condmeshtype = 'CONDMESHTYPE: over ' + meshtype + '\n'
question = 'QUESTION: Value\nVALUE: 0.0\n'
end = 'END CONDITION\n'
return cond + condt + condmeshtype + question + end
def VectorCond(name, condtype, meshtype, domaintype):
cond = 'CONDITION: ' + condtype + '_' + name + '_('+domaintype+')\n'
cond = cond + 'CONDTYPE: over ' + condtype + 's \n'
cond = cond + 'CONDMESHTYPE: over ' + meshtype + '\n'
cond = cond + 'QUESTION:' + name + '_X#CB#(0,1)\nVALUE: 0\nDEPENDENCIES: (0,HIDE,Value_X,#CURRENT#)(1,RESTORE,Value_X,#CURRENT#)\n'
cond = cond + 'QUESTION: Value_X\nVALUE: 0.0\n'
cond = cond + 'QUESTION:' + name + '_Y#CB#(0,1)\nVALUE: 0\nDEPENDENCIES: (0,HIDE,Value_Y,#CURRENT#)(1,RESTORE,Value_Y,#CURRENT#)\n'
cond = cond + 'QUESTION: Value_Y\nVALUE: 0.0\n'
cond = cond + 'QUESTION:' + name + '_Z#CB#(0,1)\nVALUE: 0\nDEPENDENCIES: (0,HIDE,Value_Z,#CURRENT#)(1,RESTORE,Value_Z,#CURRENT#)\n'
cond = cond + 'QUESTION: Value_Z\nVALUE: 0.0\n'
cond = cond + 'END CONDITION\n'
return cond
def ElemCond(name, elemtype):
cond = 'CONDITION: ' + elemtype + '_' + name + '\n'
condt = 'CONDTYPE: over ' + elemtype + 's \n'
condmeshtype = 'CONDMESHTYPE: over body elements\n'
question = 'QUESTION: Element_Type#CB#(' + name + ')\nVALUE: ' + name + '\n'
end = 'END CONDITION\n'
return cond + condt + condmeshtype + question + end
#
def AddBCToCndFile(newsection, projectname, domaintype):
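    # Splice the new boundary-condition section in right after the matching
    # BOOK header of <projectname>.cnd.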
if domaintype == 'Fluid':
key = 'BOOK: Fluid_Boundary_Conditions\n'
if domaintype == 'Structure':
key = 'BOOK: Structure_Boundary_Conditions\n'
file = projectname + '.cnd'
filecontent = basicfunctions.splitfile(key, basicfunctions.readfile(file))
newcontent = filecontent[0] + key + newsection + filecontent[1]
basicfunctions.writefile(file, newcontent)
def AddElemToCndFile(newsection, projectname, domaintype):
if domaintype == 'Fluid':
key = 'BOOK: Fluid_Element_Type\n'
if domaintype == 'Structure':
key = 'BOOK: Structure_Element_Type\n'
file = projectname + '.cnd'
filecontent = basicfunctions.splitfile(key, basicfunctions.readfile(file))
newcontent = filecontent[0] + key + newsection + filecontent[1]
basicfunctions.writefile(file, newcontent)
# ALT ########################################
# def WriteFluidScalarCond(name,condtype,meshtype,projectname):
# filecontent = basicfunctions.getkeyword('BOOK: ',basicfunctions.readfile(projectname+'.cnd'))
# section = filecontent[0]
# restcontent = filecontent[1]+filecontent[2]+filecontent[3]+filecontent[4]+filecontent[5]
# if (string.count(condtype,'p')>=1):
# section=section+ScalarCond(name,'point',meshtype,'Fluid')
# if (string.count(condtype,'l')>=1):
# section=section+ScalarCond(name,'line',meshtype,'Fluid')
# if (string.count(condtype,'s')>=1):
# section=section+ScalarCond(name,'surface',meshtype,'Fluid')
# if (string.count(condtype,'v')>=1):
# section=section+ScalarCond(name,'volume',meshtype,'Fluid')
# filecontent=section+restcontent
# basicfunctions.writefile(projectname+'.cnd',filecontent)
# def WriteFluidVectorCond(name,condtype,meshtype,projectname):
# filecontent = basicfunctions.getkeyword('BOOK: ',basicfunctions.readfile(projectname+'.cnd'))
# section = filecontent[0]
# restcontent = filecontent[1]+filecontent[2]+filecontent[3]+filecontent[4]+filecontent[5]
# if (string.count(condtype,'p')>=1):
# section=section+VectorCond(name,'point',meshtype,'Fluid')
# if (string.count(condtype,'l')>=1):
# section=section+VectorCond(name,'line',meshtype,'Fluid')
# if (string.count(condtype,'s')>=1):
# section=section+VectorCond(name,'surface',meshtype,'Fluid')
# if (string.count(condtype,'v')>=1):
# section=section+VectorCond(name,'volume',meshtype,'Fluid')
# filecontent=section+restcontent
# basicfunctions.writefile(projectname+'.cnd',filecontent)
# def WriteStructureScalarCond(name,condtype,meshtype,projectname):
# filecontent = basicfunctions.getkeyword('BOOK: ',basicfunctions.readfile(projectname+'.cnd'))
# section = filecontent[3]
# restcontent1 = filecontent[0]+filecontent[1]+filecontent[2]
# restcontent2 = filecontent[4]+filecontent[5]
# if (string.count(condtype,'p')>=1):
# section=section+ScalarCond(name,'point',meshtype,'Structure')
# if (string.count(condtype,'l')>=1):
# section=section+ScalarCond(name,'line',meshtype,'Structure')
# if (string.count(condtype,'s')>=1):
# section=section+ScalarCond(name,'surface',meshtype,'Structure')
# if (string.count(condtype,'v')>=1):
# section=section+ScalarCond(name,'volume',meshtype,'Structure')
# filecontent=restcontent1+section+restcontent2
# basicfunctions.writefile(projectname+'.cnd',filecontent)
# def WriteStructureVectorCond(name,condtype,meshtype,projectname):
# filecontent = basicfunctions.getkeyword('BOOK: ',basicfunctions.readfile(projectname+'.cnd'))
# section = filecontent[3]
# restcontent1 = filecontent[0]+filecontent[1]+filecontent[2]
# restcontent2 = filecontent[4]+filecontent[5]
# if (string.count(condtype,'p')>=1):
# section=section+VectorCond(name,'point',meshtype,'Structure')
# if (string.count(condtype,'l')>=1):
# section=section+VectorCond(name,'line',meshtype,'Structure')
# if (string.count(condtype,'s')>=1):
# section=section+VectorCond(name,'surface',meshtype,'Structure')
# if (string.count(condtype,'v')>=1):
# section=section+VectorCond(name,'volume',meshtype,'Structure')
# filecontent=restcontent1+section+restcontent2
# basicfunctions.writefile(projectname+'.cnd',filecontent)
# def WriteFluidElemCond(name,elemtype,projectname):
# filecontent = basicfunctions.getkeyword('BOOK: ',basicfunctions.readfile(projectname+'.cnd'))
# section = filecontent[1]
# restcontent1 = filecontent[0]
# restcontent2 = filecontent[2]+filecontent[3]+filecontent[4]+filecontent[5]
# if (string.count(elemtype,'p')>=1):
# section=section+ElemCond(name,'point')
# if (string.count(elemtype,'l')>=1):
# section=section+ElemCond(name,'line')
# if (string.count(elemtype,'s')>=1):
# section=section+ElemCond(name,'surface')
# if (string.count(elemtype,'v')>=1):
# section=section+ElemCond(name,'volume')
# filecontent=restcontent1+section+restcontent2
# basicfunctions.writefile(projectname+'.cnd',filecontent)
#
#
# def WriteStructureElemCond(name,elemtype,projectname):
# filecontent = basicfunctions.getkeyword('BOOK: ',basicfunctions.readfile(projectname+'.cnd'))
# section = filecontent[4]
# restcontent1 = filecontent[0]+filecontent[1]+filecontent[2]+filecontent[3]
# restcontent2 = filecontent[5]
# if (string.count(elemtype,'p')>=1):
# section=section+ElemCond(name,'point')
# if (string.count(elemtype,'l')>=1):
# section=section+ElemCond(name,'line')
# if (string.count(elemtype,'s')>=1):
# section=section+ElemCond(name,'surface')
# if (string.count(elemtype,'v')>=1):
# section=section+ElemCond(name,'volume')
# filecontent=restcontent1+section+restcontent2
# basicfunctions.writefile(projectname+'.cnd',filecontent)
#
| 39.724138 | 135 | 0.698025 | 1,018 | 9,216 | 6.287819 | 0.096267 | 0.044993 | 0.073114 | 0.078738 | 0.869552 | 0.819872 | 0.793314 | 0.773942 | 0.76488 | 0.687549 | 0 | 0.014814 | 0.128364 | 9,216 | 231 | 136 | 39.896104 | 0.782024 | 0.489366 | 0 | 0.3875 | 0 | 0.0375 | 0.204814 | 0.055361 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.0375 | 0 | 0.175 | 0.0125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d5e4a58c9e17f368c0538057145ccd4c027f0619 | 115 | py | Python | server/views/__init__.py | dpuljic01/financial-dashboard | 95ccde046597c281448ba29e903c26685a248e18 | [
"MIT"
] | 13 | 2020-09-12T00:15:02.000Z | 2022-03-11T12:34:26.000Z | server/views/__init__.py | dpuljic01/masters-thesis | 95ccde046597c281448ba29e903c26685a248e18 | [
"MIT"
] | 7 | 2020-09-11T21:36:53.000Z | 2022-02-27T09:18:00.000Z | server/views/__init__.py | dpuljic01/masters-thesis | 95ccde046597c281448ba29e903c26685a248e18 | [
"MIT"
] | 5 | 2020-11-08T08:16:40.000Z | 2022-02-07T20:39:03.000Z | from server.views import auth, news, portfolio, user, tickers
blueprints = (auth, news, portfolio, user, tickers)
| 28.75 | 61 | 0.756522 | 15 | 115 | 5.8 | 0.666667 | 0.183908 | 0.390805 | 0.482759 | 0.643678 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13913 | 115 | 3 | 62 | 38.333333 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
910f96ce8dcf7f7adc55b498c61114d65743e306 | 43,931 | py | Python | autolrn/classification/train_calibrate.py | SimonCarozza/autolrn | d0875844a3e9b4fc22510ef320aa498e339b6192 | [
"MIT"
] | null | null | null | autolrn/classification/train_calibrate.py | SimonCarozza/autolrn | d0875844a3e9b4fc22510ef320aa498e339b6192 | [
"MIT"
] | null | null | null | autolrn/classification/train_calibrate.py | SimonCarozza/autolrn | d0875844a3e9b4fc22510ef320aa498e339b6192 | [
"MIT"
] | null | null | null | """
This module allows you to tune and calibrate the best estimator.
This module allows you to tune and calibrate the best estimator
returned by nested cv evaluation, and to just calibrate
the best estimator returned by the non-nested cv evaluation.
"""
from sklearn.calibration import CalibratedClassifierCV
import numpy as np
import re
from random import randint
from scipy.stats import randint as sp_randint
import matplotlib.pyplot as plt
from . import param_grids_distros as pgd
from .. import auto_utils as au
from . import eval_utils as eu
from sklearn.model_selection import StratifiedKFold, StratifiedShuffleSplit
from sklearn.utils import multiclass as mc
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline
import os
import sys
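# silence Keras' noisy import-time messages by redirecting stderr around the import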
stderr = sys.stderr
sys.stderr = open(os.devnull, 'w')
from keras.wrappers.scikit_learn import KerasClassifier
sys.stderr = stderr
import warnings
warnings.filterwarnings("ignore")
def best_keras_clf_estimator(
y_type, best_nn_build_fn, nb_epoch, input_dim, labels, batch_size=None):
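    # Wrap the chosen Keras build function in a scikit-learn-compatible
    # classifier; multiclass targets additionally need output_dim = len(labels).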
best_model_estim = None
best_model_estim = KerasClassifier(
build_fn=best_nn_build_fn, nb_epoch=nb_epoch,
input_dim=input_dim, verbose=0)
if y_type == 'multiclass':
if labels is None:
raise ValueError("%r is not a valid type for var 'labels'" % labels)
elif not isinstance(labels, list):
raise TypeError("Multiclass keras models need a list of string labels.")
else:
output_dim = len(labels)
best_model_estim.set_params(output_dim=output_dim)
if batch_size is not None and isinstance(batch_size, int):
best_model_estim.set_params(batch_size=batch_size)
return best_model_estim
def confusion_matrix_and_clf_report(y_type, model_name, y_test, y_pred):
if y_type == 'binary':
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print()
print("Test errors for '%s'" % model_name)
print("\ttrue negatives: %d" % tn)
print("\tfalse positives: %d" % fp)
print("\tfalse negatives: %d" % fn)
print("\ttrue positives: %d" % tp)
else:
print()
print("Confusion matrix for '%s'.\n"
% model_name, confusion_matrix(y_test, y_pred))
print()
print("Classification report for '%s'\n"
% model_name, classification_report(y_test, y_pred))
def check_model_hasroc(model_name, model_data):
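    # model_data may or may not carry a ROC_AUC score at index 2; return a
    # flag telling whether it does, together with the score (None if absent).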
has_roc = 0
try:
model_data[2]
except IndexError:
print("ROC_AUC score is not available.")
roc_auc = None
except Exception as e:
print(e)
else:
roc_auc = model_data[2]
has_roc = 1
finally:
print(
"We have %s data to compare prediction confidence of models."
% model_name)
if has_roc:
print("ROC_AUC included.")
return has_roc, roc_auc
def logreg_calibration_reference(y_type, scoring, models_params):
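    # Fetch the LogisticRegression pipeline step used as the calibration
    # reference, picking solver/penalty to fit the target type and scoring.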
lr = None
lr_params = None
if 'LogRClf_2nd' in models_params:
lr = models_params['LogRClf_2nd'][0]
if y_type == 'binary':
lr.set_params(solver='liblinear')
else:
# y_type == 'multiclass'
if scoring == 'neg_log_loss':
lr.set_params(
solver='lbfgs', penalty='l2', multi_class='multinomial')
lr_params = models_params['LogRClf_2nd'][1]
else:
if y_type == 'binary':
lr =\
pgd.full_search_models_and_parameters['LogRClf_2nd'][0].set_params(
solver='liblinear')
lr_params = pgd.full_search_models_and_parameters['LogRClf_2nd'][1]
else:
# y_type == 'multiclass'
if scoring == 'neg_log_loss':
# solver='saga', penalty='l1'
lr =\
pgd.full_search_models_and_parameters['LogRClf_2nd'][0].set_params(
solver='lbfgs', penalty='l2', multi_class='multinomial')
lr_params = pgd.full_search_models_and_parameters['LogRClf_2nd'][1]
return lr, lr_params
def classic_cv_calibration_process(
ref_pred_score, training_estimator, final_estimator, X, y,
X_train, y_train, X_test, y_test, y_type, best_model_name,
best_model_estim, weights_test, weights_all, scoring, best_score,
best_nn_build_fn, nb_epoch, batch_size, tuning_method, models_data,
kfold, labels, d_name, serial):
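    # Compare the best model's prediction-confidence score (lower is better)
    # against the LogisticRegression reference: if it is worse, try sigmoid
    # calibration via CalibratedClassifierCV, keep whichever variant scores
    # better, and refit the winner on all data. Keras nets are only refitted
    # on transformed data, since their calibration is not implemented yet.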
temp_estimator = training_estimator
# check predicted probabilities for prediction confidence
uncalibrated_data = eu.probability_confidence_before_calibration(
temp_estimator, X_train, y_train, X_test, y_test, tuning_method,
models_data, labels, serial
)
input_dim = 0
if best_model_name in (
'baseline_nn_default_Clf_2nd', 'baseline_nn_smaller_Clf_2nd',
'larger_nn_Clf_2nd', 'deep_nn_Clf_2nd', 'larger_deep_nn_Clf_2nd',
'deeper_nn_Clf_2nd'):
del best_model_estim
input_dim = int(X_train.shape[1])
else:
del temp_estimator
unc_pred_score = uncalibrated_data[0]
unc_pipeline = uncalibrated_data[1]
has_unc_roc, unc_roc_auc = check_model_hasroc("target", uncalibrated_data)
predicted = unc_pipeline.predict(X_test)
w_unc_acc = unc_pipeline.score(X_test, y_test, sample_weight=weights_test)*100
confusion_matrix_and_clf_report(y_type, best_model_name, y_test, predicted)
print()
# in case of LogRegression,
# you should compare its probability curve against the ideal one
# all stuff before plot of calibration curves should go into a function
if unc_pred_score < ref_pred_score:
print("'%s' is already well calibrated." % best_model_name)
print("Let's resume metrics on test data.")
if best_model_name not in (
'baseline_nn_default_Clf_2nd', 'baseline_nn_smaller_Clf_2nd',
'larger_nn_Clf_2nd', 'deep_nn_Clf_2nd', 'larger_deep_nn_Clf_2nd',
'deeper_nn_Clf_2nd'):
print('Mean cv score [%s] of best uncalibrated ("%s"): %1.3f'
% (scoring.strip('neg_'), best_model_name, best_score))
if has_unc_roc:
print('Scoring [%s] of best uncalibrated ("%s") on test data: %1.3f'
% (scoring.strip('neg_'), best_model_name, unc_roc_auc))
print('Accuracy of best uncalibrated ("%s") on test data: %.2f%%'
% (best_model_name, w_unc_acc))
print()
print("=== [task] Refit '%s' on all data." % best_model_name)
print()
print("X shape: ", X.shape)
print("y shape: ", y.shape)
print()
# best_rscv_pipeline -> final_best_rscv_estimator
eu.model_finalizer(
final_estimator, X, y, scoring, tuning_method, d_name, serial)
else:
print("'%s' needs probability calibration." % best_model_name)
if best_model_name in (
'baseline_nn_default_Clf_2nd', 'baseline_nn_smaller_Clf_2nd',
'larger_nn_Clf_2nd', 'deep_nn_Clf_2nd', 'larger_deep_nn_Clf_2nd',
'deeper_nn_Clf_2nd'):
print("Calibration of Keras neural networks is not implemented yet")
print("""
We assume this is the best calibration we can achieve
with a Keras neural network, which approaches
LogisticRegression's prediction confidence as
the nr of iterations increases.
""")
else:
temp_estimator = training_estimator
# In case model needs calibration
calib_data = eu.calibrate_probabilities(
temp_estimator, X_train, y_train, X_test, y_test, 'sigmoid',
tuning_method, models_data, kfold, labels, serial
)
calib_pred_score = calib_data[0]
calib_pipeline = calib_data[1]
has_calib_roc, calib_roc_auc = check_model_hasroc("target calib", calib_data)
print()
if calib_pred_score >= unc_pred_score:
print("Sorry, we could not calibrate '%s' any better."
% best_model_name)
print("We're rejecting calibrated '%s' and saving the uncalibrated one."
% best_model_name)
print("Let's resume scores on validation and test data.")
print('Mean cross-validated score [%s] of best uncalibrated ("%s"): %1.3f'
% (scoring.strip('neg_'), best_model_name, best_score))
if has_unc_roc:
print('Scoring [%s] of best uncalibrated ("%s") on test data: %1.3f'
% (scoring.strip('neg_'), best_model_name, unc_roc_auc))
print('Accuracy of best uncalibrated ("%s") on test data: %.2f%%'
% (best_model_name, w_unc_acc))
print()
# final_best_rscv_estimator
print("=== [task] Refit '%s' on all data." % best_model_name)
print()
print("X shape: ", X.shape)
print("y shape: ", y.shape)
print()
# best_rscv_pipeline -> final_best_rscv_estimator
eu.model_finalizer(
final_estimator, X, y, scoring, tuning_method, d_name, serial)
else:
print("Achieved better calibration of model '%s'."
% best_model_name)
print("Let's resume scores on test data.")
print('Mean cv score [%s] of best uncalibrated ("%s"): %1.3f'
% (scoring.strip('neg_'), best_model_name, best_score))
predicted = calib_pipeline.predict(X_test)
print()
print("After probability calibration...")
confusion_matrix_and_clf_report(y_type, best_model_name, y_test, predicted)
print()
w_calib_acc = calib_pipeline.score(
X_test, y_test, sample_weight=weights_test)*100
if has_calib_roc:
print('Scoring [%s] of best calibrated ("%s") on test data: %1.3f'
% (scoring.strip('neg_'), best_model_name, calib_roc_auc))
print('Accuracy of best calibrated ("%s") on test data: %.2f%%'
% (best_model_name, w_calib_acc))
print()
print("=== [task]: Train and calibrate probabilities of "
"pre-optimized model '%s' on all data."
% best_model_name)
print()
final_calib_pipeline = CalibratedClassifierCV(
final_estimator, method='sigmoid', cv=kfold)
final_calib_pipeline.fit(X, y)
fin_w_acc = final_calib_pipeline.score(X, y, sample_weight=weights_all)*100
                print('Overall accuracy of finalized best CCCV ("%s_%s"): %.2f%%'
                      % (best_model_name, tuning_method, fin_w_acc))
au.save_model(
final_calib_pipeline, best_model_name + '_final_calib_'
+ tuning_method + '_' + serial + '.pkl', d_name=d_name
)
# Uncomment to see pipeline, steps and params
# print("Finalized calibrated best model '%s'." % best_model_name)
# params = final_calib_pipeline.get_params()
# for param_name in sorted(params.keys()):
# print("\t%s: %r" % (param_name, params[param_name]))
# print()
if best_model_name in (
'baseline_nn_default_Clf_2nd', 'baseline_nn_smaller_Clf_2nd',
'larger_nn_Clf_2nd', 'deep_nn_Clf_2nd', 'larger_deep_nn_Clf_2nd',
'deeper_nn_Clf_2nd'):
print('Mean cv score [%s] of best uncalibrated ("%s"): %1.3f' % (
scoring, best_model_name, best_score))
if has_unc_roc:
print('Scoring [%s] of best ("%s") on test data: %1.3f' % (
scoring.strip('neg_'), best_model_name, unc_roc_auc))
print('Accuracy of best ("%s") on test data: %.2f%%'
% (best_model_name, w_unc_acc))
print()
NN_transformer = final_estimator.fit(X, y)
X_transformed = NN_transformer.transform(X)
input_dim_final = int(X_transformed.shape[1])
print()
print("Input dimensions -- training: %d, finalization %d"
% (input_dim, input_dim_final))
print()
f_name = best_model_name + '_feateng_for_keras_model_' + serial
au.save_model(final_estimator, f_name + '.pkl', d_name=d_name)
del final_estimator
best_model_estim = best_keras_clf_estimator(
y_type, best_nn_build_fn, nb_epoch, input_dim_final, labels, batch_size)
steps_fin = []
steps_fin.append((best_model_name, best_model_estim))
untrained_NN_pipeline = Pipeline(steps_fin)
eu.model_finalizer(
untrained_NN_pipeline, X_transformed, y, scoring,
tuning_method, d_name, serial
)
print()
def nested_cv_calibration_process(
ref_pred_score, training_estimator, final_estimator, X, y,
X_train, y_train, X_test, y_test, y_type, best_model_name,
best_model_estim, weights_test, weights_all, scoring, best_score,
n_splits, n_iter, best_nn_build_fn, nb_epoch, param_grid, tuning_method,
models_data, kfold, labels, d_name, serial, random_state=0):
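    # Same decision flow as classic_cv_calibration_process, except that the
    # final refit re-tunes hyperparameters on all data (via eu.rscv_tuner /
    # eu.tune_and_evaluate) before saving the finalized estimator.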
temp_estimator = training_estimator
print()
# check predicted probabilities for prediction confidence
uncalibrated_data = eu.probability_confidence_before_calibration(
temp_estimator, X_train, y_train, X_test, y_test, tuning_method,
models_data, labels, serial
)
# Keras stuff
input_dim = 0
if best_model_name == "KerasClf_2nd":
input_dim = int(X_train.shape[1])
del best_model_estim
else:
del temp_estimator
unc_pred_score = uncalibrated_data[0]
unc_pipeline = uncalibrated_data[1]
has_unc_roc, unc_roc_auc = check_model_hasroc("target", uncalibrated_data)
predicted = unc_pipeline.predict(X_test)
w_unc_acc = unc_pipeline.score(X_test, y_test, sample_weight=weights_test)*100
confusion_matrix_and_clf_report(y_type, best_model_name, y_test, predicted)
print()
# in case of LogRegression,
# you should compare its probability curve against the ideal one
if unc_pred_score < ref_pred_score:
print("'%s' is already well calibrated." % best_model_name)
print("Let's resume metrics on test data.")
if best_model_name != "KerasClf_2nd":
print('Mean cv score [%s] of best uncalibrated ("%s"): %1.3f'
% (scoring.strip('neg_'), best_model_name, best_score))
if has_unc_roc:
print('Scoring [%s] of best uncalibrated ("%s") on test data: %1.3f'
% (scoring.strip('neg_'), best_model_name, unc_roc_auc))
print('Accuracy of best uncalibrated ("%s") on test data: %.2f%%'
% (best_model_name, w_unc_acc))
print()
print("=== [task] Refit '%s' on all data." % best_model_name)
print()
print("X shape: ", X.shape)
print("y shape: ", y.shape)
print()
# best_rscv_pipeline -> final_best_rscv_estimator
final_best_pipeline = eu.rscv_tuner(
final_estimator, X, y, n_splits, param_grid, n_iter, scoring,
refit=True, cv_meth=tuning_method, random_state=random_state)
print()
print("Best estimator [%s]'s' params after hyp-tuning on all data."
% best_model_name)
params = final_best_pipeline.get_params()
for param_name in sorted(params.keys()):
print("\t%s: %r" % (param_name, params[param_name]))
print()
# input("Press key to continue... \n")
w_best_acc = final_best_pipeline.score(
X, y, sample_weight=weights_all)*100
au.save_model(
final_best_pipeline, best_model_name
+ '_final_nocalib_' + tuning_method + '_' + serial + '.pkl',
d_name=d_name
)
print()
# Uncomment to see pipeline, steps and params
# print("Finalized uncalibrated best model '%s'."
# % best_model_name)
# for step in final_best_rscv_pipeline.steps:
# print(type(step))
# print("step:", step[0])
# params = step[1].get_params()
# for param_name in sorted(params.keys()):
# print("\t%s: %r" % (param_name, params[param_name]))
else:
print("'%s' needs probability calibration." % best_model_name)
if best_model_name == "KerasClf_2nd":
print("Calibration of Keras neural networks is not implemented yet")
print("""
We assume this is the best calibration we can achieve
with a Keras neural network, which approaches
LogisticRegression's prediction confidence as
the nr of iterations increases.
""")
else:
temp_pipeline = training_estimator
# In case model needs calibration
calib_data = eu.calibrate_probabilities(
temp_pipeline, X_train, y_train, X_test, y_test, 'sigmoid',
tuning_method, models_data, kfold, labels, serial
)
calib_rscv_pred_score = calib_data[0]
calib_rscv_pipeline = calib_data[1]
has_calib_roc, calib_rscv_roc_auc = check_model_hasroc("target", calib_data)
if calib_rscv_pred_score >= unc_pred_score:
print("Sorry, we could not calibrate '%s' any better."
% best_model_name)
print("Rejecting calibrated '%s' and saving the uncalibrated one."
% best_model_name)
print("Let's resume metrics on test data.")
print('Mean cv score [%s] of best uncalibrated ("%s"): %1.3f' % (
scoring.strip('neg_'), best_model_name, best_score))
if has_unc_roc:
print('Scoring [%s] of best uncalibrated ("%s") on test data: %1.3f'
% (scoring.strip('neg_'), best_model_name, unc_roc_auc))
print('Accuracy of best uncalibrated ("%s") on test data: %.2f%%'
% (best_model_name, w_unc_acc))
print()
# final_best_rscv_estimator
final_best_rscv_pipeline = eu.rscv_tuner(
final_estimator, X, y, n_splits, param_grid, n_iter,
scoring, refit=True, cv_meth=tuning_method, random_state=random_state)
w_best_rscv_acc = final_best_rscv_pipeline.score(
X, y, sample_weight=weights_all)*100
print('Accuracy of best ("%s") on all data: %.2f%%'
% (best_model_name, w_best_rscv_acc))
print()
au.save_model(
final_best_rscv_pipeline, best_model_name
+ '_final_nocalib_' + tuning_method + '_'
+ serial + '.pkl', d_name=d_name)
print()
# Uncomment to see pipeline, steps and params
# print("Finalized uncalibrated best model '%s'." % best_model_name)
# for step in final_best_rscv_pipeline.steps:
# print(type(step))
# print("step:", step[0])
# params = step[1].get_params()
# for param_name in sorted(params.keys()):
# print("\t%s: %r" % (param_name, params[param_name]))
# print()
else:
print("Achieved better calibration of model '%s'."
% best_model_name)
print("Let's resume scors on validation and test data.")
print('Mean cv score [%s] of best uncalibrated ("%s"): %1.3f'
% (scoring.strip('neg_'), best_model_name, best_score))
w_calib_rscv_acc = calib_rscv_pipeline.score(
X_test, y_test, sample_weight=weights_test)*100
if has_calib_roc:
                    print('Scoring [%s] of best calibrated ("%s") on test data: %1.3f'
                          % (scoring.strip('neg_'), best_model_name, calib_rscv_roc_auc))
print('Accuracy of best calibrated ("%s") on test data: %.2f%%'
% (best_model_name, w_calib_rscv_acc))
print()
print("=== [task]: Tune '%s'' params with '%s' on all "
"data and calibrate probabilities."
% (best_model_name, tuning_method))
print()
best_rscv_parameters = eu.rscv_tuner(
final_estimator, X, y, n_splits, param_grid, n_iter, scoring,
refit=False, cv_meth=tuning_method, random_state=random_state
)
temp_pipeline.set_params(**best_rscv_parameters)
# calib_rscv_pipeline.named_steps[name].set_params(**best_parameters)
final_calib_rscv_clf = CalibratedClassifierCV(
temp_pipeline, method='sigmoid', cv=kfold)
final_calib_rscv_clf.fit(X, y)
fin_w_rscv_acc = final_calib_rscv_clf.score(
X, y, sample_weight=weights_all)*100
print('Overall accuracy of finalized best CCCV ("%s_%s"): %.2f%%'
% (best_model_name, tuning_method, fin_w_rscv_acc))
au.save_model(
final_calib_rscv_clf, best_model_name + '_final_calib_'
+ tuning_method + '_' + serial + '.pkl', d_name=d_name)
if best_model_name == "KerasClf_2nd":
print('Mean cv score [%s] of best uncalibrated ("%s"): %1.3f' % (
scoring, best_model_name, best_score))
if has_unc_roc:
print('Scoring [%s] of best ("%s") on test data: %1.3f' % (
scoring.strip('neg_'), best_model_name, unc_roc_auc))
print('Accuracy of best ("%s") on test data: %.2f%%'
% (best_model_name, w_unc_acc))
print()
NN_transformer = final_estimator.fit(X, y)
X_transformed = NN_transformer.transform(X)
input_dim_final = int(X_transformed.shape[1])
print()
print("Input dimensions -- training: %d, finalization %d"
% (input_dim, input_dim_final))
print()
f_name = best_model_name + '_feateng_for_keras_model_' + serial
au.save_model(final_estimator, f_name + '.pkl', d_name=d_name)
# del final_pipeline
if input_dim_final != input_dim:
for n in np.arange(0, 3):
param_grid[best_model_name + '__units_' + str(n)] = sp_randint(
input_dim_final, 5*input_dim_final)
best_model_estim = best_keras_clf_estimator(
y_type, best_nn_build_fn, nb_epoch, input_dim, labels)
# finalize Keras clf
steps_fin = []
steps_fin.append((best_model_name, best_model_estim))
untrained_NN_pipeline = Pipeline(steps_fin)
# # final_best_rscv_pipeline
# final_best_NN_pipeline = eu.rscv_tuner(
# untrained_NN_pipeline, X_transformed, y, n_splits, param_grid, n_iter,
# scoring, refit=True, random_state=random_state
# )
# f_name = best_model_name + '_' + tuning_method + '_' + serial
# keras_f_name = au.create_keras_model_filename(f_name, d_name=d_name)
# final_best_NN_pipeline.named_steps[best_model_name].model.save(
# keras_f_name + '.h5')
# this is equivalent to the process above
final_best_NN_pipeline = eu.tune_and_evaluate(
untrained_NN_pipeline, X_transformed, y, None, None, n_splits,
param_grid, n_iter, scoring, [], refit=True,
random_state=random_state, serial=serial, d_name=d_name,
save=True, cv_meth=tuning_method)
w_best_NN_acc = final_best_NN_pipeline.score(
X_transformed, y, sample_weight=weights_all)*100
print('Accuracy of best ("%s") on all data: %.2f%%' % (
best_model_name, w_best_NN_acc))
print()
# Uncomment to see pipeline, steps and params
# print()
# print("Best estimator [%s]'s' params after hyp-tuning on all data."
# % best_model_name)
# for step in final_best_NN_pipeline.steps:
# print(type(step))
# print("step:", step[0])
# params = step[1].get_params()
# for param_name in sorted(params.keys()):
# print("\t%s: %r" % (param_name, params[param_name]))
def calibrate_best_model(
X, y, X_train, X_test, y_train, y_test, tt_index, preprocessing,
scores_of_best_model, all_models_and_parameters, n_splits,
nb_epoch, scoring, models_data, d_name, random_state):
"""
# calibrate best model from cross validation without hyperparameter tuning.
---
...
"""
# Here start best model's calibration process
best_model_name = scores_of_best_model[4][0]
best_model_estim = scores_of_best_model[4][1]
if best_model_name in (
'baseline_nn_default_Clf_2nd', 'baseline_nn_smaller_Clf_2nd',
'larger_nn_Clf_2nd', 'deep_nn_Clf_2nd', 'larger_deep_nn_Clf_2nd',
'deeper_nn_Clf_2nd'):
best_nn_build_fn = scores_of_best_model[4][2]
else:
best_nn_build_fn = None
best_score = scores_of_best_model[0]
best_score_dev = scores_of_best_model[1]
best_cv_results = scores_of_best_model[2]
# best_brier_score = scores_of_best_model[3]
best_exec_time = scores_of_best_model[4]
print()
print("# Check prediction confidence of '%s' and eventually calibrate it."
% best_model_name)
print()
# you should automatically know which encoding method to use: le or ohe
encoding, scaler_tuple, featselector = preprocessing
labels = None
if 'labels' in all_models_and_parameters:
labels = all_models_and_parameters['labels']
print("Checking prediction confidence of multiclass '%s'"
% best_model_name)
else:
print("No list of labels here. It's a binary problem.")
# training pipeline
steps = []
# here you should also insert imputing and label encoding
steps.append((best_model_name, best_model_estim))
training_pipeline = Pipeline(steps)
Y_type = mc.type_of_target(y)
print("X shape: ", X.shape)
print("y shape: ", y.shape)
print("X sample:\n", X[:3])
print("Y sample:\n", y[:3])
print()
# finalization pipeline -- for all models less Keras ones
steps_fin = []
# ...
steps_fin.append(scaler_tuple)
steps_fin.append(featselector)
if best_model_name not in (
'baseline_nn_default_Clf_2nd', 'baseline_nn_smaller_Clf_2nd',
'larger_nn_Clf_2nd', 'deep_nn_Clf_2nd', 'larger_deep_nn_Clf_2nd',
'deeper_nn_Clf_2nd'):
steps_fin.append((best_model_name, best_model_estim))
final_pipeline = Pipeline(steps_fin)
# define serial nr. once and for all
serial = "%04d" % randint(0, 1000)
# Now, this model/pipeline might need calibration
print()
print("======= Training best estimator [%s], checking predicted "
"probabilities and calibrating them" % best_model_name)
# Train LogisticRegression for comparison of predicted probas
kfold = StratifiedKFold(
n_splits=n_splits, shuffle=True, random_state=random_state)
w = eu.calculate_sample_weight(y_test)
w_all = eu.calculate_sample_weight(y)
# LogisticRegression as a calibration reference
steps = []
steps.append(('LogRClf_2nd',
pgd.starting_point_models_and_params['LogRClf_2nd']))
general_lr_pipeline = Pipeline(steps)
temp_pipeline = general_lr_pipeline
tuning_method = 'light_opt'
print()
# check predicted probabilities for prediction confidence
uncalibrated_lr_data = eu.probability_confidence_before_calibration(
temp_pipeline, X_train, y_train, X_test, y_test, tuning_method,
models_data, labels, serial
)
del temp_pipeline
lr_pred_score = uncalibrated_lr_data[0]
lr_pipeline = uncalibrated_lr_data[1]
has_roc, lr_roc_auc = check_model_hasroc(
"LogRegression reference", uncalibrated_lr_data)
print()
predicted = lr_pipeline.predict(X_test)
confusion_matrix_and_clf_report(Y_type, 'LogRClf_2nd', y_test, predicted)
print()
print()
    # Evaluate prediction confidence and, in case, calibrate
# X = X.astype(np.float32)
try:
models_data[0]
except IndexError:
print("No LogRClf_2nd's data here. List 'models_data' is empty.")
except Exception as e:
print(e)
else:
print("LogRClf_2nd's data appended to models_data list")
print()
if best_model_name != "LogRClf_2nd":
        # eventually calibrating any other model != LogReg
input_dim = 0
batch_size = 0
if best_model_name in (
'baseline_nn_default_Clf_2nd', 'baseline_nn_smaller_Clf_2nd',
'larger_nn_Clf_2nd', 'deep_nn_Clf_2nd', 'larger_deep_nn_Clf_2nd',
'deeper_nn_Clf_2nd'):
# no hyperparam tuning for now
tuning_method = 'None'
input_dim = int(X_train.shape[1])
batch_size = 32
best_model_estim = best_keras_clf_estimator(
Y_type, best_nn_build_fn, nb_epoch, input_dim, labels, batch_size)
temp_pipeline = training_pipeline # best model's pipeline
# plot a learning curve
print()
print()
print("[task] === Plot a learning curve")
y_lim = None
if Y_type == "binary":
# minimum and maximum yvalues plotted in learning curve plot
y_lim = (0.5, 1.01)
au.plot_learning_curve(
temp_pipeline, X_train, y_train, ylim=y_lim, cv=n_splits,
scoring=scoring, n_jobs=-2, serial=serial,
tuning=tuning_method, d_name=d_name
)
# plt.show()
del temp_pipeline
# temp_pipeline = training_pipeline
print()
# tune, etc.
print()
print("===== Check need for calibration")
# here you should be able to automatically assess
# whether current model in pipeline
# actually needs calibration or not
# if no calibration is needed,
# you could finalize if you're happy with default hyperparameters
# you could also compare
# model(default_parameters) vs model(tuned_parameters)
###
print("Check '%s''s prediction confidence after CV and calibrate probabilities."
% best_model_name)
print()
# calibration returns models_data ...
classic_cv_calibration_process(
lr_pred_score, training_pipeline, final_pipeline, X, y, X_train, y_train,
X_test, y_test, Y_type, best_model_name, best_model_estim, w, w_all,
scoring, best_score, best_nn_build_fn, nb_epoch, batch_size, tuning_method,
models_data, kfold, labels, d_name, serial)
if Y_type == 'binary':
eu.plot_calibration_curves(
y_test, best_model_name + '_' + tuning_method,
models_data, 1, d_name
)
plt.show()
else:
# best_model_name == 'LogRClf_2nd':
print()
print()
print("[task] === plotting a learning curve")
print("Data:", d_name)
print()
y_lim = None
if Y_type == "binary":
y_lim = (0.5, 1.01)
au.plot_learning_curve(
lr_pipeline, X_train, y_train, ylim=y_lim,
cv=kfold, scoring=scoring, n_jobs=-2, serial=serial,
tuning=tuning_method, d_name=d_name
)
# plt.show()
del lr_pipeline
lr_pipeline = uncalibrated_lr_data[1]
print()
print("'LogRClf_2nd' is already well calibrated for definition!")
print()
print("Mean cv score [%s]: %1.3f"
% (scoring.strip('neg_'), best_score))
if has_roc:
best_lr_roc_auc = lr_roc_auc
print("ROC_AUC score on left-out data: %1.3f." % best_lr_roc_auc)
print("- The higher, the better.")
# refit with RSCV
eu.model_finalizer(
final_pipeline, X, y, scoring, tuning_method, d_name, serial)
# best_lr_pipeline = final_best_lr_pipeline
if Y_type == 'binary':
eu.plot_calibration_curves(
y_test, best_model_name + '_' + tuning_method,
models_data, 1, d_name)
plt.show()
plt.close('all')
print()
print()
def tune_calibrate_best_model(
X, y, X_train, X_test, y_train, y_test, tt_index, preprocessing,
scores_of_best_model, all_models_and_parameters, n_splits, n_iter,
nb_epoch, scoring, models_data, d_name, random_state):
"""
First line.
----------------------------------------------------------------------------
...
"""
# Here start best model's calibration process
best_model_name = scores_of_best_model[4][0]
best_model_estim = scores_of_best_model[4][1]
if best_model_name == "KerasClf_2nd":
best_nn_build_fn = scores_of_best_model[4][2]
else:
best_nn_build_fn = None
best_score = scores_of_best_model[0]
best_score_dev = scores_of_best_model[1]
# best_brier_score = scores_of_best_model[2]
best_exec_time = scores_of_best_model[3]
# here you should automatically know which encoding method to use: le or ohe
encoding, scaler_tuple, featselector = preprocessing
labels = None
if 'labels' in all_models_and_parameters:
labels = all_models_and_parameters['labels']
print("Checking prediction confidence of multiclass '%s'"
% best_model_name)
else:
print("No list of labels here. It's a binary problem.")
# training pipeline
steps = []
# here you should also insert imputing and label encoding
# steps.append(scaler_tuple)
# for now
# steps.append(featselector)
steps.append((best_model_name, best_model_estim))
training_pipeline = Pipeline(steps)
Y_type = mc.type_of_target(y)
# finalization pipeline -- for all models less Keras ones
steps_fin = []
steps_fin.append(scaler_tuple)
steps_fin.append(featselector)
if best_model_name != "KerasClf_2nd":
steps_fin.append((best_model_name, best_model_estim))
final_pipeline = Pipeline(steps_fin)
# define serial nr. once and for all
serial = "%04d" % randint(0, 1000)
# Now, this model/pipeline might need calibration
print()
print("======= Tuning best estimator [%s], checking predicted "
"probabilities and calibrating them" % best_model_name)
# Train LogisticRegression for comparison of predicted probas
# select param grid associated to resulting best model
param_grid = dict()
# retrieve n_iter for rscv/bscv from param_grid
kfold = StratifiedKFold(
n_splits=n_splits, shuffle=True, random_state=random_state)
w = eu.calculate_sample_weight(y_test)
w_all = eu.calculate_sample_weight(y)
# LogisticRegression as a calibration reference
lr, lr_params = logreg_calibration_reference(
Y_type, scoring, all_models_and_parameters)
steps = []
steps.append(('LogRClf_2nd', lr))
general_lr_pipeline = Pipeline(steps)
temp_pipeline = general_lr_pipeline
print()
llr_n_iter = n_iter
best_LogRClf_parameters = eu.tune_and_evaluate(
temp_pipeline, X_train, y_train, X_test, y_test, n_splits,
lr_params, llr_n_iter, scoring, models_data, refit=False,
random_state=random_state)
temp_pipeline.set_params(**best_LogRClf_parameters)
print()
# check predicted probabilities for prediction confidence
uncalibrated_lr_data = eu.probability_confidence_before_calibration(
temp_pipeline, X_train, y_train, X_test, y_test, 'rscv', models_data,
labels, serial
)
del temp_pipeline
print()
lr_pred_score = uncalibrated_lr_data[0]
lr_pipeline = uncalibrated_lr_data[1]
has_roc, lr_roc_auc = check_model_hasroc(
"LogRegression reference", uncalibrated_lr_data)
predicted = lr_pipeline.predict(X_test)
confusion_matrix_and_clf_report(Y_type, 'LogRClf_2nd', y_test, predicted)
print()
print()
try:
models_data[0]
except IndexError:
print("No LogRClf_2nd's data here. List 'models_data' is empty.")
except Exception as e:
print(e)
else:
print("LogRClf_2nd's data appended to models_data list")
print()
# Evaluate prediction confidence and, if needed, calibrate
# X = X.astype(np.float32)
tuning_method = None
if 'xscv' in all_models_and_parameters:
# 'bscv'
tuning_method = all_models_and_parameters['xscv']
else:
tuning_method = 'rscv'
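# 'rscv' = randomized search CV; 'bscv' is assumed to be the Bayesian
# search variant, selected upstream via the 'xscv' key.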
if best_model_name != "LogRClf_2nd":
# tune and, if needed, calibrate any model other than LogReg
if best_model_name == "KerasClf_2nd":
input_dim = int(X_train.shape[1])
param_grid = pgd.Keras_param_grid
# Keras Clf not included in 'all_models_and_parameters' dict
# param_grid[best_model_name + '__units'] = sp_randint(input_dim, 5*input_dim)
for n in np.arange(0, 3):
param_grid[best_model_name + '__units_' + str(n)] = sp_randint(
input_dim, 5*input_dim)
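# scipy.stats.randint (sp_randint) samples integers uniformly in
# [low, high), so each of the three layers draws between input_dim and
# 5*input_dim - 1 units during the randomized search.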
else:
param_grid = all_models_and_parameters[best_model_name][1]
if best_model_name == 'Bagging_SVMClf_2nd':
# 'Bagging_SVMClf_2nd' is fixed at 10 estimators, so n_estimators is not tuned
del param_grid['Bagging_SVMClf_2nd__n_estimators']
# tune, etc.
print()
print("===== Randomized Search CV")
# here you should be able to assess automatically whether the
# current model in the pipeline actually needs calibration;
# if no calibration is needed, you could finalize directly,
# provided the default hyperparameters are good enough;
# you could also compare
# model(default_parameters) vs model(tuned_parameters)
print()
print("Best model's [%s] parameter grid for RSCV:\n"
% best_model_name, param_grid)
print()
temp_pipeline = training_pipeline # best_pipeline
# check that the total space of params >= n_iter
# best_estimator_2nd
best_parameters = eu.tune_and_evaluate(
temp_pipeline, X_train, y_train, X_test, y_test, n_splits,
param_grid, n_iter, scoring, models_data, refit=False,
random_state=random_state, cv_meth=tuning_method)
temp_pipeline.set_params(**best_parameters)
# plot a learning curve
print()
print()
print("[task] === plotting a learning curve")
y_lim = None
if Y_type == "binary":
y_lim = (0.5, 1.01)
au.plot_learning_curve(
temp_pipeline, X_train, y_train, ylim=y_lim, cv=kfold,
scoring=scoring, n_jobs=-2, serial=serial,
tuning=tuning_method, d_name=d_name
)
# plt.show()
del temp_pipeline
### calibration function
print("Check '%s''s prediction confidence after (%s) CV and "
"calibrate probabilities."
% (best_model_name, tuning_method))
print()
nested_cv_calibration_process(
lr_pred_score, training_pipeline, final_pipeline, X, y,
X_train, y_train, X_test, y_test, Y_type, best_model_name,
best_model_estim, w, w_all, scoring, best_score, n_splits, n_iter,
best_nn_build_fn, nb_epoch, param_grid, tuning_method, models_data,
kfold, labels, d_name, serial, random_state)
if Y_type == "binary":
eu.plot_calibration_curves(
y_test, best_model_name + '_' + tuning_method,
models_data, 1, d_name
)
plt.show()
print()
print()
else:
# best_model_name == 'LogRClf_2nd'
# plot a learning curve
print()
print()
print("[task] === plotting a learning curve")
y_lim = None
if Y_type == "binary":
y_lim = (0.5, 1.01)
au.plot_learning_curve(
lr_pipeline, X_train, y_train, ylim=y_lim, cv=kfold,
scoring=scoring, n_jobs=-2, serial=serial,
tuning=tuning_method, d_name=d_name
)
# plt.show()
del lr_pipeline
lr_pipeline = uncalibrated_lr_data[1]
print()
print()
print("'%s' is already well calibrated for definition!"
% best_model_name)
print()
# note: str.strip('neg_') removes *characters*, not a prefix, so remove
# the 'neg_' prefix explicitly instead
scoring_label = scoring[len('neg_'):] if scoring.startswith('neg_') else scoring
print('Mean cv score [%s] of best uncalibrated ("%s"): %1.3f'
% (scoring_label, best_model_name, best_score))
# best_lr_pipeline = lr_pipeline
if lr_roc_auc is not None:
print('Scoring [%s] of best uncalibrated ("%s") on test data: %1.3f'
% (scoring, best_model_name, lr_roc_auc))
w_lr_acc = lr_pipeline.score(X_test, y_test, sample_weight=w)*100
print('Accuracy of best uncalibrated ("%s") on test data: %.2f%%'
% (best_model_name, w_lr_acc))
print()
# refit with RSCV; param_grid = pgd.LogR_param_grid
final_best_lr_pipeline = eu.rscv_tuner(
final_pipeline, X, y, n_splits,
all_models_and_parameters['LogRClf_2nd'][1], n_iter,
scoring, refit=True, random_state=random_state
)
au.save_model(
final_best_lr_pipeline, best_model_name + '_final_calib_rscv_'
+ serial + '.pkl', d_name=d_name)
print()
print("Performance on all data.")
# w_all = calculate_sample_weight(y)
# best_lr_pipeline
w_lr_acc = final_best_lr_pipeline.score(X, y, sample_weight=w_all)*100
print('Accuracy of best ("%s") on all data: %.2f%%'
% (best_model_name, w_lr_acc))
print()
# Uncomment to see pipeline, steps and params
# print("Finalized '%s'." % best_model_name)
# for step in final_best_lr_pipeline.steps:
# print("step:", step[0])
# params = step[1].get_params()
# for param_name in sorted(params.keys()):
# print("\t%s: %r" % (param_name, params[param_name]))
# print()
if Y_type == 'binary':
eu.plot_calibration_curves(
y_test, best_model_name + '_rscv', models_data,
1, d_name)
plt.show()
plt.close('all')
print()
print()
| 34.591339 | 92 | 0.60438 | 5,513 | 43,931 | 4.483947 | 0.078904 | 0.059345 | 0.059426 | 0.015372 | 0.806553 | 0.78034 | 0.74551 | 0.719741 | 0.70801 | 0.68661 | 0 | 0.00956 | 0.299948 | 43,931 | 1,269 | 93 | 34.618597 | 0.794238 | 0.143407 | 0 | 0.654289 | 0 | 0 | 0.184799 | 0.016409 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010243 | false | 0 | 0.024328 | 0 | 0.038412 | 0.248399 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9110644e34a91156a14239ac1b816c7ffe345127 | 31 | py | Python | graph_ensemble/wrappers/__init__.py | YotamFY/GraphEnsemble | a636feb1084a8b45aa1754034744281277474dd1 | [
"BSD-3-Clause"
] | 2 | 2017-11-15T14:08:34.000Z | 2017-11-18T10:43:07.000Z | graph_ensemble/wrappers/__init__.py | YotamFY/GraphEnsemble | a636feb1084a8b45aa1754034744281277474dd1 | [
"BSD-3-Clause"
] | 2 | 2017-11-04T14:33:30.000Z | 2017-11-18T10:42:52.000Z | graph_ensemble/wrappers/__init__.py | YotamFY/GraphEnsemble | a636feb1084a8b45aa1754034744281277474dd1 | [
"BSD-3-Clause"
] | null | null | null | from .sklearn_wrapper import *
| 10.333333 | 29 | 0.806452 | 4 | 31 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 31 | 2 | 30 | 15.5 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
912e8989a18ad101c36787dc7de396be821ea550 | 175 | py | Python | keras/datasets/__init__.py | nils-werner/keras | 78f26df8fb2b8aa5c6262aef44a494a8335a9c6e | [
"MIT"
] | 30 | 2017-04-11T04:17:22.000Z | 2020-09-08T08:18:37.000Z | keras/datasets/__init__.py | candleinwindsteve/keras | 9eb7ecd3e525c9cff31ebd59a96794f212ca5e1e | [
"MIT"
] | 8 | 2020-09-26T00:55:16.000Z | 2022-03-12T00:23:07.000Z | udacity-car/lib/python2.7/site-packages/keras/datasets/__init__.py | 808brick/CarND-Capstone | f9e536b4a9d96322d7e971073602c8969dbd9369 | [
"MIT"
] | 21 | 2017-03-27T08:06:11.000Z | 2020-06-18T09:35:07.000Z | from __future__ import absolute_import
from . import mnist
from . import imdb
from . import reuters
from . import cifar10
from . import cifar100
from . import boston_housing
| 19.444444 | 38 | 0.8 | 24 | 175 | 5.583333 | 0.458333 | 0.447761 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034247 | 0.165714 | 175 | 8 | 39 | 21.875 | 0.883562 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e68764071b92826b6642ca0a264ae0893e184c3d | 93 | py | Python | unravel/admin/document_tag_admin.py | cofiem/logomachy | fed77dc4b821f25a60fd9b9474c232107fe98f53 | [
"Apache-2.0"
] | null | null | null | unravel/admin/document_tag_admin.py | cofiem/logomachy | fed77dc4b821f25a60fd9b9474c232107fe98f53 | [
"Apache-2.0"
] | 1 | 2017-10-29T08:16:02.000Z | 2017-10-30T14:19:59.000Z | unravel/admin/document_tag_admin.py | cofiem/logomachy | fed77dc4b821f25a60fd9b9474c232107fe98f53 | [
"Apache-2.0"
] | null | null | null | from unravel.admin.base_admin import BaseAdmin
class DocumentTagAdmin(BaseAdmin):
pass
| 15.5 | 46 | 0.806452 | 11 | 93 | 6.727273 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139785 | 93 | 5 | 47 | 18.6 | 0.925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
e6993dd0f9a93fd195df6150af1e21d909ae0c46 | 44 | py | Python | topojson/__init__.py | ataalik/topojson.py | 111bfd80fd338ac5d951d8d5ada8ee1471e00a31 | [
"BSD-3-Clause"
] | null | null | null | topojson/__init__.py | ataalik/topojson.py | 111bfd80fd338ac5d951d8d5ada8ee1471e00a31 | [
"BSD-3-Clause"
] | null | null | null | topojson/__init__.py | ataalik/topojson.py | 111bfd80fd338ac5d951d8d5ada8ee1471e00a31 | [
"BSD-3-Clause"
] | null | null | null | from .conversion import convert as topojson
| 22 | 43 | 0.840909 | 6 | 44 | 6.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 44 | 1 | 44 | 44 | 0.973684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fc00b4b8eb1f1a7ffa6a78087f0d3c93b2b3fc81 | 200 | py | Python | copydog/utils/task.py | coagulant/copydog | 81a64738dbcfd6d1149fda914382f08202a3864a | [
"BSD-3-Clause"
] | 16 | 2015-01-04T20:40:08.000Z | 2018-01-24T20:19:12.000Z | copydog/utils/task.py | coagulant/copydog | 81a64738dbcfd6d1149fda914382f08202a3864a | [
"BSD-3-Clause"
] | 3 | 2015-09-18T10:05:00.000Z | 2021-03-25T21:27:26.000Z | copydog/utils/task.py | coagulant/copydog | 81a64738dbcfd6d1149fda914382f08202a3864a | [
"BSD-3-Clause"
] | 5 | 2015-04-21T15:07:08.000Z | 2017-09-01T17:06:58.000Z | # -*- coding: utf-8 -*-
def periodic(scheduler, interval, action, actionargs=()):
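# re-arm the scheduler *before* running the action, so each invocation
# books the next one 'interval' seconds out and the period is preserved.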
scheduler.enter(interval, 1, periodic,
(scheduler, interval, action, actionargs))
action(*actionargs) | 33.333333 | 57 | 0.67 | 20 | 200 | 6.7 | 0.55 | 0.358209 | 0.373134 | 0.462687 | 0.61194 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012048 | 0.17 | 200 | 6 | 58 | 33.333333 | 0.795181 | 0.105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc2b8abc97922d576ccd8ba092119618fd732632 | 6,842 | py | Python | test/test_convert_pes.py | EmbroiderPy/pyembroidery | 3d0db61e7a08bba8e51f4e5873ebfa678c12960b | [
"MIT"
] | null | null | null | test/test_convert_pes.py | EmbroiderPy/pyembroidery | 3d0db61e7a08bba8e51f4e5873ebfa678c12960b | [
"MIT"
] | null | null | null | test/test_convert_pes.py | EmbroiderPy/pyembroidery | 3d0db61e7a08bba8e51f4e5873ebfa678c12960b | [
"MIT"
] | null | null | null | from __future__ import print_function
import unittest
from pyembroidery import *
from test.pattern_for_tests import *
class TestConverts(unittest.TestCase):
def position_equals(self, stitches, j, k):
self.assertEqual(stitches[j][:1], stitches[k][:1])
def test_convert_pes_to_u01(self):
file1 = "convert_u01.pes"
file2 = "converted_pes.u01"
write_pes(get_big_pattern(), file1)
f_pattern = read_pes(file1)
write_u01(f_pattern, file2)
t_pattern = read_u01(file2)
self.assertIsNotNone(t_pattern)
self.assertEqual(t_pattern.count_stitch_commands(NEEDLE_SET), 16)
self.assertEqual(t_pattern.count_stitch_commands(STITCH), 16 * 5)
self.position_equals(t_pattern.stitches, 0, -1)
print("pes->u01: ", t_pattern.stitches)
self.addCleanup(os.remove, file1)
self.addCleanup(os.remove, file2)
def test_convert_pes_to_csv(self):
file1 = "convert_csv.pes"
file2 = "converted_pes.csv"
write_pes(get_big_pattern(), file1)
f_pattern = read_pes(file1)
write_csv(f_pattern, file2)
t_pattern = read_csv(file2)
self.assertIsNotNone(t_pattern)
self.assertEqual(t_pattern.count_stitch_commands(COLOR_CHANGE), 15)
self.assertEqual(t_pattern.count_stitch_commands(STITCH), 16 * 5)
self.position_equals(t_pattern.stitches, 0, -1)
print("pes->csv: ", t_pattern.stitches)
self.addCleanup(os.remove, file1)
self.addCleanup(os.remove, file2)
def test_convert_pes_to_exp(self):
file1 = "convert_exp.pes"
file2 = "converted_pes.exp"
write_pes(get_big_pattern(), file1)
f_pattern = read_pes(file1)
write_exp(f_pattern, file2)
t_pattern = read_exp(file2)
self.assertIsNotNone(t_pattern)
self.assertEqual(t_pattern.count_stitch_commands(COLOR_CHANGE), 15)
self.assertEqual(t_pattern.count_stitch_commands(STITCH), 16 * 5)
self.position_equals(t_pattern.stitches, 0, -1)
print("pes->exp: ", t_pattern.stitches)
self.addCleanup(os.remove, file1)
self.addCleanup(os.remove, file2)
def test_convert_pes_to_pes(self):
file1 = "convert_pes.pes"
file2 = "converted_pes.pes"
write_pes(get_big_pattern(), file1)
f_pattern = read_pes(file1)
write_pes(f_pattern, file2)
t_pattern = read_pes(file2)
self.assertIsNotNone(t_pattern)
self.assertEqual(t_pattern.count_stitch_commands(COLOR_CHANGE), 15)
self.assertEqual(t_pattern.count_stitch_commands(STITCH), 16 * 5)
self.position_equals(t_pattern.stitches, 0, -1)
print("pes->pes: ", t_pattern.stitches)
self.addCleanup(os.remove, file1)
self.addCleanup(os.remove, file2)
def test_convert_pes_to_jef(self):
file1 = "convert_jef.pes"
file2 = "converted_pes.jef"
write_pes(get_big_pattern(), file1)
f_pattern = read_pes(file1)
write_jef(f_pattern, file2)
t_pattern = read_jef(file2)
self.assertIsNotNone(t_pattern)
self.assertEqual(t_pattern.count_stitch_commands(COLOR_CHANGE), 15)
self.assertEqual(t_pattern.count_stitch_commands(STITCH), 16 * 5)
self.position_equals(t_pattern.stitches, 0, -1)
print("pes->jef: ", t_pattern.stitches)
self.addCleanup(os.remove, file1)
self.addCleanup(os.remove, file2)
def test_convert_pes_to_pec(self):
file1 = "convert_pec.pes"
file2 = "converted_pes.pec"
write_pes(get_big_pattern(), file1)
f_pattern = read_pes(file1)
write_pec(f_pattern, file2)
t_pattern = read_pec(file2)
self.assertIsNotNone(t_pattern)
self.assertEqual(t_pattern.count_stitch_commands(COLOR_CHANGE), 15)
self.assertEqual(t_pattern.count_stitch_commands(STITCH), 16 * 5)
self.position_equals(t_pattern.stitches, 0, -1)
print("pes->pec: ", t_pattern.stitches)
self.addCleanup(os.remove, file1)
self.addCleanup(os.remove, file2)
def test_convert_pes_to_vp3(self):
file1 = "convert_vp3.pes"
file2 = "converted_pes.vp3"
write_pes(get_big_pattern(), file1)
f_pattern = read_pes(file1)
write_vp3(f_pattern, file2)
t_pattern = read_vp3(file2)
self.assertIsNotNone(t_pattern)
self.assertEqual(t_pattern.count_stitch_commands(COLOR_CHANGE), 15)
self.assertEqual(t_pattern.count_stitch_commands(STITCH), 16 * 5)
self.position_equals(t_pattern.stitches, 0, -1)
print("pes->vp3: ", t_pattern.stitches)
self.addCleanup(os.remove, file1)
self.addCleanup(os.remove, file2)
def test_convert_pes_to_dst(self):
file1 = "convert_dst.pes"
file2 = "converted_pes.dst"
write_pes(get_big_pattern(), file1)
f_pattern = read_pes(file1)
write_dst(f_pattern, file2)
t_pattern = read_dst(file2)
self.assertIsNotNone(t_pattern)
self.assertEqual(t_pattern.count_stitch_commands(COLOR_CHANGE), 15)
self.assertEqual(t_pattern.count_stitch_commands(STITCH), 16 * 5)
self.position_equals(t_pattern.stitches, 0, -1)
print("pes->dst: ", t_pattern.stitches)
self.addCleanup(os.remove, file1)
self.addCleanup(os.remove, file2)
def test_convert_pes_to_gcode(self):
file1 = "convert_gcode.pes"
file2 = "converted_pes.gcode"
write_pes(get_big_pattern(), file1)
f_pattern = read_pes(file1)
write_gcode(f_pattern, file2)
t_pattern = read_gcode(file2)
self.assertIsNotNone(t_pattern)
self.assertEqual(t_pattern.count_stitch_commands(COLOR_CHANGE), 15)
self.assertEqual(t_pattern.count_stitch_commands(STITCH), 16 * 5)
self.position_equals(t_pattern.stitches, 0, -1)
print("pes->gcode: ", t_pattern.stitches)
self.addCleanup(os.remove, file1)
self.addCleanup(os.remove, file2)
def test_convert_pes_to_xxx(self):
file1 = "convert_xxx.pes"
file2 = "converted_pes.xxx"
write_pes(get_big_pattern(), file1)
f_pattern = read_pes(file1)
write_xxx(f_pattern, file2)
t_pattern = read_xxx(file2)
self.assertIsNotNone(t_pattern)
self.assertEqual(t_pattern.count_stitch_commands(COLOR_CHANGE), 15)
self.assertEqual(t_pattern.count_stitch_commands(STITCH), 16 * 5)
self.position_equals(t_pattern.stitches, 0, -1)
print("pes->xxx: ", t_pattern.stitches)
self.addCleanup(os.remove, file1)
self.addCleanup(os.remove, file2)
def test_write_pes_long(self):
file1 = "long.pes"
write_pes(get_long_jump(), file1)
self.addCleanup(os.remove, file1)
| 38.655367 | 75 | 0.668372 | 895 | 6,842 | 4.803352 | 0.068156 | 0.111654 | 0.078158 | 0.107467 | 0.813212 | 0.801349 | 0.743196 | 0.743196 | 0.743196 | 0.743196 | 0 | 0.032598 | 0.22435 | 6,842 | 176 | 76 | 38.875 | 0.777464 | 0 | 0 | 0.529801 | 0 | 0 | 0.063432 | 0 | 0 | 0 | 0 | 0 | 0.205298 | 1 | 0.07947 | false | 0 | 0.02649 | 0 | 0.112583 | 0.072848 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc3030bbf373749ce797dd5a92ba404e7f2df4f8 | 1,138 | py | Python | src/AOJ/ITP1_6_C.py | nabetama-training/CompetitionProgrammingPractice | 0801173df3992c2e78b02b383f2df9ba792cbf2f | [
"BSD-2-Clause"
] | null | null | null | src/AOJ/ITP1_6_C.py | nabetama-training/CompetitionProgrammingPractice | 0801173df3992c2e78b02b383f2df9ba792cbf2f | [
"BSD-2-Clause"
] | 2 | 2020-07-04T04:19:28.000Z | 2020-07-26T06:16:07.000Z | src/AOJ/ITP1_6_C.py | nabetama-training/CompetitionProgrammingPractice | 0801173df3992c2e78b02b383f2df9ba792cbf2f | [
"BSD-2-Clause"
] | null | null | null | def resolve():
# bills = [
# [
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# ],
# [
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# ],
# [
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# ],
# [
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# ],
# ]
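# the commented-out literal above sketches the exact shape built by the
# comprehension below: 4 buildings x 3 floors x 10 rooms, all zeroed.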
bills = [[[0 for _ in range(10)] for _ in range(3)] for _ in range(4)]
n = int(input())
for _ in range(n):
b, f, r, v = map(int, input().split())
bills[b - 1][f - 1][r - 1] += v
count = 0
for bill in bills:
count += 1
for floor in bill:
[print(" {}".format(r), end='') for r in floor]
print('')
if count < len(bills):
print('####################')
| 28.45 | 74 | 0.282953 | 186 | 1,138 | 1.709677 | 0.150538 | 0.748428 | 1.113208 | 1.471698 | 0.377358 | 0.377358 | 0.377358 | 0.377358 | 0.377358 | 0.377358 | 0 | 0.215947 | 0.471002 | 1,138 | 39 | 75 | 29.179487 | 0.312292 | 0.477153 | 0 | 0 | 0 | 0 | 0.04021 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0 | 0 | 0.071429 | 0.214286 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc398e96a4e8b1a14aa331e080cf17290c4f7ca7 | 27 | py | Python | out_packg/main.py | Firekiss/python_learn | 15922af566a08924834ff924982a36a65b724bbf | [
"MIT"
] | null | null | null | out_packg/main.py | Firekiss/python_learn | 15922af566a08924834ff924982a36a65b724bbf | [
"MIT"
] | null | null | null | out_packg/main.py | Firekiss/python_learn | 15922af566a08924834ff924982a36a65b724bbf | [
"MIT"
] | null | null | null | def main():
print('main') | 13.5 | 15 | 0.592593 | 4 | 27 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 2 | 15 | 13.5 | 0.695652 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
fc77ec8ac86182c524326f90ac9990cd6235a2bf | 124 | py | Python | bin/src/seq_len.py | pzross/pfsr1-motif | e9bbf370b59680436d9a22d691a522b7d1a59a32 | [
"MIT"
] | null | null | null | bin/src/seq_len.py | pzross/pfsr1-motif | e9bbf370b59680436d9a22d691a522b7d1a59a32 | [
"MIT"
] | null | null | null | bin/src/seq_len.py | pzross/pfsr1-motif | e9bbf370b59680436d9a22d691a522b7d1a59a32 | [
"MIT"
] | null | null | null | def seq_len(sequence):
#sequence = sequence.replace("\n", "")
sequence = sequence.replace("\r", "")
return len(sequence)
| 24.8 | 39 | 0.669355 | 15 | 124 | 5.466667 | 0.533333 | 0.585366 | 0.560976 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120968 | 124 | 4 | 40 | 31 | 0.752294 | 0.298387 | 0 | 0 | 0 | 0 | 0.023256 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
5d8ccb20542df2a417135ed2c8ec46527e233a4f | 207 | py | Python | tests/test_problem30.py | nolanwrightdev/blind-75-python | b92ef3449eb0143c760ddd339897a3f0a2972830 | [
"MIT"
] | 6 | 2020-02-01T23:29:51.000Z | 2022-02-20T20:46:56.000Z | tests/test_problem30.py | nolanwrightdev/blind-75-python | b92ef3449eb0143c760ddd339897a3f0a2972830 | [
"MIT"
] | null | null | null | tests/test_problem30.py | nolanwrightdev/blind-75-python | b92ef3449eb0143c760ddd339897a3f0a2972830 | [
"MIT"
] | null | null | null | import unittest
from problems.problem30 import solution
class Test(unittest.TestCase):
def test(self):
self.assertEqual(solution([7, 1, 5, 3, 6, 4]), 5)
self.assertEqual(solution([7, 6, 4, 3, 1]), 0)
| 23 | 51 | 0.695652 | 33 | 207 | 4.363636 | 0.575758 | 0.208333 | 0.319444 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084746 | 0.144928 | 207 | 8 | 52 | 25.875 | 0.728814 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5da76418571acf80c2956f0a0d9098011db947be | 43 | py | Python | hello world.py | 1172100327/hit-1172100327 | 3d162f46cd765f2581991fecf7796fae8453f0e1 | [
"MIT"
] | null | null | null | hello world.py | 1172100327/hit-1172100327 | 3d162f46cd765f2581991fecf7796fae8453f0e1 | [
"MIT"
] | 1 | 2019-07-07T10:26:33.000Z | 2019-07-07T10:26:33.000Z | hello world.py | 1172100327/hit-1172100327 | 3d162f46cd765f2581991fecf7796fae8453f0e1 | [
"MIT"
] | 1 | 2019-07-07T10:09:48.000Z | 2019-07-07T10:09:48.000Z | print("hello world")
print("hello python")
| 14.333333 | 21 | 0.72093 | 6 | 43 | 5.166667 | 0.666667 | 0.645161 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 2 | 22 | 21.5 | 0.794872 | 0 | 0 | 0 | 0 | 0 | 0.534884 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
5dd143f87ce4bc538e283a0ba125a465a5f58cbb | 70 | py | Python | utils/__init__.py | alessandrolamberti/voice_gender_recognition | 95cc25a239cb78cc451632474e3307c676f658ee | [
"MIT"
] | 1 | 2022-01-31T13:02:27.000Z | 2022-01-31T13:02:27.000Z | utils/__init__.py | alessandrolamberti/voice_gender_recognition | 95cc25a239cb78cc451632474e3307c676f658ee | [
"MIT"
] | null | null | null | utils/__init__.py | alessandrolamberti/voice_gender_recognition | 95cc25a239cb78cc451632474e3307c676f658ee | [
"MIT"
] | null | null | null | from .audio import *
from .preprocess import *
from .database import * | 23.333333 | 25 | 0.757143 | 9 | 70 | 5.888889 | 0.555556 | 0.377358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157143 | 70 | 3 | 26 | 23.333333 | 0.898305 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b9057e092d022a74f9413b63058254a4f9ecc606 | 96 | py | Python | venv/lib/python3.8/site-packages/numpy/lib/tests/test_recfunctions.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/numpy/lib/tests/test_recfunctions.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/numpy/lib/tests/test_recfunctions.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/a9/08/d1/1ed20267038fe30bf947d889ab58b81a90772e43b1c98262cf4ad9537e | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.447917 | 0 | 96 | 1 | 96 | 96 | 0.447917 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f8eac8bd36b86ba853fab2bec106dcd5c1bbb2d9 | 436 | py | Python | smtwiki/core/views.py | seahurt/smtwiki | 98a5a53fb50ab558a88c6ef2fe907c5d7882a600 | [
"MIT"
] | null | null | null | smtwiki/core/views.py | seahurt/smtwiki | 98a5a53fb50ab558a88c6ef2fe907c5d7882a600 | [
"MIT"
] | null | null | null | smtwiki/core/views.py | seahurt/smtwiki | 98a5a53fb50ab558a88c6ef2fe907c5d7882a600 | [
"MIT"
] | null | null | null | from django.shortcuts import render
# Create your views here.
def login(request):
pass
def logout(request):
pass
def home(request):
pass
def profile(request):
pass
def category_view(request):
pass
def tag_view(request):
pass
def archive(request):
pass
def user_doc_view(request):
pass
def search(request):
pass
def logs_view(request):
pass
def statistics(request):
pass
| 9.276596 | 35 | 0.669725 | 58 | 436 | 4.948276 | 0.431034 | 0.421603 | 0.487805 | 0.250871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.247706 | 436 | 46 | 36 | 9.478261 | 0.875 | 0.052752 | 0 | 0.478261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.478261 | false | 0.478261 | 0.043478 | 0 | 0.521739 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
5d0c739576fa227182f3e4456e44795db4abf47e | 45 | py | Python | pyhac/tracker/__init__.py | zroger49/hac | 5905369344c985d5293d572a610c82308306e385 | [
"Apache-2.0"
] | 28 | 2021-08-18T09:33:30.000Z | 2021-11-08T09:14:35.000Z | pyhac/tracker/__init__.py | zroger49/hac | 5905369344c985d5293d572a610c82308306e385 | [
"Apache-2.0"
] | 3 | 2021-08-19T14:01:25.000Z | 2021-09-06T10:57:46.000Z | pyhac/tracker/__init__.py | zroger49/hac | 5905369344c985d5293d572a610c82308306e385 | [
"Apache-2.0"
] | 4 | 2021-08-12T02:35:17.000Z | 2021-09-21T18:35:16.000Z | from .holistic_tracker import HolisticTracker | 45 | 45 | 0.911111 | 5 | 45 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 45 | 1 | 45 | 45 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d123c11b362dc8304b5d14baa5ea8f06faa934f | 13,044 | py | Python | wrappers/python/TestMPINInstall.py | skair39/milagro-crypto-c | 819e856f4648d2113891e226da74a5466038f820 | [
"Apache-2.0"
] | 1 | 2021-07-13T20:22:34.000Z | 2021-07-13T20:22:34.000Z | wrappers/python/TestMPINInstall.py | skair39/milagro-crypto-c | 819e856f4648d2113891e226da74a5466038f820 | [
"Apache-2.0"
] | null | null | null | wrappers/python/TestMPINInstall.py | skair39/milagro-crypto-c | 819e856f4648d2113891e226da74a5466038f820 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
"""
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
"""
import os
import unittest
import json
import hashlib
import mpin
HASH_TYPE_MPIN = mpin.SHA256
class TestMPIN(unittest.TestCase):
"""Tests M-Pin crypto code"""
def setUp(self):
# Form MPin ID
endUserData = {
"issued": "2013-10-19T06:12:28Z",
"userID": "testUser@miracl.com",
"mobile": 1,
"salt": "e985da112a378c222cfc2f7226097b0c"
}
self.mpin_id = json.dumps(endUserData)
# Hash value of MPIN_ID
self.hash_mpin_id = mpin.hash_id(HASH_TYPE_MPIN, self.mpin_id)
# Assign a seed value
seedHex = "3ade3d4a5c698e8910bf92f25d97ceeb7c25ed838901a5cb5db2cf25434c1fe76c7f79b7af2e5e1e4988e4294dbd9bd9fa3960197fb7aec373609fb890d74b16a4b14b2ae7e23b75f15d36c21791272372863c4f8af39980283ae69a79cf4e48e908f9e0"
self.seed = seedHex.decode("hex")
self.date = 16238
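# .decode("hex") is Python 2 syntax (this suite targets Python 2); the
# Python 3 equivalent would be bytes.fromhex(seedHex). The fixed date is
# presumably a day count since the Unix epoch (16238 ~ mid-June 2014),
# matching how M-Pin time permits are dated.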
def test_1(self):
"""test_1 Good PIN and good token"""
PIN1 = 1234
PIN2 = 1234
# random number generator
rng = mpin.create_csprng(self.seed)
# Generate Client master secret share for MIRACL and Customer
rtn, ms1 = mpin.random_generate(rng)
self.assertEqual(rtn, 0)
rtn, ms2 = mpin.random_generate(rng)
self.assertEqual(rtn, 0)
# Generate server secret shares
rtn, ss1 = mpin.get_server_secret(ms1)
self.assertEqual(rtn, 0)
rtn, ss2 = mpin.get_server_secret(ms2)
self.assertEqual(rtn, 0)
# Combine server secret shares
rtn, server_secret = mpin.recombine_G2(ss1, ss2)
self.assertEqual(rtn, 0)
# Generate client secret shares
rtn, cs1 = mpin.get_client_secret(ms1, self.hash_mpin_id)
self.assertEqual(rtn, 0)
rtn, cs2 = mpin.get_client_secret(ms2, self.hash_mpin_id)
self.assertEqual(rtn, 0)
# Combine client secret shares
rtn, client_secret = mpin.recombine_G1(cs1, cs2)
self.assertEqual(rtn, 0)
# Generate Time Permit shares
rtn, tp1 = mpin.get_client_permit(
HASH_TYPE_MPIN, self.date, ms1, self.hash_mpin_id)
self.assertEqual(rtn, 0)
rtn, tp2 = mpin.get_client_permit(
HASH_TYPE_MPIN, self.date, ms2, self.hash_mpin_id)
self.assertEqual(rtn, 0)
# Combine Time Permit shares
rtn, time_permit = mpin.recombine_G1(tp1, tp2)
self.assertEqual(rtn, 0)
# Client extracts PIN from secret to create Token
rtn, token = mpin.extract_pin(
HASH_TYPE_MPIN, self.mpin_id, PIN1, client_secret)
self.assertEqual(rtn, 0)
# Client first pass
rtn, x, u, ut, sec = mpin.client_1(
HASH_TYPE_MPIN, self.date, self.mpin_id, rng, None, PIN2, token, time_permit)
self.assertEqual(rtn, 0)
# Server calculates H(ID) and H(T|H(ID))
HID, HTID = mpin.server_1(HASH_TYPE_MPIN, self.date, self.mpin_id)
# Server generates Random number Y and sends it to Client
rtn, y = mpin.random_generate(rng)
self.assertEqual(rtn, 0)
# Client second pass
rtn, v = mpin.client_2(x, y, sec)
self.assertEqual(rtn, 0)
# Server second pass
rtn, E, F = mpin.server_2(
self.date, HID, HTID, y, server_secret, u, ut, v)
self.assertEqual(rtn, 0)
def test_2(self):
"""test_2 Bad PIN and good token"""
PIN1 = 1234
PIN2 = 2000
# random number generator
rng = mpin.create_csprng(self.seed)
# Generate Client master secret share for MIRACL and Customer
rtn, ms1 = mpin.random_generate(rng)
self.assertEqual(rtn, 0)
rtn, ms2 = mpin.random_generate(rng)
self.assertEqual(rtn, 0)
# Generate server secret shares
rtn, ss1 = mpin.get_server_secret(ms1)
self.assertEqual(rtn, 0)
rtn, ss2 = mpin.get_server_secret(ms2)
self.assertEqual(rtn, 0)
# Combine server secret shares
rtn, server_secret = mpin.recombine_G2(ss1, ss2)
self.assertEqual(rtn, 0)
# Generate client secret shares
rtn, cs1 = mpin.get_client_secret(ms1, self.hash_mpin_id)
self.assertEqual(rtn, 0)
rtn, cs2 = mpin.get_client_secret(ms2, self.hash_mpin_id)
self.assertEqual(rtn, 0)
# Combine client secret shares
rtn, client_secret = mpin.recombine_G1(cs1, cs2)
self.assertEqual(rtn, 0)
# Generate Time Permit shares
rtn, tp1 = mpin.get_client_permit(
HASH_TYPE_MPIN, self.date, ms1, self.hash_mpin_id)
self.assertEqual(rtn, 0)
rtn, tp2 = mpin.get_client_permit(
HASH_TYPE_MPIN, self.date, ms2, self.hash_mpin_id)
self.assertEqual(rtn, 0)
# Combine Time Permit shares
rtn, time_permit = mpin.recombine_G1(tp1, tp2)
self.assertEqual(rtn, 0)
# Client extracts PIN from secret to create Token
rtn, token = mpin.extract_pin(
HASH_TYPE_MPIN, self.mpin_id, PIN1, client_secret)
self.assertEqual(rtn, 0)
# Client first pass
rtn, x, u, ut, sec = mpin.client_1(
HASH_TYPE_MPIN, self.date, self.mpin_id, rng, None, PIN2, token, time_permit)
self.assertEqual(rtn, 0)
# Server calculates H(ID) and H(T|H(ID))
HID, HTID = mpin.server_1(HASH_TYPE_MPIN, self.date, self.mpin_id)
# Server generates Random number Y and sends it to Client
rtn, y = mpin.random_generate(rng)
self.assertEqual(rtn, 0)
# Client second pass
rtn, v = mpin.client_2(x, y, sec)
self.assertEqual(rtn, 0)
# Server second pass
rtn, E, F = mpin.server_2(
self.date, HID, HTID, y, server_secret, u, ut, v)
self.assertEqual(rtn, -19)
def test_3(self):
"""test_2 Good PIN and bad token"""
PIN1 = 1234
PIN2 = 1234
# random number generator
rng = mpin.create_csprng(self.seed)
# Generate Client master secret share for MIRACL and Customer
rtn, ms1 = mpin.random_generate(rng)
self.assertEqual(rtn, 0)
rtn, ms2 = mpin.random_generate(rng)
self.assertEqual(rtn, 0)
# Generate server secret shares
rtn, ss1 = mpin.get_server_secret(ms1)
self.assertEqual(rtn, 0)
rtn, ss2 = mpin.get_server_secret(ms2)
self.assertEqual(rtn, 0)
# Combine server secret shares
rtn, server_secret = mpin.recombine_G2(ss1, ss2)
self.assertEqual(rtn, 0)
# Generate client secret shares
rtn, cs1 = mpin.get_client_secret(ms1, self.hash_mpin_id)
self.assertEqual(rtn, 0)
rtn, cs2 = mpin.get_client_secret(ms2, self.hash_mpin_id)
self.assertEqual(rtn, 0)
# Combine client secret shares
rtn, client_secret = mpin.recombine_G1(cs1, cs2)
self.assertEqual(rtn, 0)
# Generate Time Permit shares
rtn, tp1 = mpin.get_client_permit(
HASH_TYPE_MPIN, self.date, ms1, self.hash_mpin_id)
self.assertEqual(rtn, 0)
rtn, tp2 = mpin.get_client_permit(
HASH_TYPE_MPIN, self.date, ms2, self.hash_mpin_id)
self.assertEqual(rtn, 0)
# Combine Time Permit shares
rtn, time_permit = mpin.recombine_G1(tp1, tp2)
self.assertEqual(rtn, 0)
# Client extracts PIN from secret to create Token
rtn, token = mpin.extract_pin(
HASH_TYPE_MPIN, self.mpin_id, PIN1, client_secret)
self.assertEqual(rtn, 0)
# Client first pass
rtn, x, u, ut, sec = mpin.client_1(
HASH_TYPE_MPIN, self.date, self.mpin_id, rng, None, PIN2, token, time_permit)
self.assertEqual(rtn, 0)
# Server calculates H(ID) and H(T|H(ID))
HID, HTID = mpin.server_1(HASH_TYPE_MPIN, self.date, self.mpin_id)
# Server generates Random number Y and sends it to Client
rtn, y = mpin.random_generate(rng)
self.assertEqual(rtn, 0)
# Client second pass
rtn, v = mpin.client_2(x, y, sec)
self.assertEqual(rtn, 0)
# Server second pass
# v is equal to ut to model a bad token
rtn, E, F = mpin.server_2(
self.date, HID, HTID, y, server_secret, u, ut, ut)
self.assertEqual(rtn, -19)
def test_4(self):
"""test_4 Make sure all client secret are unique"""
# random number generator
rng = mpin.create_csprng(self.seed)
# Generate master secret share
rtn, ms1 = mpin.random_generate(rng)
self.assertEqual(rtn, 0)
s = set()
match = 0
for i in range(1, 1000):
rand_val = os.urandom(32)
hash_mpin_id = mpin.hash_id(HASH_TYPE_MPIN, rand_val)
# Generate client secret shares
rtn, cs1 = mpin.get_client_secret(ms1, hash_mpin_id)
self.assertEqual(rtn, 0)
cs1Hex = cs1.encode("hex")
if cs1Hex in s:
match = 1
self.assertEqual(match, 0)
s.add(cs1Hex)
def test_5(self):
"""test_5 Make sure all one time passwords are random i.e. they should collide"""
# random number generator
rng = mpin.create_csprng(self.seed)
s = set()
match = 0
for i in range(1, 10000):
OTP = mpin.generate_otp(rng)
if OTP in s:
# print i
match = 1
s.add(OTP)
self.assertEqual(match, 1)
def test_6(self):
"""test_6 Make sure all random values are random i.e. they should collide"""
# random number generator
rng = mpin.create_csprng(self.seed)
# Generate 4 byte random number
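# birthday bound: 4-byte values give 2**32 outcomes, so a collision is
# expected after roughly sqrt(2**32) = 65536 draws and is all but
# certain within the 208900 draws below.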
s = set()
match = 0
for i in range(1, 208900):
random = mpin.generate_random(rng, 4)
# print i, " ", random.encode("hex")
if random in s:
match = 1
break
s.add(random)
self.assertEqual(match, 1)
def test_7(self):
"""test_7 AES-GCM: Successful encryption and decryption"""
# Generate 16 byte key
key = os.urandom(mpin.PAS)
# Generate 12 byte IV
iv = os.urandom(mpin.IVL)
# Generate a 32 byte random header
header = os.urandom(32)
# Plaintext input
plaintext1 = "A test message"
ciphertext, tag1 = mpin.aes_gcm_encrypt(key, iv, header, plaintext1)
plaintext2, tag2 = mpin.aes_gcm_decrypt(key, iv, header, ciphertext)
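# GCM recomputes the authentication tag over header + ciphertext during
# decryption, so equal tags mean the message both decrypted and
# authenticated correctly.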
self.assertEqual(tag1, tag2)
self.assertEqual(plaintext1, plaintext2)
def test_8(self):
"""test_8 AES-GCM: Failed encryption and decryption by changing a ciphertext byte"""
# Generate 16 byte key
key = os.urandom(mpin.PAS)
# Generate 12 byte IV
iv = os.urandom(mpin.IVL)
# Generate a 32 byte random header
header = os.urandom(32)
# Plaintext input
plaintext1 = "A test message"
ciphertext, tag1 = mpin.aes_gcm_encrypt(key, iv, header, plaintext1)
new = list(ciphertext)
new[0] = "a"
ciphertext_bad = ''.join(new)
plaintext2, tag2 = mpin.aes_gcm_decrypt(
key, iv, header, ciphertext_bad)
self.assertNotEqual(tag1, tag2)
self.assertNotEqual(plaintext1, plaintext2)
def test_9(self):
"""test_9 AES-GCM: Failed encryption and decryption by changing a header byte"""
# Generate 16 byte key
key = os.urandom(mpin.PAS)
# Generate 12 byte IV
iv = os.urandom(mpin.IVL)
# Generate a 32 byte random header
header = os.urandom(32)
# Plaintext input
plaintext1 = "A test message"
ciphertext, tag1 = mpin.aes_gcm_encrypt(key, iv, header, plaintext1)
new = list(header)
new[0] = "a"
header_bad = ''.join(new)
plaintext2, tag2 = mpin.aes_gcm_decrypt(
key, iv, header_bad, ciphertext)
self.assertNotEqual(tag1, tag2)
self.assertEqual(plaintext1, plaintext2)
if __name__ == '__main__':
# Run tests
unittest.main()
| 32.207407 | 220 | 0.614842 | 1,720 | 13,044 | 4.533721 | 0.148256 | 0.10772 | 0.115414 | 0.116953 | 0.756091 | 0.749936 | 0.725827 | 0.717363 | 0.717363 | 0.688895 | 0 | 0.048918 | 0.294772 | 13,044 | 404 | 221 | 32.287129 | 0.798782 | 0.248083 | 0 | 0.73516 | 0 | 0 | 0.036249 | 0.02396 | 0 | 0 | 0 | 0 | 0.269406 | 1 | 0.045662 | false | 0 | 0.022831 | 0 | 0.073059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d158d2020aa8bbd22f2857f2541159609242ac3 | 2,290 | py | Python | temboo/core/Library/Genability/TariffData/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 7 | 2016-03-07T02:07:21.000Z | 2022-01-21T02:22:41.000Z | temboo/core/Library/Genability/TariffData/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | null | null | null | temboo/core/Library/Genability/TariffData/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 8 | 2016-06-14T06:01:11.000Z | 2020-04-22T09:21:44.000Z | from temboo.Library.Genability.TariffData.GetLoadServingEntities import GetLoadServingEntities, GetLoadServingEntitiesInputSet, GetLoadServingEntitiesResultSet, GetLoadServingEntitiesChoreographyExecution
from temboo.Library.Genability.TariffData.GetLoadServingEntity import GetLoadServingEntity, GetLoadServingEntityInputSet, GetLoadServingEntityResultSet, GetLoadServingEntityChoreographyExecution
from temboo.Library.Genability.TariffData.GetMetricsForZipCode import GetMetricsForZipCode, GetMetricsForZipCodeInputSet, GetMetricsForZipCodeResultSet, GetMetricsForZipCodeChoreographyExecution
from temboo.Library.Genability.TariffData.GetPropertyKey import GetPropertyKey, GetPropertyKeyInputSet, GetPropertyKeyResultSet, GetPropertyKeyChoreographyExecution
from temboo.Library.Genability.TariffData.GetPropertyKeys import GetPropertyKeys, GetPropertyKeysInputSet, GetPropertyKeysResultSet, GetPropertyKeysChoreographyExecution
from temboo.Library.Genability.TariffData.GetSeasonGroups import GetSeasonGroups, GetSeasonGroupsInputSet, GetSeasonGroupsResultSet, GetSeasonGroupsChoreographyExecution
from temboo.Library.Genability.TariffData.GetTariff import GetTariff, GetTariffInputSet, GetTariffResultSet, GetTariffChoreographyExecution
from temboo.Library.Genability.TariffData.GetTariffs import GetTariffs, GetTariffsInputSet, GetTariffsResultSet, GetTariffsChoreographyExecution
from temboo.Library.Genability.TariffData.GetTerritories import GetTerritories, GetTerritoriesInputSet, GetTerritoriesResultSet, GetTerritoriesChoreographyExecution
from temboo.Library.Genability.TariffData.GetTerritory import GetTerritory, GetTerritoryInputSet, GetTerritoryResultSet, GetTerritoryChoreographyExecution
from temboo.Library.Genability.TariffData.GetTimeOfUseGroup import GetTimeOfUseGroup, GetTimeOfUseGroupInputSet, GetTimeOfUseGroupResultSet, GetTimeOfUseGroupChoreographyExecution
from temboo.Library.Genability.TariffData.GetTimeOfUseGroupIntervals import GetTimeOfUseGroupIntervals, GetTimeOfUseGroupIntervalsInputSet, GetTimeOfUseGroupIntervalsResultSet, GetTimeOfUseGroupIntervalsChoreographyExecution
from temboo.Library.Genability.TariffData.GetZipCodeDetails import GetZipCodeDetails, GetZipCodeDetailsInputSet, GetZipCodeDetailsResultSet, GetZipCodeDetailsChoreographyExecution
| 163.571429 | 224 | 0.920524 | 143 | 2,290 | 14.741259 | 0.405594 | 0.06167 | 0.104839 | 0.166509 | 0.228178 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039738 | 2,290 | 13 | 225 | 176.153846 | 0.958618 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d248b5311805cc669f597a0049416f9ae77266a | 38 | py | Python | djmo/__init__.py | davidegriffon/djmo | 853212471e59a0fa49d22f17cf57caeb552783f8 | [
"MIT"
] | 5 | 2015-12-09T08:13:03.000Z | 2019-05-12T07:14:25.000Z | djmo/__init__.py | davidegriffon/djmo | 853212471e59a0fa49d22f17cf57caeb552783f8 | [
"MIT"
] | null | null | null | djmo/__init__.py | davidegriffon/djmo | 853212471e59a0fa49d22f17cf57caeb552783f8 | [
"MIT"
] | null | null | null | from .decorators import observe_models | 38 | 38 | 0.894737 | 5 | 38 | 6.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.942857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d315c96b6def878c27f286d34a91c832bc294f7 | 151 | py | Python | backend/milestone/admin.py | Tim6FTN/UKS | 3cf19f014cdc7845bf0b808b97c4e05dc49b062e | [
"MIT"
] | 1 | 2021-01-10T12:34:59.000Z | 2021-01-10T12:34:59.000Z | backend/milestone/admin.py | Tim6FTN/UKS | 3cf19f014cdc7845bf0b808b97c4e05dc49b062e | [
"MIT"
] | 37 | 2021-01-07T22:31:25.000Z | 2021-02-20T10:59:46.000Z | backend/milestone/admin.py | Tim6FTN/UKS | 3cf19f014cdc7845bf0b808b97c4e05dc49b062e | [
"MIT"
] | null | null | null | from django.contrib import admin
from milestone.models import Milestone
@admin.register(Milestone)
class MilestoneAdmin(admin.ModelAdmin):
pass
| 16.777778 | 39 | 0.807947 | 18 | 151 | 6.777778 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125828 | 151 | 8 | 40 | 18.875 | 0.924242 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
5d5e1f970af6a7da6a0898e0dcd906b99ed1998a | 26 | py | Python | tests/lif/gt.py | Mieschendahl/assignment-final-stub | 19eea657fcc4f8a455c42028f34b918628514cc0 | [
"MIT"
] | null | null | null | tests/lif/gt.py | Mieschendahl/assignment-final-stub | 19eea657fcc4f8a455c42028f34b918628514cc0 | [
"MIT"
] | 1 | 2022-03-20T11:08:45.000Z | 2022-03-20T11:08:45.000Z | tests/lif/gt.py | Mieschendahl/assignment-final-stub | 19eea657fcc4f8a455c42028f34b918628514cc0 | [
"MIT"
] | 6 | 2022-03-13T13:10:25.000Z | 2022-03-28T22:18:12.000Z | print(42 if 2 > 1 else 0)
| 13 | 25 | 0.615385 | 7 | 26 | 2.285714 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.263158 | 0.269231 | 26 | 1 | 26 | 26 | 0.578947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
5d6e70f84903dda3c7d973e8058e45ce37dfba6f | 29 | py | Python | pam/plot/__init__.py | JosePazNoguera/pam | afb580c57223acd01466938eea8dc3d83097d5dd | [
"MIT"
] | 29 | 2020-04-10T23:24:26.000Z | 2021-05-21T12:30:03.000Z | pam/plot/__init__.py | JosePazNoguera/pam | afb580c57223acd01466938eea8dc3d83097d5dd | [
"MIT"
] | 63 | 2020-04-29T19:02:11.000Z | 2022-03-29T14:02:04.000Z | pam/plot/__init__.py | JosePazNoguera/pam | afb580c57223acd01466938eea8dc3d83097d5dd | [
"MIT"
] | 13 | 2020-04-16T19:00:18.000Z | 2022-03-18T14:42:48.000Z | from pam.plot.plans import *
| 14.5 | 28 | 0.758621 | 5 | 29 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
538ab429e510bf4bccf77006e4af82d638cd8152 | 20,123 | py | Python | pycle/bicycle-scrapes/epey-scrape/downLink0.py | fusuyfusuy/School-Projects | 8e38f19da90f63ac9c9ec91e550fc5aaab3d0234 | [
"MIT"
] | null | null | null | pycle/bicycle-scrapes/epey-scrape/downLink0.py | fusuyfusuy/School-Projects | 8e38f19da90f63ac9c9ec91e550fc5aaab3d0234 | [
"MIT"
] | null | null | null | pycle/bicycle-scrapes/epey-scrape/downLink0.py | fusuyfusuy/School-Projects | 8e38f19da90f63ac9c9ec91e550fc5aaab3d0234 | [
"MIT"
] | null | null | null |
from bs4 import BeautifulSoup
import os
import wget
from urllib.request import Request, urlopen
bicycles=[{'name': 'Corelli Snoop 5.1 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-snoop-5-1.html'}, {'name': 'Corelli Snoop 5.2 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-snoop-5-2.html'}, {'name': 'Bianchi Junior Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-junior-24.html'}, {'name': 'Carraro Elite 704 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-elite-704.html'}, {'name': 'Corelli Sprint KR100 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-sprint-kr100.html'}, {'name': 'Kron XC100 29 MD Bisiklet', 'link': 'https://www.epey.com/bisiklet/kron-xc100-29-md.html'}, {'name': 'Carraro Force 720 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-force-720.html'}, {'name': 'Carraro Race 012 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-cr-race-012.html'}, {'name': 'Carraro Force 650 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-force-650.html'}, {'name': 'RKS MX35 Bisiklet', 'link': 'https://www.epey.com/bisiklet/rks-mx35.html'}, {'name': 'Corelli Speedo 2.0 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-speedo-2-0.html'}, {'name': 'Kross Trans 3.0 Bisiklet', 'link': 'https://www.epey.com/bisiklet/kross-trans-3-0.html'}, {'name': 'Salcano XRS044 Sora Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-xrs044-sora.html'}, {'name': 'Bisan CTS 5300 26 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bisan-cts-5300-26.html'}, {'name': 'Corelli Mocha Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-mocha.html'}, {'name': 'Carraro Daytona 2624 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-daytona-2624.html'}, {'name': 'Ümit 2955 Kratos 2D Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-2955-kratos-2d.html'}, {'name': 'Kron EFT1000 Bisiklet', 'link': 'https://www.epey.com/bisiklet/kron-eft1000.html'}, {'name': 'Peugeot M11-26 Bisiklet', 'link': 'https://www.epey.com/bisiklet/peugeot-m11-26.html'}, {'name': 'Berg Buzzy Bisiklet', 'link': 'https://www.epey.com/bisiklet/berg-buzzy.html'}, {'name': 'Corelli Snoop 1.0 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-snoop-1-0.html'}, {'name': 'Salcano NG700 26 MD Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-ng700-26-md.html'}, {'name': 'Whistle Miwok 1730 Bisiklet', 'link': 'https://www.epey.com/bisiklet/whistle-miwok-1730.html'}, {'name': 'Cube Attain GTC Bisiklet', 'link': 'https://www.epey.com/bisiklet/cube-attain-gtc.html'}, {'name': 'Bianchi Montana D Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-montana-d-26.html'}, {'name': 'RKS RSI-X-Pro Bisiklet', 'link': 'https://www.epey.com/bisiklet/rks-rsi-x-pro.html'}, {'name': 'Carraro E-Pack Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-e-pack.html'}, {'name': 'Gitane Dream Bisiklet', 'link': 'https://www.epey.com/bisiklet/gitane-dream.html'}, {'name': 'Bianchi Aspid Mix 2621 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-aspid-mix-2621.html'}, {'name': 'Corelli Atrox 1.1 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-atrox-1-1.html'}, {'name': 'Bianchi Bella 26 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-bella-26.html'}, {'name': 'Carraro Big Bang Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-big-bang.html'}, {'name': 'Corelli Chronic 2.2 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-chronic-2-2.html'}, {'name': 'Bisan MTS 4300 24 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bisan-mts-4300-24.html'}, {'name': 'Carraro Force 410 Bisiklet', 'link': 
'https://www.epey.com/bisiklet/carraro-force-410.html'}, {'name': 'Ümit 1248 Racer Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-1248-racer.html'}, {'name': 'Ümit 2662 Camaro V Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-2662-camaro-v.html'}, {'name': 'Corelli Dolce 1.0 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-dolce-1-0.html'}, {'name': 'Corelli Like 2.0 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-like-2-0.html'}, {'name': 'Corelli Teton 2.3 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-teton-2-3.html'}, {'name': 'Bianchi NTH 7 29 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-nth-7-29.html'}, {'name': 'Salcano Excel 26 Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-excel-26-v.html'}, {'name': 'Corelli Leone 4.1 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-leone-4-1.html'}, {'name': 'Whistle Miwok 1760 Bisiklet', 'link': 'https://www.epey.com/bisiklet/whistle-miwok-1760.html'}, {'name': 'Ghost Square Trekking 2 Bisiklet', 'link': 'https://www.epey.com/bisiklet/ghost-square-trekking-2.html'}, {'name': 'Bianchi Pink Magic 14 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-pink-magic.html'}, {'name': 'Merida BIG.NINE 20 29 Bisiklet', 'link': 'https://www.epey.com/bisiklet/merida-big-nine-20-29.html'}, {'name': 'Salcano Badboy 12 Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-badboy-12.html'}, {'name': 'Salcano Sarajevo 700 Lady Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-sarajevo-700-lady.html'}, {'name': 'Carraro Big 627 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-big-627-27-5.html'}, {'name': 'Ümit 1645 Ninja Turtles Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-1645-ninja-turtles.html'}, {'name': 'Ümit 1616 Hello Kitty Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-1616-hello-kitty.html'}, {'name': 'Ümit 2401 Colorado Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-2401-colorado.html'}, {'name': 'Salcano Attack 20 Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-attack-20.html'}, {'name': 'Salcano City Fun S 60 V Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-city-fun-s-60-v.html'}, {'name': 'Salcano Lion 27.5 MD Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-lion-27-5.html'}, {'name': 'Carraro Flexi Voyager Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-flexi-voyager.html'}, {'name': 'Corelli Leon 3.0 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-leon-3-0.html'}, {'name': 'Kron TX100L Lady MD Bisiklet', 'link': 'https://www.epey.com/bisiklet/kron-tx100l-lady-md.html'}, {'name': 'Kron XC100L 24 Lady HD Bisiklet', 'link': 'https://www.epey.com/bisiklet/kron-xc100l-24-lady-hd.html'}, {'name': 'Kron XC75 26 MD Bisiklet', 'link': 'https://www.epey.com/bisiklet/kron-xc75-26-md.html'}, {'name': 'Kron Ares 4.0 20 V Bisiklet', 'link': 'https://www.epey.com/bisiklet/kron-ares-4-0-20-v.html'}, {'name': 'Kron Vortex 3.0 26 Bisiklet', 'link': 'https://www.epey.com/bisiklet/kron-vortex-3-0-26.html'}, {'name': 'Bianchi Captain Kidd 16 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-captain-kidd-16.html'}, {'name': 'Peugeot M13-27.5 Bisiklet', 'link': 'https://www.epey.com/bisiklet/peugeot-m13-27-5.html'}, {'name': 'Peugeot JM246 Bisiklet', 'link': 'https://www.epey.com/bisiklet/peugeot-jm246.html'}, {'name': 'Bisan SPX-3050 26 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bisan-spx-3050.html'}, {'name': 'Bianchi Discovery 505 Bisiklet', 'link': 
'https://www.epey.com/bisiklet/bianchi-discovery-505.html'}, {'name': 'Carraro Moggy 20 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-moggy-20.html'}, {'name': 'Carraro Big 2730 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-big-2730.html'}, {'name': 'Carraro Force 601 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-force-601.html'}, {'name': 'Berg BMW Street Racer Bisiklet', 'link': 'https://www.epey.com/bisiklet/berg-bmw-street-racer.html'}, {'name': 'Ümit 1449 Little Pony Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-1449-little-pony.html'}, {'name': 'Ümit 1675 Spartan Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-1675-spartan.html'}, {'name': 'Ümit 2861 Magnetic HYD Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-2861-magnetic-hyd.html'}, {'name': 'Ümit 2475 Spartan Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-2475-spartan.html'}, {'name': 'Whistle Tulukai 1465D Bisiklet', 'link': 'https://www.epey.com/bisiklet/whistle-tulukai-1465d.html'}, {'name': 'Kron TX450 HD Bisiklet', 'link': 'https://www.epey.com/bisiklet/kron-tx450-hd.html'}, {'name': 'Corelli Agile 2.0 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-agile-2-0.html'}, {'name': 'Corelli Speedo 4.0 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-speedo-4-0.html'}, {'name': 'Corelli Snoop 1.3 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-snoop-1-3.html'}, {'name': 'Corelli Dusty 1.2 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-dusty-1-2.html'}, {'name': 'Bianchi RCX 729 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-rcx-729.html'}, {'name': 'Scott Contessa JR 20 Bisiklet', 'link': 'https://www.epey.com/bisiklet/scott-contessa-jr-20.html'}, {'name': 'Ghost Lanao 1.6 AL Bisiklet', 'link': 'https://www.epey.com/bisiklet/ghost-lanao-1-6-al.html'}, {'name': 'Bianchi Buffalo 20 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-buffalo-20-md.html'}, {'name': 'Salcano Astro 27.5 Lady V Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-astro-27-5-lady-v.html'}, {'name': 'Salcano NG650 24 Lady HD Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-ng650-24-lady-hd.html'}, {'name': 'Salcano NG800 26 HD Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-ng800-26-hd.html'}, {'name': 'Salcano Efes 24 MD Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-efes-24-md.html'}, {'name': 'Salcano City Explorer 30 V Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-city-explorer-30-v.html'}, {'name': 'Salcano City Sport 20 Lady HD Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-city-sport-20-lady-hd.html'}, {'name': 'Salcano Excel 20 Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-excel-20.html'}, {'name': 'Lapierre Cyclo Cross CX Karbon Bisiklet', 'link': 'https://www.epey.com/bisiklet/lapierre-cyclo-cross-cx-karbon.html'}, {'name': 'Corelli Pierre 5.0 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-pierre-5-0.html'}, {'name': 'Corelli Jazz 2.3 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-jazz-2-3.html'}, {'name': 'Corelli Cyborg 3.0 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-cyborg-3-0.html'}, {'name': 'Corelli Edida 2.0 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-edida-2-0.html'}, {'name': 'Corelli Grace 1.0 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corelli-grace-1-0.html'}, {'name': 'Orbis Reflex 26 Bisiklet', 'link': 'https://www.epey.com/bisiklet/orbis-reflex-26.html'}, {'name': 'Orbis Jewel 20 
Bisiklet', 'link': 'https://www.epey.com/bisiklet/orbis-jewel-20.html'}, {'name': 'Orbis Kick 16 Bisiklet', 'link': 'https://www.epey.com/bisiklet/orbis-kick-16.html'}, {'name': 'Orbis Jungle Parrot 16 Bisiklet', 'link': 'https://www.epey.com/bisiklet/orbis-jungle-parrot-16.html'}, {'name': 'Orbis Stingray 24 Bisiklet', 'link': 'https://www.epey.com/bisiklet/orbis-stingray-24.html'}, {'name': 'Orbis Sonic 26 Bisiklet', 'link': 'https://www.epey.com/bisiklet/orbis-sonic-26.html'}, {'name': 'Tern Link D8 Bisiklet', 'link': 'https://www.epey.com/bisiklet/tern-link-d8.html'}, {'name': 'Bisan XTY - 5600 HD Bisiklet', 'link': 'https://www.epey.com/bisiklet/bisan-xty-5600-hd.html'}, {'name': 'Kron R1 Bisiklet', 'link': 'https://www.epey.com/bisiklet/kron-r1.html'}, {'name': 'Kron XC450 29 HD Bisiklet', 'link': 'https://www.epey.com/bisiklet/kron-xc450-29-man-hd.html'}, {'name': 'Kron XC150 20 Bisiklet', 'link': 'https://www.epey.com/bisiklet/kron-xc150-20.html'}, {'name': 'Whistle Patwin 1724 Bisiklet', 'link': 'https://www.epey.com/bisiklet/whistle-patwin-1724.html'}, {'name': 'Mosso 20 Marine SR2 Bisiklet', 'link': 'https://www.epey.com/bisiklet/mosso-20-marine-sr2.html'}, {'name': 'Mosso Legarda 1721 MSM H Bisiklet', 'link': 'https://www.epey.com/bisiklet/mosso-legarda-1721-msm-h.html'}, {'name': 'Mosso Cavalier 700 Tourney Bisiklet', 'link': 'https://www.epey.com/bisiklet/mosso-cavalier-700-tourney.html'}, {'name': 'Mosso 680XC SLX Bisiklet', 'link': 'https://www.epey.com/bisiklet/mosso-680xc-slx.html'}, {'name': 'Mosso 29 Blackedition H Bisiklet', 'link': 'https://www.epey.com/bisiklet/mosso-blackedition-29.html'}, {'name': 'Merida Scultura Disc 400 Bisiklet', 'link': 'https://www.epey.com/bisiklet/merida-scultura-disc400.html'}, {'name': 'Ghost Square Cross 2 Miss Bisiklet', 'link': 'https://www.epey.com/bisiklet/ghost-square-cross-2-miss.html'}, {'name': 'Carraro Grande 704 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-grande-704.html'}, {'name': 'Carraro Barbie 20 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-barbie-20.html'}, {'name': 'Carraro Sportive 226 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-sportive-226.html'}, {'name': 'Bianchi Angry 24 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-angry-24.html'}, {'name': 'Bianchi Modena Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-modena.html'}, {'name': 'Bianchi TDL 1300 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-tdl-1300.html'}, {'name': 'Sedona 810 Bisiklet', 'link': 'https://www.epey.com/bisiklet/sedona-810.html'}, {'name': 'Salcano NG555 Lady 26 HD Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-ng555-lady-26-hd.html'}, {'name': 'Salcano NG750 27.5 HD Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-ng750-27-5-hd.html'}, {'name': 'Bisan XTY 5400 MD Bisiklet', 'link': 'https://www.epey.com/bisiklet/bisan-xty-5400-md.html'}, {'name': 'Ümit 2630 Velocity Man Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-2630-velocity-man.html'}, {'name': 'Arbike 2403 Bisiklet', 'link': 'https://www.epey.com/bisiklet/arbike-2403.html'}, {'name': 'Arbike 2009 Bisiklet', 'link': 'https://www.epey.com/bisiklet/arbike-2009.html'}, {'name': 'Cube Attention SL 27.5 Bisiklet', 'link': 'https://www.epey.com/bisiklet/cube-attention-sl-27-5.html'}, {'name': 'Salcano NG650 29 HD Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-ng650-hd-29.html'}, {'name': 'Cube Travel Bisiklet', 'link': 'https://www.epey.com/bisiklet/cube-travel.html'}, {'name': 'Cube 
Kathmandu Bisiklet', 'link': 'https://www.epey.com/bisiklet/cube-kathmandu.html'}, {'name': 'Cube Aerium HPA Pro Bisiklet', 'link': 'https://www.epey.com/bisiklet/cube-aerium-hpa-pro.html'}, {'name': 'Cube Aim SL 29 Bisiklet', 'link': 'https://www.epey.com/bisiklet/cube-aim-sl-29.html'}, {'name': 'Cube Reaction GTC 27.5 Bisiklet', 'link': 'https://www.epey.com/bisiklet/cube-reaction-gtc-27-5.html'}, {'name': 'Merida BIG.NINE 9000 29 Bisiklet', 'link': 'https://www.epey.com/bisiklet/merida-big-nine-9000-29.html'}, {'name': 'Merida BIG.NINE 70 29 Bisiklet', 'link': 'https://www.epey.com/bisiklet/merida-big-nine-70-29.html'}, {'name': 'Merida BIG.SEVEN 100 27.5 Bisiklet', 'link': 'https://www.epey.com/bisiklet/merida-big-seven-100-27-5.html'}, {'name': 'Trek X-Caliber 8 27.5 Bisiklet', 'link': 'https://www.epey.com/bisiklet/trek-x-caliber-8-27-5.html'}, {'name': 'Corratec X Vert 0.3 29ER 29 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corratec-x-vert-0-3-29er-29.html'}, {'name': 'Corratec Inside Link Alu 65X 27.5 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corratec-inside-link-alu-65x-27-5.html'}, {'name': 'Corratec 29ER One Gent Cross 29 Bisiklet', 'link': 'https://www.epey.com/bisiklet/corratec-29er-one-gent-cross-29.html'}, {'name': 'Cannondale F29 2 29 Bisiklet', 'link': 'https://www.epey.com/bisiklet/cannondale-f29-2-29.html'}, {'name': 'Cannondale F SI Carbon 3 29 Bisiklet', 'link': 'https://www.epey.com/bisiklet/cannondale-f-si-carbon-3-29.html'}, {'name': 'Cannondale F SI Carbon 4 29 Bisiklet', 'link': 'https://www.epey.com/bisiklet/cannondale-f-si-carbon-4-29.html'}, {'name': 'Look 695 Light Proteam Shimano Dura Ace Mavic Aksium 28 Bisiklet', 'link': 'https://www.epey.com/bisiklet/look-695-light-proteam-shimano-dura-ace-mavic-aksium-28.html'}, {'name': 'Geotech Belgium 4.0 28 Bisiklet', 'link': 'https://www.epey.com/bisiklet/geotech-belgium-4-0-28.html'}, {'name': 'Kron WSX500 Lady 26 Bisiklet', 'link': 'https://www.epey.com/bisiklet/kron-wsx500-lady-26.html'}, {'name': 'Salcano Assos 20 27.5 SLX Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-assos-20-slx-27-5.html'}, {'name': 'Salcano Insomnia 26 Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-insomnia-26.html'}, {'name': 'Salcano City Wind 20 Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-city-wind-20.html'}, {'name': 'Salcano XRS002 105 Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-xrs002-105.html'}, {'name': 'Salcano NG750 20 Girl Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-ng750-lady-20.html'}, {'name': 'Salcano Fantom 12 Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-fantom-12.html'}, {'name': 'Salcano Didim Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-didim-20.html'}, {'name': 'Salcano Bodrum 26 Lady Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-bodrum-26-lady.html'}, {'name': 'Salcano Istanbul 24 Lady HD Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-istanbul-24-lady-hd.html'}, {'name': 'Scott Genius LTD Karbon Bisiklet', 'link': 'https://www.epey.com/bisiklet/scott-genius-ltd-karbon-26.html'}, {'name': 'Sedona 300 Bisiklet', 'link': 'https://www.epey.com/bisiklet/sedona-300.html'}, {'name': 'Bianchi Touring 730 28 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-touring-730-28.html'}, {'name': 'Bianchi SLR 400 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-slr-400-28.html'}, {'name': 'Bianchi AFX 7127 27.5 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-afx-7127-27-5.html'}, 
{'name': 'Bianchi Speed 1000 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-speed-1000-26.html'}, {'name': 'Bianchi Hit 20 Bisiklet', 'link': 'https://www.epey.com/bisiklet/bianchi-hit-20.html'}, {'name': 'Carraro Sportive 221 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-sportive-221-28.html'}, {'name': 'Carraro Flexi 106 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-flexi-106.html'}, {'name': 'Carraro Crx 970 26 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-crx-970-26.html'}, {'name': 'Carraro Force 320 Bisiklet', 'link': 'https://www.epey.com/bisiklet/carraro-force-320.html'}, {'name': 'Sedona Totem Bisiklet', 'link': 'https://www.epey.com/bisiklet/sedona-totem.html'}, {'name': 'Ümit 2023 60 Bluepower Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-2023-60-bluepower.html'}, {'name': 'Ümit 2637 Octagon 26 Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-2637-octagon-26.html'}, {'name': 'Ümit 2057 Albatros Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-2057-albatros.html'}, {'name': 'Ümit 2008 Princess Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-2008-princess.html'}, {'name': 'Ümit 2616 Hello Kitty 26 Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-2616-hello-kitty-26.html'}, {'name': 'Ümit 2454 Shoot 24 Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-2454-shoot-24.html'}, {'name': 'Ümit 2449 Monster High 24 Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-2449-monster-high-24.html'}, {'name': 'Ümit Coranna 2039 Panter 20 Bisiklet', 'link': 'https://www.epey.com/bisiklet/umit-coranna-2039-panter-20.html'}, {'name': 'Volta VB3 Bisiklet', 'link': 'https://www.epey.com/bisiklet/volta-vb3.html'}, {'name': 'Salcano Serenity 1 Bisiklet', 'link': 'https://www.epey.com/bisiklet/salcano-serenity-1.html'}]
# Download each product page and save it under ./listItems/.
for i in bicycles:
    url = i['link']
    try:
        req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
        webpage = urlopen(req).read()
    except Exception:  # a bare except would also swallow KeyboardInterrupt
        print("err in " + i['link'])
    else:
        print("Downloaded " + i['name'] + " ", end="\r")
        fileName = i['name'].replace('/', '_')  # avoid path separators in file names
        # 'with' closes the file even on error; the original `f.close` (no
        # parentheses) never actually closed the handle.
        with open("./listItems/" + fileName + '.html', 'wb') as f:
            f.write(webpage)
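# --- Hedged sketch (editor's addition, not part of the original script): a
# minimal offline pass over the pages saved above, assuming the third-party
# `beautifulsoup4` package is installed; the selector-free <title> lookup is
# illustrative, not taken from epey.com's markup.
import os
from bs4 import BeautifulSoup

for fname in sorted(os.listdir('./listItems')):
    with open(os.path.join('./listItems', fname), 'rb') as fh:
        soup = BeautifulSoup(fh.read(), 'html.parser')
    print(fname, soup.title.string if soup.title else '(no title)')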
| 914.681818 | 19,543 | 0.690354 | 3,082 | 20,123 | 4.507138 | 0.119727 | 0.157224 | 0.222734 | 0.26204 | 0.658556 | 0.602044 | 0.586711 | 0.569577 | 0.347635 | 0.112159 | 0 | 0.059512 | 0.07812 | 20,123 | 21 | 19,544 | 958.238095 | 0.689289 | 0 | 0 | 0 | 0 | 2.055556 | 0.81642 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0.166667 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
54f52e70486d5eae11668b4275dbe49b4a320e05 | 211 | py | Python | ramda/F_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 56 | 2018-08-06T08:44:58.000Z | 2022-03-17T09:49:03.000Z | ramda/F_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 28 | 2019-06-17T11:09:52.000Z | 2022-02-18T16:59:21.000Z | ramda/F_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 5 | 2019-09-18T09:24:38.000Z | 2021-07-21T08:40:23.000Z | from ramda.F import F
def F_test():
    assert not F(1, 2, 3, 4), "Should be False"
    assert not F(False), "Should be False"
    assert not F(True), "Should be False"
    assert not F({}), "Should be False"
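# --- Hedged sketch (editor's addition): these assertions only make sense if
# `F` ignores its arguments and always returns False, mirroring Ramda's R.F.
# A minimal stand-in under that assumption (kept commented so it does not
# shadow the real import above):
# def F(*args):
#     return False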
| 23.444444 | 47 | 0.620853 | 38 | 211 | 3.421053 | 0.421053 | 0.276923 | 0.307692 | 0.438462 | 0.530769 | 0.530769 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0.241706 | 211 | 8 | 48 | 26.375 | 0.7875 | 0 | 0 | 0 | 0 | 0 | 0.28436 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0.166667 | true | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
54f70c16e4fa8abebe2028e0531c90f83ebe9a10 | 131 | py | Python | addition_module/DSDG/data/__init__.py | weihaoxie/FaceX-Zoo | db0b087e4f4d28152e172d6c8d3767a8870733b4 | [
"Apache-2.0"
] | 1 | 2022-02-07T02:03:37.000Z | 2022-02-07T02:03:37.000Z | addition_module/DSDG/data/__init__.py | weihaoxie/FaceX-Zoo | db0b087e4f4d28152e172d6c8d3767a8870733b4 | [
"Apache-2.0"
] | null | null | null | addition_module/DSDG/data/__init__.py | weihaoxie/FaceX-Zoo | db0b087e4f4d28152e172d6c8d3767a8870733b4 | [
"Apache-2.0"
] | null | null | null | from .generation_dataset import GenDataset_s
from .recognition_dataset import ImageList, SeparateBatchSampler, SeparateImageList
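# --- Hedged note (editor's addition): this __init__ just re-exports the
# dataset classes, so callers can import them from the package root, e.g.
# `from data import GenDataset_s, SeparateBatchSampler` (module path assumed
# from the repo layout addition_module/DSDG/data/).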
| 43.666667 | 84 | 0.877863 | 13 | 131 | 8.615385 | 0.769231 | 0.232143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091603 | 131 | 2 | 85 | 65.5 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
076a142acbf736d4da6359a2d314465d2a26dfee | 32,393 | py | Python | src/genie/libs/parser/iosxr/tests/test_show_segment_routing.py | psolarcz/genieparser | 811c197a1dab6a635e6dec145b99194648bf4ff4 | [
"Apache-2.0"
] | null | null | null | src/genie/libs/parser/iosxr/tests/test_show_segment_routing.py | psolarcz/genieparser | 811c197a1dab6a635e6dec145b99194648bf4ff4 | [
"Apache-2.0"
] | null | null | null | src/genie/libs/parser/iosxr/tests/test_show_segment_routing.py | psolarcz/genieparser | 811c197a1dab6a635e6dec145b99194648bf4ff4 | [
"Apache-2.0"
] | 1 | 2021-07-07T18:07:56.000Z | 2021-07-07T18:07:56.000Z | #!/bin/env python
import unittest
from unittest.mock import Mock
from ats.topology import Device
from genie.metaparser.util.exceptions import SchemaEmptyParserError, SchemaMissingKeyError
from genie.libs.parser.iosxr.show_segment_routing import (
    ShowPceLsp,
    ShowPceIPV4Peer,
    ShowPceLspDetail,
    ShowPceIPV4PeerDetail,
    ShowPceIPV4PeerPrefix,
    ShowPceIpv4TopologySummary,
    ShowIsisSegmentRoutingPrefixSidMap,
    ShowOspfSegmentRoutingPrefixSidMap,
    ShowSegmentRoutingLocalBlockInconsistencies,
    ShowSegmentRoutingMappingServerPrefixSidMapIPV4,
    ShowSegmentRoutingMappingServerPrefixSidMapIPV4Detail,
)
# =============================================================
# Unittest for:
# * 'Show Isis Segment Routing Prefix Sid Map'
# =============================================================
class test_show_isis_routing_prefix_sid_map(unittest.TestCase):
    device = Device(name='DeviceA')
    dev2 = Device(name='DeviceB')
    empty_output = {'execute.return_value': ''}

    golden_output = {'execute.return_value': '''
        RP/0/0/CPU0:router# show isis segment-routing prefix-sid-map active-policy
        IS-IS 1 active policy
        Prefix               SID Index    Range        Flags
        10.4.1.100/32        100          20
        10.4.1.150/32        150          10
        Number of mapping entries: 2
        RP/0/0/CPU0:router# show isis segment-routing prefix-sid-map backup-policy
        IS-IS 1 backup policy
        Prefix               SID Index    Range        Flags
        10.4.1.100/32        100          20
        10.4.1.150/32        150          10
        Number of mapping entries: 2
    '''}

    golden_parsed_output = {
        'process_id': {
            1: {
                'policy': {
                    'active': {
                        'sid': {
                            100: {
                                'prefix': '10.4.1.100/32',
                                'range': 20,
                            },
                            150: {
                                'prefix': '10.4.1.150/32',
                                'range': 10,
                            }
                        },
                        'number_of_mapping_entries': 2,
                    },
                    'backup': {
                        'sid': {
                            100: {
                                'prefix': '10.4.1.100/32',
                                'range': 20,
                            },
                            150: {
                                'prefix': '10.4.1.150/32',
                                'range': 10,
                            }
                        },
                        'number_of_mapping_entries': 2,
                    },
                }
            }
        }
    }

    def test_empty_output(self):
        self.device1 = Mock(**self.empty_output)
        obj = ShowIsisSegmentRoutingPrefixSidMap(device=self.device1)
        with self.assertRaises(SchemaEmptyParserError):
            parsed = obj.parse()

    def test_golden_output(self):
        self.maxDiff = None
        self.dev2 = Mock(**self.golden_output)
        obj = ShowIsisSegmentRoutingPrefixSidMap(device=self.dev2)
        parsed = obj.parse()
        self.assertEqual(parsed, self.golden_parsed_output)
# =============================================================
# Unittest for:
# * 'Show Ospf Segment Routing Prefix Sid Map'
# =============================================================
class test_show_ospf_routing_prefix_sid_map(unittest.TestCase):
    device = Device(name='DeviceA')
    dev2 = Device(name='DeviceB')
    empty_output = {'execute.return_value': ''}

    golden_output = {'execute.return_value': '''
        RP/0/0/CPU0:router# show ospf segment-routing prefix-sid-map active-policy
        SRMS active policy for Process ID 1
        Prefix               SID Index    Range        Flags
        10.4.1.100/32        100          20
        10.4.1.150/32        150          10
        Number of mapping entries: 2
        RP/0/0/CPU0:router# show ospf segment-routing prefix-sid-map backup-policy
        SRMS backup policy for Process ID 1
        Prefix               SID Index    Range        Flags
        10.4.1.100/32        100          20
        10.4.1.150/32        150          10
        Number of mapping entries: 2
    '''}

    golden_parsed_output = {
        'process_id': {
            1: {
                'policy': {
                    'active': {
                        'sid': {
                            100: {
                                'prefix': '10.4.1.100/32',
                                'range': 20,
                            },
                            150: {
                                'prefix': '10.4.1.150/32',
                                'range': 10,
                            }
                        },
                        'number_of_mapping_entries': 2,
                    },
                    'backup': {
                        'sid': {
                            100: {
                                'prefix': '10.4.1.100/32',
                                'range': 20,
                            },
                            150: {
                                'prefix': '10.4.1.150/32',
                                'range': 10,
                            }
                        },
                        'number_of_mapping_entries': 2,
                    },
                }
            }
        }
    }

    def test_empty_output(self):
        self.device1 = Mock(**self.empty_output)
        obj = ShowOspfSegmentRoutingPrefixSidMap(device=self.device1)
        with self.assertRaises(SchemaEmptyParserError):
            parsed = obj.parse()

    def test_golden_output(self):
        self.maxDiff = None
        self.dev2 = Mock(**self.golden_output)
        obj = ShowOspfSegmentRoutingPrefixSidMap(device=self.dev2)
        parsed = obj.parse()
        self.assertEqual(parsed, self.golden_parsed_output)
# =============================================================
# Unittest for:
# * 'Show pce ipv4 peer'
# =============================================================
class test_show_pce_ivp4_peer(unittest.TestCase):
    dev1 = Device(name='DeviceA')
    dev2 = Device(name='DeviceB')
    empty_output = {'execute.return_value': ''}

    golden_output = {'execute.return_value': '''
        RP/0/RSP0/CPU0:router# show pce ipv4 peer
        PCE's peer database:
        --------------------
        Peer address: 192.168.0.1
          State: Up
          Capabilities: Stateful, Segment-Routing, Update
    '''}

    golden_parsed_output = {
        'pce_peer_database': {
            '192.168.0.1': {
                'state': 'Up',
                'capabilities': {
                    'stateful': True,
                    'segment-routing': True,
                    'update': True
                }
            }
        }
    }

    def test_empty_output(self):
        self.dev1 = Mock(**self.empty_output)
        obj = ShowPceIPV4Peer(device=self.dev1)
        with self.assertRaises(SchemaEmptyParserError):
            parsed = obj.parse()

    def test_golden_output(self):
        self.maxDiff = None
        self.dev2 = Mock(**self.golden_output)
        obj = ShowPceIPV4Peer(device=self.dev2)
        parsed = obj.parse()
        self.assertEqual(parsed, self.golden_parsed_output)
# =============================================================
# Unittest for:
# * 'show pce ipv4 peer detail'
# =============================================================
class test_show_pce_ipv4_peer_detail(unittest.TestCase):
    dev1 = Device(name='DeviceA')
    dev2 = Device(name='DeviceB')
    empty_output = {'execute.return_value': ''}

    golden_output = {'execute.return_value': '''
        RP/0/RSP0/CPU0:router# show pce ipv4 peer detail
        PCE's peer database:
        --------------------
        Peer address: 192.168.0.1
          State: Up
          Capabilities: Stateful, Segment-Routing, Update
          PCEP has been up for: 00:01:50
          PCEP session ID: local 0, remote 0
          Sending KA every 30 seconds
          Minimum acceptable KA interval: 20 seconds
          Peer timeout after 120 seconds
          Statistics:
            Keepalive messages: rx 4 tx 4
            Request messages: rx 3 tx 0
            Reply messages: rx 0 tx 3
            Error messages: rx 0 tx 0
            Open messages: rx 1 tx 1
            Report messages: rx 4 tx 0
            Update messages: rx 0 tx 2
            Initiate messages: rx 0 tx 0
    '''}

    golden_parsed_output = {
        'pce_peer_database': {
            '192.168.0.1': {
                'state': 'Up',
                'capabilities': {
                    'stateful': True,
                    'segment-routing': True,
                    'update': True
                },
                'pcep': {
                    'uptime': '00:01:50',
                    'session_id_local': 0,
                    'session_id_remote': 0
                },
                'ka': {
                    'sending_intervals': 30,
                    'minimum_acceptable_inteval': 20
                },
                'peer_timeout': 120,
                'statistics': {
                    'rx': {
                        'keepalive_messages': 4,
                        'request_messages': 3,
                        'reply_messages': 0,
                        'error_messages': 0,
                        'open_messages': 1,
                        'report_messages': 4,
                        'update_messages': 0,
                        'initiate_messages': 0
                    },
                    'tx': {
                        'keepalive_messages': 4,
                        'request_messages': 0,
                        'reply_messages': 3,
                        'error_messages': 0,
                        'open_messages': 1,
                        'report_messages': 0,
                        'update_messages': 2,
                        'initiate_messages': 0
                    }
                }
            }
        }
    }

    def test_empty_output(self):
        self.dev1 = Mock(**self.empty_output)
        obj = ShowPceIPV4PeerDetail(device=self.dev1)
        with self.assertRaises(SchemaEmptyParserError):
            parsed = obj.parse()

    def test_golden_output(self):
        self.maxDiff = None
        self.dev2 = Mock(**self.golden_output)
        obj = ShowPceIPV4PeerDetail(device=self.dev2)
        parsed = obj.parse()
        self.assertEqual(parsed, self.golden_parsed_output)
# =============================================================
# Unittest for:
# * 'show pce ipv4 prefix'
# =============================================================
class test_Show_Pce_IPV4_Peer_prefix(unittest.TestCase):
    dev1 = Device(name='DeviceA')
    dev2 = Device(name='DeviceB')
    empty_output = {'execute.return_value': ''}

    golden_output = {'execute.return_value': '''
        RP/0/RSP0/CPU0:router# show pce ipv4 prefix
        PCE's prefix database:
        ----------------------
        Node 1
          TE router ID: 192.168.0.4
          Host name: rtrD
          ISIS system ID: 1921.6800.1004 level-1 ASN: 65001 domain ID: 1111
          ISIS system ID: 1921.6800.1004 level-2 ASN: 65001 domain ID: 1111
          ISIS system ID: 1921.6800.1004 level-2 ASN: 65001 domain ID: 9999
          Advertised Prefixes:
            192.168.0.4
            192.168.0.4
            192.168.0.4
            192.168.0.6
        Node 2
          TE router ID: 192.168.0.1
          Host name: rtrA
          ISIS system ID: 1921.6800.1001 level-2
          Advertised Prefixes:
            192.168.0.1
    '''}

    golden_parsed_output = {
        'nodes': {
            1: {
                'te_router_id': '192.168.0.4',
                'host_name': 'rtrD',
                'isis_system_id': [
                    '1921.6800.1004 level-1',
                    '1921.6800.1004 level-2',
                    '1921.6800.1004 level-2'],
                'asn': [
                    65001,
                    65001,
                    65001],
                'domain_id': [
                    1111,
                    1111,
                    9999],
                'advertised_prefixes': [
                    '192.168.0.4',
                    '192.168.0.4',
                    '192.168.0.4',
                    '192.168.0.6']},
            2: {
                'te_router_id': '192.168.0.1',
                'host_name': 'rtrA',
                'isis_system_id': ['1921.6800.1001 level-2'],
                'advertised_prefixes': ['192.168.0.1']}}}

    def test_empty_output(self):
        self.dev1 = Mock(**self.empty_output)
        obj = ShowPceIPV4PeerPrefix(device=self.dev1)
        with self.assertRaises(SchemaEmptyParserError):
            parsed = obj.parse()

    def test_golden_output(self):
        self.maxDiff = None
        self.dev2 = Mock(**self.golden_output)
        obj = ShowPceIPV4PeerPrefix(device=self.dev2)
        parsed = obj.parse()
        self.assertEqual(parsed, self.golden_parsed_output)
# =============================================================
# Unittest for:
# * 'show pce ipv4 topology summary'
# =============================================================
class test_Show_Pce_Ipv4_Topology_Summary(unittest.TestCase):
    dev1 = Device(name='DeviceA')
    dev2 = Device(name='DeviceB')
    empty_output = {'execute.return_value': ''}

    golden_output = {'execute.return_value': '''
        RP/0/RSP0/CPU0:router# show pce ipv4 topology summary
        PCE's topology database summary:
        --------------------------------
        Topology nodes: 4
        Prefixes: 4
        Prefix SIDs: 4
        Links: 12
        Adjacency SIDs: 24
    '''}

    golden_parsed_output = {
        'pce_topology_database_summary': {
            'adjancency_sids': {
                'total': 24,
            },
            'links': {
                'total': 12,
            },
            'prefix_sids': {
                'total': 4,
            },
            'prefixes': 4,
            'topology_nodes': 4,
        },
    }

    expected_output_2 = {
        'pce_topology_database_summary': {
            'adjancency_sids': {
                'epe': 0,
                'protected': 0,
                'total': 0,
                'unprotected': 0,
            },
            'links': {
                'epe': 0,
                'total': 0,
            },
            'prefix_sids': {
                'regular': 0,
                'strict': 0,
                'total': 0,
            },
            'prefixes': 0,
            'private_information': {
                'consistent': 'yes',
                'lookup_nodes': 0,
                'update_stats': {
                    'links': {
                        'added': 0,
                        'deleted': 0,
                    },
                    'noded': {
                        'added': 0,
                        'deleted': 0,
                    },
                    'prefix': {
                        'added': 0,
                        'deleted': 0,
                    },
                },
            },
            'topology_nodes': 0,
        },
    }

    device_output_2 = {'execute.return_value': '''
        PCE's topology database summary:
        --------------------------------
        Topology nodes: 0
        Prefixes: 0
        Prefix SIDs:
          Total: 0
          Regular: 0
          Strict: 0
        Links:
          Total: 0
          EPE: 0
        Adjacency SIDs:
          Total: 0
          Unprotected: 0
          Protected: 0
          EPE: 0
        Private Information:
          Lookup Nodes 0
          Consistent yes
          Update Stats (from IGP and/or BGP):
            Noded added: 0
            Noded deleted: 0
            Links added: 0
            Links deleted: 0
            Prefix added: 0
            Prefix deleted: 0
    '''}

    def test_empty_output(self):
        self.dev1 = Mock(**self.empty_output)
        obj = ShowPceIpv4TopologySummary(device=self.dev1)
        with self.assertRaises(SchemaEmptyParserError):
            parsed = obj.parse()

    def test_golden_output(self):
        self.maxDiff = None
        self.dev2 = Mock(**self.golden_output)
        obj = ShowPceIpv4TopologySummary(device=self.dev2)
        parsed = obj.parse()
        self.assertEqual(parsed, self.golden_parsed_output)

    def test_golden_output_2(self):
        self.maxDiff = None
        self.dev3 = Mock(**self.device_output_2)
        obj = ShowPceIpv4TopologySummary(device=self.dev3)
        parsed = obj.parse()
        self.assertEqual(parsed, self.expected_output_2)
# =============================================================
# Unittest for:
# * 'show pce lsp'
# =============================================================
class test_show_Pce_Lsp(unittest.TestCase):
    dev1 = Device(name='DeviceA')
    dev2 = Device(name='DeviceB')
    empty_output = {'execute.return_value': ''}

    golden_output = {'execute.return_value': '''
        RP/0/RSP0/CPU0:router# show pce lsp
        PCE's tunnel database:
        ----------------------
        PCC 192.168.0.1:
        Tunnel Name: rtrA_t1
          LSPs:
          LSP[0]:
            source 192.168.0.1, destination 192.168.0.4, tunnel ID 1, LSP ID 2
            State: Admin up, Operation up
            Setup type: Segment Routing
            Binding SID: 24013
    '''}

    golden_parsed_output = {
        'pcc': {
            '192.168.0.1': {
                'tunnel_name': {
                    'rtrA_t1': {
                        'lsps': {
                            0: {
                                'source': '192.168.0.1',
                                'destination': '192.168.0.4',
                                'tunnel_id': 1,
                                'lsp_id': 2,
                                'admin_state': 'up',
                                'operation_state': 'up',
                                'setup_type': 'Segment Routing',
                                'binding_sid': 24013
                            }
                        }
                    }
                }
            }
        }
    }

    def test_empty_output(self):
        self.dev1 = Mock(**self.empty_output)
        obj = ShowPceLsp(device=self.dev1)
        with self.assertRaises(SchemaEmptyParserError):
            parsed = obj.parse()

    def test_golden_output(self):
        self.maxDiff = None
        self.dev2 = Mock(**self.golden_output)
        obj = ShowPceLsp(device=self.dev2)
        parsed = obj.parse()
        self.assertEqual(parsed, self.golden_parsed_output)
# =============================================================
# Unittest for:
# * 'show pce lsp detail'
# =============================================================
class test_Show_Pce_Lsp_Detail(unittest.TestCase):
    dev1 = Device(name='DeviceA')
    dev2 = Device(name='DeviceB')
    empty_output = {'execute.return_value': ''}

    golden_output = {'execute.return_value': '''
        RP/0/RSP0/CPU0:router# show pce lsp detail
        PCE's tunnel database:
        ----------------------
        PCC 192.168.0.1:
        Tunnel Name: rtrA_t1
          LSPs:
          LSP[0]:
            source 192.168.0.1, destination 192.168.0.4, tunnel ID 1, LSP ID 2
            State: Admin up, Operation up
            Setup type: Segment Routing
            Binding SID: 24013
            PCEP information:
              plsp-id 2, flags: D:1 S:0 R:0 A:1 O:1
            Reported path:
              Metric type: TE, Accumulated Metric 42
              SID[0]: Adj, Label 24000, Address: local 10.10.10.1 remote 10.10.10.2
              SID[1]: Adj, Label 24000, Address: local 10.19.14.2 remote 10.19.14.4
            Computed path:
              Metric type: TE, Accumulated Metric 42
              SID[0]: Adj, Label 24000, Address: local 10.10.10.1 remote 10.10.10.2
              SID[1]: Adj, Label 24000, Address: local 10.19.14.2 remote 10.19.14.4
            Recorded path:
              None
        RP/0/RSP0/CPU0:router# show pce lsp detail
        PCE's tunnel database:
        ----------------------
        PCC 192.168.0.1:
        Tunnel Name: rtrA_t1
          LSPs:
          LSP[0]:
            source 192.168.0.1, destination 192.168.0.4, tunnel ID 1, LSP ID 2
            State: Admin up, Operation up
            Setup type: Segment Routing
            Binding SID: 24013
            PCEP information:
              plsp-id 2, flags: D:1 S:0 R:0 A:1 O:1
            Reported path:
              Metric type: TE, Accumulated Metric 42
              SID[0]: Adj, Label 24000, Address: local 10.10.10.1 remote 10.10.10.2
              SID[1]: Adj, Label 24000, Address: local 10.19.14.2 remote 10.19.14.4
            Computed path:
              Metric type: TE, Accumulated Metric 42
              SID[0]: Adj, Label 24000, Address: local 10.10.10.1 remote 10.10.10.2
              SID[1]: Adj, Label 24000, Address: local 10.19.14.2 remote 10.19.14.4
            Recorded path:
              None
            Event history (latest first):
              Time                     Event
              June 13 2016 13:28:29    Report
                                       Symbolic-name: rtrA_t1, LSP-ID: 2,
                                       Source: 192.168.0.1 Destination: 192.168.0.4,
                                       D:1, R:0, A:1 O:1, Sig.BW: 0, Act.BW: 0
              June 13 2016 13:28:28    Report
                                       Symbolic-name: rtrA_t1, LSP-ID: 2,
                                       Source: 192.168.0.1 Destination: 192.168.0.4,
                                       D:1, R:0, A:1 O:1, Sig.BW: 0, Act.BW: 0
              June 13 2016 13:28:28    Create
                                       Symbolic-name: rtrA_t1, PLSP-ID: 2,
                                       Peer: 192.168.0.1
    '''}

    golden_parsed_output = {
        'pcc': {
            '192.168.0.1': {
                'tunnel_name': 'rtrA_t1',
                'lsps': {
                    0: {
                        'source': '192.168.0.1',
                        'destination': '192.168.0.4',
                        'tunnel_id': 1,
                        'lsp_id': 2,
                        'admin_state': 'up',
                        'operation_state': 'up',
                        'setup_type': 'segment routing',
                        'binding_sid': 24013,
                        'pcep_information': {
                            'plsp_id': 2,
                            'flags': {
                                'd': 1,
                                's': 0,
                                'r': 0,
                                'a': 1,
                                'o': 1,
                            }
                        },
                        'paths': {
                            'reported': {
                                'metric_type': 'TE',
                                'accumulated_metric': 42,
                                'sids': {
                                    0: {
                                        'type': 'Adj',
                                        'label': 24000,
                                        'local_address': '10.10.10.1',
                                        'remote_address': '10.10.10.2'
                                    },
                                    1: {
                                        'type': 'Adj',
                                        'label': 24000,
                                        'local_address': '10.19.14.2',
                                        'remote_address': '10.19.14.4'
                                    }
                                }
                            },
                            'computed': {
                                'metric_type': 'TE',
                                'accumulated_metric': 42,
                                'sids': {
                                    0: {
                                        'type': 'Adj',
                                        'label': 24000,
                                        'local_address': '10.10.10.1',
                                        'remote_address': '10.10.10.2'
                                    },
                                    1: {
                                        'type': 'Adj',
                                        'label': 24000,
                                        'local_address': '10.19.14.2',
                                        'remote_address': '10.19.14.4'
                                    }
                                }
                            },
                            'recorded': {}
                        }
                    },
                    'event_history': {
                        'June 13 2016 13:28:29': {
                            'report': {
                                'symbolic_name': 'rtrA_t1',
                                'lsp-id': 2,
                                'source': '192.168.0.1',
                                'destination': '192.168.0.4',
                                'flags': {
                                    'd': 1,
                                    'r': 0,
                                    'a': 1,
                                    'o': 1,
                                    'sig_bw': 0,
                                    'act_bw': 0
                                }
                            }
                        },
                        'June 13 2016 13:28:28': {
                            'report': {
                                'symbolic_name': 'rtrA_t1',
                                'lsp-id': 2,
                                'source': '192.168.0.1',
                                'destination': '192.168.0.4',
                                'flags': {
                                    'd': 1,
                                    'r': 0,
                                    'a': 1,
                                    'o': 1,
                                    'sig_bw': 0,
                                    'act_bw': 0
                                }
                            },
                            'create': {
                                'symbolic_name': 'rtrA_t1',
                                'plsp-id': 2,
                                'peer': '192.168.0.1'
                            }
                        }
                    }
                }
            }
        }
    }

    def test_empty_output(self):
        self.dev1 = Mock(**self.empty_output)
        obj = ShowPceLspDetail(device=self.dev1)
        with self.assertRaises(SchemaEmptyParserError):
            parsed = obj.parse()

    def test_golden_output(self):
        self.maxDiff = None
        self.dev2 = Mock(**self.golden_output)
        obj = ShowPceLspDetail(device=self.dev2)
        parsed = obj.parse()
        self.assertEqual(parsed, self.golden_parsed_output)
# =============================================================
# Unittest for:
# * 'show segment-routing local-block inconsistencies'
# =============================================================
class Test_Show_Segment_Routing_Local_Block_Inconsistencies(unittest.TestCase):
    dev1 = Device(name='DeviceA')
    dev2 = Device(name='DeviceB')
    empty_output = {'execute.return_value': ''}

    golden_output = {'execute.return_value': '''
        RP/0/RSP0/CPU0:router(config)# show segment-routing local-block inconsistencies
        Tue Aug 15 13:53:30.555 EDT
        SRLB inconsistencies range: Start/End: 30000/30009
    '''}

    golden_parsed_output = {
        'srlb_inconsistencies_range': {
            'start': 30000,
            'end': 30009,
        }
    }

    def test_empty_output(self):
        self.dev1 = Mock(**self.empty_output)
        obj = ShowSegmentRoutingLocalBlockInconsistencies(device=self.dev1)
        with self.assertRaises(SchemaEmptyParserError):
            parsed = obj.parse()

    def test_golden_output(self):
        self.maxDiff = None
        self.dev2 = Mock(**self.golden_output)
        obj = ShowSegmentRoutingLocalBlockInconsistencies(device=self.dev2)
        parsed = obj.parse()
        self.assertEqual(parsed, self.golden_parsed_output)
# ====================================================================
# Unittest for:
# * 'show segment-routing mapping-server prefix-sid-map ipv4'
# ====================================================================
class Test_Show_Segment_Routing_Mapping_Server_Prefix_Sid_Map_IPV4(
        unittest.TestCase):
    dev1 = Device(name='DeviceA')
    dev2 = Device(name='DeviceB')
    empty_output = {'execute.return_value': ''}

    golden_output = {'execute.return_value': '''
        show segment-routing mapping-server prefix-sid-map ipv4
        Prefix               SID Index    Range        Flags
        10.186.1.0/24        400          300
        10.1.1.1/32          10           200
        Number of mapping entries: 2
    '''}

    golden_parsed_output = {
        'ipv4': {
            'number_of_mapping_entries': 2,
            'prefix': {
                '10.186.1.0/24': {
                    'sid_index': 400,
                    'range': 300
                },
                '10.1.1.1/32': {
                    'sid_index': 10,
                    'range': 200
                }
            },
        }
    }

    def test_empty_output(self):
        self.dev1 = Mock(**self.empty_output)
        obj = ShowSegmentRoutingMappingServerPrefixSidMapIPV4(
            device=self.dev1)
        with self.assertRaises(SchemaEmptyParserError):
            parsed = obj.parse()

    def test_golden_output(self):
        self.maxDiff = None
        self.dev2 = Mock(**self.golden_output)
        obj = ShowSegmentRoutingMappingServerPrefixSidMapIPV4(
            device=self.dev2)
        parsed = obj.parse()
        self.assertEqual(parsed, self.golden_parsed_output)
# ====================================================================
# Unittest for:
# * 'show segment-routing mapping-server prefix-sid-map ipv4 detail'
# ====================================================================
class Test_Show_Segment_Routing_Mapping_Server_Prefix_Sid_Map_IPV_4Detail(
        unittest.TestCase):
    dev1 = Device(name='DeviceA')
    dev2 = Device(name='DeviceB')
    empty_output = {'execute.return_value': ''}

    golden_output = {'execute.return_value': '''
        RP/0/0/CPU0:router# show segment-routing mapping-server prefix-sid-map ipv4 detail
        Prefix
        10.186.1.0/24
          SID Index: 400
          Range: 300
          Last Prefix: 10.229.44.0/24
          Last SID Index: 699
          Flags:
        10.1.1.1/32
          SID Index: 10
          Range: 200
    '''}

    golden_parsed_output = {
        'ipv4': {
            'prefix': {
                '10.186.1.0/24': {
                    'sid_index': 400,
                    'range': 300,
                    'last_prefix': '10.229.44.0/24',
                    'last_sid_index': 699
                },
                '10.1.1.1/32': {
                    'sid_index': 10,
                    'range': 200
                },
            }
        }
    }

    def test_empty_output(self):
        self.dev1 = Mock(**self.empty_output)
        obj = ShowSegmentRoutingMappingServerPrefixSidMapIPV4Detail(
            device=self.dev1)
        with self.assertRaises(SchemaEmptyParserError):
            parsed = obj.parse()

    def test_golden_output(self):
        self.maxDiff = None
        self.dev2 = Mock(**self.golden_output)
        obj = ShowSegmentRoutingMappingServerPrefixSidMapIPV4Detail(
            device=self.dev2)
        parsed = obj.parse()
        self.assertEqual(parsed, self.golden_parsed_output)
if __name__ == '__main__':
    unittest.main()
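# --- Hedged note (editor's addition): these are standard unittest cases, so
# a single parser can also be exercised by hand against a connected pyATS
# device object (here called `dev`, an assumption):
# from genie.libs.parser.iosxr.show_segment_routing import ShowPceIPV4Peer
# parsed = ShowPceIPV4Peer(device=dev).parse()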
| 34.169831 | 90 | 0.418887 | 2,876 | 32,393 | 4.590056 | 0.091446 | 0.019544 | 0.022801 | 0.014544 | 0.839255 | 0.762366 | 0.741535 | 0.724642 | 0.709416 | 0.705553 | 0 | 0.086787 | 0.436915 | 32,393 | 947 | 91 | 34.205913 | 0.636952 | 0.061989 | 0 | 0.597187 | 0 | 0.025575 | 0.378543 | 0.01882 | 0 | 0 | 0 | 0 | 0.029412 | 1 | 0.029412 | false | 0 | 0.006394 | 0 | 0.122762 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
db517619400b36c1337d966e5e515072d896b35f | 5,034 | py | Python | tests/nn/test_kalman.py | hectorcarrion/sleap | 1150c2d0b64543e07d4b2429ea245a5afaa07cee | [
"BSD-3-Clause-Clear"
] | 156 | 2020-05-01T18:43:43.000Z | 2022-03-25T10:31:18.000Z | tests/nn/test_kalman.py | oeway/sleap | 1eb06f81eb8f0bc1beedd1c3dd10902f8ff9e724 | [
"BSD-3-Clause-Clear"
] | 299 | 2020-04-20T16:37:52.000Z | 2022-03-31T23:54:48.000Z | tests/nn/test_kalman.py | oeway/sleap | 1eb06f81eb8f0bc1beedd1c3dd10902f8ff9e724 | [
"BSD-3-Clause-Clear"
] | 41 | 2020-05-14T15:25:21.000Z | 2022-03-25T12:44:54.000Z | import numpy as np
import sleap.nn.tracker.components
import sleap.nn.tracker.kalman as k
from sleap.nn.tracker.components import greedy_matching
def test_first_choice_matching():
    instances = ["instance a", "instance b"]
    tracks = ["track a", "track b"]

    # columns are tracks
    # rows are instances
    cost_matrix = np.array([[10, 150], [50, 100]])

    match_tuples = k.match_tuples_from_match_function(
        cost_matrix=cost_matrix,
        row_items=instances,
        column_items=tracks,
        match_function=sleap.nn.tracker.components.first_choice_matching,
    )

    assert len(match_tuples) == 2
    assert ("instance a", "track a", 10) in match_tuples
    assert ("instance b", "track a", 50) in match_tuples

    match_by_track = k.match_dict_from_match_function(
        cost_matrix=cost_matrix,
        row_items=instances,
        column_items=tracks,
        match_function=sleap.nn.tracker.components.first_choice_matching,
    )

    assert len(match_by_track) == 1
    assert match_by_track["track a"] == "instance a"

    match_by_instance = k.match_dict_from_match_function(
        cost_matrix=cost_matrix,
        row_items=instances,
        column_items=tracks,
        match_function=sleap.nn.tracker.components.first_choice_matching,
        key_by_column=False,
    )

    assert len(match_by_instance) == 2
    assert match_by_instance["instance a"] == "track a"
    assert match_by_instance["instance b"] == "track a"

    # another cost matrix
    # make sure we get *best* match for each track, regardless of row order
    cost_matrix = np.array(
        [
            [50, 100],
            [10, 150],
        ]
    )

    match_by_track = k.match_dict_from_match_function(
        cost_matrix=cost_matrix,
        row_items=instances,
        column_items=tracks,
        match_function=sleap.nn.tracker.components.first_choice_matching,
    )

    assert len(match_by_track) == 1
    assert match_by_track["track a"] == "instance b"
def test_greedy_matching():
    instances = ["instance a", "instance b"]
    tracks = ["track a", "track b"]

    # columns are tracks
    # rows are instances
    cost_matrix = np.array([[10, 200], [75, 150]])

    matches = k.matches_from_match_tuples(
        k.match_tuples_from_match_function(
            cost_matrix=cost_matrix,
            row_items=instances,
            column_items=tracks,
            match_function=greedy_matching,
        )
    )

    assert len(matches) == 2

    assert matches[0].track == "track a"
    assert matches[0].instance == "instance a"
    assert matches[0].score == 10

    assert matches[1].track == "track b"
    assert matches[1].instance == "instance b"
    assert matches[1].score == 150
def test_track_instance_matches():
    instances = ["instance a", "instance b"]
    tracks = ["track a", "track b"]

    # columns are tracks
    # rows are instances
    cost_matrix = np.array([[10, 200], [75, 150]])

    matches = k.get_track_instance_matches(
        cost_matrix=cost_matrix,
        instances=instances,
        tracks=tracks,
        are_too_close_function=lambda x, y: True,
    )

    # instance b would prefer track a but gets bumped to track b
    # since there's no competition for track b, the "too close" check
    # isn't applied.
    assert len(matches) == 2

    assert matches[0].track == "track a"
    assert matches[0].instance == "instance a"
    assert matches[0].score == 10

    assert matches[1].track == "track b"
    assert matches[1].instance == "instance b"
    assert matches[1].score == 150

    # another cost matrix
    # best match is instance a -> track a
    # next match is instance b -> track b
    # but instance b would prefer track a
    cost_matrix = np.array(
        [
            [10, 100],
            [50, 150],
        ]
    )

    matches = k.get_track_instance_matches(
        cost_matrix=cost_matrix,
        instances=instances,
        tracks=tracks,
        are_too_close_function=lambda x, y: True,
    )

    assert len(matches) == 2

    assert matches[0].track == "track a"
    assert matches[0].instance == "instance a"
    assert matches[0].score == 10

    assert matches[1].track == "track b"
    assert matches[1].instance == "instance b"
    assert matches[1].score == 150

    # best match is instance b -> track a (cost 10)
    # next match is instance a -> track b (cost 100)
    # each instance gets its first choice so "too close" check shouldn't apply
    cost_matrix = np.array(
        [
            [50, 100],
            [10, 150],
        ]
    )

    matches = k.get_track_instance_matches(
        cost_matrix=cost_matrix,
        instances=instances,
        tracks=tracks,
        are_too_close_function=lambda x, y: True,
    )

    assert len(matches) == 2

    assert matches[0].track == "track a"
    assert matches[0].instance == "instance b"
    assert matches[0].score == 10

    assert matches[1].track == "track b"
    assert matches[1].instance == "instance a"
    assert matches[1].score == 100
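# --- Hedged sketch (editor's addition): the greedy matcher exercised above
# presumably picks the globally cheapest remaining (instance, track) pair
# until one side is exhausted. A minimal stand-alone illustration of that
# idea (not sleap's actual implementation):
def greedy_pairs(cost):
    cost = cost.astype(float).copy()
    pairs = []
    for _ in range(min(cost.shape)):
        r, c = np.unravel_index(np.argmin(cost), cost.shape)
        pairs.append((r, c, cost[r, c]))
        cost[r, :] = np.inf  # row (instance) is consumed
        cost[:, c] = np.inf  # column (track) is consumed
    return pairs

# greedy_pairs(np.array([[10, 200], [75, 150]]))
# -> [(0, 0, 10.0), (1, 1, 150.0)], consistent with the assertions above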
| 28.122905 | 78 | 0.633691 | 664 | 5,034 | 4.620482 | 0.138554 | 0.078227 | 0.054759 | 0.052151 | 0.810626 | 0.733703 | 0.716754 | 0.716754 | 0.70013 | 0.70013 | 0 | 0.032405 | 0.258244 | 5,034 | 178 | 79 | 28.280899 | 0.789234 | 0.126142 | 0 | 0.634146 | 0 | 0 | 0.077626 | 0 | 0 | 0 | 0 | 0 | 0.308943 | 1 | 0.02439 | false | 0 | 0.03252 | 0 | 0.056911 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dba9e568cac7b83d6d1bb6469479836aa4dd47ee | 300 | py | Python | eskedit/__init__.py | quinlan-lab/kmertools | 93e90919c26e2fc899a905b77748857404389e13 | [
"MIT"
] | 1 | 2020-08-25T01:35:38.000Z | 2020-08-25T01:35:38.000Z | eskedit/__init__.py | quinlan-lab/kmertools | 93e90919c26e2fc899a905b77748857404389e13 | [
"MIT"
] | null | null | null | eskedit/__init__.py | quinlan-lab/kmertools | 93e90919c26e2fc899a905b77748857404389e13 | [
"MIT"
] | 1 | 2021-07-13T23:21:56.000Z | 2021-07-13T23:21:56.000Z | from eskedit.kmethods import *
from eskedit.ksplit import *
from eskedit.constants import *
from eskedit.dpgp_prep import *
from eskedit.kclass import *
from eskedit.train_model import *
from eskedit.chrom_binning import *
from eskedit.ktrain import train_kmer_model
from eskedit.query import kquery
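# --- Hedged note (editor's addition): the star imports above flatten the
# submodules into the package namespace, so `import eskedit` is enough to
# reach e.g. `eskedit.train_kmer_model(...)` or `eskedit.kquery(...)`.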
| 30 | 43 | 0.826667 | 43 | 300 | 5.651163 | 0.395349 | 0.407407 | 0.489712 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 300 | 9 | 44 | 33.333333 | 0.920455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
91688482b206717a899a236804cb81d8620dd5c8 | 26 | py | Python | 1 Python Basics/8_tuples.py | narayanants/python-mega-course | 2ba2980ab21dfbed5f86f00695559f7831b5c566 | [
"MIT"
] | null | null | null | 1 Python Basics/8_tuples.py | narayanants/python-mega-course | 2ba2980ab21dfbed5f86f00695559f7831b5c566 | [
"MIT"
] | null | null | null | 1 Python Basics/8_tuples.py | narayanants/python-mega-course | 2ba2980ab21dfbed5f86f00695559f7831b5c566 | [
"MIT"
] | null | null | null | num1 = (1,2,3)
print(num1) | 13 | 14 | 0.615385 | 6 | 26 | 2.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.217391 | 0.115385 | 26 | 2 | 15 | 13 | 0.478261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
918eb8bfdaff2596613fa540d164b040802fa655 | 25 | py | Python | autograder/__init__.py | CC-4/lab7 | 24a44c5c337dfcfbb0ffd4765e5b5303546a7801 | [
"MIT"
] | null | null | null | autograder/__init__.py | CC-4/lab7 | 24a44c5c337dfcfbb0ffd4765e5b5303546a7801 | [
"MIT"
] | null | null | null | autograder/__init__.py | CC-4/lab7 | 24a44c5c337dfcfbb0ffd4765e5b5303546a7801 | [
"MIT"
] | null | null | null | from .termcolor import *
| 12.5 | 24 | 0.76 | 3 | 25 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
91d6633cf1da711ae3c3961673cde34cfb498436 | 30,025 | py | Python | tests/components/konnected/test_init.py | tbarbette/core | 8e58c3aa7bc8d2c2b09b6bd329daa1c092d52d3c | [
"Apache-2.0"
] | 11 | 2018-02-16T15:35:47.000Z | 2020-01-14T15:20:00.000Z | tests/components/konnected/test_init.py | jagadeeshvenkatesh/core | 1bd982668449815fee2105478569f8e4b5670add | [
"Apache-2.0"
] | 79 | 2020-07-23T07:13:37.000Z | 2022-03-22T06:02:37.000Z | tests/components/konnected/test_init.py | jagadeeshvenkatesh/core | 1bd982668449815fee2105478569f8e4b5670add | [
"Apache-2.0"
] | 14 | 2018-08-19T16:28:26.000Z | 2021-09-02T18:26:53.000Z | """Test Konnected setup process."""
from unittest.mock import patch
import pytest
from homeassistant.components import konnected
from homeassistant.components.konnected import config_flow
from homeassistant.config import async_process_ha_core_config
from homeassistant.const import HTTP_NOT_FOUND
from homeassistant.setup import async_setup_component
from tests.common import MockConfigEntry
@pytest.fixture(name="mock_panel")
async def mock_panel_fixture():
    """Mock a Konnected Panel bridge."""
    with patch("konnected.Client", autospec=True) as konn_client:

        def mock_constructor(host, port, websession):
            """Fake the panel constructor."""
            konn_client.host = host
            konn_client.port = port
            return konn_client

        konn_client.side_effect = mock_constructor
        konn_client.ClientError = config_flow.CannotConnect
        konn_client.get_status.return_value = {
            "hwVersion": "2.3.0",
            "swVersion": "2.3.1",
            "heap": 10000,
            "uptime": 12222,
            "ip": "192.168.1.90",
            "port": 9123,
            "sensors": [],
            "actuators": [],
            "dht_sensors": [],
            "ds18b20_sensors": [],
            "mac": "11:22:33:44:55:66",
            "settings": {},
        }
        yield konn_client
async def test_config_schema(hass):
    """Test that config schema is imported properly."""
    config = {
        konnected.DOMAIN: {
            konnected.CONF_API_HOST: "http://1.1.1.1:8888",
            konnected.CONF_ACCESS_TOKEN: "abcdefgh",
            konnected.CONF_DEVICES: [{konnected.CONF_ID: "aabbccddeeff"}],
        }
    }
    assert konnected.CONFIG_SCHEMA(config) == {
        "konnected": {
            "access_token": "abcdefgh",
            "api_host": "http://1.1.1.1:8888",
            "devices": [
                {
                    "default_options": {
                        "blink": True,
                        "api_host": "http://1.1.1.1:8888",
                        "discovery": True,
                        "io": {
                            "1": "Disabled",
                            "10": "Disabled",
                            "11": "Disabled",
                            "12": "Disabled",
                            "2": "Disabled",
                            "3": "Disabled",
                            "4": "Disabled",
                            "5": "Disabled",
                            "6": "Disabled",
                            "7": "Disabled",
                            "8": "Disabled",
                            "9": "Disabled",
                            "alarm1": "Disabled",
                            "alarm2_out2": "Disabled",
                            "out": "Disabled",
                            "out1": "Disabled",
                        },
                    },
                    "id": "aabbccddeeff",
                }
            ],
        }
    }

    # check with host info
    config = {
        konnected.DOMAIN: {
            konnected.CONF_ACCESS_TOKEN: "abcdefgh",
            konnected.CONF_DEVICES: [
                {konnected.CONF_ID: "aabbccddeeff", "host": "192.168.1.1", "port": 1234}
            ],
        }
    }
    assert konnected.CONFIG_SCHEMA(config) == {
        "konnected": {
            "access_token": "abcdefgh",
            "devices": [
                {
                    "default_options": {
                        "blink": True,
                        "api_host": "",
                        "discovery": True,
                        "io": {
                            "1": "Disabled",
                            "10": "Disabled",
                            "11": "Disabled",
                            "12": "Disabled",
                            "2": "Disabled",
                            "3": "Disabled",
                            "4": "Disabled",
                            "5": "Disabled",
                            "6": "Disabled",
                            "7": "Disabled",
                            "8": "Disabled",
                            "9": "Disabled",
                            "alarm1": "Disabled",
                            "alarm2_out2": "Disabled",
                            "out": "Disabled",
                            "out1": "Disabled",
                        },
                    },
                    "id": "aabbccddeeff",
                    "host": "192.168.1.1",
                    "port": 1234,
                }
            ],
        }
    }

    # check pin to zone and multiple output
    config = {
        konnected.DOMAIN: {
            konnected.CONF_ACCESS_TOKEN: "abcdefgh",
            konnected.CONF_DEVICES: [
                {
                    konnected.CONF_ID: "aabbccddeeff",
                    "binary_sensors": [
                        {"pin": 2, "type": "door"},
                        {"zone": 1, "type": "door"},
                    ],
                    "switches": [
                        {
                            "zone": 3,
                            "name": "Beep Beep",
                            "momentary": 65,
                            "pause": 55,
                            "repeat": 4,
                        },
                        {
                            "zone": 3,
                            "name": "Warning",
                            "momentary": 100,
                            "pause": 100,
                            "repeat": -1,
                        },
                    ],
                }
            ],
        }
    }
    assert konnected.CONFIG_SCHEMA(config) == {
        "konnected": {
            "access_token": "abcdefgh",
            "devices": [
                {
                    "default_options": {
                        "blink": True,
                        "api_host": "",
                        "discovery": True,
                        "io": {
                            "1": "Binary Sensor",
                            "10": "Disabled",
                            "11": "Disabled",
                            "12": "Disabled",
                            "2": "Binary Sensor",
                            "3": "Switchable Output",
                            "4": "Disabled",
                            "5": "Disabled",
                            "6": "Disabled",
                            "7": "Disabled",
                            "8": "Disabled",
                            "9": "Disabled",
                            "alarm1": "Disabled",
                            "alarm2_out2": "Disabled",
                            "out": "Disabled",
                            "out1": "Disabled",
                        },
                        "binary_sensors": [
                            {"inverse": False, "type": "door", "zone": "2"},
                            {"inverse": False, "type": "door", "zone": "1"},
                        ],
                        "switches": [
                            {
                                "zone": "3",
                                "activation": "high",
                                "name": "Beep Beep",
                                "momentary": 65,
                                "pause": 55,
                                "repeat": 4,
                            },
                            {
                                "zone": "3",
                                "activation": "high",
                                "name": "Warning",
                                "momentary": 100,
                                "pause": 100,
                                "repeat": -1,
                            },
                        ],
                    },
                    "id": "aabbccddeeff",
                }
            ],
        }
    }
async def test_setup_with_no_config(hass):
    """Test that we do not discover anything or try to set up a Konnected panel."""
    assert await async_setup_component(hass, konnected.DOMAIN, {})

    # No flows started
    assert len(hass.config_entries.flow.async_progress()) == 0

    # Nothing saved from configuration.yaml
    assert hass.data[konnected.DOMAIN][konnected.CONF_ACCESS_TOKEN] is None
    assert hass.data[konnected.DOMAIN][konnected.CONF_API_HOST] is None
    assert konnected.YAML_CONFIGS not in hass.data[konnected.DOMAIN]
async def test_setup_defined_hosts_known_auth(hass, mock_panel):
    """Test we don't initiate a config entry if configured panel is known."""
    MockConfigEntry(
        domain="konnected",
        unique_id="112233445566",
        data={"host": "0.0.0.0", "id": "112233445566"},
    ).add_to_hass(hass)
    MockConfigEntry(
        domain="konnected",
        unique_id="aabbccddeeff",
        data={"host": "1.2.3.4", "id": "aabbccddeeff"},
    ).add_to_hass(hass)

    assert (
        await async_setup_component(
            hass,
            konnected.DOMAIN,
            {
                konnected.DOMAIN: {
                    konnected.CONF_ACCESS_TOKEN: "abcdefgh",
                    konnected.CONF_DEVICES: [
                        {
                            config_flow.CONF_ID: "aabbccddeeff",
                            config_flow.CONF_HOST: "0.0.0.0",
                            config_flow.CONF_PORT: 1234,
                        }
                    ],
                }
            },
        )
        is True
    )

    assert hass.data[konnected.DOMAIN][konnected.CONF_ACCESS_TOKEN] == "abcdefgh"
    assert konnected.YAML_CONFIGS not in hass.data[konnected.DOMAIN]

    # Flow aborted
    assert len(hass.config_entries.flow.async_progress()) == 0
async def test_setup_defined_hosts_no_known_auth(hass):
    """Test we initiate config entry if config panel is not known."""
    assert (
        await async_setup_component(
            hass,
            konnected.DOMAIN,
            {
                konnected.DOMAIN: {
                    konnected.CONF_ACCESS_TOKEN: "abcdefgh",
                    konnected.CONF_DEVICES: [{konnected.CONF_ID: "aabbccddeeff"}],
                }
            },
        )
        is True
    )

    # Flow started for discovered bridge
    assert len(hass.config_entries.flow.async_progress()) == 1
async def test_setup_multiple(hass):
    """Test we initiate config entry for multiple panels."""
    assert (
        await async_setup_component(
            hass,
            konnected.DOMAIN,
            {
                konnected.DOMAIN: {
                    konnected.CONF_ACCESS_TOKEN: "arandomstringvalue",
                    konnected.CONF_API_HOST: "http://192.168.86.32:8123",
                    konnected.CONF_DEVICES: [
                        {
                            konnected.CONF_ID: "aabbccddeeff",
                            "binary_sensors": [
                                {"zone": 4, "type": "motion", "name": "Hallway Motion"},
                                {
                                    "zone": 5,
                                    "type": "window",
                                    "name": "Master Bedroom Window",
                                },
                                {
                                    "zone": 6,
                                    "type": "window",
                                    "name": "Downstairs Windows",
                                },
                            ],
                            "switches": [{"zone": "out", "name": "siren"}],
                        },
                        {
                            konnected.CONF_ID: "445566778899",
                            "binary_sensors": [
                                {"zone": 1, "type": "motion", "name": "Front"},
                                {"zone": 2, "type": "window", "name": "Back"},
                            ],
                            "switches": [
                                {
                                    "zone": "out",
                                    "name": "Buzzer",
                                    "momentary": 65,
                                    "pause": 55,
                                    "repeat": 4,
                                }
                            ],
                        },
                    ],
                }
            },
        )
        is True
    )

    # Flow started for discovered bridge
    assert len(hass.config_entries.flow.async_progress()) == 2

    # Globals saved
    assert (
        hass.data[konnected.DOMAIN][konnected.CONF_ACCESS_TOKEN] == "arandomstringvalue"
    )
    assert (
        hass.data[konnected.DOMAIN][konnected.CONF_API_HOST]
        == "http://192.168.86.32:8123"
    )
async def test_config_passed_to_config_entry(hass):
    """Test that configured options for a host are loaded via config entry."""
    entry = MockConfigEntry(
        domain=konnected.DOMAIN,
        data={config_flow.CONF_ID: "aabbccddeeff", config_flow.CONF_HOST: "0.0.0.0"},
    )
    entry.add_to_hass(hass)
    with patch.object(konnected, "AlarmPanel", autospec=True) as mock_int:
        assert (
            await async_setup_component(
                hass,
                konnected.DOMAIN,
                {
                    konnected.DOMAIN: {
                        konnected.CONF_ACCESS_TOKEN: "abcdefgh",
                        konnected.CONF_DEVICES: [{konnected.CONF_ID: "aabbccddeeff"}],
                    }
                },
            )
            is True
        )

    assert len(mock_int.mock_calls) == 3
    p_hass, p_entry = mock_int.mock_calls[0][1]

    assert p_hass is hass
    assert p_entry is entry
async def test_unload_entry(hass, mock_panel):
    """Test being able to unload an entry."""
    await async_process_ha_core_config(
        hass,
        {"internal_url": "http://example.local:8123"},
    )
    entry = MockConfigEntry(
        domain=konnected.DOMAIN, data={konnected.CONF_ID: "aabbccddeeff"}
    )
    entry.add_to_hass(hass)

    assert await async_setup_component(hass, konnected.DOMAIN, {}) is True
    assert hass.data[konnected.DOMAIN]["devices"].get("aabbccddeeff") is not None
    assert await konnected.async_unload_entry(hass, entry)
    assert hass.data[konnected.DOMAIN]["devices"] == {}
async def test_api(hass, aiohttp_client, mock_panel):
    """Test callback view."""
    await async_setup_component(hass, "http", {"http": {}})

    device_config = config_flow.CONFIG_ENTRY_SCHEMA(
        {
            "host": "1.2.3.4",
            "port": 1234,
            "id": "112233445566",
            "model": "Konnected Pro",
            "access_token": "abcdefgh",
            "api_host": "http://192.168.86.32:8123",
            "default_options": config_flow.OPTIONS_SCHEMA({config_flow.CONF_IO: {}}),
        }
    )
    device_options = config_flow.OPTIONS_SCHEMA(
        {
            "api_host": "http://192.168.86.32:8123",
            "io": {
                "1": "Binary Sensor",
                "2": "Binary Sensor",
                "3": "Binary Sensor",
                "4": "Digital Sensor",
                "5": "Digital Sensor",
                "6": "Switchable Output",
                "out": "Switchable Output",
            },
            "binary_sensors": [
                {"zone": "1", "type": "door"},
                {"zone": "2", "type": "window", "name": "winder", "inverse": True},
                {"zone": "3", "type": "door"},
            ],
            "sensors": [
                {"zone": "4", "type": "dht"},
                {"zone": "5", "type": "ds18b20", "name": "temper"},
            ],
            "switches": [
                {
                    "zone": "out",
                    "name": "switcher",
                    "activation": "low",
                    "momentary": 50,
                    "pause": 100,
                    "repeat": 4,
                },
                {"zone": "6"},
            ],
        }
    )
    entry = MockConfigEntry(
        domain="konnected",
        title="Konnected Alarm Panel",
        data=device_config,
        options=device_options,
    )
    entry.add_to_hass(hass)

    assert (
        await async_setup_component(
            hass,
            konnected.DOMAIN,
            {konnected.DOMAIN: {konnected.CONF_ACCESS_TOKEN: "globaltoken"}},
        )
        is True
    )

    client = await aiohttp_client(hass.http.app)

    # Test the get endpoint for switch status polling
    resp = await client.get("/api/konnected")
    assert resp.status == HTTP_NOT_FOUND  # no device provided

    resp = await client.get("/api/konnected/223344556677")
    assert resp.status == HTTP_NOT_FOUND  # unknown device provided

    resp = await client.get("/api/konnected/device/112233445566")
    assert resp.status == HTTP_NOT_FOUND  # no zone provided
    result = await resp.json()
    assert result == {"message": "Switch on zone or pin unknown not configured"}

    resp = await client.get("/api/konnected/device/112233445566?zone=8")
    assert resp.status == HTTP_NOT_FOUND  # invalid zone
    result = await resp.json()
    assert result == {"message": "Switch on zone or pin 8 not configured"}

    resp = await client.get("/api/konnected/device/112233445566?pin=12")
    assert resp.status == HTTP_NOT_FOUND  # invalid pin
    result = await resp.json()
    assert result == {"message": "Switch on zone or pin 12 not configured"}

    resp = await client.get("/api/konnected/device/112233445566?zone=out")
    assert resp.status == 200
    result = await resp.json()
    assert result == {"state": 1, "zone": "out"}

    resp = await client.get("/api/konnected/device/112233445566?pin=8")
    assert resp.status == 200
    result = await resp.json()
    assert result == {"state": 1, "pin": "8"}

    # Test the post endpoint for sensor updates
    resp = await client.post("/api/konnected/device", json={"zone": "1", "state": 1})
    assert resp.status == HTTP_NOT_FOUND

    resp = await client.post(
        "/api/konnected/device/112233445566", json={"zone": "1", "state": 1}
    )
    assert resp.status == 401
    result = await resp.json()
    assert result == {"message": "unauthorized"}

    resp = await client.post(
        "/api/konnected/device/223344556677",
        headers={"Authorization": "Bearer abcdefgh"},
        json={"zone": "1", "state": 1},
    )
    assert resp.status == 400

    resp = await client.post(
        "/api/konnected/device/112233445566",
        headers={"Authorization": "Bearer abcdefgh"},
        json={"zone": "15", "state": 1},
    )
    assert resp.status == 400
    result = await resp.json()
    assert result == {"message": "unregistered sensor/actuator"}

    resp = await client.post(
        "/api/konnected/device/112233445566",
        headers={"Authorization": "Bearer abcdefgh"},
        json={"zone": "1", "state": 1},
    )
    assert resp.status == 200
    result = await resp.json()
    assert result == {"message": "ok"}

    resp = await client.post(
        "/api/konnected/device/112233445566",
        headers={"Authorization": "Bearer globaltoken"},
        json={"zone": "1", "state": 1},
    )
    assert resp.status == 200
    result = await resp.json()
    assert result == {"message": "ok"}

    resp = await client.post(
        "/api/konnected/device/112233445566",
        headers={"Authorization": "Bearer abcdefgh"},
        json={"zone": "4", "temp": 22, "humi": 20},
    )
    assert resp.status == 200
    result = await resp.json()
    assert result == {"message": "ok"}

    # Test the put endpoint for sensor updates
    resp = await client.post(
        "/api/konnected/device/112233445566",
        headers={"Authorization": "Bearer abcdefgh"},
        json={"zone": "1", "state": 1},
    )
    assert resp.status == 200
    result = await resp.json()
    assert result == {"message": "ok"}
async def test_state_updates_zone(hass, aiohttp_client, mock_panel):
"""Test callback view."""
await async_process_ha_core_config(
hass,
{"internal_url": "http://example.local:8123"},
)
device_config = config_flow.CONFIG_ENTRY_SCHEMA(
{
"host": "1.2.3.4",
"port": 1234,
"id": "112233445566",
"model": "Konnected Pro",
"access_token": "abcdefgh",
"default_options": config_flow.OPTIONS_SCHEMA({config_flow.CONF_IO: {}}),
}
)
device_options = config_flow.OPTIONS_SCHEMA(
{
"io": {
"1": "Binary Sensor",
"2": "Binary Sensor",
"3": "Binary Sensor",
"4": "Digital Sensor",
"5": "Digital Sensor",
"6": "Switchable Output",
"out": "Switchable Output",
},
"binary_sensors": [
{"zone": "1", "type": "door"},
{"zone": "2", "type": "window", "name": "winder", "inverse": True},
{"zone": "3", "type": "door"},
],
"sensors": [
{"zone": "4", "type": "dht"},
{"zone": "5", "type": "ds18b20", "name": "temper"},
],
"switches": [
{
"zone": "out",
"name": "switcher",
"activation": "low",
"momentary": 50,
"pause": 100,
"repeat": 4,
},
{"zone": "6"},
],
}
)
entry = MockConfigEntry(
domain="konnected",
title="Konnected Alarm Panel",
data=device_config,
options=device_options,
)
entry.add_to_hass(hass)
# Add empty data field to ensure we process it correctly (possible if entry is ignored)
entry = MockConfigEntry(domain="konnected", title="Konnected Alarm Panel", data={})
entry.add_to_hass(hass)
assert (
await async_setup_component(
hass,
konnected.DOMAIN,
{konnected.DOMAIN: {konnected.CONF_ACCESS_TOKEN: "1122334455"}},
)
is True
)
client = await aiohttp_client(hass.http.app)
# Test updating a binary sensor
resp = await client.post(
"/api/konnected/device/112233445566",
headers={"Authorization": "Bearer abcdefgh"},
json={"zone": "1", "state": 0},
)
assert resp.status == 200
result = await resp.json()
assert result == {"message": "ok"}
await hass.async_block_till_done()
assert hass.states.get("binary_sensor.konnected_445566_zone_1").state == "off"
resp = await client.post(
"/api/konnected/device/112233445566",
headers={"Authorization": "Bearer abcdefgh"},
json={"zone": "1", "state": 1},
)
assert resp.status == 200
result = await resp.json()
assert result == {"message": "ok"}
await hass.async_block_till_done()
assert hass.states.get("binary_sensor.konnected_445566_zone_1").state == "on"
# Test updating dht sensor
resp = await client.post(
"/api/konnected/device/112233445566",
headers={"Authorization": "Bearer abcdefgh"},
json={"zone": "4", "temp": 22, "humi": 20},
)
assert resp.status == 200
result = await resp.json()
assert result == {"message": "ok"}
await hass.async_block_till_done()
assert hass.states.get("sensor.konnected_445566_sensor_4_humidity").state == "20"
assert (
hass.states.get("sensor.konnected_445566_sensor_4_temperature").state == "22.0"
)
resp = await client.post(
"/api/konnected/device/112233445566",
headers={"Authorization": "Bearer abcdefgh"},
json={"zone": "4", "temp": 25, "humi": 23},
)
assert resp.status == 200
result = await resp.json()
assert result == {"message": "ok"}
await hass.async_block_till_done()
assert hass.states.get("sensor.konnected_445566_sensor_4_humidity").state == "23"
assert (
hass.states.get("sensor.konnected_445566_sensor_4_temperature").state == "25.0"
)
# Test updating ds sensor
resp = await client.post(
"/api/konnected/device/112233445566",
headers={"Authorization": "Bearer abcdefgh"},
json={"zone": "5", "temp": 32, "addr": 1},
)
assert resp.status == 200
result = await resp.json()
assert result == {"message": "ok"}
await hass.async_block_till_done()
assert hass.states.get("sensor.temper_temperature").state == "32.0"
resp = await client.post(
"/api/konnected/device/112233445566",
headers={"Authorization": "Bearer abcdefgh"},
json={"zone": "5", "temp": 42, "addr": 1},
)
assert resp.status == 200
result = await resp.json()
assert result == {"message": "ok"}
await hass.async_block_till_done()
assert hass.states.get("sensor.temper_temperature").state == "42.0"
async def test_state_updates_pin(hass, aiohttp_client, mock_panel):
"""Test callback view."""
await async_process_ha_core_config(
hass,
{"internal_url": "http://example.local:8123"},
)
device_config = config_flow.CONFIG_ENTRY_SCHEMA(
{
"host": "1.2.3.4",
"port": 1234,
"id": "112233445566",
"model": "Konnected",
"access_token": "abcdefgh",
"default_options": config_flow.OPTIONS_SCHEMA({config_flow.CONF_IO: {}}),
}
)
device_options = config_flow.OPTIONS_SCHEMA(
{
"io": {
"1": "Binary Sensor",
"2": "Binary Sensor",
"3": "Binary Sensor",
"4": "Digital Sensor",
"5": "Digital Sensor",
"6": "Switchable Output",
"out": "Switchable Output",
},
"binary_sensors": [
{"zone": "1", "type": "door"},
{"zone": "2", "type": "window", "name": "winder", "inverse": True},
{"zone": "3", "type": "door"},
],
"sensors": [
{"zone": "4", "type": "dht"},
{"zone": "5", "type": "ds18b20", "name": "temper"},
],
"switches": [
{
"zone": "out",
"name": "switcher",
"activation": "low",
"momentary": 50,
"pause": 100,
"repeat": 4,
},
{"zone": "6"},
],
}
)
entry = MockConfigEntry(
domain="konnected",
title="Konnected Alarm Panel",
data=device_config,
options=device_options,
)
entry.add_to_hass(hass)
# Add empty data field to ensure we process it correctly (possible if entry is ignored)
entry = MockConfigEntry(
domain="konnected",
title="Konnected Alarm Panel",
data={},
)
entry.add_to_hass(hass)
assert (
await async_setup_component(
hass,
konnected.DOMAIN,
{konnected.DOMAIN: {konnected.CONF_ACCESS_TOKEN: "1122334455"}},
)
is True
)
client = await aiohttp_client(hass.http.app)
# Test updating a binary sensor
resp = await client.post(
"/api/konnected/device/112233445566",
headers={"Authorization": "Bearer abcdefgh"},
json={"pin": "1", "state": 0},
)
assert resp.status == 200
result = await resp.json()
assert result == {"message": "ok"}
await hass.async_block_till_done()
assert hass.states.get("binary_sensor.konnected_445566_zone_1").state == "off"
resp = await client.post(
"/api/konnected/device/112233445566",
headers={"Authorization": "Bearer abcdefgh"},
json={"pin": "1", "state": 1},
)
assert resp.status == 200
result = await resp.json()
assert result == {"message": "ok"}
await hass.async_block_till_done()
assert hass.states.get("binary_sensor.konnected_445566_zone_1").state == "on"
# Test updating dht sensor
resp = await client.post(
"/api/konnected/device/112233445566",
headers={"Authorization": "Bearer abcdefgh"},
json={"pin": "6", "temp": 22, "humi": 20},
)
assert resp.status == 200
result = await resp.json()
assert result == {"message": "ok"}
await hass.async_block_till_done()
assert hass.states.get("sensor.konnected_445566_sensor_4_humidity").state == "20"
assert (
hass.states.get("sensor.konnected_445566_sensor_4_temperature").state == "22.0"
)
resp = await client.post(
"/api/konnected/device/112233445566",
headers={"Authorization": "Bearer abcdefgh"},
json={"pin": "6", "temp": 25, "humi": 23},
)
assert resp.status == 200
result = await resp.json()
assert result == {"message": "ok"}
await hass.async_block_till_done()
assert hass.states.get("sensor.konnected_445566_sensor_4_humidity").state == "23"
assert (
hass.states.get("sensor.konnected_445566_sensor_4_temperature").state == "25.0"
)
# Test updating ds sensor
resp = await client.post(
"/api/konnected/device/112233445566",
headers={"Authorization": "Bearer abcdefgh"},
json={"pin": "7", "temp": 32, "addr": 1},
)
assert resp.status == 200
result = await resp.json()
assert result == {"message": "ok"}
await hass.async_block_till_done()
assert hass.states.get("sensor.temper_temperature").state == "32.0"
resp = await client.post(
"/api/konnected/device/112233445566",
headers={"Authorization": "Bearer abcdefgh"},
json={"pin": "7", "temp": 42, "addr": 1},
)
assert resp.status == 200
result = await resp.json()
assert result == {"message": "ok"}
await hass.async_block_till_done()
assert hass.states.get("sensor.temper_temperature").state == "42.0"
| 34.275114 | 91 | 0.478668 | 2,742 | 30,025 | 5.103939 | 0.103574 | 0.029725 | 0.028939 | 0.049303 | 0.837085 | 0.813648 | 0.787067 | 0.75727 | 0.725188 | 0.685673 | 0 | 0.05554 | 0.387144 | 30,025 | 875 | 92 | 34.314286 | 0.705016 | 0.027144 | 0 | 0.660079 | 0 | 0 | 0.213128 | 0.051736 | 0 | 0 | 0 | 0 | 0.125165 | 1 | 0.001318 | false | 0.001318 | 0.01054 | 0 | 0.013175 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
37ef59e2c4870fa735dbebd61973d4c001ce1b3c | 62 | py | Python | python/testData/testRunner/env/createConfigurationTest/aid/test/base.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/testRunner/env/createConfigurationTest/aid/test/base.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/testRunner/env/createConfigurationTest/aid/test/base.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | import unittest
class TestBase(unittest.TestCase):
pass | 10.333333 | 34 | 0.758065 | 7 | 62 | 6.714286 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177419 | 62 | 6 | 35 | 10.333333 | 0.921569 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
37f7968a3629d191846112d0961fa9827d89e762 | 57,998 | py | Python | geoapps/simpegPF/EM/FDEM/FieldsFDEM.py | RichardScottOZ/geoapps | 5b3c1d4fd11add45992e8b2497312ac014272b69 | [
"MIT"
] | null | null | null | geoapps/simpegPF/EM/FDEM/FieldsFDEM.py | RichardScottOZ/geoapps | 5b3c1d4fd11add45992e8b2497312ac014272b69 | [
"MIT"
] | null | null | null | geoapps/simpegPF/EM/FDEM/FieldsFDEM.py | RichardScottOZ/geoapps | 5b3c1d4fd11add45992e8b2497312ac014272b69 | [
"MIT"
] | null | null | null | import numpy as np
import scipy.sparse as sp
import geoapps.simpegPF as spf
from .. import Utils
from geoapps.simpegPF.EM.Utils import omega
from geoapps.simpegPF.Utils import Zero, Identity, sdiag
class FieldsFDEM(spf.Problem.Fields):
"""
Fancy Field Storage for an FDEM survey. Only one field type is stored for
each problem, the rest are computed. The fields object acts like an array
and is indexed by
.. code-block:: python
f = problem.fields(m)
e = f[srcList,'e']
b = f[srcList,'b']
If accessing all sources for a given field, use the :code:`:`
.. code-block:: python
f = problem.fields(m)
e = f[:,'e']
b = f[:,'b']
The array returned will be size (nE or nF, nSrcs :math:`\\times`
nFrequencies)
"""
knownFields = {}
dtype = complex
def _GLoc(self, fieldType):
"""Grid location of the fieldType"""
return self.aliasFields[fieldType][1]
def _e(self, solution, srcList):
"""
Total electric field is sum of primary and secondary
:param numpy.ndarray solution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: total electric field
"""
if (
getattr(self, "_ePrimary", None) is None
or getattr(self, "_eSecondary", None) is None
):
raise NotImplementedError(
"Getting e from {!s} is not implemented".format(
self.knownFields.keys()[0]
)
)
return self._ePrimary(solution, srcList) + self._eSecondary(solution, srcList)
def _b(self, solution, srcList):
"""
Total magnetic flux density is sum of primary and secondary
:param numpy.ndarray solution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: total magnetic flux density
"""
if (
getattr(self, "_bPrimary", None) is None
or getattr(self, "_bSecondary", None) is None
):
raise NotImplementedError(
"Getting b from {!s} is not implemented".format(
self.knownFields.keys()[0]
)
)
return self._bPrimary(solution, srcList) + self._bSecondary(solution, srcList)
def _bSecondary(self, solution, srcList):
"""
Total magnetic flux density is sum of primary and secondary
:param numpy.ndarray solution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: total magnetic flux density
"""
if getattr(self, "_bSecondary", None) is None:
raise NotImplementedError(
"Getting b from {} is not implemented".format(
self.knownFields.keys()[0]
)
)
return self._bSecondary(solution, srcList)
def _h(self, solution, srcList):
"""
Total magnetic field is sum of primary and secondary
:param numpy.ndarray solution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: total magnetic field
"""
if (
getattr(self, "_hPrimary", None) is None
or getattr(self, "_hSecondary", None) is None
):
raise NotImplementedError(
"Getting h from {!s} is not implemented".format(
self.knownFields.keys()[0]
)
)
return self._hPrimary(solution, srcList) + self._hSecondary(solution, srcList)
def _j(self, solution, srcList):
"""
Total current density is sum of primary and secondary
:param numpy.ndarray solution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: total current density
"""
if (
getattr(self, "_jPrimary", None) is None
or getattr(self, "_jSecondary", None) is None
):
raise NotImplementedError(
"Getting j from {!s} is not implemented".format(
self.knownFields.keys()[0]
)
)
return self._jPrimary(solution, srcList) + self._jSecondary(solution, srcList)
def _eDeriv(self, src, du_dm_v, v, adjoint=False):
r"""
Total derivative of e with respect to the inversion model. Returns
:math:`d\mathbf{e}/d\mathbf{m}` for forward and
(:math:`d\mathbf{e}/d\mathbf{u}`, :math:`d\mathbf{u}/d\mathbf{m}`)
for the adjoint
:param SimPEG.EM.FDEM.Src.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: derivative of the solution vector with
respect to the model times a vector (is None for adjoint)
:param numpy.ndarray v: vector to take sensitivity product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: derivative times a vector (or tuple for adjoint)
"""
if (
getattr(self, "_eDeriv_u", None) is None
or getattr(self, "_eDeriv_m", None) is None
):
raise NotImplementedError(
"Getting eDerivs from {!s} is not implemented".format(
self.knownFields.keys()[0]
)
)
if adjoint:
return (self._eDeriv_u(src, v, adjoint), self._eDeriv_m(src, v, adjoint))
return np.array(
self._eDeriv_u(src, du_dm_v, adjoint) + self._eDeriv_m(src, v, adjoint),
dtype=complex,
)
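# The structure above is shared by all the *Deriv methods: in forward mode
# the chain rule d(e)/d(m) = (de/du)(du/dm) + de/dm is returned as a sum,
# while in adjoint mode the two transposed factors are returned as a tuple
# so the caller can apply them separately.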
def _bDeriv(self, src, du_dm_v, v, adjoint=False):
r"""
Total derivative of b with respect to the inversion model. Returns
:math:`d\mathbf{b}/d\mathbf{m}` for forward and
(:math:`d\mathbf{b}/d\mathbf{u}`, :math:`d\mathbf{u}/d\mathbf{m}`) for
the adjoint
:param SimPEG.EM.FDEM.Src.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: derivative of the solution vector with
respect to the model times a vector (is None for adjoint)
:param numpy.ndarray v: vector to take sensitivity product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: derivative times a vector (or tuple for adjoint)
"""
if (
getattr(self, "_bDeriv_u", None) is None
or getattr(self, "_bDeriv_m", None) is None
):
raise NotImplementedError(
"Getting bDerivs from {!s} is not implemented".format(
self.knownFields.keys()[0]
)
)
if adjoint:
return (self._bDeriv_u(src, v, adjoint), self._bDeriv_m(src, v, adjoint))
return np.array(
self._bDeriv_u(src, du_dm_v, adjoint) + self._bDeriv_m(src, v, adjoint),
dtype=complex,
)
def _bSecondaryDeriv(self, src, du_dm_v, v, adjoint=False):
r"""
Total derivative of b with respect to the inversion model. Returns
:math:`d\mathbf{b}/d\mathbf{m}` for forward and
(:math:`d\mathbf{b}/d\mathbf{u}`, :math:`d\mathbf{u}/d\mathbf{m}`) for
the adjoint
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: derivative of the solution vector with
respect to the model times a vector (is None for adjoint)
:param numpy.ndarray v: vector to take sensitivity product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: derivative times a vector (or tuple for adjoint)
"""
# TODO: modify when primary field is dependent on m
return self._bDeriv(src, du_dm_v, v, adjoint=adjoint)
def _hDeriv(self, src, du_dm_v, v, adjoint=False):
r"""
Total derivative of h with respect to the inversion model. Returns
:math:`d\mathbf{h}/d\mathbf{m}` for forward and
(:math:`d\mathbf{h}/d\mathbf{u}`, :math:`d\mathbf{u}/d\mathbf{m}`)
for the adjoint
:param SimPEG.EM.FDEM.Src.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: derivative of the solution vector with
respect to the model times a vector (is None for adjoint)
:param numpy.ndarray v: vector to take sensitivity product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: derivative times a vector (or tuple for adjoint)
"""
if (
getattr(self, "_hDeriv_u", None) is None
or getattr(self, "_hDeriv_m", None) is None
):
raise NotImplementedError(
"Getting hDerivs from {!s} is not implemented".format(
self.knownFields.keys()[0]
)
)
if adjoint:
return (self._hDeriv_u(src, v, adjoint), self._hDeriv_m(src, v, adjoint))
return np.array(
self._hDeriv_u(src, du_dm_v, adjoint) + self._hDeriv_m(src, v, adjoint),
dtype=complex,
)
def _jDeriv(self, src, du_dm_v, v, adjoint=False):
r"""
Total derivative of j with respect to the inversion model. Returns
:math:`d\mathbf{j}/d\mathbf{m}` for forward and
(:math:`d\mathbf{j}/d\mathbf{u}`, :math:`d\mathbf{u}/d\mathbf{m}`) for
the adjoint
:param SimPEG.EM.FDEM.Src.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: derivative of the solution vector with
respect to the model times a vector (is None for adjoint)
:param numpy.ndarray v: vector to take sensitivity product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: derivative times a vector (or tuple for adjoint)
"""
if (
getattr(self, "_jDeriv_u", None) is None
or getattr(self, "_jDeriv_m", None) is None
):
raise NotImplementedError(
"Getting jDerivs from {!s} is not implemented".format(
self.knownFields.keys()[0]
)
)
if adjoint:
return (self._jDeriv_u(src, v, adjoint), self._jDeriv_m(src, v, adjoint))
return np.array(
self._jDeriv_u(src, du_dm_v, adjoint) + self._jDeriv_m(src, v, adjoint),
dtype=complex,
)
class Fields3D_e(FieldsFDEM):
"""
Fields object for Problem3D_e.
:param discretize.BaseMesh.BaseMesh mesh: mesh
:param SimPEG.EM.FDEM.SurveyFDEM.Survey survey: survey
"""
knownFields = {"eSolution": "E"}
aliasFields = {
"e": ["eSolution", "E", "_e"],
"ePrimary": ["eSolution", "E", "_ePrimary"],
"eSecondary": ["eSolution", "E", "_eSecondary"],
"b": ["eSolution", "F", "_b"],
"bPrimary": ["eSolution", "F", "_bPrimary"],
"bSecondary": ["eSolution", "F", "_bSecondary"],
"j": ["eSolution", "CCV", "_j"],
"h": ["eSolution", "CCV", "_h"],
}
def startup(self):
self.prob = self.survey.prob
self._edgeCurl = self.survey.prob.mesh.edgeCurl
self._aveE2CCV = self.survey.prob.mesh.aveE2CCV
self._aveF2CCV = self.survey.prob.mesh.aveF2CCV
self._nC = self.survey.prob.mesh.nC
self._MeSigma = self.survey.prob.MeSigma
self._MeSigmaDeriv = self.survey.prob.MeSigmaDeriv
self._MfMui = self.survey.prob.MfMui
self._MfMuiDeriv = self.survey.prob.MfMuiDeriv
def _GLoc(self, fieldType):
if fieldType in ["e", "eSecondary", "ePrimary"]:
return "E"
elif fieldType in ["b", "bSecondary", "bPrimary"]:
return "F"
elif (fieldType == "h") or (fieldType == "j"):
return "CCV"
else:
raise Exception("Field type must be e, b, h, j")
def _ePrimary(self, eSolution, srcList):
"""
Primary electric field from source
:param numpy.ndarray eSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: primary electric field as defined by the sources
"""
ePrimary = np.zeros([self.prob.mesh.nE, len(srcList)], dtype=complex)
for i, src in enumerate(srcList):
ep = src.ePrimary(self.prob)
ePrimary[:, i] = ePrimary[:, i] + ep
return ePrimary
def _eSecondary(self, eSolution, srcList):
"""
Secondary electric field is the thing we solved for
:param numpy.ndarray eSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: secondary electric field
"""
return eSolution
def _eDeriv_u(self, src, v, adjoint=False):
"""
Partial derivative of the total electric field with respect to the
thing we solved for.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the electric field with respect
to the field we solved for with a vector
"""
return Identity() * v
def _eDeriv_m(self, src, v, adjoint=False):
"""
Partial derivative of the total electric field with respect to the
inversion model. Here, we assume that the primary does not depend on
the model. Note that this also includes derivative contributions from
the sources.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: SimPEG.Utils.Zero
:return: product of the electric field derivative with respect to the
inversion model with a vector
"""
return src.ePrimaryDeriv(self.prob, v, adjoint)
def _bPrimary(self, eSolution, srcList):
"""
Primary magnetic flux density from source
:param numpy.ndarray eSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: primary magnetic flux density as defined by the sources
"""
bPrimary = np.zeros(
[self._edgeCurl.shape[0], eSolution.shape[1]], dtype=complex
)
for i, src in enumerate(srcList):
bp = src.bPrimary(self.prob)
bPrimary[:, i] = bPrimary[:, i] + bp
return bPrimary
def _bSecondary(self, eSolution, srcList):
"""
Secondary magnetic flux density from eSolution
:param numpy.ndarray eSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: secondary magnetic flux density
"""
C = self._edgeCurl
b = C * eSolution
for i, src in enumerate(srcList):
b[:, i] *= -1.0 / (1j * omega(src.freq)) # freq depends on the source
s_m = src.s_m(self.prob)
b[:, i] = b[:, i] + 1.0 / (1j * omega(src.freq)) * s_m
return b
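# This assembles the discrete Faraday law C e = -1j * omega * b + s_m,
# rearranged as b = (s_m - C e) / (1j * omega), one column per source
# (each source carries its own frequency).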
def _bDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Derivative of the magnetic flux density with respect to the thing we
solved for
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the magnetic flux density with
respect to the field we solved for with a vector
"""
C = self._edgeCurl
if adjoint:
return -1.0 / (1j * omega(src.freq)) * (C.T * du_dm_v)
return -1.0 / (1j * omega(src.freq)) * (C * du_dm_v)
def _bDeriv_m(self, src, v, adjoint=False):
"""
Derivative of the magnetic flux density with respect to the inversion
model.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the magnetic flux density derivative with respect
to the inversion model with a vector
"""
return self._bDeriv_src(src, v, adjoint=adjoint)
def _bDeriv_src(self, src, v, adjoint=False):
s_mDeriv = src.s_mDeriv(self.prob, v, adjoint)
return 1.0 / (1j * omega(src.freq)) * s_mDeriv + src.bPrimaryDeriv(
self.prob, v, adjoint
)
def _j(self, eSolution, srcList):
"""
Current density from eSolution
:param numpy.ndarray eSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: current density
"""
aveE2CCV = self._aveE2CCV
# number of components (instead of checking if cyl or not)
n = int(aveE2CCV.shape[0] / self._nC)
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
return VI * (aveE2CCV * (self._MeSigma * self._e(eSolution, srcList)))
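# VI is a diagonal matrix of reciprocal cell volumes, tiled once per vector
# component; it undoes the volume weighting baked into the MeSigma
# inner-product matrix after edge values are averaged to cell-centered
# vectors. The same pattern recurs in the other cell-centered aliases below.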
def _jDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Derivative of the current density with respect to the thing we solved
for
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the current density with respect
to the field we solved for with a vector
"""
# number of components (instead of checking if cyl or not)
n = int(self._aveE2CCV.shape[0] / self._nC)
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
if adjoint:
return self._eDeriv_u(
src,
self._MeSigma.T * (self._aveE2CCV.T * (VI.T * du_dm_v)),
adjoint=adjoint,
)
return VI * (
self._aveE2CCV
* (self._MeSigma * (self._eDeriv_u(src, du_dm_v, adjoint=adjoint)))
)
def _jDeriv_m(self, src, v, adjoint=False):
"""
Derivative of the current density with respect to the inversion model.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the current density derivative with respect to the
inversion model with a vector
"""
e = self[src, "e"]
n = int(self._aveE2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
if adjoint:
return (
self._MeSigmaDeriv(e).T * (self._aveE2CCV.T * (VI.T * v))
+ self._eDeriv_m(src, self._aveE2CCV.T * (VI.T * v), adjoint=adjoint)
) + src.jPrimaryDeriv(self.prob, v, adjoint)
return (
VI
* (
self._aveE2CCV
* (self._eDeriv_m(src, v, adjoint=adjoint) + self._MeSigmaDeriv(e) * v)
)
) + src.jPrimaryDeriv(self.prob, v, adjoint)
def _h(self, eSolution, srcList):
"""
Magnetic field from eSolution
:param numpy.ndarray eSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: magnetic field
"""
n = int(self._aveF2CCV.shape[0] / self._nC) # Number of Components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
return VI * (self._aveF2CCV * (self._MfMui * self._b(eSolution, srcList)))
def _hDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Derivative of the magnetic field with respect to the thing we solved
for
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the magnetic field with respect
to the field we solved for with a vector
"""
n = int(self._aveF2CCV.shape[0] / self._nC) # Number of Components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
if adjoint:
v = self._MfMui.T * (self._aveF2CCV.T * (VI.T * du_dm_v))
return self._bDeriv_u(src, v, adjoint=adjoint)
return VI * (
self._aveF2CCV
* (self._MfMui * self._bDeriv_u(src, du_dm_v, adjoint=adjoint))
)
def _hDeriv_mui(self, src, v, adjoint=False):
n = int(self._aveF2CCV.shape[0] / self._nC) # Number of Components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
if adjoint:
return self._MfMuiDeriv(self[src, "b"]).T * (self._aveF2CCV.T * (VI.T * v))
return VI * (self._aveF2CCV * (self._MfMuiDeriv(self[src, "b"]) * v))
def _hDeriv_m(self, src, v, adjoint=False):
"""
Derivative of the magnetic field with respect to the inversion model.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the magnetic field derivative with respect to the
inversion model with a vector
"""
n = int(self._aveF2CCV.shape[0] / self._nC) # Number of Components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
if adjoint:
return self._bDeriv_m(
src, self._MfMui.T * (self._aveF2CCV.T * (VI.T * v)), adjoint=adjoint
) + self._hDeriv_mui(src, v, adjoint=adjoint)
return (
VI
* (self._aveF2CCV * (self._MfMui * self._bDeriv_m(src, v, adjoint=adjoint)))
) + self._hDeriv_mui(src, v, adjoint=adjoint)
class Fields3D_b(FieldsFDEM):
"""
Fields object for Problem3D_b.
:param discretize.BaseMesh.BaseMesh mesh: mesh
:param SimPEG.EM.FDEM.SurveyFDEM.Survey survey: survey
"""
knownFields = {"bSolution": "F"}
aliasFields = {
"b": ["bSolution", "F", "_b"],
"bPrimary": ["bSolution", "F", "_bPrimary"],
"bSecondary": ["bSolution", "F", "_bSecondary"],
"e": ["bSolution", "E", "_e"],
"ePrimary": ["bSolution", "E", "_ePrimary"],
"eSecondary": ["bSolution", "E", "_eSecondary"],
"j": ["bSolution", "CCV", "_j"],
"h": ["bSolution", "CCV", "_h"],
}
def startup(self):
self.prob = self.survey.prob
self._edgeCurl = self.survey.prob.mesh.edgeCurl
self._MeSigma = self.survey.prob.MeSigma
self._MeSigmaI = self.survey.prob.MeSigmaI
self._MfMui = self.survey.prob.MfMui
self._MfMuiDeriv = self.survey.prob.MfMuiDeriv
self._MeSigmaDeriv = self.survey.prob.MeSigmaDeriv
self._MeSigmaIDeriv = self.survey.prob.MeSigmaIDeriv
self._Me = self.survey.prob.Me
self._aveF2CCV = self.survey.prob.mesh.aveF2CCV
self._aveE2CCV = self.survey.prob.mesh.aveE2CCV
self._sigma = self.survey.prob.sigma
self._mui = self.survey.prob.mui
self._nC = self.survey.prob.mesh.nC
def _GLoc(self, fieldType):
if fieldType in ["e", "eSecondary", "ePrimary"]:
return "E"
elif fieldType in ["b", "bSecondary", "bPrimary"]:
return "F"
elif (fieldType == "h") or (fieldType == "j"):
return "CCV"
else:
raise Exception("Field type must be e, b, h, j")
def _bPrimary(self, bSolution, srcList):
"""
Primary magnetic flux density from source
:param numpy.ndarray bSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: primary magnetic flux density as defined by the sources
"""
bPrimary = np.zeros([self.prob.mesh.nF, len(srcList)], dtype=complex)
for i, src in enumerate(srcList):
bp = src.bPrimary(self.prob)
bPrimary[:, i] = bPrimary[:, i] + bp
return bPrimary
def _bSecondary(self, bSolution, srcList):
"""
Secondary magnetic flux density is the thing we solved for
:param numpy.ndarray bSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: secondary magnetic flux density
"""
return bSolution
def _bDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Partial derivative of the total magnetic flux density with respect to
the thing we solved for.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the magnetic flux density with
respect to the field we solved for with a vector
"""
return Identity() * du_dm_v
def _bDeriv_m(self, src, v, adjoint=False):
"""
Partial derivative of the total magnetic flux density with respect to
the inversion model. Here, we assume that the primary does not depend
on the model. Note that this also includes derivative contributions
from the sources.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: SimPEG.Utils.Zero
:return: product of the magnetic flux density derivative with respect
to the inversion model with a vector
"""
# assuming primary does not depend on the model
return Zero()
def _ePrimary(self, bSolution, srcList):
"""
Primary electric field from source
:param numpy.ndarray bSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: primary electric field as defined by the sources
"""
ePrimary = np.zeros(
[self._edgeCurl.shape[1], bSolution.shape[1]], dtype=complex
)
for i, src in enumerate(srcList):
ep = src.ePrimary(self.prob)
ePrimary[:, i] = ePrimary[:, i] + ep
return ePrimary
def _eSecondary(self, bSolution, srcList):
"""
Secondary electric field from bSolution
:param numpy.ndarray bSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: secondary electric field
"""
e = self._edgeCurl.T * (self._MfMui * bSolution)
for i, src in enumerate(srcList):
s_e = src.s_e(self.prob)
e[:, i] = e[:, i] - s_e
return self._MeSigmaI * e
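# This is the discrete Ampere law C.T MfMui b - MeSigma e = s_e solved for
# the electric field: e = MeSigmaI (C.T MfMui b - s_e).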
def _eDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Derivative of the electric field with respect to the thing we solved
for
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the electric field with respect
to the field we solved for with a vector
"""
if not adjoint:
return self._MeSigmaI * (self._edgeCurl.T * (self._MfMui * du_dm_v))
return self._MfMui.T * (self._edgeCurl * (self._MeSigmaI.T * du_dm_v))
def _eDeriv_m(self, src, v, adjoint=False):
"""
Derivative of the electric field with respect to the inversion model
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the electric field with respect
to the model with a vector
"""
bSolution = Utils.mkvc(self[src, "bSolution"])
s_e = src.s_e(self.prob)
w = -s_e + self._edgeCurl.T * (self._MfMui * bSolution)
if adjoint:
s_eDeriv = src.s_eDeriv(self.prob, self._MeSigmaI.T * v, adjoint)
return (
self._MeSigmaIDeriv(w).T * v
+ self._MfMuiDeriv(bSolution).T
* (self._edgeCurl * (self._MeSigmaI.T * v))
- s_eDeriv
+ src.ePrimaryDeriv(self.prob, v, adjoint)
)
s_eDeriv = src.s_eDeriv(self.prob, v, adjoint)
return (
self._MeSigmaIDeriv(w) * v
+ self._MeSigmaI * (self._edgeCurl.T * (self._MfMuiDeriv(bSolution) * v))
- self._MeSigmaI * s_eDeriv
+ src.ePrimaryDeriv(self.prob, v, adjoint)
)
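# Product rule on e = MeSigmaI (C.T MfMui b - s_e): MeSigmaI, MfMui and
# s_e may each depend on the model, hence the three derivative terms
# assembled above (and their transposes in adjoint mode).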
def _j(self, bSolution, srcList):
"""
Secondary current density from bSolution
:param numpy.ndarray bSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: secondary current density
"""
n = int(self._aveE2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
j = self._edgeCurl.T * (self._MfMui * bSolution)
for i, src in enumerate(srcList):
s_e = src.s_e(self.prob)
j[:, i] = j[:, i] - s_e
return VI * (self._aveE2CCV * j)
def _jDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Partial derivative of the current density with respect to the thing we
solved for.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the current density with respect
to the field we solved for with a vector
"""
n = int(self._aveE2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
if adjoint:
return self._MfMui.T * (
self._edgeCurl * (self._aveE2CCV.T * (VI.T * du_dm_v))
)
return VI * (self._aveE2CCV * (self._edgeCurl.T * (self._MfMui * du_dm_v)))
# NOTE: the source term s_e is not differentiated here (known omission)
def _jDeriv_mui(self, src, v, adjoint=False):
n = int(self._aveE2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
MfMuiDeriv = self._MfMuiDeriv(self[src, "b"])
if adjoint:
return MfMuiDeriv.T * (self._edgeCurl * (self._aveE2CCV.T * (VI.T * v)))
return VI * (self._aveE2CCV * (self._edgeCurl.T * (MfMuiDeriv * v)))
def _jDeriv_m(self, src, v, adjoint=False):
"""
Derivative of the current density with respect to the inversion model
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the current density with respect
to the model with a vector
"""
return self._jDeriv_mui(src, v, adjoint)
def _h(self, bSolution, srcList):
"""
Magnetic field from bSolution
:param numpy.ndarray bSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: magnetic field
"""
n = int(self._aveF2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
return VI * (self._aveF2CCV * (self._MfMui * self._b(bSolution, srcList)))
def _hDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Partial derivative of the magnetic field with respect to the thing we
solved for.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the magnetic field with respect
to the field we solved for with a vector
"""
n = int(self._aveF2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
if adjoint:
return self._MfMui.T * (self._aveF2CCV.T * (VI.T * du_dm_v))
return VI * (self._aveF2CCV * (self._MfMui * du_dm_v))
def _hDeriv_mui(self, src, v, adjoint=False):
b = self[src, "b"]
n = int(self._aveF2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
if adjoint:
return self._MfMuiDeriv(b).T * (self._aveF2CCV.T * (VI * v))
return VI * (self._aveF2CCV * (self._MfMuiDeriv(b) * v))
def _hDeriv_m(self, src, v, adjoint=False):
"""
Derivative of the magnetic field with respect to the inversion model
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the magnetic field with respect
to the model with a vector
"""
return src.hPrimaryDeriv(self.prob, v, adjoint) + self._hDeriv_mui(
src, v, adjoint
)
class Fields3D_j(FieldsFDEM):
"""
Fields object for Problem3D_j.
:param discretize.BaseMesh.BaseMesh mesh: mesh
:param SimPEG.EM.FDEM.SurveyFDEM.Survey survey: survey
"""
knownFields = {"jSolution": "F"}
aliasFields = {
"j": ["jSolution", "F", "_j"],
"jPrimary": ["jSolution", "F", "_jPrimary"],
"jSecondary": ["jSolution", "F", "_jSecondary"],
"h": ["jSolution", "E", "_h"],
"hPrimary": ["jSolution", "E", "_hPrimary"],
"hSecondary": ["jSolution", "E", "_hSecondary"],
"e": ["jSolution", "CCV", "_e"],
"b": ["jSolution", "CCV", "_b"],
}
def startup(self):
self.prob = self.survey.prob
self._edgeCurl = self.survey.prob.mesh.edgeCurl
self._MeMu = self.survey.prob.MeMu
self._MeMuI = self.survey.prob.MeMuI
self._MeMuIDeriv = self.survey.prob.MeMuIDeriv
self._MfRho = self.survey.prob.MfRho
self._MfRhoDeriv = self.survey.prob.MfRhoDeriv
self._rho = self.survey.prob.rho
self._mu = self.survey.prob.mui
self._aveF2CCV = self.survey.prob.mesh.aveF2CCV
self._aveE2CCV = self.survey.prob.mesh.aveE2CCV
self._nC = self.survey.prob.mesh.nC
def _GLoc(self, fieldType):
if fieldType in ["h", "hSecondary", "hPrimary"]:
return "E"
elif fieldType in ["j", "jSecondary", "jPrimary"]:
return "F"
elif (fieldType == "e") or (fieldType == "b"):
return "CCV"
else:
raise Exception("Field type must be e, b, h, j")
def _jPrimary(self, jSolution, srcList):
"""
Primary current density from source
:param numpy.ndarray jSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: primary current density as defined by the sources
"""
jPrimary = np.zeros_like(jSolution, dtype=complex)
for i, src in enumerate(srcList):
jp = src.jPrimary(self.prob)
jPrimary[:, i] = jPrimary[:, i] + jp
return jPrimary
def _jSecondary(self, jSolution, srcList):
"""
Secondary current density is the thing we solved for
:param numpy.ndarray jSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: secondary current density
"""
return jSolution
def _j(self, jSolution, srcList):
"""
Total current density is sum of primary and secondary
:param numpy.ndarray jSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: total current density
"""
return self._jPrimary(jSolution, srcList) + self._jSecondary(jSolution, srcList)
def _jDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Partial derivative of the total current density with respect to the
thing we solved for.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the current density with respect
to the field we solved for with a vector
"""
return Identity() * du_dm_v
def _jDeriv_m(self, src, v, adjoint=False):
"""
Partial derivative of the total current density with respect to the
inversion model. Here, we assume that the primary does not depend on
the model. Note that this also includes derivative contributions from
the sources.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: SimPEG.Utils.Zero
:return: product of the current density derivative with respect to the
inversion model with a vector
"""
# assuming primary does not depend on the model
return src.jPrimaryDeriv(self.prob, v, adjoint)
def _hPrimary(self, jSolution, srcList):
"""
Primary magnetic field from source
:param numpy.ndarray jSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: primary magnetic field as defined by the sources
"""
hPrimary = np.zeros(
[self._edgeCurl.shape[1], jSolution.shape[1]], dtype=complex
)
for i, src in enumerate(srcList):
hp = src.hPrimary(self.prob)
hPrimary[:, i] = hPrimary[:, i] + hp
return hPrimary
def _hSecondary(self, jSolution, srcList):
"""
Secondary magnetic field from jSolution
:param numpy.ndarray jSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: secondary magnetic field
"""
h = self._edgeCurl.T * (self._MfRho * jSolution)
for i, src in enumerate(srcList):
h[:, i] *= -1.0 / (1j * omega(src.freq))
s_m = src.s_m(self.prob)
h[:, i] = h[:, i] + 1.0 / (1j * omega(src.freq)) * s_m
return self._MeMuI * h
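# Discrete Faraday law in the (h, j) formulation:
# h = MeMuI (s_m - C.T MfRho j) / (1j * omega), assembled column by column.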
def _hDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Derivative of the magnetic field with respect to the thing we solved
for
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the magnetic field with respect
to the field we solved for with a vector
"""
if adjoint:
return (
-1.0
/ (1j * omega(src.freq))
* self._MfRho.T
* (self._edgeCurl * (self._MeMuI.T * du_dm_v))
)
return (
-1.0
/ (1j * omega(src.freq))
* self._MeMuI
* (self._edgeCurl.T * (self._MfRho * du_dm_v))
)
def _hDeriv_m(self, src, v, adjoint=False):
"""
Derivative of the magnetic field with respect to the inversion model
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the magnetic field with respect
to the model with a vector
"""
jSolution = Utils.mkvc(self[[src], "jSolution"])
MeMuI = self._MeMuI
MeMuIDeriv = self._MeMuIDeriv
C = self._edgeCurl
MfRho = self._MfRho
MfRhoDeriv = self._MfRhoDeriv
s_m = src.s_m(self.prob)
def s_mDeriv(v):
return src.s_mDeriv(self.prob, v, adjoint=adjoint)
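# Product rule applied to h = MeMuI (s_m - C.T MfRho j) / (1j * omega):
# MeMuI, MfRho and s_m may all depend on the model, hence the three
# derivative terms below (and their transposes in adjoint mode).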
if not adjoint:
hDeriv_m = (
1.0
/ (1j * omega(src.freq))
* (
-1.0
* (
MeMuI * (C.T * (MfRhoDeriv(jSolution) * v))
+ MeMuIDeriv(C.T * (MfRho * jSolution)) * v
)
+ MeMuI * s_mDeriv(v)
+ MeMuIDeriv(s_m) * v
)
)
elif adjoint:
hDeriv_m = (
1.0
/ (1j * omega(src.freq))
* (
(
-1.0
* (
MfRhoDeriv(jSolution).T * (C * (MeMuI.T * v))
+ MeMuIDeriv(C.T * (MfRho * jSolution)).T * v
)
)
+ s_mDeriv(MeMuI.T * v)
+ MeMuIDeriv(s_m).T * v
)
)
return hDeriv_m + src.hPrimaryDeriv(self.prob, v, adjoint)
def _e(self, jSolution, srcList):
"""
Electric field from jSolution
:param numpy.ndarray jSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: electric field
"""
n = int(self._aveF2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
return VI * (self._aveF2CCV * (self._MfRho * self._j(jSolution, srcList)))
def _eDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Derivative of the electric field with respect to the thing we solved
for
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the electric field with respect
to the field we solved for with a vector
"""
n = int(self._aveF2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
if adjoint:
return self._MfRho.T * (self._aveF2CCV.T * (VI.T * du_dm_v))
return VI * (self._aveF2CCV * (self._MfRho * du_dm_v))
def _eDeriv_m(self, src, v, adjoint=False):
"""
Derivative of the electric field with respect to the inversion model
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the electric field with respect
to the model with a vector
"""
jSolution = Utils.mkvc(self[src, "jSolution"])
n = int(self._aveF2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
if adjoint:
return self._MfRhoDeriv(jSolution).T * (
self._aveF2CCV.T * (VI.T * v)
) + src.ePrimaryDeriv(self.prob, v, adjoint)
return VI * (
self._aveF2CCV * (self._MfRhoDeriv(jSolution) * v)
) + src.ePrimaryDeriv(self.prob, v, adjoint)
def _b(self, jSolution, srcList):
"""
Secondary magnetic flux density from jSolution
:param numpy.ndarray jSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: secondary magnetic flux density
"""
n = int(self._aveE2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
return VI * (self._aveE2CCV * (self._MeMu * self._h(jSolution, srcList)))
def _bDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Derivative of the magnetic flux density with respect to the thing we
solved for
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the magnetic flux density with
respect to the field we solved for with a vector
"""
n = int(self._aveE2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
if adjoint:
return (
-1.0
/ (1j * omega(src.freq))
* self._MfRho.T
* (self._edgeCurl * (self._aveE2CCV.T * (VI.T * du_dm_v)))
)
return (
-1.0
/ (1j * omega(src.freq))
* VI
* (self._aveE2CCV * (self._edgeCurl.T * (self._MfRho * du_dm_v)))
)
def _bDeriv_m(self, src, v, adjoint=False):
"""
Derivative of the magnetic flux density with respect to the inversion
model
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the magnetic flux density with
respect to the model with a vector
"""
jSolution = self[src, "jSolution"]
n = int(self._aveE2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
def s_mDeriv(v):
return src.s_mDeriv(self.prob, v, adjoint=adjoint)
if adjoint:
v = self._aveE2CCV.T * (VI.T * v)
return 1.0 / (1j * omega(src.freq)) * (
s_mDeriv(v) - self._MfRhoDeriv(jSolution).T * (self._edgeCurl * v)
) + src.bPrimaryDeriv(self.prob, v, adjoint)
return 1.0 / (1j * omega(src.freq)) * VI * (
self._aveE2CCV
* (s_mDeriv(v) - self._edgeCurl.T * (self._MfRhoDeriv(jSolution) * v))
) + src.bPrimaryDeriv(self.prob, v, adjoint)
class Fields3D_h(FieldsFDEM):
"""
Fields object for Problem3D_h.
:param discretize.BaseMesh.BaseMesh mesh: mesh
:param SimPEG.EM.FDEM.SurveyFDEM.Survey survey: survey
"""
knownFields = {"hSolution": "E"}
aliasFields = {
"h": ["hSolution", "E", "_h"],
"hPrimary": ["hSolution", "E", "_hPrimary"],
"hSecondary": ["hSolution", "E", "_hSecondary"],
"j": ["hSolution", "F", "_j"],
"jPrimary": ["hSolution", "F", "_jPrimary"],
"jSecondary": ["hSolution", "F", "_jSecondary"],
"e": ["hSolution", "CCV", "_e"],
"b": ["hSolution", "CCV", "_b"],
}
def startup(self):
self.prob = self.survey.prob
self._edgeCurl = self.survey.prob.mesh.edgeCurl
self._MeMu = self.survey.prob.MeMu
self._MeMuDeriv = self.survey.prob.MeMuDeriv
# self._MeMuI = self.survey.prob.MeMuI
self._MfRho = self.survey.prob.MfRho
self._MfRhoDeriv = self.survey.prob.MfRhoDeriv
self._rho = self.survey.prob.rho
self._mu = self.survey.prob.mui
self._aveF2CCV = self.survey.prob.mesh.aveF2CCV
self._aveE2CCV = self.survey.prob.mesh.aveE2CCV
self._nC = self.survey.prob.mesh.nC
def _GLoc(self, fieldType):
if fieldType in ["h", "hSecondary", "hPrimary"]:
return "E"
elif fieldType in ["j", "jSecondary", "jPrimary"]:
return "F"
elif (fieldType == "e") or (fieldType == "b"):
return "CCV"
else:
raise Exception("Field type must be e, b, h, j")
def _hPrimary(self, hSolution, srcList):
"""
Primary magnetic field from source
:param numpy.ndarray hSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: primary magnetic field as defined by the sources
"""
hPrimary = np.zeros_like(hSolution, dtype=complex)
for i, src in enumerate(srcList):
hp = src.hPrimary(self.prob)
hPrimary[:, i] = hPrimary[:, i] + hp
return hPrimary
def _hSecondary(self, hSolution, srcList):
"""
Secondary magnetic field is the thing we solved for
:param numpy.ndarray hSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: secondary magnetic field
"""
return hSolution
def _hDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Partial derivative of the total magnetic field with respect to the
thing we solved for.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the magnetic field with respect
to the field we solved for with a vector
"""
return Identity() * du_dm_v
def _hDeriv_m(self, src, v, adjoint=False):
"""
Partial derivative of the total magnetic field with respect to the
inversion model. Here, we assume that the primary does not depend
on the model. Note that this also includes derivative contributions
from the sources.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: SimPEG.Utils.Zero
:return: product of the magnetic field derivative with respect to the
inversion model with a vector
"""
return src.hPrimaryDeriv(self.prob, v, adjoint)
def _jPrimary(self, hSolution, srcList):
"""
Primary current density from source
:param numpy.ndarray hSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: primary current density as defined by the sources
"""
jPrimary = np.zeros(
[self._edgeCurl.shape[0], hSolution.shape[1]], dtype=complex
)
for i, src in enumerate(srcList):
jp = src.jPrimary(self.prob)
jPrimary[:, i] = jPrimary[:, i] + jp
return jPrimary
def _jSecondary(self, hSolution, srcList):
"""
Secondary current density from hSolution
:param numpy.ndarray hSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: secondary current density
"""
j = self._edgeCurl * hSolution
for i, src in enumerate(srcList):
s_e = src.s_e(self.prob)
j[:, i] = j[:, i] - s_e
return j
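# Discrete Ampere law in the (h, j) formulation: j = C h - s_e, so the
# secondary current density follows directly from the solved field.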
def _jDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Derivative of the current density with respect to the thing we solved
for
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the current density with respect
to the field we solved for with a vector
"""
if not adjoint:
return self._edgeCurl * du_dm_v
elif adjoint:
return self._edgeCurl.T * du_dm_v
def _jDeriv_m(self, src, v, adjoint=False):
"""
Derivative of the current density with respect to the inversion model.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the current density derivative with respect to the
inversion model with a vector
"""
return -src.s_eDeriv(self.prob, v, adjoint) + src.jPrimaryDeriv(
self.prob, v, adjoint
)
def _e(self, hSolution, srcList):
"""
Electric field from hSolution
:param numpy.ndarray hSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: electric field
"""
n = int(self._aveF2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
return VI * (self._aveF2CCV * (self._MfRho * self._j(hSolution, srcList)))
def _eDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Derivative of the electric field with respect to the thing we solved
for
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the electric field with respect
to the field we solved for with a vector
"""
n = int(self._aveF2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
if adjoint:
return self._edgeCurl.T * (
self._MfRho.T * (self._aveF2CCV.T * (VI.T * du_dm_v))
)
return VI * (self._aveF2CCV * (self._MfRho * self._edgeCurl * du_dm_v))
def _eDeriv_m(self, src, v, adjoint=False):
"""
Derivative of the electric field with respect to the inversion model.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the electric field derivative with respect to the
inversion model with a vector
"""
hSolution = Utils.mkvc(self[src, "hSolution"])
n = int(self._aveF2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
s_e = src.s_e(self.prob)
if adjoint:
w = self._aveF2CCV.T * (VI.T * v)
return (
self._MfRhoDeriv(self._edgeCurl * hSolution).T * w
- self._MfRhoDeriv(s_e).T * w
+ src.ePrimaryDeriv(self.prob, v, adjoint)
)
return VI * (
self._aveF2CCV
* (
self._MfRhoDeriv(self._edgeCurl * hSolution) * v
- self._MfRhoDeriv(s_e) * v
)
) + src.ePrimaryDeriv(self.prob, v, adjoint)
def _b(self, hSolution, srcList):
"""
Magnetic flux density from hSolution
:param numpy.ndarray hSolution: field we solved for
:param list srcList: list of sources
:rtype: numpy.ndarray
:return: magnetic flux density
"""
h = self._h(hSolution, srcList)
n = int(self._aveE2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
return VI * (self._aveE2CCV * (self._MeMu * h))
def _bDeriv_u(self, src, du_dm_v, adjoint=False):
"""
Derivative of the magnetic flux density with respect to the thing we
solved for
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray du_dm_v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the derivative of the magnetic flux density with
respect to the field we solved for with a vector
"""
n = int(self._aveE2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
if adjoint:
return self._MeMu.T * (self._aveE2CCV.T * (VI.T * du_dm_v))
return VI * (self._aveE2CCV * (self._MeMu * du_dm_v))
def _bDeriv_mu(self, src, v, adjoint=False):
h = self[src, "h"]
n = int(self._aveE2CCV.shape[0] / self._nC) # number of components
VI = sdiag(np.kron(np.ones(n), 1.0 / self.prob.mesh.vol))
MeMuDeriv = self._MeMuDeriv(h)
if adjoint:
return MeMuDeriv.T * (self._aveE2CCV.T * (VI.T * v))
return VI * (self._aveE2CCV * (MeMuDeriv * v))
def _bDeriv_m(self, src, v, adjoint=False):
"""
Derivative of the magnetic flux density with respect to the inversion
model.
:param SimPEG.EM.FDEM.SrcFDEM.BaseFDEMSrc src: source
:param numpy.ndarray v: vector to take product with
:param bool adjoint: adjoint?
:rtype: numpy.ndarray
:return: product of the magnetic flux density derivative with respect
to the inversion model with a vector
"""
return src.bPrimaryDeriv(self.prob, v, adjoint) + self._bDeriv_mu(
src, v, adjoint
)
| 36.684377 | 88 | 0.586244 | 7,241 | 57,998 | 4.60116 | 0.035078 | 0.048624 | 0.028874 | 0.035537 | 0.882522 | 0.859621 | 0.835519 | 0.79518 | 0.771888 | 0.752379 | 0 | 0.005925 | 0.310373 | 57,998 | 1,580 | 89 | 36.707595 | 0.827062 | 0.385306 | 0 | 0.502865 | 0 | 0 | 0.05207 | 0 | 0 | 0 | 0 | 0.000633 | 0 | 1 | 0.118911 | false | 0 | 0.008596 | 0.002865 | 0.310888 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5336a59c6050f38d2160cb522400e85aae92a3b7 | 1,854 | py | Python | tests/apis/test_iterjsonlines.py | tavaresrodrigo/kopf | 97e1c7a926705a79dabce2931e96a924252b61df | [
"MIT"
] | 855 | 2020-08-19T09:40:38.000Z | 2022-03-31T19:13:29.000Z | tests/apis/test_iterjsonlines.py | tavaresrodrigo/kopf | 97e1c7a926705a79dabce2931e96a924252b61df | [
"MIT"
] | 715 | 2019-12-23T14:17:35.000Z | 2022-03-30T20:54:45.000Z | tests/apis/test_iterjsonlines.py | tavaresrodrigo/kopf | 97e1c7a926705a79dabce2931e96a924252b61df | [
"MIT"
] | 97 | 2019-04-25T09:32:54.000Z | 2022-03-30T10:15:30.000Z | import asynctest
from kopf._cogs.clients.api import iter_jsonlines
async def test_empty_content():
async def iter_chunked(n: int):
if False: # to make this function a generator
yield b''
content = asynctest.Mock(iter_chunked=iter_chunked)
lines = []
async for line in iter_jsonlines(content):
lines.append(line)
assert lines == []
async def test_empty_chunk():
async def iter_chunked(n: int):
yield b''
content = asynctest.Mock(iter_chunked=iter_chunked)
lines = []
async for line in iter_jsonlines(content):
lines.append(line)
assert lines == []
async def test_one_chunk_one_line():
async def iter_chunked(n: int):
yield b'hello'
content = asynctest.Mock(iter_chunked=iter_chunked)
lines = []
async for line in iter_jsonlines(content):
lines.append(line)
assert lines == [b'hello']
async def test_one_chunk_two_lines():
async def iter_chunked(n: int):
yield b'hello\nworld'
content = asynctest.Mock(iter_chunked=iter_chunked)
lines = []
async for line in iter_jsonlines(content):
lines.append(line)
assert lines == [b'hello', b'world']
async def test_one_chunk_empty_lines():
async def iter_chunked(n: int):
yield b'\n\nhello\n\nworld\n\n'
content = asynctest.Mock(iter_chunked=iter_chunked)
lines = []
async for line in iter_jsonlines(content):
lines.append(line)
assert lines == [b'hello', b'world']
async def test_few_chunks_split():
async def iter_chunked(n: int):
yield b'\n\nhell'
yield b'o\n\nwor'
yield b'ld\n\n'
content = asynctest.Mock(iter_chunked=iter_chunked)
lines = []
async for line in iter_jsonlines(content):
lines.append(line)
assert lines == [b'hello', b'world']
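# A minimal reference sketch of the behaviour the tests above pin down:
# buffer incoming byte chunks, split on b'\n', skip empty lines, and flush
# the remainder at end of stream. Illustrative only; this is not kopf's
# actual iter_jsonlines implementation.
async def iter_lines_sketch(content, chunk_size=32768):
    buffer = b''
    async for chunk in content.iter_chunked(chunk_size):
        buffer += chunk
        # split() always returns at least one element, so this keeps the
        # trailing partial line in the buffer
        *complete, buffer = buffer.split(b'\n')
        for line in complete:
            if line:
                yield line
    if buffer:
        yield buffer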
| 23.468354 | 55 | 0.653182 | 258 | 1,854 | 4.51938 | 0.186047 | 0.169811 | 0.06175 | 0.09777 | 0.830189 | 0.799314 | 0.779588 | 0.779588 | 0.759863 | 0.641509 | 0 | 0 | 0.238403 | 1,854 | 78 | 56 | 23.769231 | 0.825779 | 0.017799 | 0 | 0.698113 | 0 | 0 | 0.052776 | 0.012095 | 0 | 0 | 0 | 0 | 0.113208 | 1 | 0 | false | 0 | 0.037736 | 0 | 0.037736 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
53593a6b103512752bd4b81ea59e314db5646b57 | 91 | py | Python | pypingcli/commands/__init__.py | tameeshB/pyPingCLI | 070edab79b24c257a8512b2a1410d12583f96d69 | [
"MIT"
] | 1 | 2020-11-09T16:49:51.000Z | 2020-11-09T16:49:51.000Z | pypingcli/commands/__init__.py | tameeshB/pyPingCLI | 070edab79b24c257a8512b2a1410d12583f96d69 | [
"MIT"
] | 7 | 2018-10-01T01:13:59.000Z | 2018-10-03T16:44:34.000Z | pypingcli/commands/__init__.py | tameeshB/pyPingCLI | 070edab79b24c257a8512b2a1410d12583f96d69 | [
"MIT"
] | 6 | 2018-10-01T10:09:16.000Z | 2020-09-16T06:59:16.000Z | from .start import *
from .connectIP import *
from .setUser import *
from .prompts import * | 22.75 | 24 | 0.747253 | 12 | 91 | 5.666667 | 0.5 | 0.441176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164835 | 91 | 4 | 25 | 22.75 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
72651d6aba4b024f267c256a2457141296766243 | 73 | py | Python | lib/ctyEnge/SuggestionsAndRecommendations/__init__.py | JoshOrndorff/snippets | ef06e03de09897014f88d89a84b597aabde7edaa | [
"Unlicense"
] | null | null | null | lib/ctyEnge/SuggestionsAndRecommendations/__init__.py | JoshOrndorff/snippets | ef06e03de09897014f88d89a84b597aabde7edaa | [
"Unlicense"
] | null | null | null | lib/ctyEnge/SuggestionsAndRecommendations/__init__.py | JoshOrndorff/snippets | ef06e03de09897014f88d89a84b597aabde7edaa | [
"Unlicense"
] | null | null | null | from .SuggestionsAndRecommendations import SuggestionsAndRecommendations
| 36.5 | 72 | 0.931507 | 4 | 73 | 17 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054795 | 73 | 1 | 73 | 73 | 0.985507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
728e14d0cb7e3e55c9800e3d196655603249ce46 | 57,600 | py | Python | tests/test_search.py | nazrulworld/fhirpath | 3b819870c57a0befcac18916a4d03b64c0e202ca | [
"Apache-2.0"
] | 25 | 2019-05-14T13:35:32.000Z | 2022-02-21T23:03:35.000Z | tests/test_search.py | nazrulworld/fhirpath | 3b819870c57a0befcac18916a4d03b64c0e202ca | [
"Apache-2.0"
] | 29 | 2020-02-14T08:14:02.000Z | 2021-02-23T20:14:42.000Z | tests/test_search.py | nazrulworld/fhirpath | 3b819870c57a0befcac18916a4d03b64c0e202ca | [
"Apache-2.0"
] | 4 | 2020-06-30T08:05:54.000Z | 2021-08-09T19:10:35.000Z | # _*_ coding: utf-8 _*_
import re
from urllib.parse import urlencode
from pytest import raises
import pytest
from fhirpath import Q_
from fhirpath.enums import FHIR_VERSION
from fhirpath.enums import OPERATOR
from fhirpath.enums import MatchType
from fhirpath.enums import SortOrderType
from fhirpath.interfaces.fql import IGroupTerm
from fhirpath.search import Search
from fhirpath.search import AsyncSearch
from fhirpath.search import SearchContext
from fhirpath.exceptions import ValidationError
from fhir.resources.patient import Patient
from fhir.resources.observation import Observation
from fhir.resources.practitioner import Practitioner
from fhir.resources.medicationrequest import MedicationRequest
__author__ = "Md Nazrul Islam<email2nazrul@gmail.com>"
def test_params_definition(engine):
""" """
definition = SearchContext(
engine=engine, resource_type="Organization"
).get_parameters_definition(FHIR_VERSION.R4)
assert definition[0].name.expression == "Organization.name"
def test_parse_query_string():
""" """
params = (
("status:not", "completed"),
("status", "active"),
("code", "http://acme.org/conditions/codes|ha125"),
("probability", "gt0.8"),
("date", "ge2010-01-01"),
("date", "le2011-12-31"),
("value-quantity", "5.4|http://unitsofmeasure.org|mg"),
("definition:below", "http:http://acme.com/some-profile"),
("code", "http://loinc.org|1234-5&subject.name=peter"),
("_sort", "status,-date,category"),
("_count", "1"),
("medication.ingredient-code", "abc"),
("_include", "Observation:related-target"),
)
result = Search.parse_query_string(urlencode(params))
assert len(result.getall("code")) == 2
assert len(result.getall("date")) == 2
assert len(result.getall("medication.ingredient-code")) == 1
def test_prepare_params(engine):
""" """
context = SearchContext(engine, "Task")
params = (
("status:not", "completed"),
("status", "active"),
("code", "http://acme.org/conditions/codes|ha125"),
("probability", "gt0.8"),
("date", "ge2010-01-01"),
("date", "le2011-12-31"),
("code", "http://loinc.org|1234-5&subject.name=peter"),
("_sort", "status,-date,category"),
("_count", "1"),
)
fhir_search = Search(context, params=params)
# XXX: arguably this should be 2 if the ':not' modifier were normalized into the same key; revisit
assert len(fhir_search.search_params.getall("status")) == 1
# should be gone from normal search params
assert len(fhir_search.search_params.getall("_sort", [])) == 0
assert "_count" not in fhir_search.search_params
def test_parameter_normalization(engine):
""" """
context = SearchContext(engine, "Task")
# TODO we need the [0] because normalize_param returns a list to handle the
# case where we search on several resource types
path_, value_pack, modifier = context.normalize_param("status:not", ["completed"])[
0
]
# single valued
assert isinstance(value_pack, tuple)
# OPERATOR
assert value_pack[0] == "eq"
# actual value
assert value_pack[1] == "completed"
field_name, value_pack, modifier = context.normalize_param(
"authored-on", ["ge2010-01-01", "le2011-12-31"]
)[0]
assert modifier is None
assert isinstance(value_pack, list)
assert len(value_pack) == 2
# OPERATOR
assert value_pack[0][0] == "ge"
# actual value
assert value_pack[0][1] == "2010-01-01"
# test with escape comma(,)
field_name, value_pack, modifier = context.normalize_param(
"code",
[
"http://acme.org/conditions/codes|ha125",
"http://loinc.org|1\\,234-5&subject.name=peter",
],
)[0]
assert isinstance(value_pack, list)
# OPERATOR
assert value_pack[1][0] == "eq"
# actual value
assert value_pack[1][1] == "http://loinc.org|1\\,234-5&subject.name=peter"
# Test IN Operator
field_name, value_pack, modifier = context.normalize_param(
"_lastUpdated", ["le2019-09-12T13:20:44+0000,2018-09-12T13:20:44+0000"]
)[0]
assert isinstance(value_pack, tuple)
operator_, values = value_pack
assert operator_ is None
assert len(values) == 2
assert values[0][1] == "2019-09-12T13:20:44+0000"
# Test AND+IN Operator
field_name, value_pack, modifier = context.normalize_param(
"_id", ["567890", "998765555554678,45555555555567"]
)[0]
assert isinstance(value_pack, list)
assert isinstance(value_pack[0], tuple)
assert isinstance(value_pack[1][1], list)
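# Hedged sketch of the comma handling exercised above: multi-values are split
# on commas, but a backslash-escaped comma (as in '1\,234-5') must survive
# the split. A negative-lookbehind regex is one way to express that;
# fhirpath's real implementation may differ.
def _split_unescaped_commas(raw):
    return re.split(r"(?<!\\),", raw)

assert _split_unescaped_commas("a,b,c") == ["a", "b", "c"]
assert _split_unescaped_commas("1\\,234-5") == ["1\\,234-5"]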
def test_composite_parameter_normalization(engine):
""" """
context = SearchContext(engine, "ChargeItemDefinition")
normalize_value = context.normalize_param("context-type-quantity", ["HL7$99"])
assert len(normalize_value) == 2
assert normalize_value[0][0].path.endswith(".code")
# value.as(Quantity) | value.as(Range)
assert len(normalize_value[1]) == 2
assert normalize_value[1][1][0].path.endswith(".valueRange") is True
context = SearchContext(engine, "Observation")
normalize_value = context.normalize_param(
"code-value-quantity", ["http://loinc.org|11557-6$6.2"]
)
assert isinstance(normalize_value[1], tuple)
def test_parameter_normalization_with_space_as(engine):
""" """
context = SearchContext(engine, "MedicationRequest")
path_, value_pack, _ = context.normalize_param(
"code", ["http://acme.org/conditions/codes|ha125"]
)[0]
# single valued
assert isinstance(value_pack, tuple)
assert path_.path == "MedicationRequest.medicationCodeableConcept"
def test_parameter_normalization_empty_value(engine):
context = SearchContext(engine, "MedicationRequest")
# normalize a param with an empty value: it should be ignored
params = context.normalize_param("code", [""])
assert len(params) == 0
def test_parameter_normalization_prefix(engine):
""" """
# number
context = SearchContext(engine, "MolecularSequence")
_, value_pack, _ = context.normalize_param("variant-end", ["gt1"])[0]
assert value_pack == ("gt", "1")
# quantity
context = SearchContext(engine, "Substance")
_, value_pack, _ = context.normalize_param("quantity", ["ne1"])[0]
assert value_pack == ("ne", "1")
# date
context = SearchContext(engine, "Patient")
_, value_pack, _ = context.normalize_param("death-date", ["lt1980"])[0]
assert value_pack == ("lt", "1980")
# string
_, value_pack, _ = context.normalize_param("name", ["leslie"])[0]
assert value_pack == ("eq", "leslie")
# token
_, value_pack, _ = context.normalize_param("language", ["nepalese"])[0]
assert value_pack == ("eq", "nepalese")
# reference
_, value_pack, _ = context.normalize_param("organization", ["necker"])[0]
assert value_pack == ("eq", "necker")
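# Illustrative sketch (assumed, not fhirpath's verbatim code) of peeling a
# FHIR comparator prefix off a raw value. As the tests above show, this only
# applies to number/date/quantity parameters: a plain string such as 'leslie'
# must not be split into ('le', 'slie'), so callers gate on parameter type.
FHIR_PREFIXES = ("eq", "ne", "gt", "lt", "ge", "le", "sa", "eb", "ap")

def _split_prefix(raw):
    if raw[:2] in FHIR_PREFIXES and len(raw) > 2:
        return raw[:2], raw[2:]
    return "eq", raw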
def test_create_term(engine):
""" """
context = SearchContext(engine, "Task")
params = (
("status:not", "completed"),
("status", "active"),
("code", "http://acme.org/conditions/codes|ha125"),
("probability", "gt0.8"),
("authored-on", "ge2019-07-17T19:32:59.991658"),
("authored-on", "le2013-01-17T19:32:59.991658"),
("code", "http://loinc.org|1\\,234-5&subject.name=peter"),
("_sort", "status,-date,category"),
("_count", "1"),
)
fhir_search = Search(context, params=params)
path_, value_pack, modifier = context.normalize_param("status:not", ["completed"])[
0
]
term = fhir_search.create_term(path_, value_pack, modifier)
term.finalize(fhir_search.context.engine)
assert term.unary_operator == OPERATOR.neg
assert term.arithmetic_operator == OPERATOR.and_
assert term.value.value == "completed"
path_, value_pack, modifier = context.normalize_param(
"authored-on", ["ge2019-07-17T19:32:59.991658", "le2013-01-17T19:32:59.991658"]
)[0]
term = fhir_search.create_term(path_, value_pack, modifier)
term.finalize(fhir_search.context.engine)
# Now the term should be transformed into a group of terms,
# since authored-on has multiple values
assert IGroupTerm.providedBy(term) is True
assert len(term.terms) == 2
assert term.match_operator == MatchType.ANY
assert term.arithmetic_operator is None
def test_create_codeableconcept_term(engine):
""" """
context = SearchContext(engine, "Task")
params = (
("code", "http://acme.org/conditions/codes|ha125"),
("code", "http://terminology.hl7.org/CodeSystem/task-performer-type|"),
("code", "|performer"),
("code:text", "Performer"),
("code:not", "http://loinc.org|1\\,234-5&subject.name=peter"),
)
fhir_search = Search(context, params=params)
path_, value_pack, modifier = context.normalize_param(
"code",
[
"http://acme.org/conditions/codes|ha125",
"http://terminology.hl7.org/CodeSystem/task-performer-type|",
"|performer",
],
)[0]
terms = fhir_search.create_codeableconcept_term(path_, value_pack, modifier)
[t.finalize(fhir_search.context.engine) for t in terms]
assert IGroupTerm.providedBy(terms[0]) is True
code1_group = terms[0]
assert code1_group.terms[0].path.path == "Task.code.coding.system"
assert code1_group.terms[1].path.path == "Task.code.coding.code"
assert IGroupTerm.providedBy(terms[1]) is True
assert IGroupTerm.providedBy(terms[2]) is True
code2_group = terms[1]
assert code2_group.terms[0].path.path == "Task.code.coding.system"
code3_group = terms[2]
assert code3_group.terms[0].path.path == "Task.code.coding.code"
path_, value_pack, modifier = context.normalize_param("code:text", ["Performer"])[0]
term = fhir_search.create_codeableconcept_term(path_, value_pack, modifier)
term.finalize(fhir_search.context.engine)
assert term.terms[0].path.path == "Task.code.text"
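# Hedged sketch of splitting a token value 'system|code' as the three cases
# above exercise: 'system|code', 'system|' (system only) and '|code' (code
# only, no system). Not fhirpath's verbatim code.
def _split_token(raw):
    if "|" not in raw:
        return None, raw
    system, _, code = raw.partition("|")
    return system or None, code or None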
def test_create_identifier_term(engine):
""" """
context = SearchContext(engine, "Task")
params = (
("identifier", "http://example.com/fhir/identifier/mrn|123456"),
("identifier", "http://terminology.hl7.org/CodeSystem/task-performer-type|"),
("identifier", "|performer"),
("identifier:text", "Performer"),
("identifier:not", "http://example.com/fhir/identifier/mrn|123456"),
)
fhir_search = Search(context, params=params)
path_, value_pack, modifier = context.normalize_param(
"identifier",
[
"http://example.com/fhir/identifier/mrn|123456",
"http://terminology.hl7.org/CodeSystem/task-performer-type|",
"|performer",
],
)[0]
terms = fhir_search.create_identifier_term(path_, value_pack, modifier)
[t.finalize(fhir_search.context.engine) for t in terms]
assert IGroupTerm.providedBy(terms[0]) is True
identifier_group = terms[0]
assert identifier_group.terms[0].path.path == "Task.identifier.system"
assert identifier_group.terms[1].path.path == "Task.identifier.value"
assert terms[1].terms[0].path.path == "Task.identifier.system"
assert terms[2].terms[0].path.path == "Task.identifier.value"
path_, value_pack, modifier = context.normalize_param(
"identifier:text", ["Performer"]
)[0]
term = fhir_search.create_identifier_term(path_, value_pack, modifier)
term.finalize(fhir_search.context.engine)
assert term.terms[0].path.path == "Task.identifier.type.text"
path_, value_pack, modifier = context.normalize_param(
"identifier:not", ["http://example.com/fhir/identifier/mrn|123456"]
)[0]
term = fhir_search.create_identifier_term(path_, value_pack, modifier)
term.finalize(fhir_search.context.engine)
assert term.terms[0].unary_operator == OPERATOR.neg
def test_create_quantity_term(engine):
""" """
context = SearchContext(engine, "ChargeItem")
params = (
("quantity", "5.4|http://unitsofmeasure.org|mg"),
("quantity", "lt5.1||mg"),
("quantity", "5.40e-3"),
("quantity:not", "ap5.4|http://unitsofmeasure.org|mg"),
)
fhir_search = Search(context, params=params)
path_, value_pack, modifier = context.normalize_param(
"quantity", ["5.4|http://unitsofmeasure.org|mg", "lt5.1||mg", "5.40e-3"]
)[0]
terms = fhir_search.create_quantity_term(path_, value_pack, modifier)
[t.finalize(fhir_search.context.engine) for t in terms]
assert IGroupTerm.providedBy(terms[0]) is True
quantity_group = terms[0]
assert quantity_group.terms[0].path.path == "ChargeItem.quantity.value"
assert quantity_group.terms[1].path.path == "ChargeItem.quantity.system"
assert quantity_group.terms[2].path.path == "ChargeItem.quantity.code"
assert terms[1].terms[1].path.path == "ChargeItem.quantity.unit"
assert float(terms[2].value.value) == float("5.40e-3")
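# Hedged sketch of the 'value|system|code' quantity syntax used above; a bare
# value like '5.40e-3' carries no system/code, and 'lt5.1||mg' has an empty
# system but a unit code (the comparator prefix stays attached to the value
# here). Illustrative only.
def _split_quantity(raw):
    parts = raw.split("|")
    value = parts[0]
    system = parts[1] if len(parts) > 1 and parts[1] else None
    code = parts[2] if len(parts) > 2 and parts[2] else None
    return value, system, code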
def test_sa_term(engine):
""" """
context = SearchContext(engine, "Organization")
params = (("_id:below", "fo"),)
fhir_search = Search(context, params=params)
path_, value_pack, modifier = context.normalize_param("_id:below", ["fo"])[0]
term = fhir_search.create_term(path_, value_pack, modifier)
term.finalize(fhir_search.context.engine)
assert term.comparison_operator == OPERATOR.sa
def test_sort_attachment(engine):
""" """
context = SearchContext(engine, "Task")
params = (("status", "active"), ("_sort", "status,-modified"), ("_count", "100"))
fhir_search = Search(context, params=params)
builder = Q_(context.resource_types, context.engine)
builder = fhir_search.attach_sort_terms(builder)
assert len(builder._sort) == 2
assert builder._sort[1].order == SortOrderType.DESC
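# Hedged sketch of _sort parsing as exercised above: fields are
# comma-separated and a leading '-' flags descending order (SortOrderType.ASC
# assumed as the ascending member; not fhirpath's verbatim code).
def _parse_sort(raw):
    for field in raw.split(","):
        if field.startswith("-"):
            yield field[1:], SortOrderType.DESC
        else:
            yield field, SortOrderType.ASC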
def test_limit_attachment(engine):
""" """
context = SearchContext(engine, "Task")
params = (("status", "active"), ("_sort", "status,-modified"), ("_count", "100"))
fhir_search = Search(context, params=params)
builder = Q_(context.resource_types, context.engine)
builder = fhir_search.attach_limit_terms(builder)
assert builder._limit.limit == 100
assert builder._limit.offset == 0
params = (
("status", "active"),
("_sort", "status,-modified"),
("_count", "100"),
("page", "4"),
)
fhir_search = Search(context, params=params)
builder = Q_(context.resource_types, context.engine)
builder = fhir_search.attach_limit_terms(builder)
assert builder._limit.offset == 300
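# The paging arithmetic the assertions above imply (assumed, not quoted from
# fhirpath): offset = (page - 1) * _count, so page=4 with _count=100 -> 300.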
def test_build_query_from_search_params(engine):
""" """
context = SearchContext(engine, "ChargeItem")
params = (
("subject", "Patient/PAT001"),
("quantity", "5.4|http://unitsofmeasure.org|mg"),
("quantity", "lt5.9||mg"),
("quantity", "5.40e-3"),
("quantity:not", "gt5.4|http://unitsofmeasure.org|mg"),
)
fhir_search = Search(context, params=params)
builder = Q_(fhir_search.context.resource_types, fhir_search.context.engine)
terms_container = list()
for param_name in set(fhir_search.search_params):
raw_value = list(fhir_search.search_params.getall(param_name, []))
normalized_data = context.normalize_param(param_name, raw_value)
fhir_search.add_term(normalized_data, terms_container)
builder = builder.where(*terms_container)
builder.finalize()
query = builder.get_query()
assert len(query.get_element()) == 1
# 4 + 1
assert len(query.get_where()) == 5
def test_build_result(engine):
""" """
search_context = SearchContext(engine, "Organization")
params = (
("active", "true"),
("_lastUpdated", "2010-05-28T05:35:56+00:00"),
("_profile", "http://hl7.org/fhir/Organization"),
("identifier", "urn:oid:2.16.528.1|91654"),
("type", "http://hl7.org/fhir/organization-type|prov"),
("address-postalcode", "9100 AA"),
("address", "Den Burg"),
("_sort", "_id"),
("_count", "100"),
("page", "4"),
)
fhir_search = Search(search_context, params=params)
query_result = fhir_search.build()
assert query_result.__class__.__name__ == "QueryResult"
def test_search_result(es_data, engine):
""" """
search_context = SearchContext(engine, "Organization")
params = (
("active", "true"),
("_lastUpdated", "2010-05-28T05:35:56+00:00"),
("_profile", "http://hl7.org/fhir/Organization"),
("identifier", "urn:oid:2.16.528.1|91654"),
("type", "http://hl7.org/fhir/organization-type|prov"),
("address-postalcode", "9100 AA"),
("address", "Den Burg"),
)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
def test_search_result_as_json(es_data, engine):
""" """
search_context = SearchContext(engine, "Organization")
params = (
("active", "true"),
("_lastUpdated", "2010-05-28T05:35:56+00:00"),
("_profile", "http://hl7.org/fhir/Organization"),
("identifier", "urn:oid:2.16.528.1|91654"),
("type", "http://hl7.org/fhir/organization-type|prov"),
("address-postalcode", "9100 AA"),
("address", "Den Burg"),
)
fhir_search = Search(search_context, params=params)
bundle = fhir_search(as_json=True)
assert bundle["resourceType"] == "Bundle"
assert bundle["total"] == 1
assert isinstance(bundle["entry"][0], dict)
def test_search_missing_modifier(es_data, engine):
""" """
search_context = SearchContext(engine, "Organization")
params = (("active:missing", "false"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert len(bundle.entry) == 1
def test_in_search(es_data, engine):
""" """
search_context = SearchContext(engine, "Organization")
params = (
("active", "true"),
("address", "Den Burg,Fake Lane"),
("_profile", "http://hl7.org/fhir/Organization,http://another"),
)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
def test_composite_param_search(es_data, engine):
""" """
search_context = SearchContext(engine, "DocumentReference")
params = (("relationship", "appends$ref1"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
search_context = SearchContext(engine, "Observation")
params = (("code-value-quantity", "http://loinc.org|718-7$7.2"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
def test_codeableconcept_with_not_modifier(es_data, engine):
""" """
# test with a single code
search_context = SearchContext(engine, "ChargeItem")
params = (("code:not", "http://snomed.info/sct|01510"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
params = (("code:not", "http://snomed.info/sct|01510,http://lonic.org|1510-9"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
def test_search_result_with_below_modifier(es_data, engine):
""" """
search_context = SearchContext(engine, "Organization")
params = (("name:below", "Burge"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
# a slightly more complex case
search_context = SearchContext(engine, "Patient")
params = (("identifier:below", "|2403"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
params = (("given:below", "Eel,Eve"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
params = (("gender:below", "ma,naz"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
def test_search_result_with_above_modifier(es_data, engine):
""" """
# a slightly more complex case
search_context = SearchContext(engine, "Patient")
params = (("identifier:above", "|0002"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
search_context = SearchContext(engine, "Organization")
params = (("name:above", "Medical Center"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
def test_search_result_with_contains_modifier(es_data, engine):
""" """
# a slightly more complex case
search_context = SearchContext(engine, "Patient")
params = (("identifier:contains", "|365"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
params = (("given:contains", "ect"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
search_context = SearchContext(engine, "Organization")
params = (("name:contains", "Medical"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
@pytest.mark.asyncio
async def test_async_search_result_with_contains_modifier(es_data, async_engine):
""" """
# a slightly more complex case
search_context = SearchContext(async_engine, "Patient")
params = (("identifier:contains", "|365"),)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 1
params = (("given:contains", "ect"),)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 1
search_context = SearchContext(async_engine, "Organization")
params = (("name:contains", "Medical"),)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 1
def test_search_result_with_exact_modifier(es_data, engine):
""" """
search_context = SearchContext(engine, "Patient")
params = (("family:exact", "Saint"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
params = (("family:exact", "Other"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
params = (("family:exact", "Sain"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
params = (("family:exact", "saint"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
params = (("family:exact", "Sàint"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
def test_search_identifier_modifier(es_data, engine):
search_context = SearchContext(engine, "Observation")
params = (("subject:identifier", "240365-0002"),)
bundle = Search(search_context, params=params)()
assert bundle.total == 1
params = (("subject:identifier", "123465789"),)
bundle = Search(search_context, params=params)()
assert bundle.total == 0
# [param-ref]:identifier=[system]|[value]: the value of [value] matches a
# reference.identifier.value, and the value of [system] matches the system
# property of the Identifier
params = (("subject:identifier", "CPR|240365-0002",),)
bundle = Search(search_context, params=params)()
assert bundle.total == 1
params = (("subject:identifier", "CPR|123456789",),)
bundle = Search(search_context, params=params)()
assert bundle.total == 0
# TODO: not working yet, when omitting the system, the search is applied only
# on the value (it should also filter by empty system)
# [param-ref]:identifier=|[code]: the value of [code] matches a
# reference.identifier.value, and the Identifier has no system property
params = (("subject:identifier", "|240365-0002"),)
bundle = Search(search_context, params=params)()
assert bundle.total == 1
params = (("subject:identifier", "|123456789"),)
bundle = Search(search_context, params=params)()
assert bundle.total == 0
# [param-ref]:identifier=[system]|: any element where the value of [system] matches
# the system property of the Identifier
params = (("subject:identifier", "CPR|",),)
bundle = Search(search_context, params=params)()
assert bundle.total == 1
params = (("subject:identifier", "other|"),)
bundle = Search(search_context, params=params)()
assert bundle.total == 0
def test_issue9_multiple_negative_terms_not_working(es_data, engine):
"""https://github.com/nazrulworld/fhirpath/issues/9"""
search_context = SearchContext(engine, "Task")
params = (("status:not", "ready,cancelled"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
@pytest.mark.asyncio
async def test_async_issue9_multiple_negative_terms_not_working(es_data, async_engine):
"""https://github.com/nazrulworld/fhirpath/issues/9"""
search_context = SearchContext(async_engine, "Task")
params = (("status:not", "ready,cancelled"),)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 1
def test_search_negative_address(es_data, engine):
""" """
search_context = SearchContext(engine, "Organization")
params = (("address:not", "Den Burg"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
params = (("address-postalcode:not", "9105 PZ"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
params = (
("_profile:not", "urn:oid:002.160,urn:oid:002.260,http://hl7.org/fhir/Other",),
)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
def test_issue8_without_param(es_data, engine):
""" """
search_context = SearchContext(engine, "Organization")
fhir_search = Search(search_context)
bundle = fhir_search()
assert bundle.total == 1
def test_search_include(es_data, engine):
# typed _include
search_context = SearchContext(engine, "Observation")
params = (("_include", "Observation:subject:Patient"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
assert len(bundle.entry) == 2
assert isinstance(bundle.entry[0].resource, Observation)
assert isinstance(bundle.entry[1].resource, Patient)
# untyped _include
search_context = SearchContext(engine, "Observation")
params = (("_include", "Observation:subject"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
assert len(bundle.entry) == 2
assert isinstance(bundle.entry[0].resource, Observation)
assert isinstance(bundle.entry[1].resource, Patient)
# many types
search_context = SearchContext(engine, "Observation")
params = (
("_include", "Observation:subject:Patient"),
("_include", "Observation:subject:Location"),
)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
assert len(bundle.entry) == 2
assert isinstance(bundle.entry[0].resource, Observation)
assert isinstance(bundle.entry[1].resource, Patient)
# .where(resolve() is Resource) constraint
search_context = SearchContext(engine, "Observation")
params = (("_include", "Observation:patient"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
assert len(bundle.entry) == 2
# many references
search_context = SearchContext(engine, "Observation")
params = (
("_include", "Observation:subject:Patient"),
("_include", "Observation:performer"),
)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
assert len(bundle.entry) == 3
assert isinstance(bundle.entry[0].resource, Observation)
assert isinstance(bundle.entry[1].resource, Patient)
assert isinstance(bundle.entry[2].resource, Practitioner)
# bad syntax
search_context = SearchContext(engine, "Observation")
params = (("_include", "subject"),)
fhir_search = Search(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"bad _include param 'subject', "
"should be Resource:search_param[:target_type]"
),
):
fhir_search()
# bad searchparam
search_context = SearchContext(engine, "Observation")
params = (("_include", "Observation:category"),)
fhir_search = Search(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"search parameter Observation.category "
"must be of type 'reference', got token"
),
):
fhir_search()
# unknown searchparam
search_context = SearchContext(engine, "Observation")
params = (("_include", "Observation:unknown"),)
fhir_search = Search(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"No search definition is available for search "
"parameter ``unknown`` on Resource ``Observation``."
),
):
fhir_search()
# bad target
search_context = SearchContext(engine, "Observation")
params = (("_include", "Observation:subject:DocumentReference"),)
fhir_search = Search(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"the search param Observation.subject may refer "
"to Group, Device, Patient, Location"
", not to DocumentReference"
),
):
fhir_search()
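# Hedged sketch of the _include syntax validated above,
# Resource:search_param[:target_type]; illustrative only, not fhirpath's
# verbatim parser.
def _parse_include(raw):
    parts = raw.split(":")
    if len(parts) == 2:
        return parts[0], parts[1], None
    if len(parts) == 3:
        return parts[0], parts[1], parts[2]
    raise ValidationError(
        f"bad _include param '{raw}', "
        "should be Resource:search_param[:target_type]"
    )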
def test_search_has(es_data, engine):
# found
search_context = SearchContext(engine, "Patient")
params = (("_has:Observation:patient:code", "718-7"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
assert isinstance(bundle.entry[0].resource, Patient)
# not found
search_context = SearchContext(engine, "Patient")
params = (("_has:Observation:patient:code", "XXX-YYY"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
# bad syntax
search_context = SearchContext(engine, "Patient")
params = (("_has:Observation:patient", "718-7"),)
fhir_search = Search(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"bad _has param '_has:Observation:patient', "
"should be _has:Resource:ref_search_param:value_search_param=value"
),
):
fhir_search()
# bad searchparam
search_context = SearchContext(engine, "Patient")
params = (("_has:Observation:category:code", "something"),)
fhir_search = Search(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"search parameter Observation.category must be "
"of type 'reference', got token"
),
):
fhir_search()
# unknown searchparam
search_context = SearchContext(engine, "Patient")
params = (("_has:Observation:unknown:code", "something"),)
fhir_search = Search(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"No search definition is available for search "
"parameter ``unknown`` on Resource ``Observation``."
),
):
fhir_search()
# bad target
search_context = SearchContext(engine, "Patient")
params = (("_has:Observation:encounter:identifier", "something"),)
fhir_search = Search(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"invalid reference Observation.encounter (Encounter,EpisodeOfCare) "
"in the current search context (Patient)"
),
):
fhir_search()
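# Hedged sketch of the _has parameter shape checked above,
# _has:Resource:ref_search_param:value_search_param=value; illustrative only.
def _parse_has(param_name):
    parts = param_name.split(":")
    if len(parts) != 4 or parts[0] != "_has":
        raise ValidationError(
            f"bad _has param '{param_name}', should be "
            "_has:Resource:ref_search_param:value_search_param=value"
        )
    return parts[1], parts[2], parts[3]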
def test_search_revinclude(es_data, engine):
# untyped
search_context = SearchContext(engine, "Patient")
params = (("_revinclude", "Observation:subject"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
assert len(bundle.entry) == 2
assert isinstance(bundle.entry[0].resource, Patient)
assert isinstance(bundle.entry[1].resource, Observation)
# typed
search_context = SearchContext(engine, "Patient")
params = (("_revinclude", "Observation:subject:Patient"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
assert len(bundle.entry) == 2
assert isinstance(bundle.entry[0].resource, Patient)
assert isinstance(bundle.entry[1].resource, Observation)
# no results
search_context = SearchContext(engine, "Location")
params = (("_revinclude", "Observation:subject"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
assert len(bundle.entry) == 0
# double _revinclude
search_context = SearchContext(engine, "Patient")
params = (
("_revinclude", "Observation:subject"),
("_revinclude", "MedicationRequest:subject"),
)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
assert len(bundle.entry) == 3
assert isinstance(bundle.entry[0].resource, Patient)
assert isinstance(bundle.entry[1].resource, Observation)
assert isinstance(bundle.entry[2].resource, MedicationRequest)
# with _has
search_context = SearchContext(engine, "Patient")
params = (
("_has:Observation:patient:code", "718-7"),
("_revinclude", "Observation:subject"),
)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
assert len(bundle.entry) == 2
assert isinstance(bundle.entry[0].resource, Patient)
assert isinstance(bundle.entry[1].resource, Observation)
# bad syntax
search_context = SearchContext(engine, "Patient")
params = (("_revinclude", "subject"),)
fhir_search = Search(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"bad _revinclude param 'subject', should be "
"Resource:search_param[:target_type]"
),
):
fhir_search()
# bad syntax
search_context = SearchContext(engine, "Patient")
params = (("_revinclude", "Observation:subject:too:long"),)
fhir_search = Search(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"bad _revinclude param 'Observation:subject:too:long', "
"should be Resource:search_param[:target_type]"
),
):
fhir_search()
# bad searchparam
search_context = SearchContext(engine, "Patient")
params = (("_revinclude", "Observation:category"),)
fhir_search = Search(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"search parameter Observation.category must "
"be of type 'reference', got token"
),
):
fhir_search()
# unknown searchparam
search_context = SearchContext(engine, "Patient")
params = (("_revinclude", "Observation:unknown:code"),)
fhir_search = Search(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"No search definition is available for search "
"parameter ``unknown`` on Resource ``Observation``."
),
):
fhir_search()
# bad target
search_context = SearchContext(engine, "Patient")
params = (("_revinclude", "Observation:encounter:identifier"),)
fhir_search = Search(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"invalid reference Observation.encounter (Encounter,EpisodeOfCare) "
"in the current search context (Patient)"
),
):
fhir_search()
@pytest.mark.asyncio
async def test_async_search_revinclude(es_data, async_engine):
# untyped
search_context = SearchContext(async_engine, "Patient")
params = (("_revinclude", "Observation:subject"),)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 1
assert len(bundle.entry) == 2
assert isinstance(bundle.entry[0].resource, Patient)
assert isinstance(bundle.entry[1].resource, Observation)
# typed
search_context = SearchContext(async_engine, "Patient")
params = (("_revinclude", "Observation:subject:Patient"),)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 1
assert len(bundle.entry) == 2
assert isinstance(bundle.entry[0].resource, Patient)
assert isinstance(bundle.entry[1].resource, Observation)
# no results
search_context = SearchContext(async_engine, "Location")
params = (("_revinclude", "Observation:subject"),)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 0
assert len(bundle.entry) == 0
# double _revinclude
search_context = SearchContext(async_engine, "Patient")
params = (
("_revinclude", "Observation:subject"),
("_revinclude", "MedicationRequest:subject"),
)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 1
assert len(bundle.entry) == 3
assert isinstance(bundle.entry[0].resource, Patient)
assert isinstance(bundle.entry[1].resource, Observation)
assert isinstance(bundle.entry[2].resource, MedicationRequest)
# with _has
search_context = SearchContext(async_engine, "Patient")
params = (
("_has:Observation:patient:code", "718-7"),
("_revinclude", "Observation:subject"),
)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 1
assert len(bundle.entry) == 2
assert isinstance(bundle.entry[0].resource, Patient)
assert isinstance(bundle.entry[1].resource, Observation)
# bad syntax
search_context = SearchContext(async_engine, "Patient")
params = (("_revinclude", "subject"),)
fhir_search = AsyncSearch(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"bad _revinclude param 'subject', should be "
"Resource:search_param[:target_type]"
),
):
await fhir_search()
# bad syntax
search_context = SearchContext(async_engine, "Patient")
params = (("_revinclude", "Observation:subject:too:long"),)
fhir_search = AsyncSearch(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"bad _revinclude param 'Observation:subject:too:long', "
"should be Resource:search_param[:target_type]"
),
):
await fhir_search()
# bad searchparam
search_context = SearchContext(async_engine, "Patient")
params = (("_revinclude", "Observation:category"),)
fhir_search = AsyncSearch(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"search parameter Observation.category must "
"be of type 'reference', got token"
),
):
await fhir_search()
# unknown searchparam
search_context = SearchContext(async_engine, "Patient")
params = (("_revinclude", "Observation:unknown:code"),)
fhir_search = AsyncSearch(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"No search definition is available for search "
"parameter ``unknown`` on Resource ``Observation``."
),
):
await fhir_search()
# bad target
search_context = SearchContext(async_engine, "Patient")
params = (("_revinclude", "Observation:encounter:identifier"),)
fhir_search = AsyncSearch(search_context, params=params)
with raises(
ValidationError,
match=re.escape(
"invalid reference Observation.encounter (Encounter,EpisodeOfCare) "
"in the current search context (Patient)"
),
):
await fhir_search()
def test_search_fhirpath_reference_analyzer(es_data, engine):
"""References need to be indexed in a special way in order to be found."""
search_context = SearchContext(engine, "Observation")
# normal search by full reference
params = (("subject", "Patient/19c5245f-89a8-49f8-b244-666b32adb92e"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
# search by ID
params = (("subject", "19c5245f-89a8-49f8-b244-666b32adb92e"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
# search by (wrong) ID
params = (("subject", "29c5245f-89a8-49f8-b244-666b32adb92e"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
# search by resource_type (should not find anything)
params = (("subject", "Patient"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
# test negative: search by last part
params = (("subject:not", "19c5245f-89a8-49f8-b244-666b32adb92e"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
# test full URI with wrong last part
params = (("subject", "Patient/fake245f-89a8-49f8-b244-666b32adb92e"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
# test full URI with wrong first part
params = (("subject", "Device/19c5245f-89a8-49f8-b244-666b32adb92e"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
# search by resource_type as prefix
params = (("subject:below", "Patient"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
# search by ID as suffix
params = (("subject:above", "19c5245f-89a8-49f8-b244-666b32adb92e"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
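# Assumed indexing idea behind the tests above: a reference such as
# 'Patient/19c5...' must match as the full 'Type/id' string, as the bare id,
# by type prefix (:below) and by id suffix (:above), while a bare 'Patient'
# alone must not match. A sketch of terms one might index per reference;
# fhirpath's actual analyzer is more involved.
def _reference_terms(reference):
    terms = {reference}
    if "/" in reference:
        _resource_type, _, resource_id = reference.partition("/")
        terms.add(resource_id)
    return terms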
def test_searchparam_type_date_period_eq(es_data, engine):
search_context = SearchContext(engine, "Encounter")
params = (("date", "eq2015-01-17"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
search_context = SearchContext(engine, "CarePlan")
params = (("date", "eq2017-06-01"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
def test_searchparam_type_date_period_ne(es_data, engine):
search_context = SearchContext(engine, "Encounter")
params = (("date", "ne2015-01-17"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
search_context = SearchContext(engine, "CarePlan")
params = (("date", "ne2017-06-01T17:00:00"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
params = (("date", "ne2017-06-01"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
def test_searchparam_type_date_period_gt(es_data, engine):
search_context = SearchContext(engine, "Encounter")
params = (("date", "gt2015-01-20"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
search_context = SearchContext(engine, "CarePlan")
params = (("date", "gt2017-06-01T17:00:00"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
params = (("date", "gt2017-06-02"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
def test_searchparam_type_date_period_lt(es_data, engine):
search_context = SearchContext(engine, "Encounter")
params = (("date", "lt2015-01-20"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
search_context = SearchContext(engine, "CarePlan")
params = (("date", "lt2017-06-01T15:00:00"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
def test_searchparam_type_date_period_ge(es_data, engine):
search_context = SearchContext(engine, "Encounter")
params = (("date", "ge2015-01-20"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
search_context = SearchContext(engine, "CarePlan")
params = (("date", "ge2017-06-01T17:00:00"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
params = (("date", "ge2017-06-01"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
params = (("date", "ge2017-06-02"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
def test_searchparam_type_date_period_le(es_data, engine):
search_context = SearchContext(engine, "Encounter")
params = (("date", "le2015-01-20"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
search_context = SearchContext(engine, "CarePlan")
params = (("date", "le2017-06-01T16:00:00"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
params = (("date", "le2017-06-01T15:00:00"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
def test_searchparam_type_date_period_sa(es_data, engine):
search_context = SearchContext(engine, "CarePlan")
params = (("date", "sa2017-06-01T17:00:00"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
params = (("date", "sa2017-06-01T12:00:00"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
@pytest.mark.asyncio
async def test_async_searchparam_type_date_period_sa(es_data, async_engine):
search_context = SearchContext(async_engine, "CarePlan")
params = (("date", "sa2017-06-01T17:00:00"),)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 0
params = (("date", "sa2017-06-01T12:00:00"),)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 1
def test_searchparam_type_date_period_eb(es_data, engine):
search_context = SearchContext(engine, "Encounter")
params = (("date", "eb2019-01-20"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
search_context = SearchContext(engine, "CarePlan")
params = (("date", "eb2017-06-01T18:00:00"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
params = (("date", "eb2019-01-20"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
def test_searchparam_type_date_period_ap(es_data, engine):
search_context = SearchContext(engine, "Encounter")
params = (("date", "ap2015-01-17T16:00:00"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
params = (("date", "ap2015-01-16T16:00:00"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
params = (("date", "ap2019-01-20"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
search_context = SearchContext(engine, "CarePlan")
params = (("date", "ap2017-06-01T17:00:00"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
params = (("date", "ap2017-06-01T19:00:00"),)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 0
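# Note on the 'ap' (approximately) prefix: the FHIR search spec suggests a
# default margin of roughly 10% around the target value/range, which is why a
# timestamp one day off can fall outside the match window above; the exact
# window is engine-dependent.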
@pytest.mark.asyncio
async def test_async_searchparam_type_date_period_ap(es_data, async_engine):
search_context = SearchContext(async_engine, "Encounter")
params = (("date", "ap2015-01-17T16:00:00"),)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 1
params = (("date", "ap2015-01-16T16:00:00"),)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 0
params = (("date", "ap2019-01-20"),)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 1
search_context = SearchContext(async_engine, "CarePlan")
params = (("date", "ap2017-06-01T17:00:00"),)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 1
params = (("date", "ap2017-06-01T19:00:00"),)
fhir_search = AsyncSearch(search_context, params=params)
bundle = await fhir_search()
assert bundle.total == 0
def test_searchparam_ignored_pretty_format(es_data, engine):
search_context = SearchContext(engine, "Encounter")
params = (
("date", "ap2015-01-17T16:00:00"),
("_pretty", "true"),
("_format", "application/fhir+json"),
)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
search_context = SearchContext(engine, "Patient")
params = (
("_pretty", "true"),
("_format", "application/fhir+json"),
)
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
def test_searchparam_elements(es_data, engine):
search_context = SearchContext(engine, "Patient")
result = Search(search_context, query_string="_elements=identifier,active,link")()
assert result.entry[0].resource.id is not None
assert result.entry[0].resource.identifier is not None
assert result.entry[0].resource.active is not None
assert result.entry[0].resource.link is not None
assert result.entry[0].resource.meta is None
assert result.entry[0].resource.birthDate is None
assert result.entry[0].resource.maritalStatus is None
def test_searchparam_unexisting_elements(es_data, engine):
"""Elements that don't exist should be ignored"""
search_context = SearchContext(engine, "Patient")
result = Search(search_context, query_string="_elements=active,unexisting")()
assert result.entry[0].resource.id is not None
assert result.entry[0].resource.identifier is None
assert result.entry[0].resource.active is not None
assert result.entry[0].resource.link is None
assert result.entry[0].resource.meta is None
assert result.entry[0].resource.birthDate is None
assert result.entry[0].resource.maritalStatus is None
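# Hedged sketch of the _elements projection the two tests above describe:
# keep only the requested elements plus the resource id, drop everything else
# (including meta), and silently ignore names that do not exist. Not
# fhirpath's verbatim code.
def _apply_elements(resource_dict, elements):
    keep = set(elements) | {"resourceType", "id"}
    return {k: v for k, v in resource_dict.items() if k in keep}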
def test_searchparam_summary_true(es_data, engine):
"""Handle _summary=true
Return a limited subset of elements from the resource. This subset SHOULD consist
solely of all supported elements that are marked as "summary" in the base definition
of the resource(s)
"""
search_context = SearchContext(engine, "Patient")
result = Search(search_context, params=(("_summary", "true"),))()
assert result.entry[0].resource.text is None
assert result.entry[0].resource.id is not None
assert result.entry[0].resource.meta is not None
assert (
result.entry[0].resource.birthDate is not None
) # birthDate is a summary element in Patient
assert (
result.entry[0].resource.maritalStatus is None
) # maritalStatus should not be in summary
@pytest.mark.asyncio
async def test_async_searchparam_summary_true(es_data, async_engine):
"""Handle _summary=true
Return a limited subset of elements from the resource. This subset SHOULD consist
solely of all supported elements that are marked as "summary" in the base definition
of the resource(s)
"""
search_context = SearchContext(async_engine, "Patient")
result = await AsyncSearch(search_context, params=(("_summary", "true"),))()
assert result.entry[0].resource.text is None
assert result.entry[0].resource.id is not None
assert result.entry[0].resource.meta is not None
assert (
result.entry[0].resource.birthDate is not None
) # birthDate is a summary element in Patient
assert (
result.entry[0].resource.maritalStatus is None
) # maritalStatus should not be in summary
def test_searchparam_summary_false(es_data, engine):
"""Handle _summary=false
Return all parts of the resource(s)
"""
search_context = SearchContext(engine, "Patient")
result = Search(search_context, params=(("_summary", "false"),))()
assert result.entry[0].resource.text is not None
assert result.entry[0].resource.id is not None
assert result.entry[0].resource.meta is not None
assert result.entry[0].resource.link is not None
assert result.entry[0].resource.birthDate is not None
def test_searchparam_summary_text(es_data, engine):
"""Handle _summary=text
Return only the "text" element, the 'id' element, the 'meta' element,
and only top-level mandatory elements
"""
search_context = SearchContext(engine, "Patient")
result = Search(search_context, params=(("_summary", "text"),))()
assert result.entry[0].resource.text is not None
assert result.entry[0].resource.id is not None
assert result.entry[0].resource.meta is not None
assert result.entry[0].resource.link is not None # link is a mandatory top element
# communication would also be a mandatory top element but is not in the document
assert result.entry[0].resource.birthDate is None # birthDate is not mandatory
def test_searchparam_summary_data(es_data, engine):
"""Handle _summary=data
Remove the text element
"""
search_context = SearchContext(engine, "Patient")
result = Search(search_context, params=(("_summary", "data"),))()
assert result.entry[0].resource.text is None
assert result.entry[0].resource.id is not None
assert result.entry[0].resource.meta is not None
assert result.entry[0].resource.link is not None
assert result.entry[0].resource.birthDate is not None
@pytest.mark.asyncio
async def test_async_searchparam_summary_data(es_data, async_engine):
"""Handle _summary=data
Remove the text element
"""
search_context = SearchContext(async_engine, "Patient")
result = await AsyncSearch(search_context, params=(("_summary", "data"),))()
assert result.entry[0].resource.text is None
assert result.entry[0].resource.id is not None
assert result.entry[0].resource.meta is not None
assert result.entry[0].resource.link is not None
assert result.entry[0].resource.birthDate is not None
def test_searchparam_summary_count(es_data, engine):
"""Handle _summary=count
Search only: just return a count of the matching resources, without returning the
actual matches
"""
search_context = SearchContext(engine, "Patient")
result = Search(search_context, params=(("_summary", "count"),))()
assert result.entry == []
assert result.total == 1
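# Hedged sketch tying together the _summary modes tested above; the summary
# and mandatory element sets come from the FHIR resource definitions, and
# this is illustrative, not fhirpath's verbatim code.
def _apply_summary(mode, resource_dict, summary_elements, mandatory_elements):
    if mode == "count":
        return None  # totals only, no resource bodies returned
    if mode == "data":
        return {k: v for k, v in resource_dict.items() if k != "text"}
    if mode == "text":
        keep = {"id", "meta", "text"} | set(mandatory_elements)
        return {k: v for k, v in resource_dict.items() if k in keep}
    if mode == "true":
        keep = {"id", "meta"} | set(summary_elements)
        return {k: v for k, v in resource_dict.items() if k in keep}
    return resource_dict  # "false": return the full resource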
@pytest.mark.asyncio
async def test_async_searchparam_summary_count(es_data, async_engine):
"""Handle _summary=count
Search only: just return a count of the matching resources, without returning the
actual matches
"""
search_context = SearchContext(async_engine, "Patient")
result = await AsyncSearch(search_context, params=(("_summary", "count"),))()
assert result.entry == []
assert result.total == 1
def test_search_patient_with_name(es_data, engine):
"""Issue:28"""
# a slightly more complex case
search_context = SearchContext(engine, "Patient")
params = (("name", "Jonson"), ("name", "Saint"))
fhir_search = Search(search_context, params=params)
bundle = fhir_search()
assert bundle.total == 1
| 35.272505 | 88 | 0.66934 | 6,745 | 57,600 | 5.547961 | 0.074722 | 0.070816 | 0.070068 | 0.08685 | 0.83365 | 0.802036 | 0.780952 | 0.762887 | 0.722242 | 0.683627 | 0 | 0.031963 | 0.195573 | 57,600 | 1,632 | 89 | 35.294118 | 0.775656 | 0.053299 | 0 | 0.695724 | 0 | 0.001645 | 0.189289 | 0.05305 | 0 | 0 | 0 | 0.000613 | 0.222862 | 1 | 0.043586 | false | 0 | 0.014803 | 0 | 0.058388 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
72b41c53c99af8c20200d649987e3332db63bc99 | 174 | py | Python | src/airfly/_vendor/airflow/providers/qubole/operators/qubole.py | ryanchao2012/airfly | 230ddd88885defc67485fa0c51f66c4a67ae98a9 | [
"MIT"
] | 7 | 2021-09-27T11:38:48.000Z | 2022-02-01T06:06:24.000Z | src/airfly/_vendor/airflow/providers/qubole/operators/qubole.py | ryanchao2012/airfly | 230ddd88885defc67485fa0c51f66c4a67ae98a9 | [
"MIT"
] | null | null | null | src/airfly/_vendor/airflow/providers/qubole/operators/qubole.py | ryanchao2012/airfly | 230ddd88885defc67485fa0c51f66c4a67ae98a9 | [
"MIT"
] | null | null | null | # Auto generated by 'inv collect-airflow'
from airfly._vendor.airflow.models.baseoperator import BaseOperator
class QuboleOperator(BaseOperator):
qubole_conn_id: "str"
| 24.857143 | 67 | 0.804598 | 21 | 174 | 6.52381 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114943 | 174 | 6 | 68 | 29 | 0.88961 | 0.224138 | 0 | 0 | 1 | 0 | 0.022556 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f477d9ff495f260b08c96e89e5bb434fed5d1ec0 | 116 | py | Python | MedTARSQI/src/main/resources/ttk/utilities/printing.py | CDCgov/DCPC | c3fadef1bd6345e01a58afef051491d8ef6a7f93 | [
"Apache-2.0"
] | 6 | 2018-11-03T22:43:35.000Z | 2022-02-15T17:51:33.000Z | MedTARSQI/src/main/resources/ttk/utilities/printing.py | CDCgov/DCPC | c3fadef1bd6345e01a58afef051491d8ef6a7f93 | [
"Apache-2.0"
] | 2 | 2019-04-08T03:42:59.000Z | 2019-10-28T13:42:59.000Z | MedTARSQI/src/main/resources/ttk/utilities/printing.py | CDCgov/DCPC | c3fadef1bd6345e01a58afef051491d8ef6a7f93 | [
"Apache-2.0"
] | 10 | 2017-04-10T21:40:22.000Z | 2022-02-21T16:50:10.000Z | import pprint
def pp(stuff):
pretty_printer = pprint.PrettyPrinter(indent=3)
pretty_printer.pprint(stuff)
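if __name__ == '__main__':
    # Minimal usage sketch (the sample structure is illustrative, not part of
    # the original module): pretty-print nested data with 3-space indentation.
    pp({'tags': ['EVENT', 'TIMEX3'], 'counts': {'EVENT': 12, 'TIMEX3': 4}})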
| 16.571429 | 51 | 0.75 | 15 | 116 | 5.666667 | 0.666667 | 0.305882 | 0.447059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010204 | 0.155172 | 116 | 6 | 52 | 19.333333 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0.75 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
be4ef979f4d92f8f835e57616524eb8c0cfbffdb | 65 | py | Python | tests/run.py | buswinka/DetectStereocillia | 7205680d9861cb50a447fe730696d2631f8256ba | [
"MIT"
] | null | null | null | tests/run.py | buswinka/DetectStereocillia | 7205680d9861cb50a447fe730696d2631f8256ba | [
"MIT"
] | null | null | null | tests/run.py | buswinka/DetectStereocillia | 7205680d9861cb50a447fe730696d2631f8256ba | [
"MIT"
] | 1 | 2022-03-20T03:05:20.000Z | 2022-03-20T03:05:20.000Z | import tests.utils_test
tests.utils_test.test_render_keypoints() | 21.666667 | 40 | 0.876923 | 10 | 65 | 5.3 | 0.6 | 0.377358 | 0.528302 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046154 | 65 | 3 | 40 | 21.666667 | 0.854839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
be4f5c6a6a3e837d2f81e12045ad6df6b3176166 | 5,941 | py | Python | reader/chartsDBAPIs2_TBD_asyncGather.py | sifarone/gce_k8s_deployment | f596e17b9d0263ae24c61ebba9925af4719b4306 | [
"MIT"
] | null | null | null | reader/chartsDBAPIs2_TBD_asyncGather.py | sifarone/gce_k8s_deployment | f596e17b9d0263ae24c61ebba9925af4719b4306 | [
"MIT"
] | null | null | null | reader/chartsDBAPIs2_TBD_asyncGather.py | sifarone/gce_k8s_deployment | f596e17b9d0263ae24c61ebba9925af4719b4306 | [
"MIT"
] | 1 | 2021-01-24T17:07:37.000Z | 2021-01-24T17:07:37.000Z | import asyncio
import chartsUtils as utils
import cashData.cashDbAPIs as cashDbAPIs
import fnoData.stkOptDbAPIs as stkOptDbAPIs
import fnoData.stkFutDbAPIs as stkFutDbAPIs
import fnoData.idxOptDbAPIs as idxOptDbAPIs
import fnoData.idxFutDbAPIs as idxFutDbAPIs
import indexData.indexDbAPIs as indexDbAPIs
async def getData(params, source): #3
if source == 'cash':
data = await cashDbAPIs.CashDbAPIs().getCashData(params['symbol'], params['startDate'], params['date'])
return data
elif source == 'stkOpt':
data = await stkOptDbAPIs.StkOptDbAPIs().getStkOptData(params['symbol'], params['stkOptExpiryDate'])
return data
elif source == 'stkFut':
pass
elif source == 'idxOpt':
pass
elif source == 'idxFut':
pass
elif source == 'index':
pass
elif source == 'put_stkOptOIvsDeltaOI':
# convert dates from string to date types
expDate = utils.convertStringToDatetime(params['stkOptExpiryDate'])
dd = utils.convertStringToDatetime(params['date'])
data = await stkOptDbAPIs.StkOptDbAPIs().getStrikePricePutCallDetailsForADate(params['symbol'], expDate, dd)
return data['PE']
elif source == 'call_stkOptOIvsDeltaOI':
# convert dates from string to datetime types
expDate = utils.convertStringToDatetime(params['stkOptExpiryDate'])
dd = utils.convertStringToDatetime(params['date'])
data = await stkOptDbAPIs.StkOptDbAPIs().getStrikePricePutCallDetailsForADate(params['symbol'], expDate, dd)
return data['CE']
elif source == 'put_idxOptOIvsDeltaOI':
# convert dates from string to date types
expDate = utils.convertStringToDatetime(params['idxOptExpiryDate'])
dd = utils.convertStringToDatetime(params['date'])
data = await idxOptDbAPIs.IdxOptDbAPIs().getStrikePricePutCallDetailsForADate(params['symbol'], expDate, dd)
print('=== ', data)
return data['PE']
elif source == 'call_idxOptOIvsDeltaOI':
# convert dates from string to datetime types
expDate = utils.convertStringToDatetime(params['idxOptExpiryDate'])
dd = utils.convertStringToDatetime(params['date'])
data = await idxOptDbAPIs.IdxOptDbAPIs().getStrikePricePutCallDetailsForADate(params['symbol'], expDate, dd)
return data['CE']
elif source == 'analytics':
pass
else:
        print('Something is not right with charts!!')
async def gatherData(params, source):
dataSources = {}
if source == 'cash':
data = await cashDbAPIs.CashDbAPIs().getCashData(params['symbol'], params['startDate'], params['date'])
return data
elif source == 'stkOpt':
data = await stkOptDbAPIs.StkOptDbAPIs().getStkOptData(params['symbol'], params['stkOptExpiryDate'])
return data
elif source == 'stkFut':
pass
elif source == 'idxOpt':
pass
elif source == 'idxFut':
pass
elif source == 'index':
pass
elif source == 'put_stkOptOIvsDeltaOI':
# convert dates from string to date types
expDate = utils.convertStringToDatetime(params['stkOptExpiryDate'])
dd = utils.convertStringToDatetime(params['date'])
data = await stkOptDbAPIs.StkOptDbAPIs().getStrikePricePutCallDetailsForADate(params['symbol'], expDate, dd)
return data['PE']
elif source == 'call_stkOptOIvsDeltaOI':
# convert dates from string to datetime types
expDate = utils.convertStringToDatetime(params['stkOptExpiryDate'])
dd = utils.convertStringToDatetime(params['date'])
data = await stkOptDbAPIs.StkOptDbAPIs().getStrikePricePutCallDetailsForADate(params['symbol'], expDate, dd)
return data['CE']
elif source == 'put_idxOptOIvsDeltaOI':
# convert dates from string to date types
expDate = utils.convertStringToDatetime(params['idxOptExpiryDate'])
dd = utils.convertStringToDatetime(params['date'])
data = await idxOptDbAPIs.IdxOptDbAPIs().getStrikePricePutCallDetailsForADate(params['symbol'], expDate, dd)
print('=== ', data)
return data['PE']
elif source == 'call_idxOptOIvsDeltaOI':
# convert dates from string to datetime types
expDate = utils.convertStringToDatetime(params['idxOptExpiryDate'])
dd = utils.convertStringToDatetime(params['date'])
data = await idxOptDbAPIs.IdxOptDbAPIs().getStrikePricePutCallDetailsForADate(params['symbol'], expDate, dd)
return data['CE']
elif source == 'analytics':
pass
else:
        print('Something is not right with charts!!')
async def getDataFromSources(params, chart): #2
sourceList = chart['sourceList']
aggregatedData = {}
for source in sourceList:
        dataSources = await gatherData(params, source)
        fields = chart[source]
        for field in fields:
            aggregatedData.update({field: dataSources[field]})
print('***** ',aggregatedData)
return aggregatedData
async def getChartsData(request): #1
returnData = {}
body = await request.json()
if body:
params = {}
params.update({
'symbol' : body.get('symbol'),
'startDate' : body.get('startDate'),
'stkOptExpiryDate' : body.get('stkOptExpiryDate'),
'idxOptExpiryDate' : body.get('idxOptExpiryDate'),
'strikePrice' : body.get('strikePrice'),
#'optionType' : body.get('optionType'),
'date' : body.get('date')
})
charts = body.get('charts')
for chart in charts:
chartData = await getDataFromSources(params, body.get(chart))
returnData.update({chart : chartData})
else:
returnData.update({'ERROR' : ''})
return returnData
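# Illustrative request body for getChartsData (all values are hypothetical;
# the handler expects a 'charts' list plus one config entry per chart name,
# each mapping a data source to the fields to aggregate):
#
#     {
#         "symbol": "ABC",
#         "startDate": "2020-01-01",
#         "date": "2020-02-01",
#         "stkOptExpiryDate": "2020-02-27",
#         "idxOptExpiryDate": "2020-02-27",
#         "strikePrice": 100,
#         "charts": ["chart1"],
#         "chart1": {"sourceList": ["cash"], "cash": ["close", "volume"]}
#     }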
| 41.838028 | 116 | 0.653089 | 542 | 5,941 | 7.143911 | 0.162362 | 0.051653 | 0.140496 | 0.045455 | 0.733471 | 0.733471 | 0.733471 | 0.733471 | 0.733471 | 0.733471 | 0 | 0.000659 | 0.233967 | 5,941 | 141 | 117 | 42.134752 | 0.850143 | 0.070022 | 0 | 0.658333 | 0 | 0 | 0.14462 | 0.03121 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.083333 | 0.066667 | 0 | 0.183333 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
be7d430ee98a61614b6f0f10d068ce835fa443aa | 20 | py | Python | PyAPT/__init__.py | burggraaff/PyAPT | 08d16bd2600575dd72af9a39532218c2b5ed7634 | [
"MIT"
] | 29 | 2015-11-23T17:05:17.000Z | 2018-05-09T16:36:41.000Z | PyAPT/__init__.py | burggraaff/PyAPT | 08d16bd2600575dd72af9a39532218c2b5ed7634 | [
"MIT"
] | 16 | 2015-04-17T23:48:32.000Z | 2018-03-27T15:19:10.000Z | PyAPT/__init__.py | burggraaff/PyAPT | 08d16bd2600575dd72af9a39532218c2b5ed7634 | [
"MIT"
] | 19 | 2015-05-04T20:41:18.000Z | 2018-06-12T11:01:04.000Z | from PyAPT import *
| 10 | 19 | 0.75 | 3 | 20 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 20 | 1 | 20 | 20 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bebdffea9351bc3af91a5692d58a91a014fa6854 | 41 | py | Python | tests/t8.py | jplevyak/pyc | 9f4bc49be78ba29427841460945ce63826fcd857 | [
"BSD-3-Clause"
] | 3 | 2019-08-21T22:01:35.000Z | 2021-07-25T00:21:28.000Z | tests/t8.py | jplevyak/pyc | 9f4bc49be78ba29427841460945ce63826fcd857 | [
"BSD-3-Clause"
] | null | null | null | tests/t8.py | jplevyak/pyc | 9f4bc49be78ba29427841460945ce63826fcd857 | [
"BSD-3-Clause"
] | null | null | null | a = 1
while a < 4:
    print(a)
    a = a + 1
| 8.2 | 12 | 0.439024 | 10 | 41 | 1.8 | 0.5 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 0.439024 | 41 | 4 | 13 | 10.25 | 0.652174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.25 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fe45efd4882a0523b188ddc2db036914d8d57049 | 288 | py | Python | recbole/data/dataloader/__init__.py | mengruwu/RecBole | 251fb3478a35b061a2a411d598e8c716b94133c1 | [
"MIT"
] | null | null | null | recbole/data/dataloader/__init__.py | mengruwu/RecBole | 251fb3478a35b061a2a411d598e8c716b94133c1 | [
"MIT"
] | null | null | null | recbole/data/dataloader/__init__.py | mengruwu/RecBole | 251fb3478a35b061a2a411d598e8c716b94133c1 | [
"MIT"
] | null | null | null | from recbole.data.dataloader.abstract_dataloader import *
from recbole.data.dataloader.general_dataloader import *
from recbole.data.dataloader.knowledge_dataloader import *
from recbole.data.dataloader.user_dataloader import *
from recbole.data.dataloader.customized_dataloader import *
| 48 | 59 | 0.861111 | 35 | 288 | 6.942857 | 0.285714 | 0.226337 | 0.308642 | 0.514403 | 0.674897 | 0.674897 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069444 | 288 | 5 | 60 | 57.6 | 0.906716 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
fe56b708959592002b41797ca9d2ed32f37b916c | 141 | py | Python | python/macau/__init__.py | edebrouwer/macau | 0b22d21ed954209406246e70178523102e98f922 | [
"MIT"
] | 38 | 2016-02-27T22:18:42.000Z | 2021-11-29T12:17:39.000Z | python/macau/__init__.py | edebrouwer/macau | 0b22d21ed954209406246e70178523102e98f922 | [
"MIT"
] | 11 | 2016-05-23T14:14:53.000Z | 2020-09-16T08:12:40.000Z | python/macau/__init__.py | edebrouwer/macau | 0b22d21ed954209406246e70178523102e98f922 | [
"MIT"
] | 19 | 2016-04-12T12:13:38.000Z | 2021-06-01T15:05:59.000Z | from .version import __version__
from .macau import macau, bpmf, MacauResult
from .macau import blockcg, make_train_test, make_train_test_df
| 35.25 | 63 | 0.836879 | 21 | 141 | 5.190476 | 0.52381 | 0.165138 | 0.275229 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113475 | 141 | 3 | 64 | 47 | 0.872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe8c16ed3d061d9b74bb54febd630fc1a79a0812 | 11,337 | py | Python | tests/providers/google/firebase/hooks/test_firestore.py | ncolomer/airflow | cb7c67dea9cd9b9c5de10e355b63039446003149 | [
"Apache-2.0"
] | 1 | 2021-03-03T14:13:28.000Z | 2021-03-03T14:13:28.000Z | tests/providers/google/firebase/hooks/test_firestore.py | ncolomer/airflow | cb7c67dea9cd9b9c5de10e355b63039446003149 | [
"Apache-2.0"
] | null | null | null | tests/providers/google/firebase/hooks/test_firestore.py | ncolomer/airflow | cb7c67dea9cd9b9c5de10e355b63039446003149 | [
"Apache-2.0"
] | null | null | null | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
Tests for Google Cloud Firestore
"""
import unittest
from typing import Optional
from unittest import mock
from mock import PropertyMock
from airflow.exceptions import AirflowException
from airflow.providers.google.firebase.hooks.firestore import CloudFirestoreHook
from tests.providers.google.cloud.utils.base_gcp_mock import (
GCP_PROJECT_ID_HOOK_UNIT_TEST,
mock_base_gcp_hook_default_project_id,
mock_base_gcp_hook_no_default_project_id,
)
EXPORT_DOCUMENT_BODY = {
"outputUriPrefix": "gs://test-bucket/test-namespace/",
"collectionIds": ["test-collection"],
}
TEST_OPERATION = {
"name": "operation-name",
}
TEST_WAITING_OPERATION = {"done": False, "response": "response"}
TEST_DONE_OPERATION = {"done": True, "response": "response"}
TEST_ERROR_OPERATION = {"done": True, "response": "response", "error": "error"}
TEST_PROJECT_ID = "firestore--project-id"
class TestCloudFirestoreHookWithPassedProjectId(unittest.TestCase):
hook = None # type: Optional[CloudFirestoreHook]
def setUp(self):
with mock.patch(
"airflow.providers.google.common.hooks.base_google.GoogleBaseHook.__init__",
new=mock_base_gcp_hook_default_project_id,
):
self.hook = CloudFirestoreHook(gcp_conn_id="test")
@mock.patch("airflow.providers.google.firebase.hooks.firestore.CloudFirestoreHook._authorize")
@mock.patch("airflow.providers.google.firebase.hooks.firestore.build")
@mock.patch("airflow.providers.google.firebase.hooks.firestore.build_from_document")
def test_client_creation(self, mock_build_from_document, mock_build, mock_authorize):
result = self.hook.get_conn()
mock_build.assert_called_once_with('firestore', 'v1', cache_discovery=False)
mock_build_from_document.assert_called_once_with(
mock_build.return_value._rootDesc, http=mock_authorize.return_value
)
self.assertEqual(mock_build_from_document.return_value, result)
self.assertEqual(self.hook._conn, result)
@mock.patch("airflow.providers.google.firebase.hooks.firestore.CloudFirestoreHook.get_conn")
    def test_immediately_complete(self, get_conn_mock):
service_mock = get_conn_mock.return_value
mock_export_documents = service_mock.projects.return_value.databases.return_value.exportDocuments
mock_operation_get = (
service_mock.projects.return_value.databases.return_value.operations.return_value.get
)
(mock_export_documents.return_value.execute.return_value) = TEST_OPERATION
(mock_operation_get.return_value.execute.return_value) = TEST_DONE_OPERATION
self.hook.export_documents(body=EXPORT_DOCUMENT_BODY, project_id=TEST_PROJECT_ID)
mock_export_documents.assert_called_once_with(
body=EXPORT_DOCUMENT_BODY, name='projects/firestore--project-id/databases/(default)'
)
@mock.patch("airflow.providers.google.firebase.hooks.firestore.CloudFirestoreHook.get_conn")
@mock.patch("airflow.providers.google.firebase.hooks.firestore.time.sleep")
def test_waiting_operation(self, _, get_conn_mock):
service_mock = get_conn_mock.return_value
mock_export_documents = service_mock.projects.return_value.databases.return_value.exportDocuments
mock_operation_get = (
service_mock.projects.return_value.databases.return_value.operations.return_value.get
)
(mock_export_documents.return_value.execute.return_value) = TEST_OPERATION
execute_mock = mock.Mock(
**{"side_effect": [TEST_WAITING_OPERATION, TEST_DONE_OPERATION, TEST_DONE_OPERATION]}
)
mock_operation_get.return_value.execute = execute_mock
self.hook.export_documents(body=EXPORT_DOCUMENT_BODY, project_id=TEST_PROJECT_ID)
mock_export_documents.assert_called_once_with(
body=EXPORT_DOCUMENT_BODY, name='projects/firestore--project-id/databases/(default)'
)
@mock.patch("airflow.providers.google.firebase.hooks.firestore.CloudFirestoreHook.get_conn")
@mock.patch("airflow.providers.google.firebase.hooks.firestore.time.sleep")
def test_error_operation(self, _, get_conn_mock):
service_mock = get_conn_mock.return_value
mock_export_documents = service_mock.projects.return_value.databases.return_value.exportDocuments
mock_operation_get = (
service_mock.projects.return_value.databases.return_value.operations.return_value.get
)
(mock_export_documents.return_value.execute.return_value) = TEST_OPERATION
execute_mock = mock.Mock(**{"side_effect": [TEST_WAITING_OPERATION, TEST_ERROR_OPERATION]})
mock_operation_get.return_value.execute = execute_mock
with self.assertRaisesRegex(AirflowException, "error"):
self.hook.export_documents(body=EXPORT_DOCUMENT_BODY, project_id=TEST_PROJECT_ID)
class TestCloudFirestoreHookWithDefaultProjectIdFromConnection(unittest.TestCase):
hook = None # type: Optional[CloudFirestoreHook]
def setUp(self):
with mock.patch(
"airflow.providers.google.common.hooks.base_google.GoogleBaseHook.__init__",
new=mock_base_gcp_hook_default_project_id,
):
self.hook = CloudFirestoreHook(gcp_conn_id="test")
@mock.patch("airflow.providers.google.firebase.hooks.firestore.CloudFirestoreHook._authorize")
@mock.patch("airflow.providers.google.firebase.hooks.firestore.build")
@mock.patch("airflow.providers.google.firebase.hooks.firestore.build_from_document")
def test_client_creation(self, mock_build_from_document, mock_build, mock_authorize):
result = self.hook.get_conn()
mock_build.assert_called_once_with('firestore', 'v1', cache_discovery=False)
mock_build_from_document.assert_called_once_with(
mock_build.return_value._rootDesc, http=mock_authorize.return_value
)
self.assertEqual(mock_build_from_document.return_value, result)
self.assertEqual(self.hook._conn, result)
@mock.patch(
'airflow.providers.google.common.hooks.base_google.GoogleBaseHook.project_id',
new_callable=PropertyMock,
return_value=GCP_PROJECT_ID_HOOK_UNIT_TEST,
)
@mock.patch("airflow.providers.google.firebase.hooks.firestore.CloudFirestoreHook.get_conn")
def test_immediately_complete(self, get_conn_mock, mock_project_id):
service_mock = get_conn_mock.return_value
mock_export_documents = service_mock.projects.return_value.databases.return_value.exportDocuments
mock_operation_get = (
service_mock.projects.return_value.databases.return_value.operations.return_value.get
)
(mock_export_documents.return_value.execute.return_value) = TEST_OPERATION
mock_operation_get.return_value.execute.return_value = TEST_DONE_OPERATION
self.hook.export_documents(body=EXPORT_DOCUMENT_BODY)
mock_export_documents.assert_called_once_with(
body=EXPORT_DOCUMENT_BODY, name='projects/example-project/databases/(default)'
)
@mock.patch(
'airflow.providers.google.common.hooks.base_google.GoogleBaseHook.project_id',
new_callable=PropertyMock,
return_value=GCP_PROJECT_ID_HOOK_UNIT_TEST,
)
@mock.patch("airflow.providers.google.firebase.hooks.firestore.CloudFirestoreHook.get_conn")
@mock.patch("airflow.providers.google.firebase.hooks.firestore.time.sleep")
def test_waiting_operation(self, _, get_conn_mock, mock_project_id):
service_mock = get_conn_mock.return_value
mock_export_documents = service_mock.projects.return_value.databases.return_value.exportDocuments
mock_operation_get = (
service_mock.projects.return_value.databases.return_value.operations.return_value.get
)
(mock_export_documents.return_value.execute.return_value) = TEST_OPERATION
execute_mock = mock.Mock(
**{"side_effect": [TEST_WAITING_OPERATION, TEST_DONE_OPERATION, TEST_DONE_OPERATION]}
)
mock_operation_get.return_value.execute = execute_mock
self.hook.export_documents(body=EXPORT_DOCUMENT_BODY)
mock_export_documents.assert_called_once_with(
body=EXPORT_DOCUMENT_BODY, name='projects/example-project/databases/(default)'
)
@mock.patch(
'airflow.providers.google.common.hooks.base_google.GoogleBaseHook.project_id',
new_callable=PropertyMock,
return_value=GCP_PROJECT_ID_HOOK_UNIT_TEST,
)
@mock.patch("airflow.providers.google.firebase.hooks.firestore.CloudFirestoreHook.get_conn")
@mock.patch("airflow.providers.google.firebase.hooks.firestore.time.sleep")
def test_error_operation(self, _, get_conn_mock, mock_project_id):
service_mock = get_conn_mock.return_value
mock_export_documents = service_mock.projects.return_value.databases.return_value.exportDocuments
mock_operation_get = (
service_mock.projects.return_value.databases.return_value.operations.return_value.get
)
(mock_export_documents.return_value.execute.return_value) = TEST_OPERATION
execute_mock = mock.Mock(**{"side_effect": [TEST_WAITING_OPERATION, TEST_ERROR_OPERATION]})
mock_operation_get.return_value.execute = execute_mock
with self.assertRaisesRegex(AirflowException, "error"):
self.hook.export_documents(body=EXPORT_DOCUMENT_BODY)
class TestCloudFirestoreHookWithoutProjectId(unittest.TestCase):
hook = None # type: Optional[CloudFirestoreHook]
def setUp(self):
with mock.patch(
"airflow.providers.google.common.hooks.base_google.GoogleBaseHook.__init__",
new=mock_base_gcp_hook_no_default_project_id,
):
self.hook = CloudFirestoreHook(gcp_conn_id="test")
@mock.patch(
'airflow.providers.google.common.hooks.base_google.GoogleBaseHook.project_id',
new_callable=PropertyMock,
return_value=None,
)
@mock.patch("airflow.providers.google.firebase.hooks.firestore.CloudFirestoreHook.get_conn")
def test_create_build(self, mock_get_conn, mock_project_id):
with self.assertRaises(AirflowException) as e:
self.hook.export_documents(body={})
self.assertEqual(
"The project id must be passed either as keyword project_id parameter or as project_id extra in "
"Google Cloud connection definition. Both are not set!",
str(e.exception),
)
| 45.898785 | 109 | 0.747111 | 1,361 | 11,337 | 5.89493 | 0.133725 | 0.09049 | 0.068553 | 0.074785 | 0.817649 | 0.804063 | 0.798579 | 0.791724 | 0.787361 | 0.787361 | 0 | 0.000632 | 0.163094 | 11,337 | 246 | 110 | 46.085366 | 0.844962 | 0.078592 | 0 | 0.666667 | 0 | 0 | 0.221881 | 0.18666 | 0 | 0 | 0 | 0 | 0.087432 | 1 | 0.065574 | false | 0.010929 | 0.038251 | 0 | 0.136612 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fea49fc0b9d46b56e695a1332e53a7d8b07db839 | 13,863 | py | Python | pce/src/PCE/tools/schedulers.py | elise-baumgartner/onramp | beb3c807264fcb70d8069ff2e3990b0ce3f59912 | [
"BSD-3-Clause"
] | 2 | 2016-09-09T04:19:01.000Z | 2019-02-15T20:28:13.000Z | pce/src/PCE/tools/schedulers.py | elise-baumgartner/onramp | beb3c807264fcb70d8069ff2e3990b0ce3f59912 | [
"BSD-3-Clause"
] | 67 | 2016-06-02T19:37:56.000Z | 2018-02-22T05:23:45.000Z | pce/src/PCE/tools/schedulers.py | elise-baumgartner/onramp | beb3c807264fcb70d8069ff2e3990b0ce3f59912 | [
"BSD-3-Clause"
] | 9 | 2015-06-22T22:10:22.000Z | 2016-04-26T15:35:45.000Z | """Encapsulation of functionality provided by various batch schedulers.
Exports:
SLURMScheduler: Interface to SLURM batch scheduler.
Scheduler: Generic instantiator for all implemented schedulers.
"""
import logging
import os
from subprocess import CalledProcessError, check_output, STDOUT
from PCEHelper import pce_root
class _BatchScheduler(object):
"""Superclass for batch scheduler classes.
Subclasses must override the non-magic methods defined here.
"""
local_python = os.path.join(pce_root, 'src', 'env', 'bin', 'python')
@classmethod
def is_scheduler_for(cls, type):
"""Return boolean indicating whether the class provides an interface to
the batch scheduler type given.
Args:
type (str): Batch scheduler type.
Returns:
True if class provides interface to given batch scheduler, False if
not.
"""
pass
def get_batch_script(self, run_name, numtasks=4, num_nodes=1, email=None):
"""Return the batch script that runs a job as per args formatted for the
given batch scheduler.
Args:
run_name (str): Human-readable label for job run.
numtasks (int): Number of tasks to schedule.
num_nodes (int): Number of nodes to allocate for job.
email (str): Email to send results to upon completion. If None, no
email sent.
Returns:
Batch script implementing given attrs.
"""
pass
def schedule(self, proj_loc):
"""Schedule a job using the given batch scheduler.
Args:
proj_loc (str): Folder containing the batch script 'script.sh' for
the job to schedule.
Returns:
Result dict with the following fields:
status_code: Status code
status_msg: String giving detailed status info.
"""
pass
def check_status(self, scheduler_job_num):
"""Return job status from scheduler.
Args:
scheduler_job_num (int): Job number of the job to check state on as
given by the scheduler, not as given by OnRamp.
Returns:
2-Tuple with 0th item being error code and 1st item being a string
giving detailed status info.
"""
pass
def cancel_job(self, scheduler_job_num):
"""Cancel the given job.
Args:
scheduler_job_num (int): Job number, as given by the scheduler, of the
job to cancel.
Returns:
2-Tuple with 0th item being error code and 1st item being a string
giving detailed status info.
"""
pass
def __init__(self, type):
"""Set batch scheduler type and return the instance.
Args:
type (str): Batch scheduler type.
"""
self.logger = logging.getLogger('onramp')
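# New scheduler backends plug in by subclassing _BatchScheduler and overriding
# is_scheduler_for(); the Scheduler() factory at the bottom of this module
# discovers subclasses via __subclasses__(). A minimal sketch (hypothetical
# 'LSF' type, not part of this module):
#
#     class LSFScheduler(_BatchScheduler):
#         @classmethod
#         def is_scheduler_for(cls, type):
#             return type == 'LSF'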
class SLURMScheduler(_BatchScheduler):
@classmethod
def is_scheduler_for(cls, type):
"""Return boolean indicating whether the class provides an interface to
the batch scheduler type given.
Args:
type (str): Batch scheduler type.
Returns:
True if class provides interface to given batch scheduler, False if
not.
"""
return type == 'SLURM'
def get_batch_script(self, run_name, numtasks=4, num_nodes=1, email=None):
"""Return the batch script that runs a job as per args formatted for the
SLURM batch scheduler.
Args:
run_name (str): Human-readable label for job run.
numtasks (int): Number of tasks to schedule.
num_nodes (int): Number of nodes to allocate for job.
email (str): Email to send results to upon completion. If None, no
email sent.
Returns:
Batch script implementing given attrs.
"""
contents = '#!/bin/bash\n'
contents += '\n'
contents += '###################################\n'
contents += '# Slurm Submission options\n'
contents += '#\n'
contents += '#SBATCH --job-name=\"' + run_name + '\"\n'
contents += '#SBATCH -o output.txt\n'
contents += '#SBATCH -n ' + str(numtasks) + '\n'
contents += '#SBATCH -N ' + str(num_nodes) + '\n'
if email:
self.logger.debug('%s configured for email reporting to %s'
% (run_name, email))
contents += '#SBATCH --mail-user=' + email + '\n'
contents += '###################################\n'
contents += '\n'
contents += self.local_python + ' bin/onramp_run.py\n'
return contents
def schedule(self, proj_loc):
"""Schedule a job using the SLURM batch scheduler.
Args:
proj_loc (str): Folder containing the batch script 'script.sh' for
the job to schedule.
Returns:
Result dict with the following fields:
status_code: Status code
status_msg: String giving detailed status info.
"""
ret_dir = os.getcwd()
os.chdir(proj_loc)
try:
batch_output = check_output(['sbatch', 'script.sh'], stderr=STDOUT)
except CalledProcessError as e:
msg = 'Job scheduling call failed'
os.chdir(ret_dir)
return {
'status_code': e.returncode,
'msg': '%s: %s' % (msg, e.output),
'status_msg': '%s: %s' % (msg, e.output)
}
os.chdir(ret_dir)
output_fields = batch_output.strip().split()
if 'Submitted batch job' != ' '.join(output_fields[:-1]):
            msg = 'Unexpected output from sbatch'
self.logger.error(msg)
return {
'status_code': -7,
'status_msg': msg
}
try:
job_num = int(output_fields[3:][0])
        except (ValueError, IndexError):
            msg = 'Unexpected output from sbatch'
self.logger.error(msg)
return {
'status_code': -7,
'status_msg': msg
}
return {
'status_code': 0,
'status_msg': 'Job %d scheduled' % job_num,
'job_num': job_num
}
def check_status(self, scheduler_job_num):
"""Return job status from scheduler.
Args:
scheduler_job_num (int): Job number of the job to check state on as
given by the scheduler, not as given by OnRamp.
Returns:
2-Tuple with 0th item being error code and 1st item being a string
giving detailed status info.
"""
try:
job_info = check_output(['scontrol', 'show', 'job',
str(scheduler_job_num)])
except CalledProcessError as e:
msg = 'Job info call failed'
self.logger.error(msg)
return (-1, msg)
job_state = job_info.split('JobState=')[1].split()[0]
if job_state == 'RUNNING':
return (0, 'Running')
elif job_state == 'COMPLETED':
return (0, 'Done')
elif job_state == 'PENDING':
return (0, 'Queued')
elif job_state == 'FAILED':
msg = 'Job failed'
self.logger.error(msg)
return (-1, msg)
else:
msg = 'Unexpected job state from scheduler'
self.logger.error(msg)
return (-2, msg)
def cancel_job(self, scheduler_job_num):
"""Cancel the given job.
Args:
scheduler_job_num (int): Job number, as given by the scheduler, of the
job to cancel.
Returns:
2-Tuple with 0th item being error code and 1st item being a string
giving detailed status info.
"""
try:
result = check_output(['scancel', str(scheduler_job_num)], stderr=STDOUT)
except CalledProcessError as e:
msg = 'Job cancel call failed'
self.logger.error(msg)
return (-1, msg)
return (0, result)
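# For reference, SLURMScheduler.get_batch_script('my_run', numtasks=4,
# num_nodes=1) above generates a script of the following shape (the python
# path depends on where the PCE environment is installed):
#
#     #!/bin/bash
#
#     ###################################
#     # Slurm Submission options
#     #
#     #SBATCH --job-name="my_run"
#     #SBATCH -o output.txt
#     #SBATCH -n 4
#     #SBATCH -N 1
#     ###################################
#
#     <pce_root>/src/env/bin/python bin/onramp_run.py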
class PBSScheduler(_BatchScheduler):
@classmethod
def is_scheduler_for(cls, type):
"""Return boolean indicating whether the class provides an interface to
the batch scheduler type given.
Args:
type (str): Batch scheduler type.
Returns:
True if class provides interface to given batch scheduler, False if
not.
"""
return type == 'PBS'
def get_batch_script(self, run_name, numtasks=4, num_nodes=1, email=None):
"""Return the batch script that runs a job as per args formatted for the
PBS batch scheduler.
Args:
run_name (str): Human-readable label for job run.
numtasks (int): Number of tasks to schedule.
num_nodes (int): Number of nodes to allocate for job.
email (str): Email to send results to upon completion. If None, no
email sent.
Returns:
Batch script implementing given attrs.
"""
script = '#!/bin/bash\n'
script += '\n'
script += '################################################\n'
        script += '#PBS -l nodes=' + str(num_nodes) + '\n'
script += '#PBS -N ' + run_name + '\n'
script += '#PBS -V\n'
script += '#PBS -j oe\n'
script += '#PBS -o output.txt\n'
script += '################################################\n'
script += '\n'
script += 'cd ${PBS_O_WORKDIR}\n'
script += self.local_python + ' bin/onramp_run.py\n'
return script
def schedule(self, proj_loc):
"""Schedule a job using the PBS batch scheduler.
Args:
proj_loc (str): Folder containing the batch script 'script.sh' for
the job to schedule.
Returns:
Result dict with the following fields:
status_code: Status code
status_msg: String giving detailed status info.
"""
ret_dir = os.getcwd()
os.chdir(proj_loc)
try:
batch_output = check_output(['qsub', 'script.sh'], stderr=STDOUT)
except CalledProcessError as e:
msg = 'Job scheduling call failed'
os.chdir(ret_dir)
return {
'returncode': e.returncode,
'msg': '%s: %s' % (msg, e.output)
}
os.chdir(ret_dir)
output_fields = batch_output.strip().split('.')
try:
job_num = int(output_fields[0])
        except (ValueError, IndexError):
            msg = 'Unexpected output from qsub'
self.logger.error(msg)
return {
'status_code': -7,
'status_msg': msg
}
return {
'status_code': 0,
'status_msg': 'Job %d scheduled' % job_num,
'job_num': job_num
}
def check_status(self, scheduler_job_num):
"""Return job status from scheduler.
Args:
scheduler_job_num (int): Job number of the job to check state on as
given by the scheduler, not as given by OnRamp.
Returns:
2-Tuple with 0th item being error code and 1st item being a string
giving detailed status info.
"""
try:
job_info = check_output(['qstat', '-i', str(scheduler_job_num)],
stderr=STDOUT)
except CalledProcessError as e:
if e.output.startswith('qstat: Unknown Job Id %d' % scheduler_job_num):
return (0, 'No info')
msg = 'Job info call failed: %s' % e.output
self.logger.error(msg)
return (-1, msg)
last_line = job_info.strip().split('\n')[-1:][0]
job_state = last_line.split()[9]
        if job_state in ('R', 'r', 's', 'S', 't', 'T'):
return (0, 'Running')
elif job_state == 'W' or job_state == 'H':
return (0, 'Queued')
elif job_state == 'E':
msg = 'Job failed'
# TODO: Can maybe add error info here by qstat -j job_list option.
self.logger.error(msg)
return (-1, msg)
elif job_state == 'd':
msg = 'Job scheduled for deletion'
self.logger.error(msg)
return (-1, msg)
else:
msg = 'Unexpected job state from scheduler'
self.logger.error(msg)
return (-2, msg)
def cancel_job(self, scheduler_job_num):
"""Cancel the given job.
Args:
scheduler_job_num (int): Job number, as given by the scheduler, of the
job to cancel.
Returns:
2-Tuple with 0th item being error code and 1st item being a string
giving detailed status info.
"""
try:
result = check_output(['qdel', str(scheduler_job_num)], stderr=STDOUT)
except CalledProcessError as e:
msg = 'Job cancel call failed'
self.logger.error(msg)
return (-1, msg)
return (0, result)
def Scheduler(type):
"""Instantiate the appropriate scheduler class for given type.
Args:
type (str): Identifier for batch scheduler type.
Returns:
Instance of a _BatchScheduler for given type.
"""
for cls in _BatchScheduler.__subclasses__():
if cls.is_scheduler_for(type):
return cls(type)
raise ValueError
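if __name__ == '__main__':
    # Minimal usage sketch: build a scheduler via the factory and print the
    # batch script it would submit. The run name is illustrative; schedule(),
    # check_status() and cancel_job() additionally need the scheduler's CLI
    # tools (sbatch/scontrol/scancel or qsub/qstat/qdel) on the PATH.
    slurm = Scheduler('SLURM')
    print(slurm.get_batch_script('example_run', numtasks=4, num_nodes=1))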
| 33.566586 | 85 | 0.542235 | 1,608 | 13,863 | 4.567786 | 0.141169 | 0.020422 | 0.034717 | 0.029408 | 0.781484 | 0.769503 | 0.724847 | 0.712321 | 0.706467 | 0.691491 | 0 | 0.006144 | 0.354252 | 13,863 | 412 | 86 | 33.648058 | 0.814343 | 0.004617 | 0 | 0.57 | 0 | 0 | 0.16455 | 0.022545 | 0 | 0 | 0 | 0.002427 | 0 | 0 | null | null | 0.025 | 0.02 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
feb1766f170694b224d2531694737fc464d05ff8 | 234 | py | Python | examples/Displayable.py | Ellis0817/Introduction-to-Programming-Using-Python | 1882a2a846162d5ff56d4d56c3940b638ef408bd | [
"MIT"
] | null | null | null | examples/Displayable.py | Ellis0817/Introduction-to-Programming-Using-Python | 1882a2a846162d5ff56d4d56c3940b638ef408bd | [
"MIT"
] | 4 | 2019-11-07T12:32:19.000Z | 2020-07-19T14:04:44.000Z | examples/Displayable.py | Ellis0817/Introduction-to-Programming-Using-Python | 1882a2a846162d5ff56d4d56c3940b638ef408bd | [
"MIT"
] | 5 | 2019-12-04T15:56:55.000Z | 2022-01-14T06:19:18.000Z | class Displayable:
def getX(self): # Get x-coordinate of the vertex
return 0
def getY(self): # Get y-coordinate of the vertex
return 0
def getName(self): # Get display name of the vertex
return "" | 26 | 55 | 0.628205 | 34 | 234 | 4.323529 | 0.529412 | 0.142857 | 0.22449 | 0.346939 | 0.421769 | 0.421769 | 0.421769 | 0 | 0 | 0 | 0 | 0.012195 | 0.299145 | 234 | 9 | 56 | 26 | 0.884146 | 0.393162 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0.428571 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
22cb57965d887698eaf8d420378d4e11ac3c4382 | 10,300 | py | Python | solnml/utils/savemodel.py | williamy1996/Autoexpression | b470d9ff67074c8b076abbc1dce359db9a36f921 | [
"MIT"
] | null | null | null | solnml/utils/savemodel.py | williamy1996/Autoexpression | b470d9ff67074c8b076abbc1dce359db9a36f921 | [
"MIT"
] | null | null | null | solnml/utils/savemodel.py | williamy1996/Autoexpression | b470d9ff67074c8b076abbc1dce359db9a36f921 | [
"MIT"
] | null | null | null | import argparse
import os
import sys
import time
import pickle
import numpy as np
import pandas as pd
import json
import pickle as pkl
from sklearn.datasets import load_iris
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from solnml.utils.data_manager import DataManager
from solnml.estimators import Classifier
from solnml.utils import saveloadmodel
class Ensemble_models:
def __init__(self,ensemble_info,mdl_list):
self.ensemble_info = ensemble_info
self.model_list = mdl_list
def predict_proba(self,test_x):
        print('This is \'predict_proba\'. For regression models, please use \'predict\'.')
if(self.ensemble_info['ensemble_method']=='none'):
mdl0 = self.model_list[0].replace('\n','')
base_model = pickle.load(open(mdl0,'rb'))
y_predict = base_model.predict_proba(test_x)
return y_predict
if(self.ensemble_info['ensemble_method']=='bagging'):
y_predict = []
for mdl in self.model_list:
mdl = mdl.replace('\n','')
base_model = pickle.load(open(mdl,'rb'))
y_predict.append(base_model.predict_proba(test_x))
y_predict = np.array(y_predict)
return np.average(y_predict,axis=0)
if(self.ensemble_info['ensemble_method']=='ensemble_selection'):
y_predict = []
weights = np.array(pd.read_json(self.ensemble_info['ensemble_weights']))[:,0]
i = 0
for mdl in self.model_list:
mdl = mdl.replace('\n','')
base_model = pickle.load(open(mdl,'rb'))
y_predict.append(base_model.predict_proba(test_x)*weights[i])
i+=1
y_predict = np.array(y_predict)
return np.sum(y_predict,axis=0)
if(self.ensemble_info['ensemble_method']=='stacking'):
meta_learner = pickle.load(open(self.ensemble_info['meta_learner_path'],'rb'))
kfold = self.ensemble_info['kfold']
mdl0 = self.model_list[0].replace('\n','')
base_model = pickle.load(open(mdl0,'rb'))
y_predict = base_model.predict_proba(test_x)
n_dim = y_predict.shape[1]
sample_dim = y_predict.shape[0]
y_predict = []
if(n_dim==2):
n_dim = 1
i=0
for mdl in self.model_list:
if(i == 0):
new_sumpredict = np.zeros([sample_dim,n_dim])
mdl = mdl.replace('\n','')
base_model = pickle.load(open(mdl,'rb'))
new_predict = base_model.predict_proba(test_x)
if(n_dim==1):
new_predict = new_predict[:,1:]
new_sumpredict = new_sumpredict + new_predict/kfold
i+=1
if(i==kfold):
i=0
y_predict.append(new_sumpredict)
y_predict = np.hstack(y_predict)
y_pred = meta_learner.predict_proba(y_predict)
return y_pred
if(self.ensemble_info['ensemble_method']=='blending'):
meta_learner = pickle.load(open(self.ensemble_info['meta_learner_path'],'rb'))
mdl0 = self.model_list[0].replace('\n','')
base_model = pickle.load(open(mdl0,'rb'))
y_predict = base_model.predict_proba(test_x)
n_dim = y_predict.shape[1]
if(n_dim==2):
n_dim = 1
y_predict = []
for mdl in self.model_list:
mdl = mdl.replace('\n','')
base_model = pickle.load(open(mdl,'rb'))
new_predict = base_model.predict_proba(test_x)
if(n_dim==1):
new_predict = new_predict[:,1:]
y_predict.append(new_predict)
y_predict = np.hstack(y_predict)
y_pred = meta_learner.predict_proba(y_predict)
return y_pred
def predict(self,test_x):
        print('This is \'predict\'. For classification models, please use \'predict_proba\'.')
if(self.ensemble_info['ensemble_method']=='none'):
mdl0 = self.model_list[0].replace('\n','')
base_model = pickle.load(open(mdl0,'rb'))
y_predict = base_model.predict(test_x)
return y_predict
if(self.ensemble_info['ensemble_method']=='bagging'):
y_predict = []
for mdl in self.model_list:
mdl = mdl.replace('\n','')
base_model = pickle.load(open(mdl,'rb'))
y_predict.append(base_model.predict(test_x))
y_predict = np.array(y_predict)
return np.average(y_predict,axis=0)
if(self.ensemble_info['ensemble_method']=='ensemble_selection'):
y_predict = []
weights = np.array(pd.read_json(self.ensemble_info['ensemble_weights']))[:,0]
i = 0
for mdl in self.model_list:
mdl = mdl.replace('\n','')
base_model = pickle.load(open(mdl,'rb'))
y_predict.append(base_model.predict(test_x)*weights[i])
i+=1
y_predict = np.array(y_predict)
return np.sum(y_predict,axis=0)
if(self.ensemble_info['ensemble_method']=='stacking'):
meta_learner = pickle.load(open(self.ensemble_info['meta_learner_path'],'rb'))
kfold = self.ensemble_info['kfold']
            mdl0 = self.model_list[0].replace('\n','')
            base_model = pickle.load(open(mdl0,'rb'))
            # predict() returns a 1-D array (class labels or regression values),
            # so each base model contributes a single output column per fold.
            sample_dim = base_model.predict(test_x).shape[0]
            n_dim = 1
            y_predict = []
i=0
for mdl in self.model_list:
if(i == 0):
new_sumpredict = np.zeros([sample_dim,n_dim])
mdl = mdl.replace('\n','')
base_model = pickle.load(open(mdl,'rb'))
new_predict = base_model.predict(test_x)
                if(new_predict.ndim==1):
                    new_predict = new_predict.reshape(-1, 1)
new_sumpredict = new_sumpredict + new_predict/kfold
i+=1
if(i==kfold):
i=0
y_predict.append(new_sumpredict)
y_predict = np.hstack(y_predict)
y_pred = meta_learner.predict(y_predict)
return y_pred
if(self.ensemble_info['ensemble_method']=='blending'):
meta_learner = pickle.load(open(self.ensemble_info['meta_learner_path'],'rb'))
            # predict() returns a 1-D array (class labels or regression values),
            # so each base model contributes a single output column.
y_predict = []
for mdl in self.model_list:
mdl = mdl.replace('\n','')
base_model = pickle.load(open(mdl,'rb'))
new_predict = base_model.predict(test_x)
                if(new_predict.ndim==1):
                    new_predict = new_predict.reshape(-1, 1)
y_predict.append(new_predict)
y_predict = np.hstack(y_predict)
y_pred = meta_learner.predict(y_predict)
return y_pred
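# The ensemble_info dict consumed above is the JSON written by save_model()
# below; depending on the ensemble method it takes one of these shapes (the
# values shown are illustrative):
#
#     {"ensemble_method": "none"}
#     {"task_type": "CLF", "ensemble_method": "bagging"}
#     {"task_type": "CLF", "ensemble_method": "ensemble_selection",
#      "ensemble_weights": "<DataFrame-as-JSON>"}
#     {"task_type": "CLF", "ensemble_method": "stacking",
#      "meta_learner_path": "<save_dir>/meta_learner", "kfold": 5}
#     {"task_type": "RGS", "ensemble_method": "blending",
#      "meta_learner_path": "<save_dir>/meta_learner"}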
def save_model(mdl,save_dir):
mdl_list = ''
if not os.path.exists(save_dir):
os.makedirs(save_dir)
info = mdl.get_ens_model_info()
if(info is None):
f_ens_info = open(save_dir +'/ens_info','w')
ens_dict = {}
ens_dict['ensemble_method'] = 'none'
f_ens_info.write(json.dumps(ens_dict))
f_ens_info.close()
        os.system('cp '+ mdl.best_algo_path + ' '+save_dir +'/')
        f_mdl_list = open(save_dir +'/model_list','w')
        f_mdl_list.write(os.path.basename(mdl.best_algo_path))
f_mdl_list.close()
return
f_ens_info = open(save_dir +'/ens_info','w')
ens_dict = {}
if(mdl.task_type == 4):
ens_dict['task_type'] = 'RGS'
else:
ens_dict['task_type'] = 'CLF'
ens_met = info['ensemble_method']
ens_dict['ensemble_method'] = ens_met
if(ens_met=='bagging'):
f_ens_info.write(json.dumps(ens_dict))
if(ens_met=='ensemble_selection'):
ens_dict['ensemble_weights'] = pd.DataFrame(info['ensemble_weights']).to_json()
f_ens_info.write(json.dumps(ens_dict))
if(ens_met=='stacking'):
meta_learner_path = save_dir +'/'+os.path.basename(info['meta_learner_path'])
os.system('cp '+ info['meta_learner_path'] + ' '+save_dir +'/')
ens_dict['meta_learner_path'] = meta_learner_path
ens_dict['kfold'] = info['kfold']
f_ens_info.write(json.dumps(ens_dict))
if(ens_met=='blending'):
meta_learner_path = save_dir +'/'+os.path.basename(info['meta_learner_path'])
os.system('cp '+ info['meta_learner_path'] + ' '+save_dir +'/')
ens_dict['meta_learner_path'] = meta_learner_path
f_ens_info.write(json.dumps(ens_dict))
f_ens_info.close()
if(ens_met=='stacking'):
for conf in info['config']:
for partpath in conf[-1]:
os.system('cp '+ partpath + ' '+save_dir +'/')
mdl_list += (os.path.basename(partpath)+'\n')
else:
for conf in info['config']:
os.system('cp '+ conf[-1] + ' '+save_dir +'/')
mdl_list += (os.path.basename(conf[-1])+'\n')
f_mdl_list = open(save_dir +'/model_list','w')
f_mdl_list.write(mdl_list)
f_mdl_list.close()
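# Typical round trip (paths and the fitted `automl` estimator are
# illustrative; `automl` must expose get_ens_model_info(), task_type and
# best_algo_path as the solnml estimators do):
#
#     save_model(automl, './saved_ensemble')
#     ens = load_model('./saved_ensemble')    # defined below
#     proba = ens.predict_proba(test_x)       # classification
#     pred = ens.predict(test_x)              # regression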
def load_model(save_dir):
f_ens_info = open(save_dir +'/ens_info','r')
ens_info = json.loads(f_ens_info.read())
f_ens_info.close()
mdl_list = []
f_mdl_list = open(save_dir +'/model_list','r')
for mdl in f_mdl_list:
        mdl = mdl.replace('\n','')
mdl_list.append(save_dir +'/'+mdl)
f_mdl_list.close()
return Ensemble_models(ens_info,mdl_list) | 39.163498 | 93 | 0.55466 | 1,310 | 10,300 | 4.068702 | 0.09542 | 0.081051 | 0.060038 | 0.044653 | 0.776548 | 0.758161 | 0.757036 | 0.735084 | 0.724953 | 0.724953 | 0 | 0.008797 | 0.315728 | 10,300 | 263 | 94 | 39.163498 | 0.747446 | 0 | 0 | 0.729258 | 0 | 0 | 0.086205 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021834 | false | 0 | 0.065502 | 0 | 0.144105 | 0.008734 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fe142eb7a790e219b6058125507af4d92e353aee | 20,842 | py | Python | deep_qa-master/tests/tensors/masked_operations_test.py | RTHMaK/RPGOne | 3f3ada7db1762781668bfb2377154fdc00e17212 | [
"Apache-2.0"
] | 1 | 2017-04-11T13:03:55.000Z | 2017-04-11T13:03:55.000Z | deep_qa-master/tests/tensors/masked_operations_test.py | RTHMaK/RPGOne | 3f3ada7db1762781668bfb2377154fdc00e17212 | [
"Apache-2.0"
] | null | null | null | deep_qa-master/tests/tensors/masked_operations_test.py | RTHMaK/RPGOne | 3f3ada7db1762781668bfb2377154fdc00e17212 | [
"Apache-2.0"
] | null | null | null | # pylint: disable=no-self-use,invalid-name
import numpy
from numpy.testing import assert_almost_equal, assert_array_almost_equal
import keras.backend as K
from deep_qa.tensors.backend import l1_normalize
from deep_qa.tensors.masked_operations import masked_batch_dot, masked_softmax
from ..common.test_markers import requires_tensorflow
class TestMaskedOperations:
def test_masked_batch_dot_masks_properly(self):
embedding_dim = 3
a_length = 4
b_length = 5
batch_size = 2
tensor_a = numpy.random.rand(batch_size, a_length, embedding_dim)
tensor_b = numpy.random.rand(batch_size, b_length, embedding_dim)
mask_a = numpy.ones((batch_size, a_length))
mask_a[1, 3] = 0
mask_b = numpy.ones((batch_size, b_length))
mask_b[1, 2] = 0
result = K.eval(masked_batch_dot(K.variable(tensor_a),
K.variable(tensor_b),
K.variable(mask_a),
K.variable(mask_b)))
assert numpy.all(result[0, :, :] != numpy.zeros((a_length, b_length)))
assert numpy.any(result[1, 0, :] != numpy.zeros((b_length)))
assert numpy.any(result[1, 1, :] != numpy.zeros((b_length)))
assert numpy.any(result[1, 2, :] != numpy.zeros((b_length)))
assert numpy.all(result[1, 3, :] == numpy.zeros((b_length)))
assert numpy.any(result[1, :, 0] != numpy.zeros((a_length)))
assert numpy.any(result[1, :, 1] != numpy.zeros((a_length)))
assert numpy.all(result[1, :, 2] == numpy.zeros((a_length)))
assert numpy.any(result[1, :, 3] != numpy.zeros((a_length)))
assert numpy.any(result[1, :, 4] != numpy.zeros((a_length)))
result = K.eval(masked_batch_dot(K.variable(tensor_a),
K.variable(tensor_b),
None,
None))
assert numpy.all(result[0, :, :] != numpy.zeros((a_length, b_length)))
assert numpy.all(result[1, :, :] != numpy.zeros((a_length, b_length)))
result = K.eval(masked_batch_dot(K.variable(tensor_a),
K.variable(tensor_b),
K.variable(mask_a),
None))
assert numpy.all(result[0, :, :] != numpy.zeros((a_length, b_length)))
assert numpy.any(result[1, 0, :] != numpy.zeros((b_length)))
assert numpy.any(result[1, 1, :] != numpy.zeros((b_length)))
assert numpy.any(result[1, 2, :] != numpy.zeros((b_length)))
assert numpy.all(result[1, 3, :] == numpy.zeros((b_length)))
assert numpy.any(result[1, :, 0] != numpy.zeros((a_length)))
assert numpy.any(result[1, :, 1] != numpy.zeros((a_length)))
assert numpy.any(result[1, :, 2] != numpy.zeros((a_length)))
assert numpy.any(result[1, :, 3] != numpy.zeros((a_length)))
assert numpy.any(result[1, :, 4] != numpy.zeros((a_length)))
result = K.eval(masked_batch_dot(K.variable(tensor_a),
K.variable(tensor_b),
None,
K.variable(mask_b)))
assert numpy.all(result[0, :, :] != numpy.zeros((a_length, b_length)))
assert numpy.any(result[1, 0, :] != numpy.zeros((b_length)))
assert numpy.any(result[1, 1, :] != numpy.zeros((b_length)))
assert numpy.any(result[1, 2, :] != numpy.zeros((b_length)))
assert numpy.any(result[1, 3, :] != numpy.zeros((b_length)))
assert numpy.any(result[1, :, 0] != numpy.zeros((a_length)))
assert numpy.any(result[1, :, 1] != numpy.zeros((a_length)))
assert numpy.all(result[1, :, 2] == numpy.zeros((a_length)))
assert numpy.any(result[1, :, 3] != numpy.zeros((a_length)))
assert numpy.any(result[1, :, 4] != numpy.zeros((a_length)))
def test_masked_batch_dot_handles_uneven_tensors(self):
# We're going to test masked_batch_dot with tensors of shape (batch_size, a_length,
# embedding_dim) and (batch_size, embedding_dim). The result should have shape
# (batch_size, a_length).
embedding_dim = 3
a_length = 5
batch_size = 2
tensor_a = numpy.random.rand(batch_size, a_length, embedding_dim)
tensor_b = numpy.random.rand(batch_size, embedding_dim)
mask_a = numpy.ones((batch_size, a_length))
mask_a[0, 3] = 0
mask_b = numpy.ones((batch_size,))
mask_b[1] = 0
result = K.eval(masked_batch_dot(K.variable(tensor_a),
K.variable(tensor_b),
K.variable(mask_a),
K.variable(mask_b)))
assert result[0, 0] != 0
assert result[0, 1] != 0
assert result[0, 2] != 0
assert result[0, 3] == 0
assert result[0, 4] != 0
assert numpy.all(result[1, :] == numpy.zeros((a_length)))
# We should get the same result if we flip the order of the tensors.
flipped_result = K.eval(masked_batch_dot(K.variable(tensor_b),
K.variable(tensor_a),
K.variable(mask_b),
K.variable(mask_a)))
assert numpy.all(result == flipped_result)
@requires_tensorflow
def test_masked_batch_dot_handles_uneven_higher_order_tensors(self):
# We're going to test masked_batch_dot with tensors of shape (batch_size, common,
# a_length, embedding_dim) and (batch_size, common, embedding_dim). The result should have
# shape (batch_size, common, a_length). This currently doesn't work with the theano
# backend, because of an inconsistency in K.batch_dot for higher-order tensors. The code
# will crash if you try to run this in Theano, so we require tensorflow for this test.
embedding_dim = 3
common_length = 4
a_length = 5
batch_size = 2
tensor_a = numpy.random.rand(batch_size, common_length, a_length, embedding_dim)
tensor_b = numpy.random.rand(batch_size, common_length, embedding_dim)
mask_a = numpy.ones((batch_size, common_length, a_length))
mask_a[1, 1, 3] = 0
mask_b = numpy.ones((batch_size, common_length))
mask_b[1, 2] = 0
result = K.eval(masked_batch_dot(K.variable(tensor_a),
K.variable(tensor_b),
K.variable(mask_a),
K.variable(mask_b)))
assert numpy.all(result[0, :, :] != numpy.zeros((common_length, a_length)))
assert numpy.all(result[1, 0, :] != numpy.zeros((a_length)))
assert result[1, 1, 0] != 0
assert result[1, 1, 1] != 0
assert result[1, 1, 2] != 0
assert result[1, 1, 3] == 0
assert result[1, 1, 4] != 0
assert numpy.all(result[1, 2, :] == numpy.zeros((a_length)))
assert numpy.all(result[1, 3, :] != numpy.zeros((a_length)))
# We should get the same result if we pass the smaller tensor in first.
flipped_result = K.eval(masked_batch_dot(K.variable(tensor_b),
K.variable(tensor_a),
K.variable(mask_b),
K.variable(mask_a)))
assert numpy.all(result == flipped_result)
def test_l1_normalize_no_mask(self):
# Testing the general unmasked 1D case.
vector_1d = K.variable(numpy.array([[2, 1, 5, 7]]))
vector_1d_normalized = K.eval(l1_normalize(vector_1d))
assert_almost_equal(vector_1d_normalized,
numpy.array([[0.13333333, 0.06666666,
0.33333333, 0.46666666]]))
assert_almost_equal(1.0, numpy.sum(vector_1d_normalized), decimal=6)
# Testing the unmasked 1D case with all 0s.
vector_1d_zeros = K.variable(numpy.array([[0, 0, 0, 0]]))
vector_1d_zeros_normalized = K.eval(l1_normalize(vector_1d_zeros))
assert_array_almost_equal(vector_1d_zeros_normalized,
numpy.array([[0.25, 0.25, 0.25, 0.25]]))
# Testing the general unmasked batched case when
# inputs are not all 0's
matrix = K.variable(numpy.array([[2, 1, 5, 7], [2, 2, 2, 2]]))
matrix_normalized = K.eval(l1_normalize(matrix))
assert_array_almost_equal(matrix_normalized,
numpy.array([[0.13333333, 0.06666666,
0.33333333, 0.46666666],
[0.25, 0.25,
0.25, 0.25]]))
assert_almost_equal(numpy.array([1.0, 1.0]),
numpy.sum(matrix_normalized, axis=1), decimal=6)
# Testing the general unmasked batched case when
# one row is all 0's
matrix = K.variable(numpy.array([[2, 1, 5, 7], [0, 0, 0, 0]]))
matrix_normalized = K.eval(l1_normalize(matrix))
assert_array_almost_equal(matrix_normalized,
numpy.array([[0.13333333, 0.06666666,
0.33333333, 0.46666666],
[0.25, 0.25,
0.25, 0.25]]))
assert_almost_equal(numpy.array([1.0, 1.0]),
numpy.sum(matrix_normalized, axis=1), decimal=6)
def test_l1_normalize_masked(self):
# Testing the general masked 1D case.
vector_1d = K.variable(numpy.array([[2, 1, 5, 7]]))
vector_1d_mask = K.variable(numpy.array([[1, 1, 0, 1]]))
vector_1d_normalized = K.eval(l1_normalize(vector_1d,
vector_1d_mask))
assert_array_almost_equal(vector_1d_normalized,
numpy.array([[0.2, 0.1,
0.0, 0.7]]))
assert_almost_equal(1.0, numpy.sum(vector_1d_normalized), decimal=6)
vector_1d = K.variable(numpy.array([[1.0, 2.0, 3.0, 4.0]]))
vector_1d_mask = K.variable(numpy.array([[1, 1, 0, 1]]))
vector_1d_normalized = K.eval(l1_normalize(vector_1d,
vector_1d_mask))
assert_array_almost_equal(vector_1d_normalized,
numpy.array([[0.14285715, 0.2857143,
0, 0.5714286]]))
assert_almost_equal(1.0, numpy.sum(vector_1d_normalized), decimal=6)
# Testing the masked 1D case where the mask is
# not all zero and the input is all zero.
vector_1d_zeros = K.variable(numpy.array([[0, 0, 0, 0]]))
vector_1d_zeros_mask = K.variable(numpy.array([[1, 1, 0, 1]]))
vector_1d_zeros_normalized = K.eval(l1_normalize(vector_1d_zeros,
vector_1d_zeros_mask))
assert_array_almost_equal(vector_1d_zeros_normalized,
numpy.array([[0.3333333, 0.3333333,
0.0, 0.3333333]]))
vector_1d_zeros = K.variable(numpy.array([[0, 0, 0, 0]]))
vector_1d_zeros_mask = K.variable(numpy.array([[0, 0, 0, 0]]))
vector_1d_zeros_normalized = K.eval(l1_normalize(vector_1d_zeros,
vector_1d_zeros_mask))
assert_array_almost_equal(vector_1d_zeros_normalized,
numpy.array([[0.25, 0.25,
0.25, 0.25]]))
# Testing the general batched masked case when the input is not
# all 0's and the masks are not all 0's.
matrix = K.variable(numpy.array([[2, 1, 5, 7], [2, 2, 2, 2]]))
matrix_mask = K.variable(numpy.array([[1, 1, 0, 1], [1, 1, 1, 1]]))
matrix_normalized = K.eval(l1_normalize(matrix, matrix_mask))
assert_array_almost_equal(matrix_normalized,
numpy.array([[0.2, 0.1,
0.0, 0.7],
[0.25, 0.25,
0.25, 0.25]]))
assert_almost_equal(numpy.array([1.0, 1.0]),
numpy.sum(matrix_normalized, axis=1), decimal=6)
# Testing the batched masked case when the masks are all 0's
# and one of the input rows is all 0's.
matrix = K.variable(numpy.array([[2, 1, 5, 7], [0, 0, 0, 0]]))
matrix_mask = K.variable(numpy.array([[0, 0, 0, 0], [0, 0, 0, 0]]))
matrix_normalized = K.eval(l1_normalize(matrix, matrix_mask))
assert_array_almost_equal(matrix_normalized,
numpy.array([[0.25, 0.25,
0.25, 0.25],
[0.25, 0.25,
0.25, 0.25]]))
assert_almost_equal(numpy.array([1.0, 1.0]),
numpy.sum(matrix_normalized, axis=1), decimal=6)
def test_l1_normalize_special_cases(self):
# Testing the special masked 1D case where the mask
# all zero and the input is all zero as well.
vector_1d_zeros = K.variable(numpy.array([[0.0, 0.0, 0.0, 0.0]]))
vector_1d_zeros_mask = K.variable(numpy.array([[0, 0, 0, 0]]))
vector_1d_zeros_normalized = K.eval(l1_normalize(vector_1d_zeros,
vector_1d_zeros_mask))
assert_array_almost_equal(vector_1d_zeros_normalized,
numpy.array([[0.25, 0.25, 0.25, 0.25]]))
# Testing the special masked 1D case where the mask
# all zero and the input is not all zero.
vector_1d_zeros = K.variable(numpy.array([[2, 1, 5, 7]]))
vector_1d_zeros_mask = K.variable(numpy.array([[0, 0, 0, 0]]))
vector_1d_zeros_normalized = K.eval(l1_normalize(vector_1d_zeros,
vector_1d_zeros_mask))
assert_array_almost_equal(vector_1d_zeros_normalized,
numpy.array([[0.25, 0.25, 0.25, 0.25]]))
def test_masked_softmax_no_mask(self):
# Testing the general unmasked 1D case.
vector_1d = K.variable(numpy.array([[1.0, 2.0, 3.0]]))
vector_1d_softmaxed = K.eval(masked_softmax(vector_1d, None))
assert_array_almost_equal(vector_1d_softmaxed,
numpy.array([[0.090031, 0.244728, 0.665241]]))
assert_almost_equal(1.0, numpy.sum(vector_1d_softmaxed), decimal=6)
vector_1d = K.variable(numpy.array([[1.0, 2.0, 5.0]]))
vector_1d_softmaxed = K.eval(masked_softmax(vector_1d, None))
assert_array_almost_equal(vector_1d_softmaxed,
numpy.array([[0.017148, 0.046613, 0.93624]]))
# Testing the unmasked 1D case where the input is all 0s.
vector_zero = K.variable(numpy.array([[0.0, 0.0, 0.0]]))
vector_zero_softmaxed = K.eval(masked_softmax(vector_zero, None))
assert_array_almost_equal(vector_zero_softmaxed,
numpy.array([[0.33333334, 0.33333334, 0.33333334]]))
# Testing the general unmasked batched case.
matrix = K.variable(numpy.array([[1.0, 2.0, 5.0], [1.0, 2.0, 3.0]]))
masked_matrix_softmaxed = K.eval(masked_softmax(matrix, None))
assert_array_almost_equal(masked_matrix_softmaxed,
numpy.array([[0.01714783, 0.04661262, 0.93623955],
[0.09003057, 0.24472847, 0.66524096]]))
# Testing the unmasked batched case where one of the inputs are all 0s.
matrix = K.variable(numpy.array([[1.0, 2.0, 5.0], [0.0, 0.0, 0.0]]))
masked_matrix_softmaxed = K.eval(masked_softmax(matrix, None))
assert_array_almost_equal(masked_matrix_softmaxed,
numpy.array([[0.01714783, 0.04661262, 0.93623955],
[0.33333334, 0.33333334, 0.33333334]]))
def test_masked_softmax_masked(self):
# Testing the general masked 1D case.
vector_1d = K.variable(numpy.array([[1.0, 2.0, 5.0]]))
mask_1d = K.variable(numpy.array([[1.0, 0.0, 1.0]]))
vector_1d_softmaxed = K.eval(masked_softmax(vector_1d, mask_1d))
assert_array_almost_equal(vector_1d_softmaxed,
numpy.array([[0.01798621, 0.0, 0.98201382]]))
vector_1d = K.variable(numpy.array([[0.0, 2.0, 3.0, 4.0]]))
mask_1d = K.variable(numpy.array([[1.0, 0.0, 1.0, 1.0]]))
vector_1d_softmaxed = K.eval(masked_softmax(vector_1d, mask_1d))
assert_array_almost_equal(vector_1d_softmaxed,
numpy.array([[0.01321289, 0.0,
0.26538793, 0.72139918]]))
# Testing the masked 1D case where the input is all 0s and the mask
# is not all 0s.
vector_1d = K.variable(numpy.array([[0.0, 0.0, 0.0, 0.0]]))
mask_1d = K.variable(numpy.array([[0.0, 0.0, 0.0, 1.0]]))
vector_1d_softmaxed = K.eval(masked_softmax(vector_1d, mask_1d))
assert_array_almost_equal(vector_1d_softmaxed,
numpy.array([[0, 0, 0, 1]]))
# Testing the masked 1D case where the input is not all 0s
# and the mask is all 0s.
vector_1d = K.variable(numpy.array([[0.0, 2.0, 3.0, 4.0]]))
mask_1d = K.variable(numpy.array([[0.0, 0.0, 0.0, 0.0]]))
vector_1d_softmaxed = K.eval(masked_softmax(vector_1d, mask_1d))
assert_array_almost_equal(vector_1d_softmaxed,
numpy.array([[0.0, 0.0,
0.0, 0.0]]))
# Testing the masked 1D case where the input is all 0s and
# the mask is all 0s.
vector_1d = K.variable(numpy.array([[0.0, 0.0, 0.0, 0.0]]))
mask_1d = K.variable(numpy.array([[0.0, 0.0, 0.0, 0.0]]))
vector_1d_softmaxed = K.eval(masked_softmax(vector_1d, mask_1d))
assert_array_almost_equal(vector_1d_softmaxed,
numpy.array([[0.0, 0.0,
0.0, 0.0]]))
# Testing the general masked batched case.
matrix = K.variable(numpy.array([[1.0, 2.0, 5.0], [1.0, 2.0, 3.0]]))
mask = K.variable(numpy.array([[1.0, 0.0, 1.0], [1.0, 1.0, 1.0]]))
masked_matrix_softmaxed = K.eval(masked_softmax(matrix, mask))
assert_array_almost_equal(masked_matrix_softmaxed,
numpy.array([[0.01798621, 0.0, 0.98201382],
[0.090031, 0.244728, 0.665241]]))
# Testing the masked batch case where one of the inputs is all 0s but
# none of the masks are all 0.
matrix = K.variable(numpy.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]]))
mask = K.variable(numpy.array([[1.0, 0.0, 1.0], [1.0, 1.0, 1.0]]))
masked_matrix_softmaxed = K.eval(masked_softmax(matrix, mask))
assert_array_almost_equal(masked_matrix_softmaxed,
numpy.array([[0.5, 0.0, 0.5],
[0.090031, 0.244728, 0.665241]]))
# Testing the masked batch case where one of the inputs is all 0s and
# one of the masks is all 0.
matrix = K.variable(numpy.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]]))
mask = K.variable(numpy.array([[1.0, 0.0, 1.0], [0.0, 0.0, 0.0]]))
masked_matrix_softmaxed = K.eval(masked_softmax(matrix, mask))
assert_array_almost_equal(masked_matrix_softmaxed,
numpy.array([[0.5, 0.0, 0.5],
[0.0, 0.0, 0.0]]))
matrix = K.variable(numpy.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]]))
mask = K.variable(numpy.array([[0.0, 0.0, 0.0], [1.0, 0.0, 1.0]]))
masked_matrix_softmaxed = K.eval(masked_softmax(matrix, mask))
assert_array_almost_equal(masked_matrix_softmaxed,
numpy.array([[0.0, 0.0, 0.0],
[0.11920292, 0.0, 0.88079708]]))
| 55.28382 | 99 | 0.535985 | 2,786 | 20,842 | 3.813711 | 0.059225 | 0.032188 | 0.035012 | 0.030494 | 0.896188 | 0.874824 | 0.852706 | 0.826918 | 0.814494 | 0.771953 | 0 | 0.09133 | 0.33754 | 20,842 | 376 | 100 | 55.430851 | 0.678207 | 0.106612 | 0 | 0.690722 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.28866 | 1 | 0.027491 | false | 0 | 0.020619 | 0 | 0.051546 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a3c14f8b0001a63dd4def44daefd40fa78c9e0b9 | 155 | py | Python | reunition/apps/reunions/admin.py | reunition/reunition | a68d4555092f41f5712c6f7061aa35e816a6283e | [
"MIT"
] | null | null | null | reunition/apps/reunions/admin.py | reunition/reunition | a68d4555092f41f5712c6f7061aa35e816a6283e | [
"MIT"
] | null | null | null | reunition/apps/reunions/admin.py | reunition/reunition | a68d4555092f41f5712c6f7061aa35e816a6283e | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Reunion
class ReunionAdmin(admin.ModelAdmin):
pass
admin.site.register(Reunion, ReunionAdmin)
| 15.5 | 42 | 0.793548 | 19 | 155 | 6.473684 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135484 | 155 | 9 | 43 | 17.222222 | 0.91791 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
4316a9baa2536a249fa27dcdfa774b2da21fe791 | 91 | py | Python | trade_app/custom_exceptions/__init__.py | Plazas87/trading_stocks_platform | c34de11152798720cefa552f4b713231508e23a8 | [
"MIT"
] | null | null | null | trade_app/custom_exceptions/__init__.py | Plazas87/trading_stocks_platform | c34de11152798720cefa552f4b713231508e23a8 | [
"MIT"
] | null | null | null | trade_app/custom_exceptions/__init__.py | Plazas87/trading_stocks_platform | c34de11152798720cefa552f4b713231508e23a8 | [
"MIT"
] | 2 | 2020-10-28T14:07:43.000Z | 2021-11-03T22:49:21.000Z | from .exceptions import CreateOrderException, MaxBuyPerTradeException, CloseOrderException
| 45.5 | 90 | 0.901099 | 6 | 91 | 13.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065934 | 91 | 1 | 91 | 91 | 0.964706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4327d03512b58a005c50d469e35e09bcbdea2993 | 9,682 | py | Python | t3nsor/tensor_train.py | onucharles/tensorized-rnn | 69fc031f1efe169ee88327d10bdf5e5bc24f03cf | [
"MIT"
] | 16 | 2020-11-19T16:10:17.000Z | 2021-12-02T13:31:30.000Z | t3nsor/tensor_train.py | MohamedAbdelsalam9/TT-Transformer | aeef0f0199207121e3e5621bfeb67b15ffbb3d1d | [
"MIT"
] | 2 | 2021-02-26T08:45:16.000Z | 2021-08-11T13:47:43.000Z | t3nsor/tensor_train.py | kritostudent/tensorized-rnn | 69fc031f1efe169ee88327d10bdf5e5bc24f03cf | [
"MIT"
] | 7 | 2021-01-21T11:24:31.000Z | 2022-03-21T08:59:08.000Z | import torch
import numpy as np
import torch.nn as nn
class TensorTrain(object):
def __init__(self, tt_cores, shape=None, tt_ranks=None, convert_to_tensors=True):
#tt_cores = list(tt_cores)
if convert_to_tensors:
for i in range(len(tt_cores)):
tt_cores[i] = torch.Tensor(tt_cores[i])
self._tt_cores = tt_cores
if len(self._tt_cores[0].shape) == 4:
self._is_tt_matrix = True
else:
self._is_tt_matrix = False
if self._is_tt_matrix:
self._raw_shape = [[tt_core.shape[1] for tt_core in self._tt_cores],
[tt_core.shape[2] for tt_core in self._tt_cores]]
self._shape = [int(np.prod(self._raw_shape[0])), int(np.prod(self._raw_shape[1]))]
self._ndims = len(self._raw_shape[0])
else:
self._raw_shape = [tt_core.shape[1] for tt_core in self._tt_cores]
self._shape = [tt_core.shape[1] for tt_core in self._tt_cores]
self._ndims = len(self._raw_shape)
self._ranks = [tt_core.shape[0] for tt_core in self._tt_cores] + [1, ]
self._is_parameter = False
self._parameter = None
self._dof = np.sum([np.prod(list(tt_core.shape)) for tt_core in self._tt_cores])
self._total = np.prod(self._shape)
@property
def tt_cores(self):
"""A list of TT-cores.
Returns:
A list of 3d (rank, mode, rank) or 4d (rank, row mode, column mode, rank) tensors.
"""
return self._tt_cores
@property
def raw_shape(self):
return self._raw_shape
@property
def is_tt_matrix(self):
return self._is_tt_matrix
@property
def shape(self):
return self._shape
@property
def ranks(self):
return self._ranks
@property
def ndims(self):
return self._ndims
@property
def is_parameter(self):
return self._is_parameter
@property
def parameter(self):
if self.is_parameter:
return self._parameter
else:
raise ValueError('Not a parameter, run .to_parameter() first')
@property
def dof(self):
return self._dof
@property
def total(self):
return self._total
def to(self, device):
new_cores = []
for core in self.tt_cores:
new_cores.append(core.to(device))
return TensorTrain(new_cores, convert_to_tensors=False)
def detach(self):
new_cores = []
for core in self.tt_cores:
new_cores.append(core.detach())
return TensorTrain(new_cores, convert_to_tensors=False)
def requires_grad_(self, requires_grad=True):
new_cores = []
for core in self.tt_cores:
new_cores.append(core.requires_grad_(requires_grad))
return TensorTrain(new_cores, convert_to_tensors=False)
def to_parameter(self):
new_cores = []
for core in self.tt_cores:
core = nn.Parameter(core)
core.is_tt = True
new_cores.append(core)
tt_p = TensorTrain(new_cores, convert_to_tensors=False)
tt_p._parameter = nn.ParameterList(tt_p.tt_cores)
tt_p._is_parameter = True
return tt_p
def full(self):
num_dims = self.ndims
ranks = self.ranks
shape = self.shape
raw_shape = self.raw_shape
res = self.tt_cores[0]
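# contract the cores left-to-right: at each step, flatten the partial
# result against the shared TT-rank ranks[i] and matrix-multiply with the
# next core, keeping `res` as a (modes-so-far, rank) matrix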
for i in range(1, num_dims):
res = res.view(-1, ranks[i])
curr_core = self.tt_cores[i].view(ranks[i], -1)
res = torch.matmul(res, curr_core)
if self.is_tt_matrix:
intermediate_shape = []
for i in range(num_dims):
intermediate_shape.append(raw_shape[0][i])
intermediate_shape.append(raw_shape[1][i])
res = res.view(*intermediate_shape)
transpose = []
for i in range(0, 2 * num_dims, 2):
transpose.append(i)
for i in range(1, 2 * num_dims, 2):
transpose.append(i)
res = res.permute(*transpose)
if self.is_tt_matrix:
res = res.contiguous().view(*shape)
else:
res = res.view(*shape)
return res
def __str__(self):
"""A string describing the TensorTrain object, its TT-rank, and shape."""
shape = self.shape
tt_ranks = self.ranks
device = self.tt_cores[0].device
compression_rate = self.total / self.dof
if self.is_tt_matrix:
raw_shape = self.raw_shape
return "A TT-Matrix of size %d x %d, underlying tensor" \
"shape: %s x %s, TT-ranks: %s " \
"\n on device '%s' with compression rate %.2f" % (shape[0], shape[1],
raw_shape[0], raw_shape[1],
tt_ranks, device, compression_rate)
else:
return "A Tensor Train of shape %s, TT-ranks: %s" \
"\n on device '%s' with compression rate %.2f" % (shape, tt_ranks, device, compression_rate)
class TensorTrainBatch():
def __init__(self, tt_cores, shape=None, tt_ranks=None, convert_to_tensors=True):
#tt_cores = list(tt_cores)
if convert_to_tensors:
for i in range(len(tt_cores)):
tt_cores[i] = torch.Tensor(tt_cores[i])
self._tt_cores = tt_cores
self._batch_size = self._tt_cores[0].shape[0]
if len(self._tt_cores[0].shape) == 5:
self._is_tt_matrix = True
else:
self._is_tt_matrix = False
if self._is_tt_matrix:
self._raw_shape = [[tt_core.shape[2] for tt_core in self._tt_cores],
[tt_core.shape[3] for tt_core in self._tt_cores]]
self._shape = [self._batch_size, int(
np.prod(self._raw_shape[0])), int(np.prod(self._raw_shape[1]))]
self._ndims = len(self._raw_shape[0])
else:
self._raw_shape = [tt_core.shape[2] for tt_core in self._tt_cores]
self._shape = [self._batch_size, ] + [tt_core.shape[2] for tt_core in self._tt_cores]
self._ndims = len(self._raw_shape)
self._ranks = [tt_core.shape[1] for tt_core in self._tt_cores] + [1, ]
@property
def tt_cores(self):
"""A list of TT-cores.
Returns:
A list of 4d or 5d tensors.
"""
return self._tt_cores
@property
def raw_shape(self):
return self._raw_shape
@property
def is_tt_matrix(self):
return self._is_tt_matrix
@property
def shape(self):
return self._shape
@property
def ranks(self):
return self._ranks
@property
def ndims(self):
return self._ndims
@property
def batch_size(self):
return self._batch_size
def to(self, device):
new_cores = []
for core in self.tt_cores:
new_cores.append(core.to(device))
return TensorTrainBatch(new_cores, convert_to_tensors=False)
def detach(self):
new_cores = []
for core in self.tt_cores:
new_cores.append(core.detach())
return TensorTrainBatch(new_cores, convert_to_tensors=False)
def requires_grad_(self, requires_grad=True):
new_cores = []
for core in self.tt_cores:
new_cores.append(core.requires_grad_(requires_grad))
return TensorTrainBatch(new_cores, convert_to_tensors=False)
def full(self):
num_dims = self.ndims
ranks = self.ranks
shape = self.shape
raw_shape = self.raw_shape
res = self.tt_cores[0]
batch_size = self.batch_size
for i in range(1, num_dims):
res = res.view(batch_size, -1, ranks[i])
curr_core = self.tt_cores[i].view(batch_size, ranks[i], -1)
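# batched matrix product: for each batch element o, contract the shared
# TT-rank index b between the running result and the next core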
res = torch.einsum('oqb,obw->oqw', (res, curr_core))
if self.is_tt_matrix:
intermediate_shape = [batch_size]
for i in range(num_dims):
intermediate_shape.append(raw_shape[0][i])
intermediate_shape.append(raw_shape[1][i])
res = res.view(*intermediate_shape)
transpose = [0]
for i in range(0, 2 * num_dims, 2):
transpose.append(i + 1)
for i in range(1, 2 * num_dims, 2):
transpose.append(i + 1)
res = res.permute(*transpose)
if self.is_tt_matrix:
res = res.contiguous().view(*shape)
else:
res = res.view(*shape)
return res
def __str__(self):
"""A string describing the TensorTrainBatch, its TT-rank and shape."""
shape = self.shape
tt_ranks = self.ranks
batch_size_str = str(self.batch_size)
device = self.tt_cores[0].device
if self.is_tt_matrix:
raw_shape = self.raw_shape
type_str = 'TT-matrices'
return "A %s element batch of %s of size %d x %d, underlying tensor " \
"shape: %s x %s, TT-ranks: %s" \
"on device '%s' " % (batch_size_str, type_str,
shape[1], shape[2],
raw_shape[0], raw_shape[1],
tt_ranks, device)
else:
type_str = 'Tensor Trains'
return "A %s element batch of %s of shape %s, TT-ranks: %s \n on device '%s'" % \
(batch_size_str, type_str, shape[1:], tt_ranks, device)
| 32.381271 | 111 | 0.562177 | 1,287 | 9,682 | 3.962704 | 0.08547 | 0.068627 | 0.071176 | 0.042353 | 0.836667 | 0.816275 | 0.803333 | 0.786863 | 0.767647 | 0.721373 | 0 | 0.010572 | 0.335674 | 9,682 | 298 | 112 | 32.489933 | 0.782338 | 0.031915 | 0 | 0.683983 | 0 | 0.004329 | 0.048597 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12987 | false | 0 | 0.012987 | 0.060606 | 0.281385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4a56930a72377fe6e9125897fade214c30e3eea2 | 19,796 | py | Python | ivy/functional/ivy/meta.py | Bobbyorr007/ivy | 2f4441c7eee681055d6c22e05d922a66e40bc12c | [
"Apache-2.0"
] | 1 | 2022-03-10T23:51:18.000Z | 2022-03-10T23:51:18.000Z | ivy/functional/ivy/meta.py | thecoder12/ivy | 84c5fb82ec43c5c7d0154d5110973805e524831c | [
"Apache-2.0"
] | null | null | null | ivy/functional/ivy/meta.py | thecoder12/ivy | 84c5fb82ec43c5c7d0154d5110973805e524831c | [
"Apache-2.0"
] | null | null | null | # global
import ivy
from ivy.functional.ivy.gradients import gradient_descent_update
# Extra #
# ------#
# Private #
def _compute_cost_and_update_grads(cost_fn, order, batch, variables, outer_v, keep_outer_v,
average_across_steps_or_final, all_grads, unique_outer, batched, num_tasks):
if order == 1:
cost, inner_grads = ivy.execute_with_gradients(
lambda v: cost_fn(batch, v=variables.set_at_key_chains(v) if unique_outer else v),
variables.at_key_chains(outer_v, ignore_none=True) if keep_outer_v else
variables.prune_key_chains(outer_v, ignore_none=True), retain_grads=False)
if batched:
inner_grads = inner_grads * num_tasks
if average_across_steps_or_final:
all_grads.append(inner_grads)
else:
cost = cost_fn(batch, v=variables)
return cost
def _train_task(inner_batch, outer_batch, inner_cost_fn, outer_cost_fn, variables, inner_grad_steps,
inner_learning_rate, inner_optimization_step, order, average_across_steps, inner_v, keep_inner_v,
outer_v, keep_outer_v, batched, num_tasks, stop_gradients):
# init
total_cost = 0
all_grads = list()
# inner and outer
unique_inner = inner_v is not None
unique_outer = outer_v is not None
# iterate through inner loop training steps
for i in range(inner_grad_steps):
# compute the inner gradient to update the inner variables
cost, inner_update_grads = ivy.execute_with_gradients(
lambda v: inner_cost_fn(inner_batch, v=variables.set_at_key_chains(v) if unique_inner else v),
variables.at_key_chains(inner_v, ignore_none=True) if keep_inner_v else
variables.prune_key_chains(inner_v, ignore_none=True), retain_grads=order > 1)
if batched:
inner_update_grads = inner_update_grads * num_tasks
# compute the cost to be optimized, and update all_grads if first-order method
if outer_cost_fn is None and not unique_inner and not unique_outer:
all_grads.append(inner_update_grads)
else:
cost = _compute_cost_and_update_grads(
inner_cost_fn if outer_cost_fn is None else outer_cost_fn, order, outer_batch, variables, outer_v,
keep_outer_v, average_across_steps, all_grads, unique_outer, batched, num_tasks)
# update cost and update parameters
total_cost = total_cost + cost
if unique_inner:
variables = variables.set_at_key_chains(
inner_optimization_step(variables.at_key_chains(inner_v) if keep_inner_v else
variables.prune_key_chains(inner_v), inner_update_grads,
inner_learning_rate, inplace=False, stop_gradients=stop_gradients))
else:
variables = inner_optimization_step(variables, inner_update_grads, inner_learning_rate, inplace=False,
stop_gradients=stop_gradients)
# once training is finished, compute the final cost, and update all_grads if first-order method
final_cost = _compute_cost_and_update_grads(
inner_cost_fn if outer_cost_fn is None else outer_cost_fn, order, outer_batch, variables, outer_v,
keep_outer_v, True, all_grads, unique_outer, batched, num_tasks)
# update variables
if stop_gradients:
variables = variables.stop_gradients()
if not batched:
variables = variables.expand_dims(0)
# average the cost or gradients across all timesteps if this option is chosen
if average_across_steps:
total_cost = total_cost + final_cost
if order == 1:
all_grads = sum(all_grads) / max(len(all_grads), 1)
return total_cost / (inner_grad_steps + 1), variables, all_grads
# else return only the final values
if order == 1:
all_grads = all_grads[-1]
return final_cost, variables, all_grads
def _train_tasks_batched(batch, inner_batch_fn, outer_batch_fn, inner_cost_fn, outer_cost_fn, variables,
inner_grad_steps, inner_learning_rate, inner_optimization_step, order, average_across_steps,
inner_v, keep_inner_v, outer_v, keep_outer_v, return_inner_v, num_tasks, stop_gradients):
inner_batch = batch
outer_batch = batch
if inner_batch_fn is not None:
inner_batch = inner_batch_fn(inner_batch)
if outer_batch_fn is not None:
outer_batch = outer_batch_fn(outer_batch)
cost, updated_ivs, grads = _train_task(inner_batch, outer_batch, inner_cost_fn, outer_cost_fn, variables,
inner_grad_steps, inner_learning_rate, inner_optimization_step, order,
average_across_steps, inner_v, keep_inner_v, outer_v, keep_outer_v, True,
num_tasks, stop_gradients)
grads = grads.reduce_mean(0) if isinstance(grads, ivy.Container) else grads
if order == 1:
if return_inner_v in ['all', True]:
return cost, grads, updated_ivs
elif return_inner_v == 'first':
return cost, grads, updated_ivs[0:1]
return cost, grads
if return_inner_v in ['all', True]:
return cost, updated_ivs
elif return_inner_v == 'first':
return cost, updated_ivs[0:1]
return cost
def _train_tasks_with_for_loop(batch, inner_sub_batch_fn, outer_sub_batch_fn, inner_cost_fn, outer_cost_fn, variables,
inner_grad_steps, inner_learning_rate, inner_optimization_step, order,
average_across_steps, inner_v, keep_inner_v, outer_v, keep_outer_v, return_inner_v,
num_tasks, stop_gradients):
total_cost = 0
updated_ivs_to_return = list()
all_grads = list()
if isinstance(inner_v, (list, tuple)) and isinstance(inner_v[0], (list, tuple, dict, type(None))):
inner_v_seq = True
else:
inner_v_seq = False
if isinstance(outer_v, (list, tuple)) and isinstance(outer_v[0], (list, tuple, dict, type(None))):
outer_v_seq = True
else:
outer_v_seq = False
for i, sub_batch in enumerate(batch.unstack(0, True, num_tasks)):
if inner_sub_batch_fn is not None:
inner_sub_batch = inner_sub_batch_fn(sub_batch)
else:
inner_sub_batch = sub_batch
if outer_sub_batch_fn is not None:
outer_sub_batch = outer_sub_batch_fn(sub_batch)
else:
outer_sub_batch = sub_batch
iv = inner_v[i] if inner_v_seq else inner_v
ov = outer_v[i] if outer_v_seq else outer_v
cost, updated_iv, grads = _train_task(inner_sub_batch, outer_sub_batch, inner_cost_fn, outer_cost_fn, variables,
inner_grad_steps, inner_learning_rate, inner_optimization_step, order,
average_across_steps, iv, keep_inner_v, ov, keep_outer_v, False,
num_tasks, stop_gradients)
if (return_inner_v == 'first' and i == 0) or return_inner_v in ['all', True]:
updated_ivs_to_return.append(updated_iv)
total_cost = total_cost + cost
all_grads.append(grads)
if order == 1:
if return_inner_v:
return total_cost / num_tasks, sum(all_grads) / num_tasks, ivy.Container.concat(updated_ivs_to_return, 0)
return total_cost / num_tasks, sum(all_grads) / num_tasks
if return_inner_v:
return total_cost / num_tasks, ivy.Container.concat(updated_ivs_to_return, 0)
return total_cost / num_tasks
def _train_tasks(batch, inner_batch_fn, outer_batch_fn, inner_cost_fn, outer_cost_fn, variables, inner_grad_steps,
inner_learning_rate, inner_optimization_step, order, average_across_steps, batched, inner_v,
keep_inner_v, outer_v, keep_outer_v, return_inner_v, num_tasks, stop_gradients):
if batched:
return _train_tasks_batched(
batch, inner_batch_fn, outer_batch_fn, inner_cost_fn, outer_cost_fn, variables, inner_grad_steps,
inner_learning_rate, inner_optimization_step, order, average_across_steps, inner_v, keep_inner_v, outer_v,
keep_outer_v, return_inner_v, num_tasks, stop_gradients)
return _train_tasks_with_for_loop(
batch, inner_batch_fn, outer_batch_fn, inner_cost_fn, outer_cost_fn, variables, inner_grad_steps,
inner_learning_rate, inner_optimization_step, order, average_across_steps, inner_v, keep_inner_v, outer_v,
keep_outer_v, return_inner_v, num_tasks, stop_gradients)
# Public #
# First Order
def fomaml_step(batch, inner_cost_fn, outer_cost_fn, variables, inner_grad_steps, inner_learning_rate,
inner_optimization_step=gradient_descent_update, inner_batch_fn=None, outer_batch_fn=None,
average_across_steps=False, batched=True, inner_v=None, keep_inner_v=True, outer_v=None,
keep_outer_v=True, return_inner_v=False, num_tasks=None, stop_gradients=True):
"""
Perform step of first order MAML.
:param batch: The input batch
:type batch: ivy.Container
:param inner_cost_fn: callable for the inner loop cost function, receiving task-specific sub-batch,
inner vars and outer vars
:type inner_cost_fn: callable
:param outer_cost_fn: callable for the outer loop cost function, receiving task-specific sub-batch,
inner vars and outer vars. If None, the cost from the inner loop will also be
optimized in the outer loop.
:type outer_cost_fn: callable, optional
:param variables: Variables to be optimized during the meta step
:type variables: ivy.Container
:param inner_grad_steps: Number of gradient steps to perform during the inner loop.
:type inner_grad_steps: int
:param inner_learning_rate: The learning rate of the inner loop.
:type inner_learning_rate: float
:param inner_optimization_step: The function used for the inner loop optimization.
Default is ivy.gradient_descent_update.
:type inner_optimization_step: callable, optional
:param inner_batch_fn: Function to apply to the task sub-batch, before passing to the inner_cost_fn.
Default is None.
:type inner_batch_fn: callable, optional
:param outer_batch_fn: Function to apply to the task sub-batch, before passing to the outer_cost_fn.
Default is None.
:type outer_batch_fn: callable, optional
:param average_across_steps: Whether to average the inner loop steps for the outer loop update. Default is False.
:type average_across_steps: bool, optional
:param batched: Whether to batch along the time dimension, and run the meta steps in batch. Default is True.
:type batched: bool, optional
:param inner_v: Nested variable keys to be optimized during the inner loop, with same keys and boolean values.
:type inner_v: dict str or list, optional
:param keep_inner_v: If True, the key chains in inner_v will be kept, otherwise they will be removed.
Default is True.
:type keep_inner_v: bool, optional
:param outer_v: Nested variable keys to be optimized during the outer loop, with same keys and boolean values.
:type outer_v: dict str or list, optional
:param keep_outer_v: If True, the key chains in outer_v will be kept, otherwise they will be removed.
Default is True.
:type keep_outer_v: bool, optional
:param return_inner_v: Either 'first', 'all', or False. 'first' means the variables for the first task inner loop
will also be returned. Variables for all tasks will be returned with 'all'. Default is False.
:type return_inner_v: str, optional
:param num_tasks: Number of unique tasks to inner-loop optimize for the meta step. Determined from batch by default.
:type num_tasks: int, optional
:param stop_gradients: Whether to stop the gradients of the cost. Default is True.
:type stop_gradients: bool, optional
:return: The cost and the gradients with respect to the outer loop variables.
"""
if num_tasks is None:
num_tasks = batch.shape[0]
rets = _train_tasks(
batch, inner_batch_fn, outer_batch_fn, inner_cost_fn, outer_cost_fn, variables, inner_grad_steps,
inner_learning_rate, inner_optimization_step, 1, average_across_steps, batched, inner_v, keep_inner_v, outer_v,
keep_outer_v, return_inner_v, num_tasks, stop_gradients)
cost = rets[0]
if stop_gradients:
cost = ivy.stop_gradient(cost, preserve_type=False)
grads = rets[1]
if return_inner_v:
return cost, grads, rets[2]
return cost, grads
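# Illustrative sketch (not part of the original file): one first-order MAML
# step over a toy quadratic cost. The container keys, array sizes and cost
# function are hypothetical, and the example assumes ivy's usual Container /
# array broadcasting semantics for the arithmetic involved.
def _fomaml_example():
    variables = ivy.Container({'w': ivy.variable(ivy.array([1.0]))})
    batch = ivy.Container({'x': ivy.array([[1.0], [2.0]])})  # two tasks
    inner_cost = lambda b, v: ivy.sum((b['x'] * v['w']) ** 2)
    cost, grads = fomaml_step(batch, inner_cost, None, variables,
                              inner_grad_steps=1, inner_learning_rate=0.01)
    return cost, grads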
def reptile_step(batch, cost_fn, variables, inner_grad_steps, inner_learning_rate,
inner_optimization_step=gradient_descent_update, batched=True, return_inner_v=False, num_tasks=None,
stop_gradients=True):
"""
Perform step of Reptile.
:param batch: The input batch
:type batch: ivy.Container
:param cost_fn: callable for the cost function, receiving the task-specific sub-batch and variables
:type cost_fn: callable
:param variables: Variables to be optimized
:type variables: ivy.Container
:param inner_grad_steps: Number of gradient steps to perform during the inner loop.
:type inner_grad_steps: int
:param inner_learning_rate: The learning rate of the inner loop.
:type inner_learning_rate: float
:param inner_optimization_step: The function used for the inner loop optimization.
Default is ivy.gradient_descent_update.
:type inner_optimization_step: callable, optional
:param batched: Whether to batch along the time dimension, and run the meta steps in batch. Default is True.
:type batched: bool, optional
:param return_inner_v: Either 'first', 'all', or False. 'first' means the variables for the first task inner loop
will also be returned. Variables for all tasks will be returned with 'all'. Default is False.
:type return_inner_v: str, optional
:param num_tasks: Number of unique tasks to inner-loop optimize for the meta step. Determined from batch by default.
:type num_tasks: int, optional
:param stop_gradients: Whether to stop the gradients of the cost. Default is True.
:type stop_gradients: bool, optional
:return: The cost and the gradients with respect to the outer loop variables.
"""
if num_tasks is None:
num_tasks = batch.shape[0]
# noinspection PyTypeChecker
rets = _train_tasks(
batch, None, None, cost_fn, None, variables, inner_grad_steps, inner_learning_rate, inner_optimization_step,
1, True, batched, None, True, None, True, return_inner_v, num_tasks, stop_gradients)
cost = rets[0]
if stop_gradients:
cost = ivy.stop_gradient(cost, preserve_type=False)
grads = rets[1] / inner_learning_rate
if return_inner_v:
return cost, grads, rets[2]
return cost, grads
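# Illustrative sketch (hypothetical values, as in the example above): a
# Reptile meta-step needs only a single cost function; the returned
# gradients are the variable drift rescaled by the inner learning rate.
def _reptile_example():
    variables = ivy.Container({'w': ivy.variable(ivy.array([1.0]))})
    batch = ivy.Container({'x': ivy.array([[1.0], [2.0]])})
    cost_fn = lambda b, v: ivy.sum((b['x'] * v['w']) ** 2)
    return reptile_step(batch, cost_fn, variables,
                        inner_grad_steps=2, inner_learning_rate=0.01)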
# Second Order
def maml_step(batch, inner_cost_fn, outer_cost_fn, variables, inner_grad_steps, inner_learning_rate,
inner_optimization_step=gradient_descent_update, inner_batch_fn=None, outer_batch_fn=None,
average_across_steps=False, batched=True, inner_v=None, keep_inner_v=True, outer_v=None,
keep_outer_v=True, return_inner_v=False, num_tasks=None, stop_gradients=True):
"""
Perform step of vanilla second order MAML.
:param batch: The input batch
:type batch: ivy.Container
:param inner_cost_fn: callable for the inner loop cost function, receiving sub-batch, inner vars and outer vars
:type inner_cost_fn: callable
:param outer_cost_fn: callable for the outer loop cost function, receiving task-specific sub-batch,
inner vars and outer vars. If None, the cost from the inner loop will also be
optimized in the outer loop.
:type outer_cost_fn: callable, optional
:param variables: Variables to be optimized during the meta step
:type variables: ivy.Container
:param inner_grad_steps: Number of gradient steps to perform during the inner loop.
:type inner_grad_steps: int
:param inner_learning_rate: The learning rate of the inner loop.
:type inner_learning_rate: float
:param inner_optimization_step: The function used for the inner loop optimization.
Default is ivy.gradient_descent_update.
:type inner_optimization_step: callable, optional
:param inner_batch_fn: Function to apply to the task sub-batch, before passing to the inner_cost_fn.
Default is None.
:type inner_batch_fn: callable, optional
:param outer_batch_fn: Function to apply to the task sub-batch, before passing to the outer_cost_fn.
Default is None.
:type outer_batch_fn: callable, optional
:param average_across_steps: Whether to average the inner loop steps for the outer loop update. Default is False.
:type average_across_steps: bool, optional
:param batched: Whether to batch along the time dimension, and run the meta steps in batch. Default is True.
:type batched: bool, optional
:param inner_v: Nested variable keys to be optimized during the inner loop, with same keys and boolean values.
:type inner_v: dict str or list, optional
:param keep_inner_v: If True, the key chains in inner_v will be kept, otherwise they will be removed.
Default is True.
:type keep_inner_v: bool, optional
:param outer_v: Nested variable keys to be optimized during the outer loop, with same keys and boolean values.
:type outer_v: dict str or list, optional
:param keep_outer_v: If True, the key chains in outer_v will be kept, otherwise they will be removed.
Default is True.
:type keep_outer_v: bool, optional
:param return_inner_v: Either 'first', 'all', or False. 'first' means the variables for the first task inner loop
will also be returned. Variables for all tasks will be returned with 'all'. Default is False.
:type return_inner_v: str, optional
:param num_tasks: Number of unique tasks to inner-loop optimize for the meta step. Determined from batch by default.
:type num_tasks: int, optional
:param stop_gradients: Whether to stop the gradients of the cost. Default is True.
:type stop_gradients: bool, optional
:return: The cost and the gradients with respect to the outer loop variables.
"""
if num_tasks is None:
num_tasks = batch.shape[0]
unique_outer = outer_v is not None
cost, grads, *rets = ivy.execute_with_gradients(lambda v: _train_tasks(
batch, inner_batch_fn, outer_batch_fn, inner_cost_fn, outer_cost_fn,
variables.set_at_key_chains(v) if unique_outer else v, inner_grad_steps, inner_learning_rate,
inner_optimization_step, 2, average_across_steps, batched, inner_v, keep_inner_v, outer_v, keep_outer_v,
return_inner_v, num_tasks, False),
variables.at_key_chains(outer_v, ignore_none=True)
if keep_outer_v else variables.prune_key_chains(outer_v, ignore_none=True))
if stop_gradients:
cost = ivy.stop_gradient(cost, preserve_type=False)
# noinspection PyRedundantParentheses
return (cost, grads.sum(0), *rets)
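# Illustrative sketch (hypothetical, mirroring the first-order example above):
# the second-order step has the same call signature, but backpropagates
# through the whole inner loop, so the meta-gradients account for how the
# inner updates themselves depend on the variables.
def _maml_example():
    variables = ivy.Container({'w': ivy.variable(ivy.array([1.0]))})
    batch = ivy.Container({'x': ivy.array([[1.0], [2.0]])})
    inner_cost = lambda b, v: ivy.sum((b['x'] * v['w']) ** 2)
    return maml_step(batch, inner_cost, None, variables,
                     inner_grad_steps=1, inner_learning_rate=0.01)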
| 54.235616 | 120 | 0.692918 | 2,829 | 19,796 | 4.552846 | 0.061506 | 0.030745 | 0.025155 | 0.020652 | 0.83486 | 0.811491 | 0.783851 | 0.757143 | 0.743634 | 0.724767 | 0 | 0.002411 | 0.245858 | 19,796 | 364 | 121 | 54.384615 | 0.860339 | 0.405587 | 0 | 0.406417 | 0 | 0 | 0.002135 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042781 | false | 0 | 0.010695 | 0 | 0.160428 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4aabac7b053e0355257ed6780bafc40b771be868 | 79 | py | Python | data/typing/numpy.linalg._umath_linalg.py | vfdev-5/python-record-api | 006faf0bba9cd4cb55fbacc13d2bbda365f5bf0b | [
"MIT"
] | 67 | 2020-08-17T11:53:26.000Z | 2021-11-08T20:16:06.000Z | data/typing/numpy.linalg._umath_linalg.py | vfdev-5/python-record-api | 006faf0bba9cd4cb55fbacc13d2bbda365f5bf0b | [
"MIT"
] | 36 | 2020-08-17T11:09:51.000Z | 2021-12-15T18:09:47.000Z | data/typing/numpy.linalg._umath_linalg.py | pydata-apis/python-api-record | 684cffbbb6dc6e81f9de4e02619c8b0ebc557b2b | [
"MIT"
] | 7 | 2020-08-19T05:06:47.000Z | 2020-11-04T05:10:38.000Z | from typing import *
# usage.dask: 1
eig: object
# usage.dask: 1
inv: object
| 9.875 | 20 | 0.683544 | 13 | 79 | 4.153846 | 0.692308 | 0.333333 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031746 | 0.202532 | 79 | 7 | 21 | 11.285714 | 0.825397 | 0.341772 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
4ab4aaa472f76a2f49d1a3b7d03db652abf66f2b | 96 | py | Python | paypalcheckoutsdk/core/util.py | seba-spoon/Checkout-Python-SDK | 24a1f7c73fd6e270d21547fb65037e21a5d4f22b | [
"BSD-Source-Code"
] | 165 | 2019-03-06T14:32:47.000Z | 2022-03-26T17:49:57.000Z | paypalcheckoutsdk/core/util.py | seba-spoon/Checkout-Python-SDK | 24a1f7c73fd6e270d21547fb65037e21a5d4f22b | [
"BSD-Source-Code"
] | 24 | 2019-03-06T00:21:50.000Z | 2021-12-22T11:40:05.000Z | paypalcheckoutsdk/core/util.py | seba-spoon/Checkout-Python-SDK | 24a1f7c73fd6e270d21547fb65037e21a5d4f22b | [
"BSD-Source-Code"
] | 87 | 2019-02-15T03:59:20.000Z | 2022-03-27T13:26:29.000Z |
def older_than_27():
import sys
return True if sys.version_info[:2] < (2, 7) else False | 24 | 59 | 0.666667 | 17 | 96 | 3.588235 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.21875 | 96 | 4 | 59 | 24 | 0.746667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4aba032c66a49f3ee559b91adeba1321b370cfc7 | 30 | py | Python | src/utils/__init__.py | imatge-upc/AI4Agriculture-grape-detection | f8fce3115dfd50e80193f82a4cbba97641ea828c | [
"MIT"
] | 3 | 2021-06-20T22:15:59.000Z | 2021-11-18T10:12:47.000Z | src/utils/__init__.py | paumarquez/AI4Agriculture-grape-detection | 4adc277320f527f1bbefe4d504e3223928201f69 | [
"MIT"
] | 4 | 2021-07-12T09:01:08.000Z | 2022-03-12T00:59:15.000Z | src/utils/__init__.py | paumarquez/AI4Agriculture-grape-detection | 4adc277320f527f1bbefe4d504e3223928201f69 | [
"MIT"
] | 1 | 2021-07-02T09:27:13.000Z | 2021-07-02T09:27:13.000Z | from .transforms import resize | 30 | 30 | 0.866667 | 4 | 30 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4ac8fe2be4938c25d967a8a8dc23e40bb28a3b42 | 12,483 | py | Python | mapping.py | arthurazs/uff-siscomp | 50ea5cc349fef5af5f95599978f258a411024ad6 | [
"MIT"
] | null | null | null | mapping.py | arthurazs/uff-siscomp | 50ea5cc349fef5af5f95599978f258a411024ad6 | [
"MIT"
] | null | null | null | mapping.py | arthurazs/uff-siscomp | 50ea5cc349fef5af5f95599978f258a411024ad6 | [
"MIT"
] | null | null | null | from random import randint
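# replacement policies (used by the associative and set-associative mappings)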
RANDOM = 1
FIFO = 2
LRU = 3
LFU = 4
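# cache mapping strategies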
DIRECT = 1
ASSOCIATIVE = 2
SET_ASSOCIATIVE = 3
class Cache:
def __init__(self, mapping, cache_size, policy=RANDOM, frame_size=2):
if mapping == DIRECT:
print('Mapping => Direct')
self._cache = Direct(cache_size)
self.alloc = self._cache.alloc
elif mapping == ASSOCIATIVE:
print('Mapping => Associative')
self._cache = Associative(cache_size, policy)
self.alloc = self._cache.alloc
elif mapping == SET_ASSOCIATIVE:
print('Mapping => Set Associative')
self._cache = SetAssociative(cache_size, frame_size, policy)
self.alloc = self._cache.alloc
class Direct:
def __init__(self, cache_size):
self._cache_size = cache_size
self._cache = [None] * self._cache_size
self._hit = 0
self._miss = 0
def alloc(self, tag):
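# direct mapping: each tag maps to exactly one cache line (tag mod cache size)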
position = tag % self._cache_size
tag_output = f'\nTag:\t\t{tag}'
output = f'Old cache:\t{self._cache}\n'
result = 'MISS'
if tag == self._cache[position]:
self._hit += 1
result = 'HIT'
else:
self._miss += 1
self._cache[position] = tag
tag_output += f' ({result})'
output += f'New cache:\t{self._cache}\n'
output += f'Hit/Miss:\t{self._hit}/{self._miss}'
percentage = self._hit / (self._hit + self._miss) * 100
output += f'\nHit rate:\t{percentage:.2f}%'
print(f'{tag_output}\n{output}')
class Associative:
def __init__(self, cache_size, policy=RANDOM):
self._cache_size = cache_size
self._cache = [None] * self._cache_size
self._hit = 0
self._miss = 0
self._counter = 0
if policy == RANDOM:
print('Policy => Random')
self.alloc = self._random_alloc
elif policy == FIFO:
print('Policy => FIFO')
self.alloc = self._fifo_alloc
elif policy == LRU:
print('Policy => LRU')
self.alloc = self._lru_alloc
elif policy == LFU:
print('Policy => LFU')
self._cache = {}
self.alloc = self._lfu_alloc
def _random_alloc(self, tag):
tag_output = f'\nTag:\t\t{tag}'
output = ''
result = 'MISS'
output += f'Old cache:\t{self._cache}\n'
if tag in self._cache:
self._hit += 1
result = 'HIT'
elif self._counter < self._cache_size: # NOT FULL
self._cache[self._counter] = tag
self._counter += 1
self._miss += 1
else:
self._miss += 1
position = randint(0, self._cache_size - 1)
self._cache[position] = tag
tag_output += f' ({result})'
output += f'New cache:\t{self._cache}\n'
output += f'Hit/Miss:\t{self._hit}/{self._miss}'
percentage = self._hit / (self._hit + self._miss) * 100
output += f'\nHit rate:\t{percentage:.2f}%'
print(f'{tag_output}\n{output}')
def _fifo_alloc(self, tag):
tag_output = f'\nTag:\t\t{tag}'
output = ''
result = 'MISS'
output += f'Old cache:\t{self._cache}\n'
if tag in self._cache:
self._hit += 1
result = 'HIT'
elif self._counter < self._cache_size: # NOT FULL
self._cache[self._counter] = tag
self._counter += 1
self._miss += 1
else:
self._miss += 1
self._cache.pop(0)
self._cache.append(tag)
tag_output += f' ({result})'
output += f'New cache:\t{self._cache}\n'
output += f'Hit/Miss:\t{self._hit}/{self._miss}'
percentage = self._hit / (self._hit + self._miss) * 100
output += f'\nHit rate:\t{percentage:.2f}%'
print(f'{tag_output}\n{output}')
def _lru_alloc(self, tag):
tag_output = f'\nTag:\t\t{tag}'
output = ''
result = 'MISS'
output += f'Old cache:\t{self._cache}\n'
if tag in self._cache:
self._hit += 1
result = 'HIT'
self._cache.remove(tag)
self._cache.append(tag)
else:
self._cache.pop(0)
self._cache.append(tag)
self._miss += 1
tag_output += f' ({result})'
output += f'New cache:\t{self._cache}\n'
output += f'Hit/Miss:\t{self._hit}/{self._miss}'
percentage = self._hit / (self._hit + self._miss) * 100
output += f'\nHit rate:\t{percentage:.2f}%'
print(f'{tag_output}\n{output}')
def _lfu_alloc(self, tag):
def find_tag(tag):
for frequency, tags in self._cache.items():
if tag in tags:
return frequency
return None
frequency = find_tag(tag)
tag_output = f'\nTag:\t\t{tag}'
output = ''
result = 'MISS'
output += 'Old cache:\t{'
for key, value in sorted(self._cache.items()):
output += f'{key}: {value}, '
output = output[:-2] + '}\n'
if frequency is not None:
result = 'HIT'
self._hit += 1
self._cache[frequency].remove(tag)
if len(self._cache[frequency]) == 0:
del self._cache[frequency]
try:
self._cache[frequency + 1].append(tag)
except KeyError:
self._cache[frequency + 1] = [tag]
elif self._counter < self._cache_size: # NOT FULL
try:
self._cache[0].append(tag)
except KeyError:
self._cache[0] = [tag]
self._counter += 1
self._miss += 1
else:
least = min(self._cache)
self._cache[least].pop(0)
if len(self._cache[least]) == 0:
del self._cache[least]
try:
self._cache[0].append(tag)
except KeyError:
self._cache[0] = [tag]
self._miss += 1
tag_output += f' ({result})'
output += 'New cache:\t{'
for key, value in sorted(self._cache.items()):
output += f'{key}: {value}, '
output = output[:-2] + '}\n'
output += f'Hit/Miss:\t{self._hit}/{self._miss}'
percentage = self._hit / (self._hit + self._miss) * 100
output += f'\nHit rate:\t{percentage:.2f}%'
print(f'{tag_output}\n{output}')
class SetAssociative:
def __init__(self, cache_size, frame_size=2, policy=RANDOM):
self._cache_size = cache_size
self._frame_size = frame_size
self._lines = cache_size // frame_size
self._cache = [[None] * frame_size for _ in range(self._lines)]
self._hit = 0
self._miss = 0
self._counter = {}
for index in range(self._lines):
self._counter[index] = 0
if policy == RANDOM:
print('Policy => Random')
self.alloc = self._random_alloc
elif policy == FIFO:
print('Policy => FIFO')
self.alloc = self._fifo_alloc
elif policy == LRU:
print('Policy => LRU')
self.alloc = self._lru_alloc
elif policy == LFU:
self._cache = [{} for _ in range(self._lines)]
print('Policy => LFU')
self.alloc = self._lfu_alloc
def _random_alloc(self, tag):
position = tag % (self._lines)
tag_output = f'\nTag:\t\t{tag}'
result = 'MISS'
output = f'Old cache:\t{self._cache}\n'
cache_pos = self._cache[position]
if tag in cache_pos:
result = 'HIT'
self._hit += 1
elif self._counter[position] < self._frame_size: # NOT FULL
cache_pos[self._counter[position]] = tag
self._counter[position] += 1
self._miss += 1
else:
self._miss += 1
random_position = randint(0, self._frame_size - 1)
cache_pos[random_position] = tag
tag_output += f' ({result})'
output += f'New cache:\t{self._cache}\n'
output += f'Hit/Miss:\t{self._hit}/{self._miss}'
percentage = self._hit / (self._hit + self._miss) * 100
output += f'\nHit rate:\t{percentage:.2f}%'
print(f'{tag_output}\n{output}')
def _fifo_alloc(self, tag):
position = tag % (self._lines)
tag_output = f'\nTag:\t\t{tag}'
result = 'MISS'
output = f'Old cache:\t{self._cache}\n'
cache_pos = self._cache[position]
if tag in cache_pos:
result = 'HIT'
self._hit += 1
elif self._counter[position] < self._frame_size: # NOT FULL
cache_pos[self._counter[position]] = tag
self._counter[position] += 1
self._miss += 1
else:
self._miss += 1
cache_pos.pop(0)
cache_pos.append(tag)
tag_output += f' ({result})'
output += f'New cache:\t{self._cache}\n'
output += f'Hit/Miss:\t{self._hit}/{self._miss}'
percentage = self._hit / (self._hit + self._miss) * 100
output += f'\nHit rate:\t{percentage:.2f}%'
print(f'{tag_output}\n{output}')
def _lru_alloc(self, tag):
position = tag % (self._lines)
tag_output = f'\nTag:\t\t{tag}'
result = 'MISS'
output = f'Old cache:\t{self._cache}\n'
cache_pos = self._cache[position]
if tag in cache_pos:
result = 'HIT'
self._hit += 1
cache_pos.remove(tag)
cache_pos.append(tag)
else:
cache_pos.pop(0)
cache_pos.append(tag)
self._miss += 1
tag_output += f' ({result})'
output += f'New cache:\t{self._cache}\n'
output += f'Hit/Miss:\t{self._hit}/{self._miss}'
percentage = self._hit / (self._hit + self._miss) * 100
output += f'\nHit rate:\t{percentage:.2f}%'
print(f'{tag_output}\n{output}')
def _lfu_alloc(self, tag):
def find_tag_in_set(tag, cache):
for frequency, tags in cache.items():
if tag in tags:
return frequency
return None
position = tag % (self._lines)
tag_output = f'\nTag:\t\t{tag}'
result = 'MISS'
output = 'Old cache:\t['
for elements in self._cache:
output += '{'
if elements.items():
for key, value in sorted(elements.items()):
output += f'{key}: {value}, '
output = output[:-2] + '}, '
else:
output += '}, '
output = output[:-2] + ']\n'
cache_pos = self._cache[position]
frequency = find_tag_in_set(tag, cache_pos)
if frequency is not None:
result = 'HIT'
self._hit += 1
cache_pos[frequency].remove(tag)
if len(cache_pos[frequency]) == 0:
del cache_pos[frequency]
try:
cache_pos[frequency + 1].append(tag)
except KeyError:
cache_pos[frequency + 1] = [tag]
elif self._counter[position] < self._frame_size: # NOT FULL
try:
cache_pos[0].append(tag)
except KeyError:
cache_pos[0] = [tag]
self._counter[position] += 1
self._miss += 1
else:
least = min(cache_pos)
cache_pos[least].pop(0)
if len(cache_pos[least]) == 0:
del cache_pos[least]
try:
cache_pos[0].append(tag)
except KeyError:
cache_pos[0] = [tag]
self._miss += 1
tag_output += f' ({result})'
output += 'New cache:\t['
for elements in self._cache:
output += '{'
if elements.items():
for key, value in sorted(elements.items()):
output += f'{key}: {value}, '
output = output[:-2] + '}, '
else:
output += '}, '
output = output[:-2] + ']\n'
output += f'Hit/Miss:\t{self._hit}/{self._miss}'
percentage = self._hit / (self._hit + self._miss) * 100
output += f'\nHit rate:\t{percentage:.2f}%'
print(f'{tag_output}\n{output}')
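# Minimal usage sketch (not part of the original module): replays a short,
# hypothetical tag trace against two of the mappings to compare hit rates.
if __name__ == '__main__':
    trace = [0, 4, 8, 0, 4, 12, 0]  # made-up sequence of memory tags
    direct = Cache(DIRECT, cache_size=4)
    for tag in trace:
        direct.alloc(tag)
    set_assoc = Cache(SET_ASSOCIATIVE, cache_size=4, policy=LRU, frame_size=2)
    for tag in trace:
        set_assoc.alloc(tag)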
| 34.771588 | 73 | 0.511576 | 1,499 | 12,483 | 4.041361 | 0.052035 | 0.109937 | 0.049026 | 0.044569 | 0.834764 | 0.803896 | 0.775338 | 0.742654 | 0.697425 | 0.68554 | 0 | 0.014213 | 0.351839 | 12,483 | 358 | 74 | 34.868715 | 0.73452 | 0.004246 | 0 | 0.78979 | 0 | 0 | 0.143041 | 0.084601 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045045 | false | 0 | 0.003003 | 0 | 0.072072 | 0.06006 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4354800f9af3f297cf4bc70fbd16fc3f6c470f48 | 10,940 | py | Python | lib/rucio/web/rest/flaskapi/v1/requests.py | mkszuba/rucio | 32469bb38e7e10f59f28e51669748a8687f33bb7 | [
"Apache-2.0"
] | 2 | 2020-02-18T22:34:24.000Z | 2022-03-09T16:26:18.000Z | lib/rucio/web/rest/flaskapi/v1/requests.py | mkszuba/rucio | 32469bb38e7e10f59f28e51669748a8687f33bb7 | [
"Apache-2.0"
] | null | null | null | lib/rucio/web/rest/flaskapi/v1/requests.py | mkszuba/rucio | 32469bb38e7e10f59f28e51669748a8687f33bb7 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright 2021-2022 CERN
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Authors:
# - Benedikt Ziemons <benedikt.ziemons@cern.ch>, 2021
# - Thomas Beermann <thomas.beermann@cern.ch>, 2021
# - Rob Barnsley <rob.barnsley@skao.int>, 2021-2022
import json
import flask
from flask import Flask, Blueprint, Response
from rucio.api import request
from rucio.common.exception import RequestNotFound
from rucio.common.utils import APIEncoder, render_json
from rucio.core.rse import get_rses_with_attribute_value, get_rse_name
from rucio.db.sqla.constants import RequestState
from rucio.web.rest.flaskapi.v1.common import check_accept_header_wrapper_flask, parse_scope_name, try_stream, \
request_auth_env, response_headers, generate_http_error_flask, ErrorHandlingMethodView
class RequestGet(ErrorHandlingMethodView):
""" REST API to get requests. """
@check_accept_header_wrapper_flask(['application/json'])
def get(self, scope_name, rse):
"""
List request for given DID to a destination RSE.
.. :quickref: RequestGet; list requests
:param scope_name: data identifier (scope)/(name).
:param rse: destination RSE.
:reqheader Content-Type: application/json
:status 200: Request found.
:status 404: Request not found.
:status 406: Not Acceptable.
"""
try:
scope, name = parse_scope_name(scope_name, flask.request.environ.get('vo'))
except ValueError as error:
return generate_http_error_flask(400, error)
try:
request_data = request.get_request_by_did(
scope=scope,
name=name,
rse=rse,
issuer=flask.request.environ.get('issuer'),
vo=flask.request.environ.get('vo'),
)
return Response(json.dumps(request_data, cls=APIEncoder), content_type='application/json')
except RequestNotFound as error:
return generate_http_error_flask(404, error.__class__.__name__, f'No request found for DID {scope}:{name} at RSE {rse}')
class RequestHistoryGet(ErrorHandlingMethodView):
""" REST API to get historical requests. """
@check_accept_header_wrapper_flask(['application/json'])
def get(self, scope_name, rse):
"""
List request for given DID to a destination RSE.
.. :quickref: RequestHistoryGet; list requests
:param scope_name: data identifier (scope)/(name).
:param rse: destination RSE.
:reqheader Content-Type: application/json
:status 200: Request found.
:status 404: Request not found.
"""
try:
scope, name = parse_scope_name(scope_name, flask.request.environ.get('vo'))
except ValueError as error:
return generate_http_error_flask(400, error)
try:
request_data = request.get_request_history_by_did(
scope=scope,
name=name,
rse=rse,
issuer=flask.request.environ.get('issuer'),
vo=flask.request.environ.get('vo'),
)
return Response(json.dumps(request_data, cls=APIEncoder), content_type='application/json')
except RequestNotFound as error:
return generate_http_error_flask(404, error.__class__.__name__, f'No request found for DID {scope}:{name} at RSE {rse}')
class RequestList(ErrorHandlingMethodView):
""" REST API to get requests. """
@check_accept_header_wrapper_flask(['application/x-json-stream'])
def get(self):
"""
List requests for a given source and destination RSE or site.
.. :quickref: RequestsGet; list requests
:reqheader Content-Type: application/x-json-stream
:status 200: Request found.
:status 404: Request not found.
:status 406: Not Acceptable.
"""
src_rse = flask.request.args.get('src_rse', default=None)
dst_rse = flask.request.args.get('dst_rse', default=None)
src_site = flask.request.args.get('src_site', default=None)
dst_site = flask.request.args.get('dst_site', default=None)
request_states = flask.request.args.get('request_states', default=None)
if not request_states:
return generate_http_error_flask(400, 'MissingParameter', 'Request state is missing')
if src_rse and not dst_rse:
return generate_http_error_flask(400, 'MissingParameter', 'Destination RSE is missing')
elif dst_rse and not src_rse:
return generate_http_error_flask(400, 'MissingParameter', 'Source RSE is missing')
elif src_site and not dst_site:
return generate_http_error_flask(400, 'MissingParameter', 'Destination site is missing')
elif dst_site and not src_site:
return generate_http_error_flask(400, 'MissingParameter', 'Source site is missing')
try:
states = [RequestState(state) for state in request_states.split(',')]
except ValueError:
return generate_http_error_flask(400, 'Invalid', 'Request state value is invalid')
src_rses = []
dst_rses = []
if src_site:
src_rses = get_rses_with_attribute_value(key='site', value=src_site, lookup_key='site', vo=flask.request.environ.get('vo'))
if not src_rses:
return generate_http_error_flask(404, 'NotFound', f'Could not resolve site name {src_site} to RSE')
src_rses = [get_rse_name(rse['rse_id']) for rse in src_rses]
dst_rses = get_rses_with_attribute_value(key='site', value=dst_site, lookup_key='site', vo=flask.request.environ.get('vo'))
if not dst_rses:
return generate_http_error_flask(404, 'NotFound', f'Could not resolve site name {dst_site} to RSE')
dst_rses = [get_rse_name(rse['rse_id']) for rse in dst_rses]
else:
dst_rses = [dst_rse]
src_rses = [src_rse]
def generate(issuer, vo):
for result in request.list_requests(src_rses, dst_rses, states, issuer=issuer, vo=vo):
del result['_sa_instance_state']
yield render_json(**result) + '\n'
return try_stream(generate(issuer=flask.request.environ.get('issuer'), vo=flask.request.environ.get('vo')))
class RequestHistoryList(ErrorHandlingMethodView):
""" REST API to get requests. """
@check_accept_header_wrapper_flask(['application/x-json-stream'])
def get(self):
"""
List historical requests for a given source and destination RSE or site.
.. :quickref: RequestsGet; list requests
:reqheader Content-Type: application/x-json-stream
:status 200: Request found.
:status 404: Request not found.
"""
src_rse = flask.request.args.get('src_rse', default=None)
dst_rse = flask.request.args.get('dst_rse', default=None)
src_site = flask.request.args.get('src_site', default=None)
dst_site = flask.request.args.get('dst_site', default=None)
request_states = flask.request.args.get('request_states', default=None)
offset = flask.request.args.get('offset', default=0)
limit = flask.request.args.get('limit', default=100)
if not request_states:
return generate_http_error_flask(400, 'MissingParameter', 'Request state is missing')
if src_rse and not dst_rse:
return generate_http_error_flask(400, 'MissingParameter', 'Destination RSE is missing')
elif dst_rse and not src_rse:
return generate_http_error_flask(400, 'MissingParameter', 'Source RSE is missing')
elif src_site and not dst_site:
return generate_http_error_flask(400, 'MissingParameter', 'Destination site is missing')
elif dst_site and not src_site:
return generate_http_error_flask(400, 'MissingParameter', 'Source site is missing')
try:
states = [RequestState(state) for state in request_states.split(',')]
except ValueError:
return generate_http_error_flask(400, 'Invalid', 'Request state value is invalid')
src_rses = []
dst_rses = []
if src_site:
src_rses = get_rses_with_attribute_value(key='site', value=src_site, lookup_key='site', vo=flask.request.environ.get('vo'))
if not src_rses:
return generate_http_error_flask(404, 'NotFound', f'Could not resolve site name {src_site} to RSE')
src_rses = [get_rse_name(rse['rse_id']) for rse in src_rses]
dst_rses = get_rses_with_attribute_value(key='site', value=dst_site, lookup_key='site', vo=flask.request.environ.get('vo'))
if not dst_rses:
return generate_http_error_flask(404, 'NotFound', f'Could not resolve site name {dst_site} to RSE')
dst_rses = [get_rse_name(rse['rse_id']) for rse in dst_rses]
else:
dst_rses = [dst_rse]
src_rses = [src_rse]
def generate(issuer, vo):
for result in request.list_requests_history(src_rses, dst_rses, states, issuer=issuer, vo=vo, offset=offset, limit=limit):
del result['_sa_instance_state']
yield render_json(**result) + '\n'
return try_stream(generate(issuer=flask.request.environ.get('issuer'), vo=flask.request.environ.get('vo')))
def blueprint():
bp = Blueprint('requests', __name__, url_prefix='/requests')
request_get_view = RequestGet.as_view('request_get')
bp.add_url_rule('/<path:scope_name>/<rse>', view_func=request_get_view, methods=['get', ])
request_history_get_view = RequestHistoryGet.as_view('request_history_get')
bp.add_url_rule('/history/<path:scope_name>/<rse>', view_func=request_history_get_view, methods=['get', ])
request_list_view = RequestList.as_view('request_list')
bp.add_url_rule('/list', view_func=request_list_view, methods=['get', ])
request_history_list_view = RequestHistoryList.as_view('request_history_list')
bp.add_url_rule('/history/list', view_func=request_history_list_view, methods=['get', ])
bp.before_request(request_auth_env)
bp.after_request(response_headers)
return bp
def make_doc():
""" Only used for sphinx documentation """
doc_app = Flask(__name__)
doc_app.register_blueprint(blueprint())
return doc_app
| 44.291498 | 135 | 0.667916 | 1,417 | 10,940 | 4.927311 | 0.1482 | 0.044686 | 0.051131 | 0.06617 | 0.766686 | 0.731166 | 0.731166 | 0.722286 | 0.722286 | 0.711974 | 0 | 0.014683 | 0.228062 | 10,940 | 246 | 136 | 44.471545 | 0.812078 | 0.177148 | 0 | 0.734694 | 0 | 0 | 0.150835 | 0.012205 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054422 | false | 0 | 0.061224 | 0 | 0.319728 | 0.027211 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
43ce89d366f9caca8da13a7eee3a9d0592eb7197 | 158 | py | Python | src/vesper/data/base/__init__.py | onecommons/vesper | 818c09350b8fe53ea484aaff24deb1002a67f471 | [
"Apache-2.0"
] | null | null | null | src/vesper/data/base/__init__.py | onecommons/vesper | 818c09350b8fe53ea484aaff24deb1002a67f471 | [
"Apache-2.0"
] | null | null | null | src/vesper/data/base/__init__.py | onecommons/vesper | 818c09350b8fe53ea484aaff24deb1002a67f471 | [
"Apache-2.0"
] | null | null | null | #:copyright: Copyright 2009-2010 by the Vesper team, see AUTHORS.
#:license: Dual licenced under the GPL or Apache2 licences, see LICENSE.
from _base import * | 52.666667 | 72 | 0.778481 | 24 | 158 | 5.083333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.14557 | 158 | 3 | 73 | 52.666667 | 0.837037 | 0.85443 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
78d8b670bd988b47d77de5bf708a07569bc0c9b1 | 7,188 | py | Python | ejercicios/13_ejercicio.py | jorgemauricio/becarios_utna | 5b9082c35520fa162d780e843aae17ef6f003a28 | [
"MIT"
] | 1 | 2018-03-06T22:44:35.000Z | 2018-03-06T22:44:35.000Z | ejercicios/13_ejercicio.py | jorgemauricio/becarios_utna | 5b9082c35520fa162d780e843aae17ef6f003a28 | [
"MIT"
] | null | null | null | ejercicios/13_ejercicio.py | jorgemauricio/becarios_utna | 5b9082c35520fa162d780e843aae17ef6f003a28 | [
"MIT"
] | 1 | 2018-03-06T22:44:39.000Z | 2018-03-06T22:44:39.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
#######################################
# Author: Jorge Mauricio
# Email: jorge.ernesto.mauricio@gmail.com
# Date: 2018-02-01
# Version: 1.0
#######################################
Objective:
Given a text, count the number of times each of its words is repeated.
texto = "Muy buenas tardes a todas y a todos ustedes.
Quiero saludar, con respeto, al Presidente de la Cámara de Diputados y al
Presidente de la Cámara de Senadores.
A los señores Dirigentes de los partidos políticos en nuestro país. A sus
principales fuerzas.
De igual forma, saludo con respeto a los señores Gobernadores. A
Gobernadores electos. Al Jefe de Gobierno electo del Distrito Federal. Todos
ellos con origen en distintas expresiones políticas.
Saludo a los señores Coordinadores Parlamentarios de distintas fuerzas
políticas que están, hoy, aquí presentes.
Con respeto, también, saludo a la representación, en actores políticos
distinguidos, de las distintas fuerzas políticas de nuestro país.
A todos, les saludo con afecto y reconocimiento por este encuentro, sin duda,
inédito, pero que representa un gran paso para impulsar la trasformación de
nuestro país. Es el punto de encuentro y de coincidencia que, realmente,
aplaudo, celebro, y que sea para el bien de México.
México comienza una nueva etapa de su vida democrática. Ha llegado el
momento del encuentro y del acuerdo. Ha llegado el momento de dar el
siguiente paso en el perfeccionamiento democrático: Transitar del sufragio
efectivo al gobierno eficaz.
En este propósito, los actores políticos deben, o debemos, caminar juntos.
Debemos dialogar para construir consensos. En esta hora decisiva de la vida
de la República se requiere que los políticos hagamos de las coincidencias la
base para alcanzar los acuerdos esenciales.
Se necesita que la pluralidad y la diferencia de visiones, en lugar de ser
obstáculo, permitan el ascenso de México, enriquezcan el proyecto de Nación
que todos queremos para el Siglo XXI.
Como Presidente de la República, estoy plenamente convencido que
gobernar en democracia significa estar atento y escuchar a las diversas
voces que expresan el sentir de los mexicanos.
He señalado, con plena convicción, que seré un Presidente democrático.
Esto significa voluntad absoluta para conciliar posiciones. Voluntad para
anteponer, invariablemente, el interés superior de la Nación."
Result:
***** Numero de palabras: 321
Palabra MUY se repite 1
Palabra BUENAS se repite 1
Palabra TARDES se repite 1
Palabra A se repite 10
Palabra TODAS se repite 1
Palabra Y se repite 8
Palabra TODOS se repite 4
Palabra USTEDES se repite 1
Palabra QUIERO se repite 1
Palabra SALUDAR se repite 1
Palabra CON se repite 6
Palabra RESPETO se repite 3
Palabra AL se repite 4
Palabra PRESIDENTE se repite 4
Palabra DE se repite 26
Palabra LA se repite 11
Palabra CÁMARA se repite 2
Palabra DIPUTADOS se repite 1
Palabra SENADORES se repite 1
Palabra LOS se repite 8
Palabra SEÑORES se repite 3
Palabra DIRIGENTES se repite 1
Palabra PARTIDOS se repite 1
Palabra POLÍTICOS se repite 4
Palabra EN se repite 8
Palabra NUESTRO se repite 3
Palabra PAÍS se repite 3
Palabra SUS se repite 1
Palabra PRINCIPALES se repite 1
Palabra FUERZAS se repite 3
Palabra IGUAL se repite 1
Palabra FORMA se repite 1
Palabra SALUDO se repite 4
Palabra GOBERNADORES se repite 2
Palabra ELECTOS se repite 1
Palabra JEFE se repite 1
Palabra GOBIERNO se repite 2
Palabra ELECTO se repite 1
Palabra DEL se repite 4
Palabra DISTRITO se repite 1
Palabra FEDERAL se repite 1
Palabra ELLOS se repite 1
Palabra ORIGEN se repite 1
Palabra DISTINTAS se repite 3
Palabra EXPRESIONES se repite 1
Palabra POLÍTICAS se repite 3
Palabra COORDINADORES se repite 1
Palabra PARLAMENTARIOS se repite 1
Palabra QUE se repite 10
Palabra ESTÁN se repite 1
Palabra HOY se repite 1
Palabra AQUÍ se repite 1
Palabra PRESENTES se repite 1
Palabra TAMBIÉN se repite 1
Palabra REPRESENTACIÓN se repite 1
Palabra ACTORES se repite 2
Palabra DISTINGUIDOS se repite 1
Palabra LAS se repite 3
Palabra LES se repite 1
Palabra AFECTO se repite 1
Palabra RECONOCIMIENTO se repite 1
Palabra POR se repite 1
Palabra ESTE se repite 2
Palabra ENCUENTRO se repite 3
Palabra SIN se repite 1
Palabra DUDA se repite 1
Palabra INÉDITO se repite 1
Palabra PERO se repite 1
Palabra REPRESENTA se repite 1
Palabra UN se repite 2
Palabra GRAN se repite 1
Palabra PASO se repite 2
Palabra PARA se repite 7
Palabra IMPULSAR se repite 1
Palabra TRASFORMACIÓN se repite 1
Palabra ES se repite 1
Palabra EL se repite 11
Palabra PUNTO se repite 1
Palabra COINCIDENCIA se repite 1
Palabra REALMENTE se repite 1
Palabra APLAUDO se repite 1
Palabra CELEBRO se repite 1
Palabra SEA se repite 1
Palabra BIEN se repite 1
Palabra MÉXICO se repite 3
Palabra COMIENZA se repite 1
Palabra UNA se repite 1
Palabra NUEVA se repite 1
Palabra ETAPA se repite 1
Palabra SU se repite 1
Palabra VIDA se repite 2
Palabra DEMOCRÁTICA se repite 1
Palabra HA se repite 2
Palabra LLEGADO se repite 2
Palabra MOMENTO se repite 2
Palabra ACUERDO se repite 1
Palabra DAR se repite 1
Palabra SIGUIENTE se repite 1
Palabra PERFECCIONAMIENTO se repite 1
Palabra DEMOCRÁTICO: se repite 1
Palabra TRANSITAR se repite 1
Palabra SUFRAGIO se repite 1
Palabra EFECTIVO se repite 1
Palabra EFICAZ se repite 1
Palabra PROPÓSITO se repite 1
Palabra DEBEN se repite 1
Palabra O se repite 1
Palabra DEBEMOS se repite 2
Palabra CAMINAR se repite 1
Palabra JUNTOS se repite 1
Palabra DIALOGAR se repite 1
Palabra CONSTRUIR se repite 1
Palabra CONSENSOS se repite 1
Palabra ESTA se repite 1
Palabra HORA se repite 1
Palabra DECISIVA se repite 1
Palabra REPÚBLICA se repite 2
Palabra SE se repite 2
Palabra REQUIERE se repite 1
Palabra HAGAMOS se repite 1
Palabra COINCIDENCIAS se repite 1
Palabra BASE se repite 1
Palabra ALCANZAR se repite 1
Palabra ACUERDOS se repite 1
Palabra ESENCIALES se repite 1
Palabra NECESITA se repite 1
Palabra PLURALIDAD se repite 1
Palabra DIFERENCIA se repite 1
Palabra VISIONES se repite 1
Palabra LUGAR se repite 1
Palabra SER se repite 1
Palabra OBSTÁCULO se repite 1
Palabra PERMITAN se repite 1
Palabra ASCENSO se repite 1
Palabra ENRIQUEZCAN se repite 1
Palabra PROYECTO se repite 1
Palabra NACIÓN se repite 2
Palabra QUEREMOS se repite 1
Palabra SIGLO se repite 1
Palabra XXI se repite 1
Palabra COMO se repite 1
Palabra ESTOY se repite 1
Palabra PLENAMENTE se repite 1
Palabra CONVENCIDO se repite 1
Palabra GOBERNAR se repite 1
Palabra DEMOCRACIA se repite 1
Palabra SIGNIFICA se repite 2
Palabra ESTAR se repite 1
Palabra ATENTO se repite 1
Palabra ESCUCHAR se repite 1
Palabra DIVERSAS se repite 1
Palabra VOCES se repite 1
Palabra EXPRESAN se repite 1
Palabra SENTIR se repite 1
Palabra MEXICANOS se repite 1
Palabra HE se repite 1
Palabra SEÑALADO se repite 1
Palabra PLENA se repite 1
Palabra CONVICCIÓN se repite 1
Palabra SERÉ se repite 1
Palabra DEMOCRÁTICO se repite 1
Palabra ESTO se repite 1
Palabra VOLUNTAD se repite 2
Palabra ABSOLUTA se repite 1
Palabra CONCILIAR se repite 1
Palabra POSICIONES se repite 1
Palabra ANTEPONER se repite 1
Palabra INVARIABLEMENTE se repite 1
Palabra INTERÉS se repite 1
Palabra SUPERIOR se repite 1
"""
| 31.80531 | 78 | 0.804535 | 1,216 | 7,188 | 4.755757 | 0.174342 | 0.235172 | 0.197648 | 0.348608 | 0.023171 | 0.023171 | 0.014871 | 0.014871 | 0.014871 | 0 | 0 | 0.031869 | 0.170562 | 7,188 | 225 | 79 | 31.946667 | 0.938108 | 0.998052 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0.022222 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6001d0518a371ac1dccd244e1db7dfaa72dd0fe4 | 214 | py | Python | src/spaceone/monitoring/manager/__init__.py | xellos00/plugin-azure-monitor | 5409707b2ea068ba35b408e01580c356e6b536c8 | [
"Apache-2.0"
] | null | null | null | src/spaceone/monitoring/manager/__init__.py | xellos00/plugin-azure-monitor | 5409707b2ea068ba35b408e01580c356e6b536c8 | [
"Apache-2.0"
] | null | null | null | src/spaceone/monitoring/manager/__init__.py | xellos00/plugin-azure-monitor | 5409707b2ea068ba35b408e01580c356e6b536c8 | [
"Apache-2.0"
] | 3 | 2020-11-23T10:08:28.000Z | 2020-12-28T04:41:41.000Z | from spaceone.monitoring.manager.azure_manager import AzureManager
from spaceone.monitoring.manager.data_source_manager import DataSourceManager
from spaceone.monitoring.manager.metric_manager import MetricManager
| 53.5 | 77 | 0.901869 | 25 | 214 | 7.56 | 0.48 | 0.190476 | 0.349206 | 0.460317 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056075 | 214 | 3 | 78 | 71.333333 | 0.935644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
60384b7a56328112edabfe3553a66bb3aa0ad273 | 168 | py | Python | oplab/v3/quant.py | oplab-team/oplab-client-python | a09a67a78c82b983374ae6a43dab68de208706b5 | [
"MIT"
] | 3 | 2020-05-18T05:04:54.000Z | 2022-02-14T14:09:39.000Z | oplab/v3/quant.py | oplab-team/oplab-client-python | a09a67a78c82b983374ae6a43dab68de208706b5 | [
"MIT"
] | 1 | 2020-05-23T19:49:11.000Z | 2020-05-23T19:50:04.000Z | oplab/v3/quant.py | oplab-team/oplab-client-python | a09a67a78c82b983374ae6a43dab68de208706b5 | [
"MIT"
] | null | null | null | class Quant:
def __init__(self, client) -> None:
self.client = client
def url(self):
return '%s%s' % (self.client.config['base_url'], 'quant')
| 24 | 65 | 0.583333 | 22 | 168 | 4.227273 | 0.545455 | 0.322581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 168 | 6 | 66 | 28 | 0.738095 | 0 | 0 | 0 | 0 | 0 | 0.10119 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
606a02e7537c104b5e2aec16b5931850f0453239 | 3,341 | py | Python | tests/test_file_handling.py | j19sch/pytest-instrument | 53e26a2c507456327887e007fd2609e71ec52999 | [
"MIT"
] | null | null | null | tests/test_file_handling.py | j19sch/pytest-instrument | 53e26a2c507456327887e007fd2609e71ec52999 | [
"MIT"
] | null | null | null | tests/test_file_handling.py | j19sch/pytest-instrument | 53e26a2c507456327887e007fd2609e71ec52999 | [
"MIT"
] | null | null | null | from pathlib import PurePath
import re
import pytest
from tests import helpers
@pytest.fixture(scope="function")
def tests_filename(testdir):
filename = "test_single_test_examples.py"
testdir.copy_example(filename)
return filename
def test_single_json_log_file_is_created_with_json_instrument_option(
testdir, tests_filename
):
test_to_run = "test_passes"
result = testdir.runpytest(
"-vs", "--instrument=json", f"{tests_filename}::{test_to_run}"
)
result.assert_outcomes(error=0, failed=0, passed=1)
log_files = helpers.get_files_from_artifacts_dir_by_extension(testdir, "json")
assert len(log_files) == 1
split_log_file_basename = PurePath(log_files[0]).stem.split("_", maxsplit=1)
helpers.validate_timestamp(split_log_file_basename[0], "%Y%m%dT%H%M%S")
records = helpers.get_json_log_file_from_artifacts_dir_and_return_records(testdir)
session_id = records[0]["session_id"]
assert split_log_file_basename[1] == session_id[:8]
def test_single_plain_log_file_is_created_with_log_instrument_option(
testdir, tests_filename
):
test_to_run = "test_passes"
result = testdir.runpytest(
"-vs", "--instrument=log", f"{tests_filename}::{test_to_run}"
)
result.assert_outcomes(error=0, failed=0, passed=1)
log_files = helpers.get_files_from_artifacts_dir_by_extension(testdir, "log")
assert len(log_files) == 1
split_log_file_basename = PurePath(log_files[0]).stem.split("_", maxsplit=1)
helpers.validate_timestamp(split_log_file_basename[0], "%Y%m%dT%H%M%S")
records = helpers.get_plain_log_file_from_artifacts_dir_and_return_records(testdir)
pattern = re.compile(r"^.+ session id: (.+)$")
match = pattern.search(records[0])
session_id = match[1]
assert split_log_file_basename[1] == session_id[:8]
def test_two_log_files_are_created_with_json_and_log_instrument_option(
testdir, tests_filename
):
test_to_run = "test_passes"
result = testdir.runpytest(
"-vs", "--instrument=json,log", f"{tests_filename}::{test_to_run}"
)
result.assert_outcomes(error=0, failed=0, passed=1)
json_log_files = helpers.get_files_from_artifacts_dir_by_extension(testdir, "json")
assert len(json_log_files) == 1
plain_log_files = helpers.get_files_from_artifacts_dir_by_extension(testdir, "log")
assert len(plain_log_files) == 1
assert PurePath(json_log_files[0]).stem == PurePath(plain_log_files[0]).stem
records = helpers.get_json_log_file_from_artifacts_dir_and_return_records(testdir)
session_id_json = records[0]["session_id"]
records = helpers.get_plain_log_file_from_artifacts_dir_and_return_records(testdir)
pattern = re.compile(r"^.+ session id: (.+)$")
match = pattern.search(records[0])
session_id_plain = match[1]
assert session_id_json == session_id_plain
def test_no_file_created_without_instrument_option(testdir, tests_filename):
test_to_run = "test_passes"
result = testdir.runpytest("-vs", f"{tests_filename}::{test_to_run}")
result.assert_outcomes(error=0, failed=0, passed=1)
log_files = helpers.get_files_from_artifacts_dir_by_extension(testdir, "json")
assert len(log_files) == 0
log_files = helpers.get_files_from_artifacts_dir_by_extension(testdir, "log")
assert len(log_files) == 0
| 34.802083 | 87 | 0.744388 | 484 | 3,341 | 4.716942 | 0.157025 | 0.059571 | 0.070083 | 0.066579 | 0.798511 | 0.780114 | 0.780114 | 0.780114 | 0.780114 | 0.780114 | 0 | 0.01253 | 0.140078 | 3,341 | 95 | 88 | 35.168421 | 0.782109 | 0 | 0 | 0.565217 | 0 | 0 | 0.114038 | 0.051781 | 0 | 0 | 0 | 0 | 0.202899 | 1 | 0.072464 | false | 0.115942 | 0.057971 | 0 | 0.144928 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
60a8b0b6e6edc75600b92b94704857a8ce70f1a3 | 34 | py | Python | Tox Whitebox/titlecase/__init__.py | adrianopaduam/pytest_studies | 05932963539fb43baf64900b08fa12b8178c663b | [
"MIT"
] | null | null | null | Tox Whitebox/titlecase/__init__.py | adrianopaduam/pytest_studies | 05932963539fb43baf64900b08fa12b8178c663b | [
"MIT"
] | null | null | null | Tox Whitebox/titlecase/__init__.py | adrianopaduam/pytest_studies | 05932963539fb43baf64900b08fa12b8178c663b | [
"MIT"
] | null | null | null | from .titlecase import title_case
| 17 | 33 | 0.852941 | 5 | 34 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
60b85edf927478abb74878a6c6e7914d73182692 | 147 | py | Python | src/testcase/GN_Y201H/input_case/GN_Y201H_Smart_Link.py | maiyajj/AutoTest_script-Appium_Connect | f9c2c42c281a9e2f984acb4a72dda0694b053f22 | [
"Apache-2.0"
] | 28 | 2017-11-10T00:19:16.000Z | 2022-02-19T16:42:05.000Z | src/testcase/GN_Y201H/input_case/GN_Y201H_Smart_Link.py | maiyajj/AutoTest_script-Appium_Connect | f9c2c42c281a9e2f984acb4a72dda0694b053f22 | [
"Apache-2.0"
] | null | null | null | src/testcase/GN_Y201H/input_case/GN_Y201H_Smart_Link.py | maiyajj/AutoTest_script-Appium_Connect | f9c2c42c281a9e2f984acb4a72dda0694b053f22 | [
"Apache-2.0"
] | 23 | 2017-08-22T06:12:19.000Z | 2021-09-18T05:45:41.000Z | # coding=utf-8
try:
    from src.testcase.GN_Y201H.case.GN_Y201H_SMART_LINK.GN_Y201H_SMART_LINK_001 import *
except ImportError as e:
    print(e)
| 24.5 | 88 | 0.77551 | 26 | 147 | 4.076923 | 0.730769 | 0.198113 | 0.226415 | 0.301887 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102362 | 0.136054 | 147 | 5 | 89 | 29.4 | 0.732283 | 0.081633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.25 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
60c05c63c965aa435cb0eeae775774e83e58951e | 25,575 | py | Python | rmp/models/processed/chemicals.py | rji-futures-lab/django-rmp-data | 6580961dd965025f312127bdd788f2771463f0b5 | [
"MIT"
] | null | null | null | rmp/models/processed/chemicals.py | rji-futures-lab/django-rmp-data | 6580961dd965025f312127bdd788f2771463f0b5 | [
"MIT"
] | 67 | 2018-08-29T18:15:56.000Z | 2020-06-05T18:57:59.000Z | rmp/models/processed/chemicals.py | rji-futures-lab/django-rmp-data | 6580961dd965025f312127bdd788f2771463f0b5 | [
"MIT"
] | 6 | 2018-10-18T19:19:38.000Z | 2021-09-15T17:21:52.000Z | """
Models for processed RMP data.
"""
import os
from django.conf import settings
from django.db import models
from django.db.models import (
    F, Max, OuterRef, Subquery, Sum, Count, Case, When, Value
)
from rmp.fields import (
    CopyFromBigIntegerField,
    CopyFromBooleanField,
    CopyFromCharField,
    CopyFromDateField,
    CopyFromDateTimeField,
    CopyFromDecimalField,
    CopyFromForeignKey,
    CopyFromIntegerField,
    CopyFromOneToOneField,
    CopyFromTextField,
    CopyFromFloatField,
)
from rmp.models import raw as raw_models
from rmp.models import processed as processed_models
from rmp.models.base import BaseRMPModel


class FlammablesAltRelease(BaseRMPModel):
    flammable_id = CopyFromIntegerField(
        primary_key=True,
        source_column='flammable_id'
    )
    procchem = CopyFromForeignKey(
        'ProcChem',
        on_delete=models.CASCADE,
        source_column='process_chemical_id'
    )
    analytical_basis = CopyFromCharField(
        max_length=255,
        blank=True,
    )
    scenario = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    quantity_released = CopyFromDecimalField(
        max_digits=5,
        decimal_places=2,
        null=True,
    )
    endpoint_used = CopyFromCharField(
        max_length=30,
        blank=True,
    )
    lfl_value = CopyFromDecimalField(
        max_digits=5,
        decimal_places=1,
        null=True,
    )
    endpoint_distance = CopyFromDecimalField(
        source_column="distance2_endpoint",
        max_digits=5,
        decimal_places=1,
        null=True,
    )
    population = CopyFromIntegerField(
        source_column="residential_population",
        null=True,
        verbose_name='Residential population',
    )
    pr_schools = CopyFromBooleanField(
        verbose_name='Schools'
    )
    pr_residences = CopyFromBooleanField(
        verbose_name='Residences'
    )
    pr_hospitals = CopyFromBooleanField(
        verbose_name='Hospitals'
    )
    pr_prisons = CopyFromBooleanField(
        verbose_name='Prisons/Correctional Facilities'
    )
    pr_public_recreation = CopyFromBooleanField(
        verbose_name='Recreation Areas'
    )
    pr_comm_ind = CopyFromBooleanField(
        verbose_name='Major Commercial, office, industrial areas'
    )
    pr_other_type = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    er_natl_state_parks = CopyFromBooleanField(
        verbose_name='National or state parks, forests, or monuments',
    )
    er_wildlife_sactuary = CopyFromBooleanField(
        verbose_name='Officially designated wildlife sanctuaries, preserves, or refuges',
    )
    er_fed_wilderness = CopyFromBooleanField(
        verbose_name='Federal wilderness area',
    )
    er_other_type = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    pm_dikes = CopyFromBooleanField(
        verbose_name='Dikes',
    )
    pm_firewalls = CopyFromBooleanField(
        source_column='pm_fire_walls',
        verbose_name='Fire wall',
    )
    pm_blastwalls = CopyFromBooleanField(
        source_column='pm_blast_walls',
        verbose_name='Blast walls',
    )
    pm_enclosures = CopyFromBooleanField(
        verbose_name='Enclosures',
    )
    pm_other_type = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    am_sprinklers = CopyFromBooleanField(
        source_column='am_sprinkler_systems',
        verbose_name='Sprinkler systems',
    )
    am_deluge_systems = CopyFromBooleanField(
        verbose_name='Deluge systems',
    )
    am_watercurtain = CopyFromBooleanField(
        source_column='am_water_curtain',
        verbose_name='Water curtain',
    )
    am_excess_flowvalve = CopyFromBooleanField(
        source_column='am_excess_flow_valve',
        verbose_name='Excess flow valve',
    )
    am_other_type = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    ptrgraphic = CopyFromCharField(
        source_column='ptr_graphic',
        max_length=12,
        blank=True,
    )
    cbi_flag = CopyFromBooleanField()

    @classmethod
    def get_transform_queryset(self):
        m = raw_models.tblS5FlammablesAltReleases
        return m.objects.get_default_transform_queryset()

    @property
    def public_receptors_within_distance(self):
        self._public_receptors_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pr_')
            if self.__dict__[f.name]
        ]
        if self.pr_other_type != '':
            self._public_receptors_within_distance.append(
                self.pr_other_type
            )
        return self._public_receptors_within_distance

    @property
    def public_receptors_not_within_distance(self):
        self._public_receptors_not_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pr_')
            if not self.__dict__[f.name]
        ]
        return self._public_receptors_not_within_distance

    @property
    def environmental_receptors_within_distance(self):
        self._environmental_receptors_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('er_')
            if self.__dict__[f.name]
        ]
        if self.er_other_type != '':
            self._environmental_receptors_within_distance.append(
                self.er_other_type
            )
        return self._environmental_receptors_within_distance

    @property
    def environmental_receptors_not_within_distance(self):
        self._environmental_receptors_not_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('er_')
            if not self.__dict__[f.name]
        ]
        return self._environmental_receptors_not_within_distance

    @property
    def passive_mitigation_considered(self):
        self._passive_mitigation_considered = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pm_')
            if self.__dict__[f.name]
        ]
        if self.pm_other_type != '':
            self._passive_mitigation_considered.append(
                self.pm_other_type
            )
        return self._passive_mitigation_considered

    @property
    def passive_mitigation_not_considered(self):
        self._passive_mitigation_not_considered = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pm_')
            if not self.__dict__[f.name]
        ]
        return self._passive_mitigation_not_considered
    @property
    def active_mitigation_considered(self):
        self._active_mitigation_considered = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('am_')
            if self.__dict__[f.name]
        ]
        if self.am_other_type != '':
            self._active_mitigation_considered.append(
                self.am_other_type
            )
        return self._active_mitigation_considered

    @property
    def active_mitigation_not_considered(self):
        self._active_mitigation_not_considered = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('am_')
            if not self.__dict__[f.name]
        ]
        return self._active_mitigation_not_considered


class ToxicsAltRelease(BaseRMPModel):
    id = CopyFromIntegerField(
        primary_key=True,
        source_column='toxic_id',
    )
    procchem = CopyFromForeignKey(
        'ProcChem',
        on_delete=models.CASCADE,
        source_column='ProcessChemicalID',
    )
    percent_weight = CopyFromDecimalField(
        max_digits=4,
        decimal_places=1,
        null=True,
    )
    physical_state = CopyFromForeignKey(
        "PhysCd",
        on_delete=models.PROTECT,
        db_column='physical_state',
        null=True,
    )
    analytical_basis = CopyFromCharField(
        max_length=255,
        blank=True,
    )
    scenario = CopyFromForeignKey(
        "ScenCd",
        on_delete=models.PROTECT,
        db_column='scenario',
        null=True,
    )
    quantity_released = CopyFromDecimalField(
        max_digits=5,
        decimal_places=2,
        null=True,
    )
    release_duration = CopyFromDecimalField(
        max_digits=5,
        decimal_places=2,
        null=True,
    )
    release_rate = CopyFromBooleanField(
        null=True,
    )
    wind_speed = CopyFromDecimalField(
        max_digits=6,
        decimal_places=2,
        null=True,
    )
    stability_class = CopyFromCharField(
        max_length=1,
        blank=True,
    )
    topography = CopyFromForeignKey(
        "TopoCd",
        on_delete=models.PROTECT,
        db_column='topography',
        null=True,
    )
    endpoint_distance = CopyFromDecimalField(
        source_column='distance2_endpoint',
        max_digits=5,
        decimal_places=1,
        null=True,
    )
    residential_population = CopyFromIntegerField(
        null=True,
        verbose_name='Residential population',
    )
    pr_schools = CopyFromBooleanField(
        verbose_name='Schools'
    )
    pr_residences = CopyFromBooleanField(
        verbose_name='Residences'
    )
    pr_hospitals = CopyFromBooleanField(
        verbose_name='Hospitals'
    )
    pr_prisons = CopyFromBooleanField(
        verbose_name='Prisons/Correctional Facilities'
    )
    pr_public_recreation = CopyFromBooleanField(
        verbose_name='Recreation Areas'
    )
    pr_comm_ind = CopyFromBooleanField(
        verbose_name='Major Commercial, office, industrial areas'
    )
    pr_other_type = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    er_natl_state_parks = CopyFromBooleanField(
        verbose_name='National or state parks, forests, or monuments',
    )
    er_wildlife_sactuary = CopyFromBooleanField(
        verbose_name='Officially designated wildlife sanctuaries, preserves, or refuges',
    )
    er_fed_wilderness = CopyFromBooleanField(
        verbose_name='Federal wilderness area',
    )
    er_other_type = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    pm_dikes = CopyFromBooleanField(
        verbose_name='Dikes',
    )
    pm_enclosures = CopyFromBooleanField(
        verbose_name='Enclosures',
    )
    pm_berms = CopyFromBooleanField(
        verbose_name='Berms',
    )
    pm_drains = CopyFromBooleanField(
        verbose_name='Drains',
    )
    pm_sumps = CopyFromBooleanField(
        verbose_name='Sumps',
    )
    pm_other_type = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    am_sprinkler_systems = CopyFromBooleanField(
        verbose_name='Sprinkler systems'
    )
    am_deluge_systems = CopyFromBooleanField(
        verbose_name='Deluge systems'
    )
    am_water_curtain = CopyFromBooleanField(
        verbose_name='Water curtain'
    )
    am_neutralization = CopyFromBooleanField(
        verbose_name='Neutralization'
    )
    am_excess_flow_valve = CopyFromBooleanField(
        verbose_name='Excess flow valve'
    )
    am_flares = CopyFromBooleanField(
        verbose_name='Flares'
    )
    am_scrubbers = CopyFromBooleanField(
        verbose_name='Scrubbers'
    )
    am_emergency_shutdown = CopyFromBooleanField(
        verbose_name='Emergency shutdown'
    )
    am_other_type = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    ptr_graphic = CopyFromCharField(
        max_length=12,
        blank=True,
    )
    cbi_flag = CopyFromBooleanField()

    @classmethod
    def get_transform_queryset(self):
        m = raw_models.tblS3ToxicsAltReleases
        return m.objects.get_default_transform_queryset()

    @property
    def public_receptors_within_distance(self):
        self._public_receptors_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pr_')
            if self.__dict__[f.name]
        ]
        if self.pr_other_type != '':
            self._public_receptors_within_distance.append(
                self.pr_other_type
            )
        return self._public_receptors_within_distance

    @property
    def public_receptors_not_within_distance(self):
        self._public_receptors_not_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pr_')
            if not self.__dict__[f.name]
        ]
        return self._public_receptors_not_within_distance

    @property
    def environmental_receptors_within_distance(self):
        self._environmental_receptors_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('er_')
            if self.__dict__[f.name]
        ]
        if self.er_other_type != '':
            self._environmental_receptors_within_distance.append(
                self.er_other_type
            )
        return self._environmental_receptors_within_distance

    @property
    def environmental_receptors_not_within_distance(self):
        self._environmental_receptors_not_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('er_')
            if not self.__dict__[f.name]
        ]
        return self._environmental_receptors_not_within_distance

    @property
    def passive_mitigation_considered(self):
        self._passive_mitigation_considered = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pm_')
            if self.__dict__[f.name]
        ]
        if self.pm_other_type != '':
            self._passive_mitigation_considered.append(
                self.pm_other_type
            )
        return self._passive_mitigation_considered

    @property
    def passive_mitigation_not_considered(self):
        self._passive_mitigation_not_considered = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pm_')
            if not self.__dict__[f.name]
        ]
        return self._passive_mitigation_not_considered
    @property
    def active_mitigation_considered(self):
        self._active_mitigation_considered = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('am_')
            if self.__dict__[f.name]
        ]
        if self.am_other_type != '':
            self._active_mitigation_considered.append(
                self.am_other_type
            )
        return self._active_mitigation_considered

    @property
    def active_mitigation_not_considered(self):
        self._active_mitigation_not_considered = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('am_')
            if not self.__dict__[f.name]
        ]
        return self._active_mitigation_not_considered


class ToxicsWorstCase(BaseRMPModel):
    id = CopyFromIntegerField(
        primary_key=True,
        source_column='toxic_id',
    )
    procchem = CopyFromForeignKey(
        'ProcChem',
        on_delete=models.CASCADE,
        source_column="ProcessChemicalID",
    )
    percent_weight = CopyFromDecimalField(
        max_digits=4,
        decimal_places=1,
        null=True,
    )
    physical_state = CopyFromForeignKey(
        "PhysCd",
        on_delete=models.PROTECT,
        db_column='physical_state',
        null=True,
    )
    analytical_basis = CopyFromCharField(
        max_length=255,
        blank=True,
    )
    scenario = CopyFromForeignKey(
        "ScenCd",
        on_delete=models.PROTECT,
        db_column='scenario',
        null=True,
    )
    quantity_released = CopyFromDecimalField(
        max_digits=6,
        decimal_places=2,
        null=True,
    )
    release_duration = CopyFromDecimalField(
        max_digits=7,
        decimal_places=2,
        null=True,
    )
    release_rate = CopyFromDecimalField(
        max_digits=4,
        decimal_places=1,
        null=True,
    )
    wind_speed = CopyFromDecimalField(
        max_digits=4,
        decimal_places=1,
        null=True,
    )
    stability_class = CopyFromCharField(
        max_length=1,
        blank=True,
    )
    topography = CopyFromForeignKey(
        "TopoCd",
        on_delete=models.PROTECT,
        db_column='topography',
        null=True,
    )
    distance2_endpoint = CopyFromDecimalField(
        max_digits=5,
        decimal_places=1,
        null=True,
    )
    residential_population = CopyFromIntegerField(
        null=True,
    )
    pr_schools = CopyFromBooleanField(
        verbose_name='Schools'
    )
    pr_residences = CopyFromBooleanField(
        verbose_name='Residences'
    )
    pr_hospitals = CopyFromBooleanField(
        verbose_name='Hospitals'
    )
    pr_prisons = CopyFromBooleanField(
        verbose_name='Prisons/Correctional Facilities'
    )
    pr_public_recreation = CopyFromBooleanField(
        verbose_name='Recreation Areas'
    )
    pr_comm_ind = CopyFromBooleanField(
        verbose_name='Major Commercial, office, industrial areas'
    )
    pr_other_type = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    er_natl_state_parks = CopyFromBooleanField(
        verbose_name='National or state parks, forests, or monuments',
    )
    er_wildlife_sactuary = CopyFromBooleanField(
        verbose_name='Officially designated wildlife sanctuaries, preserves, or refuges',
    )
    er_fed_wilderness = CopyFromBooleanField(
        verbose_name='Federal wilderness area',
    )
    er_other_type = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    pm_dikes = CopyFromBooleanField(
        verbose_name='Dikes',
    )
    pm_enclosures = CopyFromBooleanField(
        verbose_name='Enclosures',
    )
    pm_berms = CopyFromBooleanField(
        verbose_name='Berms',
    )
    pm_drains = CopyFromBooleanField(
        verbose_name='Drains',
    )
    pm_sumps = CopyFromBooleanField(
        verbose_name='Sumps',
    )
    pm_other_type = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    ptr_graphic = CopyFromCharField(
        max_length=12,
        blank=True,
    )
    cbi_flag = CopyFromBooleanField()

    @classmethod
    def get_transform_queryset(self):
        m = raw_models.tblS2ToxicsWorstCase
        return m.objects.get_default_transform_queryset()

    @property
    def public_receptors_within_distance(self):
        self._public_receptors_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pr_')
            if self.__dict__[f.name]
        ]
        if self.pr_other_type != '':
            self._public_receptors_within_distance.append(
                self.pr_other_type
            )
        return self._public_receptors_within_distance

    @property
    def public_receptors_not_within_distance(self):
        self._public_receptors_not_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pr_')
            if not self.__dict__[f.name]
        ]
        return self._public_receptors_not_within_distance

    @property
    def environmental_receptors_within_distance(self):
        self._environmental_receptors_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('er_')
            if self.__dict__[f.name]
        ]
        if self.er_other_type != '':
            self._environmental_receptors_within_distance.append(
                self.er_other_type
            )
        return self._environmental_receptors_within_distance

    @property
    def environmental_receptors_not_within_distance(self):
        self._environmental_receptors_not_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('er_')
            if not self.__dict__[f.name]
        ]
        return self._environmental_receptors_not_within_distance

    @property
    def passive_mitigation_considered(self):
        self._passive_mitigation_considered = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pm_')
            if self.__dict__[f.name]
        ]
        if self.pm_other_type != '':
            self._passive_mitigation_considered.append(
                self.pm_other_type
            )
        return self._passive_mitigation_considered

    @property
    def passive_mitigation_not_considered(self):
        self._passive_mitigation_not_considered = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pm_')
            if not self.__dict__[f.name]
        ]
        return self._passive_mitigation_not_considered


class FlammablesWorstCase(BaseRMPModel):
    id = CopyFromIntegerField(
        primary_key=True,
        source_column='flammable_id',
    )
    procchem = CopyFromForeignKey(
        'ProcChem',
        on_delete=models.CASCADE,
        source_column='ProcessChemicalID',
    )
    analytical_basis = CopyFromCharField(
        max_length=255,
        blank=True,
    )
    distance2_endpoint = CopyFromDecimalField(
        max_digits=5,
        decimal_places=1,
        null=True,
    )
    quantity_released = CopyFromIntegerField(
        null=True,
    )
    residential_population = CopyFromIntegerField(
        null=True,
    )
    pr_schools = CopyFromBooleanField(
        verbose_name='Schools'
    )
    pr_residences = CopyFromBooleanField(
        verbose_name='Residences'
    )
    pr_hospitals = CopyFromBooleanField(
        verbose_name='Hospitals'
    )
    pr_prisons = CopyFromBooleanField(
        verbose_name='Prisons/Correctional Facilities'
    )
    pr_public_recreation = CopyFromBooleanField(
        verbose_name='Recreation Areas'
    )
    pr_comm_ind = CopyFromBooleanField(
        verbose_name='Major Commercial, office, industrial areas'
    )
    pr_other_type = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    er_natl_state_parks = CopyFromBooleanField(
        verbose_name='National or state parks, forests, or monuments',
    )
    er_wildlife_sactuary = CopyFromBooleanField(
        verbose_name='Officially designated wildlife sanctuaries, preserves, or refuges',
    )
    er_fed_wilderness = CopyFromBooleanField(
        verbose_name='Federal wilderness area',
    )
    er_other_type = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    pm_blast_walls = CopyFromBooleanField(
        verbose_name='Blast walls'
    )
    pm_other_type = CopyFromCharField(
        max_length=200,
        blank=True,
    )
    ptr_graphic = CopyFromCharField(
        max_length=12,
        blank=True,
    )
    cbi_flag = CopyFromBooleanField()

    @classmethod
    def get_transform_queryset(self):
        m = raw_models.tblS4FlammablesWorstCase
        return m.objects.get_default_transform_queryset()

    @property
    def public_receptors_within_distance(self):
        self._public_receptors_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pr_')
            if self.__dict__[f.name]
        ]
        if self.pr_other_type != '':
            self._public_receptors_within_distance.append(
                self.pr_other_type
            )
        return self._public_receptors_within_distance

    @property
    def public_receptors_not_within_distance(self):
        self._public_receptors_not_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pr_')
            if not self.__dict__[f.name]
        ]
        return self._public_receptors_not_within_distance

    @property
    def environmental_receptors_within_distance(self):
        self._environmental_receptors_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('er_')
            if self.__dict__[f.name]
        ]
        if self.er_other_type != '':
            self._environmental_receptors_within_distance.append(
                self.er_other_type
            )
        return self._environmental_receptors_within_distance

    @property
    def environmental_receptors_not_within_distance(self):
        self._environmental_receptors_not_within_distance = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('er_')
            if not self.__dict__[f.name]
        ]
        return self._environmental_receptors_not_within_distance

    @property
    def passive_mitigation_considered(self):
        self._passive_mitigation_considered = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pm_')
            if self.__dict__[f.name]
        ]
        if self.pm_other_type != '':
            self._passive_mitigation_considered.append(
                self.pm_other_type
            )
        return self._passive_mitigation_considered

    @property
    def passive_mitigation_not_considered(self):
        self._passive_mitigation_not_considered = [
            f.verbose_name for f
            in self._meta.model.get_prefixed_boolean_fields('pm_')
            if not self.__dict__[f.name]
        ]
        return self._passive_mitigation_not_considered
| 28.012048 | 89 | 0.647977 | 2,548 | 25,575 | 6.084772 | 0.07967 | 0.065983 | 0.11597 | 0.02709 | 0.897188 | 0.891254 | 0.885578 | 0.878096 | 0.874162 | 0.855005 | 0 | 0.005809 | 0.279765 | 25,575 | 912 | 90 | 28.042763 | 0.835885 | 0.001173 | 0 | 0.68805 | 0 | 0 | 0.07029 | 0.000862 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040252 | false | 0.03522 | 0.010063 | 0 | 0.256604 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
60df159b4c6dc6625fc0efba0e2665104839d97b | 220 | py | Python | src/test/pythonFiles/testFiles/standard/tests/test_foreign_nested_tests.py | ChaseKnowlden/vscode-jupyter | 9bdaf87f0b6dcd717c508e9023350499a6093f97 | [
"MIT"
] | 615 | 2020-11-11T22:55:28.000Z | 2022-03-30T21:48:08.000Z | src/test/pythonFiles/testFiles/standard/tests/test_foreign_nested_tests.py | ChaseKnowlden/vscode-jupyter | 9bdaf87f0b6dcd717c508e9023350499a6093f97 | [
"MIT"
] | 8,428 | 2020-11-11T22:06:43.000Z | 2022-03-31T23:42:34.000Z | src/test/pythonFiles/testFiles/standard/tests/test_foreign_nested_tests.py | vasili8m/vscode-python | 846eee870e8b7bab38172600836faedb5fb80166 | [
"MIT"
] | 158 | 2020-11-12T07:49:02.000Z | 2022-03-27T20:50:20.000Z | from .external import ForeignTests
class TestNestedForeignTests:
class TestInheritingHere(ForeignTests):
def test_nested_normal(self):
assert True
def test_normal(self):
assert True
| 22 | 43 | 0.704545 | 22 | 220 | 6.909091 | 0.636364 | 0.092105 | 0.210526 | 0.263158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.245455 | 220 | 9 | 44 | 24.444444 | 0.915663 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.285714 | false | 0 | 0.142857 | 0 | 0.714286 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
60f851d7fdf8aef65c1416a21860d448bbcfa769 | 42 | py | Python | harvester/harvester/harvester/__init__.py | HammerMuseum/channel-backend | 7c4e4259d6a043daa9f41b9ad5ae0a63fd1c475e | [
"MIT"
] | 3 | 2021-05-05T18:19:27.000Z | 2021-05-28T18:03:28.000Z | harvester/harvester/harvester/__init__.py | HammerMuseum/channel-backend | 7c4e4259d6a043daa9f41b9ad5ae0a63fd1c475e | [
"MIT"
] | 4 | 2021-05-17T14:13:51.000Z | 2021-05-17T14:22:46.000Z | harvester/harvester/harvester/__init__.py | HammerMuseum/channel-backend | 7c4e4259d6a043daa9f41b9ad5ae0a63fd1c475e | [
"MIT"
] | null | null | null | from .HarvesterBase import HarvesterBase
| 14 | 40 | 0.857143 | 4 | 42 | 9 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 42 | 2 | 41 | 21 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
718808394ef31ba7eeab103dde5980ca840178a1 | 81 | py | Python | pyroll/core/profile/__init__.py | pyroll-project/pyroll-core | f59094d58c2f7493ddc6345b3afc4700ca259681 | [
"BSD-3-Clause"
] | null | null | null | pyroll/core/profile/__init__.py | pyroll-project/pyroll-core | f59094d58c2f7493ddc6345b3afc4700ca259681 | [
"BSD-3-Clause"
] | null | null | null | pyroll/core/profile/__init__.py | pyroll-project/pyroll-core | f59094d58c2f7493ddc6345b3afc4700ca259681 | [
"BSD-3-Clause"
] | null | null | null | from .profile import Profile
from . import hookspecs
from . import base_plugins
| 16.2 | 28 | 0.802469 | 11 | 81 | 5.818182 | 0.545455 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.160494 | 81 | 4 | 29 | 20.25 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
71a398e1f760812592a4ecc5f7fd42bc9da70272 | 20,183 | py | Python | envoy/tests/test_parser.py | remicalixte/integrations-core | b115e18c52820fe1a92495f538fdc14ddf83cfe1 | [
"BSD-3-Clause"
] | 1 | 2021-01-28T01:45:37.000Z | 2021-01-28T01:45:37.000Z | envoy/tests/test_parser.py | remicalixte/integrations-core | b115e18c52820fe1a92495f538fdc14ddf83cfe1 | [
"BSD-3-Clause"
] | 3 | 2021-01-27T04:56:40.000Z | 2021-02-26T06:29:22.000Z | envoy/tests/test_parser.py | remicalixte/integrations-core | b115e18c52820fe1a92495f538fdc14ddf83cfe1 | [
"BSD-3-Clause"
] | 1 | 2019-12-23T13:35:17.000Z | 2019-12-23T13:35:17.000Z | import pytest
from datadog_checks.envoy.errors import UnknownMetric, UnknownTags
from datadog_checks.envoy.metrics import METRIC_PREFIX, METRICS
from datadog_checks.envoy.parser import parse_histogram, parse_metric
def test_unknown_metric():
with pytest.raises(UnknownMetric):
parse_metric('foo.bar')
def test_unknown_tag():
with pytest.raises(UnknownTags):
parse_metric('stats.major.overflow')
def test_runtime():
metric = 'runtime.num_keys'
tags = [tag for tags in METRICS[metric]['tags'] for tag in tags]
assert parse_metric(metric) == (METRIC_PREFIX + metric, list(tags), METRICS[metric]['method'])
def test_cds():
metric = 'cluster_manager.cds.config_reload'
tags = [tag for tags in METRICS[metric]['tags'] for tag in tags]
assert parse_metric(metric) == (METRIC_PREFIX + metric, list(tags), METRICS[metric]['method'])
def test_http_router_filter():
metric = 'http{}.rq_total'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_http_router_filter_vhost():
metric = 'vhost{}.vcluster{}.upstream_rq_time'
untagged_metric = metric.format('', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_vhost_name'
tag1 = 'some_vcluster_name'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1)],
METRICS[untagged_metric]['method'],
)
def test_http_rate_limit():
metric = 'cluster{}.ratelimit.ok'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_route_target_cluster'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_ip_tagging():
metric = 'http{}.ip_tagging{}.hit'
untagged_metric = metric.format('', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tag1 = 'some_tag_name'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1)],
METRICS[untagged_metric]['method'],
)
def test_grpc():
metric = 'cluster{}.grpc{}{}.total'
untagged_metric = metric.format('', '', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_route_target_cluster'
tag1 = 'some_grpc_service'
tag2 = 'some_grpc_method'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1), '.{}'.format(tag2))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1), '{}:{}'.format(tags[2], tag2)],
METRICS[untagged_metric]['method'],
)
def test_dynamodb_operation():
metric = 'http{}.dynamodb.operation{}.upstream_rq_total'
untagged_metric = metric.format('', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tag1 = 'some_operation_name'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1)],
METRICS[untagged_metric]['method'],
)
def test_dynamodb_table():
metric = 'http{}.dynamodb.table{}.upstream_rq_total'
untagged_metric = metric.format('', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tag1 = 'some_table_name'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1)],
METRICS[untagged_metric]['method'],
)
def test_dynamodb_error():
metric = 'http{}.dynamodb.error{}{}'
untagged_metric = metric.format('', '', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tag1 = 'some_table_name'
tag2 = 'error_type'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1), '.{}'.format(tag2))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1), '{}:{}'.format(tags[2], tag2)],
METRICS[untagged_metric]['method'],
)
def test_http_buffer_filter():
metric = 'http{}.buffer.rq_timeout'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_rds():
metric = 'http{}.rds{}.config_reload'
untagged_metric = metric.format('', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tag1 = 'some_route_config_name'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1)],
METRICS[untagged_metric]['method'],
)
def test_tcp_proxy():
metric = 'tcp{}.downstream_cx_total'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_tls():
metric = 'auth.clientssl{}.update_success'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_network_rate_limit():
metric = 'ratelimit{}.total'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_redis():
metric = 'redis{}.downstream_rq_total'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_redis_splitter():
metric = 'redis{}.splitter.invalid_request'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_redis_command():
metric = 'redis{}.command{}.total'
untagged_metric = metric.format('', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tag1 = 'some_command'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1)],
METRICS[untagged_metric]['method'],
)
def test_mongo():
metric = 'mongo{}.op_insert'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_mongo_command():
metric = 'mongo{}.cmd{}.total'
untagged_metric = metric.format('', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tag1 = 'some_command'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1)],
METRICS[untagged_metric]['method'],
)
def test_mongo_collection():
metric = 'mongo{}.collection{}.query.total'
untagged_metric = metric.format('', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tag1 = 'some_collection'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1)],
METRICS[untagged_metric]['method'],
)
def test_listener():
metric = 'listener{}.ssl.ciphers{}'
untagged_metric = metric.format('', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = '0.0.0.0_80'
tag1 = 'some_ciphers'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1)],
METRICS[untagged_metric]['method'],
)
def test_listener_manager():
metric = 'listener_manager.listener_added'
tags = [tag for tags in METRICS[metric]['tags'] for tag in tags]
assert parse_metric(metric) == (METRIC_PREFIX + metric, list(tags), METRICS[metric]['method'])
def test_listener_tls():
metric = 'listener{}.ssl.versions{}'
untagged_metric = metric.format('', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = '0.0.0.0'
tag1 = 'TLSv1.2'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1)],
METRICS[untagged_metric]['method'],
)
def test_listener_curves():
metric = 'listener{}.ssl.curves{}'
untagged_metric = metric.format('', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = '0.0.0.0'
tag1 = 'P-256'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1)],
METRICS[untagged_metric]['method'],
)
def test_listener_sigalgs():
metric = 'listener{}.ssl.sigalgs{}'
untagged_metric = metric.format('', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = '0.0.0.0'
tag1 = 'rsa_pss_rsae_sha256'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1)],
METRICS[untagged_metric]['method'],
)
def test_http():
metric = 'http{}.downstream_cx_total'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_http_user_agent():
metric = 'http{}.user_agent{}.downstream_cx_total'
untagged_metric = metric.format('', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_stat_prefix'
tag1 = 'some_user_agent'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1)],
METRICS[untagged_metric]['method'],
)
def test_http_listener():
metric = 'listener{}.http{}.downstream_rq_2xx'
untagged_metric = metric.format('', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = '0.0.0.0_80'
tag1 = 'some_stat_prefix'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1)],
METRICS[untagged_metric]['method'],
)
def test_http2():
metric = 'http2.rx_reset'
tags = [tag for tags in METRICS[metric]['tags'] for tag in tags]
assert parse_metric(metric) == (METRIC_PREFIX + metric, list(tags), METRICS[metric]['method'])
def test_cluster_manager():
metric = 'cluster_manager.cluster_added'
tags = [tag for tags in METRICS[metric]['tags'] for tag in tags]
assert parse_metric(metric) == (METRIC_PREFIX + metric, list(tags), METRICS[metric]['method'])
def test_cluster():
metric = 'cluster{}.upstream_cx_total'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_name'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_cluster_health_check():
metric = 'cluster{}.health_check.healthy'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_name'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_cluster_outlier_detection():
metric = 'cluster{}.outlier_detection.ejections_enforced_total'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_name'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_cluster_dynamic_http():
metric = 'cluster{}.upstream_rq_time'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_name'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_cluster_dynamic_http_zones():
metric = 'cluster{}.zone{}{}.upstream_rq_time'
untagged_metric = metric.format('', '', '')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_name'
tag1 = 'some_table_name'
tag2 = 'some_to_zone'
tagged_metric = metric.format('.{}'.format(tag0), '.{}'.format(tag1), '.{}'.format(tag2))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0), '{}:{}'.format(tags[1], tag1), '{}:{}'.format(tags[2], tag2)],
METRICS[untagged_metric]['method'],
)
def test_cluster_load_balancer():
metric = 'cluster{}.lb_healthy_panic'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_name'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_cluster_load_balancer_subsets():
metric = 'cluster{}.lb_subsets_active'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'some_name'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_tag_with_dots():
metric = 'cluster{}.lb_healthy_panic'
untagged_metric = metric.format('')
tags = [tag for tags in METRICS[untagged_metric]['tags'] for tag in tags]
tag0 = 'out.alerting-event-evaluator-test.datadog.svc.cluster.local|iperf'
tagged_metric = metric.format('.{}'.format(tag0))
assert parse_metric(tagged_metric) == (
METRIC_PREFIX + untagged_metric,
['{}:{}'.format(tags[0], tag0)],
METRICS[untagged_metric]['method'],
)
def test_no_match():
metric = 'envoy.http.downstream_rq_time'
value = 'No recorded values'
assert list(parse_histogram(metric, value)) == []
def test_ignore_nan():
metric = 'envoy.http.downstream_rq_time'
value = 'P0(0,0) P25(nan,0)'
assert list(parse_histogram(metric, value)) == [('envoy.http.downstream_rq_time.0percentile', 0.0)]
def test_correct():
metric = 'envoy.http.downstream_rq_time'
value = (
'P0(0,0) P25(25,0) P50(50,0) P75(75,0) P90(90,1.06) P95(95,1.08) '
'P99(99,1.096) P99.9(99.9,1.0996) P100(100,1.1)'
)
assert list(parse_histogram(metric, value)) == [
('envoy.http.downstream_rq_time.0percentile', 0.0),
('envoy.http.downstream_rq_time.25percentile', 25.0),
('envoy.http.downstream_rq_time.50percentile', 50.0),
('envoy.http.downstream_rq_time.75percentile', 75.0),
('envoy.http.downstream_rq_time.90percentile', 90.0),
('envoy.http.downstream_rq_time.95percentile', 95.0),
('envoy.http.downstream_rq_time.99percentile', 99.0),
('envoy.http.downstream_rq_time.99_9percentile', 99.9),
('envoy.http.downstream_rq_time.100percentile', 100.0),
]
def test_correct_unknown_percentile():
metric = 'envoy.http.downstream_rq_time'
value = 'P0(0,0) P25(25,0) P55.5(55.5,0)'
assert list(parse_histogram(metric, value)) == [
('envoy.http.downstream_rq_time.0percentile', 0.0),
('envoy.http.downstream_rq_time.25percentile', 25.0),
('envoy.http.downstream_rq_time.55_5percentile', 55.5),
]
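
# For orientation, a minimal reference sketch of the histogram parsing the
# tests above exercise. Illustrative only (hence the leading underscore); the
# real parse_histogram is imported from the check itself.
import re as _re

def _reference_parse_histogram(metric, value):
    # Each summary looks like "P<percentile>(<interval>,<cumulative>)"; nan
    # intervals are skipped and dots in the percentile become underscores.
    for _match in _re.finditer(r'P(\d+(?:\.\d+)?)\(([^,]+),', value):
        percentile, interval = _match.groups()
        if interval == 'nan':
            continue
        yield '{}.{}percentile'.format(metric, percentile.replace('.', '_')), float(interval)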
| 34.092905 | 103 | 0.635337 | 2,455 | 20,183 | 4.995519 | 0.076986 | 0.155251 | 0.099804 | 0.044521 | 0.845972 | 0.841243 | 0.827789 | 0.821184 | 0.821184 | 0.817841 | 0 | 0.024114 | 0.190457 | 20,183 | 591 | 104 | 34.150592 | 0.726483 | 0 | 0 | 0.649891 | 0 | 0.006565 | 0.176782 | 0.087896 | 0 | 0 | 0 | 0 | 0.094092 | 1 | 0.098468 | false | 0 | 0.008753 | 0 | 0.107221 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
71aac7d7679f09e4dc4f7f5153ff8b2db6c27df5 | 123 | py | Python | utz/git/log.py | ryan-williams/jupyter-rc | 362d9d758498de317976006ce7dfc65e2bce2c57 | [
"MIT"
] | null | null | null | utz/git/log.py | ryan-williams/jupyter-rc | 362d9d758498de317976006ce7dfc65e2bce2c57 | [
"MIT"
] | 2 | 2020-11-18T15:36:35.000Z | 2021-06-25T18:21:32.000Z | utz/git/log.py | runsascoded/utz | 362d9d758498de317976006ce7dfc65e2bce2c57 | [
"MIT"
] | 1 | 2020-06-10T23:14:00.000Z | 2020-06-10T23:14:00.000Z |
from ..process import output
def msg(ref=None):
    """Return the full commit message (``--format=%B``) for ``ref``.

    ``ref=None`` is assumed to be dropped by ``output`` so the command
    falls back to HEAD.
    """
    return output('git', 'log', '-n1', '--format=%B', ref).decode().strip()
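
# Examples (hypothetical refs, assuming execution inside a git checkout):
#   msg()          -> message body of HEAD
#   msg('HEAD~1')  -> message body of the previous commit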
| 17.571429 | 71 | 0.626016 | 18 | 123 | 4.277778 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009259 | 0.121951 | 123 | 6 | 72 | 20.5 | 0.703704 | 0 | 0 | 0 | 0 | 0 | 0.163934 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
71d3eec0407fe8495ff6092382e949b625526343 | 120 | py | Python | nbmolviz/methods/__init__.py | jparkhill/notebook-molecular-visualization | 2dd61fedcf363d7362b727669b86c5f1c07656fd | [
"Apache-2.0"
] | 55 | 2016-07-21T23:25:59.000Z | 2022-02-14T01:04:49.000Z | nbmolviz/methods/__init__.py | jparkhill/notebook-molecular-visualization | 2dd61fedcf363d7362b727669b86c5f1c07656fd | [
"Apache-2.0"
] | 40 | 2016-07-26T20:57:04.000Z | 2021-09-06T02:31:52.000Z | nbmolviz/methods/__init__.py | Autodesk/notebook-molecular-visualization | 2dd61fedcf363d7362b727669b86c5f1c07656fd | [
"Apache-2.0"
] | 18 | 2016-07-25T21:49:02.000Z | 2020-10-03T11:17:03.000Z | from .atoms import *
from .atomgroups import *
from .molecules import *
from .method import *
from .trajectory import *
| 20 | 25 | 0.75 | 15 | 120 | 6 | 0.466667 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 120 | 5 | 26 | 24 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
71dc719106d8066b19aa03a04ea03a61f0aa9d2d | 290 | py | Python | python/graphscope/experimental/nx/tests/algorithms/forward/test_efficiency.py | wenyuanyu/GraphScope | a40ccaf70557e608d8b091eb25ab04477f99ce21 | [
"Apache-2.0"
] | 2 | 2020-12-15T08:42:10.000Z | 2022-01-14T09:13:16.000Z | python/graphscope/experimental/nx/tests/algorithms/forward/test_efficiency.py | wenyuanyu/GraphScope | a40ccaf70557e608d8b091eb25ab04477f99ce21 | [
"Apache-2.0"
] | 1 | 2020-12-22T13:15:40.000Z | 2020-12-22T13:15:40.000Z | python/graphscope/experimental/nx/tests/algorithms/forward/test_efficiency.py | wenyuanyu/GraphScope | a40ccaf70557e608d8b091eb25ab04477f99ce21 | [
"Apache-2.0"
] | 1 | 2021-11-23T03:40:43.000Z | 2021-11-23T03:40:43.000Z | import networkx.algorithms.tests.test_efficiency
import pytest
from graphscope.experimental.nx.utils.compat import import_as_graphscope_nx
import_as_graphscope_nx(networkx.algorithms.tests.test_efficiency,
decorators=pytest.mark.usefixtures("graphscope_session"))
| 36.25 | 81 | 0.806897 | 34 | 290 | 6.617647 | 0.529412 | 0.16 | 0.204444 | 0.24 | 0.328889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124138 | 290 | 7 | 82 | 41.428571 | 0.885827 | 0 | 0 | 0 | 0 | 0 | 0.062069 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.8 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e07acd0bbc45233a58c392b320cfd846a1776e04 | 160 | py | Python | AcademicDealerBackend/projectinfo/admin.py | Acciente717/AcademicDealerBackend | 8024725f88997fa430fa92e1caa28161ffbb06f6 | [
"MIT"
] | 5 | 2019-03-10T06:57:15.000Z | 2019-03-17T03:04:40.000Z | AcademicDealerBackend/projectinfo/admin.py | Acciente717/AcademicDealerBackend | 8024725f88997fa430fa92e1caa28161ffbb06f6 | [
"MIT"
] | 11 | 2019-05-14T15:13:48.000Z | 2019-05-31T15:31:33.000Z | AcademicDealerBackend/projectinfo/admin.py | Acciente717/AcademicDealerBackend | 8024725f88997fa430fa92e1caa28161ffbb06f6 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Topic, Project, Reply
admin.site.register(Topic)
admin.site.register(Project)
admin.site.register(Reply)
| 20 | 41 | 0.80625 | 23 | 160 | 5.608696 | 0.478261 | 0.209302 | 0.395349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 160 | 7 | 42 | 22.857143 | 0.889655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e08a39dea0e90bfce3cd095df91308b4a2a84377 | 30 | py | Python | testmockvpython.py | jonschull/Lyte | e9ba2bb1b07c9398b81a6f591898d2474d1a4609 | [
"MIT"
] | 1 | 2018-06-07T17:54:27.000Z | 2018-06-07T17:54:27.000Z | testmockvpython.py | jonschull/Lyte | e9ba2bb1b07c9398b81a6f591898d2474d1a4609 | [
"MIT"
] | 1 | 2018-06-28T05:08:57.000Z | 2018-06-28T05:08:57.000Z | testmockvpython.py | jonschull/Lyte | e9ba2bb1b07c9398b81a6f591898d2474d1a4609 | [
"MIT"
] | null | null | null | import mockvpython
print('hi') | 15 | 18 | 0.8 | 4 | 30 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 30 | 2 | 19 | 15 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.064516 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
e093284d80c715352055e4d617830d17ac985ea3 | 11,784 | py | Python | test/vanilla/low-level/Expected/AcceptanceTests/MediaTypesLowLevel/mediatypeslowlevel/rest/_request_builders_py3.py | cfculhane/autorest.python | 8cbca95faee88d933a58bbbd17b76834faa8d387 | [
"MIT"
] | null | null | null | test/vanilla/low-level/Expected/AcceptanceTests/MediaTypesLowLevel/mediatypeslowlevel/rest/_request_builders_py3.py | cfculhane/autorest.python | 8cbca95faee88d933a58bbbd17b76834faa8d387 | [
"MIT"
] | null | null | null | test/vanilla/low-level/Expected/AcceptanceTests/MediaTypesLowLevel/mediatypeslowlevel/rest/_request_builders_py3.py | cfculhane/autorest.python | 8cbca95faee88d933a58bbbd17b76834faa8d387 | [
"MIT"
] | 1 | 2022-03-28T08:58:03.000Z | 2022-03-28T08:58:03.000Z | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import Any, Dict, IO, Optional, TypeVar, Union
from azure.core.rest import HttpRequest
from msrest import Serializer
T = TypeVar("T")
JSONType = Any
_SERIALIZER = Serializer()
_SERIALIZER.client_side_validation = False
def build_analyze_body_request(*, json: JSONType = None, content: Any = None, **kwargs: Any) -> HttpRequest:
"""Analyze body, that could be different media types.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
into your code flow.
:keyword json: Pass in a JSON-serializable object (usually a dictionary). See the template in
our example to find the input shape. Input parameter.
:paramtype json: JSONType
:keyword content: Pass in binary content you want in the body of the request (typically bytes,
a byte iterator, or stream input). Input parameter.
:paramtype content: any
:keyword str content_type: Media type of the body sent to the API. Default value is
"application/json". Allowed values are: "application/pdf", "image/jpeg", "image/png",
"image/tiff", "application/json."
:return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
`send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
incorporate this response into your code flow.
:rtype: ~azure.core.rest.HttpRequest
Example:
.. code-block:: python
# JSON input template you can fill out and use as your body input.
json = b'bytes' # Optional.
"""
content_type = kwargs.pop("content_type", None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = "/mediatypes/analyze"
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str")
header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="POST", url=url, headers=header_parameters, json=json, content=content, **kwargs)
def build_analyze_body_no_accept_header_request(
*, json: JSONType = None, content: Any = None, **kwargs: Any
) -> HttpRequest:
"""Analyze body, that could be different media types. Adds to AnalyzeBody by not having an accept
type.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
into your code flow.
:keyword json: Pass in a JSON-serializable object (usually a dictionary). See the template in
our example to find the input shape. Input parameter.
:paramtype json: JSONType
:keyword content: Pass in binary content you want in the body of the request (typically bytes,
a byte iterator, or stream input). Input parameter.
:paramtype content: any
:keyword str content_type: Media type of the body sent to the API. Default value is
"application/json". Allowed values are: "application/pdf", "image/jpeg", "image/png",
"image/tiff", "application/json."
:return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
`send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
incorporate this response into your code flow.
:rtype: ~azure.core.rest.HttpRequest
Example:
.. code-block:: python
# JSON input template you can fill out and use as your body input.
json = b'bytes' # Optional.
"""
content_type = kwargs.pop("content_type", None) # type: Optional[str]
# Construct URL
url = "/mediatypes/analyzeNoAccept"
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str")
return HttpRequest(method="POST", url=url, headers=header_parameters, json=json, content=content, **kwargs)
def build_content_type_with_encoding_request(*, content: Any = None, **kwargs: Any) -> HttpRequest:
"""Pass in contentType 'text/plain; charset=UTF-8' to pass test. Value for input does not matter.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
into your code flow.
:keyword content: Pass in binary content you want in the body of the request (typically bytes,
a byte iterator, or stream input). Input parameter.
:paramtype content: any
:return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
`send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
incorporate this response into your code flow.
:rtype: ~azure.core.rest.HttpRequest
"""
content_type = kwargs.pop("content_type", None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = "/mediatypes/contentTypeWithEncoding"
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str")
header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="POST", url=url, headers=header_parameters, content=content, **kwargs)
def build_binary_body_with_two_content_types_request(
*, json: JSONType = None, content: Any = None, **kwargs: Any
) -> HttpRequest:
"""Binary body with two content types. Pass in of {'hello': 'world'} for the application/json
content type, and a byte stream of 'hello, world!' for application/octet-stream.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
into your code flow.
:keyword json: Pass in a JSON-serializable object (usually a dictionary). See the template in
our example to find the input shape. The payload body.
:paramtype json: JSONType
:keyword content: Pass in binary content you want in the body of the request (typically bytes,
a byte iterator, or stream input). The payload body.
:paramtype content: any
:return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
`send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
incorporate this response into your code flow.
:rtype: ~azure.core.rest.HttpRequest
Example:
.. code-block:: python
# JSON input template you can fill out and use as your body input.
json = b'bytes' # Optional.
"""
content_type = kwargs.pop("content_type", None) # type: Optional[str]
accept = "text/plain"
# Construct URL
url = "/mediatypes/binaryBodyTwoContentTypes"
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str")
header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="POST", url=url, headers=header_parameters, json=json, content=content, **kwargs)
def build_binary_body_with_three_content_types_request(
*, json: JSONType = None, content: Any = None, **kwargs: Any
) -> HttpRequest:
"""Binary body with three content types. Pass in string 'hello, world' with content type
    'text/plain', {'hello': 'world'} with content type 'application/json', and a byte string for
'application/octet-stream'.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
into your code flow.
:keyword json: Pass in a JSON-serializable object (usually a dictionary). See the template in
our example to find the input shape. The payload body.
:paramtype json: JSONType
:keyword content: Pass in binary content you want in the body of the request (typically bytes,
a byte iterator, or stream input). The payload body.
:paramtype content: any
:keyword str content_type: Media type of the body sent to the API. Default value is
"application/json". Allowed values are: "application/json", "application/octet-stream",
"text/plain."
:return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
`send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
incorporate this response into your code flow.
:rtype: ~azure.core.rest.HttpRequest
Example:
.. code-block:: python
# JSON input template you can fill out and use as your body input.
json = b'bytes' # Optional.
"""
content_type = kwargs.pop("content_type", None) # type: Optional[str]
accept = "text/plain"
# Construct URL
url = "/mediatypes/binaryBodyThreeContentTypes"
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str")
header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="POST", url=url, headers=header_parameters, json=json, content=content, **kwargs)
def build_put_text_and_json_body_request(*, json: JSONType = None, content: Any = None, **kwargs: Any) -> HttpRequest:
"""Body that's either text/plain or application/json.
See https://aka.ms/azsdk/python/protocol/quickstart for how to incorporate this request builder
into your code flow.
:keyword json: Pass in a JSON-serializable object (usually a dictionary). See the template in
our example to find the input shape. The payload body.
:paramtype json: JSONType
:keyword content: Pass in binary content you want in the body of the request (typically bytes,
a byte iterator, or stream input). The payload body.
:paramtype content: any
:keyword str content_type: Media type of the body sent to the API. Default value is
"application/json". Allowed values are: "text/plain", "application/json."
:return: Returns an :class:`~azure.core.rest.HttpRequest` that you will pass to the client's
`send_request` method. See https://aka.ms/azsdk/python/protocol/quickstart for how to
incorporate this response into your code flow.
:rtype: ~azure.core.rest.HttpRequest
Example:
.. code-block:: python
# JSON input template you can fill out and use as your body input.
json = "str" # Optional.
"""
content_type = kwargs.pop("content_type", None) # type: Optional[str]
accept = "text/plain"
# Construct URL
url = "/mediatypes/textAndJson"
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str")
header_parameters["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="POST", url=url, headers=header_parameters, json=json, content=content, **kwargs)
| 44.977099 | 118 | 0.696283 | 1,568 | 11,784 | 5.161352 | 0.116709 | 0.059805 | 0.020882 | 0.019276 | 0.880143 | 0.873965 | 0.864821 | 0.864821 | 0.857902 | 0.857902 | 0 | 0.00021 | 0.190088 | 11,784 | 261 | 119 | 45.149425 | 0.847758 | 0.618551 | 0 | 0.692308 | 0 | 0 | 0.154965 | 0.040371 | 0 | 0 | 0 | 0 | 0 | 1 | 0.092308 | false | 0 | 0.046154 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e09d4b2ccb6eb24372ef26bf110333f313a43652 | 173 | py | Python | department_app/service/__init__.py | DarishkaAMS/Flask_Projects-EPAM-CRM | ecf8ffc0f6eb8b9b8796d195fea3e3d0ce98d2f9 | [
"MIT"
] | 2 | 2021-11-13T18:18:55.000Z | 2021-12-05T09:39:20.000Z | department_app/service/__init__.py | DarishkaAMS/Flask_Projects-EPAM-CRM | ecf8ffc0f6eb8b9b8796d195fea3e3d0ce98d2f9 | [
"MIT"
] | null | null | null | department_app/service/__init__.py | DarishkaAMS/Flask_Projects-EPAM-CRM | ecf8ffc0f6eb8b9b8796d195fea3e3d0ce98d2f9 | [
"MIT"
] | null | null | null | """
__init__.py file of service module with
imported employee_service and department_service submodules
"""
from . import employee_service
from . import department_service
| 21.625 | 59 | 0.820809 | 22 | 173 | 6.090909 | 0.636364 | 0.223881 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127168 | 173 | 7 | 60 | 24.714286 | 0.887417 | 0.572254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0e358b2c98487b271a8d739e5350c0640c6cdbb | 4,811 | py | Python | beartype_test/a00_unit/a10_pep/test_pep593.py | posita/beartype | e56399686e1f2ffd5128a4030b19314504e32450 | [
"MIT"
] | 1,056 | 2020-04-03T10:21:29.000Z | 2022-03-28T12:38:16.000Z | beartype_test/a00_unit/a10_pep/test_pep593.py | posita/beartype | e56399686e1f2ffd5128a4030b19314504e32450 | [
"MIT"
] | 107 | 2020-04-04T06:00:16.000Z | 2022-03-29T18:58:50.000Z | beartype_test/a00_unit/a10_pep/test_pep593.py | posita/beartype | e56399686e1f2ffd5128a4030b19314504e32450 | [
"MIT"
] | 30 | 2020-10-06T19:14:25.000Z | 2022-03-02T08:02:59.000Z | #!/usr/bin/env python3
# --------------------( LICENSE )--------------------
# Copyright (c) 2014-2021 Beartype authors.
# See "LICENSE" for further details.
'''
**Beartype** :pep:`593` **unit tests.**
This submodule unit tests :pep:`593` support implemented in the
:func:`beartype.beartype` decorator.
'''
# ....................{ IMPORTS }....................
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# WARNING: To raise human-readable test errors, avoid importing from
# package-specific submodules at module scope.
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# ....................{ TESTS ~ validators }....................
def test_die_unless_hint_pep593() -> None:
'''
Test the
    :func:`beartype._util.hint.pep.proposal.utilpep593.die_unless_hint_pep593`
validator.
'''
# Defer heavyweight imports.
from beartype.roar import BeartypeDecorHintPep593Exception
from beartype._util.hint.pep.proposal.utilpep593 import (
die_unless_hint_pep593)
from beartype_test.util.mod.pytmodimport import (
import_module_typing_any_attr_or_none_safe)
from pytest import raises
from typing import Optional
# "typing.Annotated" type hint factory imported from either the "typing" or
# "typing_extensions" modules if importable *OR* "None" otherwise.
Annotated = import_module_typing_any_attr_or_none_safe('Annotated')
# If this factory exists, assert this validator avoids raising exceptions
# for a type hint subscripting this factory.
if Annotated is not None:
die_unless_hint_pep593(Annotated[Optional[str], int])
# Assert this validator raises the expected exception for an arbitrary
# PEP-compliant type hint *NOT* subscripting this factory.
with raises(BeartypeDecorHintPep593Exception):
die_unless_hint_pep593(Optional[str])
# ....................{ TESTS ~ getters }....................
def test_get_hint_pep593_metadata() -> None:
'''
Test the
    :func:`beartype._util.hint.pep.proposal.utilpep593.get_hint_pep593_metadata`
getter.
'''
# Defer heavyweight imports.
from beartype.roar import BeartypeDecorHintPep593Exception
from beartype._util.hint.pep.proposal.utilpep593 import (
get_hint_pep593_metadata)
from beartype_test.util.mod.pytmodimport import (
import_module_typing_any_attr_or_none_safe)
from pytest import raises
from typing import Optional
# "typing.Annotated" type hint factory imported from either the "typing" or
# "typing_extensions" modules if importable *OR* "None" otherwise.
Annotated = import_module_typing_any_attr_or_none_safe('Annotated')
# If this factory exists, assert this getter returns the expected tuple for
# an arbitrary PEP 593-compliant type hint.
if Annotated is not None:
assert get_hint_pep593_metadata(Annotated[
Optional[str],
'Thy', 'caverns', 'echoing', 'to', 'the', "Arve's", 'commotion,'
]) == (
'Thy', 'caverns', 'echoing', 'to', 'the', "Arve's", 'commotion,')
# Assert this getter raises the expected exception for an arbitrary
# PEP-compliant type hint *NOT* subscripting this factory.
with raises(BeartypeDecorHintPep593Exception):
get_hint_pep593_metadata(Optional[str])
def test_get_hint_pep593_metahint() -> None:
'''
Test the
    :func:`beartype._util.hint.pep.proposal.utilpep593.get_hint_pep593_metahint`
getter.
'''
# Defer heavyweight imports.
from beartype.roar import BeartypeDecorHintPep593Exception
from beartype._util.hint.pep.proposal.utilpep593 import (
get_hint_pep593_metahint)
from beartype_test.util.mod.pytmodimport import (
import_module_typing_any_attr_or_none_safe)
from pytest import raises
from typing import Optional
# "typing.Annotated" type hint factory imported from either the "typing" or
# "typing_extensions" modules if importable *OR* "None" otherwise.
Annotated = import_module_typing_any_attr_or_none_safe('Annotated')
# If this factory exists, assert this getter returns the expected
# PEP-compliant type hint for an arbitrary PEP 593-compliant type hint.
if Annotated is not None:
metahint = Optional[int]
assert get_hint_pep593_metahint(Annotated[
metahint,
'A', 'loud', 'lone', 'sound', 'no', 'other', 'sound', 'can', 'tame'
]) is metahint
# Assert this getter raises the expected exception for an arbitrary
# PEP-compliant type hint *NOT* subscripting this factory.
with raises(BeartypeDecorHintPep593Exception):
get_hint_pep593_metahint(Optional[str])
| 40.428571 | 79 | 0.657867 | 539 | 4,811 | 5.682746 | 0.217069 | 0.048972 | 0.042442 | 0.037218 | 0.736206 | 0.716618 | 0.716618 | 0.716618 | 0.693111 | 0.67744 | 0 | 0.026411 | 0.197256 | 4,811 | 118 | 80 | 40.771186 | 0.766701 | 0.482228 | 0 | 0.574468 | 0 | 0 | 0.057071 | 0 | 0 | 0 | 0 | 0 | 0.042553 | 1 | 0.06383 | false | 0 | 0.446809 | 0 | 0.510638 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0ec80212f6e3d6872c432543c46f2ef3abc82fa | 8,388 | py | Python | ms_graph/drive_items.py | areed1192/ms-graph-python-client | dad30327f575b3a76cb1b0b7000b2935c16c511a | [
"MIT"
] | 39 | 2020-11-24T20:46:02.000Z | 2022-03-29T13:47:09.000Z | ms_graph/drive_items.py | areed1192/ms-graph-python-client | dad30327f575b3a76cb1b0b7000b2935c16c511a | [
"MIT"
] | 7 | 2021-02-11T11:43:27.000Z | 2022-01-25T06:04:41.000Z | ms_graph/drive_items.py | areed1192/ms-graph-python-client | dad30327f575b3a76cb1b0b7000b2935c16c511a | [
"MIT"
] | 34 | 2020-10-28T10:47:37.000Z | 2022-02-02T09:50:04.000Z | from typing import Dict
from ms_graph.session import GraphSession
class DriveItems():
"""
## Overview:
----
The driveItem resource represents a file, folder,
or other item stored in a drive. All file system
objects in OneDrive and SharePoint are returned as
driveItem resources.
"""
def __init__(self, session: object) -> None:
"""Initializes the `DriveItems` object.
### Parameters
----
session : object
An authenticated session for our Microsoft Graph Client.
"""
# Set the session.
self.graph_session: GraphSession = session
# Set the endpoint.
self.endpoint = 'drive'
self.collections_endpoint = 'drives/'
def get_drive_item(self, drive_id: str, item_id: str) -> Dict:
"""Grab's a DriveItem Resource using the Item ID and Drive ID.
### Parameters
----
drive_id : str
The Drive ID in which the resource exist.
item_id : str
The item ID of the object you want to
return.
### Returns
----
Dict:
A DriveItem resource object.
"""
content = self.graph_session.make_request(
method='get',
endpoint=self.collections_endpoint + "/{drive_id}/items/{item_id}".format(
drive_id=drive_id,
item_id=item_id
)
)
return content
def get_drive_item_by_path(self, drive_id: str, item_path: str) -> Dict:
"""Grab's a DriveItem Resource using the Item ID and Drive ID.
### Parameters
----
drive_id : str
The Drive ID in which the resource exist.
item_path : str
The path to the Item.
### Returns
----
Dict:
A DriveItem resource object.
"""
content = self.graph_session.make_request(
method='get',
endpoint=self.collections_endpoint + "/{drive_id}/root:/{path}".format(
drive_id=drive_id,
path=item_path
)
)
return content
def get_group_drive_item(self, group_id: str, item_id: str) -> Dict:
"""Grab's a DriveItem Resource using the Item ID and Drive ID.
### Parameters
----
group_id : str
The Group ID in which the resource exist.
item_id : str
The item ID of the object you want to
return.
### Returns
----
Dict:
A DriveItem resource object.
"""
content = self.graph_session.make_request(
method='get',
endpoint="/groups/{group_id}/drive/items/{item_id}".format(
group_id=group_id,
item_id=item_id
)
)
return content
def get_group_drive_item_by_path(self, group_id: str, item_path: str) -> Dict:
"""Grab's a DriveItem Resource using the Item ID and Drive ID.
### Parameters
----
drive_id : str
The Drive ID in which the resource exist.
item_path : str
The path to the Item.
### Returns
----
Dict:
A DriveItem resource object.
"""
content = self.graph_session.make_request(
method='get',
endpoint="/groups/{group_id}/drive/root:/{item_path}".format(
group_id=group_id,
item_path=item_path
)
)
return content
def get_my_drive_item(self, item_id: str) -> Dict:
"""Grab's a DriveItem Resource using the Item ID and Drive ID.
### Parameters
----
item_id : str
The item ID of the object you want to
return.
### Returns
----
Dict:
A DriveItem resource object.
"""
content = self.graph_session.make_request(
method='get',
endpoint="/me/drive/items/{item_id}".format(
item_id=item_id
)
)
return content
def get_my_drive_item_by_path(self, item_path: str) -> Dict:
"""Grab's a DriveItem Resource using the Item ID and Drive ID.
### Parameters
----
item_path : str
The path to the Item.
### Returns
----
Dict:
A DriveItem resource object.
"""
content = self.graph_session.make_request(
method='get',
endpoint="/me/drive/root:/{item_path}".format(
item_path=item_path
)
)
return content
def get_site_drive_item(self, site_id: str, item_id: str) -> Dict:
"""Grab's a DriveItem Resource using the Item ID and Drive ID.
### Parameters
----
site_id : str
The site ID which to query the item from.
item_id : str
The item ID of the object you want to
return.
### Returns
----
Dict:
A DriveItem resource object.
"""
content = self.graph_session.make_request(
method='get',
endpoint="/sites/{site_id}/drive/items/{item_id}".format(
site_id=site_id,
item_id=item_id
)
)
return content
def get_site_drive_item_by_path(self, site_id: str, item_path: str) -> Dict:
"""Grab's a DriveItem Resource using the Item ID and Drive ID.
### Parameters
----
site_id : str
The site ID which to query the item from.
item_path : str
The path to the Item.
### Returns
----
Dict:
A DriveItem resource object.
"""
content = self.graph_session.make_request(
method='get',
endpoint="/sites/{site_id}/drive/root:/{item_path}".format(
site_id=site_id,
item_path=item_path
)
)
return content
def get_site_drive_item_from_list(self, site_id: str, list_id: str, item_id: str) -> Dict:
"""Grab's a DriveItem Resource using the Item ID and Drive ID.
### Parameters
----
site_id : str
The site ID which to query the item from.
list_id : str
The list ID which to query the item from.
item_id : str
The item ID of the object you want to
return.
### Returns
----
Dict:
A DriveItem resource object.
"""
content = self.graph_session.make_request(
method='get',
endpoint="/sites/{site_id}/lists/{list_id}/items/{item_id}/driveItem".format(
site_id=site_id,
list_id=list_id,
item_id=item_id
)
)
return content
def get_user_drive_item(self, user_id: str, item_id: str) -> Dict:
"""Grab's a DriveItem Resource using the Item ID and Drive ID.
### Parameters
----
user_id : str
The User ID which to query the item from.
item_id : str
The item ID of the object you want to
return.
### Returns
----
Dict:
A DriveItem resource object.
"""
content = self.graph_session.make_request(
method='get',
endpoint="/users/{user_id}/drive/items/{item_id}".format(
user_id=user_id,
item_id=item_id
)
)
return content
def get_user_drive_item_by_path(self, user_id: str, item_path: str) -> Dict:
"""Grab's a DriveItem Resource using the Item ID and Drive ID.
### Parameters
----
site_id : str
The User ID which to query the item from.
item_path : str
The path to the Item.
### Returns
----
Dict:
A DriveItem resource object.
"""
content = self.graph_session.make_request(
method='get',
endpoint="/users/{user_id}/drive/root:/{item_path}".format(
user_id=user_id,
item_path=item_path
)
)
return content
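
# Usage sketch (the session wiring and path below are hypothetical; an
# authenticated ``GraphSession`` is required):
#
#     drive_items = DriveItems(session=graph_session)
#     item = drive_items.get_my_drive_item_by_path(item_path="Documents/report.docx")
#     print(item.get("name"), item.get("size"))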
| 25.418182 | 94 | 0.520148 | 964 | 8,388 | 4.342324 | 0.088174 | 0.067367 | 0.094601 | 0.031534 | 0.857143 | 0.808887 | 0.768514 | 0.738653 | 0.731247 | 0.69828 | 0 | 0 | 0.388889 | 8,388 | 329 | 95 | 25.495441 | 0.816621 | 0.370768 | 0 | 0.495238 | 0 | 0 | 0.107532 | 0.096634 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114286 | false | 0 | 0.019048 | 0 | 0.247619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e0eccfe1101a2bc126b23bf30ffd6c0dd1c564be | 38 | py | Python | tapiriik/services/Smashrun/__init__.py | prohfesor/tapiriik | 0c476f8bb6b3d51674f0117b054777405ff2ee0d | [
"Apache-2.0"
] | 1,445 | 2015-01-01T21:43:31.000Z | 2022-03-17T13:40:23.000Z | tapiriik/services/Smashrun/__init__.py | prohfesor/tapiriik | 0c476f8bb6b3d51674f0117b054777405ff2ee0d | [
"Apache-2.0"
] | 441 | 2015-01-02T03:37:49.000Z | 2022-03-31T18:18:03.000Z | tapiriik/services/Smashrun/__init__.py | prohfesor/tapiriik | 0c476f8bb6b3d51674f0117b054777405ff2ee0d | [
"Apache-2.0"
] | 333 | 2015-01-06T12:14:15.000Z | 2022-03-27T19:58:48.000Z | from .smashrun import SmashrunService
| 19 | 37 | 0.868421 | 4 | 38 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1cc55745fd85dcaa6e80e02a57d5e37bdc3a531d | 341 | py | Python | py_rete/__init__.py | benthomasson/py_rete | b21e06e891991fe790c95a745cbd076aae9a426d | [
"MIT"
] | 18 | 2020-07-13T16:13:05.000Z | 2022-03-18T00:18:51.000Z | py_rete/__init__.py | benthomasson/py_rete | b21e06e891991fe790c95a745cbd076aae9a426d | [
"MIT"
] | null | null | null | py_rete/__init__.py | benthomasson/py_rete | b21e06e891991fe790c95a745cbd076aae9a426d | [
"MIT"
] | 6 | 2020-04-16T08:44:53.000Z | 2022-02-22T01:07:58.000Z | from py_rete.fact import Fact # noqa F401
from py_rete.common import V # noqa F401
from py_rete.production import Production # noqa F401
from py_rete.network import ReteNetwork # noqa F401
from py_rete.conditions import AND # noqa F401
from py_rete.conditions import Cond # noqa F401
from py_rete.conditions import Filter # noqa F401
| 42.625 | 54 | 0.794721 | 56 | 341 | 4.714286 | 0.285714 | 0.159091 | 0.265152 | 0.318182 | 0.590909 | 0.386364 | 0.386364 | 0 | 0 | 0 | 0 | 0.073684 | 0.164223 | 341 | 7 | 55 | 48.714286 | 0.852632 | 0.202346 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e8133419bc46d63c755e090e97bcd07a4552db70 | 4,792 | py | Python | src/resources/classes/enemy.py | 41y08h/warah-killer | d1e0270b95b2abfa5d0eeae3cfb0b04b3b68a358 | [
"MIT"
] | 1 | 2020-04-09T08:03:29.000Z | 2020-04-09T08:03:29.000Z | resources/classes/enemy.py | TaffoVelikoff/ViolentCityGame | 53151a2686282b68d36685927931d31f883a2594 | [
"MIT"
] | null | null | null | resources/classes/enemy.py | TaffoVelikoff/ViolentCityGame | 53151a2686282b68d36685927931d31f883a2594 | [
"MIT"
] | null | null | null | import os
import pygame
import random
import globals
from os import listdir
from resources import game
from resources import colors
from os.path import isfile, join
class Enemy(pygame.sprite.Sprite):
selectedPos = 'left'
speed = 0
spriteWidth = 200
# Constructor
def __init__(self):
# Call the parent class (Sprite) constructor
pygame.sprite.Sprite.__init__(self)
# Enemy image
self.image = pygame.image.load(os.path.join(globals.data_dir, 'img/enemy/01.png')).convert_alpha()
self.rect = self.image.get_rect()
# Speed
        if not globals.debug:
            self.speed = 5 + globals.level
        else:
            # Keep enemies stationary in debug mode. The original assigned the
            # unused ``globals.speed``; ``self.speed`` is what update() reads.
            self.speed = 0
# Create mask
self.mask = pygame.mask.from_surface(self.image)
'''
The enemy can appear at 4 different places on the screen
and start moving in 4 different possitions
'''
# Position to place
positions = ('left', 'right', 'up', 'down')
self.selectedPos = random.choice(positions)
# Place sprite on screen (based on the randomly selected position)
if self.selectedPos == 'left':
self.rect.center = (0, random.randint(self.spriteWidth/2, globals.winHeight - self.spriteWidth/2))
elif self.selectedPos == 'right':
self.rect.center = (globals.winWidth, random.randint(self.spriteWidth/2, globals.winHeight - self.spriteWidth/2))
elif self.selectedPos == 'up':
self.rect.center = (random.randint(self.spriteWidth/2, globals.winWidth - self.spriteWidth/2), 0)
else:
self.rect.center = (random.randint(self.spriteWidth/2, globals.winWidth - self.spriteWidth/2), globals.winHeight)
def update(self):
self.image.set_colorkey((0, 0, 0))
if self.selectedPos == 'left':
self.rect.x += self.speed
elif self.selectedPos == 'right':
self.rect.x -= self.speed
elif self.selectedPos == 'up':
self.rect.y += self.speed
else:
self.rect.y -= self.speed
class Blood(pygame.sprite.Sprite):
frames = []
# Constructor
def __init__(self, x = None, y = None):
# Counter
self.steps = 0
# Call the parent class (Sprite) constructor
pygame.sprite.Sprite.__init__(self)
# Get all frames
path = "data/img/enemy/explode/"
frames = [f for f in listdir(path) if isfile(join(path, f))]
# Put all frames in a list of Pygame images
self.images = []
for frame in frames:
self.images.append(pygame.image.load(path + frame).convert_alpha())
self.index = 0
self.image = self.images[self.index]
self.rect = self.image.get_rect()
# Position
mousePos = pygame.mouse.get_pos()
self.rect.x = mousePos[0] - 79
self.rect.y = mousePos[1] - 115
def update(self):
# ANIMATE AND KILL AFTER ANIMATION
if self.index >= len(self.images):
self.kill()
else:
self.image = self.images[self.index]
# This will slow down animation
if self.steps % 2 == 0:
self.index += 1
self.steps += 1
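
# Usage sketch (assumed wiring; the real game loop lives elsewhere in the
# project): these sprites are meant to live in a pygame.sprite.Group that is
# updated and drawn once per frame, e.g.
#
#     all_sprites = pygame.sprite.Group(Enemy(), Beer())
#     all_sprites.update()
#     all_sprites.draw(screen)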
class Beer(pygame.sprite.Sprite):
frames = []
selectedPos = 'left'
speed = 5
spriteWidth = 128
# Constructor
def __init__(self, x = None, y = None):
# Counter
self.steps = 0
# Call the parent class (Sprite) constructor
pygame.sprite.Sprite.__init__(self)
# Get all frames
path = "data/img/enemy/beer/"
frames = [f for f in listdir(path) if isfile(join(path, f))]
# Put all frames in a list of Pygame images
self.images = []
for frame in frames:
self.images.append(pygame.image.load(path + frame).convert_alpha())
self.index = 0
self.image = self.images[self.index]
self.rect = self.image.get_rect()
# Position to place
positions = ('left', 'right', 'up', 'down')
self.selectedPos = random.choice(positions)
# Place sprite on screen (based on the randomly selected position)
if self.selectedPos == 'left':
self.rect.center = (0, random.randint(self.spriteWidth/2, globals.winHeight - self.spriteWidth/2))
elif self.selectedPos == 'right':
self.rect.center = (globals.winWidth, random.randint(self.spriteWidth/2, globals.winHeight - self.spriteWidth/2))
elif self.selectedPos == 'up':
self.rect.center = (random.randint(self.spriteWidth/2, globals.winWidth - self.spriteWidth/2), 0)
else:
self.rect.center = (random.randint(self.spriteWidth/2, globals.winWidth - self.spriteWidth/2), globals.winHeight)
def update(self):
# ANIMATE
if self.index >= len(self.images):
self.index = 0
else:
self.image = self.images[self.index]
# This will slow down animation
if self.steps % 2 == 0:
self.index += 1
            # Advance the frame-timing counter
self.steps += 1
# Transparent
self.image.set_colorkey((0, 0, 0))
# Move
if self.selectedPos == 'left':
self.rect.x += self.speed
elif self.selectedPos == 'right':
self.rect.x -= self.speed
elif self.selectedPos == 'up':
self.rect.y += self.speed
else:
self.rect.y -= self.speed
def kill(self):
# Put outside of screen
self.rect.center = (-300, -300) | 27.227273 | 116 | 0.687396 | 683 | 4,792 | 4.771596 | 0.178624 | 0.054004 | 0.078552 | 0.070574 | 0.744707 | 0.744707 | 0.737343 | 0.706045 | 0.706045 | 0.706045 | 0 | 0.016747 | 0.177588 | 4,792 | 176 | 117 | 27.227273 | 0.8102 | 0.136477 | 0 | 0.736364 | 0 | 0 | 0.035294 | 0.005757 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063636 | false | 0 | 0.072727 | 0 | 0.236364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
08ff4b58d0101e49a968fe4799383c92c2d5b460 | 48 | py | Python | solutions/helpers/__init__.py | SebastiaanZ/aoc-2019 | e1fe4630b0f375be0b79398e07e23b9c0196efbb | [
"MIT"
] | 3 | 2019-12-02T19:38:14.000Z | 2020-01-28T00:06:09.000Z | solutions/helpers/__init__.py | SebastiaanZ/aoc-2019 | e1fe4630b0f375be0b79398e07e23b9c0196efbb | [
"MIT"
] | 6 | 2020-03-24T17:58:40.000Z | 2022-03-12T00:18:45.000Z | solutions/helpers/__init__.py | SebastiaanZ/aoc-2019 | e1fe4630b0f375be0b79398e07e23b9c0196efbb | [
"MIT"
] | null | null | null | from .intcode import IntCodeApplication # noqa
| 24 | 47 | 0.8125 | 5 | 48 | 7.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 48 | 1 | 48 | 48 | 0.95122 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1c3ea98a7585fb03beb4e762dac750a21ae4a86a | 119 | py | Python | projects/thesis/continuous/custom/continuous/deform_feature/__init__.py | cpark90/rrrcnn | ba66cc391265be76fa3896b66459ff7241b47972 | [
"Apache-2.0"
] | null | null | null | projects/thesis/continuous/custom/continuous/deform_feature/__init__.py | cpark90/rrrcnn | ba66cc391265be76fa3896b66459ff7241b47972 | [
"Apache-2.0"
] | null | null | null | projects/thesis/continuous/custom/continuous/deform_feature/__init__.py | cpark90/rrrcnn | ba66cc391265be76fa3896b66459ff7241b47972 | [
"Apache-2.0"
] | null | null | null | from .deform_feature_map_layer import *
from .deform_orienation_layer import *
from .deformable_by_grad_layer import *
| 29.75 | 39 | 0.848739 | 17 | 119 | 5.470588 | 0.588235 | 0.354839 | 0.322581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10084 | 119 | 3 | 40 | 39.666667 | 0.869159 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
1c3ec6d3f7be58abd038f8964534a0cb398cb936 | 2,732 | py | Python | blog/migrations/0011_auto_20200728_0547.py | tbrlpld/wagtail-gatsby-blog-backend | f68f1d9e2577d5271960f142bf37dcbcdac6767a | [
"MIT"
] | null | null | null | blog/migrations/0011_auto_20200728_0547.py | tbrlpld/wagtail-gatsby-blog-backend | f68f1d9e2577d5271960f142bf37dcbcdac6767a | [
"MIT"
] | null | null | null | blog/migrations/0011_auto_20200728_0547.py | tbrlpld/wagtail-gatsby-blog-backend | f68f1d9e2577d5271960f142bf37dcbcdac6767a | [
"MIT"
] | null | null | null | # Generated by Django 2.2.13 on 2020-07-28 05:47
from django.db import migrations
import wagtail.core.blocks
import wagtail.core.blocks.static_block
import wagtail.core.fields
import wagtail.documents.blocks
import wagtail.embeds.blocks
import wagtail.images.blocks
class Migration(migrations.Migration):
dependencies = [
('blog', '0010_auto_20200728_0540'),
]
operations = [
migrations.AlterField(
model_name='blogpage',
name='freeformbody',
field=wagtail.core.fields.StreamField([('heading', wagtail.core.blocks.CharBlock(classname='full title')), ('paragraph', wagtail.core.blocks.RichTextBlock()), ('image', wagtail.images.blocks.ImageChooserBlock()), ('text', wagtail.core.blocks.TextBlock()), ('email', wagtail.core.blocks.EmailBlock(help_text='Your email goes here.')), ('integer', wagtail.core.blocks.IntegerBlock(help_text='Just a number.')), ('float', wagtail.core.blocks.FloatBlock(help_text='A floating point number.')), ('decimal', wagtail.core.blocks.DecimalBlock(decimal_places=2, help_text='A decimal number.')), ('regex', wagtail.core.blocks.RegexBlock(error_messages={'invalid': 'You need to have " stuff " in the string.'}, help_text='A string with stuff in the middle.', regex='^.*stuff.*$')), ('url', wagtail.core.blocks.URLBlock()), ('bool', wagtail.core.blocks.BooleanBlock(required=False)), ('date', wagtail.core.blocks.DateBlock()), ('time', wagtail.core.blocks.TimeBlock()), ('datetime', wagtail.core.blocks.DateTimeBlock()), ('rawhtml', wagtail.core.blocks.RawHTMLBlock(help_text='Here you can show off your HTML skills.')), ('blockquote', wagtail.core.blocks.BlockQuoteBlock()), ('choice', wagtail.core.blocks.ChoiceBlock(choices=[('yes', 'Yes'), ('no', 'No'), ('maybe', 'Maybe')])), ('page', wagtail.core.blocks.PageChooserBlock()), ('doc', wagtail.documents.blocks.DocumentChooserBlock()), ('embed', wagtail.embeds.blocks.EmbedBlock()), ('static', wagtail.core.blocks.static_block.StaticBlock(admin_text='Latest Posts (no configuration needed)', help_text='If you include this block, the latest posts will be displayed here.')), ('person', wagtail.core.blocks.StructBlock([('first_name', wagtail.core.blocks.CharBlock()), ('last_name', wagtail.core.blocks.CharBlock()), ('biography', wagtail.core.blocks.TextBlock()), ('pic', wagtail.images.blocks.ImageChooserBlock(required=False))], icon='user')), ('list', wagtail.core.blocks.ListBlock(wagtail.core.blocks.CharBlock(label='List Item'))), ('substream', wagtail.core.blocks.StreamBlock([('image', wagtail.images.blocks.ImageChooserBlock()), ('quote', wagtail.core.blocks.BlockQuoteBlock()), ('author', wagtail.core.blocks.CharBlock(min_length=5))]))], blank=True),
),
]
| 109.28 | 2,214 | 0.725842 | 331 | 2,732 | 5.933535 | 0.453172 | 0.173625 | 0.251018 | 0.066191 | 0.100815 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013655 | 0.08858 | 2,732 | 24 | 2,215 | 113.833333 | 0.7751 | 0.016837 | 0 | 0 | 1 | 0 | 0.218703 | 0.008569 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.388889 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
98be3747a5b4b85bd40719ce7b3809ac7ffea433 | 510 | py | Python | stubs/micropython-v1_13-95-pyboard/gc.py | mattytrentini/micropython-stubs | 4d596273823b69e9e5bcf5fa67f249c374ee0bbc | [
"MIT"
] | null | null | null | stubs/micropython-v1_13-95-pyboard/gc.py | mattytrentini/micropython-stubs | 4d596273823b69e9e5bcf5fa67f249c374ee0bbc | [
"MIT"
] | null | null | null | stubs/micropython-v1_13-95-pyboard/gc.py | mattytrentini/micropython-stubs | 4d596273823b69e9e5bcf5fa67f249c374ee0bbc | [
"MIT"
] | null | null | null | """
Module: 'gc' on pyboard 1.13.0-95
"""
# MCU: (sysname='pyboard', nodename='pyboard', release='1.13.0', version='v1.13-95-g0fff2e03f on 2020-10-03', machine='PYBv1.1 with STM32F405RG')
# Stubber: 1.3.4 - updated
from typing import Any
def collect(*args) -> Any:
pass
def disable(*args) -> Any:
pass
def enable(*args) -> Any:
pass
def isenabled(*args) -> Any:
pass
def mem_alloc(*args) -> Any:
pass
def mem_free(*args) -> Any:
pass
def threshold(*args) -> Any:
pass
| 14.571429 | 145 | 0.619608 | 77 | 510 | 4.077922 | 0.532468 | 0.156051 | 0.245223 | 0.267516 | 0.10828 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091133 | 0.203922 | 510 | 34 | 146 | 15 | 0.682266 | 0.398039 | 0 | 0.466667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.466667 | true | 0.466667 | 0.066667 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
c726bb342736f4037853205064035977032db551 | 32,985 | py | Python | ta/volatility.py | Ruil/ta | d5593e4123c1ed3338fc01d0fe65a631538bc76e | [
"MIT"
] | null | null | null | ta/volatility.py | Ruil/ta | d5593e4123c1ed3338fc01d0fe65a631538bc76e | [
"MIT"
] | null | null | null | ta/volatility.py | Ruil/ta | d5593e4123c1ed3338fc01d0fe65a631538bc76e | [
"MIT"
] | null | null | null | """
.. module:: volatility
:synopsis: Volatility Indicators.
.. moduleauthor:: Dario Lopez Padial (Bukosabino)
"""
import numpy as np
import pandas as pd
from ta.utils import IndicatorMixin
class AverageTrueRange(IndicatorMixin):
"""Average True Range (ATR)
The indicator provide an indication of the degree of price volatility.
Strong moves, in either direction, are often accompanied by large ranges,
or large True Ranges.
http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:average_true_range_atr
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
window(int): n period.
fillna(bool): if True, fill nan values.
"""
def __init__(
self,
high: pd.Series,
low: pd.Series,
close: pd.Series,
window: int = 14,
fillna: bool = False,
):
self._high = high
self._low = low
self._close = close
self._window = min(window, len(self._close))
self._fillna = fillna
self._run()
def _run(self):
close_shift = self._close.shift(1)
true_range = self._true_range(self._high, self._low, close_shift)
atr = np.zeros(len(self._close))
        # Wilder's smoothing: seed with the simple mean of the first ``window``
        # true ranges, then blend each new true range into the running average.
        atr[self._window - 1] = true_range[0 : self._window].mean()
        for i in range(self._window, len(atr)):
            atr[i] = (atr[i - 1] * (self._window - 1) + true_range.iloc[i]) / float(
                self._window
            )
self._atr = pd.Series(data=atr, index=true_range.index)
def average_true_range(self) -> pd.Series:
"""Average True Range (ATR)
Returns:
pandas.Series: New feature generated.
"""
atr = self._check_fillna(self._atr, value=0)
return pd.Series(atr, name="atr")
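
# Tiny self-check, run only when this module is executed directly (synthetic
# prices purely for illustration; not part of the library API):
if __name__ == "__main__":
    _rng = np.random.default_rng(42)
    _close = pd.Series(100.0 + _rng.normal(0.0, 1.0, 60).cumsum())
    _high = _close + _rng.uniform(0.1, 1.0, 60)
    _low = _close - _rng.uniform(0.1, 1.0, 60)
    _atr = AverageTrueRange(high=_high, low=_low, close=_close, window=14)
    print(_atr.average_true_range().tail())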
class BollingerBands(IndicatorMixin):
"""Bollinger Bands
https://school.stockcharts.com/doku.php?id=technical_indicators:bollinger_bands
Args:
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_dev(int): n factor standard deviation
fillna(bool): if True, fill nan values.
"""
def __init__(
self,
close: pd.Series,
window: int = 20,
window_dev: int = 2,
fillna: bool = False,
):
self._close = close
self._window = window
self._window_dev = window_dev
self._fillna = fillna
self._run()
def _run(self):
min_periods = 0 if self._fillna else self._window
self._mavg = self._close.rolling(self._window, min_periods=min_periods).mean()
self._mstd = self._close.rolling(self._window, min_periods=min_periods).std(
ddof=0
)
self._hband = self._mavg + self._window_dev * self._mstd
self._lband = self._mavg - self._window_dev * self._mstd
def bollinger_mavg(self) -> pd.Series:
"""Bollinger Channel Middle Band
Returns:
pandas.Series: New feature generated.
"""
mavg = self._check_fillna(self._mavg, value=-1)
return pd.Series(mavg, name="mavg")
def bollinger_hband(self) -> pd.Series:
"""Bollinger Channel High Band
Returns:
pandas.Series: New feature generated.
"""
hband = self._check_fillna(self._hband, value=-1)
return pd.Series(hband, name="hband")
def bollinger_lband(self) -> pd.Series:
"""Bollinger Channel Low Band
Returns:
pandas.Series: New feature generated.
"""
lband = self._check_fillna(self._lband, value=-1)
return pd.Series(lband, name="lband")
def bollinger_wband(self) -> pd.Series:
"""Bollinger Channel Band Width
From: https://school.stockcharts.com/doku.php?id=technical_indicators:bollinger_band_width
Returns:
pandas.Series: New feature generated.
"""
wband = ((self._hband - self._lband) / self._mavg) * 100
wband = self._check_fillna(wband, value=0)
return pd.Series(wband, name="bbiwband")
def bollinger_pband(self) -> pd.Series:
"""Bollinger Channel Percentage Band
From: https://school.stockcharts.com/doku.php?id=technical_indicators:bollinger_band_perce
Returns:
pandas.Series: New feature generated.
"""
pband = (self._close - self._lband) / (self._hband - self._lband)
pband = self._check_fillna(pband, value=0)
return pd.Series(pband, name="bbipband")
def bollinger_hband_indicator(self) -> pd.Series:
"""Bollinger Channel Indicator Crossing High Band (binary).
It returns 1, if close is higher than bollinger_hband. Else, it returns 0.
Returns:
pandas.Series: New feature generated.
"""
hband = pd.Series(
np.where(self._close > self._hband, 1.0, 0.0), index=self._close.index
)
hband = self._check_fillna(hband, value=0)
return pd.Series(hband, index=self._close.index, name="bbihband")
def bollinger_lband_indicator(self) -> pd.Series:
"""Bollinger Channel Indicator Crossing Low Band (binary).
It returns 1, if close is lower than bollinger_lband. Else, it returns 0.
Returns:
pandas.Series: New feature generated.
"""
lband = pd.Series(
np.where(self._close < self._lband, 1.0, 0.0), index=self._close.index
)
lband = self._check_fillna(lband, value=0)
return pd.Series(lband, name="bbilband")
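
# Usage sketch (``close`` is an assumed pandas Series of prices; names are
# illustrative):
#
#     bb = BollingerBands(close=close, window=20, window_dev=2)
#     breakout = bb.bollinger_hband_indicator()  # 1.0 where close > upper band
#     width = bb.bollinger_wband()               # band width as % of the middle band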
class KeltnerChannel(IndicatorMixin):
"""KeltnerChannel
Keltner Channels are a trend following indicator used to identify reversals with channel breakouts and
channel direction. Channels can also be used to identify overbought and oversold levels when the trend
is flat.
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_atr(int): n atr period. Only valid if original_version param is False.
fillna(bool): if True, fill nan values.
original_version(bool): if True, use original version as the centerline (SMA of typical price)
if False, use EMA of close as the centerline. More info:
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
"""
def __init__(
self,
high: pd.Series,
low: pd.Series,
close: pd.Series,
window: int = 20,
window_atr: int = 10,
fillna: bool = False,
original_version: bool = True,
):
self._high = high
self._low = low
self._close = close
self._window = window
self._window_atr = window_atr
self._fillna = fillna
self._original_version = original_version
self._run()
def _run(self):
min_periods = 1 if self._fillna else self._window
if self._original_version:
self._tp = (
((self._high + self._low + self._close) / 3.0)
.rolling(self._window, min_periods=min_periods)
.mean()
)
self._tp_high = (
(((4 * self._high) - (2 * self._low) + self._close) / 3.0)
.rolling(self._window, min_periods=0)
.mean()
)
self._tp_low = (
(((-2 * self._high) + (4 * self._low) + self._close) / 3.0)
.rolling(self._window, min_periods=0)
.mean()
)
else:
self._tp = self._close.ewm(
span=self._window, min_periods=min_periods, adjust=False
).mean()
atr = AverageTrueRange(
close=self._close,
high=self._high,
low=self._low,
window=self._window_atr,
fillna=self._fillna,
).average_true_range()
self._tp_high = self._tp + (2 * atr)
self._tp_low = self._tp - (2 * atr)
def keltner_channel_mband(self) -> pd.Series:
"""Keltner Channel Middle Band
Returns:
pandas.Series: New feature generated.
"""
tp_middle = self._check_fillna(self._tp, value=-1)
return pd.Series(tp_middle, name="mavg")
def keltner_channel_hband(self) -> pd.Series:
"""Keltner Channel High Band
Returns:
pandas.Series: New feature generated.
"""
tp_high = self._check_fillna(self._tp_high, value=-1)
return pd.Series(tp_high, name="kc_hband")
def keltner_channel_lband(self) -> pd.Series:
"""Keltner Channel Low Band
Returns:
pandas.Series: New feature generated.
"""
tp_low = self._check_fillna(self._tp_low, value=-1)
return pd.Series(tp_low, name="kc_lband")
def keltner_channel_wband(self) -> pd.Series:
"""Keltner Channel Band Width
Returns:
pandas.Series: New feature generated.
"""
wband = ((self._tp_high - self._tp_low) / self._tp) * 100
wband = self._check_fillna(wband, value=0)
return pd.Series(wband, name="bbiwband")
def keltner_channel_pband(self) -> pd.Series:
"""Keltner Channel Percentage Band
Returns:
pandas.Series: New feature generated.
"""
pband = (self._close - self._tp_low) / (self._tp_high - self._tp_low)
pband = self._check_fillna(pband, value=0)
return pd.Series(pband, name="bbipband")
def keltner_channel_hband_indicator(self) -> pd.Series:
"""Keltner Channel Indicator Crossing High Band (binary)
        It returns 1 if close is higher than keltner_channel_hband, else 0.
Returns:
pandas.Series: New feature generated.
"""
hband = pd.Series(
np.where(self._close > self._tp_high, 1.0, 0.0), index=self._close.index
)
hband = self._check_fillna(hband, value=0)
return pd.Series(hband, name="dcihband")
def keltner_channel_lband_indicator(self) -> pd.Series:
"""Keltner Channel Indicator Crossing Low Band (binary)
        It returns 1 if close is lower than keltner_channel_lband, else 0.
Returns:
pandas.Series: New feature generated.
"""
lband = pd.Series(
np.where(self._close < self._tp_low, 1.0, 0.0), index=self._close.index
)
lband = self._check_fillna(lband, value=0)
return pd.Series(lband, name="dcilband")
class DonchianChannel(IndicatorMixin):
"""Donchian Channel
https://www.investopedia.com/terms/d/donchianchannels.asp
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
        window(int): n period.
        offset(int): n period offset.
        fillna(bool): if True, fill nan values.
"""
def __init__(
self,
high: pd.Series,
low: pd.Series,
close: pd.Series,
window: int = 20,
offset: int = 0,
fillna: bool = False,
):
self._offset = offset
self._close = close
self._high = high
self._low = low
self._window = window
self._fillna = fillna
self._run()
def _run(self):
self._min_periods = 1 if self._fillna else self._window
self._hband = self._high.rolling(
self._window, min_periods=self._min_periods
).max()
self._lband = self._low.rolling(
self._window, min_periods=self._min_periods
).min()
def donchian_channel_hband(self) -> pd.Series:
"""Donchian Channel High Band
Returns:
pandas.Series: New feature generated.
"""
hband = self._check_fillna(self._hband, value=-1)
if self._offset != 0:
hband = hband.shift(self._offset)
return pd.Series(hband, name="dchband")
def donchian_channel_lband(self) -> pd.Series:
"""Donchian Channel Low Band
Returns:
pandas.Series: New feature generated.
"""
lband = self._check_fillna(self._lband, value=-1)
if self._offset != 0:
lband = lband.shift(self._offset)
return pd.Series(lband, name="dclband")
def donchian_channel_mband(self) -> pd.Series:
"""Donchian Channel Middle Band
Returns:
pandas.Series: New feature generated.
"""
mband = ((self._hband - self._lband) / 2.0) + self._lband
mband = self._check_fillna(mband, value=-1)
if self._offset != 0:
mband = mband.shift(self._offset)
return pd.Series(mband, name="dcmband")
def donchian_channel_wband(self) -> pd.Series:
"""Donchian Channel Band Width
Returns:
pandas.Series: New feature generated.
"""
mavg = self._close.rolling(self._window, min_periods=self._min_periods).mean()
wband = ((self._hband - self._lband) / mavg) * 100
wband = self._check_fillna(wband, value=0)
if self._offset != 0:
wband = wband.shift(self._offset)
return pd.Series(wband, name="dcwband")
def donchian_channel_pband(self) -> pd.Series:
"""Donchian Channel Percentage Band
Returns:
pandas.Series: New feature generated.
"""
pband = (self._close - self._lband) / (self._hband - self._lband)
pband = self._check_fillna(pband, value=0)
if self._offset != 0:
pband = pband.shift(self._offset)
return pd.Series(pband, name="dcpband")
class UlcerIndex(IndicatorMixin):
"""Ulcer Index
https://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:ulcer_index
Args:
close(pandas.Series): dataset 'Close' column.
window(int): n period.
fillna(bool): if True, fill nan values.
"""
def __init__(self, close: pd.Series, window: int = 14, fillna: bool = False):
self._close = close
self._window = window
self._fillna = fillna
self._run()
def _run(self):
_ui_max = self._close.rolling(self._window, min_periods=1).max()
_r_i = 100 * (self._close - _ui_max) / _ui_max
        def _ui_function(x):
            return np.sqrt((x ** 2 / self._window).sum())

        self._ulcer_idx = _r_i.rolling(self._window).apply(_ui_function, raw=True)
def ulcer_index(self) -> pd.Series:
"""Ulcer Index (UI)
Returns:
pandas.Series: New feature generated.
"""
ulcer_idx = self._check_fillna(self._ulcer_idx)
return pd.Series(ulcer_idx, name="ui")
def average_true_range(high, low, close, window=14, fillna=False):
"""Average True Range (ATR)
    The indicator provides an indication of the degree of price volatility.
Strong moves, in either direction, are often accompanied by large ranges,
or large True Ranges.
http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:average_true_range_atr
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
window(int): n period.
fillna(bool): if True, fill nan values.
Returns:
pandas.Series: New feature generated.
"""
indicator = AverageTrueRange(
high=high, low=low, close=close, window=window, fillna=fillna
)
return indicator.average_true_range()
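# Example (illustrative): the functional wrapper mirrors the class API, so
#   atr = average_true_range(high, low, close, window=14)
# returns the same series as
#   AverageTrueRange(high=high, low=low, close=close, window=14).average_true_range()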
def bollinger_mavg(close, window=20, fillna=False):
"""Bollinger Bands (BB)
N-period simple moving average (MA).
https://en.wikipedia.org/wiki/Bollinger_Bands
Args:
close(pandas.Series): dataset 'Close' column.
window(int): n period.
fillna(bool): if True, fill nan values.
Returns:
pandas.Series: New feature generated.
"""
indicator = BollingerBands(close=close, window=window, fillna=fillna)
return indicator.bollinger_mavg()
def bollinger_hband(close, window=20, window_dev=2, fillna=False):
"""Bollinger Bands (BB)
    Upper band at K times an N-period standard deviation above the moving
    average (MA + K * deviation).
https://en.wikipedia.org/wiki/Bollinger_Bands
Args:
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_dev(int): n factor standard deviation
fillna(bool): if True, fill nan values.
Returns:
pandas.Series: New feature generated.
"""
indicator = BollingerBands(
close=close, window=window, window_dev=window_dev, fillna=fillna
)
return indicator.bollinger_hband()
def bollinger_lband(close, window=20, window_dev=2, fillna=False):
"""Bollinger Bands (BB)
    Lower band at K times an N-period standard deviation below the moving
    average (MA - K * deviation).
https://en.wikipedia.org/wiki/Bollinger_Bands
Args:
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_dev(int): n factor standard deviation
fillna(bool): if True, fill nan values.
Returns:
pandas.Series: New feature generated.
"""
indicator = BollingerBands(
close=close, window=window, window_dev=window_dev, fillna=fillna
)
return indicator.bollinger_lband()
def bollinger_wband(close, window=20, window_dev=2, fillna=False):
"""Bollinger Channel Band Width
From: https://school.stockcharts.com/doku.php?id=technical_indicators:bollinger_band_width
Args:
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_dev(int): n factor standard deviation
fillna(bool): if True, fill nan values.
Returns:
pandas.Series: New feature generated.
"""
indicator = BollingerBands(
close=close, window=window, window_dev=window_dev, fillna=fillna
)
return indicator.bollinger_wband()
def bollinger_pband(close, window=20, window_dev=2, fillna=False):
"""Bollinger Channel Percentage Band
From: https://school.stockcharts.com/doku.php?id=technical_indicators:bollinger_band_perce
Args:
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_dev(int): n factor standard deviation
fillna(bool): if True, fill nan values.
Returns:
pandas.Series: New feature generated.
"""
indicator = BollingerBands(
close=close, window=window, window_dev=window_dev, fillna=fillna
)
return indicator.bollinger_pband()
def bollinger_hband_indicator(close, window=20, window_dev=2, fillna=False):
"""Bollinger High Band Indicator
    Returns 1 if close is higher than the Bollinger high band, else 0.
https://en.wikipedia.org/wiki/Bollinger_Bands
Args:
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_dev(int): n factor standard deviation
fillna(bool): if True, fill nan values.
Returns:
pandas.Series: New feature generated.
"""
indicator = BollingerBands(
close=close, window=window, window_dev=window_dev, fillna=fillna
)
return indicator.bollinger_hband_indicator()
def bollinger_lband_indicator(close, window=20, window_dev=2, fillna=False):
"""Bollinger Low Band Indicator
    Returns 1 if close is lower than the Bollinger low band, else 0.
https://en.wikipedia.org/wiki/Bollinger_Bands
Args:
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_dev(int): n factor standard deviation
fillna(bool): if True, fill nan values.
Returns:
pandas.Series: New feature generated.
"""
indicator = BollingerBands(
close=close, window=window, window_dev=window_dev, fillna=fillna
)
return indicator.bollinger_lband_indicator()
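# Note (illustrative): each bollinger_* wrapper above builds a fresh
# BollingerBands instance, so the rolling statistics are recomputed per call.
# When several bands are needed from the same series, constructing the class
# once is cheaper:
#
#   bb = BollingerBands(close, window=20, window_dev=2)
#   hband, lband = bb.bollinger_hband(), bb.bollinger_lband()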
def keltner_channel_mband(
high, low, close, window=20, window_atr=10, fillna=False, original_version=True
):
"""Keltner channel (KC)
    Shows the central (middle) moving average line of typical price.
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_atr(int): n atr period. Only valid if original_version param is False.
fillna(bool): if True, fill nan values.
original_version(bool): if True, use original version as the centerline (SMA of typical price)
if False, use EMA of close as the centerline. More info:
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Returns:
pandas.Series: New feature generated.
"""
indicator = KeltnerChannel(
high=high,
low=low,
close=close,
window=window,
window_atr=window_atr,
fillna=fillna,
original_version=original_version,
)
return indicator.keltner_channel_mband()
def keltner_channel_hband(
high, low, close, window=20, window_atr=10, fillna=False, original_version=True
):
"""Keltner channel (KC)
    Shows the high band line of the channel.
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_atr(int): n atr period. Only valid if original_version param is False.
fillna(bool): if True, fill nan values.
original_version(bool): if True, use original version as the centerline (SMA of typical price)
if False, use EMA of close as the centerline. More info:
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Returns:
pandas.Series: New feature generated.
"""
indicator = KeltnerChannel(
high=high,
low=low,
close=close,
window=window,
window_atr=window_atr,
fillna=fillna,
original_version=original_version,
)
return indicator.keltner_channel_hband()
def keltner_channel_lband(
high, low, close, window=20, window_atr=10, fillna=False, original_version=True
):
"""Keltner channel (KC)
    Shows the low band line of the channel.
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_atr(int): n atr period. Only valid if original_version param is False.
fillna(bool): if True, fill nan values.
original_version(bool): if True, use original version as the centerline (SMA of typical price)
if False, use EMA of close as the centerline. More info:
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Returns:
pandas.Series: New feature generated.
"""
indicator = KeltnerChannel(
high=high,
low=low,
close=close,
window=window,
window_atr=window_atr,
fillna=fillna,
original_version=original_version,
)
return indicator.keltner_channel_lband()
def keltner_channel_wband(
high, low, close, window=20, window_atr=10, fillna=False, original_version=True
):
"""Keltner Channel Band Width
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_atr(int): n atr period. Only valid if original_version param is False.
fillna(bool): if True, fill nan values.
original_version(bool): if True, use original version as the centerline (SMA of typical price)
if False, use EMA of close as the centerline. More info:
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Returns:
pandas.Series: New feature generated.
"""
indicator = KeltnerChannel(
high=high,
low=low,
close=close,
window=window,
window_atr=window_atr,
fillna=fillna,
original_version=original_version,
)
return indicator.keltner_channel_wband()
def keltner_channel_pband(
high, low, close, window=20, window_atr=10, fillna=False, original_version=True
):
"""Keltner Channel Percentage Band
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_atr(int): n atr period. Only valid if original_version param is False.
fillna(bool): if True, fill nan values.
original_version(bool): if True, use original version as the centerline (SMA of typical price)
if False, use EMA of close as the centerline. More info:
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Returns:
pandas.Series: New feature generated.
"""
indicator = KeltnerChannel(
high=high,
low=low,
close=close,
window=window,
window_atr=window_atr,
fillna=fillna,
original_version=original_version,
)
return indicator.keltner_channel_pband()
def keltner_channel_hband_indicator(
high, low, close, window=20, window_atr=10, fillna=False, original_version=True
):
"""Keltner Channel High Band Indicator (KC)
    Returns 1 if close is higher than the Keltner high band, else 0.
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_atr(int): n atr period. Only valid if original_version param is False.
fillna(bool): if True, fill nan values.
original_version(bool): if True, use original version as the centerline (SMA of typical price)
if False, use EMA of close as the centerline. More info:
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Returns:
pandas.Series: New feature generated.
"""
indicator = KeltnerChannel(
high=high,
low=low,
close=close,
window=window,
window_atr=window_atr,
fillna=fillna,
original_version=original_version,
)
return indicator.keltner_channel_hband_indicator()
def keltner_channel_lband_indicator(
high, low, close, window=20, window_atr=10, fillna=False, original_version=True
):
"""Keltner Channel Low Band Indicator (KC)
    Returns 1 if close is lower than the Keltner low band, else 0.
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
window(int): n period.
window_atr(int): n atr period. Only valid if original_version param is False.
fillna(bool): if True, fill nan values.
original_version(bool): if True, use original version as the centerline (SMA of typical price)
if False, use EMA of close as the centerline. More info:
https://school.stockcharts.com/doku.php?id=technical_indicators:keltner_channels
Returns:
pandas.Series: New feature generated.
"""
indicator = KeltnerChannel(
high=high,
low=low,
close=close,
window=window,
window_atr=window_atr,
fillna=fillna,
original_version=original_version,
)
return indicator.keltner_channel_lband_indicator()
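# Example (illustrative): switching centerline styles via original_version.
#
#   mid_sma = keltner_channel_mband(high, low, close)                          # SMA of typical price
#   mid_ema = keltner_channel_mband(high, low, close, original_version=False)  # EMA of close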
def donchian_channel_hband(high, low, close, window=20, offset=0, fillna=False):
"""Donchian Channel High Band (DC)
The upper band marks the highest price of an issue for n periods.
https://www.investopedia.com/terms/d/donchianchannels.asp
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
        window(int): n period.
        offset(int): n period offset.
fillna(bool): if True, fill nan values.
Returns:
pandas.Series: New feature generated.
"""
indicator = DonchianChannel(
high=high, low=low, close=close, window=window, offset=offset, fillna=fillna
)
return indicator.donchian_channel_hband()
def donchian_channel_lband(high, low, close, window=20, offset=0, fillna=False):
"""Donchian Channel Low Band (DC)
The lower band marks the lowest price for n periods.
https://www.investopedia.com/terms/d/donchianchannels.asp
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
        window(int): n period.
        offset(int): n period offset.
fillna(bool): if True, fill nan values.
Returns:
pandas.Series: New feature generated.
"""
indicator = DonchianChannel(
high=high, low=low, close=close, window=window, offset=offset, fillna=fillna
)
return indicator.donchian_channel_lband()
def donchian_channel_mband(high, low, close, window=10, offset=0, fillna=False):
"""Donchian Channel Middle Band (DC)
https://www.investopedia.com/terms/d/donchianchannels.asp
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
        window(int): n period.
        offset(int): n period offset.
fillna(bool): if True, fill nan values.
Returns:
pandas.Series: New feature generated.
"""
indicator = DonchianChannel(
high=high, low=low, close=close, window=window, offset=offset, fillna=fillna
)
return indicator.donchian_channel_mband()
def donchian_channel_wband(high, low, close, window=10, offset=0, fillna=False):
"""Donchian Channel Band Width (DC)
https://www.investopedia.com/terms/d/donchianchannels.asp
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
        window(int): n period.
        offset(int): n period offset.
fillna(bool): if True, fill nan values.
Returns:
pandas.Series: New feature generated.
"""
indicator = DonchianChannel(
high=high, low=low, close=close, window=window, offset=offset, fillna=fillna
)
return indicator.donchian_channel_wband()
def donchian_channel_pband(high, low, close, window=10, offset=0, fillna=False):
"""Donchian Channel Percentage Band (DC)
https://www.investopedia.com/terms/d/donchianchannels.asp
Args:
high(pandas.Series): dataset 'High' column.
low(pandas.Series): dataset 'Low' column.
close(pandas.Series): dataset 'Close' column.
        window(int): n period.
        offset(int): n period offset.
fillna(bool): if True, fill nan values.
Returns:
pandas.Series: New feature generated.
"""
indicator = DonchianChannel(
high=high, low=low, close=close, window=window, offset=offset, fillna=fillna
)
return indicator.donchian_channel_pband()
def ulcer_index(close, window=14, fillna=False):
"""Ulcer Index
https://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:ulcer_index
Args:
close(pandas.Series): dataset 'Close' column.
window(int): n period.
fillna(bool): if True, fill nan values.
Returns:
pandas.Series: New feature generated.
"""
indicator = UlcerIndex(close=close, window=window, fillna=fillna)
return indicator.ulcer_index()
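# Combined usage sketch (illustrative): assembling several of the volatility
# features above from assumed OHLC Series into a single frame.
#
#   features = pd.DataFrame({
#       "atr": average_true_range(high, low, close, window=14),
#       "bb_wband": bollinger_wband(close, window=20, window_dev=2),
#       "dc_wband": donchian_channel_wband(high, low, close, window=20),
#       "ulcer": ulcer_index(close, window=14),
#   })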
| 32.723214 | 106 | 0.63832 | 4,041 | 32,985 | 5.063598 | 0.057659 | 0.058645 | 0.053856 | 0.045157 | 0.86321 | 0.817173 | 0.793031 | 0.780276 | 0.746017 | 0.718307 | 0 | 0.00722 | 0.256753 | 32,985 | 1,007 | 107 | 32.75571 | 0.827378 | 0.465909 | 0 | 0.481959 | 0 | 0 | 0.008845 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.139175 | false | 0 | 0.007732 | 0.002577 | 0.273196 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c729c075da998c52671b42fcc0e5b3a16c4e479a | 28 | py | Python | alexi/twitter/__init__.py | harryjjacobs/alexi | 1306c26adf339bbf88e9b2c29e038f242e4ca45f | [
"MIT"
] | 2 | 2019-07-20T01:48:20.000Z | 2019-11-15T06:50:54.000Z | alexi/twitter/__init__.py | harryjjacobs/alexi | 1306c26adf339bbf88e9b2c29e038f242e4ca45f | [
"MIT"
] | 5 | 2020-02-12T08:58:06.000Z | 2021-09-22T17:56:42.000Z | alexi/twitter/__init__.py | harryjjacobs/alexi | 1306c26adf339bbf88e9b2c29e038f242e4ca45f | [
"MIT"
] | null | null | null | from .twitter import Twitter | 28 | 28 | 0.857143 | 4 | 28 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c76c04d5eb893cc9436be1837e71d314b18f2c4e | 6,272 | py | Python | phone_iso3166/nanpa.py | horpto/phone-iso3166 | 09d6bb8cf69bc0dffb97677924aa3318b579f030 | [
"MIT"
] | 19 | 2017-03-28T10:35:22.000Z | 2022-03-14T04:39:03.000Z | phone_iso3166/nanpa.py | horpto/phone-iso3166 | 09d6bb8cf69bc0dffb97677924aa3318b579f030 | [
"MIT"
] | 17 | 2016-11-11T11:50:57.000Z | 2021-06-22T09:32:17.000Z | phone_iso3166/nanpa.py | horpto/phone-iso3166 | 09d6bb8cf69bc0dffb97677924aa3318b579f030 | [
"MIT"
] | 5 | 2015-09-28T18:25:38.000Z | 2021-07-05T11:57:58.000Z | # Generated by get_npna_npa.py
# Based on http://nanpa.com/reports/reports_npa.html
npa = \
{2: {0: {1: 'US',
2: 'US',
3: 'US',
4: 'CA',
5: 'US',
6: 'US',
7: 'US',
8: 'US',
9: 'US'},
1: {0: 'US',
2: 'US',
3: 'US',
4: 'US',
5: 'US',
6: 'US',
7: 'US',
8: 'US',
9: 'US'},
2: {0: 'US',
3: 'US',
4: 'US',
5: 'US',
6: 'CA',
7: 'US',
8: 'US',
9: 'US'},
3: {1: 'US', 4: 'US', 6: 'CA', 9: 'US'},
4: {0: 'US', 2: 'BS', 6: 'BB', 8: 'US', 9: 'CA'},
5: {0: 'CA', 1: 'US', 2: 'US', 3: 'US', 4: 'US', 6: 'US', 7: 'CA'},
6: {0: 'US', 2: 'US', 3: 'CA', 4: 'AI', 7: 'US', 8: 'AG', 9: 'US'},
7: {0: 'US', 2: 'US', 3: 'CA', 4: 'US', 6: 'US', 9: 'US'},
8: {1: 'US', 3: 'US', 4: 'VG', 9: 'CA'}},
3: {0: {1: 'US',
2: 'US',
3: 'US',
4: 'US',
5: 'US',
6: 'CA',
7: 'US',
8: 'US',
9: 'US'},
1: {0: 'US',
2: 'US',
3: 'US',
4: 'US',
5: 'US',
6: 'US',
7: 'US',
8: 'US',
9: 'US'},
2: {0: 'US', 1: 'US', 3: 'US', 5: 'US', 6: 'US', 7: 'US'},
3: {0: 'US', 1: 'US', 2: 'US', 4: 'US', 6: 'US', 7: 'US', 9: 'US'},
4: {0: 'VI', 1: 'US', 3: 'CA', 5: 'KY', 6: 'US', 7: 'US'},
5: {1: 'US', 2: 'US', 3: 'US', 4: 'CA'},
6: {0: 'US', 1: 'US', 4: 'US', 5: 'CA', 7: 'CA', 8: 'CA', 9: 'US'},
8: {0: 'US', 2: 'CA', 5: 'US', 6: 'US', 7: 'CA'}},
4: {0: {1: 'US',
2: 'US',
3: 'CA',
4: 'US',
5: 'US',
6: 'US',
7: 'US',
8: 'US',
9: 'US'},
1: {0: 'US',
2: 'US',
3: 'US',
4: 'US',
5: 'US',
6: 'CA',
7: 'US',
8: 'CA',
9: 'US'},
2: {3: 'US', 4: 'US', 5: 'US', 8: 'CA'},
3: {0: 'US', 1: 'CA', 2: 'US', 4: 'US', 5: 'US', 7: 'CA', 8: 'CA'},
4: {0: 'US', 1: 'BM', 2: 'US', 3: 'US', 5: 'US', 7: 'US', 8: 'US'},
5: {0: 'CA', 8: 'US'},
6: {0: 'CA', 3: 'US', 4: 'US', 8: 'CA', 9: 'US'},
7: {0: 'US', 3: 'GD', 4: 'CA', 5: 'US', 8: 'US', 9: 'US'},
8: {0: 'US', 4: 'US', 7: 'CA'}},
5: {0: {1: 'US',
2: 'US',
3: 'US',
4: 'US',
5: 'US',
6: 'CA',
7: 'US',
8: 'US',
9: 'US'},
1: {0: 'US',
2: 'US',
3: 'US',
4: 'CA',
5: 'US',
6: 'US',
7: 'US',
8: 'US',
9: 'CA'},
2: {0: 'US', 6: 'US'},
3: {0: 'US', 1: 'US', 4: 'US', 7: 'CA', 9: 'US'},
4: {0: 'US', 1: 'US', 8: 'CA'},
5: {1: 'US', 7: 'US', 9: 'US'},
6: {1: 'US', 2: 'US', 3: 'US', 4: 'US', 7: 'US', 8: 'CA'},
7: {0: 'US', 1: 'US', 2: 'US', 3: 'US', 4: 'US', 5: 'US', 9: 'CA'},
8: {0: 'US', 1: 'CA', 2: 'US', 4: 'CA', 5: 'US', 6: 'US', 7: 'CA'}},
6: {0: {0: 'CA',
1: 'US',
2: 'US',
3: 'US',
4: 'CA',
5: 'US',
6: 'US',
7: 'US',
8: 'US',
9: 'US'},
1: {0: 'US',
2: 'US',
3: 'CA',
4: 'US',
5: 'US',
6: 'US',
7: 'US',
8: 'US',
9: 'US'},
2: {0: 'US', 2: 'CA', 3: 'US', 6: 'US', 7: 'US', 8: 'US', 9: 'US'},
3: {0: 'US', 1: 'US', 6: 'US', 9: 'CA'},
4: {0: 'US', 1: 'US', 6: 'US', 7: 'CA', 9: 'TC'},
5: {0: 'US', 1: 'US', 6: 'US', 7: 'US', 8: 'JM', 9: 'US'},
6: {0: 'US', 1: 'US', 2: 'US', 4: 'MS', 7: 'US', 9: 'US'},
7: {0: 'MP', 1: 'US', 2: 'CA', 8: 'US', 9: 'US'},
8: {0: 'US', 1: 'US', 2: 'US', 3: 'CA', 4: 'US', 9: 'US'}},
7: {0: {1: 'US',
2: 'US',
3: 'US',
4: 'US',
5: 'CA',
6: 'US',
7: 'US',
8: 'US',
9: 'CA'},
1: {0: 'US',
2: 'US',
3: 'US',
4: 'US',
5: 'US',
6: 'US',
7: 'US',
8: 'US',
9: 'US'},
2: {0: 'US', 1: 'SX', 4: 'US', 5: 'US', 6: 'US', 7: 'US'},
3: {0: 'US', 1: 'US', 2: 'US', 4: 'US', 7: 'US'},
4: {0: 'US', 2: 'CA', 3: 'US', 7: 'US'},
5: {3: 'CA', 4: 'US', 7: 'US', 8: 'LC'},
6: {0: 'US', 2: 'US', 3: 'US', 4: 'US', 5: 'US', 7: 'DM', 9: 'US'},
7: {0: 'US',
1: 'US',
2: 'US',
3: 'US',
4: 'US',
5: 'US',
8: 'CA',
9: 'US'},
8: {0: 'CA', 1: 'US', 2: 'CA', 4: 'VC', 5: 'US', 6: 'US', 7: 'PR'}},
8: {0: {1: 'US',
2: 'US',
3: 'US',
4: 'US',
5: 'US',
6: 'US',
7: 'CA',
8: 'US',
9: 'DO'},
1: {0: 'US',
2: 'US',
3: 'US',
4: 'US',
5: 'US',
6: 'US',
7: 'US',
8: 'US',
9: 'CA'},
2: {0: 'US', 5: 'CA', 6: 'US', 8: 'US', 9: 'DO'},
3: {0: 'US', 1: 'US', 2: 'US', 5: 'US', 8: 'US', 9: 'US'},
4: {0: 'US', 3: 'US', 5: 'US', 7: 'US', 8: 'US', 9: 'DO'},
5: {0: 'US', 1: 'CA', 4: 'US', 6: 'US', 7: 'US', 8: 'US', 9: 'US'},
6: {0: 'US',
2: 'US',
3: 'US',
4: 'US',
5: 'US',
7: 'CA',
8: 'TT',
9: 'KN'},
7: {0: 'US', 1: 'CA', 2: 'US', 3: 'CA', 6: 'JM', 8: 'US', 9: 'CA'}},
9: {0: {1: 'US',
2: 'CA',
3: 'US',
4: 'US',
5: 'CA',
6: 'US',
7: 'US',
8: 'US',
9: 'US'},
1: {0: 'US',
2: 'US',
3: 'US',
4: 'US',
5: 'US',
6: 'US',
7: 'US',
8: 'US',
9: 'US'},
2: {0: 'US', 5: 'US', 8: 'US', 9: 'US'},
3: {0: 'US',
1: 'US',
4: 'US',
5: 'US',
6: 'US',
7: 'US',
8: 'US',
9: 'PR'},
4: {0: 'US',
1: 'US',
2: 'CA',
3: 'US',
5: 'US',
7: 'US',
8: 'US',
9: 'US'},
5: {1: 'US', 2: 'US', 4: 'US', 6: 'US', 9: 'US'},
7: {0: 'US', 1: 'US', 2: 'US', 3: 'US', 5: 'US', 8: 'US', 9: 'US'},
8: {0: 'US', 3: 'US', 4: 'US', 5: 'US', 6: 'US', 9: 'US'}}}
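# Lookup sketch (illustrative, not part of the generated file): `npa` maps the
# three digits of a NANPA area code, one nesting level per digit, to an ISO
# 3166-1 alpha-2 country code. `npa_country` below is a hypothetical helper.
#
#   def npa_country(area_code):
#       d1, d2, d3 = (int(c) for c in str(area_code).zfill(3))
#       return npa[d1][d2][d3]  # KeyError for unassigned area codes
#
#   npa_country(201)  # -> 'US'
#   npa_country(204)  # -> 'CA'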
| 26.352941 | 73 | 0.236129 | 983 | 6,272 | 1.503561 | 0.046796 | 0.095399 | 0.104871 | 0.105548 | 0.842355 | 0.748309 | 0.688092 | 0.607578 | 0.523681 | 0.445196 | 0 | 0.14893 | 0.441167 | 6,272 | 237 | 74 | 26.464135 | 0.272753 | 0.012596 | 0 | 0.717949 | 1 | 0 | 0.14378 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c7a1346eabd5e9b824b636fa8a9f6a56608176f1 | 4,163 | py | Python | getters.py | pierrewinter/EPL_Fantasy_Team_Recommender | ab3e8649f6a4169843c31d9b8ebcc0bcbb9d4552 | [
"MIT"
] | 1 | 2019-07-18T18:31:58.000Z | 2019-07-18T18:31:58.000Z | getters.py | pierrewinter/EPL_Fantasy_Team_Recommender | ab3e8649f6a4169843c31d9b8ebcc0bcbb9d4552 | [
"MIT"
] | null | null | null | getters.py | pierrewinter/EPL_Fantasy_Team_Recommender | ab3e8649f6a4169843c31d9b8ebcc0bcbb9d4552 | [
"MIT"
] | null | null | null | import requests
import json
import time
def get_data():
""" Retrieve the fpl player data from the hard-coded url
"""
response = requests.get("https://fantasy.premierleague.com/api/bootstrap-static")
if response.status_code != 200:
raise Exception("Response was code " + str(response.status_code))
    data = json.loads(response.text)
    return data
def get_individual_player_data(player_id):
""" Retrieve the player-specific detailed data
Args:
player_id (int): ID of the player whose data is to be retrieved
"""
base_url = "https://fantasy.premierleague.com/api/element-summary/"
full_url = base_url + str(player_id)
response = ''
while response == '':
try:
response = requests.get(full_url)
        except requests.exceptions.RequestException:
time.sleep(5)
if response.status_code != 200:
raise Exception("Response was code " + str(response.status_code))
data = json.loads(response.text)
return data
def get_entry_data(entry_id):
""" Retrieve the summary/history data for a specific entry/team
Args:
entry_id (int) : ID of the team whose data is to be retrieved
"""
base_url = "https://fantasy.premierleague.com/api/entry/"
full_url = base_url + str(entry_id) + "/history"
response = ''
while response == '':
try:
response = requests.get(full_url)
        except requests.exceptions.RequestException:
time.sleep(5)
if response.status_code != 200:
raise Exception("Response was code " + str(response.status_code))
data = json.loads(response.text)
return data
def get_entry_personal_data(entry_id):
""" Retrieve the summary/history data for a specific entry/team
Args:
entry_id (int) : ID of the team whose data is to be retrieved
"""
base_url = "https://fantasy.premierleague.com/api/entry/"
full_url = base_url + str(entry_id)
response = ''
while response == '':
try:
response = requests.get(full_url)
        except requests.exceptions.RequestException:
time.sleep(5)
if response.status_code != 200:
raise Exception("Response was code " + str(response.status_code))
data = json.loads(response.text)
return data
def get_entry_gws_data(entry_id):
""" Retrieve the gw-by-gw data for a specific entry/team
Args:
entry_id (int) : ID of the team whose data is to be retrieved
"""
base_url = "https://fantasy.premierleague.com/api/entry/"
gw_data = []
for i in range(1, 39):
full_url = base_url + str(entry_id) + "/event/" + str(i)
response = ''
while response == '':
try:
response = requests.get(full_url)
            except requests.exceptions.RequestException:
time.sleep(5)
if response.status_code != 200:
raise Exception("Response was code " + str(response.status_code))
data = json.loads(response.text)
gw_data += [data]
    return gw_data
def get_entry_transfers_data(entry_id):
""" Retrieve the transfer data for a specific entry/team
Args:
entry_id (int) : ID of the team whose data is to be retrieved
"""
base_url = "https://fantasy.premierleague.com/api/entry/"
full_url = base_url + str(entry_id) + "/transfers"
response = ''
while response == '':
try:
response = requests.get(full_url)
        except requests.exceptions.RequestException:
time.sleep(5)
if response.status_code != 200:
raise Exception("Response was code " + str(response.status_code))
data = json.loads(response.text)
return data
def get_fixtures_data():
""" Retrieve the fixtures data for the season
"""
url = "https://fantasy.premierleague.com/api/fixtures/"
response = ''
while response == '':
try:
response = requests.get(url)
        except requests.exceptions.RequestException:
time.sleep(5)
if response.status_code != 200:
raise Exception("Response was code " + str(response.status_code))
data = json.loads(response.text)
return data
def main():
data = get_data()
with open('raw.json', 'w') as outf:
json.dump(data, outf)
if __name__ == '__main__':
main()
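# Refactor sketch (illustrative, not part of the module): the retry loop above
# repeats in every getter; a hypothetical helper could centralize it.
#
#   def _get_json(url, retry_delay=5):
#       while True:
#           try:
#               response = requests.get(url)
#               break
#           except requests.exceptions.RequestException:
#               time.sleep(retry_delay)
#       if response.status_code != 200:
#           raise Exception("Response was code " + str(response.status_code))
#       return json.loads(response.text)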
| 30.837037 | 85 | 0.619025 | 535 | 4,163 | 4.672897 | 0.157009 | 0.0784 | 0.1008 | 0.0784 | 0.8016 | 0.7516 | 0.738 | 0.7112 | 0.7112 | 0.7112 | 0 | 0.009849 | 0.268316 | 4,163 | 134 | 86 | 31.067164 | 0.8109 | 0.177756 | 0 | 0.6875 | 0 | 0 | 0.150528 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.03125 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c7b33f222a0009657928a2b64efd10111da75b92 | 49 | py | Python | src/normalizer/__init__.py | Ehsan-Tavan/-boost_converter | 7d52be7aa7994c137b1b63aa87eb51291f320cf5 | [
"MIT"
] | null | null | null | src/normalizer/__init__.py | Ehsan-Tavan/-boost_converter | 7d52be7aa7994c137b1b63aa87eb51291f320cf5 | [
"MIT"
] | null | null | null | src/normalizer/__init__.py | Ehsan-Tavan/-boost_converter | 7d52be7aa7994c137b1b63aa87eb51291f320cf5 | [
"MIT"
] | null | null | null | from .validation import is_normalizer_type_exist
| 24.5 | 48 | 0.897959 | 7 | 49 | 5.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 49 | 1 | 49 | 49 | 0.911111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |