hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e9a898e7da425c340b32907026fcd7bb947046e7 | 3,533 | py | Python | test/test_script_set.py | luwenchang/dataflux-func | fa07ecaa03db756f35168bbd8fbdfdf5053b767d | [
"Apache-2.0"
] | 202 | 2021-01-08T09:02:34.000Z | 2022-03-12T02:48:07.000Z | test/test_script_set.py | Rogers0007/dataflux-func | fa07ecaa03db756f35168bbd8fbdfdf5053b767d | [
"Apache-2.0"
] | 1 | 2021-05-15T02:36:07.000Z | 2021-06-03T15:01:10.000Z | test/test_script_set.py | Rogers0007/dataflux-func | fa07ecaa03db756f35168bbd8fbdfdf5053b767d | [
"Apache-2.0"
] | 31 | 2021-01-23T04:56:41.000Z | 2022-03-11T07:10:04.000Z | # -*- coding: utf-8 -*-
import pytest

from . import BaseTestSuit, AssertDesc, gen_rand_string

SCRIPT_SET_ID = f"UnitTest_{gen_rand_string(4)}"


class TestSuitScriptSet(BaseTestSuit):
    def setup_class(self):
        pass

    def teardown_class(self):
        pass

    def test_list(self):
        # Test the API
        code, res = self.API.get('/api/v1/script-sets/do/list')
        assert code == 200, AssertDesc.status_code(res)

    def test_add(self):
        # Test data
        data = {
            'id'         : SCRIPT_SET_ID,
            'title'      : 'Test script set title',
            'description': 'Test script set description',
        }

        # Test the API
        body = { 'data': data }
        code, res = self.API.post('/api/v1/script-sets/do/add', body=body)
        assert code == 200, AssertDesc.status_code(res)
        assert res['data']['id'] == SCRIPT_SET_ID, AssertDesc.data_value_not_match()

        self.state('test_add', True)

        # Verify the data
        query = { 'id': SCRIPT_SET_ID }
        code, res = self.API.get('/api/v1/script-sets/do/list', query=query)
        assert code == 200, AssertDesc.status_code(res)
        assert len(res['data']) == 1, AssertDesc.data_count_not_match()
        assert res['data'][0]['id'] == SCRIPT_SET_ID, AssertDesc.data_value_not_match()
        assert res['data'][0]['title'] == data['title'], AssertDesc.data_value_not_match()
        assert res['data'][0]['description'] == data['description'], AssertDesc.data_value_not_match()

    def test_modify(self):
        if not self.state('test_add'):
            pytest.skip("No test data to run this case")

        # Test data
        data = {
            'title'      : 'Test script set title (modified)',
            'description': 'Test script set description (modified)',
        }

        # Test the API
        params = { 'id': SCRIPT_SET_ID }
        body = { 'data': data }
        code, res = self.API.post('/api/v1/script-sets/:id/do/modify', params=params, body=body)
        assert code == 200, AssertDesc.status_code(res)
        assert res['data']['id'] == SCRIPT_SET_ID, AssertDesc.data_value_not_match()

        # Verify the data
        query = { 'id': SCRIPT_SET_ID }
        code, res = self.API.get('/api/v1/script-sets/do/list', query=query)
        assert code == 200, AssertDesc.status_code(res)
        assert len(res['data']) == 1, AssertDesc.data_count_not_match()
        assert res['data'][0]['id'] == SCRIPT_SET_ID, AssertDesc.data_value_not_match()
        assert res['data'][0]['title'] == data['title'], AssertDesc.data_value_not_match()
        assert res['data'][0]['description'] == data['description'], AssertDesc.data_value_not_match()

    def test_delete(self):
        if not self.state('test_add'):
            pytest.skip("No test data to run this case")

        # Test the API
        params = { 'id': SCRIPT_SET_ID }
        code, res = self.API.get('/api/v1/script-sets/:id/do/delete', params=params)
        assert code == 200, AssertDesc.status_code(res)
        assert res['data']['id'] == SCRIPT_SET_ID, AssertDesc.data_value_not_match()

        # Verify the data
        query = { 'id': SCRIPT_SET_ID }
        code, res = self.API.get('/api/v1/script-sets/do/list', query=query)
        assert code == 200, AssertDesc.status_code(res)
        assert len(res['data']) == 0, AssertDesc.data_count_not_match()
| 37.989247 | 102 | 0.545429 | 425 | 3,533 | 4.343529 | 0.16 | 0.053088 | 0.071506 | 0.077465 | 0.804442 | 0.779523 | 0.757313 | 0.737811 | 0.737811 | 0.737811 | 0 | 0.016036 | 0.311633 | 3,533 | 92 | 103 | 38.402174 | 0.74301 | 0.017549 | 0 | 0.610169 | 0 | 0 | 0.150578 | 0.066185 | 0 | 0 | 0 | 0 | 0.338983 | 1 | 0.101695 | false | 0.033898 | 0.033898 | 0 | 0.152542 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
75783ee26cc3733a66473fcbae00d3669dcc91d2 | 4,542 | py | Python | tests/test_bilstm.py | wellcometrust/WellcomeML | f7f5427f6dfdc6e5ee1342764263c6411e0f9bdf | [
"MIT"
] | 29 | 2020-01-31T17:05:38.000Z | 2021-12-14T14:17:55.000Z | tests/test_bilstm.py | wellcometrust/WellcomeML | f7f5427f6dfdc6e5ee1342764263c6411e0f9bdf | [
"MIT"
] | 342 | 2020-02-05T10:40:43.000Z | 2022-03-17T19:50:23.000Z | tests/test_bilstm.py | wellcometrust/WellcomeML | f7f5427f6dfdc6e5ee1342764263c6411e0f9bdf | [
"MIT"
] | 9 | 2020-06-07T17:01:00.000Z | 2021-11-24T16:03:38.000Z | import tempfile
from wellcomeml.ml.bilstm import BiLSTMClassifier
from wellcomeml.ml.keras_vectorizer import KerasVectorizer
from sklearn.pipeline import Pipeline
from scipy.sparse import csr_matrix
import numpy as np


def test_vanilla():
    X = [
        "One",
        "One only",
        "Two nothing else",
        "Two and three"
    ]
    Y = np.array([0, 0, 1, 1])

    model = Pipeline([
        ('vec', KerasVectorizer()),
        ('clf', BiLSTMClassifier(nb_epochs=10))
    ])
    model.fit(X, Y)
    assert model.score(X, Y) > 0.6


def test_save_load():
    X = [
        "One",
        "One only",
        "Two nothing else",
        "Two and three"
    ]
    Y = np.array([0, 0, 1, 1])

    vec = KerasVectorizer()
    X_vec = vec.fit_transform(X)

    model = BiLSTMClassifier()
    model.fit(X_vec, Y)

    with tempfile.TemporaryDirectory() as tmp_dir:
        model.save(tmp_dir)
        loaded_model = BiLSTMClassifier()
        loaded_model.load(tmp_dir)
        assert hasattr(loaded_model, 'model')
        assert loaded_model.score(X_vec, Y) > 0.6


def test_save_load_attention():
    X = [
        "One",
        "One only",
        "Two nothing else",
        "Two and three"
    ]
    Y = np.array([0, 0, 1, 1])

    vec = KerasVectorizer()
    X_vec = vec.fit_transform(X)

    model = BiLSTMClassifier(attention=True)
    model.fit(X_vec, Y)

    with tempfile.TemporaryDirectory() as tmp_dir:
        model.save(tmp_dir)
        loaded_model = BiLSTMClassifier()
        loaded_model.load(tmp_dir)
        assert hasattr(loaded_model, 'model')
        assert loaded_model.score(X_vec, Y) > 0.6


def test_multilabel():
    X = [
        "One and two",
        "One only",
        "Three and four, nothing else",
        "Two nothing else",
        "Two and three"
    ]
    Y = np.array([
        [1, 1, 0, 0],
        [1, 0, 0, 0],
        [0, 0, 1, 1],
        [0, 1, 0, 0],
        [0, 1, 1, 0]
    ])

    model = Pipeline([
        ('vec', KerasVectorizer()),
        ('clf', BiLSTMClassifier(multilabel=True))
    ])
    model.fit(X, Y)
    assert model.score(X, Y) > 0.4
    assert model.predict(X).shape == (5, 4)


def test_sparse():
    X = [
        "One and two",
        "One only",
        "Three and four, nothing else",
        "Two nothing else",
        "Two and three"
    ]
    Y = csr_matrix(np.array([
        [1, 1, 0, 0],
        [1, 0, 0, 0],
        [0, 0, 1, 1],
        [0, 1, 0, 0],
        [0, 1, 1, 0]
    ]))

    model = Pipeline([
        ('vec', KerasVectorizer()),
        ('clf', BiLSTMClassifier(
            multilabel=True,
            batch_size=2,
            sparse_y=True
        ))
    ])
    model.fit(X, Y)
    assert model.score(X, Y) > 0.4
    assert model.predict(X).shape == (5, 4)


def test_attention():
    X = [
        "One",
        "One only",
        "Two nothing else",
        "Two and three"
    ]
    Y = np.array([0, 0, 1, 1])

    model = Pipeline([
        ('vec', KerasVectorizer()),
        ('clf', BiLSTMClassifier(
            nb_epochs=10,
            attention=True,
            attention_heads=10))
    ])
    model.fit(X, Y)
    assert model.score(X, Y) > 0.6


def test_early_stopping():
    X = [
        "One",
        "One only",
        "Two nothing else",
        "Two and three"
    ]
    Y = np.array([0, 0, 1, 1])

    model = Pipeline([
        ('vec', KerasVectorizer()),
        ('clf', BiLSTMClassifier(
            early_stopping=True,
            nb_epochs=10000
        ))
    ])
    # if early_stopping is not working it will take
    # a lot of time to finish running this test
    model.fit(X, Y)
    assert model.score(X, Y) > 0.6


def test_predict_proba():
    X = [
        "One",
        "One only",
        "Two nothing else",
        "Two and three"
    ]
    Y = np.array([0, 0, 1, 1])

    model = Pipeline([
        ('vec', KerasVectorizer()),
        ('clf', BiLSTMClassifier())
    ])
    model.fit(X, Y)

    Y_pred_prob = model.predict_proba(X)
    assert sum(Y_pred_prob >= 0) == Y.shape[0]
    assert sum(Y_pred_prob <= 1) == Y.shape[0]


def test_threshold():
    X = [
        "One",
        "One only",
        "Two nothing else",
        "Two and three"
    ]
    Y = np.array([0, 0, 1, 1])

    model = Pipeline([
        ('vec', KerasVectorizer()),
        ('clf', BiLSTMClassifier(threshold=0.1))
    ])
    model.fit(X, Y)

    Y_pred_expected = model.predict_proba(X) > 0.1
    Y_pred = model.predict(X)
    assert np.array_equal(Y_pred_expected, Y_pred)
| 22.374384 | 58 | 0.518934 | 574 | 4,542 | 3.998258 | 0.156794 | 0.018301 | 0.016993 | 0.019172 | 0.757734 | 0.742048 | 0.728976 | 0.722004 | 0.722004 | 0.722004 | 0 | 0.035251 | 0.337957 | 4,542 | 202 | 59 | 22.485149 | 0.727968 | 0.019155 | 0 | 0.723529 | 0 | 0 | 0.108715 | 0 | 0 | 0 | 0 | 0 | 0.082353 | 1 | 0.052941 | false | 0 | 0.035294 | 0 | 0.088235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
757af59a37c898ce38fcfbbaa887162aa405313c | 660 | py | Python | obsei/analyzer/__init__.py | kuutsav/obsei | edce8e57ab870428a130bdef6b8fc31415635988 | [
"Apache-2.0"
] | 359 | 2021-07-03T09:34:42.000Z | 2022-03-30T21:48:08.000Z | obsei/analyzer/__init__.py | kuutsav/obsei | edce8e57ab870428a130bdef6b8fc31415635988 | [
"Apache-2.0"
] | 80 | 2020-12-23T17:22:04.000Z | 2021-07-03T05:33:28.000Z | obsei/analyzer/__init__.py | admariner/obsei | 3a4871b87f1e0e58fd2189c1193f2b86029e690d | [
"Apache-2.0"
] | 38 | 2021-07-03T11:33:48.000Z | 2022-03-25T10:26:24.000Z | from obsei.analyzer.dummy_analyzer import DummyAnalyzer, DummyAnalyzerConfig
from obsei.analyzer.ner_analyzer import TransformersNERAnalyzer, SpacyNERAnalyzer
from obsei.analyzer.pii_analyzer import PresidioPIIAnalyzer, PresidioPIIAnalyzerConfig, PresidioAnonymizerConfig, PresidioModelConfig, PresidioEngineConfig
from obsei.analyzer.sentiment_analyzer import VaderSentimentAnalyzer, TransformersSentimentAnalyzerConfig, TransformersSentimentAnalyzer
from obsei.analyzer.translation_analyzer import TranslationAnalyzer
from obsei.analyzer.classification_analyzer import ClassificationAnalyzerConfig, ZeroShotClassificationAnalyzer, TextClassificationAnalyzer
| 94.285714 | 155 | 0.915152 | 52 | 660 | 11.5 | 0.5 | 0.090301 | 0.170569 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051515 | 660 | 6 | 156 | 110 | 0.955272 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
75884266f51538e3c2ee6b087ac7a376be4ca57f | 458 | py | Python | leet/strings/firstUniqChar.py | peterlamar/python-cp-cheatsheet | f9f854064a3c657c04fab27d0a496401bfa97da1 | [
"Apache-2.0"
] | 140 | 2020-10-21T13:23:52.000Z | 2022-03-31T15:09:45.000Z | leet/strings/firstUniqChar.py | stacykutyepov/python-cp-cheatsheet | a00a57e1b36433648d1cace331e15ff276cef189 | [
"Apache-2.0"
] | 1 | 2021-07-22T14:01:25.000Z | 2021-07-22T14:01:25.000Z | leet/strings/firstUniqChar.py | stacykutyepov/python-cp-cheatsheet | a00a57e1b36433648d1cace331e15ff276cef189 | [
"Apache-2.0"
] | 33 | 2020-10-21T14:17:02.000Z | 2022-03-25T11:25:03.000Z | import collections


class Solution:
    def firstUniqChar(self, s: str) -> int:
        cnt = collections.Counter()
        for c in s:
            cnt[c] += 1
        for i, c in enumerate(s):
            if cnt[c] == 1:
                return i
        return -1

    # Alternative version: builds the Counter directly from the string
    # (this definition shadows the one above).
    def firstUniqChar(self, s: str) -> int:
        cnt = collections.Counter(s)
        for i, c in enumerate(s):
            if cnt[c] == 1:
                return i
        return -1 | 24.105263 | 43 | 0.432314 | 56 | 458 | 3.535714 | 0.339286 | 0.045455 | 0.075758 | 0.212121 | 0.868687 | 0.868687 | 0.868687 | 0.868687 | 0.868687 | 0.383838 | 0 | 0.020325 | 0.462882 | 458 | 19 | 44 | 24.105263 | 0.784553 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
75ffb62ae5b265a6265488bc99e8f8a57cc1f556 | 34 | py | Python | conf/script/src/ext/meta_prog/generics/__init__.py | benoit-dubreuil/template-repo-cpp-full-ecosystem | f506dd5e2a61cdd311b6a6a4be4abc59567b4b20 | [
"MIT"
] | null | null | null | conf/script/src/ext/meta_prog/generics/__init__.py | benoit-dubreuil/template-repo-cpp-full-ecosystem | f506dd5e2a61cdd311b6a6a4be4abc59567b4b20 | [
"MIT"
] | 113 | 2021-02-15T19:22:36.000Z | 2021-05-07T15:17:42.000Z | conf/script/src/ext/meta_prog/generics/__init__.py | benoit-dubreuil/template-repo-cpp-full-ecosystem | f506dd5e2a61cdd311b6a6a4be4abc59567b4b20 | [
"MIT"
] | null | null | null | from .cls_proxy_injector import *
| 17 | 33 | 0.823529 | 5 | 34 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f94eb966eafbfb9ccdd0fb436a2cbfad1e4d395e | 270 | py | Python | radiomicsfeatureextractionpipeline/backend/test/mock_ups/logic/roi_selector/regular_expression_roi_selector.py | Maastro-CDS-Imaging-Group/SQLite4Radiomics | e3a7afc181eec0fe04c18da00edc3772064e6758 | [
"Apache-2.0"
] | null | null | null | radiomicsfeatureextractionpipeline/backend/test/mock_ups/logic/roi_selector/regular_expression_roi_selector.py | Maastro-CDS-Imaging-Group/SQLite4Radiomics | e3a7afc181eec0fe04c18da00edc3772064e6758 | [
"Apache-2.0"
] | 6 | 2021-06-09T19:39:27.000Z | 2021-09-30T16:41:40.000Z | radiomicsfeatureextractionpipeline/backend/test/mock_ups/logic/roi_selector/regular_expression_roi_selector.py | Maastro-CDS-Imaging-Group/SQLite4Radiomics | e3a7afc181eec0fe04c18da00edc3772064e6758 | [
"Apache-2.0"
] | null | null | null | from logic.roi_selector.regular_expression_roi_selector import RegularExpressionROISelector
from test.mock_ups.logic.roi_selector.roi_selector import ROISelectorMockUp
class RegularExpressionROISelectorMockUp(RegularExpressionROISelector, ROISelectorMockUp):
    pass
| 38.571429 | 91 | 0.896296 | 26 | 270 | 9.038462 | 0.576923 | 0.187234 | 0.13617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 270 | 6 | 92 | 45 | 0.93254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
f980175ad4a3c1f825117acc9bb44bad7c452fe4 | 182 | py | Python | tests/unit/test_errors.py | MichaelYusko/Flaskbox | 26014cadf3bed5af72f6d270eb9fbc06faefcfd2 | [
"MIT"
] | 1 | 2018-01-16T18:51:28.000Z | 2018-01-16T18:51:28.000Z | tests/unit/test_errors.py | MichaelYusko/Flaskbox | 26014cadf3bed5af72f6d270eb9fbc06faefcfd2 | [
"MIT"
] | 2 | 2018-04-01T15:28:35.000Z | 2018-04-05T21:33:01.000Z | tests/unit/test_errors.py | MichaelYusko/Flaskbox | 26014cadf3bed5af72f6d270eb9fbc06faefcfd2 | [
"MIT"
] | 3 | 2018-01-15T08:29:14.000Z | 2018-04-01T15:56:01.000Z | import pytest
from flaskbox.fake_data import fake_data
def test_generate_value_error():
    with pytest.raises(TypeError):
        fake_data.generate_value([{'key': 'bad type'}])
| 20.222222 | 55 | 0.736264 | 25 | 182 | 5.08 | 0.68 | 0.188976 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 182 | 8 | 56 | 22.75 | 0.824675 | 0 | 0 | 0 | 1 | 0 | 0.06044 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f98fb4e804cd90ece2f5ad905766992b3b25d0c4 | 83 | py | Python | rl/algos/__init__.py | RohanPankaj/apex | 74e96386bf9446d1179106d6d65ea0368c1b5b27 | [
"MIT"
] | null | null | null | rl/algos/__init__.py | RohanPankaj/apex | 74e96386bf9446d1179106d6d65ea0368c1b5b27 | [
"MIT"
] | null | null | null | rl/algos/__init__.py | RohanPankaj/apex | 74e96386bf9446d1179106d6d65ea0368c1b5b27 | [
"MIT"
] | null | null | null | from .ppo import PPO
from .mirror_ppo import MirrorPPO
#from .dagger import DAgger
| 20.75 | 33 | 0.807229 | 13 | 83 | 5.076923 | 0.461538 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144578 | 83 | 3 | 34 | 27.666667 | 0.929577 | 0.313253 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f9ca9c01c917c60a1c7bbe64b7567f8b1e339756 | 49 | py | Python | language/python/module/subModules/subA.py | wxyyxc1992/coding-snippets | 302932746f7ec252651ac153f5a83a19d15d4bd9 | [
"MIT"
] | 52 | 2018-02-04T13:19:38.000Z | 2019-03-25T09:48:18.000Z | language/python/module/subModules/subA.py | wxyyxc1992/algods-snippets | 302932746f7ec252651ac153f5a83a19d15d4bd9 | [
"MIT"
] | 1 | 2020-07-16T09:02:00.000Z | 2020-07-16T09:02:00.000Z | language/python/module/subModules/subA.py | wxyyxc1992/algods-snippets | 302932746f7ec252651ac153f5a83a19d15d4bd9 | [
"MIT"
] | 9 | 2018-03-30T01:08:22.000Z | 2018-11-15T06:58:20.000Z | def subAFunc():
    print('Hello from subAFunc')
| 16.333333 | 32 | 0.673469 | 6 | 49 | 5.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183673 | 49 | 2 | 33 | 24.5 | 0.825 | 0 | 0 | 0 | 0 | 0 | 0.387755 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
fb15942594b0e87f92aaf468dbe6a0abdcdf93ca | 31 | py | Python | sc_tracker/forms/__init__.py | SpeedConEU/speedcon_donation_tracker | 9b1c9b1e95d6b93762ddc4871d9e3850f55597d3 | [
"BSD-2-Clause"
] | null | null | null | sc_tracker/forms/__init__.py | SpeedConEU/speedcon_donation_tracker | 9b1c9b1e95d6b93762ddc4871d9e3850f55597d3 | [
"BSD-2-Clause"
] | null | null | null | sc_tracker/forms/__init__.py | SpeedConEU/speedcon_donation_tracker | 9b1c9b1e95d6b93762ddc4871d9e3850f55597d3 | [
"BSD-2-Clause"
] | null | null | null | from .donate import DonateForm
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9b49a7e603f3fdbda89a6a2ee0012915cb9e885d | 104 | py | Python | config.py | bdunks/gdscript-language-server | ade3fc514b78b9c3dcdb51d17d7fc5cf147459d1 | [
"MIT"
] | null | null | null | config.py | bdunks/gdscript-language-server | ade3fc514b78b9c3dcdb51d17d7fc5cf147459d1 | [
"MIT"
] | null | null | null | config.py | bdunks/gdscript-language-server | ade3fc514b78b9c3dcdb51d17d7fc5cf147459d1 | [
"MIT"
] | null | null | null | def can_build(env, platform):
    return env['tools'] and env['gdscript']


def configure(env):
    pass
| 17.333333 | 43 | 0.673077 | 15 | 104 | 4.6 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.182692 | 104 | 5 | 44 | 20.8 | 0.811765 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.25 | 0 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
9ba1b2c097ec6e88e2027fcc39cad76bb5563b74 | 149 | py | Python | app/app/calc.py | ProXPegasus/recpie-app-api | f11644398393a7063f9330814361dae22a330203 | [
"MIT"
] | null | null | null | app/app/calc.py | ProXPegasus/recpie-app-api | f11644398393a7063f9330814361dae22a330203 | [
"MIT"
] | null | null | null | app/app/calc.py | ProXPegasus/recpie-app-api | f11644398393a7063f9330814361dae22a330203 | [
"MIT"
] | null | null | null | def add(x, y):
    return x + y


def Subtraction(x, y):
    return x - y


def Multiplication(x, y):
    return x * y


def Division(x, y):
    return x / y
| 12.416667 | 26 | 0.590604 | 28 | 149 | 3.142857 | 0.285714 | 0.181818 | 0.363636 | 0.409091 | 0.556818 | 0.443182 | 0 | 0 | 0 | 0 | 0 | 0 | 0.275168 | 149 | 11 | 27 | 13.545455 | 0.814815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
32cb5be7052d25e834de62cb685c383efceef6d4 | 217 | py | Python | src/easyconfig/errors/__init__.py | spacemanspiff2007/easyconfig | aa6f75455af5f3705bd7e15aac772b9664815624 | [
"Apache-2.0"
] | null | null | null | src/easyconfig/errors/__init__.py | spacemanspiff2007/easyconfig | aa6f75455af5f3705bd7e15aac772b9664815624 | [
"Apache-2.0"
] | 2 | 2021-09-15T06:18:28.000Z | 2022-03-19T00:43:16.000Z | src/easyconfig/errors/__init__.py | spacemanspiff2007/easyconfig | aa6f75455af5f3705bd7e15aac772b9664815624 | [
"Apache-2.0"
] | null | null | null | from .errors import DuplicateSubscriptionError, FunctionCallNotAllowedError, \
ModelNotProperlyInitialized, ReferenceFolderMissingError, SubscriptionAlreadyCanceledError
from .handler import set_exception_handler
| 54.25 | 94 | 0.889401 | 14 | 217 | 13.642857 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078341 | 217 | 3 | 95 | 72.333333 | 0.955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
32e995aa9c56a69816f4cf465ab0f641bf4315bb | 216 | py | Python | planningpoker/persistence/__init__.py | not-raspberry/planningpoker | 75113821d479f9973b41c39ad77940801c5e9525 | [
"MIT"
] | null | null | null | planningpoker/persistence/__init__.py | not-raspberry/planningpoker | 75113821d479f9973b41c39ad77940801c5e9525 | [
"MIT"
] | null | null | null | planningpoker/persistence/__init__.py | not-raspberry/planningpoker | 75113821d479f9973b41c39ad77940801c5e9525 | [
"MIT"
] | null | null | null | """
Storage backends.
Concrete implementations are to be injected into each view.
"""
from planningpoker.persistence.base import BasePersistence
from planningpoker.persistence.memory import ProcessMemoryPersistence
| 27 | 69 | 0.842593 | 23 | 216 | 7.913043 | 0.826087 | 0.186813 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101852 | 216 | 7 | 70 | 30.857143 | 0.938144 | 0.361111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b5c74eb640101f54cb113a722d5cca52da5ef32a | 40 | py | Python | spanmb/models/__init__.py | zmmzGitHub/SpanMB_BERT | 133c93e2876e27379f249df3922e531e2be66f04 | [
"MIT"
] | null | null | null | spanmb/models/__init__.py | zmmzGitHub/SpanMB_BERT | 133c93e2876e27379f249df3922e531e2be66f04 | [
"MIT"
] | null | null | null | spanmb/models/__init__.py | zmmzGitHub/SpanMB_BERT | 133c93e2876e27379f249df3922e531e2be66f04 | [
"MIT"
] | null | null | null | from spanmb.models.spanmb import SpanMB
| 20 | 39 | 0.85 | 6 | 40 | 5.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b5ebd2d66e837e68cd333598cdf2a5749a88a4dd | 98 | py | Python | PocaExchange/api/tests/functions/__init__.py | SamLangTen/PocaExchange | ed657a3c02437f07dcbd20a8ac8e0708fab26a07 | [
"Apache-2.0"
] | null | null | null | PocaExchange/api/tests/functions/__init__.py | SamLangTen/PocaExchange | ed657a3c02437f07dcbd20a8ac8e0708fab26a07 | [
"Apache-2.0"
] | null | null | null | PocaExchange/api/tests/functions/__init__.py | SamLangTen/PocaExchange | ed657a3c02437f07dcbd20a8ac8e0708fab26a07 | [
"Apache-2.0"
] | null | null | null | from .account_api_test import AccountAPITest
from .drift_bottle_api_test import DriftBottleAPITest | 49 | 53 | 0.908163 | 13 | 98 | 6.461538 | 0.692308 | 0.166667 | 0.309524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 98 | 2 | 53 | 49 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bd15b81c9ad8c955877f32626f4b1d0ab8af273f | 182 | py | Python | Modules and packages/Packages/functions/greeting/official.py | kislyakovm/introduction-to-python | 2b44da4eb5a4fc1cba7676db5f49b651fa130b87 | [
"MIT"
] | 5 | 2021-07-25T23:35:40.000Z | 2022-03-20T05:14:35.000Z | Modules and packages/Packages/functions/greeting/official.py | kislyakovm/introduction-to-python | 2b44da4eb5a4fc1cba7676db5f49b651fa130b87 | [
"MIT"
] | 24 | 2021-06-23T10:33:17.000Z | 2022-03-09T15:44:17.000Z | Modules and packages/Packages/functions/greeting/official.py | kislyakovm/introduction-to-python | 2b44da4eb5a4fc1cba7676db5f49b651fa130b87 | [
"MIT"
] | 4 | 2021-05-06T09:52:30.000Z | 2022-01-25T10:53:55.000Z | """ documentation string for module my_module
This module contains the official hello function.
"""


def hello(name):
    return f"Dear {name}, I am glad to finally meet you in person."
| 22.75 | 67 | 0.736264 | 28 | 182 | 4.75 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181319 | 182 | 7 | 68 | 26 | 0.892617 | 0.472527 | 0 | 0 | 0 | 0 | 0.602273 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
bd29fda4b61b3b4677a3e5ccab1c901a35ea203d | 25 | py | Python | euc/kingsong/__init__.py | mrk-its/euc.kingsong | 5f29c11bd88bea5d6c89260430838c616f6518ce | [
"MIT"
] | 2 | 2020-06-05T03:21:03.000Z | 2021-03-04T23:38:36.000Z | euc/kingsong/__init__.py | mrk-its/euc.kingsong | 5f29c11bd88bea5d6c89260430838c616f6518ce | [
"MIT"
] | null | null | null | euc/kingsong/__init__.py | mrk-its/euc.kingsong | 5f29c11bd88bea5d6c89260430838c616f6518ce | [
"MIT"
] | null | null | null | from .kingsong import KS
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1f99403bfbb8b4f262b0e4c3b96a49728152cd56 | 74 | py | Python | apps/odoo/lib/odoo-10.0.post20170615-py2.7.egg/odoo/addons/website_forum_doc/models/__init__.py | gtfarng/Odoo_migrade | 9cc28fae4c379e407645248a29d22139925eafe7 | [
"Apache-2.0"
] | 1 | 2019-12-19T01:53:13.000Z | 2019-12-19T01:53:13.000Z | apps/odoo/lib/odoo-10.0.post20170615-py2.7.egg/odoo/addons/website_forum_doc/models/__init__.py | gtfarng/Odoo_migrade | 9cc28fae4c379e407645248a29d22139925eafe7 | [
"Apache-2.0"
] | null | null | null | apps/odoo/lib/odoo-10.0.post20170615-py2.7.egg/odoo/addons/website_forum_doc/models/__init__.py | gtfarng/Odoo_migrade | 9cc28fae4c379e407645248a29d22139925eafe7 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import forum_documentation_toc
import forum_post
| 14.8 | 30 | 0.743243 | 10 | 74 | 5.2 | 0.8 | 0.423077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015625 | 0.135135 | 74 | 4 | 31 | 18.5 | 0.796875 | 0.283784 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1fe61e21d9e47d91deb06565648ee1927dc5c765 | 147 | py | Python | challenges/solving_project_euler/imadken/imadken_20.py | kdj309/H4ckT0b3rF3st-2k21 | 5395c0bfb442a64ad7efc7d83e12e1d08cdb7438 | [
"MIT"
] | 23 | 2021-09-21T15:48:16.000Z | 2022-01-10T10:54:49.000Z | challenges/solving_project_euler/imadken/imadken_20.py | kdj309/H4ckT0b3rF3st-2k21 | 5395c0bfb442a64ad7efc7d83e12e1d08cdb7438 | [
"MIT"
] | 14 | 2021-10-05T07:10:31.000Z | 2021-10-17T04:55:29.000Z | challenges/solving_project_euler/imadken/imadken_20.py | kdj309/H4ckT0b3rF3st-2k21 | 5395c0bfb442a64ad7efc7d83e12e1d08cdb7438 | [
"MIT"
] | 30 | 2021-09-25T19:45:22.000Z | 2021-10-31T19:16:43.000Z | from math import factorial
def Sum_Digits_Factorial(x):
return sum([int(i) for i in str(factorial(x))])
print(Sum_Digits_Factorial(100)) | 21 | 51 | 0.727891 | 24 | 147 | 4.291667 | 0.666667 | 0.174757 | 0.349515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024194 | 0.156463 | 147 | 7 | 52 | 21 | 0.806452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0.25 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
95099385a305b0e4be6ae4fb4afa5e9a057f4153 | 161 | py | Python | alectiolite/callbacks/__init__.py | alectio/Alectio-lite | b321a749d7436dad6b88125b9d6968cb2bcea172 | [
"MIT"
] | null | null | null | alectiolite/callbacks/__init__.py | alectio/Alectio-lite | b321a749d7436dad6b88125b9d6968cb2bcea172 | [
"MIT"
] | null | null | null | alectiolite/callbacks/__init__.py | alectio/Alectio-lite | b321a749d7436dad6b88125b9d6968cb2bcea172 | [
"MIT"
] | 1 | 2021-01-29T23:51:44.000Z | 2021-01-29T23:51:44.000Z | from alectiolite.callbacks.base import AlectioCallback
from alectiolite.callbacks.curate import CurateCallback
__all__ = ["AlectioCallback", "CurateCallback"]
| 26.833333 | 55 | 0.838509 | 15 | 161 | 8.733333 | 0.6 | 0.229008 | 0.366412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 161 | 5 | 56 | 32.2 | 0.891156 | 0 | 0 | 0 | 0 | 0 | 0.180124 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
20f799769f42747f8ef715196739b196d924509e | 67 | py | Python | mlflow/Ops/__init__.py | BBBBlarry/mlflow | e8f5b9450ba6d3b68fe0cbb530840c0a760d3b08 | [
"MIT"
] | null | null | null | mlflow/Ops/__init__.py | BBBBlarry/mlflow | e8f5b9450ba6d3b68fe0cbb530840c0a760d3b08 | [
"MIT"
] | null | null | null | mlflow/Ops/__init__.py | BBBBlarry/mlflow | e8f5b9450ba6d3b68fe0cbb530840c0a760d3b08 | [
"MIT"
] | null | null | null | from linalg import *
from graph_ops import *
from math_ops import * | 22.333333 | 23 | 0.791045 | 11 | 67 | 4.636364 | 0.545455 | 0.392157 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164179 | 67 | 3 | 24 | 22.333333 | 0.910714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
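The long PyKMIP test record that follows builds raw KMIP TTLV byte streams by hand. As a minimal hedged sketch of the TTLV item header layout (3-byte tag, 1-byte type, 4-byte big-endian length, per the KMIP specification), applied to the first header bytes in that file:

```python
import struct

def parse_ttlv_header(data):
    # KMIP TTLV item header: 3-byte tag, 1-byte type, 4-byte big-endian length.
    tag = int.from_bytes(data[0:3], "big")
    item_type = data[3]
    (length,) = struct.unpack(">I", data[4:8])
    return tag, item_type, length

# First header of the full_encoding stream in the test below:
# tag 0x420079 (Request Payload), type 0x01 (Structure), 0x388 payload bytes.
tag, item_type, length = parse_ttlv_header(b"\x42\x00\x79\x01\x00\x00\x03\x88")
print(hex(tag), item_type, length)  # 0x420079 1 904
```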
1f37c89e1c18dbacf2591e662ca2b4b0ae768e24 | 70,206 | py | Python | kmip/tests/unit/core/messages/payloads/test_register.py | ondrap/PyKMIP | c8ea17d8faf827e0f9d004972835128a1a71569f | [
"Apache-2.0"
] | 179 | 2015-03-20T06:08:59.000Z | 2022-03-14T02:24:38.000Z | kmip/tests/unit/core/messages/payloads/test_register.py | imharshr/PyKMIP | 9403ff3d2aa83de4c786b8eedeb85d169fd4a594 | [
"Apache-2.0"
] | 600 | 2015-04-08T14:14:48.000Z | 2022-03-28T13:49:47.000Z | kmip/tests/unit/core/messages/payloads/test_register.py | imharshr/PyKMIP | 9403ff3d2aa83de4c786b8eedeb85d169fd4a594 | [
"Apache-2.0"
] | 131 | 2015-03-30T12:51:49.000Z | 2022-03-23T04:34:34.000Z | # Copyright (c) 2019 The Johns Hopkins University/Applied Physics Laboratory
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools
from kmip.core import enums
from kmip.core import exceptions
from kmip.core import objects
from kmip.core import primitives
from kmip.core import secrets
from kmip.core import utils
from kmip.core.messages import payloads
class TestRegisterRequestPayload(testtools.TestCase):
def setUp(self):
super(TestRegisterRequestPayload, self).setUp()
self.certificate_value = (
b'\x30\x82\x03\x12\x30\x82\x01\xFA\xA0\x03\x02\x01\x02\x02\x01\x01'
b'\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30'
b'\x3B\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x0D'
b'\x30\x0B\x06\x03\x55\x04\x0A\x13\x04\x54\x45\x53\x54\x31\x0E\x30'
b'\x0C\x06\x03\x55\x04\x0B\x13\x05\x4F\x41\x53\x49\x53\x31\x0D\x30'
b'\x0B\x06\x03\x55\x04\x03\x13\x04\x4B\x4D\x49\x50\x30\x1E\x17\x0D'
b'\x31\x30\x31\x31\x30\x31\x32\x33\x35\x39\x35\x39\x5A\x17\x0D\x32'
b'\x30\x31\x31\x30\x31\x32\x33\x35\x39\x35\x39\x5A\x30\x3B\x31\x0B'
b'\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x0D\x30\x0B\x06'
b'\x03\x55\x04\x0A\x13\x04\x54\x45\x53\x54\x31\x0E\x30\x0C\x06\x03'
b'\x55\x04\x0B\x13\x05\x4F\x41\x53\x49\x53\x31\x0D\x30\x0B\x06\x03'
b'\x55\x04\x03\x13\x04\x4B\x4D\x49\x50\x30\x82\x01\x22\x30\x0D\x06'
b'\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F'
b'\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xAB\x7F\x16\x1C\x00\x42'
b'\x49\x6C\xCD\x6C\x6D\x4D\xAD\xB9\x19\x97\x34\x35\x35\x77\x76\x00'
b'\x3A\xCF\x54\xB7\xAF\x1E\x44\x0A\xFB\x80\xB6\x4A\x87\x55\xF8\x00'
b'\x2C\xFE\xBA\x6B\x18\x45\x40\xA2\xD6\x60\x86\xD7\x46\x48\x34\x6D'
b'\x75\xB8\xD7\x18\x12\xB2\x05\x38\x7C\x0F\x65\x83\xBC\x4D\x7D\xC7'
b'\xEC\x11\x4F\x3B\x17\x6B\x79\x57\xC4\x22\xE7\xD0\x3F\xC6\x26\x7F'
b'\xA2\xA6\xF8\x9B\x9B\xEE\x9E\x60\xA1\xD7\xC2\xD8\x33\xE5\xA5\xF4'
b'\xBB\x0B\x14\x34\xF4\xE7\x95\xA4\x11\x00\xF8\xAA\x21\x49\x00\xDF'
b'\x8B\x65\x08\x9F\x98\x13\x5B\x1C\x67\xB7\x01\x67\x5A\xBD\xBC\x7D'
b'\x57\x21\xAA\xC9\xD1\x4A\x7F\x08\x1F\xCE\xC8\x0B\x64\xE8\xA0\xEC'
b'\xC8\x29\x53\x53\xC7\x95\x32\x8A\xBF\x70\xE1\xB4\x2E\x7B\xB8\xB7'
b'\xF4\xE8\xAC\x8C\x81\x0C\xDB\x66\xE3\xD2\x11\x26\xEB\xA8\xDA\x7D'
b'\x0C\xA3\x41\x42\xCB\x76\xF9\x1F\x01\x3D\xA8\x09\xE9\xC1\xB7\xAE'
b'\x64\xC5\x41\x30\xFB\xC2\x1D\x80\xE9\xC2\xCB\x06\xC5\xC8\xD7\xCC'
b'\xE8\x94\x6A\x9A\xC9\x9B\x1C\x28\x15\xC3\x61\x2A\x29\xA8\x2D\x73'
b'\xA1\xF9\x93\x74\xFE\x30\xE5\x49\x51\x66\x2A\x6E\xDA\x29\xC6\xFC'
b'\x41\x13\x35\xD5\xDC\x74\x26\xB0\xF6\x05\x02\x03\x01\x00\x01\xA3'
b'\x21\x30\x1F\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x04\xE5'
b'\x7B\xD2\xC4\x31\xB2\xE8\x16\xE1\x80\xA1\x98\x23\xFA\xC8\x58\x27'
b'\x3F\x6B\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05'
b'\x00\x03\x82\x01\x01\x00\xA8\x76\xAD\xBC\x6C\x8E\x0F\xF0\x17\x21'
b'\x6E\x19\x5F\xEA\x76\xBF\xF6\x1A\x56\x7C\x9A\x13\xDC\x50\xD1\x3F'
b'\xEC\x12\xA4\x27\x3C\x44\x15\x47\xCF\xAB\xCB\x5D\x61\xD9\x91\xE9'
b'\x66\x31\x9D\xF7\x2C\x0D\x41\xBA\x82\x6A\x45\x11\x2F\xF2\x60\x89'
b'\xA2\x34\x4F\x4D\x71\xCF\x7C\x92\x1B\x4B\xDF\xAE\xF1\x60\x0D\x1B'
b'\xAA\xA1\x53\x36\x05\x7E\x01\x4B\x8B\x49\x6D\x4F\xAE\x9E\x8A\x6C'
b'\x1D\xA9\xAE\xB6\xCB\xC9\x60\xCB\xF2\xFA\xE7\x7F\x58\x7E\xC4\xBB'
b'\x28\x20\x45\x33\x88\x45\xB8\x8D\xD9\xAE\xEA\x53\xE4\x82\xA3\x6E'
b'\x73\x4E\x4F\x5F\x03\xB9\xD0\xDF\xC4\xCA\xFC\x6B\xB3\x4E\xA9\x05'
b'\x3E\x52\xBD\x60\x9E\xE0\x1E\x86\xD9\xB0\x9F\xB5\x11\x20\xC1\x98'
b'\x34\xA9\x97\xB0\x9C\xE0\x8D\x79\xE8\x13\x11\x76\x2F\x97\x4B\xB1'
b'\xC8\xC0\x91\x86\xC4\xD7\x89\x33\xE0\xDB\x38\xE9\x05\x08\x48\x77'
b'\xE1\x47\xC7\x8A\xF5\x2F\xAE\x07\x19\x2F\xF1\x66\xD1\x9F\xA9\x4A'
b'\x11\xCC\x11\xB2\x7E\xD0\x50\xF7\xA2\x7F\xAE\x13\xB2\x05\xA5\x74'
b'\xC4\xEE\x00\xAA\x8B\xD6\x5D\x0D\x70\x57\xC9\x85\xC8\x39\xEF\x33'
b'\x6A\x44\x1E\xD5\x3A\x53\xC6\xB6\xB6\x96\xF1\xBD\xEB\x5F\x7E\xA8'
b'\x11\xEB\xB2\x5A\x7F\x86'
)
# Encoding obtained from the KMIP 1.1 testing document, Section 13.2.2.
# Modified to exclude the Link attribute.
#
# TODO (ph) Add the Link attribute back in once Links are supported.
#
# This encoding matches the following set of values:
# Request Payload
# Object Type - Certificate
# Template Attribute
# Attribute
# Attribute Name - Cryptographic Usage Mask
# Attribute Value - Sign | Verify
# Certificate
# Certificate Type - X.509
# Certificate Value -
# 0x30820312308201FAA003020102020101300D06092A864886F70D01
# 01050500303B310B3009060355040613025553310D300B060355040A
# 130454455354310E300C060355040B13054F41534953310D300B0603
# 55040313044B4D4950301E170D3130313130313233353935395A170D
# 3230313130313233353935395A303B310B3009060355040613025553
# 310D300B060355040A130454455354310E300C060355040B13054F41
# 534953310D300B060355040313044B4D495030820122300D06092A86
# 4886F70D01010105000382010F003082010A0282010100AB7F161C00
# 42496CCD6C6D4DADB919973435357776003ACF54B7AF1E440AFB80B6
# 4A8755F8002CFEBA6B184540A2D66086D74648346D75B8D71812B205
# 387C0F6583BC4D7DC7EC114F3B176B7957C422E7D03FC6267FA2A6F8
# 9B9BEE9E60A1D7C2D833E5A5F4BB0B1434F4E795A41100F8AA214900
# DF8B65089F98135B1C67B701675ABDBC7D5721AAC9D14A7F081FCEC8
# 0B64E8A0ECC8295353C795328ABF70E1B42E7BB8B7F4E8AC8C810CDB
# 66E3D21126EBA8DA7D0CA34142CB76F91F013DA809E9C1B7AE64C541
# 30FBC21D80E9C2CB06C5C8D7CCE8946A9AC99B1C2815C3612A29A82D
# 73A1F99374FE30E54951662A6EDA29C6FC411335D5DC7426B0F60502
# 03010001A321301F301D0603551D0E0416041404E57BD2C431B2E816
# E180A19823FAC858273F6B300D06092A864886F70D01010505000382
# 010100A876ADBC6C8E0FF017216E195FEA76BFF61A567C9A13DC50D1
# 3FEC12A4273C441547CFABCB5D61D991E966319DF72C0D41BA826A45
# 112FF26089A2344F4D71CF7C921B4BDFAEF1600D1BAAA15336057E01
# 4B8B496D4FAE9E8A6C1DA9AEB6CBC960CBF2FAE77F587EC4BB282045
# 338845B88DD9AEEA53E482A36E734E4F5F03B9D0DFC4CAFC6BB34EA9
# 053E52BD609EE01E86D9B09FB51120C19834A997B09CE08D79E81311
# 762F974BB1C8C09186C4D78933E0DB38E905084877E147C78AF52FAE
# 07192FF166D19FA94A11CC11B27ED050F7A27FAE13B205A574C4EE00
# AA8BD65D0D7057C985C839EF336A441ED53A53C6B6B696F1BDEB5F7E
# A811EBB25A7F86
self.full_encoding = utils.BytearrayStream(
b'\x42\x00\x79\x01\x00\x00\x03\x88'
b'\x42\x00\x57\x05\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x00'
b'\x42\x00\x91\x01\x00\x00\x00\x38'
b'\x42\x00\x08\x01\x00\x00\x00\x30'
b'\x42\x00\x0A\x07\x00\x00\x00\x18'
b'\x43\x72\x79\x70\x74\x6F\x67\x72\x61\x70\x68\x69\x63\x20\x55\x73'
b'\x61\x67\x65\x20\x4D\x61\x73\x6B'
b'\x42\x00\x0B\x02\x00\x00\x00\x04\x00\x00\x00\x03\x00\x00\x00\x00'
b'\x42\x00\x13\x01\x00\x00\x03\x30'
b'\x42\x00\x1D\x05\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x00'
b'\x42\x00\x1E\x08\x00\x00\x03\x16' + self.certificate_value +
b'\x00\x00'
)
# Encoding obtained from the KMIP 1.1 testing document, Section 13.2.2.
# Modified to exclude the Link attribute. Manually converted into the
# KMIP 2.0 format.
#
# TODO (ph) Add the Link attribute back in once Links are supported.
#
# This encoding matches the following set of values:
# Request Payload
# Object Type - Certificate
# Attributes
# Cryptographic Usage Mask - Sign | Verify
# Certificate
# Certificate Type - X.509
# Certificate Value - See comment for the full encoding.
# Protection Storage Masks
# Protection Storage Mask - Software | Hardware
self.full_encoding_with_attributes = utils.BytearrayStream(
b'\x42\x00\x79\x01\x00\x00\x03\x78'
b'\x42\x00\x57\x05\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x00'
b'\x42\x01\x25\x01\x00\x00\x00\x10'
b'\x42\x00\x2C\x02\x00\x00\x00\x04\x00\x00\x00\x03\x00\x00\x00\x00'
b'\x42\x00\x13\x01\x00\x00\x03\x30'
b'\x42\x00\x1D\x05\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x00'
b'\x42\x00\x1E\x08\x00\x00\x03\x16' + self.certificate_value +
b'\x00\x00'
b'\x42\x01\x5F\x01\x00\x00\x00\x10'
b'\x42\x01\x5E\x02\x00\x00\x00\x04\x00\x00\x00\x03\x00\x00\x00\x00'
)
# Encoding obtained from the KMIP 1.1 testing document, Section 13.2.2.
# Modified to exclude the Link attribute.
#
# TODO (ph) Add the Link attribute back in once Links are supported.
#
# This encoding matches the following set of values:
# Request Payload
# Template Attribute
# Attribute
# Attribute Name - Cryptographic Usage Mask
# Attribute Value - Sign | Verify
# Certificate
# Certificate Type - X.509
# Certificate Value - See comment for the full encoding.
self.no_object_type_encoding = utils.BytearrayStream(
b'\x42\x00\x79\x01\x00\x00\x03\x78'
b'\x42\x00\x91\x01\x00\x00\x00\x38'
b'\x42\x00\x08\x01\x00\x00\x00\x30'
b'\x42\x00\x0A\x07\x00\x00\x00\x18'
b'\x43\x72\x79\x70\x74\x6F\x67\x72\x61\x70\x68\x69\x63\x20\x55\x73'
b'\x61\x67\x65\x20\x4D\x61\x73\x6B'
b'\x42\x00\x0B\x02\x00\x00\x00\x04\x00\x00\x00\x03\x00\x00\x00\x00'
b'\x42\x00\x13\x01\x00\x00\x03\x30'
b'\x42\x00\x1D\x05\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x00'
b'\x42\x00\x1E\x08\x00\x00\x03\x16' + self.certificate_value +
b'\x00\x00'
)
# Encoding obtained from the KMIP 1.1 testing document, Section 13.2.2.
#
# This encoding matches the following set of values:
# Request Payload
# Object Type - Certificate
# Certificate
# Certificate Type - X.509
# Certificate Value - See comment for the full encoding.
self.no_template_attribute_encoding = utils.BytearrayStream(
b'\x42\x00\x79\x01\x00\x00\x03\x48'
b'\x42\x00\x57\x05\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x00'
b'\x42\x00\x13\x01\x00\x00\x03\x30'
b'\x42\x00\x1D\x05\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x00'
b'\x42\x00\x1E\x08\x00\x00\x03\x16' + self.certificate_value +
b'\x00\x00'
)
# Encoding obtained from the KMIP 1.1 testing document, Section 13.2.2.
# Modified to exclude the Link attribute.
#
# TODO (ph) Add the Link attribute back in once Links are supported.
#
# This encoding matches the following set of values:
# Request Payload
# Object Type - Certificate
# Template Attribute
# Attribute
# Attribute Name - Cryptographic Usage Mask
# Attribute Value - Sign | Verify
self.no_managed_object_encoding = utils.BytearrayStream(
b'\x42\x00\x79\x01\x00\x00\x00\x50'
b'\x42\x00\x57\x05\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x00'
b'\x42\x00\x91\x01\x00\x00\x00\x38'
b'\x42\x00\x08\x01\x00\x00\x00\x30'
b'\x42\x00\x0A\x07\x00\x00\x00\x18'
b'\x43\x72\x79\x70\x74\x6F\x67\x72\x61\x70\x68\x69\x63\x20\x55\x73'
b'\x61\x67\x65\x20\x4D\x61\x73\x6B'
b'\x42\x00\x0B\x02\x00\x00\x00\x04\x00\x00\x00\x03\x00\x00\x00\x00'
)
def tearDown(self):
super(TestRegisterRequestPayload, self).tearDown()
def test_invalid_object_type(self):
"""
Test that a TypeError is raised when an invalid value is used to set
the object type of a Register request payload.
"""
kwargs = {'object_type': 'invalid'}
self.assertRaisesRegex(
TypeError,
"Object type must be an ObjectType enumeration.",
payloads.RegisterRequestPayload,
**kwargs
)
args = (
payloads.RegisterRequestPayload(),
'object_type',
'invalid'
)
self.assertRaisesRegex(
TypeError,
"Object type must be an ObjectType enumeration.",
setattr,
*args
)
def test_invalid_template_attribute(self):
"""
Test that a TypeError is raised when an invalid value is used to set
the template attribute of a Register request payload.
"""
kwargs = {'template_attribute': 'invalid'}
self.assertRaisesRegex(
TypeError,
"Template attribute must be a TemplateAttribute structure.",
payloads.RegisterRequestPayload,
**kwargs
)
args = (
payloads.RegisterRequestPayload(),
'template_attribute',
'invalid'
)
self.assertRaisesRegex(
TypeError,
"Template attribute must be a TemplateAttribute structure.",
setattr,
*args
)
def test_invalid_managed_object(self):
"""
Test that a TypeError is raised when an invalid value is used to set
the managed object of a Register request payload.
"""
kwargs = {'managed_object': 'invalid'}
self.assertRaisesRegex(
TypeError,
"Managed object must be a supported managed object structure.",
payloads.RegisterRequestPayload,
**kwargs
)
args = (
payloads.RegisterRequestPayload(),
'managed_object',
'invalid'
)
self.assertRaisesRegex(
TypeError,
"Managed object must be a supported managed object structure.",
setattr,
*args
)
def test_invalid_protection_storage_masks(self):
"""
Test that a TypeError is raised when an invalid value is used to set
the protection storage masks of a Register request payload.
"""
kwargs = {"protection_storage_masks": "invalid"}
self.assertRaisesRegex(
TypeError,
"The protection storage masks must be a ProtectionStorageMasks "
"structure.",
payloads.RegisterRequestPayload,
**kwargs
)
kwargs = {
"protection_storage_masks": objects.ProtectionStorageMasks(
tag=enums.Tags.COMMON_PROTECTION_STORAGE_MASKS
)
}
self.assertRaisesRegex(
TypeError,
"The protection storage masks must be a ProtectionStorageMasks "
"structure with a ProtectionStorageMasks tag.",
payloads.RegisterRequestPayload,
**kwargs
)
args = (
payloads.RegisterRequestPayload(),
"protection_storage_masks",
"invalid"
)
self.assertRaisesRegex(
TypeError,
"The protection storage masks must be a ProtectionStorageMasks "
"structure.",
setattr,
*args
)
args = (
payloads.RegisterRequestPayload(),
"protection_storage_masks",
objects.ProtectionStorageMasks(
tag=enums.Tags.COMMON_PROTECTION_STORAGE_MASKS
)
)
self.assertRaisesRegex(
TypeError,
"The protection storage masks must be a ProtectionStorageMasks "
"structure with a ProtectionStorageMasks tag.",
setattr,
*args
)
def test_read(self):
"""
Test that a Register request payload can be read from a data stream.
"""
payload = payloads.RegisterRequestPayload()
self.assertIsNone(payload.object_type)
self.assertIsNone(payload.template_attribute)
self.assertIsNone(payload.managed_object)
self.assertIsNone(payload.protection_storage_masks)
payload.read(self.full_encoding)
self.assertEqual(enums.ObjectType.CERTIFICATE, payload.object_type)
self.assertEqual(
objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
enums.CryptographicUsageMask.SIGN.value |
enums.CryptographicUsageMask.VERIFY.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
),
payload.template_attribute
)
self.assertEqual(
secrets.Certificate(
certificate_type=enums.CertificateType.X_509,
certificate_value=self.certificate_value
),
payload.managed_object
)
self.assertIsNone(payload.protection_storage_masks)
def test_read_kmip_2_0(self):
"""
Test that a Register request payload can be read from a data stream
encoded with the KMIP 2.0 format.
"""
payload = payloads.RegisterRequestPayload()
self.assertIsNone(payload.object_type)
self.assertIsNone(payload.template_attribute)
self.assertIsNone(payload.managed_object)
self.assertIsNone(payload.protection_storage_masks)
payload.read(
self.full_encoding_with_attributes,
kmip_version=enums.KMIPVersion.KMIP_2_0
)
self.assertEqual(enums.ObjectType.CERTIFICATE, payload.object_type)
self.assertEqual(
objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
enums.CryptographicUsageMask.SIGN.value |
enums.CryptographicUsageMask.VERIFY.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
),
payload.template_attribute
)
self.assertEqual(
secrets.Certificate(
certificate_type=enums.CertificateType.X_509,
certificate_value=self.certificate_value
),
payload.managed_object
)
self.assertEqual(
objects.ProtectionStorageMasks(protection_storage_masks=[3]),
payload.protection_storage_masks
)
def test_read_missing_object_type(self):
"""
Test that an InvalidKmipEncoding error is raised during the decoding
of a Register request payload when the object type is missing from the
encoding.
"""
payload = payloads.RegisterRequestPayload()
args = (self.no_object_type_encoding, )
self.assertRaisesRegex(
exceptions.InvalidKmipEncoding,
"The Register request payload encoding is missing the object "
"type.",
payload.read,
*args
)
def test_read_missing_template_attribute(self):
"""
Test that an InvalidKmipEncoding error is raised during the decoding
of a Register request payload when the template attribute is missing
from the encoding.
"""
payload = payloads.RegisterRequestPayload()
args = (self.no_template_attribute_encoding, )
self.assertRaisesRegex(
exceptions.InvalidKmipEncoding,
"The Register request payload encoding is missing the template "
"attribute.",
payload.read,
*args
)
def test_read_missing_attributes(self):
"""
Test that an InvalidKmipEncoding error is raised during the decoding
of a Register request payload when the attributes structure is missing
from the encoding.
"""
payload = payloads.RegisterRequestPayload()
args = (self.no_template_attribute_encoding, )
kwargs = {"kmip_version": enums.KMIPVersion.KMIP_2_0}
self.assertRaisesRegex(
exceptions.InvalidKmipEncoding,
"The Register request payload encoding is missing the attributes "
"structure.",
payload.read,
*args,
**kwargs
)
def test_read_missing_managed_object(self):
"""
Test that an InvalidKmipEncoding error is raised during the decoding
of a Register request payload when the managed object is missing from
the encoding.
"""
payload = payloads.RegisterRequestPayload()
args = (self.no_managed_object_encoding, )
self.assertRaisesRegex(
exceptions.InvalidKmipEncoding,
"The Register request payload encoding is missing the managed "
"object.",
payload.read,
*args
)
def test_write(self):
"""
Test that a Register request payload can be written to a data stream.
"""
payload = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.CERTIFICATE,
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
enums.CryptographicUsageMask.SIGN.value |
enums.CryptographicUsageMask.VERIFY.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
),
managed_object=secrets.Certificate(
certificate_type=enums.CertificateType.X_509,
certificate_value=self.certificate_value
)
)
stream = utils.BytearrayStream()
payload.write(stream)
self.assertEqual(len(self.full_encoding), len(stream))
self.assertEqual(str(self.full_encoding), str(stream))
def test_write_kmip_2_0(self):
"""
Test that a Register request payload can be written to a data stream
encoded with the KMIP 2.0 format.
"""
payload = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.CERTIFICATE,
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
enums.CryptographicUsageMask.SIGN.value |
enums.CryptographicUsageMask.VERIFY.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
),
managed_object=secrets.Certificate(
certificate_type=enums.CertificateType.X_509,
certificate_value=self.certificate_value
),
protection_storage_masks=objects.ProtectionStorageMasks(
protection_storage_masks=[
(
enums.ProtectionStorageMask.SOFTWARE.value |
enums.ProtectionStorageMask.HARDWARE.value
)
]
)
)
stream = utils.BytearrayStream()
payload.write(stream, kmip_version=enums.KMIPVersion.KMIP_2_0)
self.assertEqual(len(self.full_encoding_with_attributes), len(stream))
self.assertEqual(str(self.full_encoding_with_attributes), str(stream))
def test_write_missing_object_type(self):
"""
Test that an InvalidField error is raised during the encoding of a
Register request payload when the payload is missing the object type.
"""
payload = payloads.RegisterRequestPayload(
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
enums.CryptographicUsageMask.SIGN.value |
enums.CryptographicUsageMask.VERIFY.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
),
managed_object=secrets.Certificate(
certificate_type=enums.CertificateType.X_509,
certificate_value=self.certificate_value
)
)
args = (utils.BytearrayStream(), )
self.assertRaisesRegex(
exceptions.InvalidField,
"The Register request payload is missing the object type field.",
payload.write,
*args
)
def test_write_missing_template_attribute(self):
"""
Test that an InvalidField error is raised during the encoding of a
Register request payload when the payload is missing the template
attribute.
"""
payload = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.CERTIFICATE,
managed_object=secrets.Certificate(
certificate_type=enums.CertificateType.X_509,
certificate_value=self.certificate_value
)
)
args = (utils.BytearrayStream(), )
self.assertRaisesRegex(
exceptions.InvalidField,
"The Register request payload is missing the template attribute "
"field.",
payload.write,
*args
)
def test_write_missing_attributes(self):
"""
Test that an InvalidField error is raised during the encoding of a
Register request payload when the payload is missing the attributes
structure.
"""
payload = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.CERTIFICATE,
managed_object=secrets.Certificate(
certificate_type=enums.CertificateType.X_509,
certificate_value=self.certificate_value
)
)
args = (utils.BytearrayStream(), )
kwargs = {"kmip_version": enums.KMIPVersion.KMIP_2_0}
self.assertRaisesRegex(
exceptions.InvalidField,
"The Register request payload is missing the template attribute "
"field.",
payload.write,
*args,
**kwargs
)
def test_write_missing_managed_object(self):
"""
Test that an InvalidField error is raised during the encoding of a
Register request payload when the payload is missing the managed
object.
"""
payload = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.SECRET_DATA,
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
enums.CryptographicUsageMask.VERIFY.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
)
)
args = (utils.BytearrayStream(), )
self.assertRaisesRegex(
exceptions.InvalidField,
"The Register request payload is missing the managed object "
"field.",
payload.write,
*args
)
def test_repr(self):
"""
Test that repr can be applied to a Register request payload structure.
"""
payload = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.SECRET_DATA,
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
enums.CryptographicUsageMask.VERIFY.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
),
managed_object=secrets.SecretData(
secret_data_type=primitives.Enumeration(
enums.SecretDataType,
value=enums.SecretDataType.PASSWORD,
tag=enums.Tags.SECRET_DATA_TYPE
),
key_block=objects.KeyBlock(
key_format_type=objects.KeyFormatType(
enums.KeyFormatType.OPAQUE
),
key_value=objects.KeyValue(
key_material=objects.KeyMaterial(
(
b'\x53\x65\x63\x72\x65\x74\x50\x61\x73\x73\x77'
b'\x6F\x72\x64'
)
)
)
)
),
protection_storage_masks=objects.ProtectionStorageMasks(
protection_storage_masks=[
(
enums.ProtectionStorageMask.SOFTWARE.value |
enums.ProtectionStorageMask.HARDWARE.value
)
]
)
)
self.assertEqual(
"RegisterRequestPayload("
"object_type=ObjectType.SECRET_DATA, "
"template_attribute=Struct(), "
"managed_object=Struct(), "
"protection_storage_masks=ProtectionStorageMasks("
"protection_storage_masks=[3]))",
repr(payload)
)
def test_str(self):
"""
Test that str can be applied to a Register request payload structure.
"""
payload = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.SECRET_DATA,
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
enums.CryptographicUsageMask.VERIFY.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
),
managed_object=secrets.SecretData(
secret_data_type=primitives.Enumeration(
enums.SecretDataType,
value=enums.SecretDataType.PASSWORD,
tag=enums.Tags.SECRET_DATA_TYPE
),
key_block=objects.KeyBlock(
key_format_type=objects.KeyFormatType(
enums.KeyFormatType.OPAQUE
),
key_value=objects.KeyValue(
key_material=objects.KeyMaterial(
(
b'\x53\x65\x63\x72\x65\x74\x50\x61\x73\x73\x77'
b'\x6F\x72\x64'
)
)
)
)
),
protection_storage_masks=objects.ProtectionStorageMasks(
protection_storage_masks=[
(
enums.ProtectionStorageMask.SOFTWARE.value |
enums.ProtectionStorageMask.HARDWARE.value
)
]
)
)
self.assertEqual(
'{'
'"object_type": ObjectType.SECRET_DATA, '
'"template_attribute": Struct(), '
'"managed_object": Struct(), '
'"protection_storage_masks": {"protection_storage_masks": [3]}'
'}',
str(payload)
)
def test_equal_on_equal(self):
"""
Test that the equality operator returns True when comparing two
Register request payloads with the same data.
"""
a = payloads.RegisterRequestPayload()
b = payloads.RegisterRequestPayload()
self.assertTrue(a == b)
self.assertTrue(b == a)
a = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.CERTIFICATE,
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
enums.CryptographicUsageMask.SIGN.value |
enums.CryptographicUsageMask.VERIFY.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
),
managed_object=secrets.Certificate(
certificate_type=enums.CertificateType.X_509,
certificate_value=self.certificate_value
),
protection_storage_masks=objects.ProtectionStorageMasks(
protection_storage_masks=[
(
enums.ProtectionStorageMask.SOFTWARE.value |
enums.ProtectionStorageMask.HARDWARE.value
)
]
)
)
b = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.CERTIFICATE,
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
enums.CryptographicUsageMask.SIGN.value |
enums.CryptographicUsageMask.VERIFY.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
),
managed_object=secrets.Certificate(
certificate_type=enums.CertificateType.X_509,
certificate_value=self.certificate_value
),
protection_storage_masks=objects.ProtectionStorageMasks(
protection_storage_masks=[
(
enums.ProtectionStorageMask.SOFTWARE.value |
enums.ProtectionStorageMask.HARDWARE.value
)
]
)
)
self.assertTrue(a == b)
self.assertTrue(b == a)
def test_equal_on_not_equal_object_type(self):
"""
Test that the equality operator returns False when comparing two
Register request payloads with different object types.
"""
a = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.SYMMETRIC_KEY
)
b = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.SECRET_DATA
)
self.assertFalse(a == b)
self.assertFalse(b == a)
def test_equal_on_not_equal_template_attribute(self):
"""
Test that the equality operator returns False when comparing two
Register request payloads with different template attributes.
"""
a = payloads.RegisterRequestPayload(
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
value=enums.CryptographicUsageMask.VERIFY.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
)
)
b = payloads.RegisterRequestPayload(
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
value=enums.CryptographicUsageMask.SIGN.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
)
)
self.assertFalse(a == b)
self.assertFalse(b == a)
def test_equal_on_not_equal_managed_object(self):
"""
Test that the equality operator returns False when comparing two
Register request payloads with different managed objects.
"""
a = payloads.RegisterRequestPayload(
managed_object=secrets.Certificate(
certificate_type=enums.CertificateType.X_509,
certificate_value=self.certificate_value
)
)
b = payloads.RegisterRequestPayload(
managed_object=secrets.Certificate(
certificate_type=enums.CertificateType.PGP,
certificate_value=self.certificate_value
)
)
self.assertFalse(a == b)
self.assertFalse(b == a)
def test_equal_on_not_equal_protection_storage_masks(self):
"""
Test that the equality operator returns False when comparing two
Register request payloads with different protection storage masks.
"""
a = payloads.RegisterRequestPayload(
protection_storage_masks=objects.ProtectionStorageMasks(
protection_storage_masks=[
(
enums.ProtectionStorageMask.SOFTWARE.value |
enums.ProtectionStorageMask.HARDWARE.value
)
]
)
)
b = payloads.RegisterRequestPayload(
protection_storage_masks=objects.ProtectionStorageMasks(
protection_storage_masks=[
(
enums.ProtectionStorageMask.ON_SYSTEM.value |
enums.ProtectionStorageMask.OFF_SYSTEM.value
)
]
)
)
self.assertFalse(a == b)
self.assertFalse(b == a)
def test_equal_on_type_mismatch(self):
"""
Test that the equality operator returns False when comparing two
Register request payloads with different types.
"""
a = payloads.RegisterRequestPayload()
b = 'invalid'
self.assertFalse(a == b)
self.assertFalse(b == a)
def test_not_equal_on_equal(self):
"""
Test that the inequality operator returns False when comparing two
Register request payloads with the same data.
"""
a = payloads.RegisterRequestPayload()
b = payloads.RegisterRequestPayload()
self.assertFalse(a != b)
self.assertFalse(b != a)
a = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.CERTIFICATE,
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
enums.CryptographicUsageMask.SIGN.value |
enums.CryptographicUsageMask.VERIFY.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
),
managed_object=secrets.Certificate(
certificate_type=enums.CertificateType.X_509,
certificate_value=self.certificate_value
),
protection_storage_masks=objects.ProtectionStorageMasks(
protection_storage_masks=[
(
enums.ProtectionStorageMask.SOFTWARE.value |
enums.ProtectionStorageMask.HARDWARE.value
)
]
)
)
b = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.CERTIFICATE,
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
enums.CryptographicUsageMask.SIGN.value |
enums.CryptographicUsageMask.VERIFY.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
),
managed_object=secrets.Certificate(
certificate_type=enums.CertificateType.X_509,
certificate_value=self.certificate_value
),
protection_storage_masks=objects.ProtectionStorageMasks(
protection_storage_masks=[
(
enums.ProtectionStorageMask.SOFTWARE.value |
enums.ProtectionStorageMask.HARDWARE.value
)
]
)
)
self.assertFalse(a != b)
self.assertFalse(b != a)
def test_not_equal_on_not_equal_object_type(self):
"""
Test that the inequality operator returns True when comparing two
Register request payloads with different object types.
"""
a = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.SYMMETRIC_KEY
)
b = payloads.RegisterRequestPayload(
object_type=enums.ObjectType.SECRET_DATA
)
self.assertTrue(a != b)
self.assertTrue(b != a)
def test_not_equal_on_not_equal_template_attribute(self):
"""
Test that the inequality operator returns True when comparing two
Register request payloads with different template attributes.
"""
a = payloads.RegisterRequestPayload(
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
value=enums.CryptographicUsageMask.VERIFY.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
)
)
b = payloads.RegisterRequestPayload(
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"Cryptographic Usage Mask"
),
attribute_value=primitives.Integer(
value=enums.CryptographicUsageMask.SIGN.value,
tag=enums.Tags.CRYPTOGRAPHIC_USAGE_MASK
)
)
]
)
)
self.assertTrue(a != b)
self.assertTrue(b != a)
def test_not_equal_on_not_equal_managed_object(self):
"""
Test that the inequality operator returns True when comparing two
Register request payloads with different managed objects.
"""
a = payloads.RegisterRequestPayload(
managed_object=secrets.Certificate(
certificate_type=enums.CertificateType.X_509,
certificate_value=self.certificate_value
)
)
b = payloads.RegisterRequestPayload(
managed_object=secrets.Certificate(
certificate_type=enums.CertificateType.PGP,
certificate_value=self.certificate_value
)
)
self.assertTrue(a != b)
self.assertTrue(b != a)
def test_not_equal_on_not_equal_protection_storage_masks(self):
"""
Test that the inequality operator returns True when comparing two
Register request payloads with different protection storage masks.
"""
a = payloads.RegisterRequestPayload(
protection_storage_masks=objects.ProtectionStorageMasks(
protection_storage_masks=[
(
enums.ProtectionStorageMask.SOFTWARE.value |
enums.ProtectionStorageMask.HARDWARE.value
)
]
)
)
b = payloads.RegisterRequestPayload(
protection_storage_masks=objects.ProtectionStorageMasks(
protection_storage_masks=[
(
enums.ProtectionStorageMask.ON_SYSTEM.value |
enums.ProtectionStorageMask.OFF_SYSTEM.value
)
]
)
)
self.assertTrue(a != b)
self.assertTrue(b != a)
def test_not_equal_on_type_mismatch(self):
"""
Test that the inequality operator returns True when comparing two
Register request payloads with different types.
"""
a = payloads.RegisterRequestPayload()
b = 'invalid'
self.assertTrue(a != b)
self.assertTrue(b != a)
class TestRegisterResponsePayload(testtools.TestCase):
def setUp(self):
super(TestRegisterResponsePayload, self).setUp()
# Encoding obtained from the KMIP 1.1 testing document, Section 13.2.2.
# Modified to include the template attribute.
#
# This encoding matches the following set of values:
# Response Payload
# Unique Identifier - 7091d0bf-548a-4d4a-93a6-6dd71cf75221
# Template Attribute
# Attribute
# Attribute Name - State
# Attribute Value - Pre-active
self.full_encoding = utils.BytearrayStream(
b'\x42\x00\x7C\x01\x00\x00\x00\x60'
b'\x42\x00\x94\x07\x00\x00\x00\x24'
b'\x37\x30\x39\x31\x64\x30\x62\x66\x2D\x35\x34\x38\x61\x2D\x34\x64'
b'\x34\x61\x2D\x39\x33\x61\x36\x2D\x36\x64\x64\x37\x31\x63\x66\x37'
b'\x35\x32\x32\x31\x00\x00\x00\x00'
b'\x42\x00\x91\x01\x00\x00\x00\x28'
b'\x42\x00\x08\x01\x00\x00\x00\x20'
b'\x42\x00\x0A\x07\x00\x00\x00\x05'
b'\x53\x74\x61\x74\x65\x00\x00\x00'
b'\x42\x00\x0B\x05\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x00'
)
# Encoding obtained from the KMIP 1.1 testing document, Section 13.2.2.
# Modified to include the template attribute.
#
# This encoding matches the following set of values:
# Response Payload
# Template Attribute
# Attribute
# Attribute Name - State
# Attribute Value - Pre-active
self.no_unique_identifier_encoding = utils.BytearrayStream(
b'\x42\x00\x7C\x01\x00\x00\x00\x30'
b'\x42\x00\x91\x01\x00\x00\x00\x28'
b'\x42\x00\x08\x01\x00\x00\x00\x20'
b'\x42\x00\x0A\x07\x00\x00\x00\x05'
b'\x53\x74\x61\x74\x65\x00\x00\x00'
b'\x42\x00\x0B\x05\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x00'
)
# Encoding obtained from the KMIP 1.1 testing document, Section 13.2.2.
#
# This encoding matches the following set of values:
# Response Payload
# Unique Identifier - 7091d0bf-548a-4d4a-93a6-6dd71cf75221
self.no_template_attribute_encoding = utils.BytearrayStream(
b'\x42\x00\x7C\x01\x00\x00\x00\x30'
b'\x42\x00\x94\x07\x00\x00\x00\x24'
b'\x37\x30\x39\x31\x64\x30\x62\x66\x2D\x35\x34\x38\x61\x2D\x34\x64'
b'\x34\x61\x2D\x39\x33\x61\x36\x2D\x36\x64\x64\x37\x31\x63\x66\x37'
b'\x35\x32\x32\x31\x00\x00\x00\x00'
)
def test_invalid_unique_identifier(self):
"""
Test that a TypeError is raised when an invalid value is used to set
the unique identifier of a Register response payload.
"""
kwargs = {'unique_identifier': 0}
self.assertRaisesRegex(
TypeError,
"Unique identifier must be a string.",
payloads.RegisterResponsePayload,
**kwargs
)
args = (payloads.RegisterResponsePayload(), 'unique_identifier', 0)
self.assertRaisesRegex(
TypeError,
"Unique identifier must be a string.",
setattr,
*args
)
def test_invalid_template_attribute(self):
"""
Test that a TypeError is raised when an invalid value is used to set
the template attribute of a Register response payload.
"""
kwargs = {'template_attribute': 'invalid'}
self.assertRaisesRegex(
TypeError,
"Template attribute must be a TemplateAttribute structure.",
payloads.RegisterResponsePayload,
**kwargs
)
args = (
payloads.RegisterResponsePayload(),
'template_attribute',
'invalid'
)
self.assertRaisesRegex(
TypeError,
"Template attribute must be a TemplateAttribute structure.",
setattr,
*args
)
def test_read(self):
"""
Test that a Register response payload can be read from a data stream.
"""
payload = payloads.RegisterResponsePayload()
self.assertIsNone(payload.unique_identifier)
self.assertIsNone(payload.template_attribute)
payload.read(self.full_encoding)
self.assertEqual(
"7091d0bf-548a-4d4a-93a6-6dd71cf75221",
payload.unique_identifier
)
self.assertEqual(
objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"State"
),
attribute_value=primitives.Enumeration(
enums.State,
enums.State.PRE_ACTIVE,
tag=enums.Tags.STATE
)
)
]
),
payload.template_attribute
)
def test_read_kmip_2_0(self):
"""
Test that a Register response payload can be read from a data stream
encoded with the KMIP 2.0 format.
"""
payload = payloads.RegisterResponsePayload()
self.assertIsNone(payload.unique_identifier)
self.assertIsNone(payload.template_attribute)
payload.read(
self.no_template_attribute_encoding,
kmip_version=enums.KMIPVersion.KMIP_2_0
)
self.assertEqual(
"7091d0bf-548a-4d4a-93a6-6dd71cf75221",
payload.unique_identifier
)
self.assertIsNone(payload.template_attribute)
def test_read_missing_unique_identifier(self):
"""
Test that an InvalidKmipEncoding error is raised during the decoding
of a Register response payload when the unique identifier is missing
from the encoding.
"""
payload = payloads.RegisterResponsePayload()
self.assertIsNone(payload.unique_identifier)
self.assertIsNone(payload.template_attribute)
args = (self.no_unique_identifier_encoding, )
self.assertRaisesRegex(
exceptions.InvalidKmipEncoding,
"The Register response payload encoding is missing the unique "
"identifier.",
payload.read,
*args
)
def test_read_missing_template_attribute(self):
"""
Test that a Register response payload can be read from a data stream
even when missing the template attribute.
"""
payload = payloads.RegisterResponsePayload()
self.assertIsNone(payload.unique_identifier)
self.assertIsNone(payload.template_attribute)
payload.read(self.no_template_attribute_encoding)
self.assertEqual(
"7091d0bf-548a-4d4a-93a6-6dd71cf75221",
payload.unique_identifier
)
self.assertIsNone(payload.template_attribute)
def test_write(self):
"""
Test that a Register response payload can be written to a data stream.
"""
payload = payloads.RegisterResponsePayload(
unique_identifier="7091d0bf-548a-4d4a-93a6-6dd71cf75221",
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"State"
),
attribute_value=primitives.Enumeration(
enums.State,
value=enums.State.PRE_ACTIVE,
tag=enums.Tags.STATE
)
)
]
)
)
stream = utils.BytearrayStream()
payload.write(stream)
self.assertEqual(len(self.full_encoding), len(stream))
self.assertEqual(str(self.full_encoding), str(stream))
def test_write_kmip_2_0(self):
"""
Test that a Register response payload can be written to a data stream
encoded with the KMIP 2.0 format.
"""
payload = payloads.RegisterResponsePayload(
unique_identifier="7091d0bf-548a-4d4a-93a6-6dd71cf75221",
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"State"
),
attribute_value=primitives.Enumeration(
enums.State,
value=enums.State.PRE_ACTIVE,
tag=enums.Tags.STATE
)
)
]
)
)
stream = utils.BytearrayStream()
payload.write(stream, kmip_version=enums.KMIPVersion.KMIP_2_0)
self.assertEqual(len(self.no_template_attribute_encoding), len(stream))
self.assertEqual(str(self.no_template_attribute_encoding), str(stream))
def test_write_missing_unique_identifier(self):
"""
Test that an InvalidField error is raised during the encoding of a
Register response payload when the payload is missing the unique
identifier.
"""
payload = payloads.RegisterResponsePayload(
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"State"
),
attribute_value=primitives.Enumeration(
enums.State,
value=enums.State.PRE_ACTIVE,
tag=enums.Tags.STATE
)
)
]
)
)
stream = utils.BytearrayStream()
args = (stream, )
self.assertRaisesRegex(
exceptions.InvalidField,
"The Register response payload is missing the unique identifier "
"field.",
payload.write,
*args
)
def test_write_missing_template_attribute(self):
"""
Test that a Register response payload can be written to a data stream
even when missing the template attribute.
"""
payload = payloads.RegisterResponsePayload(
unique_identifier="7091d0bf-548a-4d4a-93a6-6dd71cf75221"
)
stream = utils.BytearrayStream()
payload.write(stream)
self.assertEqual(len(self.no_template_attribute_encoding), len(stream))
self.assertEqual(str(self.no_template_attribute_encoding), str(stream))
def test_repr(self):
"""
Test that repr can be applied to a Register response payload structure.
"""
payload = payloads.RegisterResponsePayload(
unique_identifier="7091d0bf-548a-4d4a-93a6-6dd71cf75221",
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"State"
),
attribute_value=primitives.Enumeration(
enums.State,
value=enums.State.PRE_ACTIVE,
tag=enums.Tags.STATE
)
)
]
)
)
self.assertEqual(
"RegisterResponsePayload("
"unique_identifier='7091d0bf-548a-4d4a-93a6-6dd71cf75221', "
"template_attribute=Struct())",
repr(payload)
)
def test_str(self):
"""
Test that str can be applied to a Register response payload structure.
"""
payload = payloads.RegisterResponsePayload(
unique_identifier="7091d0bf-548a-4d4a-93a6-6dd71cf75221",
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"State"
),
attribute_value=primitives.Enumeration(
enums.State,
value=enums.State.PRE_ACTIVE,
tag=enums.Tags.STATE
)
)
]
)
)
self.assertEqual(
'{'
'"unique_identifier": "7091d0bf-548a-4d4a-93a6-6dd71cf75221", '
'"template_attribute": Struct()'
'}',
str(payload)
)
def test_equal_on_equal(self):
"""
Test that the equality operator returns True when comparing two
Register response payloads with the same data.
"""
a = payloads.RegisterResponsePayload()
b = payloads.RegisterResponsePayload()
self.assertTrue(a == b)
self.assertTrue(b == a)
a = payloads.RegisterResponsePayload(
unique_identifier="7091d0bf-548a-4d4a-93a6-6dd71cf75221",
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"State"
),
attribute_value=primitives.Enumeration(
enums.State,
value=enums.State.PRE_ACTIVE,
tag=enums.Tags.STATE
)
)
]
)
)
b = payloads.RegisterResponsePayload(
unique_identifier="7091d0bf-548a-4d4a-93a6-6dd71cf75221",
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"State"
),
attribute_value=primitives.Enumeration(
enums.State,
value=enums.State.PRE_ACTIVE,
tag=enums.Tags.STATE
)
)
]
)
)
self.assertTrue(a == b)
self.assertTrue(b == a)
def test_equal_on_not_equal_unique_identifier(self):
"""
Test that the equality operator returns False when comparing two
Register response payloads with different unique identifiers.
"""
a = payloads.RegisterResponsePayload(unique_identifier="a")
b = payloads.RegisterResponsePayload(unique_identifier="b")
self.assertFalse(a == b)
self.assertFalse(b == a)
def test_equal_on_not_equal_template_attribute(self):
"""
Test that the equality operator returns False when comparing two
Register response payloads with different template attributes.
"""
a = payloads.RegisterResponsePayload(
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"State"
),
attribute_value=primitives.Enumeration(
enums.State,
value=enums.State.PRE_ACTIVE,
tag=enums.Tags.STATE
)
)
]
)
)
b = payloads.RegisterResponsePayload(
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"State"
),
attribute_value=primitives.Enumeration(
enums.State,
value=enums.State.ACTIVE,
tag=enums.Tags.STATE
)
)
]
)
)
self.assertFalse(a == b)
self.assertFalse(b == a)
def test_equal_on_type_mismatch(self):
"""
Test that the equality operator returns False when comparing two
Register response payloads with different types.
"""
a = payloads.RegisterResponsePayload()
b = "invalid"
self.assertFalse(a == b)
self.assertFalse(b == a)
def test_not_equal_on_equal(self):
"""
Test that the inequality operator returns False when comparing two
Register response payloads with the same data.
"""
a = payloads.RegisterResponsePayload()
b = payloads.RegisterResponsePayload()
self.assertFalse(a != b)
self.assertFalse(b != a)
a = payloads.RegisterResponsePayload(
unique_identifier="7091d0bf-548a-4d4a-93a6-6dd71cf75221",
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"State"
),
attribute_value=primitives.Enumeration(
enums.State,
value=enums.State.PRE_ACTIVE,
tag=enums.Tags.STATE
)
)
]
)
)
b = payloads.RegisterResponsePayload(
unique_identifier="7091d0bf-548a-4d4a-93a6-6dd71cf75221",
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"State"
),
attribute_value=primitives.Enumeration(
enums.State,
value=enums.State.PRE_ACTIVE,
tag=enums.Tags.STATE
)
)
]
)
)
self.assertFalse(a != b)
self.assertFalse(b != a)
def test_not_equal_on_not_equal_unique_identifier(self):
"""
Test that the inequality operator returns True when comparing two
Register response payloads with different unique identifiers.
"""
a = payloads.RegisterResponsePayload(unique_identifier="a")
b = payloads.RegisterResponsePayload(unique_identifier="b")
self.assertTrue(a != b)
self.assertTrue(b != a)
def test_not_equal_on_not_equal_template_attribute(self):
"""
Test that the inequality operator returns True when comparing two
Register response payloads with different template attributes.
"""
a = payloads.RegisterResponsePayload(
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"State"
),
attribute_value=primitives.Enumeration(
enums.State,
value=enums.State.PRE_ACTIVE,
tag=enums.Tags.STATE
)
)
]
)
)
b = payloads.RegisterResponsePayload(
template_attribute=objects.TemplateAttribute(
attributes=[
objects.Attribute(
attribute_name=objects.Attribute.AttributeName(
"State"
),
attribute_value=primitives.Enumeration(
enums.State,
value=enums.State.ACTIVE,
tag=enums.Tags.STATE
)
)
]
)
)
self.assertTrue(a != b)
self.assertTrue(b != a)
def test_not_equal_on_type_mismatch(self):
"""
Test that the inequality operator returns True when comparing two
Register response payloads with different types.
"""
a = payloads.RegisterResponsePayload()
b = "invalid"
self.assertTrue(a != b)
self.assertTrue(b != a)
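The equality and inequality tests above all exercise one dunder-method pattern. As a hedged illustration (this `PayloadSketch` class is a hypothetical stand-in, not PyKMIP's actual implementation), a payload class that satisfies these assertions typically looks like:

```python
class PayloadSketch:
    """Hypothetical stand-in for a KMIP payload with value-based equality."""

    def __init__(self, object_type=None, template_attribute=None):
        self.object_type = object_type
        self.template_attribute = template_attribute

    def __eq__(self, other):
        # Returning NotImplemented (rather than False) lets Python fall back
        # to the reflected comparison, so 'invalid' == payload also works.
        if not isinstance(other, PayloadSketch):
            return NotImplemented
        return (self.object_type == other.object_type and
                self.template_attribute == other.template_attribute)

    def __ne__(self, other):
        result = self.__eq__(other)
        return result if result is NotImplemented else not result


a = PayloadSketch(object_type="certificate")
b = PayloadSketch(object_type="certificate")
print(a == b)          # True
print(a == "invalid")  # False in both orders, via the NotImplemented fallback
```

With both `__eq__` and `__ne__` defined this way, every assertion style used in the tests (`a == b`, `b == a`, `a != b`, `b != a`) behaves symmetrically, including against non-payload types such as the `'invalid'` string.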
| 39.177455 | 79 | 0.549711 | 6,477 | 70,206 | 5.853945 | 0.07874 | 0.028009 | 0.021126 | 0.03244 | 0.871822 | 0.859321 | 0.834081 | 0.819496 | 0.813271 | 0.809737 | 0 | 0.093231 | 0.36994 | 70,206 | 1,791 | 80 | 39.19933 | 0.763937 | 0.164259 | 0 | 0.681577 | 0 | 0.055345 | 0.165432 | 0.124877 | 0 | 0 | 0 | 0.000558 | 0.087945 | 1 | 0.040182 | false | 0.001516 | 0.006065 | 0 | 0.047763 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1f6004fe40619bfe1de5b9aa1069e2a921243eab | 314 | py | Python | tests/test_base_null.py | stjordanis/datar | 4e2b5db026ad35918954576badef9951928c0cb1 | [
"MIT"
] | 110 | 2021-03-09T04:10:40.000Z | 2022-03-13T10:28:20.000Z | tests/test_base_null.py | sthagen/datar | 1218a549e2f0547c7b5a824ca6d9adf1bf96ba46 | [
"MIT"
] | 54 | 2021-06-20T18:53:44.000Z | 2022-03-29T22:13:07.000Z | tests/test_base_null.py | sthagen/datar | 1218a549e2f0547c7b5a824ca6d9adf1bf96ba46 | [
"MIT"
] | 11 | 2021-06-18T03:03:14.000Z | 2022-02-25T11:48:26.000Z | import pytest
from datar.base.null import *
def test_null_is_none():
assert NULL is None
def test_as_null():
assert as_null() is NULL
assert as_null(1) is NULL
assert as_null(a=1) is NULL
assert as_null(1, a=1) is NULL
def test_is_null():
assert is_null(NULL)
assert not is_null(1)
| 18.470588 | 34 | 0.691083 | 59 | 314 | 3.457627 | 0.254237 | 0.205882 | 0.235294 | 0.313725 | 0.279412 | 0.279412 | 0 | 0 | 0 | 0 | 0 | 0.020492 | 0.22293 | 314 | 16 | 35 | 19.625 | 0.815574 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.583333 | 1 | 0.25 | true | 0 | 0.166667 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1f6203a1ab4a80a5831b27bdd972a0bbb7c34d0f | 125 | py | Python | core/config/production.py | unklearn/python-runtime | d6647ce9bb9c02879a4b0fc58e267806b199eb2e | [
"MIT"
] | null | null | null | core/config/production.py | unklearn/python-runtime | d6647ce9bb9c02879a4b0fc58e267806b199eb2e | [
"MIT"
] | null | null | null | core/config/production.py | unklearn/python-runtime | d6647ce9bb9c02879a4b0fc58e267806b199eb2e | [
"MIT"
] | null | null | null | from .development import DevelopmentConfig
class ProductionConfig(DevelopmentConfig):
"""Production config"""
pass
| 17.857143 | 42 | 0.768 | 10 | 125 | 9.6 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152 | 125 | 6 | 43 | 20.833333 | 0.90566 | 0.136 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
2f3ca990418f65827a5d26779f25779ed68ec928 | 119 | py | Python | core/model/__init__.py | magdalenawi/DNN-PP | b9cec20369b64eb078c7a84ae739c44197f75563 | [
"MIT"
] | null | null | null | core/model/__init__.py | magdalenawi/DNN-PP | b9cec20369b64eb078c7a84ae739c44197f75563 | [
"MIT"
] | null | null | null | core/model/__init__.py | magdalenawi/DNN-PP | b9cec20369b64eb078c7a84ae739c44197f75563 | [
"MIT"
] | null | null | null | from model.DNNPP import Descriptor
from model.getFeatures import save_smiles_dicts, get_smiles_dicts, get_smiles_array
| 39.666667 | 83 | 0.882353 | 18 | 119 | 5.5 | 0.611111 | 0.181818 | 0.282828 | 0.40404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084034 | 119 | 2 | 84 | 59.5 | 0.908257 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
85f44cb8abae59a22c5083e4965a97a73bf226cd | 4,264 | py | Python | misc/filtering/biquad.py | Stefanlarsson95/AMPI | b801511a7fc2e7bb2dbac7088cc21ded48693094 | [
"MIT"
] | 1 | 2019-08-18T15:22:29.000Z | 2019-08-18T15:22:29.000Z | misc/filtering/biquad.py | Stefanlarsson95/AMPI | b801511a7fc2e7bb2dbac7088cc21ded48693094 | [
"MIT"
] | null | null | null | misc/filtering/biquad.py | Stefanlarsson95/AMPI | b801511a7fc2e7bb2dbac7088cc21ded48693094 | [
"MIT"
] | null | null | null | '''
Created on 09.11.2013
@author: matuschd
Formulas from "Cookbook formulae for audio EQ biquad filter coefficients"
by Robert Bristow-Johnson <rbj@audioimagination.com>
'''
import math
def _omega(f0,fs):
return math.pi*f0/fs*2
def _alpha(omega,q):
return math.sin(omega)/(2*q)
def _a(dbgain):
return pow(10,dbgain/40)
def _normalize(params,a0):
res=[]
for p in params:
res.append(p/a0)
return res
def low_pass(f0,q,fs):
w0=_omega(f0,fs)
alpha=_alpha(w0,q)
b0 = (1 - math.cos(w0))/2
b1 = 1 - math.cos(w0)
b2 = (1 - math.cos(w0))/2
a0 = 1 + alpha
a1 = -2* math.cos(w0)
a2 = 1 - alpha
return _normalize([a1,a2,b0,b1,b2],a0)
def high_pass(f0,q,fs):
w0=_omega(f0,fs)
alpha=_alpha(w0,q)
b0 = (1 + math.cos(w0))/2
b1 = -(1 + math.cos(w0))
b2 = (1 + math.cos(w0))/2
a0 = 1 + alpha
a1 = -2*math.cos(w0)
a2 = 1 - alpha
return _normalize([a1,a2,b0,b1,b2],a0)
def band_pass_peak_q(f0,q,fs):
w0=_omega(f0,fs)
alpha=_alpha(w0,q)
b0 = math.sin(w0)/2
b1 = 0
b2 = -math.sin(w0)/2
a0 = 1 + alpha
a1 = -2*math.cos(w0)
a2 = 1 - alpha
return _normalize([a1,a2,b0,b1,b2],a0)
def band_pass(f0,q,fs):
w0=_omega(f0,fs)
alpha=_alpha(w0,q)
b0 = alpha
b1 = 0
b2 = -alpha
a0 = 1 + alpha
a1 = -2*math.cos(w0)
a2 = 1 - alpha
return _normalize([a1,a2,b0,b1,b2],a0)
def notch(f0,q,fs):
w0=_omega(f0,fs)
alpha=_alpha(w0,q)
b0 = 1
b1 = -2*math.cos(w0)
b2 = 1
a0 = 1 + alpha
a1 = -2*math.cos(w0)
a2 = 1 - alpha
return _normalize([a1,a2,b0,b1,b2],a0)
def all_pass(f0,q,fs):
w0=_omega(f0,fs)
alpha=_alpha(w0,q)
b0 = 1 - alpha
b1 = -2*math.cos(w0)
b2 = 1 + alpha
a0 = 1 + alpha
a1 = -2*math.cos(w0)
a2 = 1 - alpha
return _normalize([a1,a2,b0,b1,b2],a0)
def peaking_eq(f0,q,dbgain,fs):
w0=_omega(f0,fs)
alpha=_alpha(w0,q)
a=_a(dbgain)
b0 = 1 + alpha*a
b1 = -2*math.cos(w0)
b2 = 1 - alpha*a
a0 = 1 + alpha/a
a1 = -2*math.cos(w0)
a2 = 1 - alpha/a
return _normalize([a1,a2,b0,b1,b2],a0)
def low_shelf(f0,q,dbgain,fs):
w0=_omega(f0,fs)
alpha=_alpha(w0,q)
a=_a(dbgain)
b0 = a*( (a+1) - (a-1)*math.cos(w0) + 2*math.sqrt(a)*alpha )
b1 = 2*a*( (a-1) - (a+1)*math.cos(w0) )
b2 = a*( (a+1) - (a-1)*math.cos(w0) - 2*math.sqrt(a)*alpha )
a0 = (a+1) + (a-1)*math.cos(w0) + 2*math.sqrt(a)*alpha
a1 = -2*( (a-1) + (a+1)*math.cos(w0) )
a2 = (a+1) + (a-1)*math.cos(w0) - 2*math.sqrt(a)*alpha
return _normalize([a1,a2,b0,b1,b2],a0)
def high_shelf(f0,q,dbgain,fs):
w0=_omega(f0,fs)
alpha=_alpha(w0,q)
a=_a(dbgain)
b0 = a*( (a+1) + (a-1)*math.cos(w0) + 2*math.sqrt(a)*alpha )
b1 = -2*a*( (a-1) + (a+1)*math.cos(w0) )
b2 = a*( (a+1) + (a-1)*math.cos(w0) - 2*math.sqrt(a)*alpha )
a0 = (a+1) - (a-1)*math.cos(w0) + 2*math.sqrt(a)*alpha
a1 = 2*( (a-1) - (a+1)*math.cos(w0) )
a2 = (a+1) - (a-1)*math.cos(w0) - 2*math.sqrt(a)*alpha
return _normalize([a1,a2,b0,b1,b2],a0)
'''
From "A Practical Guide for Digital Audio IIR Filters"
http://freeverb3.sourceforge.net/iir_filter.shtml
'''
def low_pass_firstorder(f0,q,fs):
# note: q is unused here; first-order filters have no resonance, but the
# parameter is kept so all filter factories share the same signature
w = math.tan(math.pi*f0/fs)
n = 1/(1+w)
b0 = w*n
b1 = b0
a1 = n*(w-1)
return [a1,0,b0,b1,0]
def high_pass_firstorder(f0,q,fs):
w = math.tan(math.pi*f0/fs)
n = 1/(1+w)
b0 = n
b1 = -b0
a1 = n*(w-1)
return [a1,0,b0,b1,0]
| 27.333333 | 79 | 0.451923 | 671 | 4,264 | 2.798808 | 0.120715 | 0.104366 | 0.134185 | 0.095847 | 0.744941 | 0.744941 | 0.744941 | 0.736954 | 0.707668 | 0.691693 | 0 | 0.113585 | 0.378518 | 4,264 | 155 | 80 | 27.509677 | 0.595094 | 0.039634 | 0 | 0.521008 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12605 | false | 0.058824 | 0.008403 | 0.02521 | 0.260504 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
c83e2026b56664f087f9f005fc6b0fba0557b86a | 186 | py | Python | textrankpy/__init__.py | itsshavar/textrankpy | e9c9ce316142a81b25d64ccf016f5f7234c3757c | [
"Apache-2.0"
] | null | null | null | textrankpy/__init__.py | itsshavar/textrankpy | e9c9ce316142a81b25d64ccf016f5f7234c3757c | [
"Apache-2.0"
] | null | null | null | textrankpy/__init__.py | itsshavar/textrankpy | e9c9ce316142a81b25d64ccf016f5f7234c3757c | [
"Apache-2.0"
] | null | null | null | from .textrankpy import json_iter, limit_keyphrases, limit_sentences, make_sentence, normalize_key_phrases, parse_doc, pretty_print, rank_kernel, render_ranks, text_rank, top_sentences
| 62 | 184 | 0.860215 | 26 | 186 | 5.692308 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080645 | 186 | 2 | 185 | 93 | 0.865497 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
c0de10a0e768fee809ea93d904a448451fb21946 | 28 | py | Python | dataloader/__init__.py | Nyquixt/DyConv | 255193068424aaa83352bee258d34cb8b32b6ee6 | [
"MIT"
] | null | null | null | dataloader/__init__.py | Nyquixt/DyConv | 255193068424aaa83352bee258d34cb8b32b6ee6 | [
"MIT"
] | null | null | null | dataloader/__init__.py | Nyquixt/DyConv | 255193068424aaa83352bee258d34cb8b32b6ee6 | [
"MIT"
] | null | null | null | from .down_imagenet import * | 28 | 28 | 0.821429 | 4 | 28 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c0f81aa1d8c09509eb9fef4adcb9648edaa752ca | 2,906 | py | Python | src/data/test_files.py | tamarakatic/radiology_project | 5e46774ce0b73b2899eda0336163b5fa534abce6 | [
"MIT"
] | null | null | null | src/data/test_files.py | tamarakatic/radiology_project | 5e46774ce0b73b2899eda0336163b5fa534abce6 | [
"MIT"
] | null | null | null | src/data/test_files.py | tamarakatic/radiology_project | 5e46774ce0b73b2899eda0336163b5fa534abce6 | [
"MIT"
] | null | null | null | TEST_FILES=['9.txt', '21.txt', '25.txt', '58.txt', '63.txt', '76.txt', '104.txt', '105.txt', '127.txt', '152.txt', '156.txt', '160.txt', '165.txt', '188.txt', '199.txt', '201.txt', '210.txt', '251.txt', '273.txt', '295.txt', '322.txt', '355.txt', '380.txt', '404.txt', '417.txt', '422.txt', '426.txt', '463.txt', '493.txt', '498.txt', '504.txt', '511.txt', '516.txt', '580.txt', '598.txt', '616.txt', '621.txt', '653.txt', '656.txt', '659.txt', '675.txt', '680.txt', '689.txt', '703.txt', '709.txt', '777.txt', '782.txt', '813.txt', '817.txt', '821.txt', '837.txt', '843.txt', '860.txt', '880.txt', '906.txt', '915.txt', '922.txt', '927.txt', '929.txt', '938.txt', '949.txt', '984.txt', '1001.txt', '1002.txt', '1016.txt', '1035.txt', '1044.txt', '1076.txt', '1088.txt', '1094.txt', '1100.txt', '1121.txt', '1133.txt', '1171.txt', '1184.txt', '1247.txt', '1248.txt', '1261.txt', '1284.txt', '1288.txt', '1290.txt', '1291.txt', '1302.txt', '1324.txt', '1348.txt', '1353.txt', '1376.txt', '1383.txt', '1386.txt', '1405.txt', '1407.txt', '1411.txt', '1430.txt', '1437.txt', '1441.txt', '1449.txt', '1453.txt', '1459.txt', '1475.txt', '1478.txt', '1479.txt', '1496.txt', '1501.txt', '1513.txt', '1527.txt', '1543.txt', '1554.txt', '1570.txt', '1576.txt', '1577.txt', '1600.txt', '1648.txt', '1664.txt', '1676.txt', '1702.txt', '1709.txt', '1722.txt', '1725.txt', '1727.txt', '1743.txt', '1767.txt', '1808.txt', '1818.txt', '1822.txt', '1845.txt', '1849.txt', '1859.txt', '1880.txt', '1886.txt', '1888.txt', '1890.txt', '1898.txt', '1904.txt', '1936.txt', '1945.txt', '2034.txt', '2035.txt', '2037.txt', '2044.txt', '2073.txt', '2084.txt', '2105.txt', '2133.txt', '2138.txt', '2142.txt', '2180.txt', '2187.txt', '2197.txt', '2209.txt', '2213.txt', '2226.txt', '2235.txt', '2270.txt', '2296.txt', '2341.txt', '2350.txt', '2351.txt', '2352.txt', '2379.txt', '2400.txt', '2409.txt', '2411.txt', '2421.txt', '2439.txt', '2455.txt', '2479.txt', '2506.txt', '2521.txt', '2543.txt', 
'2565.txt', '2582.txt', '2588.txt', '2589.txt', '2602.txt', '2618.txt', '2647.txt', '2653.txt', '2695.txt', '2702.txt', '2705.txt', '2711.txt', '2737.txt', '2763.txt', '2793.txt', '2801.txt', '2802.txt', '2857.txt', '2860.txt', '2882.txt', '2883.txt', '2931.txt', '2934.txt', '2943.txt', '2944.txt', '2954.txt', '2957.txt', '2999.txt', '3000.txt', '3001.txt', '3017.txt', '3035.txt', '3062.txt', '3068.txt', '3072.txt', '3074.txt', '3096.txt', '3107.txt', '3110.txt', '3111.txt', '3116.txt', '3124.txt', '3137.txt', '3140.txt', '3172.txt', '3193.txt', '3213.txt', '3219.txt', '3242.txt', '3273.txt', '3331.txt', '3348.txt', '3349.txt', '3353.txt', '3372.txt', '3379.txt', '3423.txt', '3428.txt', '3451.txt', '3457.txt', '3482.txt', '3492.txt', '3493.txt', '3514.txt', '3516.txt', '3533.txt', '3550.txt', '3589.txt', '3616.txt', '3702.txt', '3714.txt', '3741.txt', '3748.txt', '3797.txt', '3798.txt', '3822.txt', '3949.txt', '3991.txt'] | 2,906 | 2,906 | 0.574673 | 496 | 2,906 | 3.364919 | 0.504032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.345489 | 0.084652 | 2,906 | 1 | 2,906 | 2,906 | 0.281955 | 0 | 0 | 0 | 0 | 0 | 0.656003 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c0fb0380be418528d7923042ecda85140baab310 | 3,616 | py | Python | sqlite/sql_tasks/task_2/insurance_app.py | Akawi85/Stutern_Projects | bb39688f06b6166022b370b747aba24b4ea15d57 | [
"MIT"
] | null | null | null | sqlite/sql_tasks/task_2/insurance_app.py | Akawi85/Stutern_Projects | bb39688f06b6166022b370b747aba24b4ea15d57 | [
"MIT"
] | null | null | null | sqlite/sql_tasks/task_2/insurance_app.py | Akawi85/Stutern_Projects | bb39688f06b6166022b370b747aba24b4ea15d57 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# import dependencies
import sqlite3
# Query the DB and return all records
def show_all():
# Connect to database
conn = sqlite3.connect('insurance.db')
# Create a cursor
cursor = conn.cursor()
# select all rows from the test_data table
cursor.execute("SELECT * FROM test_data")
    # fetch all rows returned by the query
items = cursor.fetchall()
# loop through the items and display all items
for item in items:
print(item)
# save the execution
conn.commit()
# close the connection
conn.close()
# Create a function that returns n rows for just females for the age, bmi, gender and kids column
def get_females(n):
# connect to a database
conn = sqlite3.connect("insurance.db")
# create a cursor
cursor = conn.cursor()
    # select only female rows (sex_male = '0' encodes female)
    cursor.execute("SELECT * FROM test_data WHERE sex_male = '0'")
    # fetch the first n matching rows
    items = cursor.fetchmany(n)
# format the columns output
print("Age"+ "\t BMI"+ " \t Gender"+ "\t\tkids"+ "\n........................................." )
    # format the output of the four columns for the first n rows
for item in items:
age, bmi, sex_male, _, _, _, _, kids = item
print(f"{age}\t{bmi}\t {sex_male}\t\t{kids}")
# commit changes
conn.commit()
# close connection
conn.close()
# Create a function to delete one item given the row id
def delete_one(num):
# Connect to database
conn = sqlite3.connect('insurance.db')
# Create a cursor
cursor = conn.cursor()
    # delete the item with the given rowid, using a parameterized query
    cursor.execute("DELETE FROM test_data WHERE rowid = ?", (num,))
# commit changes
conn.commit()
# close connection
conn.close()
# Create a function that returns n rows for just males for the age, bmi, gender and kids column
def get_males(n):
# connect to a database
conn = sqlite3.connect("insurance.db")
# create a cursor
cursor = conn.cursor()
    # select only male rows (sex_male = '1' encodes male)
    cursor.execute("SELECT * FROM test_data WHERE sex_male = '1'")
    # fetch the first n matching rows
    items = cursor.fetchmany(n)
# format the columns output
print("Age"+ "\t BMI"+ " \t Gender"+ "\t\tkids"+ "\n........................................." )
    # format the output of the four columns for the first n rows
for item in items:
age, bmi, sex_male, _, _, _, _, kids = item
print(f"{age}\t{bmi}\t {sex_male}\t\t{kids}")
# commit changes
conn.commit()
# close connection
conn.close()
# create a function to display the age, bmi, gender and kids column for n rows
def four_columns_n_rows(n):
    # connect to the insurance database
conn = sqlite3.connect("insurance.db")
# create a cursor
cursor = conn.cursor()
# query table values
cursor.execute("SELECT * FROM test_data")
    # fetch the first n rows from the table
items = cursor.fetchmany(n)
# format the columns output
print("Age"+ "\t BMI"+ " \t Gender"+ "\t\tkids"+ "\n........................................." )
    # format the output of the four columns (same order as the header) for the first n rows
    for item in items:
        age, bmi, sex_male, _, _, _, _, kids = item
        print(f"{age}\t{bmi}\t {sex_male}\t\t{kids}")
# commit changes
conn.commit()
# close connection
conn.close()
| 26.985075 | 101 | 0.584071 | 474 | 3,616 | 4.381857 | 0.191983 | 0.033702 | 0.028888 | 0.02311 | 0.771786 | 0.771786 | 0.771786 | 0.725566 | 0.702937 | 0.702937 | 0 | 0.003486 | 0.285951 | 3,616 | 133 | 102 | 27.18797 | 0.80093 | 0.376106 | 0 | 0.755102 | 0 | 0 | 0.2625 | 0.077404 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102041 | false | 0 | 0.020408 | 0 | 0.122449 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
23d790f11191b54bc2a6c46f0c8f64bb40212666 | 3,870 | py | Python | aws_network_tap/tests/models/test_spile.py | vectranetworks/AWS-Session-Mirroring-Tool | d3137e4b649d4d54b3023fc7c860b11c7dd535a4 | [
"Apache-2.0"
] | 3 | 2020-05-14T21:13:03.000Z | 2021-06-14T20:45:12.000Z | aws_network_tap/tests/models/test_spile.py | vectranetworks/AWS-Traffic-Mirroring-Session-Manager | d3137e4b649d4d54b3023fc7c860b11c7dd535a4 | [
"Apache-2.0"
] | 6 | 2020-01-14T23:20:13.000Z | 2021-04-30T20:58:32.000Z | aws_network_tap/tests/models/test_spile.py | vectranetworks/AWS-Session-Mirroring-Tool | d3137e4b649d4d54b3023fc7c860b11c7dd535a4 | [
"Apache-2.0"
] | 1 | 2020-02-19T16:31:27.000Z | 2020-02-19T16:31:27.000Z | from unittest import TestCase
from unittest.mock import MagicMock
from uuid import uuid4
import boto3
from aws_network_tap.models.spile import Spile, ENI_Tag, Ec2ApiClient, MirrorSession
class TestSpile(TestCase):
def mirror_session_factory(self, tags):
return MirrorSession('catfood session', 'tmt-{}'.format(uuid4()), 'tmf-{}'.format(uuid4()), 'eni-{}'.format(uuid4()), tags)
def test_should_manage_ours(self):
eni_tag = ENI_Tag('id-12345', 'eth0', None, Ec2ApiClient.STATE_RUNNING)
spile = Spile(boto3.client('s3'), eni_tag)
mirror_session = self.mirror_session_factory({Spile.CREATOR_KEY: Spile.CREATOR_VALUE})
self.assertTrue(spile.should_manage(mirror_session))
def test_should_not_manage_others(self):
eni_tag = ENI_Tag('id-12345', 'eth0', None, Ec2ApiClient.STATE_RUNNING)
spile = Spile(boto3.client('s3'), eni_tag)
mirror_session = self.mirror_session_factory({Spile.CREATOR_KEY: 'Bob Barker'})
self.assertFalse(spile.should_manage(mirror_session))
def test_should_not_manage_unknown(self):
eni_tag = ENI_Tag('id-12345', 'eth0', None, Ec2ApiClient.STATE_RUNNING)
spile = Spile(boto3.client('s3'), eni_tag)
mirror_session = self.mirror_session_factory({})
self.assertFalse(spile.should_manage(mirror_session))
def test_manage_do_not_tap(self):
eni_tag = ENI_Tag('id-12345', 'eth0', None, Ec2ApiClient.STATE_RUNNING)
spile = Spile(boto3.client('s3'), eni_tag)
spile._find_tap = MagicMock(return_value=None)
result = spile.manage('arn:/foo/bar', do_tap=False)
self.assertIsNone(result)
def test_manage_do_tap(self):
eni_tag = ENI_Tag('id-12345', 'eth0', None, Ec2ApiClient.STATE_RUNNING)
spile = Spile(boto3.client('s3'), eni_tag)
spile._find_tap = MagicMock(return_value=None)
spile._tap = MagicMock()
result = spile.manage('arn:/foo/bar', do_tap=True)
self.assertIsNotNone(result)
spile._tap.assert_called_once()
def test_manage_not_running_prevents_tap(self):
eni_tag = ENI_Tag('id-12345', 'eth0', None, Ec2ApiClient.STATE_STOPPED)
spile = Spile(boto3.client('s3'), eni_tag)
spile._find_tap = MagicMock(return_value=None)
result = spile.manage('arn:/foo/bar', do_tap=True)
self.assertIsNone(result)
def test_manage_no_change_already_tapped(self):
eni_tag = ENI_Tag('id-12345', 'eth0', None, Ec2ApiClient.STATE_RUNNING)
spile = Spile(boto3.client('s3'), eni_tag)
existing_session = self.mirror_session_factory({Spile.CREATOR_KEY: Spile.CREATOR_VALUE})
spile._find_tap = MagicMock(return_value=existing_session)
result = spile.manage(existing_session.target_id, do_tap=True)
self.assertEqual(existing_session, result)
def test_manage_remove(self):
eni_tag = ENI_Tag('id-12345', 'eth0', None, Ec2ApiClient.STATE_RUNNING)
spile = Spile(boto3.client('s3'), eni_tag)
existing_session = self.mirror_session_factory({Spile.CREATOR_KEY: Spile.CREATOR_VALUE})
spile._find_tap = MagicMock(return_value=existing_session)
spile._untap = MagicMock()
result = spile.manage(existing_session.target_id, do_tap=False)
self.assertIsNone(result)
spile._untap.assert_called_once()
def test_manage_change_target(self):
eni_tag = ENI_Tag('id-12345', 'eth0', None, Ec2ApiClient.STATE_RUNNING)
spile = Spile(boto3.client('s3'), eni_tag)
existing_session = self.mirror_session_factory({Spile.CREATOR_KEY: Spile.CREATOR_VALUE})
spile._find_tap = MagicMock(return_value=existing_session)
spile._tap = MagicMock()
result = spile.manage('tmt-12345', do_tap=True)
self.assertIsNotNone(result)
spile._tap.assert_called_once() | 48.375 | 131 | 0.699742 | 502 | 3,870 | 5.093626 | 0.153386 | 0.065702 | 0.035198 | 0.045757 | 0.800548 | 0.800548 | 0.727806 | 0.727806 | 0.727806 | 0.666406 | 0 | 0.028986 | 0.179845 | 3,870 | 80 | 132 | 48.375 | 0.776623 | 0 | 0 | 0.573529 | 0 | 0 | 0.055283 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 1 | 0.147059 | false | 0 | 0.073529 | 0.014706 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f1bdc35b19aaae8d38736bc3993b318a0a52b9dd | 19,627 | py | Python | CellProfiler/tests/modules/test_tile.py | aidotse/Team-rahma.ai | 66857731e1ca2472e0783e37ba472b55a7ac9cd4 | [
"MIT"
] | null | null | null | CellProfiler/tests/modules/test_tile.py | aidotse/Team-rahma.ai | 66857731e1ca2472e0783e37ba472b55a7ac9cd4 | [
"MIT"
] | null | null | null | CellProfiler/tests/modules/test_tile.py | aidotse/Team-rahma.ai | 66857731e1ca2472e0783e37ba472b55a7ac9cd4 | [
"MIT"
] | null | null | null | import numpy
import six.moves
import cellprofiler_core.image
import cellprofiler_core.measurement
import cellprofiler.modules.tile
import cellprofiler_core.object
import cellprofiler_core.pipeline
import cellprofiler_core.workspace
import tests.modules
INPUT_IMAGE_NAME = "inputimage"
OUTPUT_IMAGE_NAME = "outputimage"
def input_image_name(index):
return INPUT_IMAGE_NAME + str(index + 1)
def test_load_v1():
file = tests.modules.get_test_resources_directory("tile/v1.pipeline")
with open(file, "r") as fd:
data = fd.read()
pipeline = cellprofiler_core.pipeline.Pipeline()
def callback(caller, event):
assert not isinstance(event, cellprofiler_core.pipeline.event.LoadException)
pipeline.add_listener(callback)
pipeline.load(six.moves.StringIO(data))
assert len(pipeline.modules()) == 1
module = pipeline.modules()[0]
assert isinstance(module, cellprofiler.modules.tile.Tile)
assert module.input_image == "ResizedColorImage"
assert module.output_image == "TiledImage"
assert module.tile_method == cellprofiler.modules.tile.T_ACROSS_CYCLES
assert module.rows == 2
assert module.columns == 12
assert module.wants_automatic_rows
assert not module.wants_automatic_columns
assert module.place_first == cellprofiler.modules.tile.P_TOP_LEFT
assert module.tile_style == cellprofiler.modules.tile.S_ROW
assert not module.meander
assert len(module.additional_images) == 3
for g, expected in zip(
module.additional_images, ("Cytoplasm", "ColorImage", "DNA")
):
assert g.input_image_name == expected
def make_tile_workspace(images):
module = cellprofiler.modules.tile.Tile()
module.set_module_num(1)
module.tile_method.value = cellprofiler.modules.tile.T_ACROSS_CYCLES
module.input_image.value = INPUT_IMAGE_NAME
module.output_image.value = OUTPUT_IMAGE_NAME
pipeline = cellprofiler_core.pipeline.Pipeline()
def callback(caller, event):
assert not isinstance(event, cellprofiler_core.pipeline.event.RunException)
pipeline.add_listener(callback)
pipeline.add_module(module)
image_set_list = cellprofiler_core.image.ImageSetList()
for i, image in enumerate(images):
image_set = image_set_list.get_image_set(i)
image_set.add(INPUT_IMAGE_NAME, cellprofiler_core.image.Image(image))
workspace = cellprofiler_core.workspace.Workspace(
pipeline,
module,
image_set_list.get_image_set(0),
cellprofiler_core.object.ObjectSet(),
cellprofiler_core.measurement.Measurements(),
image_set_list,
)
return workspace, module
def test_manual_rows_and_columns():
numpy.random.seed(0)
images = [
numpy.random.uniform(size=(20, 10)).astype(numpy.float32) for i in range(96)
]
workspace, module = make_tile_workspace(images)
assert isinstance(module, cellprofiler.modules.tile.Tile)
assert isinstance(workspace, cellprofiler_core.workspace.Workspace)
module.wants_automatic_columns.value = False
module.wants_automatic_rows.value = False
module.rows.value = 6
module.columns.value = 16
module.tile_style.value = cellprofiler.modules.tile.S_ROW
module.prepare_group(workspace, (), numpy.arange(1, 97))
for i in range(96):
workspace.set_image_set_for_testing_only(i)
module.run(workspace)
image = workspace.image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = image.pixel_data
assert pixel_data.shape[0] == 6 * 20
assert pixel_data.shape[1] == 16 * 10
for i, image in enumerate(images):
ii = int(i / 16)
jj = i % 16
iii = ii * 20
jjj = jj * 10
assert numpy.all(pixel_data[iii : (iii + 20), jjj : (jjj + 10)] == image)
def test_automatic_rows():
numpy.random.seed(1)
images = [
numpy.random.uniform(size=(20, 10)).astype(numpy.float32) for i in range(96)
]
workspace, module = make_tile_workspace(images)
assert isinstance(module, cellprofiler.modules.tile.Tile)
assert isinstance(workspace, cellprofiler_core.workspace.Workspace)
module.wants_automatic_columns.value = False
module.wants_automatic_rows.value = True
module.rows.value = 8
module.columns.value = 16
module.tile_style.value = cellprofiler.modules.tile.S_ROW
module.prepare_group(workspace, (), numpy.arange(1, 97))
for i in range(96):
workspace.set_image_set_for_testing_only(i)
module.run(workspace)
image = workspace.image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = image.pixel_data
assert pixel_data.shape[0] == 6 * 20
assert pixel_data.shape[1] == 16 * 10
for i, image in enumerate(images):
ii = int(i / 16)
jj = i % 16
iii = ii * 20
jjj = jj * 10
assert numpy.all(pixel_data[iii : (iii + 20), jjj : (jjj + 10)] == image)
def test_automatic_columns():
numpy.random.seed(2)
images = [
numpy.random.uniform(size=(20, 10)).astype(numpy.float32) for i in range(96)
]
workspace, module = make_tile_workspace(images)
assert isinstance(module, cellprofiler.modules.tile.Tile)
assert isinstance(workspace, cellprofiler_core.workspace.Workspace)
module.wants_automatic_columns.value = True
module.wants_automatic_rows.value = False
module.rows.value = 6
module.columns.value = 365
module.tile_style.value = cellprofiler.modules.tile.S_ROW
module.prepare_group(workspace, (), numpy.arange(1, 97))
for i in range(96):
workspace.set_image_set_for_testing_only(i)
module.run(workspace)
image = workspace.image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = image.pixel_data
assert pixel_data.shape[0] == 6 * 20
assert pixel_data.shape[1] == 16 * 10
for i, image in enumerate(images):
ii = int(i / 16)
jj = i % 16
iii = ii * 20
jjj = jj * 10
assert numpy.all(pixel_data[iii : (iii + 20), jjj : (jjj + 10)] == image)
def test_automatic_rows_and_columns():
numpy.random.seed(3)
images = [
numpy.random.uniform(size=(20, 10)).astype(numpy.float32) for i in range(96)
]
workspace, module = make_tile_workspace(images)
assert isinstance(module, cellprofiler.modules.tile.Tile)
assert isinstance(workspace, cellprofiler_core.workspace.Workspace)
module.wants_automatic_columns.value = True
module.wants_automatic_rows.value = True
module.rows.value = 365
module.columns.value = 24
module.tile_style.value = cellprofiler.modules.tile.S_ROW
module.prepare_group(workspace, (), numpy.arange(1, 97))
for i in range(96):
workspace.set_image_set_for_testing_only(i)
module.run(workspace)
image = workspace.image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = image.pixel_data
assert pixel_data.shape[0] == 9 * 20
assert pixel_data.shape[1] == 11 * 10
for i, image in enumerate(images):
ii = int(i / 11)
jj = i % 11
iii = ii * 20
jjj = jj * 10
assert numpy.all(pixel_data[iii : (iii + 20), jjj : (jjj + 10)] == image)
def test_color():
numpy.random.seed(4)
images = [
numpy.random.uniform(size=(20, 10, 3)).astype(numpy.float32) for i in range(96)
]
workspace, module = make_tile_workspace(images)
assert isinstance(module, cellprofiler.modules.tile.Tile)
assert isinstance(workspace, cellprofiler_core.workspace.Workspace)
module.wants_automatic_columns.value = False
module.wants_automatic_rows.value = False
module.rows.value = 6
module.columns.value = 16
module.tile_style.value = cellprofiler.modules.tile.S_ROW
module.prepare_group(workspace, (), numpy.arange(1, 97))
for i in range(96):
workspace.set_image_set_for_testing_only(i)
module.run(workspace)
image = workspace.image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = image.pixel_data
assert pixel_data.shape[0] == 6 * 20
assert pixel_data.shape[1] == 16 * 10
for i, image in enumerate(images):
ii = int(i / 16)
jj = i % 16
iii = ii * 20
jjj = jj * 10
assert numpy.all(pixel_data[iii : (iii + 20), jjj : (jjj + 10), :] == image)
def test_columns_first():
numpy.random.seed(5)
images = [
numpy.random.uniform(size=(20, 10)).astype(numpy.float32) for i in range(96)
]
workspace, module = make_tile_workspace(images)
assert isinstance(module, cellprofiler.modules.tile.Tile)
assert isinstance(workspace, cellprofiler_core.workspace.Workspace)
module.wants_automatic_columns.value = False
module.wants_automatic_rows.value = False
module.rows.value = 6
module.columns.value = 16
module.tile_style.value = cellprofiler.modules.tile.S_COL
module.prepare_group(workspace, (), numpy.arange(1, 97))
for i in range(96):
workspace.set_image_set_for_testing_only(i)
module.run(workspace)
module.post_group(workspace, None)
image = workspace.image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = image.pixel_data
assert pixel_data.shape[0] == 6 * 20
assert pixel_data.shape[1] == 16 * 10
for i, image in enumerate(images):
ii = i % 6
jj = int(i / 6)
iii = ii * 20
jjj = jj * 10
assert numpy.all(pixel_data[iii : (iii + 20), jjj : (jjj + 10)] == image)
def test_top_right():
numpy.random.seed(0)
images = [
numpy.random.uniform(size=(20, 10)).astype(numpy.float32) for i in range(96)
]
workspace, module = make_tile_workspace(images)
assert isinstance(module, cellprofiler.modules.tile.Tile)
assert isinstance(workspace, cellprofiler_core.workspace.Workspace)
module.wants_automatic_columns.value = False
module.wants_automatic_rows.value = False
module.rows.value = 6
module.columns.value = 16
module.tile_style.value = cellprofiler.modules.tile.S_ROW
module.place_first.value = cellprofiler.modules.tile.P_TOP_RIGHT
module.prepare_group(workspace, (), numpy.arange(1, 97))
for i in range(96):
workspace.set_image_set_for_testing_only(i)
module.run(workspace)
module.post_group(workspace, None)
image = workspace.image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = image.pixel_data
assert pixel_data.shape[0] == 6 * 20
assert pixel_data.shape[1] == 16 * 10
for i, image in enumerate(images):
ii = int(i / 16)
jj = 15 - (i % 16)
iii = ii * 20
jjj = jj * 10
assert numpy.all(pixel_data[iii : (iii + 20), jjj : (jjj + 10)] == image)
def test_bottom_left():
numpy.random.seed(8)
images = [
numpy.random.uniform(size=(20, 10)).astype(numpy.float32) for i in range(96)
]
workspace, module = make_tile_workspace(images)
assert isinstance(module, cellprofiler.modules.tile.Tile)
assert isinstance(workspace, cellprofiler_core.workspace.Workspace)
module.wants_automatic_columns.value = False
module.wants_automatic_rows.value = False
module.rows.value = 6
module.columns.value = 16
module.tile_style.value = cellprofiler.modules.tile.S_ROW
module.place_first.value = cellprofiler.modules.tile.P_BOTTOM_LEFT
module.prepare_group(workspace, (), numpy.arange(1, 97))
for i in range(96):
workspace.set_image_set_for_testing_only(i)
module.run(workspace)
module.post_group(workspace, None)
image = workspace.image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = image.pixel_data
assert pixel_data.shape[0] == 6 * 20
assert pixel_data.shape[1] == 16 * 10
for i, image in enumerate(images):
ii = 5 - int(i / 16)
jj = i % 16
iii = ii * 20
jjj = jj * 10
assert numpy.all(pixel_data[iii : (iii + 20), jjj : (jjj + 10)] == image)
def test_bottom_right():
numpy.random.seed(9)
images = [
numpy.random.uniform(size=(20, 10)).astype(numpy.float32) for i in range(96)
]
workspace, module = make_tile_workspace(images)
assert isinstance(module, cellprofiler.modules.tile.Tile)
assert isinstance(workspace, cellprofiler_core.workspace.Workspace)
module.wants_automatic_columns.value = False
module.wants_automatic_rows.value = False
module.rows.value = 6
module.columns.value = 16
module.tile_style.value = cellprofiler.modules.tile.S_ROW
module.place_first.value = cellprofiler.modules.tile.P_BOTTOM_RIGHT
module.prepare_group(workspace, (), numpy.arange(1, 97))
for i in range(96):
workspace.set_image_set_for_testing_only(i)
module.run(workspace)
module.post_group(workspace, None)
image = workspace.image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = image.pixel_data
assert pixel_data.shape[0] == 6 * 20
assert pixel_data.shape[1] == 16 * 10
for i, image in enumerate(images):
ii = 5 - int(i / 16)
jj = 15 - (i % 16)
iii = ii * 20
jjj = jj * 10
assert numpy.all(pixel_data[iii : (iii + 20), jjj : (jjj + 10)] == image)
def test_different_sizes():
numpy.random.seed(10)
images = [
numpy.random.uniform(size=(20, 10)).astype(numpy.float32),
numpy.random.uniform(size=(10, 20)).astype(numpy.float32),
numpy.random.uniform(size=(40, 5)).astype(numpy.float32),
numpy.random.uniform(size=(40, 20)).astype(numpy.float32),
]
workspace, module = make_tile_workspace(images)
assert isinstance(module, cellprofiler.modules.tile.Tile)
assert isinstance(workspace, cellprofiler_core.workspace.Workspace)
module.wants_automatic_columns.value = False
module.wants_automatic_rows.value = False
module.rows.value = 1
module.columns.value = 4
module.tile_style.value = cellprofiler.modules.tile.S_ROW
module.prepare_group(workspace, (), numpy.arange(1, 4))
for i in range(4):
workspace.set_image_set_for_testing_only(i)
module.run(workspace)
module.post_group(workspace, None)
pixel_data = workspace.image_set.get_image(OUTPUT_IMAGE_NAME).pixel_data
assert pixel_data.shape[0] == 20
assert pixel_data.shape[1] == 40
assert numpy.all(pixel_data[:, :10] == images[0])
assert numpy.all(pixel_data[:10, 10:20] == images[1][:, :10])
assert numpy.all(pixel_data[10:, 10:20] == 0)
assert numpy.all(pixel_data[:, 20:25] == images[2][:20, :])
assert numpy.all(pixel_data[:, 25:30] == 0)
assert numpy.all(pixel_data[:, 30:] == images[3][:20, :10])
def test_filtered():
numpy.random.seed(9)
images = [
numpy.random.uniform(size=(20, 10)).astype(numpy.float32) for i in range(96)
]
workspace, module = make_tile_workspace(images)
assert isinstance(module, cellprofiler.modules.tile.Tile)
assert isinstance(workspace, cellprofiler_core.workspace.Workspace)
module.wants_automatic_columns.value = False
module.wants_automatic_rows.value = False
module.rows.value = 6
module.columns.value = 16
module.tile_style.value = cellprofiler.modules.tile.S_ROW
module.place_first.value = cellprofiler.modules.tile.P_BOTTOM_RIGHT
module.prepare_group(workspace, (), numpy.arange(1, 97))
for i in range(95):
workspace.set_image_set_for_testing_only(i)
module.run(workspace)
workspace.set_image_set_for_testing_only(95)
module.post_group(workspace, None)
image = workspace.image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = image.pixel_data
assert pixel_data.shape[0] == 6 * 20
assert pixel_data.shape[1] == 16 * 10
for i, image in enumerate(images[:-1]):
ii = 5 - int(i / 16)
jj = 15 - (i % 16)
iii = ii * 20
jjj = jj * 10
assert numpy.all(pixel_data[iii : (iii + 20), jjj : (jjj + 10)] == image)
def make_place_workspace(images):
image_set_list = cellprofiler_core.image.ImageSetList()
image_set = image_set_list.get_image_set(0)
module = cellprofiler.modules.tile.Tile()
module.set_module_num(1)
module.tile_method.value = cellprofiler.modules.tile.T_WITHIN_CYCLES
module.output_image.value = OUTPUT_IMAGE_NAME
module.wants_automatic_rows.value = False
module.wants_automatic_columns.value = True
module.rows.value = 1
for i, image in enumerate(images):
image_name = input_image_name(i)
if i == 0:
module.input_image.value = image_name
else:
if len(module.additional_images) <= i:
module.add_image()
module.additional_images[i - 1].input_image_name.value = image_name
image_set.add(image_name, cellprofiler_core.image.Image(image))
pipeline = cellprofiler_core.pipeline.Pipeline()
def callback(caller, event):
assert not isinstance(event, cellprofiler_core.pipeline.event.RunException)
pipeline.add_listener(callback)
pipeline.add_module(module)
workspace = cellprofiler_core.workspace.Workspace(
pipeline,
module,
image_set,
cellprofiler_core.object.ObjectSet(),
cellprofiler_core.measurement.Measurements(),
image_set_list,
)
return workspace, module
def test_some_images():
numpy.random.seed(31)
for i in range(1, 5):
images = [
numpy.random.uniform(size=(20, 10)).astype(numpy.float32) for ii in range(i)
]
workspace, module = make_place_workspace(images)
assert isinstance(module, cellprofiler.modules.tile.Tile)
assert isinstance(workspace, cellprofiler_core.workspace.Workspace)
module.run(workspace)
image = workspace.image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = image.pixel_data
for j, p in enumerate(images):
jj = 10 * j
assert numpy.all(pixel_data[:, jj : (jj + 10)] == p)
def test_mix_color_bw():
numpy.random.seed(32)
for color in range(3):
images = [
numpy.random.uniform(size=(20, 10, 3) if i == color else (20, 10)).astype(
numpy.float32
)
for i in range(3)
]
workspace, module = make_place_workspace(images)
module.run(workspace)
image = workspace.image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = image.pixel_data
for j, p in enumerate(images):
jj = 10 * j
if j == color:
assert numpy.all(pixel_data[:, jj : (jj + 10), :] == p)
else:
for k in range(3):
assert numpy.all(pixel_data[:, jj : (jj + 10), k] == p)
def test_place_different_sizes():
numpy.random.seed(33)
images = [
numpy.random.uniform(size=(20, 10)).astype(numpy.float32),
numpy.random.uniform(size=(10, 20)).astype(numpy.float32),
numpy.random.uniform(size=(40, 5)).astype(numpy.float32),
numpy.random.uniform(size=(40, 20)).astype(numpy.float32),
]
workspace, module = make_place_workspace(images)
module.run(workspace)
image = workspace.image_set.get_image(OUTPUT_IMAGE_NAME)
pixel_data = image.pixel_data
assert pixel_data.shape[0] == 40
assert pixel_data.shape[1] == 80
mask = numpy.ones(pixel_data.shape, bool)
assert numpy.all(pixel_data[:20, :10] == images[0])
mask[:20, :10] = False
assert numpy.all(pixel_data[:10, 20:40] == images[1])
mask[:10, 20:40] = False
assert numpy.all(pixel_data[:, 40:45] == images[2])
mask[:, 40:45] = False
assert numpy.all(pixel_data[:, 60:] == images[3])
mask[:, 60:] = False
assert numpy.all(pixel_data[mask] == 0)
| 36.481413 | 88 | 0.6756 | 2,657 | 19,627 | 4.809183 | 0.061347 | 0.054234 | 0.064799 | 0.037565 | 0.867115 | 0.855768 | 0.826499 | 0.786273 | 0.764987 | 0.744561 | 0 | 0.037955 | 0.210679 | 19,627 | 537 | 89 | 36.549348 | 0.786858 | 0 | 0 | 0.688841 | 0 | 0 | 0.004433 | 0 | 0 | 0 | 0 | 0 | 0.190987 | 1 | 0.045064 | false | 0 | 0.019313 | 0.002146 | 0.070815 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f1e5dabd3ec17bfe3f42219c4b86e78449573040 | 22 | py | Python | mqtt/__init__.py | tam-wh/py-simple-nvr | 0444e352e2d92ed7a5722c79b005a47367742032 | [
"MIT"
] | null | null | null | mqtt/__init__.py | tam-wh/py-simple-nvr | 0444e352e2d92ed7a5722c79b005a47367742032 | [
"MIT"
] | null | null | null | mqtt/__init__.py | tam-wh/py-simple-nvr | 0444e352e2d92ed7a5722c79b005a47367742032 | [
"MIT"
] | null | null | null | from .mqtt import Mqtt | 22 | 22 | 0.818182 | 4 | 22 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7b44edbc074087ad6bdf95879906f7fe5f40a6ab | 3,952 | py | Python | usfm/footnoted_verses.py | unfoldingWord-dev/tools | 7251d64b4750f1615125dab3c09d6d00a9c284b4 | [
"MIT"
] | 6 | 2015-07-27T21:50:39.000Z | 2020-06-25T14:32:35.000Z | usfm/footnoted_verses.py | unfoldingWord-dev/tools | 7251d64b4750f1615125dab3c09d6d00a9c284b4 | [
"MIT"
] | 89 | 2015-06-24T09:35:40.000Z | 2022-02-13T14:40:31.000Z | usfm/footnoted_verses.py | unfoldingWord-dev/tools | 7251d64b4750f1615125dab3c09d6d00a9c284b4 | [
"MIT"
] | 12 | 2015-07-13T17:31:04.000Z | 2021-08-06T06:50:21.000Z | # Verses with marked footnotes in the English ULB
footnotedVerses = ["GEN 1:26", "GEN 4:8", "GEN 31:25", "LEV 20:7", "JOS 5:1", "JOS 9:4", "RUT 2:7", "RUT 3:3", "1SA 1:1", "1SA 1:24", "1SA 1:28", "1SA 6:19", "1SA 8:16", "1SA 10:27", "1SA 14:41", "1SA 20:41", "1SA 22:3", "1SA 27:8", "2SA 8:18", "2SA 17:25", "2SA 21:8", "2SA 23:8", "2SA 23:27", "1KI 2:26", "1KI 4:4", "1KI 5:11", "1KI 7:7", "1KI 9:18", "1KI 10:8", "1KI 12:2", "2KI 12:21", "2KI 15:16", "2KI 24:3", "1CH 1:4", "1CH 1:6", "1CH 2:7", "1CH 2:24", "1CH 3:5", "1CH 4:12", "1CH 4:13", "1CH 4:17", "1CH 6:27", "1CH 6:59", "1CH 6:77", "1CH 9:19", "1CH 11:11", "1CH 15:18", "1CH 25:2", "1CH 25:3", "1CH 25:4", "1CH 25:11", "1CH 25:14", "1CH 25:18", "1CH 26:1", "1CH 26:2", "1CH 26:14", "1CH 26:18", "2CH 1:5", "2CH 2:10", "2CH 3:10", "2CH 4:16", "2CH 9:4", "2CH 9:7", "2CH 16:4", "2CH 17:3", "2CH 20:1", "2CH 20:2", "2CH 20:9", "2CH 20:25", "2CH 22:11", "2CH 23:20", "2CH 26:5", "2CH 31:16", "2CH 32:5", "2CH 32:22", "2CH 32:29", "2CH 33:19", "2CH 34:21", "2CH 36:2", "EZR 3:9", "EZR 4:6", "EZR 8:10", "EZR 10:25", "EZR 10:29", "EZR 10:38", "EZR 10:40", "EZR 10:44", "NEH 7:43", "NEH 7:70", "NEH 8:7", "NEH 11:8", "NEH 12:14", "NEH 12:17", "EST 1:1", "JOB 15:30", "JOB 17:11", "JOB 23:2", "JOB 30:22", "JOB 36:27", "PSA 8:2", "PSA 18:13", "PSA 68:26", "PSA 84:6", "PSA 89:19", "PRO 7:22", "PRO 13:15", "PRO 21:29", "PRO 25:27", "PRO 27:9", "PRO 30:1", "ECC 2:8", "ECC 3:15", "ECC 3:21", "ECC 7:18", "ECC 8:8", "ECC 8:9", "ECC 8:10", "ECC 9:2", "ECC 11:5", "SNG 5:6", "SNG 5:13", "SNG 6:13", "SNG 7:6", "SNG 7:9", "SNG 7:11", "SNG 8:10", "ISA 1:17", "ISA 5:17", "ISA 7:2", "ISA 9:2", "ISA 9:20", "ISA 10:27", "ISA 14:4", "ISA 19:13", "ISA 19:18", "ISA 21:8", "ISA 23:1", "ISA 23:2", "ISA 23:10", "ISA 26:16", "ISA 27:8", "ISA 28:25", "ISA 33:8", "ISA 33:9", "ISA 37:25", "ISA 38:11", "ISA 38:13", "ISA 40:3", "ISA 40:9", "ISA 49:24", "ISA 51:19", "ISA 52:5", "ISA 53:11", "ISA 57:9", "ISA 66:17", "ISA 66:18", "JER 2:11", "JER 13:4", "JER 13:7", "JER 15:14", "JER 25:38", "JER 27:1", "JER 
28:8", "JER 43:12", "JER 49:1", "EZK 6:14", "EZK 7:5", "EZK 16:6", "EZK 16:57", "EZK 18:10", "EZK 19:7", "EZK 19:10", "EZK 22:16", "EZK 22:25", "EZK 26:20", "EZK 32:9", "EZK 37:23", "EZK 40:6", "EZK 40:48", "EZK 40:49", "EZK 41:1", "EZK 41:22", "EZK 42:4", "EZK 42:10", "EZK 42:16", "EZK 42:17", "EZK 42:18", "EZK 42:19", "EZK 43:3", "EZK 45:1", "EZK 46:22", "EZK 47:15", "EZK 47:18", "EZK 47:22", "DAN 9:1", "DAN 10:13", "DAN 11:6", "DAN 11:39", "HOS 5:2", "HOS 7:14", "HOS 11:2", "HOS 14:2", "AMO 8:14", "MIC 5:1", "MIC 5:6", "MIC 6:9", "MIC 6:14", "MIC 6:16", "HAB 1:9", "HAB 2:1", "HAB 2:15", "HAB 3:1", "ZEP 1:5", "ZEP 3:8", "ZEP 3:18", "ZEC 5:6", "ZEC 9:8", "ZEC 10:4", "MAL 2:3", "MAL 2:12", "MAT 5:44", "MAT 6:13", "MAT 15:6", "MAT 17:21", "MAT 18:11", "MAT 20:16", "MAT 23:14", "MRK 6:3", "MRK 7:16", "MRK 7:25", "MRK 9:44", "MRK 9:46", "MRK 11:26", "MRK 13:33", "MRK 14:68", "MRK 15:28", "MRK 15:40", "MRK 16:9", "MRK 16:20", "LUK 2:14", "LUK 2:33", "LUK 2:49", "LUK 8:43", "LUK 10:1", "LUK 11:11", "LUK 17:36", "LUK 18:24", "LUK 23:17", "JHN 5:3", "JHN 5:4", "JHN 6:69", "JHN 7:53", "JHN 8:1", "JHN 8:11", "ACT 8:37", "ACT 10:19", "ACT 10:32", "ACT 10:33", "ACT 12:25", "ACT 13:18", "ACT 15:18", "ACT 15:34", "ACT 20:28", "ACT 24:6", "ACT 24:7", "ACT 28:29", "ROM 8:28", "ROM 11:6", "ROM 16:24", "1CO 2:1", "1CO 9:20", "1CO 10:28", "1CO 13:3", "1CO 16:24", "2CO 8:7", "2CO 13:13", "EPH 1:1", "EPH 1:5", "PHP 4:23", "COL 1:2", "COL 1:7", "COL 1:12", "COL 1:14", "COL 2:13", "COL 3:4", "COL 3:6", "COL 4:8", "1TH 1:1", "1TH 2:7", "1TH 3:2", "2TH 2:3", "2TH 2:13", "1TI 6:5", "2TI 1:11", "2TI 2:14", "HEB 2:7", "HEB 4:2", "HEB 9:11", "HEB 10:34", "HEB 11:11", "HEB 11:37", "HEB 12:20", "JAS 2:20", "1PE 1:22", "2PE 2:4", "2PE 2:13", "2PE 2:15", "2PE 3:10", "1JN 1:4", "1JN 3:1", "1JN 4:3", "1JN 5:8", "REV 1:8", "REV 5:14", "REV 8:7", "REV 8:13", "REV 11:17", "REV 22:14", "REV 22:19", "REV 22:21"] | 1,317.333333 | 3,901 | 0.510628 | 969 | 3,952 | 2.082559 | 0.122807 | 0.014866 | 0.005946 
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.332626 | 0.164727 | 3,952 | 3 | 3,901 | 1,317.333333 | 0.278703 | 0.011893 | 0 | 0 | 0 | 0 | 0.666752 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c870373f520decc0519ba03e286cf9567a26c197 | 96 | py | Python | venv/lib/python3.8/site-packages/rope/base/utils/datastructures.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/rope/base/utils/datastructures.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/rope/base/utils/datastructures.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/6c/fe/7d/ff16db44afbd02ab6621cc902f0737a350399211db75b62714cce6d169 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40625 | 0 | 96 | 1 | 96 | 96 | 0.489583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c886fe1aca369cd7aad696e7df81778fb312df73 | 41 | py | Python | buildwebpagelib/__init__.py | johenglisch/buildwebpage | ed24e13b22d9ef076413c6cf3bc032c849fef922 | [
"Unlicense"
] | null | null | null | buildwebpagelib/__init__.py | johenglisch/buildwebpage | ed24e13b22d9ef076413c6cf3bc032c849fef922 | [
"Unlicense"
] | null | null | null | buildwebpagelib/__init__.py | johenglisch/buildwebpage | ed24e13b22d9ef076413c6cf3bc032c849fef922 | [
"Unlicense"
] | null | null | null | from .io import *
from .webpage import *
| 13.666667 | 22 | 0.707317 | 6 | 41 | 4.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 41 | 2 | 23 | 20.5 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c8cb54b41e1306767ee9e09305b600926bb0a082 | 96 | py | Python | venv/lib/python3.8/site-packages/poetry/core/_vendor/lark/parsers/lalr_analysis.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/poetry/core/_vendor/lark/parsers/lalr_analysis.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/poetry/core/_vendor/lark/parsers/lalr_analysis.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/4a/14/3a/89ce1457812cf8a9de240f7cab609c401278c6fc9c9599c3c0de4e1d6f | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40625 | 0 | 96 | 1 | 96 | 96 | 0.489583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c8e0221f6c53a8d2ec3fb6aca7abdbd252072fa4 | 348 | py | Python | tests/tensortrade/orders/test_broker.py | cihilt/tensortrade | 47b8f2f043d3cc430838aac02a915ab42dcc7b64 | [
"Apache-2.0"
] | 7 | 2020-09-28T23:36:40.000Z | 2022-02-22T02:00:32.000Z | tests/tensortrade/orders/test_broker.py | cihilt/tensortrade | 47b8f2f043d3cc430838aac02a915ab42dcc7b64 | [
"Apache-2.0"
] | 4 | 2020-11-13T18:48:52.000Z | 2022-02-10T01:29:47.000Z | tests/tensortrade/orders/test_broker.py | cihilt/tensortrade | 47b8f2f043d3cc430838aac02a915ab42dcc7b64 | [
"Apache-2.0"
] | 3 | 2020-11-23T17:31:59.000Z | 2021-04-08T10:55:03.000Z |
import pytest
def test_init():
pytest.fail("Failed.")
def test_trades():
pytest.fail("Failed.")
def test_submit():
pytest.fail("Failed.")
def test_cancel():
pytest.fail("Failed.")
def test_update():
pytest.fail("Failed.")
def test_on_fill():
pytest.fail("Failed.")
def test_reset():
pytest.fail("Failed.") | 11.6 | 26 | 0.632184 | 45 | 348 | 4.711111 | 0.311111 | 0.231132 | 0.528302 | 0.537736 | 0.650943 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186782 | 348 | 30 | 27 | 11.6 | 0.749117 | 0 | 0 | 0.466667 | 0 | 0 | 0.140805 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.466667 | true | 0 | 0.066667 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
cdbf3d72339caa9f00fa202bed70b6edd9bf4ff8 | 16,089 | py | Python | venv/lib/python3.8/site-packages/azureml/data/azure_data_lake_datastore.py | amcclead7336/Enterprise_Data_Science_Final | ccdc0aa08d4726bf82d71c11a1cc0c63eb301a28 | [
"Unlicense",
"MIT"
] | null | null | null | venv/lib/python3.8/site-packages/azureml/data/azure_data_lake_datastore.py | amcclead7336/Enterprise_Data_Science_Final | ccdc0aa08d4726bf82d71c11a1cc0c63eb301a28 | [
"Unlicense",
"MIT"
] | null | null | null | venv/lib/python3.8/site-packages/azureml/data/azure_data_lake_datastore.py | amcclead7336/Enterprise_Data_Science_Final | ccdc0aa08d4726bf82d71c11a1cc0c63eb301a28 | [
"Unlicense",
"MIT"
] | 2 | 2021-05-23T16:46:31.000Z | 2021-05-26T23:51:09.000Z | # ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
"""Contains the base functionality for datastores that save connection information to Azure Data Lake Storage."""
from .abstract_datastore import AbstractDatastore
class AbstractADLSDatastore(AbstractDatastore):
"""Represents the base class for datastores that save connection information to Azure Data Lake Storage.
You should not work with this class directly. To create a datastore that saves connection information
to Azure Data Lake Storage, use one of the ``register_azure_data_lake*`` methods of the
:class:`azureml.core.datastore.Datastore` class.
:param workspace: The workspace this datastore belongs to.
:type workspace: str
:param name: The datastore name.
:type name: str
:param datastore_type: The datastore type, for example, "AzureDataLake" or "AzureDataLakeGen2".
:type datastore_type: str
:param tenant_id: The Directory ID/Tenant ID of the service principal.
:type tenant_id: str
:param client_id: The Client ID/Application ID of the service principal.
:type client_id: str
:param client_secret: The secret of the service principal.
:type client_secret: str
:param resource_url: The resource url, which determines what operations will be performed on
the Data Lake Store.
:type resource_url: str
:param authority_url: The authorization server's url, defaults to https://login.microsoftonline.com.
:type authority_url: str
"""
def __init__(self, workspace, name, datastore_type, tenant_id, client_id, client_secret, resource_url,
authority_url, service_data_access_auth_identity):
"""Initialize a new Azure Data Lake Datastore.
:param workspace: The workspace this datastore belongs to.
:type workspace: str
:param name: The datastore name.
:type name: str
:param datastore_type: The datastore type, for example, "AzureDataLake" or "AzureDataLakeGen2".
:type datastore_type: str
:param tenant_id: The Directory ID/Tenant ID of the service principal.
:type tenant_id: str
:param client_id: The Client ID/Application ID of the service principal.
:type client_id: str
:param client_secret: The secret of the service principal.
:type client_secret: str
:param resource_url: The resource URL, which determines what operations will be performed on
the Data Lake Store.
:type resource_url: str
:param authority_url: The authorization server's url, defaults to https://login.microsoftonline.com.
:type authority_url: str
:param service_data_access_auth_identity: Indicates which identity to use
to authenticate service data access to customer's storage. Possible values
include: 'None', 'WorkspaceSystemAssignedIdentity', 'WorkspaceUserAssignedIdentity'
:type service_data_access_auth_identity: str or
~_restclient.models.ServiceDataAccessAuthIdentity
"""
super(AbstractADLSDatastore, self).__init__(workspace, name, datastore_type)
self.tenant_id = tenant_id
self.client_id = client_id
self.client_secret = client_secret
self.resource_url = resource_url
self.authority_url = authority_url
self.service_data_access_auth_identity = service_data_access_auth_identity
def _as_dict(self, hide_secret=True):
output = super(AbstractADLSDatastore, self)._as_dict()
output["tenant_id"] = self.tenant_id
output["client_id"] = self.client_id
output["resource_url"] = self.resource_url
output["authority_url"] = self.authority_url
if self.service_data_access_auth_identity:
output["service_data_access_auth_identity"] = self.service_data_access_auth_identity
if not hide_secret:
output["client_secret"] = self.client_secret
return output
class AzureDataLakeDatastore(AbstractADLSDatastore):
"""Represents a datastore that saves connection information to Azure Data Lake Storage.
To create a datastore that saves connection information to Azure Data Lake Storage, use the
:meth:`azureml.core.datastore.Datastore.register_azure_data_lake` method
of the :class:`azureml.core.datastore.Datastore` class.
Note: When using a datastore to access data, you must have permission to access the data, which depends on the
credentials registered with the datastore.
:param workspace: The workspace this datastore belongs to.
:type workspace: str
:param name: The datastore name.
:type name: str
:param store_name: The Azure Data Lake store name.
:type store_name: str
:param tenant_id: The Directory ID/Tenant ID of the service principal.
:type tenant_id: str
:param client_id: The Client ID/Application ID of the service principal.
:type client_id: str
:param client_secret: The secret of the service principal.
:type client_secret: str
:param resource_url: The resource url, which determines what operations will be performed on
the Data Lake Store.
:type resource_url: str, optional
:param authority_url: The authority URL used to authenticate the user.
:type authority_url: str, optional
:param subscription_id: The ID of the subscription the ADLS store belongs to.
Specify both ``subscription_id`` and ``resource_group`` when using the AzureDataLakeDatastore
as a data transfer destination.
:type subscription_id: str, optional
:param resource_group: The resource group the ADLS store belongs to.
Specify both ``subscription_id`` and ``resource_group`` when using the AzureDataLakeDatastore
as a data transfer destination.
:type resource_group: str, optional
:param service_data_access_auth_identity: Indicates which identity to use
to authenticate service data access to customer's storage. Possible values
include: 'None', 'WorkspaceSystemAssignedIdentity', 'WorkspaceUserAssignedIdentity'
:type service_data_access_auth_identity: str or
~_restclient.models.ServiceDataAccessAuthIdentity
"""
def __init__(self, workspace, name, store_name, tenant_id, client_id, client_secret,
resource_url=None, authority_url=None, subscription_id=None, resource_group=None,
service_data_access_auth_identity=None):
"""Initialize a new Azure Data Lake Datastore.
:param workspace: The workspace this datastore belongs to.
:type workspace: str
:param name: The datastore name.
:type name: str
:param store_name: The Azure Data Lake store name.
:type store_name: str
:param tenant_id: The Directory ID/Tenant ID of the service principal.
:type tenant_id: str
:param client_id: The Client ID/Application ID of the service principal.
:type client_id: str
:param client_secret: The secret of the service principal.
:type client_secret: str
:param resource_url: The resource url, which determines what operations will be performed on
the Data Lake Store.
:type resource_url: str, optional
:param authority_url: The authority URL used to authenticate the user.
:type authority_url: str, optional
:param subscription_id: The ID of the subscription the ADLS store belongs to.
Specify both ``subscription_id`` and ``resource_group`` when using the AzureDataLakeDatastore
as a data transfer destination.
:type subscription_id: str, optional
:param resource_group: The resource group the ADLS store belongs to.
Specify both ``subscription_id`` and ``resource_group`` when using the AzureDataLakeDatastore
as a data transfer destination.
:type resource_group: str, optional
:param service_data_access_auth_identity: Indicates which identity to use
to authenticate service data access to customer's storage. Possible values
include: 'None', 'WorkspaceSystemAssignedIdentity', 'WorkspaceUserAssignedIdentity'
:type service_data_access_auth_identity: str or
~_restclient.models.ServiceDataAccessAuthIdentity
"""
import azureml.data.constants as constants
super(AzureDataLakeDatastore, self).__init__(workspace, name, constants.AZURE_DATA_LAKE, tenant_id, client_id,
client_secret, resource_url, authority_url,
service_data_access_auth_identity)
self.store_name = store_name
self._subscription_id = subscription_id
self._resource_group = resource_group
@property
def subscription_id(self):
"""Return the subscription ID of the ADLS store.
:return: The subscription ID of the ADLS store.
:rtype: str
"""
return self._subscription_id
@subscription_id.setter
def subscription_id(self, subscription_id):
"""Obsolete, will be removed in future releases."""
self._subscription_id = subscription_id
@property
def resource_group(self):
"""Return the resource group of the ADLS store.
:return: The resource group of the ADLS store
:rtype: str
"""
return self._resource_group
@resource_group.setter
def resource_group(self, resource_group):
"""Obsolete, will be removed in future releases."""
self._resource_group = resource_group
def _as_dict(self, hide_secret=True):
output = super(AzureDataLakeDatastore, self)._as_dict(hide_secret)
output["store_name"] = self.store_name
output["subscription_id"] = self.subscription_id
output["resource_group"] = self.resource_group
return output
class AzureDataLakeGen2Datastore(AbstractADLSDatastore):
"""Represents a datastore that saves connection information to Azure Data Lake Storage Gen2.
To create a datastore that saves connection information to Azure Data Lake Storage, use the
``register_azure_data_lake_gen2`` method of the :class:`azureml.core.datastore.Datastore` class.
To access data from an AzureDataLakeGen2Datastore object, create a :class:`azureml.core.Dataset` and use
one of the methods like :meth:`azureml.data.dataset_factory.FileDatasetFactory.from_files` for a FileDataset.
For more information, see `Create Azure Machine Learning datasets
<https://docs.microsoft.com/azure/machine-learning/how-to-create-register-datasets>`_.
Also keep in mind:
* Uploading data to AzureDataLakeGen2 datastores is not supported at this time.
* When using a datastore to access data, you must have permission to access the data, which depends on the
credentials registered with the datastore.
* When using Service Principal Authentication to access storage via AzureDataLakeGen2, the service principal
or app registration must be assigned the specific role-based access control (RBAC) role at minimum of
"Storage Blob Data Reader". For more information, see `Storage built-in roles
<https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#storage>`_.
:param workspace: The workspace this datastore belongs to.
:type workspace: str
:param name: The datastore name.
:type name: str
:param container_name: The name of the Azure blob container.
:type container_name: str
:param account_name: The storage account name.
:type account_name: str
:param tenant_id: The Directory ID/Tenant ID of the service principal.
:type tenant_id: str
:param client_id: The Client ID/Application ID of the service principal.
:type client_id: str
:param client_secret: The secret of the service principal.
:type client_secret: str
:param resource_url: The resource url, which determines what operations will be performed on
the Data Lake Store.
:type resource_url: str
:param authority_url: The authority URL used to authenticate the user.
:type authority_url: str
:param protocol: The protocol to use to connect to the blob container.
If None, defaults to https.
:type protocol: str
:param endpoint: The endpoint of the blob container. If None, defaults to core.windows.net.
:type endpoint: str
:param service_data_access_auth_identity: Indicates which identity to use
to authenticate service data access to customer's storage. Possible values
include: 'None', 'WorkspaceSystemAssignedIdentity', 'WorkspaceUserAssignedIdentity'
:type service_data_access_auth_identity: str or
~_restclient.models.ServiceDataAccessAuthIdentity
"""
def __init__(self, workspace, name, container_name, account_name, tenant_id=None, client_id=None,
client_secret=None, resource_url=None, authority_url=None, protocol=None, endpoint=None,
service_data_access_auth_identity=None):
"""Initialize a new Azure Data Lake Gen2 Datastore.
:param workspace: The workspace this datastore belongs to.
:type workspace: str
:param name: The datastore name.
:type name: str
:param container_name: The name of the Azure blob container.
:type container_name: str
:param account_name: The storage account name.
:type account_name: str
:param tenant_id: The Directory ID/Tenant ID of the service principal.
:type tenant_id: str
:param client_id: The Client ID/Application ID of the service principal.
:type client_id: str
:param client_secret: The secret of the service principal.
:type client_secret: str
:param resource_url: The resource url, which determines what operations will be performed on
the Data Lake Store.
:type resource_url: str
:param authority_url: The authority URL used to authenticate the user.
:type authority_url: str
:param protocol: The protocol to use to connect to the blob container.
If None, defaults to https.
:type protocol: str
:param endpoint: The endpoint of the blob container. If None, defaults to core.windows.net.
:type endpoint: str
:param service_data_access_auth_identity: Indicates which identity to use
to authenticate service data access to customer's storage. Possible values
include: 'None', 'WorkspaceSystemAssignedIdentity', 'WorkspaceUserAssignedIdentity'
:type service_data_access_auth_identity: str or
~_restclient.models.ServiceDataAccessAuthIdentity
"""
import azureml.data.constants as constants
super(AzureDataLakeGen2Datastore, self).__init__(workspace, name, constants.AZURE_DATA_LAKE_GEN2, tenant_id,
client_id, client_secret, resource_url, authority_url,
service_data_access_auth_identity)
self.container_name = container_name
self.account_name = account_name
self.protocol = protocol
self.endpoint = endpoint
def _as_dict(self, hide_secret=True):
output = super(AzureDataLakeGen2Datastore, self)._as_dict(hide_secret)
output["container_name"] = self.container_name
output["account_name"] = self.account_name
output["protocol"] = self.protocol
output["endpoint"] = self.endpoint
return output
| 51.07619 | 119 | 0.689042 | 1,945 | 16,089 | 5.529563 | 0.098715 | 0.036448 | 0.039517 | 0.039052 | 0.815342 | 0.774059 | 0.75258 | 0.75258 | 0.71232 | 0.692422 | 0 | 0.000986 | 0.243583 | 16,089 | 314 | 120 | 51.238854 | 0.882744 | 0.660016 | 0 | 0.257143 | 0 | 0 | 0.042864 | 0.008321 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.042857 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cdc71e52dbf818d574dd5a79e2e3891cca8ea656 | 124 | py | Python | server/conf/conftest.py | jwnwilson/angularjs_tornado_webapp | acb810060fbb2f5d38b8d5b0cb484c9810358dda | [
"MIT"
] | null | null | null | server/conf/conftest.py | jwnwilson/angularjs_tornado_webapp | acb810060fbb2f5d38b8d5b0cb484c9810358dda | [
"MIT"
] | null | null | null | server/conf/conftest.py | jwnwilson/angularjs_tornado_webapp | acb810060fbb2f5d38b8d5b0cb484c9810358dda | [
"MIT"
] | null | null | null | import os
import sys
root_dir = os.path.dirname(
os.path.dirname(os.path.abspath(__file__)))
sys.path.append(root_dir)
| 17.714286 | 47 | 0.75 | 21 | 124 | 4.142857 | 0.47619 | 0.206897 | 0.298851 | 0.344828 | 0.367816 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112903 | 124 | 6 | 48 | 20.666667 | 0.790909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
cdeff07eeae5b195aef4ec3c01ac2d47b07c827c | 46 | py | Python | aa/unit/__init__.py | projectweekend/aa | 5c6da28121306b1125b2734d5a96677b3e3786e0 | [
"MIT"
] | null | null | null | aa/unit/__init__.py | projectweekend/aa | 5c6da28121306b1125b2734d5a96677b3e3786e0 | [
"MIT"
] | 2 | 2018-04-22T22:59:59.000Z | 2018-04-22T23:00:11.000Z | aa/unit/__init__.py | projectweekend/aa | 5c6da28121306b1125b2734d5a96677b3e3786e0 | [
"MIT"
] | null | null | null | from .army import Army
from .unit import Unit
| 15.333333 | 22 | 0.782609 | 8 | 46 | 4.5 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 46 | 2 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a82231a1f72a006228f463207c161a0ee81ec416 | 49 | py | Python | lab1/text_recognizer/networks/__init__.py | AlTheEngineer/fsdl-text-recognizer-project | 57469296b6969f54d90533733e250797b775ce53 | [
"MIT"
] | null | null | null | lab1/text_recognizer/networks/__init__.py | AlTheEngineer/fsdl-text-recognizer-project | 57469296b6969f54d90533733e250797b775ce53 | [
"MIT"
] | null | null | null | lab1/text_recognizer/networks/__init__.py | AlTheEngineer/fsdl-text-recognizer-project | 57469296b6969f54d90533733e250797b775ce53 | [
"MIT"
] | null | null | null | from .mlp import mlp
from .resnet import resnet
| 12.25 | 26 | 0.77551 | 8 | 49 | 4.75 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183673 | 49 | 3 | 27 | 16.333333 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b58e0f8db4b7df6fc2e52a0d97f7c3d42af660fc | 31 | py | Python | model/__init__.py | dhkim2810/AgeDetect | 0dfa4773a440735309b46cfae87b635f8f697345 | [
"Unlicense",
"MIT"
] | null | null | null | model/__init__.py | dhkim2810/AgeDetect | 0dfa4773a440735309b46cfae87b635f8f697345 | [
"Unlicense",
"MIT"
] | null | null | null | model/__init__.py | dhkim2810/AgeDetect | 0dfa4773a440735309b46cfae87b635f8f697345 | [
"Unlicense",
"MIT"
] | null | null | null | from .model import create_model | 31 | 31 | 0.870968 | 5 | 31 | 5.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 31 | 1 | 31 | 31 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a97b184d7fd62e3e0e998c371ea2d5269a0a5274 | 4,388 | py | Python | route/analyticRoute.py | filipefcl/fs-webservice-api | ca71e457003119e87d91a8560a1e43d76132d80a | [
"MIT"
] | null | null | null | route/analyticRoute.py | filipefcl/fs-webservice-api | ca71e457003119e87d91a8560a1e43d76132d80a | [
"MIT"
] | null | null | null | route/analyticRoute.py | filipefcl/fs-webservice-api | ca71e457003119e87d91a8560a1e43d76132d80a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import json
import traceback
import inspect
import requests
from datetime import datetime
from flask import Flask, request
from util import Util, Constants, Log, CodeReturn
from controller import Controller
log = Log('AnalyticRoute')
util = Util()
constants = Constants()
controller = Controller()
codeReturn = CodeReturn()
class AnalyticRoute:
def list_acc_ref_movimentation(self, request):
try:
header = request.headers
#Get Token from Header
token = str(header['token'])
#Get datas from PARAMS
data = json.loads(str(request.args.get('data')).replace("'", '"'))
companies_token = data['companies_token']
classification_ref = data['classification_ref']
date_start = data['date_start']
date_end = data['date_end']
except:
return util.make_json(codeReturn.BAD_REQUEST_CODE, codeReturn.BAD_REQUEST_MSG, [])
headers = {
'Content-Type': 'application/json',
'token': token
}
params = {
'data':json.dumps(data)
}
url = constants.CORE_URL_ANALYTIC_ACC_REF_MOV
return json.dumps(requests.get(url, headers=headers, params=params).json())
def list_acc_ref_balance(self, request):
try:
header = request.headers
#Get Token from Header
token = str(header['token'])
#Get datas from PARAMS
data = json.loads(str(request.args.get('data')).replace("'", '"'))
companies_token = data['companies_token']
classification_ref = data['classification_ref']
date_start = data['date_start']
date_end = data['date_end']
except:
return util.make_json(codeReturn.BAD_REQUEST_CODE, codeReturn.BAD_REQUEST_MSG, [])
headers = {
'Content-Type': 'application/json',
'token': token
}
params = {
'data':json.dumps(data)
}
url = constants.CORE_URL_ANALYTIC_ACC_REF_BALANCE
return json.dumps(requests.get(url, headers=headers, params=params).json())
def list_serie(self, request):
try:
header = request.headers
#Get Token from Header
token = str(header['token'])
except:
return util.make_json(codeReturn.BAD_REQUEST_CODE, codeReturn.BAD_REQUEST_MSG, [])
headers = {
'Content-Type': 'application/json',
'token': token
}
url = constants.CORE_URL_ANALYTIC_SERIE_LIST
return json.dumps(requests.get(url, headers=headers).json())
def list_serie_data(self, request):
try:
header = request.headers
#Get Token from Header
token = str(header['token'])
#Get datas from PARAMS
data = json.loads(str(request.args.get('data')).replace("'", '"'))
serie_type_id = data['serie_type_id']
date_start = data['date_start']
date_end = data['date_end']
except:
return util.make_json(codeReturn.BAD_REQUEST_CODE, codeReturn.BAD_REQUEST_MSG, [])
headers = {
'Content-Type': 'application/json',
'token': token
}
params = {
'data':json.dumps(data)
}
url = constants.CORE_URL_ANALYTIC_SERIE_DATA_LIST
return json.dumps(requests.get(url, headers=headers, params=params).json())
def get_roi(self, request):
try:
header = request.headers
#Get Token from Header
token = str(header['token'])
#Get datas from PARAMS
data = json.loads(str(request.args.get('data')).replace("'", '"'))
companies_token = data['companies_token']
date_start = data['date_start']
date_end = data['date_end']
except:
return util.make_json(codeReturn.BAD_REQUEST_CODE, codeReturn.BAD_REQUEST_MSG, [])
headers = {
'Content-Type': 'application/json',
'token': token
}
params = {
'data':json.dumps(data)
}
url = constants.CORE_URL_ANALYTIC_ROI_GET
return json.dumps(requests.get(url, headers=headers, params=params).json()) | 29.253333 | 94 | 0.578396 | 469 | 4,388 | 5.223881 | 0.127932 | 0.044898 | 0.081633 | 0.040816 | 0.835102 | 0.835102 | 0.82 | 0.82 | 0.82 | 0.799184 | 0 | 0.000329 | 0.308341 | 4,388 | 150 | 95 | 29.253333 | 0.806919 | 0.047858 | 0 | 0.663462 | 0 | 0 | 0.098129 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048077 | false | 0 | 0.076923 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a99c8be8592ec7e933f1be3a834dcf65cf23e465 | 113 | py | Python | menpo/shape/__init__.py | ikassi/menpo | ca702fc814a1ad50b27c44c6544ba364d3aa7e31 | [
"BSD-3-Clause"
] | null | null | null | menpo/shape/__init__.py | ikassi/menpo | ca702fc814a1ad50b27c44c6544ba364d3aa7e31 | [
"BSD-3-Clause"
] | null | null | null | menpo/shape/__init__.py | ikassi/menpo | ca702fc814a1ad50b27c44c6544ba364d3aa7e31 | [
"BSD-3-Clause"
] | 1 | 2021-04-14T12:09:00.000Z | 2021-04-14T12:09:00.000Z | from menpo.shape.pointcloud import PointCloud
from menpo.shape.mesh import TriMesh, FastTriMesh, TexturedTriMesh
| 37.666667 | 66 | 0.858407 | 14 | 113 | 6.928571 | 0.642857 | 0.185567 | 0.28866 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088496 | 113 | 2 | 67 | 56.5 | 0.941748 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8d98925fcdfaf3e2943c4fe6d826b795e2b3f667 | 16,683 | py | Python | testcases/alphavantage_test.py | indalsig/pyalgotrade | 64a6958cf9e60ac59c26bafb959d058e6127496d | [
"Apache-2.0"
] | null | null | null | testcases/alphavantage_test.py | indalsig/pyalgotrade | 64a6958cf9e60ac59c26bafb959d058e6127496d | [
"Apache-2.0"
] | null | null | null | testcases/alphavantage_test.py | indalsig/pyalgotrade | 64a6958cf9e60ac59c26bafb959d058e6127496d | [
"Apache-2.0"
] | null | null | null | # PyAlgoTrade
#
# Copyright 2011-2018 Gabriel Martin Becedillas Ruiz
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
.. moduleauthor:: Juan Salvador Magán Valero <jmaganvalero@gmail.com>
"""
import os
import datetime
import tempfile
import shutil
import subprocess
import six
from . import common
from pyalgotrade.tools import alphavantage
from pyalgotrade import bar
from pyalgotrade.barfeed import alphavantagefeed
from pyalgotrade.barfeed import csvfeed
try:
# This will get environment variables set.
from . import credentials
except:
pass
ALPHA_VANTAGE_API_KEY = os.getenv("ALPHA_VANTAGE_API_KEY")
assert ALPHA_VANTAGE_API_KEY is not None, "ALPHA_VANTAGE_API_KEY not set"
def bytes_to_str(b):
ret = b
if six.PY3:
# Convert the bytes to a string.
ret = ret.decode()
return ret
def check_output(*args, **kwargs):
ret = subprocess.check_output(*args, **kwargs)
if six.PY3:
# Convert the bytes to a string.
ret = bytes_to_str(ret)
return ret
class ToolsTestCase(common.TestCase):

    def testDownloadAndParseDailyUsingApiKey(self):
        with common.TmpDir() as tmpPath:
            instrument = "ORCL"
            path = os.path.join(tmpPath, "alpha-vantage-daily-orcl.csv")
            alphavantage.download_daily_bars(instrument, path, apiKey=ALPHA_VANTAGE_API_KEY)
            bf = alphavantagefeed.Feed()
            bf.addBarsFromCSV(instrument, path)
            bf.loadAll()
            for b in bf[instrument]:
                if b.getDateTime() == datetime.datetime(2019, 12, 17):
                    self.assertEqual(b.getOpen(), 53.89)
                    self.assertEqual(b.getHigh(), 54.06)
                    self.assertEqual(b.getLow(), 52.83)
                    self.assertEqual(b.getClose(), 52.84)
                    self.assertEqual(b.getVolume(), 19778245.0)
                    self.assertEqual(b.getPrice(), 52.84)
            # Not checking against a specific value since this is going to change
            # as time passes by.
            self.assertNotEqual(bf[instrument][-1].getAdjClose(), None)

    def testDownloadAndParseDaily_UseAdjClose(self):
        with common.TmpDir() as tmpPath:
            instrument = "ORCL"
            path = os.path.join(tmpPath, "alpha-vantage-daily-orcl.csv")
            alphavantage.download_daily_bars(instrument, path, apiKey=ALPHA_VANTAGE_API_KEY)
            bf = alphavantagefeed.Feed()
            bf.addBarsFromCSV(instrument, path)
            # Need to setUseAdjustedValues(True) after loading the file because we
            # can't tell in advance if adjusted values are there or not.
            bf.setUseAdjustedValues(True)
            bf.loadAll()
            for b in bf[instrument]:
                if b.getDateTime() == datetime.datetime(2019, 12, 17):
                    self.assertEqual(b.getOpen(), 53.89)
                    self.assertEqual(b.getHigh(), 54.06)
                    self.assertEqual(b.getLow(), 52.83)
                    self.assertEqual(b.getClose(), 52.84)
                    self.assertEqual(b.getVolume(), 19778245.0)
                    self.assertEqual(b.getPrice(), 52.84)
                    self.assertEqual(b.getPrice(), b.getAdjClose())
            # Not checking against a specific value since this is going to change
            # as time passes by.
            self.assertNotEqual(bf[instrument][-1].getAdjClose(), None)
    def testDownloadAndParseDailyNoAdjClose(self):
        with common.TmpDir() as tmpPath:
            instrument = "ORCL"
            path = os.path.join(tmpPath, "alpha-vantage-daily-%s.csv" % (instrument,))
            alphavantage.download_daily_bars(instrument, path, apiKey=ALPHA_VANTAGE_API_KEY)
            bf = alphavantagefeed.Feed()
            bf.setNoAdjClose()
            bf.addBarsFromCSV(instrument, path, skipMalformedBars=True)
            bf.loadAll()
            for b in bf[instrument]:
                if b.getDateTime() == datetime.datetime(2019, 12, 17):
                    self.assertEqual(b.getOpen(), 53.89)
                    self.assertEqual(b.getHigh(), 54.06)
                    self.assertEqual(b.getLow(), 52.83)
                    self.assertEqual(b.getClose(), 52.84)
                    self.assertEqual(b.getVolume(), 19778245.0)
                    self.assertEqual(b.getAdjClose(), None)
                    self.assertEqual(b.getPrice(), 52.84)

    def testDownloadAndParseWeekly(self):
        with common.TmpDir() as tmpPath:
            instrument = "AAPL"
            path = os.path.join(tmpPath, "alpha-vantage-aapl-weekly.csv")
            alphavantage.download_weekly_bars(instrument, path, apiKey=ALPHA_VANTAGE_API_KEY)
            bf = alphavantagefeed.Feed(frequency=bar.Frequency.WEEK)
            bf.setBarFilter(csvfeed.DateRangeFilter(fromDate=datetime.datetime(2010, 1, 1),
                                                    toDate=datetime.datetime(2010, 12, 31)))
            bf.addBarsFromCSV(instrument, path)
            bf.loadAll()
            # Alpha Vantage used to report 2010-1-8 as the first week of 2010.
            self.assertTrue(
                bf[instrument][0].getDateTime() in [datetime.datetime(2010, 1, 8), datetime.datetime(2010, 1, 15)]
            )
            self.assertEqual(bf[instrument][-1].getDateTime(), datetime.datetime(2010, 12, 31))
            self.assertEqual(bf[instrument][-1].getOpen(), 322.8519)
            self.assertEqual(bf[instrument][-1].getHigh(), 326.66)
            self.assertEqual(bf[instrument][-1].getLow(), 321.31)
            self.assertEqual(bf[instrument][-1].getClose(), 322.56)
            self.assertEqual(bf[instrument][-1].getVolume(), 33567200.0)
            self.assertEqual(bf[instrument][-1].getPrice(), 322.56)
            # Not checking against a specific value since this is going to change
            # as time passes by.
            self.assertNotEqual(bf[instrument][-1].getAdjClose(), None)
    def testDownloadAndParseHourly(self):
        with common.TmpDir() as tmpPath:
            instrument = "AAPL"
            path = os.path.join(tmpPath, "alpha-vantage-aapl-hourly.csv")
            alphavantage.download_intradays_bars(instrument, path, bar.Frequency.HOUR, apiKey=ALPHA_VANTAGE_API_KEY)
            bf = alphavantagefeed.Feed(frequency=bar.Frequency.HOUR)
            bf.addBarsFromCSV(instrument, path)
            bf.loadAll()
            self.assertIsNotNone(bf[instrument][-1].getDateTime())
            self.assertIsNotNone(bf[instrument][-1].getOpen())
            self.assertIsNotNone(bf[instrument][-1].getHigh())
            self.assertIsNotNone(bf[instrument][-1].getLow())
            self.assertIsNotNone(bf[instrument][-1].getClose())
            self.assertIsNotNone(bf[instrument][-1].getVolume())
            self.assertIsNotNone(bf[instrument][-1].getPrice())
            # No adjusted price in intraday bars.
            self.assertIsNone(bf[instrument][-1].getAdjClose())

    def testDownloadAndParseMinutes(self):
        with common.TmpDir() as tmpPath:
            instrument = "AAPL"
            path = os.path.join(tmpPath, "alpha-vantage-aapl-minute.csv")
            alphavantage.download_intradays_bars(instrument, path, bar.Frequency.MINUTE, apiKey=ALPHA_VANTAGE_API_KEY)
            bf = alphavantagefeed.Feed(frequency=bar.Frequency.MINUTE)
            bf.addBarsFromCSV(instrument, path)
            bf.loadAll()
            self.assertIsNotNone(bf[instrument][-1].getDateTime())
            self.assertIsNotNone(bf[instrument][-1].getOpen())
            self.assertIsNotNone(bf[instrument][-1].getHigh())
            self.assertIsNotNone(bf[instrument][-1].getLow())
            self.assertIsNotNone(bf[instrument][-1].getClose())
            self.assertIsNotNone(bf[instrument][-1].getVolume())
            self.assertIsNotNone(bf[instrument][-1].getPrice())
            # No adjusted price in intraday bars.
            self.assertIsNone(bf[instrument][-1].getAdjClose())

    def testInvalidFrequency(self):
        with self.assertRaisesRegexp(Exception, "Invalid frequency.*"):
            alphavantagefeed.Feed(frequency=bar.Frequency.SECOND)
    def testBuildFeedDaily(self):
        with common.TmpDir() as tmpPath:
            instrument = "ORCL"
            bf = alphavantage.build_feed([instrument], tmpPath, datetime.datetime(2010, 1, 1),
                                         datetime.datetime(2010, 12, 31), apiKey=ALPHA_VANTAGE_API_KEY)
            bf.loadAll()
            self.assertEqual(bf[instrument][-1].getDateTime(), datetime.datetime(2010, 12, 31))
            self.assertEqual(bf[instrument][-1].getOpen(), 31.22)
            self.assertEqual(bf[instrument][-1].getHigh(), 31.33)
            self.assertEqual(bf[instrument][-1].getLow(), 30.93)
            self.assertEqual(bf[instrument][-1].getClose(), 31.3)
            self.assertEqual(bf[instrument][-1].getVolume(), 11716300)
            self.assertEqual(bf[instrument][-1].getPrice(), 31.3)
            # Not checking against a specific value since this is going to change
            # as time passes by.
            self.assertNotEqual(bf[instrument][-1].getAdjClose(), None)

    def testBuildFeedWeekly(self):
        with common.TmpDir() as tmpPath:
            instrument = "AAPL"
            bf = alphavantage.build_feed(
                [instrument], tmpPath, datetime.datetime(2010, 1, 1),
                datetime.datetime(2010, 12, 31), bar.Frequency.WEEK,
                apiKey=ALPHA_VANTAGE_API_KEY
            )
            bf.loadAll()
            # Alpha Vantage used to report 2010-1-8 as the first week of 2010.
            self.assertTrue(
                bf[instrument][0].getDateTime() in [datetime.datetime(2010, 1, 8), datetime.datetime(2010, 1, 15)]
            )
            self.assertEqual(bf[instrument][-1].getDateTime(), datetime.datetime(2010, 12, 31))
            self.assertEqual(bf[instrument][-1].getOpen(), 322.8519)
            self.assertEqual(bf[instrument][-1].getHigh(), 326.66)
            self.assertEqual(bf[instrument][-1].getLow(), 321.31)
            self.assertEqual(bf[instrument][-1].getClose(), 322.56)
            self.assertEqual(bf[instrument][-1].getVolume(), 33567200.0)
            self.assertEqual(bf[instrument][-1].getPrice(), 322.56)
            # Not checking against a specific value since this is going to change
            # as time passes by.
            self.assertNotEqual(bf[instrument][-1].getAdjClose(), None)
    def testInvalidInstrument(self):
        instrument = "inexistent"

        # Don't skip errors.
        with self.assertRaisesRegexp(Exception, "Invalid content-type: application/json"):
            with common.TmpDir() as tmpPath:
                alphavantage.build_feed(
                    [instrument], tmpPath, frequency=bar.Frequency.WEEK,
                    apiKey=ALPHA_VANTAGE_API_KEY
                )

        # Skip errors.
        with common.TmpDir() as tmpPath:
            bf = alphavantage.build_feed(
                [instrument], tmpPath, frequency=bar.Frequency.WEEK, skipErrors=True,
                apiKey=ALPHA_VANTAGE_API_KEY
            )
            bf.loadAll()
            self.assertNotIn(instrument, bf)

    def testBuildFeedDailyCreatingDir(self):
        tmpPath = tempfile.mkdtemp()
        shutil.rmtree(tmpPath)
        try:
            instrument = "ORCL"
            bf = alphavantage.build_feed([instrument], tmpPath, datetime.datetime(2010, 1, 1),
                                         datetime.datetime(2010, 12, 31), apiKey=ALPHA_VANTAGE_API_KEY)
            bf.loadAll()
            self.assertEqual(bf[instrument][-1].getDateTime(), datetime.datetime(2010, 12, 31))
            self.assertEqual(bf[instrument][-1].getOpen(), 31.22)
            self.assertEqual(bf[instrument][-1].getHigh(), 31.33)
            self.assertEqual(bf[instrument][-1].getLow(), 30.93)
            self.assertEqual(bf[instrument][-1].getClose(), 31.3)
            self.assertEqual(bf[instrument][-1].getVolume(), 11716300)
            self.assertEqual(bf[instrument][-1].getPrice(), 31.3)
            # Not checking against a specific value since this is going to change
            # as time passes by.
            self.assertNotEqual(bf[instrument][-1].getAdjClose(), None)
        finally:
            shutil.rmtree(tmpPath)
    def testCommandLineDailyCreatingDir(self):
        tmpPath = tempfile.mkdtemp()
        shutil.rmtree(tmpPath)
        try:
            instrument = "ORCL"
            subprocess.call([
                "python", "-m", "pyalgotrade.tools.alphavantage",
                "--symbol=%s" % instrument,
                "--storage=%s" % tmpPath,
                "--api-key=%s" % ALPHA_VANTAGE_API_KEY
            ])
            bf = alphavantagefeed.Feed()
            bf.setBarFilter(csvfeed.DateRangeFilter(fromDate=datetime.datetime(2010, 1, 1),
                                                    toDate=datetime.datetime(2010, 12, 31)))
            bf.addBarsFromCSV(instrument, os.path.join(tmpPath, "ORCL-alpha-vantage.csv"))
            bf.loadAll()
            self.assertEqual(bf[instrument][-1].getDateTime(), datetime.datetime(2010, 12, 31))
            self.assertEqual(bf[instrument][-1].getOpen(), 31.22)
            self.assertEqual(bf[instrument][-1].getHigh(), 31.33)
            self.assertEqual(bf[instrument][-1].getLow(), 30.93)
            self.assertEqual(bf[instrument][-1].getClose(), 31.3)
            self.assertEqual(bf[instrument][-1].getVolume(), 11716300)
            self.assertEqual(bf[instrument][-1].getPrice(), 31.3)
        finally:
            shutil.rmtree(tmpPath)

    def testCommandLineWeeklyCreatingDir(self):
        tmpPath = tempfile.mkdtemp()
        shutil.rmtree(tmpPath)
        try:
            instrument = "AAPL"
            subprocess.call([
                "python", "-m", "pyalgotrade.tools.alphavantage",
                "--symbol=%s" % instrument,
                "--storage=%s" % tmpPath,
                "--frequency=weekly",
                "--api-key=%s" % ALPHA_VANTAGE_API_KEY
            ])
            bf = alphavantagefeed.Feed()
            bf.setBarFilter(csvfeed.DateRangeFilter(fromDate=datetime.datetime(2010, 1, 1),
                                                    toDate=datetime.datetime(2010, 12, 31)))
            bf.addBarsFromCSV(instrument, os.path.join(tmpPath, "AAPL-alpha-vantage.csv"))
            bf.loadAll()
            self.assertEqual(bf[instrument][-1].getDateTime(), datetime.datetime(2010, 12, 31))
            self.assertEqual(bf[instrument][-1].getOpen(), 322.8519)
            self.assertEqual(bf[instrument][-1].getHigh(), 326.66)
            self.assertEqual(bf[instrument][-1].getLow(), 321.31)
            self.assertEqual(bf[instrument][-1].getClose(), 322.56)
            self.assertEqual(bf[instrument][-1].getVolume(), 33567200.0)
            self.assertEqual(bf[instrument][-1].getPrice(), 322.56)
        finally:
            shutil.rmtree(tmpPath)
    def testIgnoreErrors(self):
        with common.TmpDir() as tmpPath:
            instrument = "inexistent"
            output = check_output(
                [
                    "python", "-m", "pyalgotrade.tools.alphavantage",
                    "--symbol=%s" % instrument,
                    "--storage=%s" % tmpPath,
                    "--frequency=daily",
                    "--ignore-errors"
                ],
                stderr=subprocess.STDOUT
            )
            self.assertIn("Invalid content-type: application/json", output)

    def testDontIgnoreErrors(self):
        with self.assertRaises(Exception) as e:
            with common.TmpDir() as tmpPath:
                instrument = "inexistent"
                check_output(
                    [
                        "python", "-m", "pyalgotrade.tools.alphavantage",
                        "--symbol=%s" % instrument,
                        "--storage=%s" % tmpPath,
                        "--frequency=daily"
                    ],
                    stderr=subprocess.STDOUT
                )
        self.assertIn("Invalid content-type: application/json", bytes_to_str(e.exception.output))
| 45.334239 | 118 | 0.594917 | 1,760 | 16,683 | 5.594318 | 0.153409 | 0.084095 | 0.084501 | 0.115174 | 0.796059 | 0.765184 | 0.758582 | 0.749035 | 0.731465 | 0.700183 | 0 | 0.046463 | 0.284002 | 16,683 | 367 | 119 | 45.457766 | 0.777815 | 0.098304 | 0 | 0.710247 | 0 | 0 | 0.053677 | 0.025005 | 0 | 0 | 0 | 0 | 0.328622 | 1 | 0.060071 | false | 0.003534 | 0.042403 | 0 | 0.113074 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8daa7dc00218d00c8e82337b98a0c52c465a1ade | 8,758 | py | Python | fhir/resources/tests/test_substancespecification.py | mmabey/fhir.resources | cc73718e9762c04726cd7de240c8f2dd5313cbe1 | [
"BSD-3-Clause"
] | null | null | null | fhir/resources/tests/test_substancespecification.py | mmabey/fhir.resources | cc73718e9762c04726cd7de240c8f2dd5313cbe1 | [
"BSD-3-Clause"
] | null | null | null | fhir/resources/tests/test_substancespecification.py | mmabey/fhir.resources | cc73718e9762c04726cd7de240c8f2dd5313cbe1 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Profile: http://hl7.org/fhir/StructureDefinition/SubstanceSpecification
Release: R4
Version: 4.0.1
Build ID: 9346c8cc45
Last updated: 2019-11-01T09:29:23.356+11:00
"""
import io
import json
import os
import unittest
import pytest
from .. import substancespecification
from ..fhirdate import FHIRDate
from .fixtures import force_bytes
@pytest.mark.usefixtures("base_settings")
class SubstanceSpecificationTests(unittest.TestCase):
    def instantiate_from(self, filename):
        datadir = os.environ.get("FHIR_UNITTEST_DATADIR") or ""
        with io.open(os.path.join(datadir, filename), "r", encoding="utf-8") as handle:
            js = json.load(handle)
            self.assertEqual("SubstanceSpecification", js["resourceType"])
            return substancespecification.SubstanceSpecification(js)
    def testSubstanceSpecification1(self):
        inst = self.instantiate_from("substancesourcematerial-example.json")
        self.assertIsNotNone(
            inst, "Must have instantiated a SubstanceSpecification instance"
        )
        self.implSubstanceSpecification1(inst)

        js = inst.as_json()
        self.assertEqual("SubstanceSpecification", js["resourceType"])
        inst2 = substancespecification.SubstanceSpecification(js)
        self.implSubstanceSpecification1(inst2)

    def implSubstanceSpecification1(self, inst):
        self.assertEqual(force_bytes(inst.id), force_bytes("example"))
        self.assertEqual(force_bytes(inst.meta.tag[0].code), force_bytes("HTEST"))
        self.assertEqual(
            force_bytes(inst.meta.tag[0].display), force_bytes("test health data")
        )
        self.assertEqual(
            force_bytes(inst.meta.tag[0].system),
            force_bytes("http://terminology.hl7.org/CodeSystem/v3-ActReason"),
        )
        self.assertEqual(
            force_bytes(inst.text.div),
            force_bytes(
                '<div xmlns="http://www.w3.org/1999/xhtml"><p><b>Generated Narrative with Details</b></p><p><b>id</b>: example</p></div>'
            ),
        )
        self.assertEqual(force_bytes(inst.text.status), force_bytes("generated"))

    def testSubstanceSpecification2(self):
        inst = self.instantiate_from("substanceprotein-example.json")
        self.assertIsNotNone(
            inst, "Must have instantiated a SubstanceSpecification instance"
        )
        self.implSubstanceSpecification2(inst)

        js = inst.as_json()
        self.assertEqual("SubstanceSpecification", js["resourceType"])
        inst2 = substancespecification.SubstanceSpecification(js)
        self.implSubstanceSpecification2(inst2)

    def implSubstanceSpecification2(self, inst):
        self.assertEqual(force_bytes(inst.id), force_bytes("example"))
        self.assertEqual(force_bytes(inst.meta.tag[0].code), force_bytes("HTEST"))
        self.assertEqual(
            force_bytes(inst.meta.tag[0].display), force_bytes("test health data")
        )
        self.assertEqual(
            force_bytes(inst.meta.tag[0].system),
            force_bytes("http://terminology.hl7.org/CodeSystem/v3-ActReason"),
        )
        self.assertEqual(
            force_bytes(inst.text.div),
            force_bytes(
                '<div xmlns="http://www.w3.org/1999/xhtml"><p><b>Generated Narrative with Details</b></p><p><b>id</b>: example</p></div>'
            ),
        )
        self.assertEqual(force_bytes(inst.text.status), force_bytes("generated"))
    def testSubstanceSpecification3(self):
        inst = self.instantiate_from("substancepolymer-example.json")
        self.assertIsNotNone(
            inst, "Must have instantiated a SubstanceSpecification instance"
        )
        self.implSubstanceSpecification3(inst)

        js = inst.as_json()
        self.assertEqual("SubstanceSpecification", js["resourceType"])
        inst2 = substancespecification.SubstanceSpecification(js)
        self.implSubstanceSpecification3(inst2)

    def implSubstanceSpecification3(self, inst):
        self.assertEqual(force_bytes(inst.id), force_bytes("example"))
        self.assertEqual(force_bytes(inst.meta.tag[0].code), force_bytes("HTEST"))
        self.assertEqual(
            force_bytes(inst.meta.tag[0].display), force_bytes("test health data")
        )
        self.assertEqual(
            force_bytes(inst.meta.tag[0].system),
            force_bytes("http://terminology.hl7.org/CodeSystem/v3-ActReason"),
        )
        self.assertEqual(
            force_bytes(inst.text.div),
            force_bytes(
                '<div xmlns="http://www.w3.org/1999/xhtml"><p><b>Generated Narrative with Details</b></p><p><b>id</b>: example</p></div>'
            ),
        )
        self.assertEqual(force_bytes(inst.text.status), force_bytes("generated"))

    def testSubstanceSpecification4(self):
        inst = self.instantiate_from("substancespecification-example.json")
        self.assertIsNotNone(
            inst, "Must have instantiated a SubstanceSpecification instance"
        )
        self.implSubstanceSpecification4(inst)

        js = inst.as_json()
        self.assertEqual("SubstanceSpecification", js["resourceType"])
        inst2 = substancespecification.SubstanceSpecification(js)
        self.implSubstanceSpecification4(inst2)

    def implSubstanceSpecification4(self, inst):
        self.assertEqual(force_bytes(inst.id), force_bytes("example"))
        self.assertEqual(force_bytes(inst.meta.tag[0].code), force_bytes("HTEST"))
        self.assertEqual(
            force_bytes(inst.meta.tag[0].display), force_bytes("test health data")
        )
        self.assertEqual(
            force_bytes(inst.meta.tag[0].system),
            force_bytes("http://terminology.hl7.org/CodeSystem/v3-ActReason"),
        )
        self.assertEqual(
            force_bytes(inst.text.div),
            force_bytes(
                '<div xmlns="http://www.w3.org/1999/xhtml"><p><b>Generated Narrative with Details</b></p><p><b>id</b>: example</p></div>'
            ),
        )
        self.assertEqual(force_bytes(inst.text.status), force_bytes("generated"))
    def testSubstanceSpecification5(self):
        inst = self.instantiate_from("substancereferenceinformation-example.json")
        self.assertIsNotNone(
            inst, "Must have instantiated a SubstanceSpecification instance"
        )
        self.implSubstanceSpecification5(inst)

        js = inst.as_json()
        self.assertEqual("SubstanceSpecification", js["resourceType"])
        inst2 = substancespecification.SubstanceSpecification(js)
        self.implSubstanceSpecification5(inst2)

    def implSubstanceSpecification5(self, inst):
        self.assertEqual(force_bytes(inst.id), force_bytes("example"))
        self.assertEqual(force_bytes(inst.meta.tag[0].code), force_bytes("HTEST"))
        self.assertEqual(
            force_bytes(inst.meta.tag[0].display), force_bytes("test health data")
        )
        self.assertEqual(
            force_bytes(inst.meta.tag[0].system),
            force_bytes("http://terminology.hl7.org/CodeSystem/v3-ActReason"),
        )
        self.assertEqual(
            force_bytes(inst.text.div),
            force_bytes(
                '<div xmlns="http://www.w3.org/1999/xhtml"><p><b>Generated Narrative with Details</b></p><p><b>id</b>: example</p></div>'
            ),
        )
        self.assertEqual(force_bytes(inst.text.status), force_bytes("generated"))

    def testSubstanceSpecification6(self):
        inst = self.instantiate_from("substancenucleicacid-example.json")
        self.assertIsNotNone(
            inst, "Must have instantiated a SubstanceSpecification instance"
        )
        self.implSubstanceSpecification6(inst)

        js = inst.as_json()
        self.assertEqual("SubstanceSpecification", js["resourceType"])
        inst2 = substancespecification.SubstanceSpecification(js)
        self.implSubstanceSpecification6(inst2)

    def implSubstanceSpecification6(self, inst):
        self.assertEqual(force_bytes(inst.id), force_bytes("example"))
        self.assertEqual(force_bytes(inst.meta.tag[0].code), force_bytes("HTEST"))
        self.assertEqual(
            force_bytes(inst.meta.tag[0].display), force_bytes("test health data")
        )
        self.assertEqual(
            force_bytes(inst.meta.tag[0].system),
            force_bytes("http://terminology.hl7.org/CodeSystem/v3-ActReason"),
        )
        self.assertEqual(
            force_bytes(inst.text.div),
            force_bytes(
                '<div xmlns="http://www.w3.org/1999/xhtml"><p><b>Generated Narrative with Details</b></p><p><b>id</b>: example</p></div>'
            ),
        )
        self.assertEqual(force_bytes(inst.text.status), force_bytes("generated"))
| 41.704762 | 137 | 0.65129 | 922 | 8,758 | 6.090022 | 0.129067 | 0.130009 | 0.128228 | 0.160285 | 0.754408 | 0.716474 | 0.716474 | 0.716474 | 0.716474 | 0.716474 | 0 | 0.019183 | 0.220256 | 8,758 | 209 | 138 | 41.904306 | 0.803046 | 0.021238 | 0 | 0.61236 | 0 | 0.033708 | 0.239841 | 0.079285 | 0 | 0 | 0 | 0 | 0.275281 | 1 | 0.073034 | false | 0 | 0.044944 | 0 | 0.129213 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a5c605a2de552666ecbd5c21debb8f1b07cb9712 | 116 | py | Python | VoidFinder/vast/voidfinder/tests/test_init.py | DESI-UR/VAST | 25f9449a7fe04bba6e261b6406a5d82aefcac2d7 | [
"BSD-3-Clause"
] | 5 | 2021-08-18T18:38:16.000Z | 2022-02-17T01:33:59.000Z | VoidFinder/vast/voidfinder/tests/test_init.py | DESI-UR/VAST | 25f9449a7fe04bba6e261b6406a5d82aefcac2d7 | [
"BSD-3-Clause"
] | 36 | 2020-09-16T15:44:28.000Z | 2022-03-28T20:32:30.000Z | VoidFinder/vast/voidfinder/tests/test_init.py | DESI-UR/VAST | 25f9449a7fe04bba6e261b6406a5d82aefcac2d7 | [
"BSD-3-Clause"
] | 3 | 2022-02-05T07:00:39.000Z | 2022-03-25T04:27:03.000Z | from vast import voidfinder as voidfinder
def test_version_exists():
assert hasattr(voidfinder, '__version__')
| 23.2 | 45 | 0.793103 | 14 | 116 | 6.142857 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 116 | 4 | 46 | 29 | 0.86 | 0 | 0 | 0 | 0 | 0 | 0.094828 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a5fa9fb3bc903331198bb7f0a2f28b54d7bb20e9 | 29 | py | Python | src/Sparrow/Python/__init__.py | DockBio/sparrow | f82cf86584e9edfc6f2c78af4896dc6f2ee8a455 | [
"BSD-3-Clause"
] | null | null | null | src/Sparrow/Python/__init__.py | DockBio/sparrow | f82cf86584e9edfc6f2c78af4896dc6f2ee8a455 | [
"BSD-3-Clause"
] | null | null | null | src/Sparrow/Python/__init__.py | DockBio/sparrow | f82cf86584e9edfc6f2c78af4896dc6f2ee8a455 | [
"BSD-3-Clause"
] | null | null | null | from .scine_sparrow import *
| 14.5 | 28 | 0.793103 | 4 | 29 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
571d18bf574e89544c89966fd062fdefe4168b5a | 25 | py | Python | Cybernator/__init__.py | RuCybernetic/Cybernetor | 9aa58f23ccb1455b601e07044334ed61ca68c382 | [
"MIT"
] | 19 | 2020-05-05T23:27:31.000Z | 2022-01-12T15:08:12.000Z | Cybernator/__init__.py | RuCybernetic/Cybernetor | 9aa58f23ccb1455b601e07044334ed61ca68c382 | [
"MIT"
] | null | null | null | Cybernator/__init__.py | RuCybernetic/Cybernetor | 9aa58f23ccb1455b601e07044334ed61ca68c382 | [
"MIT"
] | 23 | 2020-05-12T08:41:12.000Z | 2022-02-07T07:49:28.000Z | from .Cybernator import * | 25 | 25 | 0.8 | 3 | 25 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 25 | 1 | 25 | 25 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5733060ac5df66caac4ee3cd862b8dffb1bb13ab | 20 | py | Python | sptf/__init__.py | andychisholm/spark-tensorflow | 6b60b04d3eeee13beb0f81c461855c2946f24283 | [
"MIT"
] | 3 | 2016-06-15T08:47:59.000Z | 2017-04-26T06:32:06.000Z | sptf/__init__.py | andychisholm/spark-tensorflow | 6b60b04d3eeee13beb0f81c461855c2946f24283 | [
"MIT"
] | 15 | 2016-03-06T17:10:40.000Z | 2016-05-28T14:06:16.000Z | mathlibpy/discrete/__init__.py | jackromo/mathLibPy | b80badd293b93da85aaf122c3d3da022f6dab361 | [
"MIT"
] | 2 | 2016-07-03T23:48:04.000Z | 2018-09-15T08:15:35.000Z | from graph import *
| 10 | 19 | 0.75 | 3 | 20 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 20 | 1 | 20 | 20 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5742616c6bfd5709025ef55c8a3f021aa4449acd | 6,588 | py | Python | tests/answers/ci_overlap.py | mtzgroup/tcpb-client | 162cecde1652a267ec1e956f16a34dccd539f436 | [
"MIT"
] | null | null | null | tests/answers/ci_overlap.py | mtzgroup/tcpb-client | 162cecde1652a267ec1e956f16a34dccd539f436 | [
"MIT"
] | 16 | 2021-02-21T05:24:44.000Z | 2022-03-03T01:05:25.000Z | tests/answers/ci_overlap.py | mtzgroup/tcpb-client | 162cecde1652a267ec1e956f16a34dccd539f436 | [
"MIT"
] | null | null | null | from numpy import array
correct_answer = {
    "atoms": array([b"C", b"C", b"H", b"H", b"H", b"H"], dtype="|S2"),
    "geom": array(
        [
            [0.35673483, -0.05087227, -0.47786734],
            [1.61445821, -0.06684947, -0.02916681],
            [-0.14997206, 0.87780529, -0.62680155],
            [-0.16786485, -0.95561368, -0.6942637],
            [2.15270896, 0.84221076, 0.19314809],
            [2.16553127, -0.97886933, 0.15232587],
        ]
    ),
    "charges": array([6.0, 6.0, 1.0, 1.0, 1.0, 1.0]),
    "spins": array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0]),
    "dipole_moment": 0.1483353208868677,
    "dipole_vector": array([0.11836348, 0.04279511, 0.07849861]),
    "job_dir": "server_2021-02-04-03.28.57/job_71",
    "job_scr_dir": "server_2021-02-04-03.28.57/job_71/scr",
    "server_job_id": 71,
    "orbfile": "server_2021-02-04-03.28.57/job_71/scr/c0.cisno",
    # 50 orbital energies, all zero in this fixture.
    "orb_energies": array([0.0] * 50),
    # 8 doubly occupied orbitals followed by 42 virtuals (50 total).
    "orb_occupations": array([2.0] * 8 + [0.0] * 42),
    # 6x6 zero matrix.
    "bond_order": array([[0.0] * 6 for _ in range(6)]),
    "ci_overlap": array(
        [
            [1.00000000e00, 5.52706892e-16, -2.69852362e-16, 3.97020621e-16, -1.90978242e-16,
             -1.41779792e-15, 6.44472138e-16, 4.88763422e-16, 2.89929213e-16, 6.82794953e-16],
            [5.64973358e-16, 1.00000000e00, -7.92416263e-16, 7.92043568e-16, 2.30921510e-16,
             7.55889555e-16, 6.74892559e-16, -7.93751187e-16, -4.81313661e-16, 9.11856379e-17],
            [-2.89149255e-16, -9.02686400e-16, 1.00000000e00, 1.25681055e-15, -6.03846400e-16,
             7.04050186e-16, -8.89019100e-16, 3.53740017e-16, -8.55875971e-17, 8.65032397e-17],
            [4.56618032e-16, 9.21847979e-16, 9.79153146e-16, 1.00000000e00, 1.88583415e-16,
             -1.87454585e-16, 1.91226105e-16, -2.30113176e-17, -2.81362640e-16, 1.17129855e-15],
            [-1.75933719e-16, 3.42187758e-16, -6.31710396e-16, 1.34366530e-15, 1.00000000e00,
             8.88745085e-17, 3.19561179e-16, -8.47032947e-18, -1.34479702e-16, -4.72912259e-17],
            [-1.36900096e-15, 7.45648821e-16, 6.52709190e-16, -1.98781904e-16, 9.78843979e-17,
             1.00000000e00, -3.07711929e-16, -2.77925254e-16, 5.85772517e-17, 7.56502066e-16],
            [9.24586543e-16, 6.79008716e-16, -8.55679248e-16, 1.90712380e-16, 3.13720251e-16,
             -3.07640672e-16, 1.00000000e00, -4.39804917e-17, -1.35793569e-16, 3.86034842e-16],
            [2.66856036e-16, -8.17966271e-16, 3.67582229e-16, -2.25100859e-17, -1.46164005e-17,
             -2.78172608e-16, -1.21060829e-15, 1.00000000e00, -3.41457616e-16, 4.46123783e-17],
            [3.25321638e-16, -4.76461221e-16, -8.72152768e-17, -2.68222724e-16, -1.29180108e-16,
             7.14950335e-17, -1.61007517e-16, -3.33961374e-16, 1.00000000e00, -1.07031083e-17],
            [1.27909937e-15, 8.67022925e-17, 8.79254081e-17, 1.13777891e-15, -4.75210894e-17,
             7.33829323e-16, 4.10761852e-16, 4.13699362e-17, -1.88702000e-17, 1.00000000e00],
        ]
    ),
}
| 24.766917 | 70 | 0.307681 | 716 | 6,588 | 2.805866 | 0.23324 | 0.263813 | 0.389746 | 0.511697 | 0.200597 | 0.200597 | 0.196615 | 0.192135 | 0.192135 | 0.192135 | 0 | 0.569215 | 0.559199 | 6,588 | 265 | 71 | 24.860377 | 0.12259 | 0 | 0 | 0.496212 | 0 | 0 | 0.03901 | 0.017608 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.003788 | 0 | 0.003788 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
93aa84ee49a58d3dc4e1ab0ce8e8761fae87429f | 3,912 | py | Python | blog_server/tests/functional/test_blog.py | szhu9903/flask-react-blog | b1939a5d95e0084a82c230f2a20a9b197d2eef46 | [
"MIT"
] | 2 | 2022-03-12T14:51:42.000Z | 2022-03-25T13:20:16.000Z | blog_server/tests/functional/test_blog.py | szhu9903/flask-react-blog | b1939a5d95e0084a82c230f2a20a9b197d2eef46 | [
"MIT"
] | 7 | 2022-03-19T02:17:54.000Z | 2022-03-28T10:12:52.000Z | blog_server/tests/functional/test_blog.py | szhu9903/flask-react-blog | b1939a5d95e0084a82c230f2a20a9b197d2eef46 | [
"MIT"
] | 1 | 2022-03-25T13:20:28.000Z | 2022-03-25T13:20:28.000Z | import pytest
@pytest.mark.usefixtures('client_admin')
@pytest.mark.usefixtures('client_github')
class TestBlog:

    def test_get_blog(self, client):
        response = client.get('/api/v3/blog/')
        assert response.status_code == 200
        assert response.json['status'] == 200
        assert response.json['code'] == 0x06
        assert len(response.json['data']) > 0 and len(response.json['data']) <= response.json['total_count']

    def test_get_blog_lv(self, client):
        response = client.get('/api/v3/blog/?view=blog_LV')
        assert response.status_code == 200
        assert response.json['status'] == 200
        assert len(response.json['data']) > 0 and len(response.json['data']) <= response.json['total_count']
        blog_data = response.json['data'][0]
        assert 'bt_name' in blog_data and 'bu_username' in blog_data

    @pytest.mark.parametrize('filter_str', (
        'filter=b_title=test',
        'filter=b_type=2',
        'filter=b_type=1',
    ))
    def test_get_blog_lv_filter(self, client, filter_str):
        response = client.get(f'/api/v3/blog/?view=blog_LV&{filter_str}')
        assert response.status_code == 200
        assert response.json['status'] == 200
        assert len(response.json['data']) > 0 and len(response.json['data']) <= response.json['total_count']
        blog_data = response.json['data'][0]
        assert 'bt_name' in blog_data and 'bu_username' in blog_data

    def test_post_blog(self, client, client_admin, client_github):
        request_json = {
            "data": {
                "b_brief": "description",
                "b_content": "## article content",
                "b_title": "test1",
                "b_tag": [
                    {"bt_tagid": 3},
                    {"bt_tagid": 4}
                ],
                "b_type": 2
            }
        }
        response = client.post('/api/v3/blog/', json=request_json)
        assert response.status_code == 200
        assert response.json['status'] == 403

        response = client_github.post('/api/v3/blog/', json=request_json)
        assert response.status_code == 200
        assert response.json['status'] == 403

        response = client_admin.post('/api/v3/blog/', json=request_json)
        assert response.status_code == 200
        assert response.json['status'] == 200
        assert response.json['code'] == 0x06
        assert response.json['rowid'] is not None
        self.add_blogid = response.json['rowid']

    def test_put_blog(self, client, client_admin, client_github):
        request_json = {
            "data": {
                "b_brief": "description test",
                "b_tag": [
                    {"bt_tagid": 1},
                    {"bt_tagid": 2}
                ],
            }
        }
        response = client.put('/api/v3/blog/1/', json=request_json)
        assert response.status_code == 200
        assert response.json['status'] == 403

        response = client_github.put('/api/v3/blog/1/', json=request_json)
        assert response.status_code == 200
        assert response.json['status'] == 403

        response = client_admin.put('/api/v3/blog/1/', json=request_json)
        assert response.status_code == 200
        assert response.json['status'] == 200
        assert response.json['code'] == 0x06

    def test_delete_blog(self, client, client_admin, client_github):
        response = client.delete('/api/v3/blog/3/')
        assert response.status_code == 200
        assert response.json['status'] == 403

        response = client_github.delete('/api/v3/blog/3/')
        assert response.status_code == 200
        assert response.json['status'] == 403

        response = client_admin.delete('/api/v3/blog/3/')
        assert response.status_code == 200
        assert response.json['status'] == 200
        assert response.json['code'] == 0x06
| 41.617021 | 108 | 0.582822 | 472 | 3,912 | 4.65678 | 0.141949 | 0.169245 | 0.139217 | 0.133758 | 0.801183 | 0.786624 | 0.77343 | 0.756597 | 0.72384 | 0.72384 | 0 | 0.040493 | 0.274029 | 3,912 | 93 | 109 | 42.064516 | 0.733451 | 0 | 0 | 0.529412 | 0 | 0 | 0.152646 | 0.01662 | 0 | 0 | 0.004091 | 0 | 0.4 | 1 | 0.070588 | false | 0 | 0.011765 | 0 | 0.094118 | 0.023529 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
93c4c07f640a63eeb9997f53eda6e91e51ba1b28 | 2,550 | py | Python | utils/weights_calculator.py | Simek13/Retinal-fluid-segmentation | 1e80fa6d9593c083887e4178ed0558a311106f13 | [
"Apache-2.0"
] | null | null | null | utils/weights_calculator.py | Simek13/Retinal-fluid-segmentation | 1e80fa6d9593c083887e4178ed0558a311106f13 | [
"Apache-2.0"
] | null | null | null | utils/weights_calculator.py | Simek13/Retinal-fluid-segmentation | 1e80fa6d9593c083887e4178ed0558a311106f13 | [
"Apache-2.0"
] | 1 | 2021-03-20T06:29:17.000Z | 2021-03-20T06:29:17.000Z | from PIL import Image
import os
import numpy as np
def print_current(text):
print('###################')
print(text)
print('###################')
def calc_weights_presence(masks_path, n_labels):
masks = os.listdir(masks_path)
class_counter = [0] * n_labels
for m in masks:
mask = np.array(Image.open(os.path.join(masks_path, m)))
for i in range(n_labels):
if i in mask:
class_counter[i] += 1
class_weights = np.asarray(class_counter) / max(class_counter)
return class_weights
def calc_weights_presence_balanced(masks_path, n_labels):
masks = os.listdir(masks_path)
class_counter = [0] * n_labels
for m in masks:
mask = np.array(Image.open(os.path.join(masks_path, m)))
if np.count_nonzero(mask) != 0:
for i in range(n_labels):
if i in mask:
class_counter[i] += 1
class_weights = np.asarray(class_counter) / max(class_counter)
return class_weights
def calc_weights_pixel_wise(masks_path, n_labels):
masks = os.listdir(masks_path)
class_counter = [0] * n_labels
for m in masks:
mask = np.array(Image.open(os.path.join(masks_path, m)))
for i in range(n_labels):
class_counter[i] += np.count_nonzero(mask == i)
class_weights = np.asarray(class_counter) / max(class_counter)
return class_weights
def calc_weights_pixel_wise_balanced(masks_path, n_labels):
masks = os.listdir(masks_path)
class_counter = [0] * n_labels
for m in masks:
mask = np.array(Image.open(os.path.join(masks_path, m)))
if np.count_nonzero(mask) != 0:
for i in range(n_labels):
class_counter[i] += np.count_nonzero(mask == i)
class_weights = np.asarray(class_counter) / max(class_counter)
return class_weights
def average_class_number(masks_path, n_labels):
masks = os.listdir(masks_path)
class_counter = [0] * n_labels
counter = 0
for m in masks:
counter += 1
mask = np.array(Image.open(os.path.join(masks_path, m)))
for i in range(n_labels):
class_counter[i] += np.count_nonzero(mask == i)
class_weights = np.asarray(class_counter) / counter
return class_weights
if __name__ == '__main__':
print_current('Calculating...')
cirrus_masks = '../datasets/Retouch/Cirrus/masks'
spectralis_masks = '../datasets/Retouch/Spectralis/masks'
# calc_weights(cirrus_masks)
print_current(average_class_number(spectralis_masks, 4))
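The pixel-wise weighting used above (per-class pixel counts normalized by the largest count) can be sketched in isolation. This is a minimal, hedged reconstruction assuming only NumPy, not the repository's exact code:

```python
import numpy as np

def pixel_wise_weights(masks, n_labels):
    # Count pixels per class across all masks, then normalize by the
    # most frequent class, mirroring calc_weights_pixel_wise above.
    counts = [0] * n_labels
    for mask in masks:
        for i in range(n_labels):
            counts[i] += int(np.count_nonzero(mask == i))
    return np.asarray(counts) / max(counts)

masks = [np.array([[0, 0], [1, 2]]), np.array([[0, 1], [1, 0]])]
print(pixel_wise_weights(masks, 3).tolist())  # [1.0, 0.75, 0.25]
```

Note that these are relative frequencies (the dominant class gets weight 1.0); a loss function would typically use their inverses to up-weight rare classes.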
| 32.278481 | 66 | 0.645882 | 361 | 2,550 | 4.296399 | 0.144044 | 0.147002 | 0.032237 | 0.05158 | 0.766602 | 0.766602 | 0.766602 | 0.766602 | 0.766602 | 0.766602 | 0 | 0.006119 | 0.23098 | 2,550 | 78 | 67 | 32.692308 | 0.784804 | 0.010196 | 0 | 0.714286 | 0 | 0 | 0.050753 | 0.026963 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.047619 | 0 | 0.222222 | 0.095238 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9e20fb8653c0503e9b86f25b8a4a488cced51eb9 | 27 | py | Python | bindings/python/native/iota_client/__init__.py | GoldenPedro/iota.rs | 71464f96b8e29d9fbed34a6ff77e757a112fedd4 | [
"Apache-2.0"
] | 256 | 2017-06-27T02:37:21.000Z | 2022-03-28T07:51:48.000Z | bindings/python/native/iota_client/__init__.py | GoldenPedro/iota.rs | 71464f96b8e29d9fbed34a6ff77e757a112fedd4 | [
"Apache-2.0"
] | 379 | 2017-06-25T05:49:14.000Z | 2022-03-29T18:57:11.000Z | bindings/python/native/iota_client/__init__.py | GoldenPedro/iota.rs | 71464f96b8e29d9fbed34a6ff77e757a112fedd4 | [
"Apache-2.0"
] | 113 | 2017-06-25T14:07:05.000Z | 2022-03-30T09:10:12.000Z | from .iota_client import *
| 13.5 | 26 | 0.777778 | 4 | 27 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f5915353023a67f7c766ca71b31e47cd1a74c6d5 | 419 | py | Python | backend/app/app/db/base.py | reppertj/earworm | 5c3d457e2c09ce96be75fcb19cd9acf819b84c4b | [
"MIT"
] | 18 | 2021-04-23T20:54:31.000Z | 2022-02-19T13:10:41.000Z | backend/app/app/db/base.py | reppertj/earworm | 5c3d457e2c09ce96be75fcb19cd9acf819b84c4b | [
"MIT"
] | null | null | null | backend/app/app/db/base.py | reppertj/earworm | 5c3d457e2c09ce96be75fcb19cd9acf819b84c4b | [
"MIT"
] | 2 | 2021-05-13T10:26:11.000Z | 2021-07-10T03:55:48.000Z | # Import all the models, so that Base has them before being
# imported by Alembic
from app.db.base_class import Base # noqa
from app.models.user import User # noqa
from app.models.embedding import Embedding # noqa
from app.models.embedding_model import Embedding_Model # noqa
from app.models.track import Track # noqa
from app.models.license import License # noqa
from app.models.provider import Provider # noqa
| 41.9 | 62 | 0.787589 | 66 | 419 | 4.954545 | 0.378788 | 0.149847 | 0.201835 | 0.311927 | 0.159021 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155131 | 419 | 9 | 63 | 46.555556 | 0.923729 | 0.267303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f5a3eb9f8d9ce1e1328ae7a299ffabb7e6d1b21c | 66 | py | Python | SmartDeal-Training/se/__init__.py | VITA-Group/SmartDeal | 8e1de77497eedbeea412a8c51142834c28a53709 | [
"MIT"
] | 2 | 2021-07-20T02:48:35.000Z | 2021-11-29T02:55:36.000Z | SmartDeal-Training/se/__init__.py | VITA-Group/SmartDeal | 8e1de77497eedbeea412a8c51142834c28a53709 | [
"MIT"
] | null | null | null | SmartDeal-Training/se/__init__.py | VITA-Group/SmartDeal | 8e1de77497eedbeea412a8c51142834c28a53709 | [
"MIT"
] | null | null | null | from .conv_mask import SEConv2d
from .linear_mask import SELinear
| 22 | 33 | 0.848485 | 10 | 66 | 5.4 | 0.7 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017241 | 0.121212 | 66 | 2 | 34 | 33 | 0.913793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
19b1d0058a74b7943b3f6df1d2d36ef3561f9b5f | 686 | py | Python | tests_src/pyifx.hsl.color_overlay.py | Video-Lab/pyifx | 9b9aaa690059f3148833041eebdc4de7cc8d5459 | [
"MIT"
] | null | null | null | tests_src/pyifx.hsl.color_overlay.py | Video-Lab/pyifx | 9b9aaa690059f3148833041eebdc4de7cc8d5459 | [
"MIT"
] | null | null | null | tests_src/pyifx.hsl.color_overlay.py | Video-Lab/pyifx | 9b9aaa690059f3148833041eebdc4de7cc8d5459 | [
"MIT"
] | null | null | null | from test_vars import *
set_paths("../tests/imgs/hsl/color-overlay")
pyifx.hsl.color_overlay(img1, [255,0,0], 100)
pyifx.hsl.color_overlay(img_vol, [0,255,0], 100)
pyifx.hsl.color_overlay(img_list, [0,0,255], 100)
call_error_test("pyifx.hsl.color_overlay", ["asdf", [255,0,0], 60])
call_error_test("pyifx.hsl.color_overlay", [img1, [255,0], 60])
call_error_test("pyifx.hsl.color_overlay", [img1, [255,0, 'e'], 60])
call_error_test("pyifx.hsl.color_overlay", [img1, [255,0,0], "s"])
call_error_test("pyifx.hsl.color_overlay", [img1, [255,0,0], 200])
call_error_test("pyifx.hsl.color_overlay", [img1, [255,0], -10])
call_error_test("pyifx.hsl.color_overlay", [img1, [255,0,0], 60, "s"]) | 49 | 70 | 0.708455 | 123 | 686 | 3.723577 | 0.219512 | 0.19214 | 0.360262 | 0.436681 | 0.810044 | 0.810044 | 0.810044 | 0.622271 | 0.558952 | 0.558952 | 0 | 0.118565 | 0.065598 | 686 | 14 | 70 | 49 | 0.595944 | 0 | 0 | 0 | 0 | 0 | 0.289665 | 0.279476 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
272c45527f279d4337e7dfdfad7ed1cfe3cc1ad2 | 28 | py | Python | main.py | maxfischer2781/pyfold | 79953124a8e25190f24f0b00c6906c3c9d1a687a | [
"MIT"
] | null | null | null | main.py | maxfischer2781/pyfold | 79953124a8e25190f24f0b00c6906c3c9d1a687a | [
"MIT"
] | null | null | null | main.py | maxfischer2781/pyfold | 79953124a8e25190f24f0b00c6906c3c9d1a687a | [
"MIT"
] | null | null | null | import pyfold
import sample
| 9.333333 | 13 | 0.857143 | 4 | 28 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 2 | 14 | 14 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
27392671c7acb32653d1da341e5d4e2446704cdb | 27 | py | Python | electricityLoadForecasting/tools/encoding/__init__.py | BCD65/electricityLoadForecasting | 07a6ed060afaf7cc2906c0389b5c9e9b0fede193 | [
"MIT"
] | null | null | null | electricityLoadForecasting/tools/encoding/__init__.py | BCD65/electricityLoadForecasting | 07a6ed060afaf7cc2906c0389b5c9e9b0fede193 | [
"MIT"
] | null | null | null | electricityLoadForecasting/tools/encoding/__init__.py | BCD65/electricityLoadForecasting | 07a6ed060afaf7cc2906c0389b5c9e9b0fede193 | [
"MIT"
] | null | null | null |
from .encoding import *
| 5.4 | 23 | 0.666667 | 3 | 27 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.259259 | 27 | 4 | 24 | 6.75 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
274499b09061fb122ba0db2f4583ae47aea522af | 20 | py | Python | github-webhooks/tests/__init__.py | srnd/mattermost-code-review | 39122bef4d3a77291983867afa39a7b6a41d7777 | [
"MIT"
] | 2 | 2019-07-19T21:56:30.000Z | 2019-07-19T21:58:12.000Z | github-webhooks/tests/__init__.py | srnd/mattermost-code-review | 39122bef4d3a77291983867afa39a7b6a41d7777 | [
"MIT"
] | 3 | 2019-07-18T15:53:53.000Z | 2020-07-21T23:42:30.000Z | github-webhooks/tests/__init__.py | srnd/PRRr | 39122bef4d3a77291983867afa39a7b6a41d7777 | [
"MIT"
] | null | null | null | from .. import Event | 20 | 20 | 0.75 | 3 | 20 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
27d44e66da6916995bfffa1a7dc2020d7d5fcaa7 | 57 | py | Python | run_please2.py | jadnohra/please2 | 8654347b72758f8f6cd255ef97500da55839a62a | [
"MIT"
] | null | null | null | run_please2.py | jadnohra/please2 | 8654347b72758f8f6cd255ef97500da55839a62a | [
"MIT"
] | null | null | null | run_please2.py | jadnohra/please2 | 8654347b72758f8f6cd255ef97500da55839a62a | [
"MIT"
] | null | null | null | import please2.command_line
please2.command_line.main()
| 14.25 | 27 | 0.842105 | 8 | 57 | 5.75 | 0.625 | 0.608696 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037736 | 0.070175 | 57 | 3 | 28 | 19 | 0.830189 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
27d6dcea9f0b2f6d6406e84c959b2feaa162a8c0 | 30,221 | py | Python | tests/databricks/test_utils.py | Comcast/pipeline-deploy | e3f923d3683a091f4c5ee9e039672c3edcee0eb1 | [
"Apache-2.0"
] | null | null | null | tests/databricks/test_utils.py | Comcast/pipeline-deploy | e3f923d3683a091f4c5ee9e039672c3edcee0eb1 | [
"Apache-2.0"
] | 1 | 2022-02-23T21:49:33.000Z | 2022-02-23T21:49:33.000Z | tests/databricks/test_utils.py | Comcast/pipeline-deploy | e3f923d3683a091f4c5ee9e039672c3edcee0eb1 | [
"Apache-2.0"
] | null | null | null | """
Copyright 2022 Comcast Cable Communications Management, LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
SPDX-License-Identifier: Apache-2.0
"""
import os
import requests
import pytest
from databricks_cli.workspace.api import DIRECTORY, NOTEBOOK, WorkspaceFileInfo
from pipeline_deploy.databricks import utils
from pytest_mock import MockFixture
from requests.exceptions import HTTPError
from tests.databricks import test_data as data
from tests.databricks.utils import FILE_PATH
from tests.utils import get_mock_export_workspace
class TestEnumerateLocalDirectories:
def test_with_no_exclude_and_no_include(self):
actual = list(utils.enumerate_local_directories(None, None, FILE_PATH))
expected = [os.path.join(FILE_PATH, 'directory')]
assert actual == expected
def test_with_exclude(self):
actual = list(utils.enumerate_local_directories(['*directory*'], None, FILE_PATH))
assert not actual
def test_with_include(self):
actual = list(utils.enumerate_local_directories(None, ['*foo*'], FILE_PATH))
assert not actual
class TestEnumerateLocalJobs:
def test_with_no_exclude_no_include_and_no_prefix(self):
actual = [*utils.enumerate_local_jobs(None, None, FILE_PATH, None)]
expected = [{'name': 'job 1'}, {'name': 'job 2'}]
assert actual == expected
def test_with_exclude(self):
actual = [*utils.enumerate_local_jobs(['job 1'], None, FILE_PATH, None)]
expected = [{'name': 'job 2'}]
assert actual == expected
def test_with_include(self):
actual = [*utils.enumerate_local_jobs(None, ['job 1'], FILE_PATH, None)]
expected = [{'name': 'job 1'}]
assert actual == expected
def test_with_prefix(self):
actual = [*utils.enumerate_local_jobs(None, ['prefix job 1'], FILE_PATH, 'prefix ')]
expected = [{'name': 'job 1'}]
assert actual == expected
class TestEnumerateLocalNotebooks:
def test_with_no_exclude_and_no_include(self):
actual = [*utils.enumerate_local_notebooks(None, None, FILE_PATH)]
expected = [
os.path.join(FILE_PATH,'foo.py'),
os.path.join(FILE_PATH, 'directory', 'bar.py')
]
assert expected == actual
def test_with_exclude(self):
actual = [*utils.enumerate_local_notebooks(['*foo*'], None, FILE_PATH)]
expected = [os.path.join(FILE_PATH, 'directory', 'bar.py')]
assert expected == actual
def test_with_include(self):
actual = [*utils.enumerate_local_notebooks(None, ['*foo*'], FILE_PATH)]
expected = [os.path.join(FILE_PATH,'foo.py')]
assert expected == actual
class TestEnumerateRemoteJobs:
def test_without_include_without_exclude_and_when_no_jobs_exist_for_the_owner(self, mocker: MockFixture):
mock_client = mocker.MagicMock()
mock_client.list_jobs = mocker.Mock(return_value=data.LIST_JOBS_NOT_OWNED)
mock_get_job_owner = mocker.patch('pipeline_deploy.databricks.utils.get_job_owner')
mock_get_job_owner.return_value = 'not_owner@company.com'
actual = [*utils.enumerate_remote_jobs(mock_client, None, None, 'owner@company.com')]
assert not actual
def test_without_include_without_exclude_and_when_jobs_exist_for_the_owner(self, mocker: MockFixture):
mock_client = mocker.MagicMock()
mock_client.list_jobs = mocker.MagicMock(return_value=data.LIST_JOBS_OWNED)
mock_get_job_owner = mocker.patch('pipeline_deploy.databricks.utils.get_job_owner')
def _mock_get_job_owner(client, job_id):
if job_id == '3':
return 'not_owner@company.com'
return 'owner@company.com'
mock_get_job_owner.side_effect = _mock_get_job_owner
def _mock_get_job(job_id):
for job in data.LIST_JOBS_OWNED['jobs']:
if job['job_id'] == job_id:
return job
mock_client.get_job = mocker.MagicMock(side_effect=_mock_get_job)
expected = [*data.LIST_JOBS_OWNED["jobs"][:2]]
actual = [*utils.enumerate_remote_jobs(mock_client, None, None, 'owner@company.com')]
assert actual == expected
def test_with_exclude(self, mocker: MockFixture):
mock_client = mocker.MagicMock()
mock_client.list_jobs = mocker.MagicMock(return_value=data.LIST_JOBS_OWNED)
def mock_get_job(job_id):
for job in data.LIST_JOBS_OWNED['jobs']:
if job['job_id'] == job_id:
return job
mock_client.get_job = mocker.MagicMock(side_effect=mock_get_job)
expected = [*data.LIST_JOBS_OWNED["jobs"][1:]]
actual = [*utils.enumerate_remote_jobs(mock_client, ['Job 1'], None, None)]
assert actual == expected
def test_with_include(self, mocker: MockFixture):
mock_client = mocker.MagicMock()
mock_client.list_jobs = mocker.MagicMock(return_value=data.LIST_JOBS_OWNED)
def mock_get_job(job_id):
for job in data.LIST_JOBS_OWNED['jobs']:
if job['job_id'] == job_id:
return job
mock_client.get_job = mocker.MagicMock(side_effect=mock_get_job)
expected = [data.LIST_JOBS_OWNED["jobs"][0]]
actual = [*utils.enumerate_remote_jobs(mock_client, None, ['Job 1'], None)]
assert actual == expected
class TestEnumerateRemotePaths:
def test_without_include_and_without_exclude(self, mocker: MockFixture):
mock_root_directory = [
WorkspaceFileInfo('/remote/path/file-1', NOTEBOOK, '1'),
WorkspaceFileInfo('/remote/path/sub-directory', DIRECTORY, '2')
]
mock_sub_directory = [
WorkspaceFileInfo('/remote/path/sub-directory/file-2', NOTEBOOK, '3'),
]
def mock_list_objects(path):
if path == '/remote/path':
return mock_root_directory
if path == '/remote/path/sub-directory':
return mock_sub_directory
return []
mock_client = mocker.MagicMock()
mock_client.list_objects = mocker.MagicMock(side_effect=mock_list_objects)
actual = [*utils.enumerate_remote_paths(mock_client, None, None, '/remote/path')]
expected = [*mock_root_directory, *mock_sub_directory]
assert actual == expected
def test_with_exclude(self, mocker: MockFixture):
mock_root_directory = [
WorkspaceFileInfo('/remote/path/file-1', NOTEBOOK, '1'),
WorkspaceFileInfo('/remote/path/sub-directory', DIRECTORY, '2')
]
mock_sub_directory = [
WorkspaceFileInfo('/remote/path/sub-directory/file-2', NOTEBOOK, '3'),
]
def mock_list_objects(path):
if path == '/remote/path':
return mock_root_directory
if path == '/remote/path/sub-directory':
return mock_sub_directory
return []
mock_client = mocker.MagicMock()
mock_client.list_objects = mocker.MagicMock(side_effect=mock_list_objects)
actual = [*utils.enumerate_remote_paths(mock_client, ['/remote/path/sub-directory*'], None, '/remote/path')]
expected = [mock_root_directory[0]]
assert actual == expected
def test_with_include(self, mocker: MockFixture):
mock_root_directory = [
WorkspaceFileInfo('/remote/path/file-1', NOTEBOOK, '1'),
WorkspaceFileInfo('/remote/path/sub-directory', DIRECTORY, '2')
]
mock_sub_directory = [
WorkspaceFileInfo('/remote/path/sub-directory/file-2', NOTEBOOK, '3'),
]
def mock_list_objects(path):
if path == '/remote/path':
return mock_root_directory
if path == '/remote/path/sub-directory':
return mock_sub_directory
return []
mock_client = mocker.MagicMock()
mock_client.list_objects = mocker.MagicMock(side_effect=mock_list_objects)
actual = [*utils.enumerate_remote_paths(mock_client, None, ['/remote/path/sub-directory*'], '/remote/path')]
expected = [mock_root_directory[1], *mock_sub_directory]
assert actual == expected
class TestFilterJobs:
def test_when_exclude_and_include_are_not_supplied(self):
assert utils.filter_jobs(None, None, 'job name')
def test_when_exclude_is_supplied_and_there_is_no_match(self):
assert utils.filter_jobs(['exclude'], None, 'job name')
def test_when_exclude_is_supplied_and_there_is_a_match(self):
assert not utils.filter_jobs(['job*'], None, 'job name')
def test_when_include_is_supplied_and_there_is_no_match(self):
assert not utils.filter_jobs(None, ['include'], 'job name')
def test_when_include_is_supplied_and_there_is_a_match(self):
assert utils.filter_jobs(None, ['job*'], 'job name')
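The include/exclude semantics exercised by TestFilterJobs and TestFilterNotebooks can be captured with `fnmatch`. This is an illustrative sketch of the contract the tests encode, not the actual `utils.filter_jobs` body:

```python
from fnmatch import fnmatch

def filter_name(exclude, include, name):
    # Excluded names never pass; when include patterns are given,
    # the name must match at least one of them.
    if exclude and any(fnmatch(name, p) for p in exclude):
        return False
    if include:
        return any(fnmatch(name, p) for p in include)
    return True

print(filter_name(['job*'], None, 'job name'))  # False
```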
class TestFilterNotebooks:
def test_when_exclude_and_include_are_not_supplied(self):
assert utils.filter_notebooks(None, None, 'notebook-name')
def test_when_exclude_is_supplied_and_there_is_no_match(self):
assert utils.filter_notebooks(['exclude'], None, 'notebook-name')
def test_when_exclude_is_supplied_and_there_is_a_match(self):
assert not utils.filter_notebooks(['notebook*'], None, 'notebook-name')
def test_when_include_is_supplied_and_there_is_no_match(self):
assert not utils.filter_notebooks(None, ['include'], 'notebook-name')
def test_when_include_is_supplied_and_there_is_a_match(self):
assert utils.filter_notebooks(None, ['notebook*'], 'notebook-name')
class TestGetJobOwner:
def test_when_the_owner_is_a_user(self, mocker: MockFixture):
mock_client = mocker.MagicMock()
def mock_perform_query(method, path, query_data=None, headers=None):
if method == 'GET':
return data.PERMISSIONS_WITH_OWNER
return None
mock_client.perform_query = mocker.MagicMock(side_effect=mock_perform_query)
actual = utils.get_job_owner(mock_client, '1234567')
assert actual == 'owner@company.com'
def test_when_the_owner_is_a_group(self, mocker: MockFixture):
mock_client = mocker.MagicMock()
def mock_perform_query(method, path, query_data=None, headers=None):
if method == 'GET':
return data.PERMISSIONS_WITH_OWNER_GROUP
return None
mock_client.perform_query = mocker.MagicMock(side_effect=mock_perform_query)
actual = utils.get_job_owner(mock_client, '1234567')
assert actual == 'group'
def test_when_the_owner_is_a_service_principal(self, mocker: MockFixture):
mock_client = mocker.MagicMock()
def mock_perform_query(method, path, query_data=None, headers=None):
if method == 'GET':
return data.PERMISSIONS_WITH_OWNER_SERVICE_PRINCIPAL
return None
mock_client.perform_query = mocker.MagicMock(side_effect=mock_perform_query)
actual = utils.get_job_owner(mock_client, '1234567')
assert actual == 'service'
def test_when_there_is_no_owner(self, mocker: MockFixture):
mock_client = mocker.MagicMock()
def mock_perform_query(method, path, query_data=None, headers=None):
if method == 'GET':
return data.PERMISSIONS_WITH_ONLY_OWNER_AS_MANAGER
return None
mock_client.perform_query = mocker.MagicMock(side_effect=mock_perform_query)
with pytest.raises(AttributeError) as exinfo:
utils.get_job_owner(mock_client, '1234567')
assert str(exinfo.value) == 'Owner for job 1234567 could not be found.'
class TestGetLocalNotebooksMap:
def test_when_there_are_no_notebooks(self, mocker: MockFixture):
mock_enumerate_local_notebooks = mocker.patch('pipeline_deploy.databricks.utils.enumerate_local_notebooks')
mock_enumerate_local_notebooks.return_value = []
exclude = ['exclude']
include = ['include']
assert not utils.get_local_notebooks_map(exclude, include, FILE_PATH, '/path/notebooks')
mock_enumerate_local_notebooks.assert_called_with(exclude, include, FILE_PATH)
def test_when_there_are_notebooks(self, mocker: MockFixture):
mock_enumerate_local_notebooks = mocker.patch('pipeline_deploy.databricks.utils.enumerate_local_notebooks')
mock_enumerate_local_notebooks.return_value = [
os.path.join(FILE_PATH,'foo.py'),
os.path.join(FILE_PATH, 'directory', 'bar.py')
]
exclude = ['exclude']
include = ['include']
expected = {
'/path/notebooks/foo': os.path.join(FILE_PATH,'foo.py'),
'/path/notebooks/directory/bar': os.path.join(FILE_PATH, 'directory', 'bar.py')
}
actual = utils.get_local_notebooks_map(exclude, include, FILE_PATH, '/path/notebooks')
assert expected == actual
mock_enumerate_local_notebooks.assert_called_with(exclude, include, FILE_PATH)
class TestGetLanguageForNotebook:
def test_with_python_files(self):
assert utils.get_language_for_notebook('/path/directory/notebook.py') == 'PYTHON'
def test_with_sql_files(self):
assert utils.get_language_for_notebook('/path/directory/notebook.sql') == 'SQL'
def test_with_scala_files(self):
assert utils.get_language_for_notebook('/path/directory/notebook.scala') == 'SCALA'
def test_with_r_files(self):
assert utils.get_language_for_notebook('/path/directory/notebook.r') == 'R'
def test_with_files_not_covered_by_the_function(self):
with pytest.raises(AttributeError) as ex:
utils.get_language_for_notebook('/path/directory/notebook.txt')
assert str(ex.value) == 'Unknown extension .TXT.'
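TestGetLanguageForNotebook pins down an extension-to-language mapping. A minimal sketch consistent with those assertions (the real helper may be implemented differently):

```python
import os

_LANGUAGES = {'.PY': 'PYTHON', '.SQL': 'SQL', '.SCALA': 'SCALA', '.R': 'R'}

def language_for_notebook(path):
    # Uppercase the extension and look it up; unknown extensions raise,
    # matching the error message the test above expects.
    ext = os.path.splitext(path)[1].upper()
    if ext not in _LANGUAGES:
        raise AttributeError(f'Unknown extension {ext}.')
    return _LANGUAGES[ext]

print(language_for_notebook('/path/directory/notebook.scala'))  # SCALA
```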
class TestGetNotebookPath:
def test_retrieving_the_nested_notebook_path(self):
expected = '/path/directory/notebook'
job = {
'settings': {
'notebook_task': {
'notebook_path': expected
}
}
}
assert utils.get_notebook_path(job) == expected
class TestIsJobRunning:
def test_when_there_are_no_running_jobs(self, mocker: MockFixture):
mock_client = mocker.MagicMock()
mock_client.list_runs = mocker.MagicMock(return_value=data.LIST_RUNS_JOB_NOT_RUNNING)
assert not utils.is_job_running(mock_client, '123456', 'job-run')
def test_when_a_job_is_stopping(self, mocker: MockFixture):
mock_client = mocker.MagicMock()
mock_client.list_runs = mocker.MagicMock(return_value=data.LIST_RUNS_JOB_STOPPING)
assert utils.is_job_running(mock_client, '123456', 'job-run')
def test_when_there_are_running_jobs(self, mocker: MockFixture):
mock_client = mocker.MagicMock()
mock_client.list_runs = mocker.MagicMock(return_value=data.LIST_RUNS_JOB_RUNNING)
assert utils.is_job_running(mock_client, '123456', 'job-run')
def test_when_there_are_pending_jobs(self, mocker: MockFixture):
mock_client = mocker.MagicMock()
mock_client.list_runs = mocker.MagicMock(return_value=data.LIST_RUNS_JOB_PENDING)
assert utils.is_job_running(mock_client, '123456', 'job-run')
def test_when_there_are_no_runs(self, mocker: MockFixture):
mock_client = mocker.MagicMock()
mock_client.list_runs = mocker.MagicMock(return_value={})
assert not utils.is_job_running(mock_client, '123456', 'job-run')
def test_when_the_job_doesnt_exist(self, mocker: MockFixture):
def mock_list_runs(job_id, active_only, completed_only, offset, limit):
response = mocker.MagicMock()
response.json = mocker.MagicMock(return_value={'error_code':'RESOURCE_DOES_NOT_EXIST'})
raise HTTPError(request=requests.Request(), response=response)
mock_client = mocker.MagicMock()
mock_client.list_runs = mocker.MagicMock(side_effect=mock_list_runs)
assert not utils.is_job_running(mock_client, '123456', 'job-run')
class TestIsNotebookUpdated:
def test_when_there_are_no_changes_to_the_notebook(self, mocker: MockFixture):
mock_export_workspace = get_mock_export_workspace(data.EXPORT_WORKSPACE_UNCHANGED_NOTEBOOK)
mock_client = mocker.MagicMock()
mock_client.export_workspace = mocker.MagicMock(side_effect=mock_export_workspace)
remote_path = '/path/notebooks/foo'
local_path = os.path.join(FILE_PATH, 'foo.py')
assert not utils.is_notebook_updated(mock_client, False, local_path, remote_path)
def test_when_there_are_only_whitespace_changes_to_the_notebook(self, mocker: MockFixture):
mock_export_workspace = get_mock_export_workspace(data.EXPORT_WORKSPACE_CHANGED_NOTEBOOK_ONLY_WHITESPACES)
mock_client = mocker.MagicMock()
mock_client.export_workspace = mocker.MagicMock(side_effect=mock_export_workspace)
remote_path = '/path/notebooks/foo'
local_path = os.path.join(FILE_PATH, 'foo.py')
assert not utils.is_notebook_updated(mock_client, False, local_path, remote_path)
def test_when_there_are_changes_to_the_notebook(self, mocker: MockFixture):
mock_export_workspace = get_mock_export_workspace(data.EXPORT_WORKSPACE_CHANGED_NOTEBOOK)
mock_client = mocker.MagicMock()
mock_client.export_workspace = mocker.MagicMock(side_effect=mock_export_workspace)
remote_path = '/path/notebooks/foo'
local_path = os.path.join(FILE_PATH, 'foo.py')
assert utils.is_notebook_updated(mock_client, False, local_path, remote_path)
def test_when_there_are_changes_to_the_notebook_and_the_diff_flag_is_set(self, mocker: MockFixture):
mock_export_workspace = get_mock_export_workspace(data.EXPORT_WORKSPACE_CHANGED_NOTEBOOK)
mock_client = mocker.MagicMock()
mock_client.export_workspace = mocker.MagicMock(side_effect=mock_export_workspace)
remote_path = '/path/notebooks/foo'
local_path = os.path.join(FILE_PATH, 'foo.py')
assert utils.is_notebook_updated(mock_client, True, local_path, remote_path)
class TestIsStreamingJob:
def test_when_the_job_has_retries_set_to_a_limit(self):
assert not utils.is_streaming_job({'max_retries': 1})
def test_when_the_job_has_retries_not_set(self):
assert not utils.is_streaming_job({})
def test_when_the_job_has_retries_set_to_unlimited_and_has_a_schedule(self):
assert not utils.is_streaming_job({'max_retries': -1, 'schedule': {}})
def test_when_the_job_has_retries_set_to_unlimited_and_has_no_schedule(self):
assert utils.is_streaming_job({'max_retries': -1})
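The streaming-job heuristic these four cases define is small enough to restate. A hedged sketch of that contract (not the library's code):

```python
def is_streaming(job):
    # A job counts as "streaming" when it retries forever
    # (max_retries == -1) and carries no cron schedule.
    return job.get('max_retries') == -1 and 'schedule' not in job

print(is_streaming({'max_retries': -1}))  # True
```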
class TestIsStreamingNotebook:
def test_when_there_are_no_jobs(self):
target = utils.is_streaming_notebook([], FILE_PATH, '/path/notebooks')
job = os.path.join(FILE_PATH.replace('\\', '/'), 'foo')
assert not target(job)
def test_when_the_notebook_is_not_in_the_jobs_list(self):
target = utils.is_streaming_notebook(data.JOBS_DATA_REMOTE_STREAMING, FILE_PATH, '/path/notebooks')
job = os.path.join(FILE_PATH.replace('\\', '/'), 'not-found-job')
assert not target(job)
def test_when_the_notebook_is_in_the_list(self):
target = utils.is_streaming_notebook(data.JOBS_DATA_REMOTE_STREAMING, FILE_PATH, '/path/notebooks')
job = os.path.join(FILE_PATH.replace('\\', '/'), 'streaming-job-1')
assert target(job)
class TestRestartJob:
def test_when_the_job_is_not_running(self, mocker: MockFixture):
mock_runs_client = mocker.MagicMock()
mock_runs_client.list_runs = mocker.MagicMock(return_value={"runs":[]})
mock_runs_client.cancel_run = mocker.MagicMock()
mock_jobs_client = mocker.MagicMock()
mock_jobs_client.run_now = mocker.MagicMock()
utils.restart_job(mock_jobs_client, '123456', 'test-job', mock_runs_client)
mock_runs_client.cancel_run.assert_not_called()
mock_jobs_client.run_now.assert_called_once_with('123456', None, None, None, None)
def test_when_the_job_is_running(self, mocker: MockFixture):
mock_runs_client = mocker.MagicMock()
run_query_count = 0
def mock_list_runs(job_id, active_only, completed_only, offset, limit):
nonlocal run_query_count
run_query_count = run_query_count + 1
if run_query_count > 3:
return data.LIST_RUNS_JOB_NOT_RUNNING
if run_query_count > 2:
return data.LIST_RUNS_JOB_STOPPING
return data.LIST_RUNS_JOB_RUNNING
mock_runs_client.list_runs = mocker.MagicMock(side_effect=mock_list_runs)
mock_runs_client.cancel_run = mocker.MagicMock()
mock_jobs_client = mocker.MagicMock()
mock_jobs_client.run_now = mocker.MagicMock()
utils.restart_job(mock_jobs_client, '123456', 'test-job', mock_runs_client)
mock_runs_client.cancel_run.assert_called_once_with('1234567')
mock_jobs_client.run_now.assert_called_once_with('123456', None, None, None, None)
def test_when_the_job_is_pending(self, mocker: MockFixture):
mock_runs_client = mocker.MagicMock()
run_query_count = 0
def mock_list_runs(job_id, active_only, completed_only, offset, limit):
nonlocal run_query_count
run_query_count = run_query_count + 1
if run_query_count > 3:
return data.LIST_RUNS_JOB_NOT_RUNNING
if run_query_count > 2:
return data.LIST_RUNS_JOB_STOPPING
return data.LIST_RUNS_JOB_PENDING
mock_runs_client.list_runs = mocker.MagicMock(side_effect=mock_list_runs)
mock_runs_client.cancel_run = mocker.MagicMock()
mock_jobs_client = mocker.MagicMock()
mock_jobs_client.run_now = mocker.MagicMock()
utils.restart_job(mock_jobs_client, '123456', 'test-job', mock_runs_client)
mock_runs_client.cancel_run.assert_called_once_with('1234567')
mock_jobs_client.run_now.assert_called_once_with('123456', None, None, None, None)
class TestSetJobOwner:
    def test_if_the_job_already_has_the_proper_owner_set_for_the_job(self, mocker: MockFixture):
        mock_client = mocker.MagicMock()

        def mock_perform_query(method, path, query_data=None, headers=None):
            if method == 'GET':
                return data.PERMISSIONS_WITH_OWNER
            return None

        mock_client.perform_query = mocker.MagicMock(side_effect=mock_perform_query)

        utils.set_job_owner(mock_client, '123456', 'owner@company.com')

        mock_client.perform_query.assert_called_with('GET', '/permissions/jobs/123456')

    def test_if_the_job_does_not_already_have_the_proper_owner_set(self, mocker: MockFixture):
        mock_client = mocker.MagicMock()

        def mock_perform_query(method, path, query_data=None, headers=None):
            if method == 'GET':
                return data.PERMISSIONS_WITHOUT_OWNER
            return None

        mock_client.perform_query = mocker.MagicMock(side_effect=mock_perform_query)
        expected = {
            "access_control_list": [
                {
                    "user_name": 'owner@company.com',
                    "permission_level": "IS_OWNER"
                }
            ]
        }

        utils.set_job_owner(mock_client, '123456', 'owner@company.com')

        mock_client.perform_query.assert_any_call('GET', '/permissions/jobs/123456')
        mock_client.perform_query.assert_called_with('PUT', '/permissions/jobs/123456', expected)
class TestSetJobPermissions:
    def test_if_the_group_already_has_manager_permissions_for_the_job(self, mocker: MockFixture):
        mock_client = mocker.MagicMock()

        def mock_perform_query(method, path, query_data=None, headers=None):
            if method == 'GET':
                return data.PERMISSIONS_WITH_OWNER_AS_MANAGER_AND_GROUP_WITH_MANAGER_PERMISSION
            return None

        mock_client.perform_query = mocker.MagicMock(side_effect=mock_perform_query)

        utils.set_job_permissions(mock_client, 'group', '123456')

        mock_client.perform_query.assert_called_with('GET', '/permissions/jobs/123456')

    def test_if_the_group_has_non_manager_permissions_for_the_job(self, mocker: MockFixture):
        mock_client = mocker.MagicMock()

        def mock_perform_query(method, path, query_data=None, headers=None):
            if method == 'GET':
                return data.PERMISSIONS_WITH_OWNER_AS_MANAGER_AND_GROUP_WITH_VIEW_PERMISSION
            return None

        mock_client.perform_query = mocker.MagicMock(side_effect=mock_perform_query)
        expected = {
            "access_control_list": [
                {
                    "user_name": 'owner@company.com',
                    "permission_level": "CAN_MANAGE"
                },
                {
                    "group_name": "group",
                    "permission_level": "CAN_VIEW"
                },
                {
                    "group_name": "group",
                    "permission_level": "CAN_MANAGE"
                }
            ]
        }

        utils.set_job_permissions(mock_client, 'group', '123456')

        mock_client.perform_query.assert_any_call('GET', '/permissions/jobs/123456')
        mock_client.perform_query.assert_called_with('PUT', '/permissions/jobs/123456', expected)

    def test_if_the_group_has_no_permissions_for_the_job(self, mocker: MockFixture):
        mock_client = mocker.MagicMock()

        def mock_perform_query(method, path, query_data=None, headers=None):
            if method == 'GET':
                return data.PERMISSIONS_WITH_ONLY_OWNER_AS_MANAGER
            return None

        mock_client.perform_query = mocker.MagicMock(side_effect=mock_perform_query)
        expected = {
            "access_control_list": [
                {
                    "user_name": 'owner@company.com',
                    "permission_level": "CAN_MANAGE"
                },
                {
                    "group_name": "group",
                    "permission_level": "CAN_MANAGE"
                }
            ]
        }

        utils.set_job_permissions(mock_client, 'group', '123456')

        mock_client.perform_query.assert_any_call('GET', '/permissions/jobs/123456')
        mock_client.perform_query.assert_called_with('PUT', '/permissions/jobs/123456', expected)
class TestStartJob:
    def test_invoking_the_client_call(self, mocker: MockFixture):
        mock_client = mocker.MagicMock()
        mock_client.run_now = mocker.MagicMock()

        utils.start_job(mock_client, '123456', 'job-name')

        mock_client.run_now.assert_called_with('123456', None, None, None, None)
class TestStopJob:
    def test_when_there_are_no_runs_for_the_job(self, mocker: MockFixture):
        mock_client = mocker.MagicMock()
        mock_client.list_runs = mocker.MagicMock(return_value={'runs': []})
        mock_client.cancel_run = mocker.MagicMock()

        utils.stop_job(mock_client, '123456', 'job-name')

        mock_client.cancel_run.assert_not_called()

    def test_when_there_are_no_active_runs_for_the_job(self, mocker: MockFixture):
        mock_client = mocker.MagicMock()
        mock_client.list_runs = mocker.MagicMock(return_value=data.LIST_RUNS_JOB_NOT_RUNNING)
        mock_client.cancel_run = mocker.MagicMock()

        utils.stop_job(mock_client, '123456', 'job-name')

        mock_client.cancel_run.assert_not_called()

    def test_when_the_job_does_not_exist(self, mocker: MockFixture):
        mock_client = mocker.MagicMock()

        def mock_list_runs(job_id, active_only, completed_only, offset, limit):
            response = mocker.MagicMock()
            response.json = mocker.MagicMock(return_value={'error_code': 'RESOURCE_DOES_NOT_EXIST'})
            raise HTTPError(request=requests.Request(), response=response)

        mock_client.list_runs = mocker.MagicMock(side_effect=mock_list_runs)
        mock_client.cancel_run = mocker.MagicMock()

        utils.stop_job(mock_client, '123456', 'job-name')

        mock_client.cancel_run.assert_not_called()

    def test_when_there_are_active_runs_for_the_job(self, mocker: MockFixture):
        mock_client = mocker.MagicMock()
        mock_client.list_runs = mocker.MagicMock(return_value=data.LIST_RUNS_JOB_RUNNING)
        mock_client.cancel_run = mocker.MagicMock()

        utils.stop_job(mock_client, '123456', 'job-name')

        mock_client.cancel_run.assert_called_with('1234567')

    def test_when_there_are_pending_runs_for_the_job(self, mocker: MockFixture):
        mock_client = mocker.MagicMock()
        mock_client.list_runs = mocker.MagicMock(return_value=data.LIST_RUNS_JOB_RUNNING)
        mock_client.cancel_run = mocker.MagicMock()

        utils.stop_job(mock_client, '123456', 'job-name')

        mock_client.cancel_run.assert_called_with('1234567')

    def test_when_there_are_active_runs_for_the_job_and_the_job_gets_deleted(self, mocker: MockFixture):
        mock_client = mocker.MagicMock()
        mock_client.list_runs = mocker.MagicMock(return_value=data.LIST_RUNS_JOB_RUNNING)

        def mock_cancel_run(run_id):
            response = mocker.MagicMock()
            response.json = mocker.MagicMock(return_value={'error_code': 'RESOURCE_DOES_NOT_EXIST'})
            raise HTTPError(request=requests.Request(), response=response)

        mock_client.cancel_run = mocker.MagicMock(side_effect=mock_cancel_run)

        with pytest.raises(HTTPError) as exinfo:
            utils.stop_job(mock_client, '123456', 'job-name')
        assert exinfo
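The last test above captures a raised exception with `pytest.raises` and inspects the resulting `ExceptionInfo` object. The same pattern can be exercised standalone; this is a minimal sketch using a hypothetical `divide` helper in place of `utils.stop_job`:

```python
import pytest


def divide(a, b):
    # Hypothetical helper used only to demonstrate the pattern; it raises
    # an exception on bad input, just as stop_job raises HTTPError.
    if b == 0:
        raise ZeroDivisionError("division by zero")
    return a / b


with pytest.raises(ZeroDivisionError) as exinfo:
    divide(1, 0)

# After the with-block, exinfo.value holds the captured exception, and the
# ExceptionInfo object itself is truthy, so a bare `assert exinfo` passes.
assert exinfo.value.args[0] == "division by zero"
```

The captured `exinfo.value` is the actual exception instance, so tests can assert on its attributes (for example, a response payload attached to an `HTTPError`).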
# File: katas/kyu_7/sorting_dictionaries.py (from the-zebulan/CodeWars, MIT license)
from operator import itemgetter


def sort_dict(d):
    return sorted(d.items(), key=itemgetter(1), reverse=True)
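A quick, self-contained check of how `sort_dict` orders a dictionary by value, highest first (the sample data is illustrative):

```python
from operator import itemgetter


def sort_dict(d):
    # Sort (key, value) pairs by value, descending.
    return sorted(d.items(), key=itemgetter(1), reverse=True)


# Illustrative input: word counts to be ranked by frequency.
counts = {"a": 3, "b": 10, "c": 7}
print(sort_dict(counts))  # [('b', 10), ('c', 7), ('a', 3)]
```

`itemgetter(1)` selects the value from each `(key, value)` tuple, so ties on value fall back to the tuples' original comparison order.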
# File: smoke/engine/__init__.py (from SmallMunich/Smoke, MIT license)
from .defaults import *
from .launch import *
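The wildcard re-exports above follow the usual package `__init__` pattern: `from module import *` pulls in only the names a module exposes (its `__all__`, if defined). A minimal sketch of that behavior, using a throwaway in-memory module rather than the real `smoke.engine` submodules:

```python
import sys
import types

# Build a tiny module in memory to stand in for a package submodule.
mod = types.ModuleType("mini_defaults")
exec("__all__ = ['cfg']\ncfg = {'debug': False}\nhidden = 42", mod.__dict__)
sys.modules["mini_defaults"] = mod

# `import *` copies only the names listed in __all__ into the namespace.
ns = {}
exec("from mini_defaults import *", ns)
print("cfg" in ns, "hidden" in ns)  # True False
```

Without an `__all__`, `import *` instead copies every public (non-underscore) name, which is why packages that re-export this way often define `__all__` in each submodule.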
# File: test/test_md025.py (from scop/pymarkdown, MIT license)
"""
Module to provide tests related to the MD025 rule.
"""
from test.markdown_scanner import MarkdownScanner

import pytest


@pytest.mark.rules
def test_md025_bad_configuration_level():
    """
    Test to make sure we get the expected behavior when the
    plugins.md025.level configuration value is set with the wrong type.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "--set",
        "plugins.md025.level=1",
        "--strict-config",
        "scan",
        "test/resources/rules/md025/good_single_top_level.md",
    ]

    expected_return_code = 1
    expected_output = ""
    expected_error = (
        "BadPluginError encountered while configuring plugins:\n"
        + "The value for property 'plugins.md025.level' must be of type 'int'."
    )

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )


@pytest.mark.rules
def test_md025_good_configuration_level():
    """
    Test to make sure we get the expected behavior when the
    plugins.md025.level configuration value is set to a valid integer.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "--set",
        "plugins.md025.level=$#1",
        "--strict-config",
        "scan",
        "test/resources/rules/md025/good_single_top_level_atx.md",
    ]

    expected_return_code = 0
    expected_output = ""
    expected_error = ""

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )


@pytest.mark.rules
def test_md025_bad_configuration_level_bad():
    """
    Test to make sure we get the expected behavior when the
    plugins.md025.level configuration value is set to an integer outside
    of the allowed range of 1 to 6.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "--set",
        "plugins.md025.level=$#0",
        "--strict-config",
        "scan",
        "test/resources/rules/md025/good_single_top_level.md",
    ]

    expected_return_code = 1
    expected_output = ""
    expected_error = (
        "BadPluginError encountered while configuring plugins:\n"
        + "The value for property 'plugins.md025.level' is not valid: Allowable values are between 1 and 6."
    )

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )
@pytest.mark.rules
def test_md025_bad_configuration_front_matter_title():
    """
    Test to make sure we get the expected behavior when the
    plugins.md025.front_matter_title configuration value is set with the
    wrong type.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "--set",
        "plugins.md025.front_matter_title=$#1",
        "--strict-config",
        "scan",
        "test/resources/rules/md025/good_single_top_level.md",
    ]

    expected_return_code = 1
    expected_output = ""
    expected_error = (
        "BadPluginError encountered while configuring plugins:\n"
        + "The value for property 'plugins.md025.front_matter_title' must be of type 'str'."
    )

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )


@pytest.mark.rules
def test_md025_good_configuration_front_matter_title():
    """
    Test to make sure we get the expected behavior when the
    plugins.md025.front_matter_title configuration value is set to a
    valid string.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "--set",
        "plugins.md025.front_matter_title=subject",
        "--strict-config",
        "scan",
        "test/resources/rules/md025/good_single_top_level_atx.md",
    ]

    expected_return_code = 0
    expected_output = ""
    expected_error = ""

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )


@pytest.mark.rules
def test_md025_bad_configuration_front_matter_title_bad():
    """
    Test to make sure we get the expected behavior when the
    plugins.md025.front_matter_title configuration value is set to an
    empty string.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "--set",
        "plugins.md025.front_matter_title=",
        "--strict-config",
        "scan",
        "test/resources/rules/md025/good_single_top_level.md",
    ]

    expected_return_code = 1
    expected_output = ""
    expected_error = (
        "BadPluginError encountered while configuring plugins:\n"
        + "The value for property 'plugins.md025.front_matter_title' is not valid: Empty strings are not allowable values."
    )

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )


@pytest.mark.rules
def test_md025_bad_configuration_front_matter_title_invalid():
    """
    Test to make sure we get the expected behavior when the
    plugins.md025.front_matter_title configuration value is set to a
    string containing a colon.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "--set",
        "plugins.md025.front_matter_title=a:b",
        "--strict-config",
        "scan",
        "test/resources/rules/md025/good_single_top_level.md",
    ]

    expected_return_code = 1
    expected_output = ""
    expected_error = (
        "BadPluginError encountered while configuring plugins:\n"
        + "The value for property 'plugins.md025.front_matter_title' is not valid: Colons (:) are not allowed in the value."
    )

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )
@pytest.mark.rules
def test_md025_good_single_top_level_atx():
    """
    Test to make sure we get the expected behavior after scanning a good file
    from the test/resources/rules/md025 directory that has a single top-level
    Atx Heading.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "scan",
        "test/resources/rules/md025/good_single_top_level_atx.md",
    ]

    expected_return_code = 0
    expected_output = ""
    expected_error = ""

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )


@pytest.mark.rules
def test_md025_good_single_top_level_setext():
    """
    Test to make sure we get the expected behavior after scanning a good file
    from the test/resources/rules/md025 directory that has a single top-level
    SetExt Heading.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "scan",
        "test/resources/rules/md025/good_single_top_level_setext.md",
    ]

    expected_return_code = 0
    expected_output = ""
    expected_error = ""

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )


@pytest.mark.rules
def test_md025_bad_top_level_atx_top_level_atx():
    """
    Test to make sure we get the expected behavior after scanning a bad file
    from the test/resources/rules/md025 directory that has two top-level
    Atx Headings.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "scan",
        "test/resources/rules/md025/bad_top_level_atx_top_level_atx.md",
    ]

    expected_return_code = 1
    expected_output = (
        "test/resources/rules/md025/bad_top_level_atx_top_level_atx.md:5:1: "
        + "MD025: Multiple top level headings in the same document (single-title,single-h1)"
    )
    expected_error = ""

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )
@pytest.mark.rules
def test_md025_bad_top_level_atx_top_level_setext():
    """
    Test to make sure we get the expected behavior after scanning a bad file
    from the test/resources/rules/md025 directory that has a top-level Atx
    Heading followed by a top-level SetExt Heading.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "--disable-rules",
        "md003",
        "scan",
        "test/resources/rules/md025/bad_top_level_atx_top_level_setext.md",
    ]

    expected_return_code = 1
    expected_output = (
        "test/resources/rules/md025/bad_top_level_atx_top_level_setext.md:6:1: "
        + "MD025: Multiple top level headings in the same document (single-title,single-h1)"
    )
    expected_error = ""

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )


@pytest.mark.rules
def test_md025_bad_top_level_setext_top_level_setext():
    """
    Test to make sure we get the expected behavior after scanning a bad file
    from the test/resources/rules/md025 directory that has two top-level
    SetExt Headings.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "scan",
        "test/resources/rules/md025/bad_top_level_setext_top_level_setext.md",
    ]

    expected_return_code = 1
    expected_output = (
        "test/resources/rules/md025/bad_top_level_setext_top_level_setext.md:7:1: "
        + "MD025: Multiple top level headings in the same document (single-title,single-h1)"
    )
    expected_error = ""

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )


@pytest.mark.rules
def test_md025_bad_top_level_setext_top_level_atx():
    """
    Test to make sure we get the expected behavior after scanning a bad file
    from the test/resources/rules/md025 directory that has a top-level SetExt
    Heading followed by a top-level Atx Heading.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "--disable-rules",
        "md003",
        "scan",
        "test/resources/rules/md025/bad_top_level_setext_top_level_atx.md",
    ]

    expected_return_code = 1
    expected_output = (
        "test/resources/rules/md025/bad_top_level_setext_top_level_atx.md:6:1: "
        + "MD025: Multiple top level headings in the same document (single-title,single-h1)"
    )
    expected_error = ""

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )
@pytest.mark.rules
def test_md025_good_front_matter_title():
    """
    Test to make sure we get the expected behavior after scanning a good file
    from the test/resources/rules/md025 directory that has a front matter
    title and no conflicting top-level heading.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "--set",
        "extensions.front-matter.enabled=$!True",
        "scan",
        "test/resources/rules/md025/good_front_matter_title.md",
    ]

    expected_return_code = 0
    expected_output = ""
    expected_error = ""

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )


@pytest.mark.rules
def test_md025_bad_front_matter_title_top_level_atx():
    """
    Test to make sure we get the expected behavior after scanning a bad file
    from the test/resources/rules/md025 directory that has a front matter
    title followed by a top-level Atx Heading.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "--set",
        "extensions.front-matter.enabled=$!True",
        "scan",
        "test/resources/rules/md025/bad_front_matter_title_top_level_atx.md",
    ]

    expected_return_code = 1
    expected_output = (
        "test/resources/rules/md025/bad_front_matter_title_top_level_atx.md:7:1: "
        + "MD025: Multiple top level headings in the same document (single-title,single-h1)"
    )
    expected_error = ""

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )


@pytest.mark.rules
def test_md025_bad_front_matter_title_top_level_setext():
    """
    Test to make sure we get the expected behavior after scanning a bad file
    from the test/resources/rules/md025 directory that has a front matter
    title followed by a top-level SetExt Heading.
    """

    # Arrange
    scanner = MarkdownScanner()
    supplied_arguments = [
        "--set",
        "extensions.front-matter.enabled=$!True",
        "scan",
        "test/resources/rules/md025/bad_front_matter_title_top_level_setext.md",
    ]

    expected_return_code = 1
    expected_output = (
        "test/resources/rules/md025/bad_front_matter_title_top_level_setext.md:8:1: "
        + "MD025: Multiple top level headings in the same document (single-title,single-h1)"
    )
    expected_error = ""

    # Act
    execute_results = scanner.invoke_main(arguments=supplied_arguments)

    # Assert
    execute_results.assert_results(
        expected_output, expected_error, expected_return_code
    )
# File: test_sfauto/test_10_volumes.py (from cseelye/sfauto, MIT license)
#!/usr/bin/env python
#pylint: skip-file
from __future__ import print_function
import pytest
import random
from libsf import SolidFireAPIError
from . import globalconfig
from .fake_cluster import APIFailure, APIVersion
from .testutil import RandomString, RandomIP
@pytest.mark.usefixtures("fake_cluster_perclass")
class TestVolumeCreate(object):

    def test_negative_VolumeCreateNoAccount(self):
        print()
        from volume_create import VolumeCreate
        assert not VolumeCreate(volume_size=random.randint(1, 8000),
                                volume_name=RandomString(random.randint(1, 64)),
                                volume_count=1,
                                account_id=9999)

    def test_negative_VolumeCreateAccountSearchFailure(self):
        print()
        accounts = globalconfig.cluster.ListAccounts({})["accounts"]
        existing_id = accounts[random.randint(0, len(accounts)-1)]["accountID"]
        from volume_create import VolumeCreate
        with APIFailure("ListAccounts"):
            assert not VolumeCreate(volume_size=random.randint(1, 8000),
                                    volume_name=RandomString(random.randint(1, 64)),
                                    volume_count=1,
                                    account_id=existing_id)

    def test_negative_VolumeCreateSingleFailure(self):
        print()
        accounts = globalconfig.cluster.ListAccounts({})["accounts"]
        existing_id = accounts[random.randint(0, len(accounts)-1)]["accountID"]
        from volume_create import VolumeCreate
        with APIFailure("CreateVolume"):
            assert not VolumeCreate(volume_size=random.randint(1, 8000),
                                    volume_name=RandomString(random.randint(1, 64)),
                                    volume_count=1,
                                    account_id=existing_id)

    def test_negative_VolumeCreateFailure(self):
        print()
        accounts = globalconfig.cluster.ListAccounts({})["accounts"]
        existing_id = accounts[random.randint(0, len(accounts)-1)]["accountID"]
        from volume_create import VolumeCreate
        with APIFailure("CreateMultipleVolumes"):
            assert not VolumeCreate(volume_size=random.randint(1, 8000),
                                    volume_prefix=RandomString(random.randint(1, 50)),
                                    volume_count=random.randint(2, 20),
                                    account_id=existing_id)

    def test_VolumeCreateSingle(self):
        print()
        accounts = globalconfig.cluster.ListAccounts({})["accounts"]
        existing_id = accounts[random.randint(0, len(accounts)-1)]["accountID"]
        from volume_create import VolumeCreate
        assert VolumeCreate(volume_size=random.randint(1, 8000),
                            volume_name=RandomString(random.randint(1, 64)),
                            volume_count=1,
                            account_id=existing_id)

    def test_VolumeCreateSingleExplicit(self):
        print()
        accounts = globalconfig.cluster.ListAccounts({})["accounts"]
        existing_id = accounts[random.randint(0, len(accounts)-1)]["accountID"]
        from volume_create import VolumeCreate
        assert VolumeCreate(volume_size=random.randint(1, 8000),
                            volume_prefix=RandomString(random.randint(1, 64)),
                            volume_count=1,
                            create_single=True,
                            account_id=existing_id)

    def test_VolumeCreateGiBExplicit(self):
        print()
        accounts = globalconfig.cluster.ListAccounts({})["accounts"]
        existing_id = accounts[random.randint(0, len(accounts)-1)]["accountID"]
        from volume_create import VolumeCreate
        assert VolumeCreate(volume_size=random.randint(1, 7400),
                            volume_prefix=RandomString(random.randint(1, 64)),
                            volume_count=random.randint(2, 10),
                            gib=True,
                            account_id=existing_id)

    def test_VolumeCreateWaitExplicit(self):
        print()
        accounts = globalconfig.cluster.ListAccounts({})["accounts"]
        existing_id = accounts[random.randint(0, len(accounts)-1)]["accountID"]
        from volume_create import VolumeCreate
        assert VolumeCreate(volume_size=random.randint(1, 7400),
                            volume_prefix=RandomString(random.randint(1, 64)),
                            volume_count=2,
                            create_single=True,
                            wait=1,
                            account_id=existing_id)

    def test_VolumeCreateWithAllOptions(self):
        print()
        accounts = globalconfig.cluster.ListAccounts({})["accounts"]
        existing_id = accounts[random.randint(0, len(accounts)-1)]["accountID"]
        max_iops = random.randint(5000, 90000)
        burst_iops = max_iops + random.randint(1, 10000)
        from volume_create import VolumeCreate
        assert VolumeCreate(volume_size=random.randint(1, 7400),
                            volume_prefix=RandomString(random.randint(1, 50)) + "-",
                            volume_count=random.randint(1, 10),
                            volume_start=random.randint(1, 100),
                            min_iops=random.randint(100, 1000),
                            max_iops=max_iops,
                            burst_iops=burst_iops,
                            enable512e=random.choice([True, False]),
                            gib=random.choice([True, False]),
                            create_single=random.choice([True, False]),
                            wait=random.randint(0, 1),
                            account_id=existing_id)
@pytest.mark.usefixtures("fake_cluster_perclass")
class TestVolumeDelete(object):

    def test_VolumeDeleteNoMatches(self):
        print()
        from volume_delete import VolumeDelete
        assert VolumeDelete(volume_prefix="nomatchingvolumes")

    def test_negative_VolumeDeleteNoArgs(self):
        print()
        from volume_delete import VolumeDelete
        assert not VolumeDelete()

    def test_VolumeDeleteTestMode(self):
        print()
        volume_ids = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]]
        from volume_delete import VolumeDelete
        assert VolumeDelete(volume_ids=random.sample(volume_ids, random.randint(2, 15)),
                            test=True)

    def test_negative_VolumeDeleteFailure(self):
        print()
        volume_names = [vol["name"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]]
        from volume_delete import VolumeDelete
        with APIFailure("DeleteVolumes"):
            assert not VolumeDelete(volume_names=random.sample(volume_names, random.randint(2, 15)))

    def test_negative_VolumeDeleteFailurePreFluorine(self):
        print()
        volume_ids = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]]
        from volume_delete import VolumeDelete
        with APIVersion(8.0):
            with APIFailure("DeleteVolume"):
                assert not VolumeDelete(volume_ids=random.sample(volume_ids, random.randint(2, 15)))

    def test_negative_VolumeDeleteSearchFailure(self):
        print()
        volume_ids = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]]
        from volume_delete import VolumeDelete
        with APIFailure("ListActiveVolumes"):
            assert not VolumeDelete(volume_ids=random.sample(volume_ids, random.randint(2, 15)))

    def test_DeleteSingleVolume(self):
        print()
        volume_names = [vol["name"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]]
        from volume_delete import VolumeDelete
        assert VolumeDelete(volume_names=volume_names[random.randint(0, len(volume_names)-1)],
                            purge=True)

    def test_VolumeDeleteNoPurge(self):
        print()
        volume_ids = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]]
        from volume_delete import VolumeDelete
        assert VolumeDelete(volume_ids=random.sample(volume_ids, random.randint(2, 15)),
                            purge=False)

    def test_VolumeDelete(self):
        print()
        volume_names = [vol["name"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]]
        from volume_delete import VolumeDelete
        assert VolumeDelete(volume_names=random.sample(volume_names, random.randint(2, 15)),
                            purge=random.choice([True, False]))

    def test_VolumeDeletePreFluorine(self):
        print()
        volume_ids = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]]
        from volume_delete import VolumeDelete
        with APIVersion(8.0):
            assert VolumeDelete(volume_ids=random.sample(volume_ids, random.randint(2, 15)),
                                purge=random.choice([True, False]))
@pytest.mark.usefixtures("fake_cluster_perclass")
class TestPurgeVolumes(object):

    def test_negative_VolumePurgeSearchFailure(self):
        print()
        from volume_purge import VolumePurge
        with APIFailure("ListDeletedVolumes"):
            assert not VolumePurge()

    def test_negative_VolumePurgeFailure(self):
        print()
        volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
        from volume_delete import VolumeDelete
        assert VolumeDelete(volume_ids=volume_ids,
                            purge=False)
        from volume_purge import VolumePurge
        with APIFailure("PurgeDeletedVolumes"):
            assert not VolumePurge()

    def test_negative_VolumePurgeFailurePreFluorine(self):
        print()
        volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
        from volume_delete import VolumeDelete
        assert VolumeDelete(volume_ids=volume_ids,
                            purge=False)
        from volume_purge import VolumePurge
        with APIVersion(8.0):
            with APIFailure("PurgeDeletedVolume"):
                assert not VolumePurge()

    def test_VolumePurge(self):
        print()
        volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
        from volume_delete import VolumeDelete
        assert VolumeDelete(volume_ids=volume_ids,
                            purge=False)
        from volume_purge import VolumePurge
        assert VolumePurge()

    def test_VolumePurgeNoVolumes(self):
        print()
        from volume_purge import VolumePurge
        assert VolumePurge()
        assert VolumePurge()
@pytest.mark.usefixtures("fake_cluster_perclass")
class TestVolumeExtend(object):

    def test_VolumeExtendNoMatch(self):
        print()
        from volume_extend import VolumeExtend
        assert not VolumeExtend(new_size=random.randint(2, 8000),
                                volume_names=["nomatch", "doesntexist", "invalid"])

    def test_VolumeExtendTestMode(self):
        print()
        volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
        from volume_extend import VolumeExtend
        assert VolumeExtend(new_size=random.randint(2, 8000),
                            volume_ids=volume_ids,
                            test=True)

    def test_VolumeExtend(self):
        print()
        volumes = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if vol["totalSize"] < 200 * 1000 * 1000 * 1000]
        volume_ids = random.sample(volumes, random.randint(2, min(15, len(volumes))))
        from volume_extend import VolumeExtend
        assert VolumeExtend(new_size=random.randint(200, 8000),
                            volume_ids=volume_ids)

    def test_negative_VolumeExtendSmaller(self):
        print()
        volumes = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if vol["totalSize"] > 1 * 1000 * 1000 * 1000]
        volume_ids = random.sample(volumes, random.randint(2, min(15, len(volumes))))
        from volume_extend import VolumeExtend
        assert not VolumeExtend(new_size=1,
                                volume_ids=volume_ids)

    def test_VolumeExtendGiB(self):
        print()
        volumes = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if vol["totalSize"] < 150 * 1024 * 1024 * 1024]
        volume_ids = random.sample(volumes, random.randint(2, min(15, len(volumes))))
        from volume_extend import VolumeExtend
        assert VolumeExtend(new_size=random.randint(200, 7400),
                            gib=True,
                            volume_ids=volume_ids)

    def test_negative_VolumeExtendFailure(self):
        print()
        volumes = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if vol["totalSize"] < 150 * 1000 * 1000 * 1000]
        if len(volumes) < 3:
            print("small volumes = {}".format(volumes))
            print("all volumes = {}".format(globalconfig.cluster.ListActiveVolumes({})["volumes"]))
        volume_ids = random.sample(volumes, random.randint(2, min(15, len(volumes))))
        from volume_extend import VolumeExtend
        with APIFailure("ModifyVolume"):
            assert not VolumeExtend(new_size=random.randint(200, 8000),
                                    volume_ids=volume_ids)

    def test_negative_VolumeExtendSearchFailure(self):
        print()
        volumes = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if vol["totalSize"] < 150 * 1000 * 1000 * 1000]
        volume_ids = random.sample(volumes, random.randint(2, min(15, len(volumes))))
        from volume_extend import VolumeExtend
        with APIFailure("ListActiveVolumes"):
            assert not VolumeExtend(new_size=random.randint(200, 8000),
                                    volume_ids=volume_ids)
@pytest.mark.usefixtures("fake_cluster_perclass")
class TestVolumeSetQos(object):
def test_VolumeSetQos(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_set_qos import VolumeSetQos
assert VolumeSetQos(volume_ids=volume_ids,
min_iops=random.randint(50, 1000),
max_iops=random.randint(1500, 90000),
burst_iops=random.randint(91000,100000))
    def test_VolumeSetQosTestMode(self):
        print()
        volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
        from volume_set_qos import VolumeSetQos
        assert VolumeSetQos(volume_ids=volume_ids,
                            min_iops=random.randint(50, 1000),
                            max_iops=random.randint(1500, 90000),
                            burst_iops=random.randint(91000, 100000),
                            test=True)
def test_VolumeSetQosNoMatch(self):
print()
from volume_set_qos import VolumeSetQos
assert not VolumeSetQos(volume_names=["nomatch", "doesntexist","invalid"],
min_iops=random.randint(50, 1000),
max_iops=random.randint(1500, 90000),
burst_iops=random.randint(91000,100000))
def test_negative_SetVolumeQoSFailure(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_set_qos import VolumeSetQos
with APIFailure("ModifyVolume"):
assert not VolumeSetQos(volume_ids=volume_ids,
min_iops=random.randint(50, 1000),
max_iops=random.randint(1500, 90000),
burst_iops=random.randint(91000,100000))
def test_negative_SetVolumeQoSSearchFailure(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_set_qos import VolumeSetQos
with APIFailure("ListActiveVolumes"):
assert not VolumeSetQos(volume_ids=volume_ids)
@pytest.mark.usefixtures("fake_cluster_perclass")
class TestVolumeLock(object):
def test_VolumeLock(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_lock import VolumeLock
assert VolumeLock(volume_ids=volume_ids)
def test_VolumeLockTestMode(self):
print()
volume_names = random.sample([vol["name"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_lock import VolumeLock
assert VolumeLock(volume_names=volume_names,
test=True)
def test_VolumeLockNoMatch(self):
print()
from volume_lock import VolumeLock
assert not VolumeLock(volume_names=["nomatch", "doesntexist","invalid"])
def test_negative_VolumeLockFailure(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_lock import VolumeLock
with APIFailure("ModifyVolume"):
assert not VolumeLock(volume_ids=volume_ids)
def test_negative_VolumeLockSearchFailure(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_lock import VolumeLock
with APIFailure("ListActiveVolumes"):
assert not VolumeLock(volume_ids=volume_ids)
@pytest.mark.usefixtures("fake_cluster_perclass")
class TestVolumeUnlock(object):
def test_VolumeUnlock(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_unlock import VolumeUnlock
assert VolumeUnlock(volume_ids=volume_ids)
def test_VolumeUnlockTestMode(self):
print()
volume_names = random.sample([vol["name"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_unlock import VolumeUnlock
assert VolumeUnlock(volume_names=volume_names,
test=True)
def test_VolumeUnlockNoMatch(self):
print()
from volume_unlock import VolumeUnlock
assert not VolumeUnlock(volume_names=["nomatch", "doesntexist","invalid"])
def test_negative_VolumeUnlockFailure(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_unlock import VolumeUnlock
with APIFailure("ModifyVolume"):
assert not VolumeUnlock(volume_ids=volume_ids)
def test_negative_VolumeUnlockSearchFailure(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_unlock import VolumeUnlock
with APIFailure("ListActiveVolumes"):
assert not VolumeUnlock(volume_ids=volume_ids)
@pytest.mark.usefixtures("fake_cluster_perclass")
class TestVolumeSetAttributes(object):
def test_VolumeSetAttribute(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_set_attribute import VolumeSetAttribute
assert VolumeSetAttribute(volume_ids=volume_ids,
attribute_name=RandomString(32),
attribute_value=RandomString(64))
def test_VolumeSetAttributeTestMode(self):
print()
volume_names = random.sample([vol["name"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_set_attribute import VolumeSetAttribute
assert VolumeSetAttribute(volume_names=volume_names,
attribute_name=RandomString(32),
attribute_value=RandomString(64),
test=True)
def test_VolumeSetAttributeNoMatch(self):
print()
from volume_set_attribute import VolumeSetAttribute
        assert not VolumeSetAttribute(volume_names=["nomatch", "doesntexist", "invalid"],
                                      attribute_name=RandomString(32),
                                      attribute_value=RandomString(64))
def test_VolumeSetAttributeFailure(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_set_attribute import VolumeSetAttribute
with APIFailure("ModifyVolume"):
assert not VolumeSetAttribute(volume_ids=volume_ids,
attribute_name=RandomString(32),
attribute_value=RandomString(64))
def test_VolumeSetAttributeSearchFailure(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 15))
from volume_set_attribute import VolumeSetAttribute
with APIFailure("ListActiveVolumes"):
assert not VolumeSetAttribute(volume_ids=volume_ids,
attribute_name=RandomString(32),
attribute_value=RandomString(64))
@pytest.mark.usefixtures("fake_cluster_perclass")
class TestGetVolumeIQN(object):
def test_negative_GetVolumeIQNNoVolumeName(self):
print()
from volume_get_iqn import GetVolumeIQN
assert not GetVolumeIQN(volume_name=RandomString(random.randint(8, 64)))
def test_negative_GetVolumeIQNNoVolumeID(self):
print()
volume_ids = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]]
while True:
non_id = random.randint(1, 20000)
if non_id not in volume_ids:
break
from volume_get_iqn import GetVolumeIQN
assert not GetVolumeIQN(volume_id=non_id)
def test_negative_GetVolumeIQNSearchFailure(self):
print()
from volume_get_iqn import GetVolumeIQN
with APIFailure("ListActiveVolumes"):
assert not GetVolumeIQN(volume_name=RandomString(random.randint(1, 64)))
def test_GetVolumeIQN(self):
print()
volume_name = random.choice([vol["name"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]])
from volume_get_iqn import GetVolumeIQN
assert GetVolumeIQN(volume_name=volume_name)
def test_GetVolumeIQNBash(self, capsys):
print()
volume = random.choice(globalconfig.cluster.ListActiveVolumes({})["volumes"])
from volume_get_iqn import GetVolumeIQN
assert GetVolumeIQN(volume_id=volume["volumeID"], output_format="bash")
out, _ = capsys.readouterr()
out = out.strip()
print(out)
assert len(out.split("\n")) == 1
assert out.startswith("iqn")
assert out.endswith("{}.{}".format(volume["name"], volume["volumeID"]))
def test_GetVolumeIQNJson(self, capsys):
print()
volume = random.choice(globalconfig.cluster.ListActiveVolumes({})["volumes"])
from volume_get_iqn import GetVolumeIQN
assert GetVolumeIQN(volume_name=volume["name"], output_format="json")
out, _ = capsys.readouterr()
out = out.strip()
print(out)
assert len(out.split("\n")) == 1
import json
iqn = json.loads(out)["iqn"]
assert iqn.startswith("iqn")
assert iqn.endswith("{}.{}".format(volume["name"], volume["volumeID"]))
@pytest.mark.usefixtures("fake_cluster_perclass")
class TestVolumeClone(object):
def test_negative_VolumeCloneLimitsFailure(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 5))
from volume_clone import VolumeClone
with APIFailure("GetLimits"):
assert not VolumeClone(clone_count=random.randint(2, 5),
volume_ids=volume_ids)
def test_negative_VolumeCloneNewAccountNoAccount(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 5))
from volume_clone import VolumeClone
assert not VolumeClone(clone_count=random.randint(2, 5),
volume_ids=volume_ids,
dest_account_id=9999)
def test_negative_VolumeCloneNewAccountSearchFailure(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 5))
from volume_clone import VolumeClone
with APIFailure("ListAccounts"):
assert not VolumeClone(clone_count=random.randint(2, 5),
volume_ids=volume_ids,
dest_account_id=2)
def test_negative_VolumeCloneVolumeSearchFailure(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 5))
from volume_clone import VolumeClone
with APIFailure("ListActiveVolumes"):
assert not VolumeClone(clone_count=random.randint(2, 5),
volume_ids=volume_ids)
def test_VolumeCloneTest(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 5))
from volume_clone import VolumeClone
assert VolumeClone(clone_count=random.randint(2, 5),
volume_ids=volume_ids,
test=True)
def test_negative_VolumeCloneFailure(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 5))
from volume_clone import VolumeClone
with APIFailure("CloneVolume"):
assert not VolumeClone(clone_count=random.randint(2, 5),
volume_ids=volume_ids)
def test_VolumeClone(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 5))
from volume_clone import VolumeClone
assert VolumeClone(clone_count=random.randint(2, 5),
volume_ids=volume_ids)
def test_VolumeCloneNewAccount(self):
print()
print("all volumes = {}".format(globalconfig.cluster.ListActiveVolumes({})["volumes"]))
print("all accounts = {}".format(globalconfig.cluster.ListAccounts({})["accounts"]))
new_account = random.choice([account["accountID"] for account in globalconfig.cluster.ListAccounts({})["accounts"]])
source_volumes = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if vol["accountID"] != new_account]
print("new_account = {}".format(new_account))
print("source_volumes = {}".format(source_volumes))
volume_ids = random.sample(source_volumes, random.randint(1, min(5, len(source_volumes))))
from volume_clone import VolumeClone
assert VolumeClone(clone_count=random.randint(2, 5),
volume_ids=volume_ids,
dest_account_id=new_account)
def test_VolumeCloneNewSize(self):
print()
clone_size = random.randint(800, 2000)
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if vol["totalSize"] < clone_size*1000*1000*1000], random.randint(2, 5))
from volume_clone import VolumeClone
assert VolumeClone(clone_count=random.randint(2, 5),
volume_ids=volume_ids,
clone_size=clone_size)
@pytest.mark.usefixtures("fake_cluster_perclass")
class TestRemoteRepPauseVolume(object):
def test_negative_RemoteRepPauseVolumeSearchFailure(self):
print()
paired_volumes = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if "volumePairs" in vol and vol["volumePairs"]]
volume_ids = random.sample(paired_volumes, random.randint(2, min(15, len(paired_volumes)-1)))
from remoterep_pause_volume import RemoteRepPauseVolume
with APIFailure("ListActiveVolumes"):
assert not RemoteRepPauseVolume(volume_ids=volume_ids)
def test_RemoteRepPauseVolumeNoVolumes(self):
print()
from remoterep_pause_volume import RemoteRepPauseVolume
assert RemoteRepPauseVolume(volume_prefix="nomatch")
def test_RemoteRepPauseVolumeTestMode(self):
print()
paired_volumes = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if "volumePairs" in vol and vol["volumePairs"]]
volume_ids = random.sample(paired_volumes, random.randint(2, min(15, len(paired_volumes)-1)))
from remoterep_pause_volume import RemoteRepPauseVolume
assert RemoteRepPauseVolume(volume_ids=volume_ids,
test=True)
def test_negative_RemoteRepPauseVolumeFailure(self):
print()
paired_volumes = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if "volumePairs" in vol and vol["volumePairs"]]
volume_ids = random.sample(paired_volumes, random.randint(2, min(15, len(paired_volumes)-1)))
from remoterep_pause_volume import RemoteRepPauseVolume
with APIFailure("ModifyVolumePair"):
assert not RemoteRepPauseVolume(volume_ids=volume_ids)
def test_RemoteRepPauseVolume(self):
print()
paired_volumes = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if "volumePairs" in vol and vol["volumePairs"]]
volume_ids = random.sample(paired_volumes, random.randint(2, min(15, len(paired_volumes)-1)))
from remoterep_pause_volume import RemoteRepPauseVolume
assert RemoteRepPauseVolume(volume_ids=volume_ids)
@pytest.mark.usefixtures("fake_cluster_perclass")
class TestRemoteRepResumeVolume(object):
def test_negative_RemoteRepResumeVolumeSearchFailure(self):
print()
paired_volumes = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if "volumePairs" in vol and vol["volumePairs"]]
volume_ids = random.sample(paired_volumes, random.randint(2, min(15, len(paired_volumes)-1)))
from remoterep_resume_volume import RemoteRepResumeVolume
with APIFailure("ListActiveVolumes"):
assert not RemoteRepResumeVolume(volume_ids=volume_ids)
def test_RemoteRepResumeVolumeNoVolumes(self):
print()
from remoterep_resume_volume import RemoteRepResumeVolume
assert RemoteRepResumeVolume(volume_prefix="nomatch")
def test_RemoteRepResumeVolumeTestMode(self):
print()
paired_volumes = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if "volumePairs" in vol and vol["volumePairs"]]
volume_ids = random.sample(paired_volumes, random.randint(2, min(15, len(paired_volumes)-1)))
from remoterep_resume_volume import RemoteRepResumeVolume
assert RemoteRepResumeVolume(volume_ids=volume_ids,
test=True)
def test_negative_RemoteRepResumeVolumeFailure(self):
print()
paired_volumes = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if "volumePairs" in vol and vol["volumePairs"]]
volume_ids = random.sample(paired_volumes, random.randint(2, min(15, len(paired_volumes)-1)))
from remoterep_resume_volume import RemoteRepResumeVolume
with APIFailure("ModifyVolumePair"):
assert not RemoteRepResumeVolume(volume_ids=volume_ids)
def test_RemoteRepResumeVolume(self):
print()
paired_volumes = [vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"] if "volumePairs" in vol and vol["volumePairs"]]
volume_ids = random.sample(paired_volumes, random.randint(2, min(15, len(paired_volumes)-1)))
from remoterep_resume_volume import RemoteRepResumeVolume
assert RemoteRepResumeVolume(volume_ids=volume_ids)
@pytest.mark.usefixtures("fake_cluster_perclass")
class TestForceWholeSync(object):
def test_VolumeForceWholeSyncWait(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 5))
from volume_force_whole_sync import VolumeForceWholeSync
assert VolumeForceWholeSync(volume_ids=volume_ids,
wait=True)
def test_VolumeForceWholeSyncNoWait(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 5))
from volume_force_whole_sync import VolumeForceWholeSync
assert VolumeForceWholeSync(volume_ids=volume_ids,
wait=False)
def test_negative_NoVolumes(self):
        print()
        from volume_force_whole_sync import VolumeForceWholeSync
        assert not VolumeForceWholeSync(volume_ids=999999)
def test_negative_NoMatch(self):
        print()
        from volume_force_whole_sync import VolumeForceWholeSync
        assert VolumeForceWholeSync(volume_regex="nomatch")
def test_TestMode(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 5))
from volume_force_whole_sync import VolumeForceWholeSync
assert VolumeForceWholeSync(volume_ids=volume_ids,
test=True)
def test_negative_APIFailure(self):
print()
volume_ids = random.sample([vol["volumeID"] for vol in globalconfig.cluster.ListActiveVolumes({})["volumes"]], random.randint(2, 5))
from volume_force_whole_sync import VolumeForceWholeSync
with APIFailure("ForceWholeFileSync"):
assert not VolumeForceWholeSync(volume_ids=volume_ids)
# Bosikov_Garik_dz_04/Task_4_2.py (gbosikov/Python_Course)
from decimal import Decimal
import requests
def currency_rates(code: str) -> float:
"""возвращает курс валюты `code` по отношению к рублю"""
response = requests.get('http://www.cbr.ru/scripts/XML_daily.asp')
resp_text = response.text
currency_idx = resp_text.find(code.upper())
if currency_idx == -1:
return
else:
resp_text = resp_text[currency_idx:resp_text.find('</Value>', currency_idx)]
nominal = float(resp_text[resp_text.find('<Nominal>') + 9:resp_text.find('</Nominal>')].replace(',', '.'))
value = float(resp_text[resp_text.find('<Value>') + 7:].replace(',', '.'))
result_value = round(value / nominal, 2)
return result_value
print(currency_rates("USD"))
print(currency_rates("noname"))
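# String slicing on the raw XML is fragile; a sketch of the same lookup using
# the standard-library `xml.etree.ElementTree` instead, shown against an inline
# sample of the CBR payload (the live endpoint may be unavailable offline):

```python
import xml.etree.ElementTree as ET

SAMPLE_XML = """<ValCurs Date="02.03.2021" name="Foreign Currency Market">
  <Valute ID="R01235">
    <CharCode>USD</CharCode>
    <Nominal>1</Nominal>
    <Value>74,4373</Value>
  </Valute>
</ValCurs>"""


def currency_rate_from_xml(xml_text, code):
    """Find `code` in a CBR-style XML document and return value / nominal."""
    root = ET.fromstring(xml_text)
    for valute in root.iter('Valute'):
        if valute.findtext('CharCode') == code.upper():
            nominal = float(valute.findtext('Nominal').replace(',', '.'))
            value = float(valute.findtext('Value').replace(',', '.'))
            return round(value / nominal, 2)
    return None
```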
#
# def currency_rates_adv(code: str) -> Decimal:
#     """Return the exchange rate of currency `code` against the ruble as a Decimal."""
#     response = requests.get('http://www.cbr.ru/scripts/XML_daily.asp')
#
#     resp_text = response.text
#     currency_idx = resp_text.find(code.upper())
#
#     if currency_idx == -1:
#         return None
#     resp_text = resp_text[currency_idx:resp_text.find('</Value>', currency_idx)]
#     nominal = Decimal(resp_text[resp_text.find('<Nominal>') + 9:resp_text.find('</Nominal>')].replace(',', '.'))
#     value = Decimal(resp_text[resp_text.find('<Value>') + 7:].replace(',', '.'))
#
#     # Quantize the quotient, not the nominal alone.
#     return (value / nominal).quantize(Decimal('1.00'))
#
#
# print(currency_rates_adv("USD"))
# print(currency_rates_adv("noname"))
#!/usr/bin/env python
# tests/test_marathon_serviceinit.py (oholiab/paasta)
# Copyright 2015 Yelp Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
import datetime
import re
import marathon
import mock
from paasta_tools import marathon_tools, marathon_serviceinit
from paasta_tools.smartstack_tools import DEFAULT_SYNAPSE_PORT
from paasta_tools.utils import compose_job_id
from paasta_tools.utils import NoDockerImageError
from paasta_tools.utils import remove_ansi_escape_sequences
from paasta_tools.utils import PaastaColors
fake_marathon_job_config = marathon_tools.MarathonServiceConfig(
'servicename',
'instancename',
{
'instances': 3,
'cpus': 1,
'mem': 100,
'nerve_ns': 'fake_nerve_ns',
},
{
'docker_image': 'test_docker:1.0',
'desired_state': 'start',
'force_bounce': None,
}
)
def test_start_marathon_job():
client = mock.create_autospec(marathon.MarathonClient)
cluster = 'my_cluster'
service = 'my_service'
instance = 'my_instance'
app_id = 'mock_app_id'
normal_instance_count = 5
marathon_serviceinit.start_marathon_job(service, instance, app_id, normal_instance_count, client, cluster)
client.scale_app.assert_called_once_with(app_id, instances=normal_instance_count, force=True)
def test_stop_marathon_job():
client = mock.create_autospec(marathon.MarathonClient)
cluster = 'my_cluster'
service = 'my_service'
instance = 'my_instance'
app_id = 'mock_app_id'
marathon_serviceinit.stop_marathon_job(service, instance, app_id, client, cluster)
client.scale_app.assert_called_once_with(app_id, instances=0, force=True)
def test_get_bouncing_status():
with contextlib.nested(
mock.patch('paasta_tools.marathon_serviceinit.marathon_tools.get_matching_appids', autospec=True),
) as (
mock_get_matching_appids,
):
mock_get_matching_appids.return_value = ['a', 'b']
mock_config = marathon_tools.MarathonServiceConfig(
'fake_service',
'fake_instance',
{'bounce_method': 'fake_bounce'},
{},
)
actual = marathon_serviceinit.get_bouncing_status('fake_service', 'fake_instance', 'unused', mock_config)
assert 'fake_bounce' in actual
assert 'Bouncing' in actual
def test_status_desired_state():
with mock.patch(
'paasta_tools.marathon_serviceinit.get_bouncing_status',
autospec=True,
) as mock_get_bouncing_status:
mock_get_bouncing_status.return_value = 'Bouncing (fake_bounce)'
fake_complete_config = mock.Mock()
fake_complete_config.get_desired_state_human = mock.Mock(return_value='Started')
actual = marathon_serviceinit.status_desired_state(
'fake_service',
'fake_instance',
'unused',
fake_complete_config,
)
assert 'Started' in actual
assert 'Bouncing' in actual
def test_status_marathon_job_verbose():
client = mock.create_autospec(marathon.MarathonClient)
app = mock.create_autospec(marathon.models.app.MarathonApp)
client.get_app.return_value = app
service = 'my_service'
instance = 'my_instance'
task = mock.Mock()
with contextlib.nested(
mock.patch('paasta_tools.marathon_serviceinit.marathon_tools.get_matching_appids'),
mock.patch('paasta_tools.marathon_serviceinit.get_verbose_status_of_marathon_app'),
) as (
mock_get_matching_appids,
mock_get_verbose_app,
):
mock_get_matching_appids.return_value = ['/app1']
mock_get_verbose_app.return_value = ([task], 'fake_return')
tasks, out = marathon_serviceinit.status_marathon_job_verbose(service, instance, client)
mock_get_matching_appids.assert_called_once_with(service, instance, client)
mock_get_verbose_app.assert_called_once_with(app)
assert tasks == [task]
assert 'fake_return' in out
def test_get_verbose_status_of_marathon_app():
fake_app = mock.create_autospec(marathon.models.app.MarathonApp)
fake_app.version = '2015-01-15T05:30:49.862Z'
fake_app.id = '/fake--service'
fake_task = mock.create_autospec(marathon.models.app.MarathonTask)
fake_task.id = 'fake_task_id'
fake_task.host = 'fake_deployed_host'
fake_task.ports = [6666]
fake_task.staged_at = datetime.datetime.fromtimestamp(0)
fake_app.tasks = [fake_task]
tasks, out = marathon_serviceinit.get_verbose_status_of_marathon_app(fake_app)
assert 'fake_task_id' in out
assert '/fake--service' in out
assert 'App created: 2015-01-15 05:30:49' in out
assert 'fake_deployed_host:6666' in out
assert tasks == [fake_task]
def test_get_verbose_status_of_marathon_app_column_alignment():
fake_app = mock.create_autospec(marathon.models.app.MarathonApp)
fake_app.version = '2015-01-15T05:30:49.862Z'
fake_app.id = '/fake--service'
fake_task1 = mock.create_autospec(marathon.models.app.MarathonTask)
fake_task1.id = 'fake_task1_id'
fake_task1.host = 'fake_deployed_host'
fake_task1.ports = [6666]
fake_task1.staged_at = datetime.datetime.fromtimestamp(0)
fake_task2 = mock.create_autospec(marathon.models.app.MarathonTask)
fake_task2.id = 'fake_task2_id'
fake_task2.host = 'fake_deployed_host_with_a_really_long_name'
fake_task2.ports = [6666]
fake_task2.staged_at = datetime.datetime.fromtimestamp(0)
fake_app.tasks = [fake_task1, fake_task2]
tasks, out = marathon_serviceinit.get_verbose_status_of_marathon_app(fake_app)
(_, _, _, headers_line, task1_line, task2_line) = out.split('\n')
assert headers_line.index('Host deployed to') == task1_line.index('fake_deployed_host')
assert headers_line.index('Host deployed to') == task2_line.index('fake_deployed_host_with_a_really_long_name')
assert headers_line.index('Deployed at what localtime') == task1_line.index('1970-01-01T00:00')
assert headers_line.index('Deployed at what localtime') == task2_line.index('1970-01-01T00:00')
def test_status_marathon_job_when_running():
client = mock.create_autospec(marathon.MarathonClient)
app = mock.create_autospec(marathon.models.app.MarathonApp)
client.get_app.return_value = app
service = 'my_service'
instance = 'my_instance'
app_id = 'mock_app_id'
normal_instance_count = 5
mock_tasks_running = 5
app.tasks_running = mock_tasks_running
app.deployments = []
with contextlib.nested(
mock.patch('paasta_tools.marathon_tools.is_app_id_running', return_value=True),
) as (
is_app_id_running_patch,
):
marathon_serviceinit.status_marathon_job(service, instance, app_id, normal_instance_count, client)
is_app_id_running_patch.assert_called_once_with(app_id, client)
def tests_status_marathon_job_when_running_running_no_tasks():
client = mock.create_autospec(marathon.MarathonClient)
app = mock.create_autospec(marathon.models.app.MarathonApp)
client.get_app.return_value = app
service = 'my_service'
instance = 'my_instance'
app_id = 'mock_app_id'
normal_instance_count = 5
mock_tasks_running = 0
app.tasks_running = mock_tasks_running
app.deployments = []
with contextlib.nested(
mock.patch('paasta_tools.marathon_tools.is_app_id_running', return_value=True),
) as (
is_app_id_running_patch,
):
marathon_serviceinit.status_marathon_job(service, instance, app_id, normal_instance_count, client)
is_app_id_running_patch.assert_called_once_with(app_id, client)
def tests_status_marathon_job_when_running_not_running():
client = mock.create_autospec(marathon.MarathonClient)
service = 'my_service'
instance = 'my_instance'
app_id = 'mock_app_id'
normal_instance_count = 5
with contextlib.nested(
        mock.patch('paasta_tools.marathon_tools.is_app_id_running', return_value=False),
) as (
is_app_id_running_patch,
):
marathon_serviceinit.status_marathon_job(service, instance, app_id, normal_instance_count, client)
is_app_id_running_patch.assert_called_once_with(app_id, client)
def tests_status_marathon_job_when_running_running_tasks_with_deployments():
client = mock.create_autospec(marathon.MarathonClient)
app = mock.create_autospec(marathon.models.app.MarathonApp)
client.get_app.return_value = app
service = 'my_service'
instance = 'my_instance'
app_id = 'mock_app_id'
normal_instance_count = 5
mock_tasks_running = 0
app.tasks_running = mock_tasks_running
app.deployments = ['test_deployment']
with contextlib.nested(
mock.patch('paasta_tools.marathon_tools.is_app_id_running', return_value=True),
) as (
is_app_id_running_patch,
):
output = marathon_serviceinit.status_marathon_job(service, instance, app_id, normal_instance_count, client)
is_app_id_running_patch.assert_called_once_with(app_id, client)
assert 'Deploying' in output
def test_format_haproxy_backend_row():
actual = marathon_serviceinit.format_haproxy_backend_row(
backend={
'svname': '169.254.123.1:1234_host1',
'status': 'UP',
'check_status': 'L7OK',
'check_code': '200',
'check_duration': 4,
'lastchg': 0
},
is_correct_instance=True,
)
expected = (
' host1:1234',
'L7OK/200 in 4ms',
'now',
PaastaColors.default('UP'),
)
assert actual == expected
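# For reference, the 'ip:port_hostname' -> 'hostname:port' transformation
# exercised above can be expressed as a small standalone function (an
# illustrative reimplementation, not the production code under test):

```python
def haproxy_backend_hostport(svname):
    """Convert an haproxy svname like '169.254.123.1:1234_host1' to 'host1:1234'.

    Illustrative sketch: assumes the ip:port prefix contains no underscore,
    so everything after the first '_' is the hostname.
    """
    ip_port, _, hostname = svname.partition('_')
    port = ip_port.rpartition(':')[2]
    return '{0}:{1}'.format(hostname, port)
```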
def test_status_smartstack_backends_normal():
service = 'my_service'
instance = 'my_instance'
service_instance = compose_job_id(service, instance)
cluster = 'fake_cluster'
good_task = mock.Mock()
bad_task = mock.Mock()
other_task = mock.Mock()
haproxy_backends_by_task = {
good_task: {'status': 'UP', 'lastchg': '1', 'last_chk': 'OK',
'check_code': '200', 'svname': 'ipaddress1:1001_hostname1',
'check_status': 'L7OK', 'check_duration': 1},
bad_task: {'status': 'UP', 'lastchg': '1', 'last_chk': 'OK',
'check_code': '200', 'svname': 'ipaddress2:1002_hostname2',
'check_status': 'L7OK', 'check_duration': 1},
}
with contextlib.nested(
mock.patch('paasta_tools.marathon_tools.load_service_namespace_config', autospec=True),
mock.patch('paasta_tools.marathon_tools.read_namespace_for_service_instance'),
mock.patch('paasta_tools.marathon_serviceinit.get_mesos_slaves_grouped_by_attribute'),
mock.patch('paasta_tools.marathon_serviceinit.get_backends', autospec=True),
mock.patch('paasta_tools.marathon_serviceinit.match_backends_and_tasks', autospec=True),
) as (
mock_load_service_namespace_config,
mock_read_ns,
mock_get_mesos_slaves_grouped_by_attribute,
mock_get_backends,
mock_match_backends_and_tasks,
):
mock_load_service_namespace_config.return_value.get_discover.return_value = 'fake_discover'
mock_read_ns.return_value = instance
mock_get_backends.return_value = haproxy_backends_by_task.values()
mock_match_backends_and_tasks.return_value = [
(haproxy_backends_by_task[good_task], good_task),
(haproxy_backends_by_task[bad_task], None),
(None, other_task),
]
tasks = [good_task, other_task]
mock_get_mesos_slaves_grouped_by_attribute.return_value = {'fake_location1': ['fakehost1']}
actual = marathon_serviceinit.status_smartstack_backends(
service=service,
instance=instance,
cluster=cluster,
job_config=fake_marathon_job_config,
tasks=tasks,
expected_count=len(haproxy_backends_by_task),
soa_dir=None,
verbose=False,
)
mock_get_backends.assert_called_once_with(
service_instance,
synapse_host='fakehost1',
synapse_port=3212,
)
assert "fake_location1" in actual
assert "Healthy" in actual
def test_status_smartstack_backends_different_nerve_ns():
service = 'my_service'
instance = 'my_instance'
cluster = 'fake_cluster'
different_ns = 'other_instance'
normal_count = 10
tasks = mock.Mock()
with mock.patch('paasta_tools.marathon_tools.read_namespace_for_service_instance') as read_ns_mock:
read_ns_mock.return_value = different_ns
actual = marathon_serviceinit.status_smartstack_backends(
service=service,
instance=instance,
cluster=cluster,
job_config=fake_marathon_job_config,
tasks=tasks,
expected_count=normal_count,
soa_dir=None,
verbose=False,
)
assert "is announced in the" in actual
assert different_ns in actual
def test_status_smartstack_backends_no_smartstack_replication_info():
service = 'my_service'
instance = 'my_instance'
service_instance = compose_job_id(service, instance)
cluster = 'fake_cluster'
tasks = mock.Mock()
normal_count = 10
with contextlib.nested(
mock.patch('paasta_tools.marathon_tools.load_service_namespace_config', autospec=True),
mock.patch('paasta_tools.marathon_tools.read_namespace_for_service_instance'),
mock.patch('paasta_tools.marathon_serviceinit.get_mesos_slaves_grouped_by_attribute'),
) as (
mock_load_service_namespace_config,
mock_read_ns,
mock_get_mesos_slaves_grouped_by_attribute,
):
mock_load_service_namespace_config.return_value.get_discover.return_value = 'fake_discover'
mock_read_ns.return_value = instance
mock_get_mesos_slaves_grouped_by_attribute.return_value = {}
actual = marathon_serviceinit.status_smartstack_backends(
service=service,
instance=instance,
cluster=cluster,
job_config=fake_marathon_job_config,
tasks=tasks,
expected_count=normal_count,
soa_dir=None,
verbose=False,
)
assert "%s is NOT in smartstack" % service_instance in actual
def test_status_smartstack_backends_multiple_locations():
service = 'my_service'
instance = 'my_instance'
service_instance = compose_job_id(service, instance)
cluster = 'fake_cluster'
good_task = mock.Mock()
other_task = mock.Mock()
fake_backend = {'status': 'UP', 'lastchg': '1', 'last_chk': 'OK',
'check_code': '200', 'svname': 'ipaddress1:1001_hostname1',
'check_status': 'L7OK', 'check_duration': 1}
with contextlib.nested(
mock.patch('paasta_tools.marathon_tools.load_service_namespace_config', autospec=True),
mock.patch('paasta_tools.marathon_tools.read_namespace_for_service_instance'),
mock.patch('paasta_tools.marathon_serviceinit.get_mesos_slaves_grouped_by_attribute'),
mock.patch('paasta_tools.marathon_serviceinit.get_backends', autospec=True),
mock.patch('paasta_tools.marathon_serviceinit.match_backends_and_tasks', autospec=True),
) as (
mock_load_service_namespace_config,
mock_read_ns,
mock_get_mesos_slaves_grouped_by_attribute,
mock_get_backends,
mock_match_backends_and_tasks,
):
mock_load_service_namespace_config.return_value.get_discover.return_value = 'fake_discover'
mock_read_ns.return_value = instance
mock_get_backends.return_value = [fake_backend]
mock_match_backends_and_tasks.return_value = [
(fake_backend, good_task),
]
tasks = [good_task, other_task]
mock_get_mesos_slaves_grouped_by_attribute.return_value = {
'fake_location1': ['fakehost1'],
'fake_location2': ['fakehost2'],
}
actual = marathon_serviceinit.status_smartstack_backends(
service=service,
instance=instance,
cluster=cluster,
job_config=fake_marathon_job_config,
tasks=tasks,
expected_count=len(mock_get_backends.return_value),
soa_dir=None,
verbose=False,
)
mock_get_backends.assert_any_call(
service_instance,
synapse_host='fakehost1',
synapse_port=DEFAULT_SYNAPSE_PORT,
)
mock_get_backends.assert_any_call(
service_instance,
synapse_host='fakehost2',
synapse_port=DEFAULT_SYNAPSE_PORT,
)
assert "fake_location1 - %s" % PaastaColors.green('Healthy') in actual
assert "fake_location2 - %s" % PaastaColors.green('Healthy') in actual
def test_status_smartstack_backends_multiple_locations_expected_count():
service = 'my_service'
instance = 'my_instance'
service_instance = compose_job_id(service, instance)
cluster = 'fake_cluster'
normal_count = 10
good_task = mock.Mock()
other_task = mock.Mock()
fake_backend = {'status': 'UP', 'lastchg': '1', 'last_chk': 'OK',
'check_code': '200', 'svname': 'ipaddress1:1001_hostname1',
'check_status': 'L7OK', 'check_duration': 1}
with contextlib.nested(
mock.patch('paasta_tools.marathon_tools.load_service_namespace_config', autospec=True),
mock.patch('paasta_tools.marathon_tools.read_namespace_for_service_instance'),
mock.patch('paasta_tools.marathon_serviceinit.get_mesos_slaves_grouped_by_attribute'),
mock.patch('paasta_tools.marathon_serviceinit.get_backends', autospec=True),
mock.patch('paasta_tools.marathon_serviceinit.match_backends_and_tasks', autospec=True),
mock.patch('paasta_tools.marathon_serviceinit.haproxy_backend_report', autospec=True),
) as (
mock_load_service_namespace_config,
mock_read_ns,
mock_get_mesos_slaves_grouped_by_attribute,
mock_get_backends,
mock_match_backends_and_tasks,
mock_haproxy_backend_report,
):
mock_load_service_namespace_config.return_value.get_discover.return_value = 'fake_discover'
mock_read_ns.return_value = instance
mock_get_backends.return_value = [fake_backend]
mock_match_backends_and_tasks.return_value = [
(fake_backend, good_task),
]
tasks = [good_task, other_task]
mock_get_mesos_slaves_grouped_by_attribute.return_value = {
'fake_location1': ['fakehost1'],
'fake_location2': ['fakehost2'],
}
marathon_serviceinit.status_smartstack_backends(
service=service,
instance=instance,
cluster=cluster,
job_config=fake_marathon_job_config,
tasks=tasks,
expected_count=normal_count,
soa_dir=None,
verbose=False,
)
mock_get_backends.assert_any_call(
service_instance,
synapse_host='fakehost1',
synapse_port=3212,
)
mock_get_backends.assert_any_call(
service_instance,
synapse_host='fakehost2',
synapse_port=3212,
)
expected_count_per_location = int(
normal_count / len(mock_get_mesos_slaves_grouped_by_attribute.return_value))
mock_haproxy_backend_report.assert_any_call(expected_count_per_location, 1)
def test_status_smartstack_backends_verbose_multiple_apps():
service = 'my_service'
instance = 'my_instance'
service_instance = compose_job_id(service, instance)
cluster = 'fake_cluster'
good_task = mock.Mock()
bad_task = mock.Mock()
other_task = mock.Mock()
haproxy_backends_by_task = {
good_task: {'status': 'UP', 'lastchg': '1', 'last_chk': 'OK',
'check_code': '200', 'svname': 'ipaddress1:1001_hostname1',
'check_status': 'L7OK', 'check_duration': 1},
bad_task: {'status': 'UP', 'lastchg': '1', 'last_chk': 'OK',
'check_code': '200', 'svname': 'ipaddress2:1002_hostname2',
'check_status': 'L7OK', 'check_duration': 1},
}
with contextlib.nested(
mock.patch('paasta_tools.marathon_tools.load_service_namespace_config', autospec=True),
mock.patch('paasta_tools.marathon_tools.read_namespace_for_service_instance'),
mock.patch('paasta_tools.marathon_serviceinit.get_mesos_slaves_grouped_by_attribute'),
mock.patch('paasta_tools.marathon_serviceinit.get_backends', autospec=True),
mock.patch('paasta_tools.marathon_serviceinit.match_backends_and_tasks', autospec=True),
) as (
mock_load_service_namespace_config,
mock_read_ns,
mock_get_mesos_slaves_grouped_by_attribute,
mock_get_backends,
mock_match_backends_and_tasks,
):
mock_load_service_namespace_config.return_value.get_discover.return_value = 'fake_discover'
mock_read_ns.return_value = instance
mock_get_backends.return_value = haproxy_backends_by_task.values()
mock_match_backends_and_tasks.return_value = [
(haproxy_backends_by_task[good_task], good_task),
(haproxy_backends_by_task[bad_task], None),
(None, other_task),
]
tasks = [good_task, other_task]
mock_get_mesos_slaves_grouped_by_attribute.return_value = {'fake_location1': ['fakehost1']}
actual = marathon_serviceinit.status_smartstack_backends(
service=service,
instance=instance,
cluster=cluster,
job_config=fake_marathon_job_config,
tasks=tasks,
expected_count=len(haproxy_backends_by_task),
soa_dir=None,
verbose=True,
)
mock_get_backends.assert_called_once_with(
service_instance,
synapse_host='fakehost1',
synapse_port=3212,
)
assert "fake_location1" in actual
assert "hostname1:1001" in actual
assert re.search(r"%s[^\n]*hostname2:1002" % re.escape(PaastaColors.GREY), actual)
def test_status_smartstack_backends_verbose_multiple_locations():
service = 'my_service'
instance = 'my_instance'
service_instance = compose_job_id(service, instance)
cluster = 'fake_cluster'
good_task = mock.Mock()
other_task = mock.Mock()
fake_backend = {'status': 'UP', 'lastchg': '1', 'last_chk': 'OK',
'check_code': '200', 'svname': 'ipaddress1:1001_hostname1',
'check_status': 'L7OK', 'check_duration': 1}
fake_other_backend = {'status': 'UP', 'lastchg': '1', 'last_chk': 'OK',
'check_code': '200', 'svname': 'ipaddress1:1002_hostname2',
'check_status': 'L7OK', 'check_duration': 1}
with contextlib.nested(
mock.patch('paasta_tools.marathon_tools.load_service_namespace_config', autospec=True),
mock.patch('paasta_tools.marathon_tools.read_namespace_for_service_instance'),
mock.patch('paasta_tools.marathon_serviceinit.get_mesos_slaves_grouped_by_attribute'),
mock.patch('paasta_tools.marathon_serviceinit.get_backends', autospec=True,
side_effect=[[fake_backend], [fake_other_backend]]),
mock.patch('paasta_tools.marathon_serviceinit.match_backends_and_tasks',
autospec=True, side_effect=[[(fake_backend, good_task)], [(fake_other_backend, good_task)]]),
) as (
mock_load_service_namespace_config,
mock_read_ns,
mock_get_mesos_slaves_grouped_by_attribute,
mock_get_backends,
mock_match_backends_and_tasks,
):
mock_load_service_namespace_config.return_value.get_discover.return_value = 'fake_discover'
mock_read_ns.return_value = instance
tasks = [good_task, other_task]
mock_get_mesos_slaves_grouped_by_attribute.return_value = {
'fake_location1': ['fakehost1'],
'fake_location2': ['fakehost2'],
}
actual = marathon_serviceinit.status_smartstack_backends(
service=service,
instance=instance,
cluster=cluster,
job_config=fake_marathon_job_config,
tasks=tasks,
expected_count=1,
soa_dir=None,
verbose=True,
)
mock_get_backends.assert_any_call(
service_instance,
synapse_host='fakehost1',
synapse_port=3212,
)
mock_get_backends.assert_any_call(
service_instance,
synapse_host='fakehost2',
synapse_port=3212,
)
mock_get_mesos_slaves_grouped_by_attribute.assert_called_once_with(
attribute='fake_discover',
blacklist=[],
)
assert "fake_location1 - %s" % PaastaColors.green('Healthy') in actual
assert "hostname1:1001" in actual
assert "fake_location2 - %s" % PaastaColors.green('Healthy') in actual
assert "hostname2:1002" in actual
def test_status_smartstack_backends_verbose_emphasizes_maint_instances():
service = 'my_service'
instance = 'my_instance'
cluster = 'fake_cluster'
normal_count = 10
good_task = mock.Mock()
other_task = mock.Mock()
fake_backend = {'status': 'MAINT', 'lastchg': '1', 'last_chk': 'OK',
'check_code': '200', 'svname': 'ipaddress1:1001_hostname1',
'check_status': 'L7OK', 'check_duration': 1}
with contextlib.nested(
mock.patch('paasta_tools.marathon_tools.load_service_namespace_config', autospec=True),
mock.patch('paasta_tools.marathon_tools.read_namespace_for_service_instance'),
mock.patch('paasta_tools.marathon_serviceinit.get_mesos_slaves_grouped_by_attribute'),
mock.patch('paasta_tools.marathon_serviceinit.get_backends', autospec=True),
mock.patch('paasta_tools.marathon_serviceinit.match_backends_and_tasks', autospec=True),
) as (
mock_load_service_namespace_config,
mock_read_ns,
mock_get_mesos_slaves_grouped_by_attribute,
mock_get_backends,
mock_match_backends_and_tasks,
):
mock_load_service_namespace_config.return_value.get_discover.return_value = 'fake_discover'
mock_read_ns.return_value = instance
mock_get_backends.return_value = [fake_backend]
mock_match_backends_and_tasks.return_value = [
(fake_backend, good_task),
]
tasks = [good_task, other_task]
mock_get_mesos_slaves_grouped_by_attribute.return_value = {'fake_location1': ['fakehost1']}
actual = marathon_serviceinit.status_smartstack_backends(
service=service,
instance=instance,
cluster=cluster,
job_config=fake_marathon_job_config,
tasks=tasks,
expected_count=normal_count,
soa_dir=None,
verbose=True,
)
assert PaastaColors.red('MAINT') in actual
def test_status_smartstack_backends_verbose_deemphasizes_maint_instances_for_unrelated_tasks():
service = 'my_service'
instance = 'my_instance'
cluster = 'fake_cluster'
normal_count = 10
good_task = mock.Mock()
other_task = mock.Mock()
fake_backend = {'status': 'MAINT', 'lastchg': '1', 'last_chk': 'OK',
'check_code': '200', 'svname': 'ipaddress1:1001_hostname1',
'check_status': 'L7OK', 'check_duration': 1}
with contextlib.nested(
mock.patch('paasta_tools.marathon_tools.load_service_namespace_config', autospec=True),
mock.patch('paasta_tools.marathon_tools.read_namespace_for_service_instance'),
mock.patch('paasta_tools.marathon_serviceinit.get_mesos_slaves_grouped_by_attribute'),
mock.patch('paasta_tools.marathon_serviceinit.get_backends', autospec=True),
mock.patch('paasta_tools.marathon_serviceinit.match_backends_and_tasks', autospec=True),
) as (
mock_load_service_namespace_config,
mock_read_ns,
mock_get_mesos_slaves_grouped_by_attribute,
mock_get_backends,
mock_match_backends_and_tasks,
):
mock_load_service_namespace_config.return_value.get_discover.return_value = 'fake_discover'
mock_read_ns.return_value = instance
mock_get_backends.return_value = [fake_backend]
mock_match_backends_and_tasks.return_value = [
(fake_backend, None),
]
tasks = [good_task, other_task]
mock_get_mesos_slaves_grouped_by_attribute.return_value = {'fake_location1': ['fakehost1']}
actual = marathon_serviceinit.status_smartstack_backends(
service=service,
instance=instance,
cluster=cluster,
job_config=fake_marathon_job_config,
tasks=tasks,
expected_count=normal_count,
soa_dir=None,
verbose=True,
)
assert PaastaColors.red('MAINT') not in actual
assert re.search(r"%s[^\n]*hostname1:1001" % re.escape(PaastaColors.GREY), actual)
def test_haproxy_backend_report_healthy():
normal_count = 10
actual_count = 11
status = marathon_serviceinit.haproxy_backend_report(normal_count, actual_count)
assert "Healthy" in status
def test_haproxy_backend_report_critical():
normal_count = 10
actual_count = 1
status = marathon_serviceinit.haproxy_backend_report(normal_count, actual_count)
assert "Critical" in status
def test_get_short_task_id():
task_id = 'service.instance.githash.confighash.uuid'
assert marathon_serviceinit.get_short_task_id(task_id) == 'uuid'
def test_status_mesos_tasks_working():
with mock.patch('paasta_tools.marathon_serviceinit.get_running_tasks_from_active_frameworks') as mock_tasks:
mock_tasks.return_value = [
{'id': 1}, {'id': 2}
]
normal_count = 2
actual = marathon_serviceinit.status_mesos_tasks('unused', 'unused', normal_count)
assert 'Healthy' in actual
def test_status_mesos_tasks_warning():
with mock.patch('paasta_tools.marathon_serviceinit.get_running_tasks_from_active_frameworks') as mock_tasks:
mock_tasks.return_value = [
{'id': 1}, {'id': 2}
]
normal_count = 4
actual = marathon_serviceinit.status_mesos_tasks('unused', 'unused', normal_count)
assert 'Warning' in actual
def test_status_mesos_tasks_critical():
with mock.patch('paasta_tools.marathon_serviceinit.get_running_tasks_from_active_frameworks') as mock_tasks:
mock_tasks.return_value = []
normal_count = 10
actual = marathon_serviceinit.status_mesos_tasks('unused', 'unused', normal_count)
assert 'Critical' in actual
def test_perform_command_handles_no_docker_and_doesnt_raise():
fake_service = 'fake_service'
fake_instance = 'fake_instance'
fake_cluster = 'fake_cluster'
soa_dir = 'fake_soa_dir'
with contextlib.nested(
mock.patch('paasta_tools.marathon_serviceinit.marathon_tools.load_marathon_config', autospec=True),
mock.patch('paasta_tools.marathon_serviceinit.marathon_tools.load_marathon_service_config', autospec=True),
mock.patch('paasta_tools.marathon_serviceinit.marathon_tools.create_complete_config', autospec=True),
) as (
mock_load_marathon_config,
mock_load_marathon_service_config,
mock_create_complete_config,
):
mock_create_complete_config.side_effect = NoDockerImageError()
actual = marathon_serviceinit.perform_command(
'start', fake_service, fake_instance, fake_cluster, False, soa_dir)
assert actual == 1
def test_pretty_print_smartstack_backends_for_locations_verbose():
hosts_grouped_by_location = {'place1': ['host1'], 'place2': ['host2'], 'place3': ['host3']}
host_ip_mapping = {
'host1': '169.254.123.1',
'host2': '169.254.123.2',
'host3': '169.254.123.3',
}
tasks = [
mock.Mock(host='host1', ports=[1234]),
mock.Mock(host='host2', ports=[1234]),
mock.Mock(host='host3', ports=[1234])
]
backends = {
'host1': {
'svname': '169.254.123.1:1234_host1',
'status': 'UP',
'check_status': 'L7OK',
'check_code': '200',
'check_duration': 4,
'lastchg': 0
},
'host2': {
'svname': '169.254.123.2:1234_host2',
'status': 'UP',
'check_status': 'L7OK',
'check_code': '200',
'check_duration': 4,
'lastchg': 0
},
'host3': {
'svname': '169.254.123.3:1234_host3',
'status': 'UP',
'check_status': 'L7OK',
'check_code': '200',
'check_duration': 4,
'lastchg': 0
},
}
with contextlib.nested(
mock.patch('paasta_tools.marathon_serviceinit.get_backends', autospec=True,
side_effect=lambda _, synapse_host, synapse_port: [backends[synapse_host]]),
mock.patch('socket.gethostbyname', side_effect=lambda name: host_ip_mapping[name], autospec=True),
) as (
mock_get_backends,
mock_gethostbyname,
):
actual = marathon_serviceinit.pretty_print_smartstack_backends_for_locations(
service_instance='fake_service.fake_instance',
tasks=tasks,
locations=hosts_grouped_by_location,
expected_count=3,
verbose=True,
)
colorstripped_actual = [remove_ansi_escape_sequences(l) for l in actual]
assert colorstripped_actual == [
' Name LastCheck LastChange Status',
' place1 - Healthy - in haproxy with (1/1) total backends UP in this namespace.',
' host1:1234 L7OK/200 in 4ms now UP',
' place2 - Healthy - in haproxy with (1/1) total backends UP in this namespace.',
' host2:1234 L7OK/200 in 4ms now UP',
' place3 - Healthy - in haproxy with (1/1) total backends UP in this namespace.',
' host3:1234 L7OK/200 in 4ms now UP',
]
# vim: tabstop=4 expandtab shiftwidth=4 softtabstop=4
47534382c075ae36e2cc12a0c8831b500382af18 | 97 | py | Python | py_moysklad/entities/assortment.py | upmarket-cc/py_moysklad | e026e611344c38f8a8d4f428781fcfb315aaaa60 | [
"MIT"
] | null | null | null | py_moysklad/entities/assortment.py | upmarket-cc/py_moysklad | e026e611344c38f8a8d4f428781fcfb315aaaa60 | [
"MIT"
] | null | null | null | py_moysklad/entities/assortment.py | upmarket-cc/py_moysklad | e026e611344c38f8a8d4f428781fcfb315aaaa60 | [
"MIT"
] | null | null | null | from py_moysklad.entities.meta_entity import MetaEntity
class Assortment(MetaEntity):
pass
475645b459353b3d16f6cc54ff8a637947d88d5e | 12,110 | py | Python | test_problems/sample_driftless_rw.py | umautobots/osp | d055f1c846f907445186b9dea7da2d4dca4790a6 | [
"MIT"
] | 1 | 2021-11-08T07:27:39.000Z | 2021-11-08T07:27:39.000Z | test_problems/sample_driftless_rw.py | swb19/osp | d055f1c846f907445186b9dea7da2d4dca4790a6 | [
"MIT"
] | null | null | null | test_problems/sample_driftless_rw.py | swb19/osp | d055f1c846f907445186b9dea7da2d4dca4790a6 | [
"MIT"
] | 2 | 2021-07-24T21:27:56.000Z | 2021-10-31T14:13:20.000Z | import numpy as np
def sample_single_exact_v0(v, n_samples, sigma, sigma_v):
"""
Sample random walk values from partial observations
- exact so no weighting needed
:param v: k, 2 | time series of velocities
- nan in timesteps to sample by bridge/walking
valid values in timesteps to use
- at least 1 valid value must exist
:param n_samples:
:param sigma: sd of valid v, e.g. sqrt(2)*sigma_x/dt for differences
:param sigma_v:
:return: n_samples, k, 2
"""
k = v.shape[0]
v_mask = ~np.isnan(v).any(axis=1)
v_inds = np.arange(k)[v_mask]
m = v_inds.size
# S \in 2m, 2k - observations
S = np.zeros((m, k))
S[np.arange(m), v_inds] = 1.
S = np.kron(S, np.eye(2))
# C \in 2(k-1), 2k - differences
C = np.zeros((k-1, k))
inds = np.arange(0, (k-1)*k, k+1)
C.flat[inds] = -1.
C.flat[inds+1] = 1
C = np.kron(C, np.eye(2))
var = np.linalg.inv(S.T.dot(S) / sigma**2 + C.T.dot(C) / sigma_v**2)
m_vec = S.T.dot(v[v_mask].reshape(-1)) / sigma**2
m = var.dot(m_vec)
L = np.linalg.cholesky(var)
# n_samples, 2k
v_s = m + (L.dot(np.random.randn(2*k, n_samples))).T
return v_s.reshape(n_samples, k, 2)
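The exact samplers above build their matrices as Kronecker products with `I_2`, since the x and y coordinates are independent. The speedups in the v1/v2 variants rest on the mixed-product identity `kron(A, I) @ kron(B, I) == kron(A @ B, I)` and its consequence for inverses, which lets a 2k x 2k inverse be computed as a k x k inverse kron'd with `I_2`. A small self-contained check of both identities (a sketch, not part of the original module):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4
A = rng.standard_normal((k, k))
B = rng.standard_normal((k, k))
I2 = np.eye(2)

# Mixed-product identity: kron(A, I2) @ kron(B, I2) == kron(A @ B, I2)
lhs = np.kron(A, I2) @ np.kron(B, I2)
rhs = np.kron(A @ B, I2)
ok_product = np.allclose(lhs, rhs)

# The inverse factors the same way: inv(kron(M, I2)) == kron(inv(M), I2)
spd = A @ A.T + k * np.eye(k)  # symmetric positive definite, safely invertible
ok_inverse = np.allclose(np.linalg.inv(np.kron(spd, I2)),
                         np.kron(np.linalg.inv(spd), I2))
```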
def sample_single_exact_v1(v, n_samples, sigma, sigma_v):
"""
Sample random walk values from partial observations
- exact so no weighting needed
:param v: k, 2 | time series of velocities
- nan in timesteps to sample by bridge/walking
valid values in timesteps to use
- at least 1 valid value must exist
:param n_samples:
:param sigma: sd of valid v, e.g. sqrt(2)*sigma_x/dt for differences
:param sigma_v:
:return: n_samples, k, 2
"""
k = v.shape[0]
v_mask = ~np.isnan(v).any(axis=1)
v_inds = np.arange(k)[v_mask]
m = v_inds.size
# S \in 2m, 2k - observations
S = np.zeros((m, k))
S[np.arange(m), v_inds] = 1.
StS = np.kron(S.T.dot(S), np.eye(2))
Sty = np.kron(S.T, np.eye(2)).dot(v[v_mask].reshape(-1)) #
# C \in 2(k-1), 2k - differences
C = np.zeros((k-1, k))
inds = np.arange(0, (k-1)*k, k+1)
C.flat[inds] = -1.
C.flat[inds+1] = 1
CtC = np.kron(C.T.dot(C), np.eye(2))
var = np.linalg.inv(StS / sigma**2 + CtC / sigma_v**2)
m_vec = Sty / sigma**2
m = var.dot(m_vec)
L = np.linalg.cholesky(var)
# n_samples, 2k
v_s = m + (L.dot(np.random.randn(2*k, n_samples))).T
return v_s.reshape(n_samples, k, 2)
def sample_single_exact_v2(v, n_samples, sigma, sigma_v):
"""
Sample random walk values from partial observations
- exact so no weighting needed
:param v: k, 2 | time series of velocities
- nan in timesteps to sample by bridge/walking
valid values in timesteps to use
- at least 1 valid value must exist
:param n_samples:
:param sigma: sd of valid v, e.g. sqrt(2)*sigma_x/dt for differences
:param sigma_v:
:return: n_samples, k, 2
"""
k = v.shape[0]
v_mask = ~np.isnan(v).any(axis=1)
v_inds = np.arange(k)[v_mask]
m = v_inds.size
# S \in 2m, 2k - observations
S = np.zeros((m, k))
S[np.arange(m), v_inds] = 1.
StS0 = S.T.dot(S)
Sty = np.kron(S.T, np.eye(2)).dot(v[v_mask].reshape(-1))
# C \in 2(k-1), 2k - differences
# C = np.zeros((k-1, k))
# inds = np.arange(0, (k-1)*k, k+1)
# C.flat[inds] = -1.
# C.flat[inds+1] = 1
# CtC0 = C.T.dot(C)
# -
CtC0 = np.zeros((k, k))
inds = np.arange(1, k * k, k+1)
CtC0.flat[inds] = -1
CtC0.flat[np.arange(k, k * k, k+1)] = -1
CtC0.flat[inds-1] = 2
CtC0[0, 0] = 1
CtC0[-1, -1] = 1
inv0 = np.linalg.inv(StS0 / sigma**2 + CtC0 / sigma_v**2)
var = np.kron(inv0, np.eye(2))
m_vec = Sty / sigma**2
m = var.dot(m_vec)
L = np.linalg.cholesky(var)
# L = np.kron(np.linalg.cholesky(inv0), np.eye(2))
# -
# inv0 = np.linalg.inv(StS0 / sigma**2 + CtC0 / sigma_v**2)
# var = np.kron(inv0, np.eye(2))
# m_vec = Sty / sigma**2
# m = var.dot(m_vec)
# L = np.kron(np.linalg.cholesky(inv0), np.eye(2))
# -
# spc = StS0 / sigma ** 2 + CtC0 / sigma_v ** 2
# Linv = np.linalg.cholesky(spc)
# m_vec = Sty / sigma ** 2
# m = np.linalg.solve(np.kron(spc, np.eye(2)), m_vec)
# L = np.kron(np.linalg.inv(Linv), np.eye(2))
# n_samples, 2k
v_s = m + (L.dot(np.random.randn(2*k, n_samples))).T
return v_s.reshape(n_samples, k, 2)
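`sample_single_exact_v2` skips forming the difference operator `C` and writes `C.T @ C` directly via flat indexing: it is the tridiagonal path-graph Laplacian with diagonal `[1, 2, ..., 2, 1]` and -1 on the off-diagonals. A standalone check that the direct build matches `C.T @ C` for a hypothetical small `k` (a sketch, not part of the original module):

```python
import numpy as np

k = 6  # hypothetical small size for the check

# Difference operator C in (k-1, k), rows e_{i+1} - e_i, built as in v0/v1.
C = np.zeros((k - 1, k))
idx = np.arange(0, (k - 1) * k, k + 1)
C.flat[idx] = -1.0
C.flat[idx + 1] = 1.0

# Direct tridiagonal build of C^T C, as done in sample_single_exact_v2.
CtC = np.zeros((k, k))
off = np.arange(1, k * k, k + 1)
CtC.flat[off] = -1.0                          # superdiagonal
CtC.flat[np.arange(k, k * k, k + 1)] = -1.0   # subdiagonal
CtC.flat[off - 1] = 2.0                       # main diagonal (interior)
CtC[0, 0] = 1.0                               # boundary entries
CtC[-1, -1] = 1.0

ok_laplacian = np.allclose(C.T @ C, CtC)
```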
def sample_n_exact_v0(v, n_samples, sigma, sigma_v):
"""
Sample random walk values from partial observations
:param v: n, k, 2 | n independent time series of velocities
- nan in timesteps to sample by bridge/walking
valid values in timesteps to use
- at least 1 valid value must exist in each series
:param n_samples:
:param sigma: sd of valid v, e.g. sqrt(2)*sigma_x/dt for differences
:param sigma_v:
:return: n, n_samples, k, 2
"""
n, k = v.shape[:2]
v_s = np.empty((n, n_samples, k, 2))
for i in range(n):
v_s[i, ...] = sample_single_exact_v0(
v[i, ...], n_samples, sigma, sigma_v)
return v_s
def sample_single_v0(v, n_samples, sigma, sigma_v):
"""
Sample random walk values from partial observations
- also computes the nll of the sampled values at consecutive
observed timesteps, which are coupled through sigma_v
:param v: k, 2 | time series of velocities
- nan in timesteps to sample by bridge/walking
valid values in timesteps to use
- at least 1 valid value must exist
:param n_samples:
:param sigma: sd of valid v, e.g. sqrt(2)*sigma_x/dt for differences
:param sigma_v:
:return:
v_s: n_samples, k, 2
nll: n_samples,
"""
k = v.shape[0]
v_s = np.empty((n_samples, k, 2))
v_mask = ~np.isnan(v).any(axis=1)
v_inds = np.arange(k)[v_mask]
v_s[:, v_mask, :] = v[v_mask]
v_s[:, v_mask, :] += np.random.randn(n_samples, v_inds.size, 2) * sigma
c_inds = v_inds[:-1][(v_inds[1:] - v_inds[:-1]) == 1]
nll = ((v_s[:, c_inds, :] -
v_s[:, c_inds+1, :])**2).sum(axis=(1, 2)) * 0.5 / sigma_v
# back
rv = np.random.randn(n_samples, v_inds[0], 2) * sigma_v
v_s[:, :v_inds[0], :] = rv.cumsum(axis=1)[:, ::-1, :] + v[v_inds[0]]
# forward
rv = np.random.randn(n_samples, k - v_inds[-1]-1, 2) * sigma_v
v_s[:, v_inds[-1]+1:, :] = rv.cumsum(axis=1) + v[v_inds[-1]]
# interpolate
for i in range(1, v_inds.size):
k_i = v_inds[i] - v_inds[i-1]
from_l = np.arange(k_i-1) + 1
from_r = from_l[::-1]
mean = (v[[v_inds[i-1]]] * from_r[:, np.newaxis] +
v[[v_inds[i]]] * from_l[:, np.newaxis]) / k_i
sig = np.sqrt(from_r * from_l / k_i) * sigma_v
rv = mean + np.random.randn(n_samples, k_i - 1, 2) *\
sig[np.newaxis, :, np.newaxis]
v_s[:, v_inds[i-1]+1:v_inds[i], :] = rv
return v_s, nll
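The interpolation loop above samples each missing step from the Brownian-bridge law: between observations `k_i` steps apart, the point `j` steps from the left endpoint has mean `(left*(k_i - j) + right*j) / k_i` and variance `j*(k_i - j)/k_i * sigma_v**2`. A Monte-Carlo sanity check of that variance, built by pinning a plain random walk back to its endpoint (a sketch, not from the original module; the constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k_i, sigma_v = 200_000, 5, 0.7
j = 2  # interior step whose conditional variance we check

# Random walks started at 0; W[:, t-1] is the position after t steps.
steps = rng.standard_normal((n, k_i)) * sigma_v
W = np.cumsum(steps, axis=1)

# Pinning the endpoint back to 0 gives a bridge: B_j = W_j - (j/k_i) * W_{k_i}.
B_j = W[:, j - 1] - (j / k_i) * W[:, -1]

# Var(B_j) = j*(k_i - j)/k_i * sigma_v**2, matching the `sig` used above.
expected_sd = np.sqrt(j * (k_i - j) / k_i) * sigma_v
ok_bridge_sd = abs(B_j.std() - expected_sd) < 0.01
ok_bridge_mean = abs(B_j.mean()) < 0.01
```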
def sample_single_v1(v, n_samples, sigma, sigma_v):
"""
Sample random walk values from partial observations
- also computes the nll of the sampled values at consecutive
observed timesteps, which are coupled through sigma_v
:param v: k, 2 | time series of velocities
- nan in timesteps to sample by bridge/walking
valid values in timesteps to use
- at least 1 valid value must exist
:param n_samples:
:param sigma: sd of valid v, e.g. sqrt(2)*sigma_x/dt for differences
:param sigma_v:
:return:
v_s: n_samples, k, 2
nll: n_samples,
"""
k = v.shape[0]
v_s = np.empty((n_samples, k, 2))
v_mask = ~np.isnan(v).any(axis=1)
v_inds = np.arange(k)[v_mask]
v_s[:, v_mask, :] = v[v_mask]
v_s[:, v_mask, :] += np.random.randn(n_samples, v_inds.size, 2) * sigma
c_inds = v_inds[:-1][(v_inds[1:] - v_inds[:-1]) == 1]
nll = ((v_s[:, c_inds, :] -
v_s[:, c_inds+1, :])**2).sum(axis=(1, 2)) * 0.5 / sigma_v
# back
rv = np.random.randn(n_samples, v_inds[0], 2) * sigma_v
v_s[:, :v_inds[0], :] = rv.cumsum(axis=1)[:, ::-1, :] + v[v_inds[0]]
# forward
rv = np.random.randn(n_samples, k - v_inds[-1] - 1, 2) * sigma_v
v_s[:, v_inds[-1] + 1:, :] = rv.cumsum(axis=1) + v[v_inds[-1]]
# interpolate
from_l_inds = v_mask.cumsum() - 1
m = from_l_inds > -1
from_l_inds[m] = v_inds[from_l_inds[m]]
# :v_inds[0] values are undefined
from_l = 1 - v_mask
from_l[v_inds] = -np.hstack((v_inds[0], np.diff(v_inds)-1))
from_l = from_l.cumsum()
from_r_inds = v_mask[::-1].cumsum()[::-1] - 1
m = from_r_inds > -1
from_r_inds[m] = v_inds[v_inds.size - from_r_inds[m]-1]
# v_inds[-1]: values are undefined
from_r = 1 - v_mask
from_r[v_inds] = 1-np.hstack((np.diff(v_inds), k - v_inds[-1]))
from_r = from_r[::-1].cumsum()[::-1]
im = (v_inds[0] < np.arange(k)) & (np.arange(k) < v_inds[-1]) &\
(from_l > 0) & (from_r > 0)
k_interp = im.sum()
from_l_inds = from_l_inds[im]
from_l = from_l[im]
from_r_inds = from_r_inds[im]
from_r = from_r[im]
mean = (v[from_l_inds].reshape(-1, 2) * from_r[:, np.newaxis] +
v[from_r_inds].reshape(-1, 2) * from_l[:, np.newaxis])
gaps = from_r_inds - from_l_inds
mean /= gaps[:, np.newaxis]
sig = np.sqrt(from_r * from_l / gaps) * sigma_v
rv = mean + np.random.randn(n_samples, k_interp, 2) * \
sig[np.newaxis, :, np.newaxis]
v_s[:, im, :] = rv
return v_s, nll
def sample_n_v0(v, n_samples, sigma, sigma_v):
"""
Sample random walk values from partial observations
:param v: n, k, 2 | n independent time series of velocities
- nan in timesteps to sample by bridge/walking
valid values in timesteps to use
- at least 1 valid value must exist in each series
:param n_samples:
:param sigma: sd of valid v, e.g. sqrt(2)*sigma_x/dt for differences
:param sigma_v:
:return:
v_s: n, n_samples, k, 2
nll: n, n_samples
"""
n, k = v.shape[:2]
v_s = np.empty((n, n_samples, k, 2))
nll = np.empty((n, n_samples))
for i in range(n):
v_s[i, ...], nll[i, :] = sample_single_v1(
v[i, ...], n_samples, sigma, sigma_v)
return v_s, nll
def main_sample_single_exact():
from timeit import timeit
seed = np.random.randint(0, 1000)
seed = 0
np.random.seed(seed)
print('seed: {}'.format(seed))
k = 30
v = np.random.randn(k, 2)
is_nan = np.random.randn(k) > 0
if np.all(is_nan):
is_nan[1] = False
v[is_nan] = np.nan
# print(v)
n_samples = 100
sigma = 0.1
sigma_v = 0.05
args = (v, n_samples, sigma, sigma_v)
print('---------------')
np.random.seed(seed)
x_true = sample_single_exact_v0(*args)
np.random.seed(seed)
x_hat = sample_single_exact_v2(*args)
print('diff: {:0.4f}'.format(np.linalg.norm(x_true - x_hat)))
# print(x_true)
# print(x_hat)
n_tries = 2
print(timeit('f(*args)', number=n_tries, globals=dict(f=sample_single_exact_v0, args=args))/n_tries)
print(timeit('f(*args)', number=n_tries, globals=dict(f=sample_single_exact_v2, args=args))/n_tries)
def main_sample_single():
    from timeit import timeit

    seed = np.random.randint(0, 1000)
    seed = 0
    np.random.seed(seed)
    print('seed: {}'.format(seed))

    k = 30
    v = np.random.randn(k, 2)
    is_nan = np.random.randn(k) > 0
    if np.all(is_nan):
        is_nan[1] = False
    v[is_nan] = np.nan
    # print(v)

    n_samples = 100
    sigma = 0.
    sigma_v = 0.1
    args = (v, n_samples, sigma, sigma_v)

    print('---------------')
    np.random.seed(seed)
    x_true = sample_single_v0(*args)
    np.random.seed(seed)
    x_hat = sample_single_v1(*args)
    print('diff: {:0.4f}'.format(np.linalg.norm(x_true[0] - x_hat[0])))
    print('diff: {:0.4f}'.format(np.linalg.norm(x_true[1] - x_hat[1])))
    # print(x_true)
    # print(x_hat)

    n_tries = 2
    print(timeit('f(*args)', number=n_tries, globals=dict(f=sample_single_v0, args=args)) / n_tries)
    print(timeit('f(*args)', number=n_tries, globals=dict(f=sample_single_v1, args=args)) / n_tries)
if __name__ == '__main__':
    # main_sample_single()
    main_sample_single_exact()
| 32.553763 | 104 | 0.586292 | 2,155 | 12,110 | 3.120186 | 0.067749 | 0.061868 | 0.02677 | 0.020821 | 0.85351 | 0.817222 | 0.805622 | 0.787478 | 0.769631 | 0.769631 | 0 | 0.033366 | 0.247647 | 12,110 | 371 | 105 | 32.641509 | 0.704643 | 0.308258 | 0 | 0.580808 | 0 | 0 | 0.015719 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.015152 | 0 | 0.09596 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
476505f71aaec6d92367558299d8993f48dc0465 | 149 | py | Python | generate.py | Mandera/generalgui | 4cc53bc19464d2032d4fb2fee7404cca338c9b5d | [
"MIT"
] | 1 | 2021-03-12T12:48:59.000Z | 2021-03-12T12:48:59.000Z | generate.py | Mandera/generalgui | 4cc53bc19464d2032d4fb2fee7404cca338c9b5d | [
"MIT"
] | null | null | null | generate.py | Mandera/generalgui | 4cc53bc19464d2032d4fb2fee7404cca338c9b5d | [
"MIT"
] | null | null | null |
from generalpackager import Packager
Packager("generalgui").generate_localfiles(print_out=True)
| 2.660714 | 58 | 0.557047 | 11 | 149 | 7.363636 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.389262 | 149 | 55 | 59 | 2.709091 | 0.89011 | 0 | 0 | 0 | 1 | 0 | 0.103093 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
47ae887950961651412d9b5ed6915ba5c5c78ee6 | 29 | py | Python | packages/server/invites/src/ports/providers/__init__.py | gbartoczevicz/moosic | 003ff5cff628505093cc08ad0fbd273272172a51 | [
"MIT"
] | 3 | 2021-09-30T00:41:31.000Z | 2022-03-15T00:14:23.000Z | packages/server/invites/src/ports/providers/__init__.py | gbartoczevicz/moosic | 003ff5cff628505093cc08ad0fbd273272172a51 | [
"MIT"
] | 13 | 2021-09-20T22:29:52.000Z | 2021-12-05T01:59:34.000Z | packages/server/invites/src/ports/providers/__init__.py | gabrielbartoczevicz/moosic | 003ff5cff628505093cc08ad0fbd273272172a51 | [
"MIT"
] | 1 | 2021-11-10T22:11:55.000Z | 2021-11-10T22:11:55.000Z | from .jwt import JwtProvider
| 14.5 | 28 | 0.827586 | 4 | 29 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7be8af517158284b5a23e10b06f7cc0174234dc4 | 135 | py | Python | wallpaper_finder/__init__.py | andrei-kh/wallpaper-finder | c2c8672bdbc0af597649542886f86c5deadb7f3a | [
"MIT"
] | null | null | null | wallpaper_finder/__init__.py | andrei-kh/wallpaper-finder | c2c8672bdbc0af597649542886f86c5deadb7f3a | [
"MIT"
] | null | null | null | wallpaper_finder/__init__.py | andrei-kh/wallpaper-finder | c2c8672bdbc0af597649542886f86c5deadb7f3a | [
"MIT"
] | null | null | null | from .reddit_pictures import RedditPicturesLoader
from .reddit_pictures_api import RedditPicturesLoaderApi
from .utils import FileUtils | 45 | 56 | 0.896296 | 15 | 135 | 7.866667 | 0.6 | 0.169492 | 0.305085 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081481 | 135 | 3 | 57 | 45 | 0.951613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d00ae7254348198f6d5e57092d04d4c12befa678 | 28 | py | Python | Contributions/pythonprint.py | OluSure/Hacktoberfest2021-1 | ad1bafb0db2f0cdeaae8f87abbaa716638c5d2ea | [
"MIT"
] | 215 | 2021-10-01T08:18:16.000Z | 2022-03-29T04:12:03.000Z | Contributions/pythonprint.py | OluSure/Hacktoberfest2021-1 | ad1bafb0db2f0cdeaae8f87abbaa716638c5d2ea | [
"MIT"
] | 51 | 2021-10-01T08:16:42.000Z | 2021-10-31T13:51:51.000Z | Contributions/pythonprint.py | OluSure/Hacktoberfest2021-1 | ad1bafb0db2f0cdeaae8f87abbaa716638c5d2ea | [
"MIT"
] | 807 | 2021-10-01T08:11:45.000Z | 2021-11-21T18:57:09.000Z | print("you are great guys")
| 14 | 27 | 0.714286 | 5 | 28 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d044a017494ad305c8ecf63fd589320d47d50463 | 36 | py | Python | Tutorials/modules_2.py | gittygupta/Python-Learning-Course | 8e571c06f78b03f6725fe02f742eb40c3eed12e3 | [
"Unlicense"
] | null | null | null | Tutorials/modules_2.py | gittygupta/Python-Learning-Course | 8e571c06f78b03f6725fe02f742eb40c3eed12e3 | [
"Unlicense"
] | null | null | null | Tutorials/modules_2.py | gittygupta/Python-Learning-Course | 8e571c06f78b03f6725fe02f742eb40c3eed12e3 | [
"Unlicense"
] | null | null | null | def test():
print('am a coder') | 18 | 23 | 0.555556 | 6 | 36 | 3.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 36 | 2 | 23 | 18 | 0.740741 | 0 | 0 | 0 | 0 | 0 | 0.277778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d09bda925b0145a132083d96e8c12e55d5e463dc | 211 | py | Python | Problems/Multiple cases/task.py | gabrielizalo/jetbrains-academy-python-coffee-machine | e22cb502f7998855ef4afbc4ef7ecb8226418225 | [
"MIT"
] | null | null | null | Problems/Multiple cases/task.py | gabrielizalo/jetbrains-academy-python-coffee-machine | e22cb502f7998855ef4afbc4ef7ecb8226418225 | [
"MIT"
] | null | null | null | Problems/Multiple cases/task.py | gabrielizalo/jetbrains-academy-python-coffee-machine | e22cb502f7998855ef4afbc4ef7ecb8226418225 | [
"MIT"
] | null | null | null | def f1(x):
    return 1 + (x * x)


def f2(x):
    return 1 / (x * x)


def f3(x):
    return -1 + (x * x)


def f(x):
    if x <= 0:
        return f1(x)
    if 0 < x < 1:
        return f2(x)
    return f3(x)
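A quick spot check of the three branches of `f` (the definitions are repeated so the snippet runs on its own; the probe values are chosen purely for illustration):

```python
def f1(x):
    return 1 + (x * x)

def f2(x):
    return 1 / (x * x)

def f3(x):
    return -1 + (x * x)

def f(x):
    # x <= 0 -> 1 + x^2; 0 < x < 1 -> 1 / x^2; x >= 1 -> x^2 - 1
    if x <= 0:
        return f1(x)
    if 0 < x < 1:
        return f2(x)
    return f3(x)

print(f(-2.0))  # 5.0
print(f(0.5))   # 4.0
print(f(2.0))   # 3.0
```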
| 11.105263 | 23 | 0.407583 | 40 | 211 | 2.15 | 0.25 | 0.325581 | 0.27907 | 0.313953 | 0.453488 | 0.453488 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 0.417062 | 211 | 18 | 24 | 11.722222 | 0.601626 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.25 | 0.833333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
d0ba0f0298c1ba32a95a82c1913bf9bf3000317c | 204 | py | Python | employee_det/chalicelib/views/__init__.py | cadenababa/employee-details | f537dd797e20fa8aaf6bfe3bef20106dd8073ad1 | [
"MIT"
] | null | null | null | employee_det/chalicelib/views/__init__.py | cadenababa/employee-details | f537dd797e20fa8aaf6bfe3bef20106dd8073ad1 | [
"MIT"
] | null | null | null | employee_det/chalicelib/views/__init__.py | cadenababa/employee-details | f537dd797e20fa8aaf6bfe3bef20106dd8073ad1 | [
"MIT"
] | 1 | 2020-12-13T07:16:28.000Z | 2020-12-13T07:16:28.000Z | from chalicelib.views.views_ext import *
from chalicelib.views.views import *
from chalicelib.settings import *
DynaboDB = DynaboDB()
RestViewIndex = RestViewIndex()
RestViewEmployee = RestViewEmployee() | 29.142857 | 40 | 0.813725 | 21 | 204 | 7.857143 | 0.428571 | 0.254545 | 0.230303 | 0.290909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102941 | 204 | 7 | 41 | 29.142857 | 0.901639 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d0edab7d6fc07aec68de0306b76d7b0cc88cf54a | 38 | py | Python | kearsley/__init__.py | martxelo/kearsley-algorithm | 99d7ebf3020746ad64affb494360e596f45d7ef8 | [
"MIT"
] | 1 | 2021-03-20T16:58:24.000Z | 2021-03-20T16:58:24.000Z | kearsley/__init__.py | martxelo/kearsley-algorithm | 99d7ebf3020746ad64affb494360e596f45d7ef8 | [
"MIT"
] | null | null | null | kearsley/__init__.py | martxelo/kearsley-algorithm | 99d7ebf3020746ad64affb494360e596f45d7ef8 | [
"MIT"
] | null | null | null | from kearsley.kearsley import Kearsley | 38 | 38 | 0.894737 | 5 | 38 | 6.8 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d0fe1d0d1445787879623450738a4f55b1ea61b4 | 120 | py | Python | polybot/test.py | evanpcosta/IEEEPolybot | 75fd70680f4f9fec8b1b77b4e116e4869eb8c079 | [
"Apache-2.0"
] | null | null | null | polybot/test.py | evanpcosta/IEEEPolybot | 75fd70680f4f9fec8b1b77b4e116e4869eb8c079 | [
"Apache-2.0"
] | null | null | null | polybot/test.py | evanpcosta/IEEEPolybot | 75fd70680f4f9fec8b1b77b4e116e4869eb8c079 | [
"Apache-2.0"
] | 1 | 2021-03-07T20:46:43.000Z | 2021-03-07T20:46:43.000Z | import implementML
implementML.implementML("C:/Users/alanx/OneDrive/Desktop/keras/IEEE/example_data.csv", 10, "random") | 40 | 100 | 0.808333 | 16 | 120 | 6 | 0.875 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017391 | 0.041667 | 120 | 3 | 100 | 40 | 0.817391 | 0 | 0 | 0 | 0 | 0 | 0.53719 | 0.487603 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
190d5756c2d90660586fc6f7c1e1ba68830badd7 | 73 | py | Python | backend/app/rest/users/views.py | air-services/boilerplate | 8c69927c299882048f9aaaaa58483eea2a6a1cd0 | [
"MIT"
] | null | null | null | backend/app/rest/users/views.py | air-services/boilerplate | 8c69927c299882048f9aaaaa58483eea2a6a1cd0 | [
"MIT"
] | null | null | null | backend/app/rest/users/views.py | air-services/boilerplate | 8c69927c299882048f9aaaaa58483eea2a6a1cd0 | [
"MIT"
] | null | null | null | from app.core.crud import CrudView
class UsersView(CrudView):
    pass
| 12.166667 | 34 | 0.753425 | 10 | 73 | 5.5 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178082 | 73 | 5 | 35 | 14.6 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
ef9314abfbbf468195cd280bdf394e6dee6287cb | 29 | py | Python | backEnd/app/api/user/__init__.py | HenryYDJ/flaskAPI | 9947fbad1050d6ba29c29e365c561689ea3e22d8 | [
"MIT"
] | null | null | null | backEnd/app/api/user/__init__.py | HenryYDJ/flaskAPI | 9947fbad1050d6ba29c29e365c561689ea3e22d8 | [
"MIT"
] | 14 | 2022-03-27T13:34:58.000Z | 2022-03-31T14:37:19.000Z | backEnd/app/api/user/__init__.py | HenryYDJ/flaskAPI | 9947fbad1050d6ba29c29e365c561689ea3e22d8 | [
"MIT"
] | null | null | null | from app.api.user import user | 29 | 29 | 0.827586 | 6 | 29 | 4 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
efa0f9b9bda9f466f75748a55c1cb3f47fd872c6 | 48 | py | Python | modules/vae_loss/__init__.py | adnortje/deepvideo | 76d09ee8696355bc29ee57c1ef2ff61474c5ed41 | [
"CC0-1.0"
] | 37 | 2019-11-23T06:42:12.000Z | 2022-01-25T16:08:28.000Z | modules/vae_loss/__init__.py | sangramch/deepvideo | 16e622434b9843238b8092f94da2c58a4346788d | [
"CC0-1.0"
] | 4 | 2020-04-11T12:36:27.000Z | 2021-07-26T10:12:53.000Z | modules/vae_loss/__init__.py | sangramch/deepvideo | 16e622434b9843238b8092f94da2c58a4346788d | [
"CC0-1.0"
] | 9 | 2019-12-13T07:30:58.000Z | 2020-07-15T05:32:17.000Z | # import vae loss
from .vae_loss import VAELoss
| 16 | 29 | 0.791667 | 8 | 48 | 4.625 | 0.625 | 0.378378 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 48 | 2 | 30 | 24 | 0.925 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
efb5692acb75ef5d2fea12ab45f924661cba32db | 273 | py | Python | Python/easy/e355.py | tlgs/dailyprogrammer | 6e7d3352616fa54a8e9caf8564a9cfb951eb0af9 | [
"Unlicense"
] | 4 | 2017-10-18T02:17:02.000Z | 2022-02-02T01:19:02.000Z | Python/easy/e355.py | tlseabra/dailyprogrammer | 6e7d3352616fa54a8e9caf8564a9cfb951eb0af9 | [
"Unlicense"
] | 4 | 2016-01-24T20:30:02.000Z | 2017-01-18T16:01:23.000Z | Python/easy/e355.py | tlgs/dailyprogrammer | 6e7d3352616fa54a8e9caf8564a9cfb951eb0af9 | [
"Unlicense"
] | null | null | null | # 07/04/2018
from itertools import cycle
def encrypt(message, key):
    return ''.join([chr((ord(c) + ord(k) - 194) % 26 + 97) for c, k in zip(message, cycle(key))])


def decrypt(message, key):
    return ''.join([chr((ord(c) - ord(k)) % 26 + 97) for c, k in zip(message, cycle(key))])
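The two functions implement a Vigenère cipher over lowercase letters (97 is `ord('a')`, 194 is `2 * 97`), so `decrypt` inverts `encrypt` for any lowercase message. A round-trip sketch with an illustrative key and message (the functions are restated so the snippet is self-contained):

```python
from itertools import cycle

def encrypt(message, key):
    return ''.join([chr((ord(c) + ord(k) - 194) % 26 + 97) for c, k in zip(message, cycle(key))])

def decrypt(message, key):
    return ''.join([chr((ord(c) - ord(k)) % 26 + 97) for c, k in zip(message, cycle(key))])

ct = encrypt("attackatdawn", "lemon")
print(ct)                    # lxfopvefrnhr
print(decrypt(ct, "lemon"))  # attackatdawn
```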
| 30.333333 | 89 | 0.641026 | 50 | 273 | 3.5 | 0.48 | 0.114286 | 0.182857 | 0.228571 | 0.685714 | 0.685714 | 0.685714 | 0.685714 | 0.685714 | 0.331429 | 0 | 0.080851 | 0.139194 | 273 | 8 | 90 | 34.125 | 0.66383 | 0.03663 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
efdc0b9cc24e327a4f8a0adf853b6df89a06f674 | 1,088 | py | Python | api/permissions.py | Eddydpyl/vounty_backend | 772a9887e91f0d5d0de0148b8fc3624adfad17e6 | [
"Apache-2.0"
] | null | null | null | api/permissions.py | Eddydpyl/vounty_backend | 772a9887e91f0d5d0de0148b8fc3624adfad17e6 | [
"Apache-2.0"
] | null | null | null | api/permissions.py | Eddydpyl/vounty_backend | 772a9887e91f0d5d0de0148b8fc3624adfad17e6 | [
"Apache-2.0"
] | null | null | null | from rest_framework import permissions
from rest_framework.permissions import SAFE_METHODS
class SafeMethods(permissions.BasePermission):
    def has_permission(self, request, view):
        return request.method in permissions.SAFE_METHODS or request.user.is_staff


class IsUser(permissions.BasePermission):
    def has_object_permission(self, request, view, obj):
        return request.user and (obj == request.user or request.user.is_staff)


class IsUserOrReadOnly(permissions.BasePermission):
    def has_object_permission(self, request, view, obj):
        return request.method in SAFE_METHODS or request.user and (obj == request.user or request.user.is_staff)


class IsOwner(permissions.BasePermission):
    def has_object_permission(self, request, view, obj):
        return request.user and (obj.user == request.user or request.user.is_staff)


class IsOwnerOrReadOnly(permissions.BasePermission):
    def has_object_permission(self, request, view, obj):
        return request.method in SAFE_METHODS or request.user and (obj.user == request.user or request.user.is_staff)
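To see the object-level rule in isolation, here is a dependency-free sketch that mirrors the `IsOwnerOrReadOnly` logic with stand-in request/user objects. All names below (`alice`, `bob`, `post`, the sketch class) are hypothetical; in a real DRF project the class is attached to a view via `permission_classes`:

```python
from types import SimpleNamespace

SAFE_METHODS = ('GET', 'HEAD', 'OPTIONS')

class IsOwnerOrReadOnlySketch:
    # Mirrors the rule above: safe (read-only) methods always pass;
    # writes require the object's owner or a staff user.
    def has_object_permission(self, request, view, obj):
        return request.method in SAFE_METHODS or bool(
            request.user and (obj.user == request.user or request.user.is_staff))

alice = SimpleNamespace(name='alice', is_staff=False)
bob = SimpleNamespace(name='bob', is_staff=False)
post = SimpleNamespace(user=alice)
perm = IsOwnerOrReadOnlySketch()

print(perm.has_object_permission(SimpleNamespace(method='GET', user=bob), None, post))      # True
print(perm.has_object_permission(SimpleNamespace(method='DELETE', user=bob), None, post))   # False
print(perm.has_object_permission(SimpleNamespace(method='DELETE', user=alice), None, post)) # True
```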
| 38.857143 | 117 | 0.772059 | 144 | 1,088 | 5.694444 | 0.208333 | 0.17439 | 0.110976 | 0.189024 | 0.720732 | 0.707317 | 0.676829 | 0.676829 | 0.670732 | 0.670732 | 0 | 0 | 0.145221 | 1,088 | 27 | 118 | 40.296296 | 0.88172 | 0 | 0 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.294118 | false | 0 | 0.117647 | 0.294118 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
4be42163f81999d60266f1232441928e648683e6 | 1,781 | py | Python | nas4candle/candle/common/candle_keras/__init__.py | scrlnas2019/nas4candle | 318959424cc66819c816054a87bd1cb5d426e2e7 | [
"BSD-3-Clause"
] | 1 | 2021-01-22T04:03:00.000Z | 2021-01-22T04:03:00.000Z | nas4candle/candle/common/candle_keras/__init__.py | scrlnas2019/nas4candle | 318959424cc66819c816054a87bd1cb5d426e2e7 | [
"BSD-3-Clause"
] | 1 | 2021-01-23T00:14:17.000Z | 2021-01-23T00:14:17.000Z | nas4candle/candle/common/candle_keras/__init__.py | scrlnas2019/nas4candle | 318959424cc66819c816054a87bd1cb5d426e2e7 | [
"BSD-3-Clause"
] | 2 | 2019-11-27T04:42:00.000Z | 2021-01-22T04:06:59.000Z | from __future__ import absolute_import
#__version__ = '0.0.0'
#import from data_utils
from nas4candle.candle.common.data_utils import load_csv_data
from nas4candle.candle.common.data_utils import load_Xy_one_hot_data2
from nas4candle.candle.common.data_utils import load_Xy_data_noheader
#import from file_utils
from nas4candle.candle.common.file_utils import get_file
#import from nas4candle.candle.common.default_utils
from nas4candle.candle.common.default_utils import ArgumentStruct
from nas4candle.candle.common.default_utils import Benchmark
from nas4candle.candle.common.default_utils import str2bool
from nas4candle.candle.common.default_utils import initialize_parameters
from nas4candle.candle.common.default_utils import fetch_file
from nas4candle.candle.common.default_utils import verify_path
from nas4candle.candle.common.default_utils import keras_default_config
from nas4candle.candle.common.default_utils import set_up_logger
#import from keras_utils
#from keras_utils import dense
#from keras_utils import add_dense
from nas4candle.candle.common.keras_utils import build_initializer
from nas4candle.candle.common.keras_utils import build_optimizer
from nas4candle.candle.common.keras_utils import set_seed
from nas4candle.candle.common.keras_utils import set_parallelism_threads
from nas4candle.candle.common.keras_utils import PermanentDropout
from nas4candle.candle.common.keras_utils import register_permanent_dropout
from nas4candle.candle.common.generic_utils import Progbar
from nas4candle.candle.common.generic_utils import LoggingCallback
from nas4candle.candle.common.solr_keras import CandleRemoteMonitor
from nas4candle.candle.common.solr_keras import compute_trainable_params
from nas4candle.candle.common.solr_keras import TerminateOnTimeOut
| 44.525 | 75 | 0.881527 | 249 | 1,781 | 6.044177 | 0.220884 | 0.223256 | 0.318937 | 0.414618 | 0.693688 | 0.669767 | 0.644518 | 0.214618 | 0.062458 | 0 | 0 | 0.017533 | 0.071308 | 1,781 | 39 | 76 | 45.666667 | 0.892382 | 0.112296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ef154fee2830a92d7c05ea0bc3105b1781a74a34 | 37,418 | py | Python | qiling/tests/test_elf.py | mrTavas/owasp-fstm-auto | 6e9ff36e46d885701c7419db3eca15f12063a7f3 | [
"CC0-1.0"
] | 2 | 2021-05-05T12:03:01.000Z | 2021-06-04T14:27:15.000Z | qiling/tests/test_elf.py | mrTavas/owasp-fstm-auto | 6e9ff36e46d885701c7419db3eca15f12063a7f3 | [
"CC0-1.0"
] | null | null | null | qiling/tests/test_elf.py | mrTavas/owasp-fstm-auto | 6e9ff36e46d885701c7419db3eca15f12063a7f3 | [
"CC0-1.0"
] | 2 | 2021-05-05T12:03:09.000Z | 2021-06-04T14:27:21.000Z | #!/usr/bin/env python3
#
# Cross Platform and Multi Architecture Advanced Binary Emulation Framework
#
import sys, unittest, string, random, os
sys.path.append("..")
from qiling import Qiling
from qiling.const import QL_OS, QL_INTERCEPT, QL_VERBOSE
from qiling.exception import *
from qiling.os.const import STRING
from qiling.os.posix import syscall
from qiling.os.mapper import QlFsMappedObject
from qiling.os.posix.stat import Fstat
class ELFTest(unittest.TestCase):
    def test_libpatch_elf_linux_x8664(self):
        ql = Qiling(["../examples/rootfs/x8664_linux/bin/patch_test.bin"], "../examples/rootfs/x8664_linux")
        ql.patch(0x0000000000000575, b'qiling\x00', file_name=b'libpatch_test.so')
        ql.run()
        del ql

    def test_elf_freebsd_x8664(self):
        ql = Qiling(["../examples/rootfs/x8664_freebsd/bin/x8664_hello_asm"], "../examples/rootfs/x8664_freebsd", verbose=QL_VERBOSE.DUMP)
        ql.run()
        del ql

    def test_elf_partial_linux_x8664(self):
        def dump(ql, *args, **kw):
            ql.save(reg=False, cpu_context=True, snapshot="/tmp/snapshot.bin")
            ql.emu_stop()

        ql = Qiling(["../examples/rootfs/x8664_linux/bin/sleep_hello"], "../examples/rootfs/x8664_linux", verbose=QL_VERBOSE.DEFAULT)
        X64BASE = int(ql.profile.get("OS64", "load_address"), 16)
        ql.hook_address(dump, X64BASE + 0x1094)
        ql.run()

        ql = Qiling(["../examples/rootfs/x8664_linux/bin/sleep_hello"], "../examples/rootfs/x8664_linux", verbose=QL_VERBOSE.DEBUG)
        X64BASE = int(ql.profile.get("OS64", "load_address"), 16)
        ql.restore(snapshot="/tmp/snapshot.bin")
        begin_point = X64BASE + 0x109e
        end_point = X64BASE + 0x10bc
        ql.run(begin=begin_point, end=end_point)
        del ql

    def test_elf_x_only_segment(self):
        def stop(ql, *args, **kw):
            ql.emu_stop()

        ql = Qiling(["../examples/rootfs/x8664_linux/bin/sleep_hello_with_x_only_segment"], "../examples/rootfs/x8664_linux", verbose=QL_VERBOSE.DEBUG)
        X64BASE = int(ql.profile.get("OS64", "load_address"), 16)
        ql.hook_address(stop, X64BASE + 0x1094)
        ql.run()
        del ql
    PARAMS_PUTS = {'s': STRING}

    def test_elf_linux_x8664(self):
        def my_puts(ql: Qiling):
            params = ql.os.resolve_fcall_params(ELFTest.PARAMS_PUTS)
            print(f'puts("{params["s"]}")')

            reg = ql.reg.read("rax")
            print("reg : 0x%x" % reg)
            ql.reg.rax = reg
            self.set_api = reg

        def write_onEnter(ql, arg1, arg2, arg3, *args):
            print("enter write syscall!")
            ql.reg.rsi = arg2 + 1
            ql.reg.rdx = arg3 - 1
            self.set_api_onenter = True

        def write_onexit(ql, arg1, arg2, arg3, *args):
            print("exit write syscall!")
            ql.reg.rax = arg3 + 1
            self.set_api_onexit = True

        ql = Qiling(["../examples/rootfs/x8664_linux/bin/x8664_args", "1234test", "12345678", "bin/x8664_hello"], "../examples/rootfs/x8664_linux", verbose=QL_VERBOSE.DEBUG)
        ql.set_syscall(1, write_onEnter, QL_INTERCEPT.ENTER)
        ql.set_api('puts', my_puts)
        ql.set_syscall(1, write_onexit, QL_INTERCEPT.EXIT)

        ql.mem.map(0x1000, 0x1000)
        ql.mem.write(0x1000, b"\xFF\xFE\xFD\xFC\xFB\xFA\xFB\xFC\xFC\xFE\xFD")
        ql.mem.map(0x2000, 0x1000)
        ql.mem.write(0x2000, b"\xFF\xFE\xFD\xFC\xFB\xFA\xFB\xFC\xFC\xFE\xFD")
        ql.run()

        self.assertEqual([0x1000, 0x2000], ql.mem.search(b"\xFF\xFE\xFD\xFC\xFB\xFA\xFB\xFC\xFC\xFE\xFD"))
        self.assertEqual(93824992233162, self.set_api)
        self.assertEqual(True, self.set_api_onexit)
        self.assertEqual(True, self.set_api_onenter)

        del self.set_api
        del self.set_api_onexit
        del self.set_api_onenter
        del ql
    def test_elf_hijackapi_linux_x8664(self):
        def my_puts_enter(ql: Qiling):
            params = ql.os.resolve_fcall_params(ELFTest.PARAMS_PUTS)
            self.test_enter_str = params["s"]

        def my_puts_exit(ql):
            self.test_exit_rdi = ql.reg.rdi

        ql = Qiling(["../examples/rootfs/x8664_linux/bin/x8664_puts"], "../examples/rootfs/x8664_linux", verbose=QL_VERBOSE.DEBUG)
        ql.set_api('puts', my_puts_enter, QL_INTERCEPT.ENTER)
        ql.set_api('puts', my_puts_exit, QL_INTERCEPT.EXIT)
        ql.run()

        if self.test_exit_rdi == 140736282240864:
            self.test_exit_rdi = 0x1

        self.assertEqual(0x1, self.test_exit_rdi)
        self.assertEqual("CCCC", self.test_enter_str)

        del self.test_exit_rdi
        del self.test_enter_str
        del ql

    def test_elf_linux_x8664_static(self):
        ql = Qiling(["../examples/rootfs/x8664_linux/bin/x8664_hello_static"], "../examples/rootfs/x8664_linux", verbose=QL_VERBOSE.DEBUG)
        ql.run()
        del ql

    def test_elf_linux_x86(self):
        ql = Qiling(["../examples/rootfs/x86_linux/bin/x86_hello"], "../examples/rootfs/x86_linux", verbose=QL_VERBOSE.DEBUG, log_file="test.qlog")
        ql.run()
        del ql

    def test_elf_linux_x86_static(self):
        ql = Qiling(["../examples/rootfs/x86_linux/bin/x86_hello_static"], "../examples/rootfs/x86_linux", verbose=QL_VERBOSE.DEBUG)
        ql.run()
        del ql
    def test_elf_linux_x86_posix_syscall(self):
        def test_syscall_read(ql, read_fd, read_buf, read_count, *args):
            target = False
            pathname = ql.os.fd[read_fd].name.split('/')[-1]

            if pathname == "test_syscall_read.txt":
                print("test => read(%d, %s, %d)" % (read_fd, pathname, read_count))
                target = True

            regreturn = syscall.ql_syscall_read(ql, read_fd, read_buf, read_count, *args)

            if target:
                real_path = ql.os.fd[read_fd].name
                with open(real_path) as fd:
                    assert fd.read() == ql.mem.read(read_buf, read_count).decode()
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        def test_syscall_write(ql, write_fd, write_buf, write_count, *args):
            target = False
            pathname = ql.os.fd[write_fd].name.split('/')[-1]

            if pathname == "test_syscall_write.txt":
                print("test => write(%d, %s, %d)" % (write_fd, pathname, write_count))
                target = True

            regreturn = syscall.ql_syscall_write(ql, write_fd, write_buf, write_count, *args)

            if target:
                real_path = ql.os.fd[write_fd].name
                with open(real_path) as fd:
                    assert fd.read() == 'Hello testing\x00'
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        def test_syscall_openat(ql, openat_fd, openat_path, openat_flags, openat_mode, *args):
            target = False
            pathname = ql.mem.string(openat_path)

            if pathname == "test_syscall_open.txt":
                print("test => openat(%d, %s, 0x%x, 0%o)" % (openat_fd, pathname, openat_flags, openat_mode))
                target = True

            regreturn = syscall.ql_syscall_openat(ql, openat_fd, openat_path, openat_flags, openat_mode, *args)

            if target:
                real_path = ql.os.path.transform_to_real_path(pathname)
                assert os.path.isfile(real_path) == True
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        def test_syscall_unlink(ql, unlink_pathname, *args):
            target = False
            pathname = ql.mem.string(unlink_pathname)

            if pathname == "test_syscall_unlink.txt":
                print("test => unlink(%s)" % (pathname))
                target = True

            regreturn = syscall.ql_syscall_unlink(ql, unlink_pathname, *args)

            if target:
                real_path = ql.os.path.transform_to_real_path(pathname)
                assert os.path.isfile(real_path) == False

            return regreturn

        def test_syscall_truncate(ql, trunc_pathname, trunc_length, *args):
            target = False
            pathname = ql.mem.string(trunc_pathname)

            if pathname == "test_syscall_truncate.txt":
                print("test => truncate(%s, 0x%x)" % (pathname, trunc_length))
                target = True

            regreturn = syscall.ql_syscall_truncate(ql, trunc_pathname, trunc_length, *args)

            if target:
                real_path = ql.os.path.transform_to_real_path(pathname)
                assert os.stat(real_path).st_size == 0
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        def test_syscall_ftruncate(ql, ftrunc_fd, ftrunc_length, *args):
            target = False
            pathname = ql.os.fd[ftrunc_fd].name.split('/')[-1]

            reg = ql.reg.read("eax")
            print("reg : 0x%x" % reg)
            ql.reg.eax = reg

            if pathname == "test_syscall_ftruncate.txt":
                print("test => ftruncate(%d, 0x%x)" % (ftrunc_fd, ftrunc_length))
                target = True

            regreturn = syscall.ql_syscall_ftruncate(ql, ftrunc_fd, ftrunc_length, *args)

            if target:
                real_path = ql.os.path.transform_to_real_path(pathname)
                assert os.stat(real_path).st_size == 0x10
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        ql = Qiling(["../examples/rootfs/x86_linux/bin/x86_posix_syscall"], "../examples/rootfs/x86_linux", verbose=QL_VERBOSE.DEBUG)
        ql.set_syscall(0x3, test_syscall_read)
        ql.set_syscall(0x4, test_syscall_write)
        ql.set_syscall(0x127, test_syscall_openat)
        ql.set_syscall(0xa, test_syscall_unlink)
        ql.set_syscall(0x5c, test_syscall_truncate)
        ql.set_syscall(0x5d, test_syscall_ftruncate)
        ql.run()
        del ql
    def test_elf_linux_arm(self):
        def my_puts(ql):
            params = ql.os.resolve_fcall_params(ELFTest.PARAMS_PUTS)
            print(f'puts("{params["s"]}")')

            all_mem = ql.mem.save()
            ql.mem.restore(all_mem)

        ql = Qiling(["../examples/rootfs/arm_linux/bin/arm_hello"], "../examples/rootfs/arm_linux", verbose=QL_VERBOSE.DEBUG, profile='profiles/append_test.ql')
        ql.set_api('puts', my_puts)
        ql.run()
        del ql

    def test_elf_linux_arm_static(self):
        ql = Qiling(["../examples/rootfs/arm_linux/bin/arm_hello_static"], "../examples/rootfs/arm_linux", verbose=QL_VERBOSE.DEFAULT)
        all_mem = ql.mem.save()
        ql.mem.restore(all_mem)
        ql.run()
        del ql
    # syscall testing for ARM; will be uncommented once the ARM executable is generated properly.
    # def test_elf_linux_arm_posix_syscall(self):
    #     def test_syscall_read(ql, read_fd, read_buf, read_count, *args):
    #         target = False
    #         pathname = ql.os.fd[read_fd].name.split('/')[-1]
    #         if pathname == "test_syscall_read.txt":
    #             print("test => read(%d, %s, %d)" % (read_fd, pathname, read_count))
    #             target = True
    #         syscall.ql_syscall_read(ql, read_fd, read_buf, read_count, *args)
    #         if target:
    #             real_path = ql.os.fd[read_fd].name
    #             with open(real_path) as fd:
    #                 assert fd.read() == ql.mem.read(read_buf, read_count).decode()
    #             os.remove(real_path)
    #
    #     def test_syscall_write(ql, write_fd, write_buf, write_count, *args):
    #         target = False
    #         pathname = ql.os.fd[write_fd].name.split('/')[-1]
    #         if pathname == "test_syscall_write.txt":
    #             print("test => write(%d, %s, %d)" % (write_fd, pathname, write_count))
    #             target = True
    #         syscall.ql_syscall_write(ql, write_fd, write_buf, write_count, *args)
    #         if target:
    #             real_path = ql.os.fd[write_fd].name
    #             with open(real_path) as fd:
    #                 assert fd.read() == 'Hello testing\x00'
    #             os.remove(real_path)
    #
    #     def test_syscall_open(ql, open_pathname, open_flags, open_mode, *args):
    #         target = False
    #         pathname = ql.mem.string(open_pathname)
    #         if pathname == "test_syscall_open.txt":
    #             print("test => open(%s, 0x%x, 0%o)" % (pathname, open_flags, open_mode))
    #             target = True
    #         syscall.ql_syscall_open(ql, open_pathname, open_flags, open_mode, *args)
    #         if target:
    #             real_path = ql.os.path.transform_to_real_path(pathname)
    #             assert os.path.isfile(real_path) == True
    #             os.remove(real_path)
    #
    #     def test_syscall_unlink(ql, unlink_pathname, *args):
    #         target = False
    #         pathname = ql.mem.string(unlink_pathname)
    #         if pathname == "test_syscall_unlink.txt":
    #             print("test => unlink(%s)" % (pathname))
    #             target = True
    #         syscall.ql_syscall_unlink(ql, unlink_pathname, *args)
    #         if target:
    #             real_path = ql.os.path.transform_to_real_path(pathname)
    #             assert os.path.isfile(real_path) == False
    #
    #     def test_syscall_truncate(ql, trunc_pathname, trunc_length, *args):
    #         target = False
    #         pathname = ql.mem.string(trunc_pathname)
    #         if pathname == "test_syscall_truncate.txt":
    #             print("test => truncate(%s, 0x%x)" % (pathname, trunc_length))
    #             target = True
    #         syscall.ql_syscall_truncate(ql, trunc_pathname, trunc_length, *args)
    #         if target:
    #             real_path = ql.os.path.transform_to_real_path(pathname)
    #             assert os.stat(real_path).st_size == 0
    #             os.remove(real_path)
    #
    #     def test_syscall_ftruncate(ql, ftrunc_fd, ftrunc_length, *args):
    #         target = False
    #         pathname = ql.os.fd[ftrunc_fd].name.split('/')[-1]
    #         if pathname == "test_syscall_ftruncate.txt":
    #             print("test => ftruncate(%d, 0x%x)" % (ftrunc_fd, ftrunc_length))
    #             target = True
    #         syscall.ql_syscall_ftruncate(ql, ftrunc_fd, ftrunc_length, *args)
    #         if target:
    #             real_path = ql.os.path.transform_to_real_path(pathname)
    #             assert os.stat(real_path).st_size == 0x10
    #             os.remove(real_path)
    #
    #     ql = Qiling(["../examples/rootfs/arm_linux/bin/arm_posix_syscall"], "../examples/rootfs/arm_linux", verbose=QL_VERBOSE.DEBUG)
    #     ql.set_syscall(0x3, test_syscall_read)
    #     ql.set_syscall(0x4, test_syscall_write)
    #     ql.set_syscall(0x5, test_syscall_open)
    #     ql.set_syscall(0xa, test_syscall_unlink)
    #     ql.set_syscall(0x5c, test_syscall_truncate)
    #     ql.set_syscall(0x5d, test_syscall_ftruncate)
    #     ql.run()
    #     del ql
    def test_elf_linux_arm64(self):
        ql = Qiling(["../examples/rootfs/arm64_linux/bin/arm64_hello"], "../examples/rootfs/arm64_linux", verbose=QL_VERBOSE.DEBUG)
        ql.run()
        del ql

    def test_elf_linux_arm64_static(self):
        ql = Qiling(["../examples/rootfs/arm64_linux/bin/arm64_hello_static"], "../examples/rootfs/arm64_linux", verbose=QL_VERBOSE.DEFAULT)
        ql.run()
        del ql

    def test_elf_linux_mips32_static(self):
        ql = Qiling(["../examples/rootfs/mips32_linux/bin/mips32_hello_static"], "../examples/rootfs/mips32_linux")
        ql.run()
        del ql

    def test_elf_linux_mips32(self):
        def random_generator(size=6, chars=string.ascii_uppercase + string.digits):
            return ''.join(random.choice(chars) for x in range(size))

        ql = Qiling(["../examples/rootfs/mips32_linux/bin/mips32_hello", random_generator(random.randint(1, 99))], "../examples/rootfs/mips32_linux")
        ql.run()
        del ql

    def test_elf_onEnter_mips32el(self):
        def my_puts_onenter(ql: Qiling):
            params = ql.os.resolve_fcall_params(ELFTest.PARAMS_PUTS)
            print(f'puts("{params["s"]}")')

            params = ql.os.fcall.readParams(ELFTest.PARAMS_PUTS.values())
            self.my_puts_onenter_addr = params[0]

            return 2

        ql = Qiling(["../examples/rootfs/mips32el_linux/bin/mips32el_double_hello"], "../examples/rootfs/mips32el_linux")
        ql.set_api('puts', my_puts_onenter, QL_INTERCEPT.ENTER)
        ql.run()

        self.assertEqual(4196680, self.my_puts_onenter_addr)

        del ql
    def test_elf_linux_arm64_posix_syscall(self):
        def test_syscall_read(ql, read_fd, read_buf, read_count, *args):
            target = False
            pathname = ql.os.fd[read_fd].name.split('/')[-1]

            reg = ql.reg.read("x0")
            print("reg : 0x%x" % reg)
            ql.reg.x0 = reg

            if pathname == "test_syscall_read.txt":
                print("test => read(%d, %s, %d)" % (read_fd, pathname, read_count))
                target = True

            regreturn = syscall.ql_syscall_read(ql, read_fd, read_buf, read_count, *args)

            if target:
                real_path = ql.os.fd[read_fd].name
                with open(real_path) as fd:
                    assert fd.read() == ql.mem.read(read_buf, read_count).decode()
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        def test_syscall_write(ql, write_fd, write_buf, write_count, *args):
            target = False
            pathname = ql.os.fd[write_fd].name.split('/')[-1]

            if pathname == "test_syscall_write.txt":
                print("test => write(%d, %s, %d)" % (write_fd, pathname, write_count))
                target = True

            regreturn = syscall.ql_syscall_write(ql, write_fd, write_buf, write_count, *args)

            if target:
                real_path = ql.os.fd[write_fd].name
                with open(real_path) as fd:
                    assert fd.read() == 'Hello testing\x00'
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        def test_syscall_openat(ql, openat_fd, openat_path, openat_flags, openat_mode, *args):
            target = False
            pathname = ql.mem.string(openat_path)

            if pathname == "test_syscall_open.txt":
                print("test => openat(%d, %s, 0x%x, 0%o)" % (openat_fd, pathname, openat_flags, openat_mode))
                target = True

            regreturn = syscall.ql_syscall_openat(ql, openat_fd, openat_path, openat_flags, openat_mode, *args)

            if target:
                real_path = ql.os.path.transform_to_real_path(pathname)
                assert os.path.isfile(real_path) == True
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        def test_syscall_unlink(ql, unlink_pathname, *args):
            target = False
            pathname = ql.mem.string(unlink_pathname)

            if pathname == "test_syscall_unlink.txt":
                print("test => unlink(%s)" % (pathname))
                target = True

            regreturn = syscall.ql_syscall_unlink(ql, unlink_pathname, *args)

            if target:
                real_path = ql.os.path.transform_to_real_path(pathname)
                assert os.path.isfile(real_path) == False

            return regreturn

        def test_syscall_truncate(ql, trunc_pathname, trunc_length, *args):
            target = False
            pathname = ql.mem.string(trunc_pathname)

            if pathname == "test_syscall_truncate.txt":
                print("test => truncate(%s, 0x%x)" % (pathname, trunc_length))
                target = True

            regreturn = syscall.ql_syscall_truncate(ql, trunc_pathname, trunc_length, *args)

            if target:
                real_path = ql.os.path.transform_to_real_path(pathname)
                assert os.stat(real_path).st_size == 0
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        def test_syscall_ftruncate(ql, ftrunc_fd, ftrunc_length, *args):
            target = False
            pathname = ql.os.fd[ftrunc_fd].name.split('/')[-1]

            if pathname == "test_syscall_ftruncate.txt":
                print("test => ftruncate(%d, 0x%x)" % (ftrunc_fd, ftrunc_length))
                target = True

            regreturn = syscall.ql_syscall_ftruncate(ql, ftrunc_fd, ftrunc_length, *args)

            if target:
                real_path = ql.os.path.transform_to_real_path(pathname)
                assert os.stat(real_path).st_size == 0x10
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        ql = Qiling(["../examples/rootfs/arm64_linux/bin/arm64_posix_syscall"], "../examples/rootfs/arm64_linux", verbose=QL_VERBOSE.DEBUG)
        ql.set_syscall(0x3f, test_syscall_read)
        ql.set_syscall(0x40, test_syscall_write)
        ql.set_syscall(0x38, test_syscall_openat)
        ql.set_syscall(0x402, test_syscall_unlink)
        ql.set_syscall(0x2d, test_syscall_truncate)
        ql.set_syscall(0x2e, test_syscall_ftruncate)
        ql.run()

        del ql
    def test_elf_linux_mips32el(self):
        def random_generator(size=6, chars=string.ascii_uppercase + string.digits):
            return ''.join(random.choice(chars) for x in range(size))

        ql = Qiling(["../examples/rootfs/mips32el_linux/bin/mips32el_hello", random_generator(random.randint(1, 99))], "../examples/rootfs/mips32el_linux")
        ql.run()
        del ql

    def test_elf_linux_mips32el_static(self):
        def random_generator(size=6, chars=string.ascii_uppercase + string.digits):
            return ''.join(random.choice(chars) for x in range(size))

        ql = Qiling(["../examples/rootfs/mips32el_linux/bin/mips32el_hello_static", random_generator(random.randint(1, 99))], "../examples/rootfs/mips32el_linux")
        ql.run()
        del ql
    def test_elf_linux_mips32el_posix_syscall(self):
        def test_syscall_read(ql, read_fd, read_buf, read_count, *args):
            target = False
            pathname = ql.os.fd[read_fd].name.split('/')[-1]

            reg = ql.reg.read("v0")
            print("reg : 0x%x" % reg)
            ql.reg.v0 = reg

            if pathname == "test_syscall_read.txt":
                print("test => read(%d, %s, %d)" % (read_fd, pathname, read_count))
                target = True

            regreturn = syscall.ql_syscall_read(ql, read_fd, read_buf, read_count, *args)

            if target:
                real_path = ql.os.fd[read_fd].name
                with open(real_path) as fd:
                    assert fd.read() == ql.mem.read(read_buf, read_count).decode()
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        def test_syscall_write(ql, write_fd, write_buf, write_count, *args):
            target = False
            pathname = ql.os.fd[write_fd].name.split('/')[-1]

            if pathname == "test_syscall_write.txt":
                print("test => write(%d, %s, %d)" % (write_fd, pathname, write_count))
                target = True

            regreturn = syscall.ql_syscall_write(ql, write_fd, write_buf, write_count, *args)

            if target:
                real_path = ql.os.fd[write_fd].name
                with open(real_path) as fd:
                    assert fd.read() == 'Hello testing\x00'
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        def test_syscall_open(ql, open_pathname, open_flags, open_mode, *args):
            target = False
            pathname = ql.mem.string(open_pathname)

            if pathname == "test_syscall_open.txt":
                print("test => open(%s, 0x%x, 0%o)" % (pathname, open_flags, open_mode))
                target = True

            regreturn = syscall.ql_syscall_open(ql, open_pathname, open_flags, open_mode, *args)

            if target:
                real_path = ql.os.path.transform_to_real_path(pathname)
                assert os.path.isfile(real_path) == True
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        def test_syscall_unlink(ql, unlink_pathname, *args):
            target = False
            pathname = ql.mem.string(unlink_pathname)

            if pathname == "test_syscall_unlink.txt":
                print("test => unlink(%s)" % (pathname))
                target = True

            regreturn = syscall.ql_syscall_unlink(ql, unlink_pathname, *args)

            if target:
                real_path = ql.os.path.transform_to_real_path(pathname)
                assert os.path.isfile(real_path) == False

            return regreturn

        def test_syscall_truncate(ql, trunc_pathname, trunc_length, *args):
            target = False
            pathname = ql.mem.string(trunc_pathname)

            if pathname == "test_syscall_truncate.txt":
                print("test => truncate(%s, 0x%x)" % (pathname, trunc_length))
                target = True

            regreturn = syscall.ql_syscall_truncate(ql, trunc_pathname, trunc_length, *args)

            if target:
                real_path = ql.os.path.transform_to_real_path(pathname)
                assert os.stat(real_path).st_size == 0
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        def test_syscall_ftruncate(ql, ftrunc_fd, ftrunc_length, *args):
            target = False
            pathname = ql.os.fd[ftrunc_fd].name.split('/')[-1]

            if pathname == "test_syscall_ftruncate.txt":
                print("test => ftruncate(%d, 0x%x)" % (ftrunc_fd, ftrunc_length))
                target = True

            regreturn = syscall.ql_syscall_ftruncate(ql, ftrunc_fd, ftrunc_length, *args)

            if target:
                real_path = ql.os.path.transform_to_real_path(pathname)
                assert os.stat(real_path).st_size == 0x10
                if ql.platform != QL_OS.WINDOWS:
                    os.remove(real_path)

            return regreturn

        ql = Qiling(["../examples/rootfs/mips32el_linux/bin/mips32el_posix_syscall"], "../examples/rootfs/mips32el_linux", verbose=QL_VERBOSE.DEBUG)
        ql.set_syscall(4003, test_syscall_read)
        ql.set_syscall(4004, test_syscall_write)
        ql.set_syscall(4005, test_syscall_open)
        ql.set_syscall(4010, test_syscall_unlink)
        ql.set_syscall(4092, test_syscall_truncate)
        ql.set_syscall(4093, test_syscall_ftruncate)
        ql.run()
    def test_elf_linux_arm_custom_syscall(self):
        def my_syscall_write(ql, write_fd, write_buf, write_count, *args, **kw):
            regreturn = 0
            buf = None

            mapaddr = ql.mem.map_anywhere(0x100000)
            ql.log.info("0x%x" % mapaddr)

            reg = ql.reg.read("r0")
            print("reg : 0x%x" % reg)
            ql.reg.r0 = reg

            try:
                buf = ql.mem.read(write_buf, write_count)
                ql.log.info("\n+++++++++\nmy write(%d,%x,%i) = %d\n+++++++++" % (write_fd, write_buf, write_count, regreturn))
                ql.os.fd[write_fd].write(buf)
                regreturn = write_count
            except:
                regreturn = -1
                ql.log.info("\n+++++++++\nmy write(%d,%x,%i) = %d\n+++++++++" % (write_fd, write_buf, write_count, regreturn))
                if ql.verbose >= QL_VERBOSE.DEBUG:
                    raise

            self.set_syscall = reg
            return regreturn

        ql = Qiling(["../examples/rootfs/arm_linux/bin/arm_hello"], "../examples/rootfs/arm_linux")
        ql.set_syscall(0x04, my_syscall_write)
        ql.run()

        self.assertEqual(1, self.set_syscall)

        del self.set_syscall
        del ql
    def test_elf_linux_x86_crackme(self):
        class MyPipe():
            def __init__(self):
                self.buf = b''

            def write(self, s):
                self.buf += s

            def read(self, l):
                if l <= len(self.buf):
                    ret = self.buf[:l]
                    self.buf = self.buf[l:]
                else:
                    ret = self.buf
                    # keep the buffer a bytes object so later writes still concatenate
                    self.buf = b''
                return ret

            def fileno(self):
                return 0

            def fstat(self):
                return Fstat(sys.stdin.fileno())

            def show(self):
                pass

            def clear(self):
                pass

            def flush(self):
                pass

            def close(self):
                self.outpipe.close()

        def instruction_count(ql, address, size, user_data):
            user_data[0] += 1

        def my__llseek(ql, *args, **kw):
            pass

        def run_one_round(payload):
            stdin = MyPipe()
            ql = Qiling(["../examples/rootfs/x86_linux/bin/crackme_linux"], "../examples/rootfs/x86_linux", console=False, stdin=stdin)
            ins_count = [0]
            ql.hook_code(instruction_count, ins_count)
            ql.set_syscall("_llseek", my__llseek)
            stdin.write(payload)
            ql.run()
            del stdin
            return ins_count[0]

        def solve():
            idx_list = [1, 4, 2, 0, 3]
            flag = b'\x00\x00\x00\x00\x00\n'

            old_count = run_one_round(flag)
            for idx in idx_list:
                for i in b'L1NUX\\n':
                    flag = flag[:idx] + chr(i).encode() + flag[idx + 1:]
                    tmp = run_one_round(flag)
                    if tmp > old_count:
                        old_count = tmp
                        break

                # if idx == 2:
                #     break

            print(flag)

        print("\n\n Linux Simple Crackme Brute Force, This Will Take Some Time ...")
        solve()
    def test_x86_fake_urandom_multiple_times(self):
        fake_id = 0
        ids = []

        class Fake_urandom(QlFsMappedObject):
            def __init__(self):
                nonlocal fake_id
                self.id = fake_id
                fake_id += 1
                ids.append(self.id)
                ql.log.info(f"Creating Fake_urandom with id {self.id}")

            def read(self, size):
                return b'\x01'

            def fstat(self):
                return -1

            def close(self):
                return 0

        ql = Qiling(["../examples/rootfs/x86_linux/bin/x86_fetch_urandom_multiple_times"], "../examples/rootfs/x86_linux", verbose=QL_VERBOSE.DEBUG)
        # Note we pass in a class here.
        ql.add_fs_mapper("/dev/urandom", Fake_urandom)

        ql.exit_code = 0
        ql.exit_group_code = 0

        def check_exit_group_code(ql, exit_code, *args, **kw):
            ql.exit_group_code = exit_code

        def check_exit_code(ql, exit_code, *args, **kw):
            ql.exit_code = exit_code

        ql.set_syscall("exit_group", check_exit_group_code, QL_INTERCEPT.ENTER)
        ql.set_syscall("exit", check_exit_code, QL_INTERCEPT.ENTER)

        ql.run()

        self.assertEqual(0, ql.exit_code)
        self.assertEqual(0, ql.exit_group_code)

        last = -1
        for i in ids:
            self.assertEqual(last + 1, i)
            last = i

        del ql
    def test_x86_fake_urandom(self):
        class Fake_urandom(QlFsMappedObject):
            def read(self, size):
                return b"\x01"

            def fstat(self):
                return -1

            def close(self):
                return 0

        ql = Qiling(["../examples/rootfs/x86_linux/bin/x86_fetch_urandom"], "../examples/rootfs/x86_linux", verbose=QL_VERBOSE.DEBUG)
        ql.add_fs_mapper("/dev/urandom", Fake_urandom())

        ql.exit_code = 0
        ql.exit_group_code = 0

        def check_exit_group_code(ql, exit_code, *args, **kw):
            ql.exit_group_code = exit_code

        def check_exit_code(ql, exit_code, *args, **kw):
            ql.exit_code = exit_code

        ql.set_syscall("exit_group", check_exit_group_code, QL_INTERCEPT.ENTER)
        ql.set_syscall("exit", check_exit_code, QL_INTERCEPT.ENTER)

        ql.run()

        self.assertEqual(0, ql.exit_code)
        self.assertEqual(0, ql.exit_group_code)

        del ql
    def test_x8664_map_urandom(self):
        ql = Qiling(["../examples/rootfs/x8664_linux/bin/x8664_fetch_urandom"], "../examples/rootfs/x8664_linux", verbose=QL_VERBOSE.DEBUG)
        ql.add_fs_mapper("/dev/urandom", "/dev/urandom")

        ql.exit_code = 0
        ql.exit_group_code = 0

        def check_exit_group_code(ql, exit_code, *args, **kw):
            ql.exit_group_code = exit_code

        def check_exit_code(ql, exit_code, *args, **kw):
            ql.exit_code = exit_code

        ql.set_syscall("exit_group", check_exit_group_code, QL_INTERCEPT.ENTER)
        ql.set_syscall("exit", check_exit_code, QL_INTERCEPT.ENTER)

        ql.run()

        self.assertEqual(0, ql.exit_code)
        self.assertEqual(0, ql.exit_group_code)

        del ql
    def test_x8664_symlink(self):
        ql = Qiling(["../examples/rootfs/x8664_linux_symlink/bin/x8664_hello"], "../examples/rootfs/x8664_linux_symlink", verbose=QL_VERBOSE.DEBUG)
        ql.run()
        del ql

    def test_arm_directory_symlink(self):
        ql = Qiling(["../examples/rootfs/arm_linux/bin/arm_hello"], "../examples/rootfs/arm_linux", verbose=QL_VERBOSE.DEBUG)
        real_path = ql.os.path.transform_to_real_path("/lib/libsymlink_test.so")
        self.assertTrue(real_path.endswith("/examples/rootfs/arm_linux/tmp/media/nand/symlink_test/libsymlink_test.so"))
        del ql
    def test_x8664_absolute_path(self):
        class MyPipe():
            def __init__(self):
                self.buf = b''

            def write(self, s):
                self.buf += s

            def read(self, l):
                pass

            def fileno(self):
                return 0

            def fstat(self):
                return Fstat(sys.stdin.fileno())

            def show(self):
                pass

            def clear(self):
                pass

            def flush(self):
                pass

            def close(self):
                pass

        pipe = MyPipe()
        ql = Qiling(["../examples/rootfs/x8664_linux/bin/absolutepath"], "../examples/rootfs/x8664_linux", verbose=QL_VERBOSE.DEBUG, stdout=pipe)
        ql.run()

        self.assertEqual(pipe.buf, b'test_complete\n\ntest_complete\n\n')

        del ql

    def test_x8664_getcwd(self):
        class MyPipe():
            def __init__(self):
                self.buf = b''

            def write(self, s):
                self.buf += s

            def read(self, l):
                pass

            def fileno(self):
                return 0

            def fstat(self):
                return Fstat(sys.stdin.fileno())

            def show(self):
                pass

            def clear(self):
                pass

            def flush(self):
                pass

            def close(self):
                pass

        pipe = MyPipe()
        ql = Qiling(["../examples/rootfs/x8664_linux/bin/testcwd"], "../examples/rootfs/x8664_linux", verbose=QL_VERBOSE.DEBUG, stdout=pipe)
        ql.run()

        self.assertEqual(pipe.buf, b'/\n/lib\n/bin\n/\n')

        del ql
    def test_elf_linux_x86_return_from_main_stackpointer(self):
        ql = Qiling(["../examples/rootfs/x86_linux/bin/x86_return_main"], "../examples/rootfs/x86_linux", stop_on_stackpointer=True)
        ql.run()
        del ql

    def test_elf_linux_x86_return_from_main_exit_trap(self):
        ql = Qiling(["../examples/rootfs/x86_linux/bin/x86_return_main"], "../examples/rootfs/x86_linux", stop_on_exit_trap=True)
        ql.run()
        del ql

    def test_elf_linux_x8664_return_from_main_stackpointer(self):
        ql = Qiling(["../examples/rootfs/x8664_linux/bin/x8664_return_main"], "../examples/rootfs/x8664_linux", stop_on_stackpointer=True)
        ql.run()
        del ql

    def test_elf_linux_x8664_return_from_main_exit_trap(self):
        ql = Qiling(["../examples/rootfs/x8664_linux/bin/x8664_return_main"], "../examples/rootfs/x8664_linux", stop_on_exit_trap=True)
        ql.run()
        del ql

    def test_arm_stat64(self):
        ql = Qiling(["../examples/rootfs/arm_linux/bin/arm_stat64", "/bin/arm_stat64"], "../examples/rootfs/arm_linux", verbose=QL_VERBOSE.DEBUG)
        ql.run()
        del ql

if __name__ == "__main__":
    unittest.main()
# nv/database.py (new-valley/new-valley)

from nv.extensions import db
# tests/test_main.py (pzehner/dependencmake)

from argparse import Namespace
from io import StringIO

from path import Path

from dependencmake.__main__ import (
    get_parser,
    run_build,
    run_clean,
    run_create_config,
    run_fetch,
    run_install,
)


class TestGetParser:
    def test_get(self):
        """Get a parser."""
        parser = get_parser()
        assert parser is not None


class TestRunFetch:
    def test_run_force(self, mocker):
        """Run the force fetch command."""
        mocked_clean = mocker.patch("dependencmake.__main__.clean")
        mocked_dependency_list_class = mocker.patch(
            "dependencmake.__main__.DependencyList"
        )
        args = Namespace(path=Path("path"), force=True)
        output = StringIO()

        run_fetch(args, output)

        mocked_clean.assert_called_with(fetch=True)
        mocked_dependency_list_class.assert_called()


class TestRunBuild:
    def test_run_force(self, mocker):
        """Run the force build command."""
        mocked_clean = mocker.patch("dependencmake.__main__.clean")
        mocked_dependency_list_class = mocker.patch(
            "dependencmake.__main__.DependencyList"
        )
        args = Namespace(path=Path("path"), force=True, install_path=None, rest=[])
        output = StringIO()

        run_build(args, output)

        mocked_clean.assert_called_with(fetch=True, build=True)
        mocked_dependency_list_class.assert_called()

    def test_run_extra(self, mocker):
        """Run the build command with CMake arguments."""
        mocked_clean = mocker.patch("dependencmake.__main__.clean")
        mocked_dependency_list_class = mocker.patch(
            "dependencmake.__main__.DependencyList"
        )
        args = Namespace(
            path=Path("path"), force=False, install_path=None, rest=["-DCMAKE_ARG=ON"]
        )
        output = StringIO()

        run_build(args, output)

        mocked_clean.assert_not_called()
        mocked_dependency_list_class.return_value.build.assert_called_with(
            ["-DCMAKE_ARG=ON"], output
        )


class TestRunInstall:
    def test_run_force(self, mocker):
        """Run the force install command."""
        mocked_clean = mocker.patch("dependencmake.__main__.clean")
        mocked_dependency_list_class = mocker.patch(
            "dependencmake.__main__.DependencyList"
        )
        args = Namespace(path=Path("path"), force=True, install_path=None, rest=[])
        output = StringIO()

        run_install(args, output)

        mocked_clean.assert_called_with(fetch=True, build=True, install=True)
        mocked_dependency_list_class.assert_called()

    def test_run_extra(self, mocker):
        """Run the install command with CMake arguments."""
        mocked_clean = mocker.patch("dependencmake.__main__.clean")
        mocked_dependency_list_class = mocker.patch(
            "dependencmake.__main__.DependencyList"
        )
        args = Namespace(
            path=Path("path"), force=False, install_path=None, rest=["-DCMAKE_ARG=ON"]
        )
        output = StringIO()

        run_install(args, output)

        mocked_clean.assert_not_called()
        mocked_dependency_list_class.return_value.build.assert_called_with(
            ["-DCMAKE_ARG=ON"], output
        )


class TestRunCreateConfig:
    def test_run(self, mocker):
        """Run the create-config command."""
        mocked_create_config = mocker.patch(
            "dependencmake.__main__.create_config", autospec=True
        )
        args = Namespace(path=Path("path"), force=True)
        output = StringIO()

        run_create_config(args, output)

        content = output.getvalue()
        assert "Config file created in dependencmake.yaml" in content
        mocked_create_config.assert_called_with(Path("path"), True)


class TestRunClean:
    def test_run_no_args(self, mocker):
        """Run clean command without arguments."""
        mocked_clean = mocker.patch("dependencmake.__main__.clean", autospec=True)
        args = Namespace(
            fetch=False, build=False, install=False, all=False, install_path=None
        )
        output = StringIO()

        run_clean(args, output)

        mocked_clean.assert_called_with(
            fetch=False, build=True, install=False, install_path=None
        )

    def test_run_fetch(self, mocker):
        """Run clean command with fetch argument."""
        mocked_clean = mocker.patch("dependencmake.__main__.clean", autospec=True)
        args = Namespace(
            fetch=True, build=False, install=False, all=False, install_path=None
        )
        output = StringIO()

        run_clean(args, output)

        mocked_clean.assert_called_with(
            fetch=True, build=False, install=False, install_path=None
        )

    def test_run_build(self, mocker):
        """Run clean command with build argument."""
        mocked_clean = mocker.patch("dependencmake.__main__.clean", autospec=True)
        args = Namespace(
            fetch=False, build=True, install=False, all=False, install_path=None
        )
        output = StringIO()

        run_clean(args, output)

        mocked_clean.assert_called_with(
            fetch=False, build=True, install=False, install_path=None
        )

    def test_run_install(self, mocker):
        """Run clean command with install argument."""
        mocked_clean = mocker.patch("dependencmake.__main__.clean", autospec=True)
        args = Namespace(
            fetch=False, build=False, install=True, all=False, install_path=None
        )
        output = StringIO()

        run_clean(args, output)

        mocked_clean.assert_called_with(
            fetch=False, build=False, install=True, install_path=None
        )

    def test_run_all(self, mocker):
        """Run clean command with all argument."""
        mocked_clean = mocker.patch("dependencmake.__main__.clean", autospec=True)
        args = Namespace(
            fetch=False, build=False, install=False, all=True, install_path=None
        )
        output = StringIO()

        run_clean(args, output)

        mocked_clean.assert_called_with(
            fetch=True, build=True, install=True, install_path=None
        )
# Services/src/utils/utils.py (IsaacDSC/ALPR_4.0)

def doSomethingCool():
    print('ok')
# second_module.py (gtsofa/yt)

# second module.py
import first_module

print("Second module's name : {}".format(__name__))
# riberry/cli/commands/__init__.py (srafehi/riberry)

from . import conf, run, admin
# accounts/admin.py (zhusheng/django_rest_jwt_auth_tutorial)

from django.contrib import admin

# Register your models here.
from django.contrib.auth.models import User

from accounts.models import MyProfile

admin.site.register(MyProfile)
# src/util/__init__.py (0417taehyun/ringle-tutor-analysis)

from src.util.worker import update_interests
# Decompiler/p/Cassius/Cassius.py (AGraber/EDDecompiler)

from ActionHelper import *
from Voice import *
from ChrFile import *
# tests/Application/test_App.py (opengeekv2/python-app-exercise)

from unittest.mock import patch, MagicMock

from src.Application.App import App
from src.Services.ApiService import ApiService


@patch('src.Services.ApiService.ApiService')
def test_app_run_fine(api_service_mock: ApiService):
    api_service_instance = MagicMock()
    api_service_instance.run = MagicMock(return_value=True)
    api_service_mock.return_value = api_service_instance
    app = App(api_service_instance)
    assert 0 == app.run()


@patch('src.Services.ApiService.ApiService')
def test_app_run_error(api_service_mock: ApiService):
    api_service_instance = MagicMock()
    api_service_instance.run = MagicMock(return_value=False)
    api_service_mock.return_value = api_service_instance
    app = App(api_service_instance)
    assert -1 == app.run()
# acq4/devices/LightSource/__init__.py (ablot/acq4)

from LightSource import *
# fast_wrap/__init__.py (uppittu11/fast_wrap)

from .fast_wrap import wrap
# maps/views/team.py (acdh-oeaw/diauma)

from django.shortcuts import render


def index(request):
    return render(request, 'maps/team.html')