# File: test/test_feature_selection.py | Repo: gabrielmacedoanac/kgextension | License: MIT
import pandas as pd
import networkx as nx
import pytest
from kgextension.feature_selection import hill_climbing_filter, hierarchy_based_filter, tree_based_filter
from kgextension.generator import specific_relation_generator, direct_type_generator
class TestHillClimbingFilter:
def test1_high_beta(self):
input_df = pd.read_csv("test/data/feature_selection/hill_climbing_test1_input.csv")
input_DG = nx.DiGraph()
labels = ['http://chancellor', 'http://president', 'http://European_politician',
'http://head_of_state', 'http://politician', 'http://man', 'http://person', 'http://being']
input_DG.add_nodes_from(labels)
input_DG.add_edges_from([('http://chancellor', 'http://politician'), ('http://president', 'http://politician'),
('http://chancellor', 'http://head_of_state'), ('http://president', 'http://head_of_state'), ('http://head_of_state', 'http://person'),
('http://European_politician', 'http://politician'), ('http://politician', 'http://person'),
('http://man', 'http://person'), ('http://person', 'http://being')])
expected_df = pd.read_csv("test/data/feature_selection/hill_climbing_test1_expected.csv")
        output_df = hill_climbing_filter(input_df, 'uri_bool_http://class', G=input_DG, beta=0.5, k=2)
pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)
def test2_generator_data_low_beta(self):
df = pd.DataFrame({
'entities': ['Paris', 'Buenos Aires', 'Mannheim', "München"],
'link': ['http://dbpedia.org/resource/Paris', 'http://dbpedia.org/resource/Buenos_Aires',
'http://dbpedia.org/resource/Mannheim', 'http://dbpedia.org/resource/Munich']
})
input_df = specific_relation_generator(
df, columns=['link'], hierarchy_relation='http://www.w3.org/2004/02/skos/core#broader')
expected_df = pd.read_csv("test/data/feature_selection/hill_climbing_test2_expected.csv")
output_df = hill_climbing_filter(input_df, 'link_in_boolean_http://dbpedia.org/resource/Category:Prefectures_in_France', beta=0.05, k=3)
pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)
def test3_nan(self):
input_df = pd.read_csv("test/data/feature_selection/hill_climbing_test3_input.csv")
input_DG = nx.DiGraph()
labels = ['http://chancellor', 'http://president', 'http://European_politician',
'http://head_of_state', 'http://politician', 'http://man', 'http://person', 'http://being']
input_DG.add_nodes_from(labels)
input_DG.add_edges_from([('http://chancellor', 'http://politician'), ('http://president', 'http://politician'),
('http://chancellor', 'http://head_of_state'), ('http://president', 'http://head_of_state'), ('http://head_of_state', 'http://person'),
('http://European_politician', 'http://politician'), ('http://politician', 'http://person'),
('http://man', 'http://person'), ('http://person', 'http://being')])
expected_df = pd.read_csv("test/data/feature_selection/hill_climbing_test3_expected.csv")
        output_df = hill_climbing_filter(input_df, 'class', G=input_DG, beta=0.5, k=2)
pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)
def test4_callable_function(self):
input_df = pd.read_csv("test/data/feature_selection/hill_climbing_test1_input.csv")
input_DG = nx.DiGraph()
labels = ['http://chancellor', 'http://president', 'http://European_politician',
'http://head_of_state', 'http://politician', 'http://man', 'http://person', 'http://being']
input_DG.add_nodes_from(labels)
input_DG.add_edges_from([('http://chancellor', 'http://politician'), ('http://president', 'http://politician'),
('http://chancellor', 'http://head_of_state'), ('http://president', 'http://head_of_state'), ('http://head_of_state', 'http://person'),
('http://European_politician', 'http://politician'), ('http://politician', 'http://person'),
('http://man', 'http://person'), ('http://person', 'http://being')])
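        # A stand-in metric for this test: hill_climbing_filter accepts any callable
        # taking the DataFrame and class column; extra keyword arguments (here `param`)
        # are passed through to it.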
def fake_metric(df, class_col, param=5):
return 1/((df.sum(axis=1)*class_col).sum()/param)
expected_df = pd.read_csv("test/data/feature_selection/hill_climbing_test4_expected.csv")
        output_df = hill_climbing_filter(input_df, 'uri_bool_http://class', metric=fake_metric, G=input_DG, param=6)
pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)
def test5_no_graph(self):
input_df = pd.read_csv("test/data/feature_selection/hill_climbing_test3_input.csv")
with pytest.raises(RuntimeError) as excinfo:
_ = hill_climbing_filter(input_df, 'class', beta=0.5, k=2)
assert "df.attrs['hierarchy]" in str(excinfo.value)
class TestHierarchyBasedFilter:
def test1_no_pruning_info_gain_with_G(self):
df = pd.DataFrame({
'entities': ['Paris', 'Buenos Aires', 'Mannheim', "München"],
'link': ['http://dbpedia.org/resource/Paris', 'http://dbpedia.org/resource/Buenos_Aires',
'http://dbpedia.org/resource/Mannheim', 'http://dbpedia.org/resource/Munich']
})
expected_df = pd.read_csv("test\data\feature_selection\hierarchy_based_test1_expected.csv")
input_df = direct_type_generator(df, ["link"], regex_filter=['A'], result_type="boolean", bundled_mode=True, hierarchy=True)
input_DG = input_df.attrs['hierarchy']
output_df = hierarchy_based_filter(input_df, "link", threshold=0.99, G=input_DG, metric="info_gain", pruning=False)
pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)
def test2_no_pruning_correlation(self):
df = pd.DataFrame({
'entities': ['Paris', 'Buenos Aires', 'Mannheim', "München"],
'link': ['http://dbpedia.org/resource/Paris', 'http://dbpedia.org/resource/Buenos_Aires',
'http://dbpedia.org/resource/Mannheim', 'http://dbpedia.org/resource/Munich']
})
expected_df = pd.read_csv("test\data\feature_selection\hierarchy_based_test2_expected.csv")
input_df = direct_type_generator(df, ["link"], regex_filter=['A'], result_type="boolean", bundled_mode=True, hierarchy=True)
output_df = hierarchy_based_filter(input_df, "link", threshold=0.99, G=input_DG, metric="correlation", pruning=False)
pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)
def test3_pruning_info_gain_all_remove_True(self):
df = pd.DataFrame({
'entities': ['Paris', 'Buenos Aires', 'Mannheim', "München"],
'link': ['http://dbpedia.org/resource/Paris', 'http://dbpedia.org/resource/Buenos_Aires',
'http://dbpedia.org/resource/Mannheim', 'http://dbpedia.org/resource/Munich']
})
expected_df = pd.read_csv("test\data\feature_selection\hierarchy_based_test3_expected.csv")
input_df = direct_type_generator(df, ["link"], regex_filter=['A'], result_type="boolean", bundled_mode=True, hierarchy=True)
input_DG = input_df.attrs['hierarchy']
output_df = hierarchy_based_filter(input_df, "link", G=input_DG, threshold=0.99, metric="info_gain", pruning=True)
pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)
def test4_pruning_correlation_all_remove_True(self):
df = pd.DataFrame({
'entities': ['Paris', 'Buenos Aires', 'Mannheim', "München"],
'link': ['http://dbpedia.org/resource/Paris', 'http://dbpedia.org/resource/Buenos_Aires',
'http://dbpedia.org/resource/Mannheim', 'http://dbpedia.org/resource/Munich']
})
expected_df = pd.read_csv("test\data\feature_selection\hierarchy_based_test4_expected.csv")
input_df = direct_type_generator(df, ["link"], regex_filter=['A'], result_type="boolean", bundled_mode=True, hierarchy=True)
input_DG = input_df.attrs['hierarchy']
output_df = hierarchy_based_filter(input_df, "link", G=input_DG, threshold=0.99, metric="correlation", pruning=True)
        pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)
def test5_pruning_info_gain_all_remove_False(self):
df = pd.DataFrame({
'entities': ['Paris', 'Buenos Aires', 'Mannheim', "München"],
'link': ['http://dbpedia.org/resource/Paris', 'http://dbpedia.org/resource/Buenos_Aires',
'http://dbpedia.org/resource/Mannheim', 'http://dbpedia.org/resource/Munich']
})
expected_df = pd.read_csv("test\data\feature_selection\hierarchy_based_test5_expected.csv")
input_df = direct_type_generator(df, ["link"], regex_filter=['A'], result_type="boolean", bundled_mode=True, hierarchy=True)
input_DG = input_df.attrs['hierarchy']
output_df = hierarchy_based_filter(input_df, "link", G=input_DG, threshold=0.99, metric="info_gain", pruning=True, all_remove=False)
        pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)
def test6_pruning_correlation_all_remove_False(self):
df = pd.DataFrame({
'entities': ['Paris', 'Buenos Aires', 'Mannheim', "München"],
'link': ['http://dbpedia.org/resource/Paris', 'http://dbpedia.org/resource/Buenos_Aires',
'http://dbpedia.org/resource/Mannheim', 'http://dbpedia.org/resource/Munich']
})
expected_df = pd.read_csv("test\data\feature_selection\hierarchy_based_test6_expected.csv")
input_df = direct_type_generator(df, ["link"], regex_filter=['A'], result_type="boolean", bundled_mode=True, hierarchy=True)
input_DG = input_df.attrs['hierarchy']
output_df = hierarchy_based_filter(input_df, "link", G=input_DG, threshold=0.99, metric="correlation", pruning=True,
all_remove=False)
        pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)
def test7_no_input_G(self):
df = pd.DataFrame({
'entities': ['Paris', 'Buenos Aires', 'Mannheim', "München"],
'link': ['http://dbpedia.org/resource/Paris', 'http://dbpedia.org/resource/Buenos_Aires',
'http://dbpedia.org/resource/Mannheim', 'http://dbpedia.org/resource/Munich']
})
expected_df = pd.read_csv("test\data\feature_selection\hierarchy_based_test7_expected.csv")
input_df = direct_type_generator(df, ["link"], regex_filter=['A'], result_type="boolean", bundled_mode=True, hierarchy=True)
output_df = hierarchy_based_filter(input_df, "link", threshold=0.99, metric="correlation", pruning=True,
all_remove=False)
        pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)
def test8_nan(self):
input_df = pd.read_csv("test/data/feature_selection/hill_climbing_test3_input.csv")
input_DG = nx.DiGraph()
labels = ['http://chancellor', 'http://president', 'http://European_politician',
'http://head_of_state', 'http://politician', 'http://man', 'http://person', 'http://being']
input_DG.add_nodes_from(labels)
input_DG.add_edges_from([('http://chancellor', 'http://politician'), ('http://president', 'http://politician'),
('http://chancellor', 'http://head_of_state'), ('http://president', 'http://head_of_state'), ('http://head_of_state', 'http://person'),
('http://European_politician', 'http://politician'), ('http://politician', 'http://person'),
('http://man', 'http://person'), ('http://person', 'http://being')])
expected_df = pd.read_csv("test/data/feature_selection/hierarchy_based_test8_expected.csv")
output_df = hierarchy_based_filter(input_df, 'class', G=input_DG, threshold=0.99, metric="info_gain", pruning=True)
pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)
def test9_callable_function(self):
input_df = pd.read_csv("test/data/feature_selection/hill_climbing_test1_input.csv")
input_DG = nx.DiGraph()
labels = ['http://chancellor', 'http://president', 'http://European_politician',
'http://head_of_state', 'http://politician', 'http://man', 'http://person', 'http://being']
input_DG.add_nodes_from(labels)
input_DG.add_edges_from([('http://chancellor', 'http://politician'), ('http://president', 'http://politician'),
('http://chancellor', 'http://head_of_state'), ('http://president', 'http://head_of_state'), ('http://head_of_state', 'http://person'),
('http://European_politician', 'http://politician'), ('http://politician', 'http://person'),
('http://man', 'http://person'), ('http://person', 'http://being')])
def fake_metric(df_from_hierarchy, l, d):
equivalence = df_from_hierarchy[l] == df_from_hierarchy[d]
return equivalence.sum()/len(equivalence)
expected_df = pd.read_csv("test/data/feature_selection/hierarchy_based_test9_expected.csv")
        output_df = hierarchy_based_filter(input_df, 'uri_bool_http://class', G=input_DG, threshold=0.99, metric=fake_metric, pruning=True)
pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)
class TestTreeBasedFilter:
def test1_lift(self):
input_df = pd.read_csv("test/data/feature_selection/tree_based_test_input.csv")
input_df_dt = direct_type_generator(input_df, ['uri'], hierarchy=True)
expected_df = pd.read_csv("test/data/feature_selection/tree_based_test1_expected.csv")
output_df = tree_based_filter(input_df_dt, 'europe', metric='Lift')
pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)
def test2_ig(self):
input_df = pd.read_csv("test/data/feature_selection/tree_based_test_input.csv")
input_df_dt = direct_type_generator(input_df, ['uri'], hierarchy=True)
expected_df = pd.read_csv("test/data/feature_selection/tree_based_test2_expected.csv")
output_df = tree_based_filter(input_df_dt, 'europe', metric='IG')
pd.testing.assert_frame_equal(output_df, expected_df, check_like=True)

# File: tensorflow/python/ipu/tests/host_embedding_lookup_test.py | Repo: pierricklee/tensorflow | License: Apache-2.0
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import numpy as np
import pva
from tensorflow.compiler.plugin.poplar.tests import test_utils as tu
from tensorflow.python import ipu
from tensorflow.python.client import session as sl
from tensorflow.python.framework import ops
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import variables
from tensorflow.python.ops import variable_scope
from tensorflow.python.platform import googletest
from tensorflow.python.ipu import embedding_ops
from tensorflow.python.ipu.config import IPUConfig
from tensorflow.python.training import gradient_descent as gd
from tensorflow.compiler.plugin.poplar.ops import gen_pop_datastream_ops
class HostEmbeddingLookupTest(test_util.TensorFlowTestCase):
@tu.test_may_use_ipus_or_model(num_ipus=1)
@test_util.deprecated_graph_mode_only
def testDIENShape(self):
shape = [10000000, 20] # 740MB at float32
lookup_count = 4096
def my_net(i):
# lookup
out = gen_pop_datastream_ops.ipu_device_embedding_lookup(
i,
embedding_id="host_embedding",
embedding_shape=shape,
dtype=np.float32)
      # update
gen_pop_datastream_ops.ipu_device_embedding_update_add(
out, out, i, embedding_id="host_embedding", embedding_shape=shape)
self.assertEqual(out.shape, (lookup_count, shape[1]))
return out
with ops.device('cpu'):
i = array_ops.placeholder(np.int32, [lookup_count])
w = variable_scope.get_variable("foo",
dtype=np.float32,
shape=shape,
use_resource=False)
with ipu.scopes.ipu_scope("/device:IPU:0"):
r = ipu.ipu_compiler.compile(my_net, inputs=[i])
cfg = IPUConfig()
cfg.auto_select_ipus = 1
cfg.ipu_model.compile_ipu_code = False
if tu.has_ci_ipus():
tu.add_hw_ci_connection_options(cfg)
else:
report_helper = tu.ReportHelper()
report_helper.set_autoreport_options(cfg)
cfg.configure_ipu_system()
with sl.Session() as sess:
i_h = np.arange(0, lookup_count).reshape([lookup_count])
sess.run(variables.global_variables_initializer())
sess.run(
gen_pop_datastream_ops.ipu_host_embedding_register(
w, "host_embedding"))
result = sess.run([r], {i: i_h})
v = sess.run(
gen_pop_datastream_ops.ipu_host_embedding_deregister(
w, "host_embedding"))
      # Since we updated with the same activations, we expect to see a 2x increase.
self.assertAllClose(result[0][0] * 2, np.take(v, i_h, axis=0))
self.assertEqual(result[0][0].shape, (lookup_count, shape[1]))
if not tu.has_ci_ipus():
report = pva.openReport(report_helper.find_report())
self.assert_max_tile_memory(report, 772, tolerance=0.3)
@tu.test_may_use_ipus_or_model(num_ipus=1)
@test_util.deprecated_graph_mode_only
def testAGIShape(self):
shape = [100000, 200]
lookup_count = 4096
def my_net(i):
# lookup
out = gen_pop_datastream_ops.ipu_device_embedding_lookup(
i,
embedding_id="host_embedding",
embedding_shape=shape,
dtype=np.float32)
      # update
gen_pop_datastream_ops.ipu_device_embedding_update_add(
out, out, i, embedding_id="host_embedding", embedding_shape=shape)
self.assertEqual(out.shape, (lookup_count, shape[1]))
return out
with ops.device('cpu'):
i = array_ops.placeholder(np.int32, [lookup_count])
w = variable_scope.get_variable("foo",
dtype=np.float32,
shape=shape,
use_resource=False)
with ipu.scopes.ipu_scope("/device:IPU:0"):
r = ipu.ipu_compiler.compile(my_net, inputs=[i])
cfg = IPUConfig()
cfg.auto_select_ipus = 1
cfg.ipu_model.compile_ipu_code = False
if tu.has_ci_ipus():
tu.add_hw_ci_connection_options(cfg)
else:
report_helper = tu.ReportHelper()
report_helper.set_autoreport_options(cfg)
cfg.configure_ipu_system()
with sl.Session() as sess:
i_h = np.arange(0, lookup_count).reshape([lookup_count])
sess.run(variables.global_variables_initializer())
sess.run(
gen_pop_datastream_ops.ipu_host_embedding_register(
w, "host_embedding"))
result = sess.run([r], {i: i_h})
v = sess.run(
gen_pop_datastream_ops.ipu_host_embedding_deregister(
w, "host_embedding"))
      # Since we updated with the same activations, we expect to see a 2x increase.
self.assertAllClose(result[0][0] * 2, np.take(v, i_h, axis=0))
self.assertEqual(result[0][0].shape, (lookup_count, shape[1]))
if not tu.has_ci_ipus():
report = pva.openReport(report_helper.find_report())
self.assert_max_tile_memory(report, 5852, tolerance=0.3)
@tu.test_may_use_ipus_or_model(num_ipus=1)
@test_util.deprecated_graph_mode_only
def testTrainNoExec(self):
shape = [100000, 200]
lookup_count = 4096
host_embedding = embedding_ops.create_host_embedding(
"my_host_embedding",
shape,
np.float32,
optimizer_spec=embedding_ops.HostEmbeddingOptimizerSpec(0.5))
def my_net(i):
out = host_embedding.lookup(i)
return out
with ops.device('cpu'):
i = array_ops.placeholder(np.int32, [lookup_count])
with ipu.scopes.ipu_scope("/device:IPU:0"):
r = ipu.ipu_compiler.compile(my_net, inputs=[i])
cfg = IPUConfig()
cfg.auto_select_ipus = 1
cfg.ipu_model.compile_ipu_code = False
if tu.has_ci_ipus():
tu.add_hw_ci_connection_options(cfg)
cfg.configure_ipu_system()
with sl.Session() as sess:
i_h = np.arange(0, lookup_count).reshape([lookup_count])
sess.run(variables.global_variables_initializer())
with host_embedding.register(sess):
# training=False should ignore the number of expected updates.
result = sess.run([r], {i: i_h})
v = sess.run(host_embedding.get_embedding_tensor())
# Check the lookup result, but we are really interested that it doesn't hang.
self.assertAllClose(result[0][0], np.take(v, i_h, axis=0))
@tu.test_may_use_ipus_or_model(num_ipus=1)
@test_util.deprecated_graph_mode_only
def testNoLookup(self):
shape = [100000, 200]
lookup_count = 4096
host_embedding = embedding_ops.create_host_embedding(
"my_host_embedding",
shape,
np.float32,
optimizer_spec=embedding_ops.HostEmbeddingOptimizerSpec(0.5))
def my_net(i):
return i
with ops.device('cpu'):
i = array_ops.placeholder(np.int32, [lookup_count])
with ipu.scopes.ipu_scope("/device:IPU:0"):
r = ipu.ipu_compiler.compile(my_net, inputs=[i])
cfg = IPUConfig()
cfg.auto_select_ipus = 1
cfg.ipu_model.compile_ipu_code = False
if tu.has_ci_ipus():
tu.add_hw_ci_connection_options(cfg)
cfg.configure_ipu_system()
with sl.Session() as sess:
i_h = np.arange(0, lookup_count).reshape([lookup_count])
sess.run(variables.global_variables_initializer())
with host_embedding.register(sess):
result = sess.run([r], {i: i_h})
# Check the indices are correct, but the real test is no timeout.
self.assertAllClose(result[0][0], i_h)
if __name__ == "__main__":
googletest.main()

# File: gammapy/datasets/tests/test_core.py | Repo: grburgess/gammapy | License: BSD-3-Clause
# Licensed under a 3-clause BSD style license - see LICENSE.rst
from __future__ import absolute_import, division, print_function, unicode_literals
# from ..manage import Datasets
#
#
# def test_dataset_manager():
#

# File: pythonforandroid/recipes/vk/tests/test_utils.py | Repo: alexben16/python-for-android | License: MIT
# coding=utf8
from vk.utils import stringify_values
def test_stringify():
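    # stringify_values should collapse list values into comma-separated strings, leaving unicode text intact.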
assert stringify_values({1: ['str', 'str2']}) == {1: 'str,str2'}
assert stringify_values({1: ['str', u'стр2']}) == {1: u'str,стр2'}
assert stringify_values({1: [u'стр', u'стр2']}) == {1: u'стр,стр2'}

# File: examples/server/counter.py | Repo: Pennsieve/streaming-agent | License: Apache-2.0
class Counter:
    def __init__(self, start=0):
        # Begin one below `start` so the first call returns `start` itself.
        self.counter = start - 1

    def __call__(self):
        # Each call advances the counter and returns the new value.
        self.counter += 1
        return self.counter

    def value(self):
        # Peek at the current value without advancing.
        return self.counter
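
# Usage: c = Counter(start=5); the first c() returns 5, the next returns 6,
# and c.value() repeats the latest value without advancing.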

# File: tests/test_stream_xep_0030.py | Repo: calendar42/SleekXMPP--XEP-0080- | License: BSD-3-Clause
import sys
import time
import threading
from sleekxmpp.test import *
class TestStreamDisco(SleekTest):
"""
Test using the XEP-0030 plugin.
"""
def tearDown(self):
self.stream_close()
def testInfoEmptyDefaultNode(self):
"""
Info query result from an entity MUST have at least one identity
and feature, namely http://jabber.org/protocol/disco#info.
Since the XEP-0030 plugin is loaded, a disco response should
be generated and not an error result.
"""
self.stream_start(mode='client',
plugins=['xep_0030'])
self.recv("""
<iq type="get" id="test">
<query xmlns="http://jabber.org/protocol/disco#info" />
</iq>
""")
self.send("""
<iq type="result" id="test">
<query xmlns="http://jabber.org/protocol/disco#info">
<identity category="client" type="bot" />
<feature var="http://jabber.org/protocol/disco#info" />
</query>
</iq>
""")
def testInfoEmptyDefaultNodeComponent(self):
"""
Test requesting an empty, default node using a Component.
"""
self.stream_start(mode='component',
jid='tester.localhost',
plugins=['xep_0030'])
self.recv("""
<iq type="get" id="test">
<query xmlns="http://jabber.org/protocol/disco#info" />
</iq>
""")
self.send("""
<iq type="result" id="test">
<query xmlns="http://jabber.org/protocol/disco#info">
<identity category="component" type="generic" />
<feature var="http://jabber.org/protocol/disco#info" />
</query>
</iq>
""")
def testInfoIncludeNode(self):
"""
Results for info queries directed to a particular node MUST
include the node in the query response.
"""
self.stream_start(mode='client',
plugins=['xep_0030'])
self.xmpp['xep_0030'].static.add_node(node='testing')
self.recv("""
<iq to="tester@localhost" type="get" id="test">
<query xmlns="http://jabber.org/protocol/disco#info"
node="testing" />
</iq>
""")
self.send("""
<iq type="result" id="test">
<query xmlns="http://jabber.org/protocol/disco#info"
node="testing">
</query>
</iq>""",
method='mask')
def testItemsIncludeNode(self):
"""
Results for items queries directed to a particular node MUST
include the node in the query response.
"""
self.stream_start(mode='client',
plugins=['xep_0030'])
self.xmpp['xep_0030'].static.add_node(node='testing')
self.recv("""
<iq to="tester@localhost" type="get" id="test">
<query xmlns="http://jabber.org/protocol/disco#items"
node="testing" />
</iq>
""")
self.send("""
<iq type="result" id="test">
<query xmlns="http://jabber.org/protocol/disco#items"
node="testing">
</query>
</iq>""",
method='mask')
def testDynamicInfoJID(self):
"""
Test using a dynamic info handler for a particular JID.
"""
self.stream_start(mode='client',
plugins=['xep_0030'])
def dynamic_jid(jid, node, iq):
result = self.xmpp['xep_0030'].stanza.DiscoInfo()
result['node'] = node
result.add_identity('client', 'console', name='Dynamic Info')
return result
self.xmpp['xep_0030'].set_node_handler('get_info',
jid='tester@localhost',
handler=dynamic_jid)
self.recv("""
<iq type="get" id="test" to="tester@localhost">
<query xmlns="http://jabber.org/protocol/disco#info"
node="testing" />
</iq>
""")
self.send("""
<iq type="result" id="test">
<query xmlns="http://jabber.org/protocol/disco#info"
node="testing">
<identity category="client"
type="console"
name="Dynamic Info" />
</query>
</iq>
""")
def testDynamicInfoGlobal(self):
"""
Test using a dynamic info handler for all requests.
"""
self.stream_start(mode='component',
jid='tester.localhost',
plugins=['xep_0030'])
def dynamic_global(jid, node, iq):
result = self.xmpp['xep_0030'].stanza.DiscoInfo()
result['node'] = node
result.add_identity('component', 'generic', name='Dynamic Info')
return result
self.xmpp['xep_0030'].set_node_handler('get_info',
handler=dynamic_global)
self.recv("""
<iq type="get" id="test"
to="user@tester.localhost"
from="tester@localhost">
<query xmlns="http://jabber.org/protocol/disco#info"
node="testing" />
</iq>
""")
self.send("""
<iq type="result" id="test"
to="tester@localhost"
from="user@tester.localhost">
<query xmlns="http://jabber.org/protocol/disco#info"
node="testing">
<identity category="component"
type="generic"
name="Dynamic Info" />
</query>
</iq>
""")
def testOverrideJIDInfoHandler(self):
"""Test overriding a JID info handler."""
self.stream_start(mode='client',
plugins=['xep_0030'])
def dynamic_jid(jid, node, iq):
result = self.xmpp['xep_0030'].stanza.DiscoInfo()
result['node'] = node
result.add_identity('client', 'console', name='Dynamic Info')
return result
self.xmpp['xep_0030'].set_node_handler('get_info',
jid='tester@localhost',
handler=dynamic_jid)
self.xmpp['xep_0030'].make_static(jid='tester@localhost',
node='testing')
self.xmpp['xep_0030'].add_identity(jid='tester@localhost',
node='testing',
category='automation',
itype='command-list')
self.recv("""
<iq type="get" id="test" to="tester@localhost">
<query xmlns="http://jabber.org/protocol/disco#info"
node="testing" />
</iq>
""")
self.send("""
<iq type="result" id="test">
<query xmlns="http://jabber.org/protocol/disco#info"
node="testing">
<identity category="automation"
type="command-list" />
</query>
</iq>
""")
def testOverrideGlobalInfoHandler(self):
"""Test overriding the global JID info handler."""
self.stream_start(mode='component',
jid='tester.localhost',
plugins=['xep_0030'])
def dynamic_global(jid, node, iq):
result = self.xmpp['xep_0030'].stanza.DiscoInfo()
result['node'] = node
result.add_identity('component', 'generic', name='Dynamic Info')
return result
self.xmpp['xep_0030'].set_node_handler('get_info',
handler=dynamic_global)
self.xmpp['xep_0030'].make_static(jid='user@tester.localhost',
node='testing')
self.xmpp['xep_0030'].add_feature(jid='user@tester.localhost',
node='testing',
feature='urn:xmpp:ping')
self.recv("""
<iq type="get" id="test"
to="user@tester.localhost"
from="tester@localhost">
<query xmlns="http://jabber.org/protocol/disco#info"
node="testing" />
</iq>
""")
self.send("""
<iq type="result" id="test"
to="tester@localhost"
from="user@tester.localhost">
<query xmlns="http://jabber.org/protocol/disco#info"
node="testing">
<feature var="urn:xmpp:ping" />
</query>
</iq>
""")
def testGetInfoRemote(self):
"""
Test sending a disco#info query to another entity
and receiving the result.
"""
self.stream_start(mode='client',
plugins=['xep_0030'])
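        # Collect the names of events fired on the stream so the test can assert that the disco_info handler ran.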
events = set()
def handle_disco_info(iq):
events.add('disco_info')
self.xmpp.add_event_handler('disco_info', handle_disco_info)
t = threading.Thread(name="get_info",
target=self.xmpp['xep_0030'].get_info,
args=('user@localhost', 'foo'))
t.start()
self.send("""
<iq type="get" to="user@localhost" id="1">
<query xmlns="http://jabber.org/protocol/disco#info"
node="foo" />
</iq>
""")
self.recv("""
<iq type="result" to="tester@localhost" id="1">
<query xmlns="http://jabber.org/protocol/disco#info"
node="foo">
<identity category="client" type="bot" />
<feature var="urn:xmpp:ping" />
</query>
</iq>
""")
# Wait for disco#info request to be received.
t.join()
time.sleep(0.1)
self.assertEqual(events, set(('disco_info',)),
"Disco info event was not triggered: %s" % events)
def testDynamicItemsJID(self):
"""
Test using a dynamic items handler for a particular JID.
"""
self.stream_start(mode='client',
plugins=['xep_0030'])
def dynamic_jid(jid, node, iq):
result = self.xmpp['xep_0030'].stanza.DiscoItems()
result['node'] = node
result.add_item('tester@localhost', node='foo', name='JID')
return result
self.xmpp['xep_0030'].set_node_handler('get_items',
jid='tester@localhost',
handler=dynamic_jid)
self.recv("""
<iq type="get" id="test" to="tester@localhost">
<query xmlns="http://jabber.org/protocol/disco#items"
node="testing" />
</iq>
""")
self.send("""
<iq type="result" id="test">
<query xmlns="http://jabber.org/protocol/disco#items"
node="testing">
<item jid="tester@localhost" node="foo" name="JID" />
</query>
</iq>
""")
def testDynamicItemsGlobal(self):
"""
Test using a dynamic items handler for all requests.
"""
self.stream_start(mode='component',
jid='tester.localhost',
plugins=['xep_0030'])
def dynamic_global(jid, node, iq):
result = self.xmpp['xep_0030'].stanza.DiscoItems()
result['node'] = node
result.add_item('tester@localhost', node='foo', name='Global')
return result
self.xmpp['xep_0030'].set_node_handler('get_items',
handler=dynamic_global)
self.recv("""
<iq type="get" id="test"
to="user@tester.localhost"
from="tester@localhost">
<query xmlns="http://jabber.org/protocol/disco#items"
node="testing" />
</iq>
""")
self.send("""
<iq type="result" id="test"
to="tester@localhost"
from="user@tester.localhost">
<query xmlns="http://jabber.org/protocol/disco#items"
node="testing">
<item jid="tester@localhost" node="foo" name="Global" />
</query>
</iq>
""")
def testOverrideJIDItemsHandler(self):
"""Test overriding a JID items handler."""
self.stream_start(mode='client',
plugins=['xep_0030'])
def dynamic_jid(jid, node, iq):
result = self.xmpp['xep_0030'].stanza.DiscoItems()
result['node'] = node
result.add_item('tester@localhost', node='foo', name='Global')
return result
self.xmpp['xep_0030'].set_node_handler('get_items',
jid='tester@localhost',
handler=dynamic_jid)
self.xmpp['xep_0030'].make_static(jid='tester@localhost',
node='testing')
self.xmpp['xep_0030'].add_item(ijid='tester@localhost',
node='testing',
jid='tester@localhost',
subnode='foo',
name='Test')
self.recv("""
<iq type="get" id="test" to="tester@localhost">
<query xmlns="http://jabber.org/protocol/disco#items"
node="testing" />
</iq>
""")
self.send("""
<iq type="result" id="test">
<query xmlns="http://jabber.org/protocol/disco#items"
node="testing">
<item jid="tester@localhost" node="foo" name="Test" />
</query>
</iq>
""")
def testOverrideGlobalItemsHandler(self):
"""Test overriding the global JID items handler."""
self.stream_start(mode='component',
jid='tester.localhost',
plugins=['xep_0030'])
def dynamic_global(jid, node, iq):
result = self.xmpp['xep_0030'].stanza.DiscoItems()
result['node'] = node
result.add_item('tester.localhost', node='foo', name='Global')
return result
self.xmpp['xep_0030'].set_node_handler('get_items',
handler=dynamic_global)
self.xmpp['xep_0030'].make_static(jid='user@tester.localhost',
node='testing')
self.xmpp['xep_0030'].add_item(ijid='user@tester.localhost',
node='testing',
jid='user@tester.localhost',
subnode='foo',
name='Test')
self.recv("""
<iq type="get" id="test"
to="user@tester.localhost"
from="tester@localhost">
<query xmlns="http://jabber.org/protocol/disco#items"
node="testing" />
</iq>
""")
self.send("""
<iq type="result" id="test"
to="tester@localhost"
from="user@tester.localhost">
<query xmlns="http://jabber.org/protocol/disco#items"
node="testing">
<item jid="user@tester.localhost" node="foo" name="Test" />
</query>
</iq>
""")
def testGetItemsRemote(self):
"""
Test sending a disco#items query to another entity
and receiving the result.
"""
self.stream_start(mode='client',
plugins=['xep_0030'])
events = set()
results = set()
def handle_disco_items(iq):
events.add('disco_items')
results.update(iq['disco_items']['items'])
self.xmpp.add_event_handler('disco_items', handle_disco_items)
t = threading.Thread(name="get_items",
target=self.xmpp['xep_0030'].get_items,
args=('user@localhost', 'foo'))
t.start()
self.send("""
<iq type="get" to="user@localhost" id="1">
<query xmlns="http://jabber.org/protocol/disco#items"
node="foo" />
</iq>
""")
self.recv("""
<iq type="result" to="tester@localhost" id="1">
<query xmlns="http://jabber.org/protocol/disco#items"
node="foo">
<item jid="user@localhost" node="bar" name="Test" />
<item jid="user@localhost" node="baz" name="Test 2" />
</query>
</iq>
""")
# Wait for disco#items request to be received.
t.join()
time.sleep(0.1)
items = set([('user@localhost', 'bar', 'Test'),
('user@localhost', 'baz', 'Test 2')])
self.assertEqual(events, set(('disco_items',)),
"Disco items event was not triggered: %s" % events)
self.assertEqual(results, items,
"Unexpected items: %s" % results)
def testGetItemsIterator(self):
"""Test interaction between XEP-0030 and XEP-0059 plugins."""
raised_exceptions = []
self.stream_start(mode='client',
plugins=['xep_0030', 'xep_0059'])
results = self.xmpp['xep_0030'].get_items(jid='foo@localhost',
node='bar',
iterator=True)
results.amount = 10
def run_test():
try:
results.next()
except StopIteration:
raised_exceptions.append(True)
t = threading.Thread(name="get_items_iterator",
target=run_test)
t.start()
self.send("""
<iq id="2" type="get" to="foo@localhost">
<query xmlns="http://jabber.org/protocol/disco#items"
node="bar">
<set xmlns="http://jabber.org/protocol/rsm">
<max>10</max>
</set>
</query>
</iq>
""")
self.recv("""
<iq id="2" type="result" to="tester@localhost">
<query xmlns="http://jabber.org/protocol/disco#items">
<set xmlns="http://jabber.org/protocol/rsm">
</set>
</query>
</iq>
""")
t.join()
self.assertEqual(raised_exceptions, [True],
"StopIteration was not raised: %s" % raised_exceptions)
suite = unittest.TestLoader().loadTestsFromTestCase(TestStreamDisco)

# File: SwitchTracer/cores/contrib/couriermiddlewares/__init__.py | Repo: IzayoiRin/VirtualVeyonST | License: MIT
from .M.server import *
from .S.clients import *

# File: catcoin/__main__.py | Repo: val-labs/catcoinledger | License: Apache-2.0
import sys, cli; cli.main(*sys.argv[1:])

# File: app/student/__init__.py | Repo: siwl/test_website | License: MIT
from flask import Blueprint
student = Blueprint('student', __name__)
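# Imported at the bottom so the views module can in turn import the `student` blueprint without a circular import.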
from . import views

# File: index_redirect.py | Repo: jcmack/turtle-tango | License: Apache-2.0
print("Status: 302")
print("Location: /static/env/turtle_tango.html")

# File: pyobs/images/processors/astrometry/__init__.py | Repo: pyobs/pyobs-core | License: MIT
"""
Astrometry
----------
"""
from .astrometry import Astrometry
from .dotnet import AstrometryDotNet

# File: pkgs/filetransferutils-pkg/src/genie/libs/filetransferutils/plugins/ios/ftp/fileutils.py | Repo: rohit04saluja/genielibs | License: Apache-2.0
""" File utils base class for FTP on IOS devices. """
from ..fileutils import FileUtils as FileUtilsXEBase
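# The IOS FTP class inherits all behaviour from the IOS-XE base implementation;
# the empty subclass only provides a distinct entry point for the plugin lookup.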
class FileUtils(FileUtilsXEBase):
    pass

# File: testing/unit/tp/account/test_transfer.py | Repo: FerrySchuller/remme-core | License: Apache-2.0
"""
Provide tests for account handler apply method implementation (token transfer).
"""
import time
import pytest
from sawtooth_sdk.processor.exceptions import InvalidTransaction
from sawtooth_sdk.protobuf.processor_pb2 import TpProcessRequest
from sawtooth_sdk.protobuf.transaction_pb2 import (
Transaction,
TransactionHeader,
)
from remme.protos.account_pb2 import (
Account,
AccountMethod,
TransferPayload,
)
from remme.protos.transaction_pb2 import TransactionPayload
from remme.settings import ZERO_ADDRESS
from remme.shared.utils import hash512
from remme.tp.account import AccountHandler
from testing.conftest import create_signer
from testing.mocks.stub import StubContext
from testing.utils.client import proto_error_msg
RANDOM_NODE_PUBLIC_KEY = '039d6881f0a71d05659e1f40b443684b93c7b7c504ea23ea8949ef5216a2236940'
ADDRESS_NOT_ACCOUNT_TYPE = '000000' + 'cfe1b3dc02df0003ac396037f85b98cf9f99b0beae000dc5e9e8b6dab4'
TOKENS_AMOUNT_TO_SEND = 1000
ACCOUNT_FROM_BALANCE = 10000
ACCOUNT_TO_BALANCE = 1000
ACCOUNT_ADDRESS_FROM = '112007d71fa7e120c60fb392a64fd69de891a60c667d9ea9e5d9d9d617263be6c20202'
ACCOUNT_ADDRESS_TO = '1120071db7c02f5731d06df194dc95465e9b277c19e905ce642664a9a0d504a3909e31'
ACCOUNT_FROM_PRIVATE_KEY = '1cb15ecfe1b3dc02df0003ac396037f85b98cf9f99b0beae000dc5e9e8b6dab4'
INPUTS = OUTPUTS = [
ACCOUNT_ADDRESS_FROM,
ACCOUNT_ADDRESS_TO,
]
TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS = {
'family_name': AccountHandler().family_name,
'family_version': AccountHandler()._family_versions[0],
}
def create_context(account_from_balance, account_to_balance):
"""
Create stub context with initial data.
Stub context is an interface around Sawtooth state, consider as database.
State is key-value storage that contains address with its data (i.e. account balance).
References:
- https://github.com/Remmeauth/remme-core/blob/dev/testing/mocks/stub.py
"""
account_protobuf = Account()
account_protobuf.balance = account_from_balance
serialized_account_from_balance = account_protobuf.SerializeToString()
account_protobuf.balance = account_to_balance
serialized_account_to_balance = account_protobuf.SerializeToString()
initial_state = {
ACCOUNT_ADDRESS_FROM: serialized_account_from_balance,
ACCOUNT_ADDRESS_TO: serialized_account_to_balance,
}
return StubContext(inputs=INPUTS, outputs=OUTPUTS, initial_state=initial_state)
def test_account_handler_with_empty_proto():
"""
Case: send transaction request with empty proto
Expect: invalid transaction error
"""
transfer_payload = TransferPayload()
transaction_payload = TransactionPayload()
transaction_payload.method = AccountMethod.TRANSFER
transaction_payload.data = transfer_payload.SerializeToString()
serialized_transaction_payload = transaction_payload.SerializeToString()
transaction_header = TransactionHeader(
signer_public_key=RANDOM_NODE_PUBLIC_KEY,
family_name=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_name'),
family_version=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_version'),
inputs=INPUTS,
outputs=OUTPUTS,
dependencies=[],
payload_sha512=hash512(data=serialized_transaction_payload),
batcher_public_key=RANDOM_NODE_PUBLIC_KEY,
nonce=time.time().hex().encode(),
)
serialized_header = transaction_header.SerializeToString()
transaction_request = TpProcessRequest(
header=transaction_header,
payload=serialized_transaction_payload,
signature=create_signer(private_key=ACCOUNT_FROM_PRIVATE_KEY).sign(serialized_header),
)
mock_context = StubContext(inputs=INPUTS, outputs=OUTPUTS, initial_state={})
with pytest.raises(InvalidTransaction) as error:
AccountHandler().apply(transaction=transaction_request, context=mock_context)
assert proto_error_msg(
TransferPayload,
{
'address_to': ['Missed address'],
'value': ['Could not transfer with zero amount.'],
}
) == str(error.value)
def test_account_handler_apply():
"""
    Case: send a transaction request that transfers tokens to another address through the account handler.
    Expect: the account data stored in state changes according to the transferred amount.
"""
expected_account_from_balance = ACCOUNT_FROM_BALANCE - TOKENS_AMOUNT_TO_SEND
expected_account_to_balance = ACCOUNT_TO_BALANCE + TOKENS_AMOUNT_TO_SEND
account_protobuf = Account()
account_protobuf.balance = expected_account_from_balance
expected_serialized_account_from_balance = account_protobuf.SerializeToString()
account_protobuf.balance = expected_account_to_balance
expected_serialized_account_to_balance = account_protobuf.SerializeToString()
expected_state = {
ACCOUNT_ADDRESS_FROM: expected_serialized_account_from_balance,
ACCOUNT_ADDRESS_TO: expected_serialized_account_to_balance,
}
transfer_payload = TransferPayload()
transfer_payload.address_to = ACCOUNT_ADDRESS_TO
transfer_payload.value = TOKENS_AMOUNT_TO_SEND
transaction_payload = TransactionPayload()
transaction_payload.method = AccountMethod.TRANSFER
transaction_payload.data = transfer_payload.SerializeToString()
serialized_transaction_payload = transaction_payload.SerializeToString()
transaction_header = TransactionHeader(
signer_public_key=RANDOM_NODE_PUBLIC_KEY,
family_name=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_name'),
family_version=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_version'),
inputs=INPUTS,
outputs=OUTPUTS,
dependencies=[],
payload_sha512=hash512(data=serialized_transaction_payload),
batcher_public_key=RANDOM_NODE_PUBLIC_KEY,
nonce=time.time().hex().encode(),
)
serialized_header = transaction_header.SerializeToString()
transaction_request = TpProcessRequest(
header=transaction_header,
payload=serialized_transaction_payload,
signature=create_signer(private_key=ACCOUNT_FROM_PRIVATE_KEY).sign(serialized_header),
)
mock_context = create_context(account_from_balance=ACCOUNT_FROM_BALANCE, account_to_balance=ACCOUNT_TO_BALANCE)
AccountHandler().apply(transaction=transaction_request, context=mock_context)
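    # Read both accounts back from state and compare against the expected serialized balances.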
state_as_list = mock_context.get_state(addresses=[ACCOUNT_ADDRESS_TO, ACCOUNT_ADDRESS_FROM])
state_as_dict = {entry.address: entry.data for entry in state_as_list}
assert expected_state == state_as_dict
def test_account_handler_apply_invalid_transfer_method():
"""
Case: send transaction request, to send tokens to address, to account handler with invalid transfer method value.
Expect: invalid transaction error is raised with invalid account method value error message.
"""
account_method_impossible_value = 5347
transfer_payload = TransferPayload()
transfer_payload.address_to = ACCOUNT_ADDRESS_TO
transfer_payload.value = TOKENS_AMOUNT_TO_SEND
transaction_payload = TransactionPayload()
transaction_payload.method = account_method_impossible_value
transaction_payload.data = transfer_payload.SerializeToString()
serialized_transaction_payload = transaction_payload.SerializeToString()
transaction_header = TransactionHeader(
signer_public_key=RANDOM_NODE_PUBLIC_KEY,
family_name=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_name'),
family_version=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_version'),
inputs=INPUTS,
outputs=OUTPUTS,
dependencies=[],
payload_sha512=hash512(data=serialized_transaction_payload),
batcher_public_key=RANDOM_NODE_PUBLIC_KEY,
nonce=time.time().hex().encode(),
)
serialized_header = transaction_header.SerializeToString()
transaction_request = TpProcessRequest(
header=transaction_header,
payload=serialized_transaction_payload,
signature=create_signer(private_key=ACCOUNT_FROM_PRIVATE_KEY).sign(serialized_header),
)
mock_context = create_context(account_from_balance=ACCOUNT_FROM_BALANCE, account_to_balance=ACCOUNT_TO_BALANCE)
with pytest.raises(InvalidTransaction) as error:
AccountHandler().apply(transaction=transaction_request, context=mock_context)
assert f'Invalid account method value ({account_method_impossible_value}) has been set.' == str(error.value)
def test_account_handler_apply_decode_error():
"""
Case: send transaction request, to send tokens to address, to account handler with invalid transaction payload.
Expect: invalid transaction error is raised cannot decode transaction payload error message.
"""
serialized_not_valid_transaction_payload = b'F1120071db7c02f5731d06df194dc95465e9b27'
transaction_header = TransactionHeader(
signer_public_key=RANDOM_NODE_PUBLIC_KEY,
family_name=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_name'),
family_version=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_version'),
inputs=INPUTS,
outputs=OUTPUTS,
dependencies=[],
payload_sha512=hash512(data=serialized_not_valid_transaction_payload),
batcher_public_key=RANDOM_NODE_PUBLIC_KEY,
nonce=time.time().hex().encode(),
)
serialized_header = transaction_header.SerializeToString()
transaction_request = TpProcessRequest(
header=transaction_header,
payload=serialized_not_valid_transaction_payload,
signature=create_signer(private_key=ACCOUNT_FROM_PRIVATE_KEY).sign(serialized_header),
)
mock_context = create_context(account_from_balance=ACCOUNT_FROM_BALANCE, account_to_balance=ACCOUNT_TO_BALANCE)
with pytest.raises(InvalidTransaction) as error:
AccountHandler().apply(transaction=transaction_request, context=mock_context)
assert 'Cannot decode transaction payload.' == str(error.value)
def test_account_transfer_from_address():
"""
Case: transfer tokens from address to address.
Expect: account's balances, stored in state, are changed according to transfer amount.
"""
expected_account_from_balance = ACCOUNT_FROM_BALANCE - TOKENS_AMOUNT_TO_SEND
expected_account_to_balance = ACCOUNT_TO_BALANCE + TOKENS_AMOUNT_TO_SEND
transfer_payload = TransferPayload()
transfer_payload.address_to = ACCOUNT_ADDRESS_TO
transfer_payload.value = TOKENS_AMOUNT_TO_SEND
transaction_payload = TransactionPayload()
transaction_payload.method = AccountMethod.TRANSFER
transaction_payload.data = transfer_payload.SerializeToString()
serialized_transaction_payload = transaction_payload.SerializeToString()
transaction_header = TransactionHeader(
signer_public_key=RANDOM_NODE_PUBLIC_KEY,
family_name=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_name'),
family_version=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_version'),
inputs=INPUTS,
outputs=OUTPUTS,
dependencies=[],
payload_sha512=hash512(data=serialized_transaction_payload),
batcher_public_key=RANDOM_NODE_PUBLIC_KEY,
nonce=time.time().hex().encode(),
)
serialized_header = transaction_header.SerializeToString()
transaction_request = TpProcessRequest(
header=transaction_header,
payload=serialized_transaction_payload,
signature=create_signer(private_key=ACCOUNT_FROM_PRIVATE_KEY).sign(serialized_header),
)
mock_context = create_context(account_from_balance=ACCOUNT_FROM_BALANCE, account_to_balance=ACCOUNT_TO_BALANCE)
AccountHandler().apply(transaction=transaction_request, context=mock_context)
state_as_list = mock_context.get_state(addresses=[
ACCOUNT_ADDRESS_FROM, ACCOUNT_ADDRESS_TO,
])
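    # Deserialize each state entry back into an Account message to inspect the balances.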
state_as_dict = {}
for entry in state_as_list:
acc = Account()
acc.ParseFromString(entry.data)
state_as_dict[entry.address] = acc
assert state_as_dict.get(ACCOUNT_ADDRESS_FROM, Account()).balance == expected_account_from_balance
assert state_as_dict.get(ACCOUNT_ADDRESS_TO, Account()).balance == expected_account_to_balance
def test_account_transfer_from_address_zero_amount():
"""
Case: transfer zero tokens from address to address.
Expect: invalid transaction error is raised with could not transfer with zero amount error message.
"""
mock_context = create_context(account_from_balance=ACCOUNT_FROM_BALANCE, account_to_balance=ACCOUNT_TO_BALANCE)
transfer_payload = TransferPayload()
transfer_payload.address_to = ACCOUNT_ADDRESS_TO
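    # A zero-value transfer must be rejected by payload validation.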
transfer_payload.value = 0
transaction_payload = TransactionPayload()
transaction_payload.method = AccountMethod.TRANSFER
transaction_payload.data = transfer_payload.SerializeToString()
serialized_transaction_payload = transaction_payload.SerializeToString()
transaction_header = TransactionHeader(
signer_public_key=RANDOM_NODE_PUBLIC_KEY,
family_name=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_name'),
family_version=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_version'),
inputs=INPUTS,
outputs=OUTPUTS,
dependencies=[],
payload_sha512=hash512(data=serialized_transaction_payload),
batcher_public_key=RANDOM_NODE_PUBLIC_KEY,
nonce=time.time().hex().encode(),
)
serialized_header = transaction_header.SerializeToString()
transaction_request = TpProcessRequest(
header=transaction_header,
payload=serialized_transaction_payload,
signature=create_signer(private_key=ACCOUNT_FROM_PRIVATE_KEY).sign(serialized_header),
)
with pytest.raises(InvalidTransaction) as error:
AccountHandler().apply(transaction=transaction_request, context=mock_context)
assert proto_error_msg(
TransferPayload,
{
'value': ['Could not transfer with zero amount.'],
}
) == str(error.value)
def test_account_transfer_from_address_to_address_not_account_type():
"""
Case: transfer tokens from address to address that is not account type.
Expect: invalid transaction error is raised with receiver address is not account type error message.
"""
mock_context = create_context(account_from_balance=ACCOUNT_FROM_BALANCE, account_to_balance=ACCOUNT_TO_BALANCE)
transfer_payload = TransferPayload()
transfer_payload.address_to = ADDRESS_NOT_ACCOUNT_TYPE
transfer_payload.value = TOKENS_AMOUNT_TO_SEND
transaction_payload = TransactionPayload()
transaction_payload.method = AccountMethod.TRANSFER
transaction_payload.data = transfer_payload.SerializeToString()
serialized_transaction_payload = transaction_payload.SerializeToString()
transaction_header = TransactionHeader(
signer_public_key=RANDOM_NODE_PUBLIC_KEY,
family_name=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_name'),
family_version=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_version'),
inputs=INPUTS,
outputs=OUTPUTS,
dependencies=[],
payload_sha512=hash512(data=serialized_transaction_payload),
batcher_public_key=RANDOM_NODE_PUBLIC_KEY,
nonce=time.time().hex().encode(),
)
serialized_header = transaction_header.SerializeToString()
transaction_request = TpProcessRequest(
header=transaction_header,
payload=serialized_transaction_payload,
signature=create_signer(private_key=ACCOUNT_FROM_PRIVATE_KEY).sign(serialized_header),
)
with pytest.raises(InvalidTransaction) as error:
AccountHandler().apply(transaction=transaction_request, context=mock_context)
assert proto_error_msg(
TransferPayload,
{
'address_to': ['Address is not of a blockchain token type.'],
}
) == str(error.value)
def test_account_transfer_from_address_send_to_itself():
"""
Case: transfer tokens from address to the same address.
Expect: invalid transaction error is raised with account cannot send tokens to itself error message.
"""
mock_context = create_context(account_from_balance=ACCOUNT_FROM_BALANCE, account_to_balance=ACCOUNT_TO_BALANCE)
transfer_payload = TransferPayload()
transfer_payload.address_to = ACCOUNT_ADDRESS_FROM
transfer_payload.value = TOKENS_AMOUNT_TO_SEND
transaction_payload = TransactionPayload()
transaction_payload.method = AccountMethod.TRANSFER
transaction_payload.data = transfer_payload.SerializeToString()
serialized_transaction_payload = transaction_payload.SerializeToString()
transaction_header = TransactionHeader(
signer_public_key=RANDOM_NODE_PUBLIC_KEY,
family_name=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_name'),
family_version=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_version'),
inputs=INPUTS,
outputs=OUTPUTS,
dependencies=[],
payload_sha512=hash512(data=serialized_transaction_payload),
batcher_public_key=RANDOM_NODE_PUBLIC_KEY,
nonce=time.time().hex().encode(),
)
serialized_header = transaction_header.SerializeToString()
transaction_request = TpProcessRequest(
header=transaction_header,
payload=serialized_transaction_payload,
signature=create_signer(private_key=ACCOUNT_FROM_PRIVATE_KEY).sign(serialized_header),
)
with pytest.raises(InvalidTransaction) as error:
AccountHandler().apply(transaction=transaction_request, context=mock_context)
    assert 'Account cannot send tokens to itself.' == str(error.value)
def test_account_transfer_from_address_without_tokens():
"""
Case: transfer tokens from address with zero tokens amount to address.
Expect: invalid transaction error is raised with not enough transferable balance error message.
"""
mock_context = create_context(account_from_balance=0, account_to_balance=ACCOUNT_TO_BALANCE)
transfer_payload = TransferPayload()
transfer_payload.address_to = ACCOUNT_ADDRESS_TO
transfer_payload.value = TOKENS_AMOUNT_TO_SEND
transaction_payload = TransactionPayload()
transaction_payload.method = AccountMethod.TRANSFER
transaction_payload.data = transfer_payload.SerializeToString()
serialized_transaction_payload = transaction_payload.SerializeToString()
transaction_header = TransactionHeader(
signer_public_key=RANDOM_NODE_PUBLIC_KEY,
family_name=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_name'),
family_version=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_version'),
inputs=INPUTS,
outputs=OUTPUTS,
dependencies=[],
payload_sha512=hash512(data=serialized_transaction_payload),
batcher_public_key=RANDOM_NODE_PUBLIC_KEY,
nonce=time.time().hex().encode(),
)
serialized_header = transaction_header.SerializeToString()
transaction_request = TpProcessRequest(
header=transaction_header,
payload=serialized_transaction_payload,
signature=create_signer(private_key=ACCOUNT_FROM_PRIVATE_KEY).sign(serialized_header),
)
with pytest.raises(InvalidTransaction) as error:
AccountHandler().apply(transaction=transaction_request, context=mock_context)
assert 'Not enough transferable balance. Sender\'s current balance: 0.' == str(error.value)
def test_account_transfer_from_address_without_previous_usage():
"""
Case: transfer tokens from address to address when them have never been used before.
Expect: invalid transaction error is raised with not enough transferable balance error message.
"""
initial_state = {
ACCOUNT_ADDRESS_FROM: None,
ACCOUNT_ADDRESS_TO: None,
}
mock_context = StubContext(inputs=INPUTS, outputs=OUTPUTS, initial_state=initial_state)
transfer_payload = TransferPayload()
transfer_payload.address_to = ACCOUNT_ADDRESS_TO
transfer_payload.value = TOKENS_AMOUNT_TO_SEND
transaction_payload = TransactionPayload()
transaction_payload.method = AccountMethod.TRANSFER
transaction_payload.data = transfer_payload.SerializeToString()
serialized_transaction_payload = transaction_payload.SerializeToString()
transaction_header = TransactionHeader(
signer_public_key=RANDOM_NODE_PUBLIC_KEY,
family_name=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_name'),
family_version=TRANSACTION_REQUEST_ACCOUNT_HANDLER_PARAMS.get('family_version'),
inputs=INPUTS,
outputs=OUTPUTS,
dependencies=[],
payload_sha512=hash512(data=serialized_transaction_payload),
batcher_public_key=RANDOM_NODE_PUBLIC_KEY,
nonce=time.time().hex().encode(),
)
serialized_header = transaction_header.SerializeToString()
transaction_request = TpProcessRequest(
header=transaction_header,
payload=serialized_transaction_payload,
signature=create_signer(private_key=ACCOUNT_FROM_PRIVATE_KEY).sign(serialized_header),
)
with pytest.raises(InvalidTransaction) as error:
AccountHandler().apply(transaction=transaction_request, context=mock_context)
    assert 'Not enough transferable balance. Sender\'s current balance: 0.' == str(error.value)
| 39.091075 | 117 | 0.769489 | 2,312 | 21,461 | 6.758218 | 0.077855 | 0.079488 | 0.029696 | 0.025536 | 0.84672 | 0.82016 | 0.787712 | 0.75776 | 0.751424 | 0.726208 | 0 | 0.01825 | 0.160011 | 21,461 | 548 | 118 | 39.162409 | 0.848505 | 0.092587 | 0 | 0.672043 | 0 | 0 | 0.05378 | 0.020889 | 0.002688 | 0 | 0 | 0 | 0.02957 | 1 | 0.02957 | false | 0 | 0.034946 | 0 | 0.067204 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4c3a7c9949a60f2b43eea8e39afc286322165f98 | 217 | py | Python | macauff/__init__.py | Onoddil/macauff | 6184b110811dfd8a3c0ccc39e660806b3b886eac | [
"BSD-3-Clause"
] | 5 | 2021-03-03T22:03:03.000Z | 2022-03-11T05:42:18.000Z | macauff/__init__.py | Onoddil/macauff | 6184b110811dfd8a3c0ccc39e660806b3b886eac | [
"BSD-3-Clause"
] | 8 | 2020-07-09T09:26:17.000Z | 2022-03-30T14:24:11.000Z | macauff/__init__.py | Onoddil/macauff | 6184b110811dfd8a3c0ccc39e660806b3b886eac | [
"BSD-3-Clause"
] | 1 | 2022-02-09T14:01:43.000Z | 2022-02-09T14:01:43.000Z | from .matching import *
from .perturbation_auf import *
from .group_sources import *
from .make_set_list import *
from .misc_functions import *
from .photometric_likelihood import *
from .counterpart_pairing import *
| 27.125 | 37 | 0.806452 | 28 | 217 | 6 | 0.571429 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 217 | 7 | 38 | 31 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4c541dffba0c2033e3bef66395b12570bdb8e854 | 210 | py | Python | iwpt2020/mk_token_num.py | attardi/iwpt-shared-task-2020 | 3a70c42d53716678776afcccf02d896655777353 | [
"Apache-2.0"
] | 3 | 2020-06-16T12:58:57.000Z | 2021-06-07T21:07:37.000Z | iwpt2020/mk_token_num.py | attardi/iwpt-shared-task-2020 | 3a70c42d53716678776afcccf02d896655777353 | [
"Apache-2.0"
] | 6 | 2020-06-22T07:46:49.000Z | 2022-02-10T02:22:14.000Z | iwpt2020/mk_token_num.py | attardi/iwpt-shared-task-2020 | 3a70c42d53716678776afcccf02d896655777353 | [
"Apache-2.0"
] | 2 | 2020-06-27T07:32:43.000Z | 2020-11-10T07:21:03.000Z | # -*- coding:utf-8 -*-
# Author: hankcs
# Date: 2020-05-06 20:58
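# Join the token counts with ' & ' to produce a LaTeX table row.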
text = '30.8K 15.7K 220.5K 46.3K 58.7K 36.9K 35.3K 11.2K 10.8K 26.4K 22.6K 65.7K 117.4K 13.0K 39.3K 2.1K 17.1K'
print(' & '.join(text.split())) | 30 | 111 | 0.614286 | 50 | 210 | 2.58 | 0.82 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.369318 | 0.161905 | 210 | 7 | 112 | 30 | 0.363636 | 0.27619 | 0 | 0 | 0 | 0.5 | 0.704698 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d5e4dfa1a3d8abaa6957efa9da0eddffc6c06279 | 74 | py | Python | src/yypget/executor/__init__.py | ysouyno/yypget | c3a20be61546d3f59bf690f09857aec8204460c7 | [
"MIT"
] | null | null | null | src/yypget/executor/__init__.py | ysouyno/yypget | c3a20be61546d3f59bf690f09857aec8204460c7 | [
"MIT"
] | null | null | null | src/yypget/executor/__init__.py | ysouyno/yypget | c3a20be61546d3f59bf690f09857aec8204460c7 | [
"MIT"
] | null | null | null | from .sv_baidu import *
from .wenku_baidu import *
from .tv_sohu import *
| 18.5 | 26 | 0.756757 | 12 | 74 | 4.416667 | 0.583333 | 0.415094 | 0.566038 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162162 | 74 | 3 | 27 | 24.666667 | 0.854839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
9103824415e9a5020d033ef27f58f38104b99308 | 4,907 | py | Python | lib/backbone/build_regnet_dropblock.py | yaopengUSTC/mbit-skin-cancer | a82a87b2abebaf724dbe2a7b7e833c434c1b56a0 | [
"MIT"
] | 3 | 2022-01-23T05:27:43.000Z | 2022-03-08T07:29:25.000Z | lib/backbone/build_regnet_dropblock.py | yaopengUSTC/mbit-skin-cancer | a82a87b2abebaf724dbe2a7b7e833c434c1b56a0 | [
"MIT"
] | null | null | null | lib/backbone/build_regnet_dropblock.py | yaopengUSTC/mbit-skin-cancer | a82a87b2abebaf724dbe2a7b7e833c434c1b56a0 | [
"MIT"
] | null | null | null | from .regnet_dropblock import RegNet_DropBlock
import torch
import os
import yaml
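# ImageNet preprocessing settings shared by all RegNet variants below.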
pretrained_settings = {
'regnet': {
'imagenet': {
'input_space': 'RGB',
'input_size': [3, 224, 224],
'input_range': [0, 1],
'mean': [0.485, 0.456, 0.406],
'std': [0.229, 0.224, 0.225],
'num_classes': 1000
}
},
}
def initialize_pretrained_model(model, num_classes, settings):
assert num_classes == settings['num_classes'], 'num_classes should be {}, but is {}'.format(
settings['num_classes'], num_classes)
model.input_space = settings['input_space']
model.input_size = settings['input_size']
model.input_range = settings['input_range']
model.mean = settings['mean']
model.std = settings['std']
def RegNetY_8_0_0_MF_DropBlock(num_classes=1000, pretrained='imagenet'):
f = open('../lib/backbone/regnet_yaml/RegNetY-800MF_dds_8gpu.yaml')
config = yaml.load(f, Loader=yaml.FullLoader)
model = RegNet_DropBlock(config)
# print(model)
last_checkpoint = '../pretrained_models/RegNetY-800MF_dds_8gpu.pyth'
err_str = "Checkpoint '{}' not found"
assert os.path.exists(last_checkpoint), err_str.format(last_checkpoint)
checkpoint = torch.load(last_checkpoint, map_location="cpu")
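    # The checkpoint stores the trained weights under the "model_state" key.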
model.load_state_dict(checkpoint["model_state"])
print('have loaded checkpoint from {}'.format(last_checkpoint))
if pretrained is not None:
settings = pretrained_settings['regnet'][pretrained]
initialize_pretrained_model(model, num_classes, settings)
return model
def RegNetY_1_6_GF_DropBlock(num_classes=1000, pretrained='imagenet'):
f = open('../lib/backbone/regnet_yaml/RegNetY-1.6GF_dds_8gpu.yaml')
config = yaml.load(f, Loader=yaml.FullLoader)
model = RegNet_DropBlock(config)
# print(model)
last_checkpoint = '../pretrained_models/RegNetY-1.6GF_dds_8gpu.pyth'
err_str = "Checkpoint '{}' not found"
assert os.path.exists(last_checkpoint), err_str.format(last_checkpoint)
checkpoint = torch.load(last_checkpoint, map_location="cpu")
model.load_state_dict(checkpoint["model_state"])
print('have loaded checkpoint from {}'.format(last_checkpoint))
if pretrained is not None:
settings = pretrained_settings['regnet'][pretrained]
initialize_pretrained_model(model, num_classes, settings)
return model
def RegNetY_3_2_GF_DropBlock(num_classes=1000, pretrained='imagenet'):
f = open('../lib/backbone/regnet_yaml/RegNetY-3.2GF_dds_8gpu.yaml')
config = yaml.load(f, Loader=yaml.FullLoader)
model = RegNet_DropBlock(config)
# print(model)
last_checkpoint = '../pretrained_models/RegNetY-3.2GF_dds_8gpu.pyth'
err_str = "Checkpoint '{}' not found"
assert os.path.exists(last_checkpoint), err_str.format(last_checkpoint)
checkpoint = torch.load(last_checkpoint, map_location="cpu")
model.load_state_dict(checkpoint["model_state"])
print('have loaded checkpoint from {}'.format(last_checkpoint))
if pretrained is not None:
settings = pretrained_settings['regnet'][pretrained]
initialize_pretrained_model(model, num_classes, settings)
return model
def RegNetY_8_0_GF_DropBlock(num_classes=1000, pretrained='imagenet'):
f = open('../lib/backbone/regnet_yaml/RegNetY-8.0GF_dds_8gpu.yaml')
config = yaml.load(f, Loader=yaml.FullLoader)
model = RegNet_DropBlock(config)
# print(model)
last_checkpoint = '../pretrained_models/RegNetY-8.0GF_dds_8gpu.pyth'
err_str = "Checkpoint '{}' not found"
assert os.path.exists(last_checkpoint), err_str.format(last_checkpoint)
checkpoint = torch.load(last_checkpoint, map_location="cpu")
model.load_state_dict(checkpoint["model_state"])
print('have loaded checkpoint from {}'.format(last_checkpoint))
if pretrained is not None:
settings = pretrained_settings['regnet'][pretrained]
initialize_pretrained_model(model, num_classes, settings)
return model
def RegNetY_1_6_0_GF_DropBlock(num_classes=1000, pretrained='imagenet'):
f = open('../lib/backbone/regnet_yaml/RegNetY-16GF_dds_8gpu.yaml')
config = yaml.load(f, Loader=yaml.FullLoader)
model = RegNet_DropBlock(config)
# print(model)
last_checkpoint = '../pretrained_models/RegNetY-16GF_dds_8gpu.pyth'
err_str = "Checkpoint '{}' not found"
assert os.path.exists(last_checkpoint), err_str.format(last_checkpoint)
checkpoint = torch.load(last_checkpoint, map_location="cpu")
model.load_state_dict(checkpoint["model_state"])
print('have loaded checkpoint from {}'.format(last_checkpoint))
if pretrained is not None:
settings = pretrained_settings['regnet'][pretrained]
initialize_pretrained_model(model, num_classes, settings)
return model
if __name__ == "__main__":
RegNetY_3_2_GF_DropBlock()
print('success')
| 40.891667 | 96 | 0.713674 | 625 | 4,907 | 5.3312 | 0.1328 | 0.105042 | 0.060024 | 0.054022 | 0.87455 | 0.843938 | 0.843938 | 0.829532 | 0.829532 | 0.829532 | 0 | 0.025079 | 0.163032 | 4,907 | 119 | 97 | 41.235294 | 0.786219 | 0.013043 | 0 | 0.56701 | 0 | 0 | 0.228701 | 0.106079 | 0 | 0 | 0 | 0 | 0.061856 | 1 | 0.061856 | false | 0 | 0.041237 | 0 | 0.154639 | 0.061856 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
910a70e073c855717b451a7f15365abcdfaa26c7 | 192 | py | Python | old/lib/games_puzzles_algorithms/players/rule_based/first_action_agent.py | feiooo/games-puzzles-algorithms | 66d97135d163fb04e820338068d9bd9e12d907e9 | [
"MIT"
] | null | null | null | old/lib/games_puzzles_algorithms/players/rule_based/first_action_agent.py | feiooo/games-puzzles-algorithms | 66d97135d163fb04e820338068d9bd9e12d907e9 | [
"MIT"
] | null | null | null | old/lib/games_puzzles_algorithms/players/rule_based/first_action_agent.py | feiooo/games-puzzles-algorithms | 66d97135d163fb04e820338068d9bd9e12d907e9 | [
"MIT"
] | null | null | null | class FirstActionAgent(object):
"""docstring for FirstActionAgent"""
def select_action(self, state, **_):
return next(state.legal_actions())
def reset(self):
pass
| 24 | 42 | 0.651042 | 20 | 192 | 6.1 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.223958 | 192 | 7 | 43 | 27.428571 | 0.818792 | 0.15625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.2 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
911f7457664067fd1a45870cbbe9a5ec20d0742b | 79 | py | Python | microscopes/hmm/model.py | datamicroscopes/hmm | 1dadc5456d33023c6c86eb29ad69200fd3b296da | [
"BSD-3-Clause"
] | 9 | 2015-04-21T07:08:14.000Z | 2020-11-07T17:45:08.000Z | microscopes/hmm/model.py | jamezhetianswang/hmm | 1dadc5456d33023c6c86eb29ad69200fd3b296da | [
"BSD-3-Clause"
] | 10 | 2015-04-08T20:42:49.000Z | 2015-04-17T21:10:22.000Z | microscopes/hmm/model.py | jamezhetianswang/hmm | 1dadc5456d33023c6c86eb29ad69200fd3b296da | [
"BSD-3-Clause"
] | 7 | 2015-04-10T18:53:26.000Z | 2021-09-24T06:58:40.000Z | from microscopes.hmm._model import state
from microscopes.common.rng import rng | 39.5 | 40 | 0.860759 | 12 | 79 | 5.583333 | 0.666667 | 0.447761 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088608 | 79 | 2 | 41 | 39.5 | 0.930556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9129713ad7aca0663a701ad5c98f5c81479ff829 | 57 | py | Python | redis_db/victims/__init__.py | mica-framework/server | 5ae35f0e2cfeefb94ab7b8aaf6f9ab36a2fb1862 | [
"MIT"
] | null | null | null | redis_db/victims/__init__.py | mica-framework/server | 5ae35f0e2cfeefb94ab7b8aaf6f9ab36a2fb1862 | [
"MIT"
] | null | null | null | redis_db/victims/__init__.py | mica-framework/server | 5ae35f0e2cfeefb94ab7b8aaf6f9ab36a2fb1862 | [
"MIT"
] | null | null | null | # get all components
from . import get
from . import set
| 14.25 | 20 | 0.736842 | 9 | 57 | 4.666667 | 0.666667 | 0.47619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 57 | 3 | 21 | 19 | 0.933333 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e6a6afecd0b30239b3a618f12f92782b6bc79f58 | 47 | py | Python | src/apps/distributed_efforts/models/__init__.py | sanderland/katago-server | 6414fab080d007c05068a06ff4f25907b92848bd | [
"MIT"
] | 27 | 2020-05-03T11:01:27.000Z | 2022-03-17T05:33:10.000Z | src/apps/distributed_efforts/models/__init__.py | sanderland/katago-server | 6414fab080d007c05068a06ff4f25907b92848bd | [
"MIT"
] | 54 | 2020-05-09T01:18:41.000Z | 2022-01-22T10:31:15.000Z | src/apps/distributed_efforts/models/__init__.py | sanderland/katago-server | 6414fab080d007c05068a06ff4f25907b92848bd | [
"MIT"
] | 9 | 2020-09-29T11:31:32.000Z | 2022-03-09T01:37:50.000Z | from .user_last_version import UserLastVersion
| 23.5 | 46 | 0.893617 | 6 | 47 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 1 | 47 | 47 | 0.930233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fc0c4d28f857bc6cbaa6aab0a67633a16b9d2b9e | 25,337 | py | Python | tools/accuracy_checker/tests/test_metric_evaluator.py | apankratovantonp/open_model_zoo | e372d4173e50741a6828cda415d55c37320f89cd | [
"Apache-2.0"
] | 5 | 2020-03-09T07:39:04.000Z | 2021-08-16T07:17:28.000Z | tools/accuracy_checker/tests/test_metric_evaluator.py | ananda89/open_model_zoo | e372d4173e50741a6828cda415d55c37320f89cd | [
"Apache-2.0"
] | 6 | 2020-09-26T01:24:39.000Z | 2022-02-10T02:16:03.000Z | tools/accuracy_checker/tests/test_metric_evaluator.py | ananda89/open_model_zoo | e372d4173e50741a6828cda415d55c37320f89cd | [
"Apache-2.0"
] | 3 | 2020-07-06T08:45:26.000Z | 2020-11-12T10:14:45.000Z | """
Copyright (c) 2019 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import pytest
from accuracy_checker.config import ConfigError
from accuracy_checker.metrics import ClassificationAccuracy, MetricsExecutor
from accuracy_checker.metrics.metric import Metric
from accuracy_checker.representation import (
ClassificationAnnotation,
ClassificationPrediction,
ContainerAnnotation,
ContainerPrediction,
DetectionAnnotation,
DetectionPrediction
)
from .common import DummyDataset
class TestMetric:
def setup_method(self):
self.module = 'accuracy_checker.metrics.metric_evaluator'
def test_missed_metrics_raises_config_error_exception(self):
with pytest.raises(ConfigError):
MetricsExecutor([], None)
def test_metrics_with_empty_entry_raises_config_error_exception(self):
with pytest.raises(ConfigError):
MetricsExecutor([{}], None)
def test_missed_metric_type_raises_config_error_exception(self):
with pytest.raises(ConfigError):
MetricsExecutor([{'undefined': ''}], None)
def test_undefined_metric_type_raises_config_error_exception(self):
with pytest.raises(ConfigError):
MetricsExecutor([{'type': ''}], None)
def test_accuracy_arguments(self):
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None)
assert len(dispatcher.metrics) == 1
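        # Each metrics entry is a 6-tuple; the metric instance is its third element.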
_, _, accuracy_metric, _, _, _ = dispatcher.metrics[0]
assert isinstance(accuracy_metric, ClassificationAccuracy)
assert accuracy_metric.top_k == 1
def test_accuracy_with_several_annotation_source_raises_config_error_exception(self):
with pytest.raises(ConfigError):
MetricsExecutor([{'type': 'accuracy', 'top_k': 1, 'annotation_source': 'annotation1, annotation2'}], None)
def test_accuracy_with_several_prediction_source_raises_value_error_exception(self):
with pytest.raises(ConfigError):
MetricsExecutor([{'type': 'accuracy', 'top_k': 1, 'prediction_source': 'prediction1, prediction2'}], None)
def test_accuracy_on_container_with_wrong_annotation_source_name_raise_config_error_exception(self):
annotations = [ContainerAnnotation({'annotation': ClassificationAnnotation('identifier', 3)})]
predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1, 'annotation_source': 'a'}], None)
with pytest.raises(ConfigError):
dispatcher.update_metrics_on_batch(annotations, predictions)
def test_accuracy_with_wrong_annotation_type_raise_config_error_exception(self):
annotations = [DetectionAnnotation('identifier', 3)]
predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None)
with pytest.raises(ConfigError):
dispatcher.update_metrics_on_batch(annotations, predictions)
def test_accuracy_with_unsupported_annotations_in_container_raise_config_error_exception(self):
annotations = [ContainerAnnotation({'annotation': DetectionAnnotation('identifier', 3)})]
predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None)
with pytest.raises(ConfigError):
dispatcher.update_metrics_on_batch(annotations, predictions)
def test_accuracy_with_unsupported_annotation_type_as_annotation_source_for_container_raises_config_error(self):
annotations = [ContainerAnnotation({'annotation': DetectionAnnotation('identifier', 3)})]
predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1, 'annotation_source': 'annotation'}], None)
with pytest.raises(ConfigError):
dispatcher.update_metrics_on_batch(annotations, predictions)
def test_accuracy_on_annotation_container_with_several_suitable_representations_config_value_error_exception(self):
annotations = [ContainerAnnotation({
'annotation1': ClassificationAnnotation('identifier', 3),
'annotation2': ClassificationAnnotation('identifier', 3)
})]
predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None)
with pytest.raises(ConfigError):
dispatcher.update_metrics_on_batch(annotations, predictions)
def test_accuracy_with_wrong_prediction_type_raise_config_error_exception(self):
annotations = [ClassificationAnnotation('identifier', 3)]
predictions = [DetectionPrediction('identifier', [1.0, 1.0, 1.0, 4.0])]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None)
with pytest.raises(ConfigError):
dispatcher.update_metrics_on_batch(annotations, predictions)
def test_accuracy_with_unsupported_prediction_in_container_raise_config_error_exception(self):
annotations = [ClassificationAnnotation('identifier', 3)]
predictions = [ContainerPrediction({'prediction': DetectionPrediction('identifier', [1.0, 1.0, 1.0, 4.0])})]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None)
with pytest.raises(ConfigError):
dispatcher.update_metrics_on_batch(annotations, predictions)
def test_accuracy_with_unsupported_prediction_type_as_prediction_source_for_container_raises_config_error(self):
annotations = [ClassificationAnnotation('identifier', 3)]
predictions = [ContainerPrediction({'prediction': DetectionPrediction('identifier', [1.0, 1.0, 1.0, 4.0])})]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1, 'prediction_source': 'prediction'}], None)
with pytest.raises(ConfigError):
dispatcher.update_metrics_on_batch(annotations, predictions)
def test_accuracy_on_prediction_container_with_several_suitable_representations_raise_config_error_exception(self):
annotations = [ClassificationAnnotation('identifier', 3)]
predictions = [ContainerPrediction({
'prediction1': ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0]),
'prediction2': ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])
})]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None)
with pytest.raises(ConfigError):
dispatcher.update_metrics_on_batch(annotations, predictions)
def test_complete_accuracy(self):
annotations = [ClassificationAnnotation('identifier', 3)]
predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None)
dispatcher.update_metrics_on_batch(annotations, predictions)
for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions):
assert evaluation_result.name == 'accuracy'
assert evaluation_result.evaluated_value == pytest.approx(1.0)
assert evaluation_result.reference_value is None
assert evaluation_result.threshold is None
def test_complete_accuracy_with_container_default_sources(self):
annotations = [ContainerAnnotation({'a': ClassificationAnnotation('identifier', 3)})]
predictions = [ContainerPrediction({'p': ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])})]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None)
dispatcher.update_metrics_on_batch(annotations, predictions)
for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions):
assert evaluation_result.name == 'accuracy'
assert evaluation_result.evaluated_value == pytest.approx(1.0)
assert evaluation_result.reference_value is None
assert evaluation_result.threshold is None
def test_complete_accuracy_with_container_sources(self):
annotations = [ContainerAnnotation({'a': ClassificationAnnotation('identifier', 3)})]
predictions = [ContainerPrediction({'p': ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])})]
config = [{'type': 'accuracy', 'top_k': 1, 'annotation_source': 'a', 'prediction_source': 'p'}]
dispatcher = MetricsExecutor(config, None)
dispatcher.update_metrics_on_batch(annotations, predictions)
for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions):
assert evaluation_result.name == 'accuracy'
assert evaluation_result.evaluated_value == pytest.approx(1.0)
assert evaluation_result.reference_value is None
assert evaluation_result.threshold is None
def test_zero_accuracy(self):
annotation = [ClassificationAnnotation('identifier', 2)]
prediction = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None)
for _, evaluation_result in dispatcher.iterate_metrics([annotation], [prediction]):
assert evaluation_result.name == 'accuracy'
assert evaluation_result.evaluated_value == 0.0
assert evaluation_result.reference_value is None
assert evaluation_result.threshold is None
def test_complete_accuracy_top_3(self):
annotations = [ClassificationAnnotation('identifier', 3)]
predictions = [ClassificationPrediction('identifier', [1.0, 3.0, 4.0, 2.0])]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 3}], None)
dispatcher.update_metrics_on_batch(annotations, predictions)
for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions):
assert evaluation_result.name == 'accuracy'
assert evaluation_result.evaluated_value == pytest.approx(1.0)
assert evaluation_result.reference_value is None
assert evaluation_result.threshold is None
def test_zero_accuracy_top_3(self):
annotations = [ClassificationAnnotation('identifier', 3)]
predictions = [ClassificationPrediction('identifier', [5.0, 3.0, 4.0, 1.0])]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 3}], None)
for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions):
assert evaluation_result.name == 'accuracy'
assert evaluation_result.evaluated_value == 0.0
assert evaluation_result.reference_value is None
assert evaluation_result.threshold is None
def test_reference_is_10_by_config(self):
annotations = [ClassificationAnnotation('identifier', 3)]
predictions = [ClassificationPrediction('identifier', [5.0, 3.0, 4.0, 1.0])]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 3, 'reference': 10}], None)
for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions):
assert evaluation_result.name == 'accuracy'
assert evaluation_result.evaluated_value == 0.0
assert evaluation_result.reference_value == 10
assert evaluation_result.threshold is None
def test_threshold_is_10_by_config(self):
annotations = [ClassificationAnnotation('identifier', 3)]
predictions = [ClassificationPrediction('identifier', [5.0, 3.0, 4.0, 1.0])]
dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 3, 'threshold': 10}], None)
for _, evaluation_result in dispatcher.iterate_metrics([annotations], [predictions]):
assert evaluation_result.name == 'accuracy'
assert evaluation_result.evaluated_value == 0.0
assert evaluation_result.reference_value is None
assert evaluation_result.threshold == 10
def test_classification_per_class_accuracy_fully_zero_prediction(self):
annotation = ClassificationAnnotation('identifier', 0)
prediction = ClassificationPrediction('identifier', [1.0, 2.0])
dataset = DummyDataset(label_map={0: '0', 1: '1'})
dispatcher = MetricsExecutor([{'type': 'accuracy_per_class', 'top_k': 1}], dataset)
dispatcher.update_metrics_on_batch([annotation], [prediction])
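        # The single class-0 sample is predicted as class 1, so both per-class accuracies stay at zero.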
for _, evaluation_result in dispatcher.iterate_metrics([annotation], [prediction]):
assert evaluation_result.name == 'accuracy_per_class'
assert len(evaluation_result.evaluated_value) == 2
assert evaluation_result.evaluated_value[0] == pytest.approx(0.0)
assert evaluation_result.evaluated_value[1] == pytest.approx(0.0)
assert evaluation_result.reference_value is None
assert evaluation_result.threshold is None
def test_classification_per_class_accuracy_partially_zero_prediction(self):
annotation = [ClassificationAnnotation('identifier', 1)]
prediction = [ClassificationPrediction('identifier', [1.0, 2.0])]
dataset = DummyDataset(label_map={0: '0', 1: '1'})
dispatcher = MetricsExecutor([{'type': 'accuracy_per_class', 'top_k': 1}], dataset)
dispatcher.update_metrics_on_batch(annotation, prediction)
for _, evaluation_result in dispatcher.iterate_metrics(annotation, prediction):
assert evaluation_result.name == 'accuracy_per_class'
assert len(evaluation_result.evaluated_value) == 2
assert evaluation_result.evaluated_value[0] == pytest.approx(0.0)
assert evaluation_result.evaluated_value[1] == pytest.approx(1.0)
assert evaluation_result.reference_value is None
assert evaluation_result.threshold is None
def test_classification_per_class_accuracy_complete_prediction(self):
annotation = [ClassificationAnnotation('identifier_1', 1), ClassificationAnnotation('identifier_2', 0)]
prediction = [
ClassificationPrediction('identifier_1', [1.0, 2.0]),
ClassificationPrediction('identifier_2', [2.0, 1.0])
]
dataset = DummyDataset(label_map={0: '0', 1: '1'})
dispatcher = MetricsExecutor([{'type': 'accuracy_per_class', 'top_k': 1}], dataset)
dispatcher.update_metrics_on_batch(annotation, prediction)
for _, evaluation_result in dispatcher.iterate_metrics(annotation, prediction):
assert evaluation_result.name == 'accuracy_per_class'
assert len(evaluation_result.evaluated_value) == 2
assert evaluation_result.evaluated_value[0] == pytest.approx(1.0)
assert evaluation_result.evaluated_value[1] == pytest.approx(1.0)
assert evaluation_result.reference_value is None
assert evaluation_result.threshold is None
def test_classification_per_class_accuracy_partially_prediction(self):
annotation = [
ClassificationAnnotation('identifier_1', 1),
ClassificationAnnotation('identifier_2', 0),
ClassificationAnnotation('identifier_3', 0)
]
prediction = [
ClassificationPrediction('identifier_1', [1.0, 2.0]),
ClassificationPrediction('identifier_2', [2.0, 1.0]),
ClassificationPrediction('identifier_3', [1.0, 5.0])
]
dataset = DummyDataset(label_map={0: '0', 1: '1'})
dispatcher = MetricsExecutor([{'type': 'accuracy_per_class', 'top_k': 1}], dataset)
dispatcher.update_metrics_on_batch(annotation, prediction)
for _, evaluation_result in dispatcher.iterate_metrics(annotation, prediction):
assert evaluation_result.name == 'accuracy_per_class'
assert len(evaluation_result.evaluated_value) == 2
assert evaluation_result.evaluated_value[0] == pytest.approx(0.5)
assert evaluation_result.evaluated_value[1] == pytest.approx(1.0)
assert evaluation_result.reference_value is None
assert evaluation_result.threshold is None
def test_classification_per_class_accuracy_prediction_top3_zero(self):
annotation = [ClassificationAnnotation('identifier_1', 0), ClassificationAnnotation('identifier_2', 1)]
prediction = [
ClassificationPrediction('identifier_1', [1.0, 2.0, 3.0, 4.0]),
ClassificationPrediction('identifier_2', [2.0, 1.0, 3.0, 4.0])
]
dataset = DummyDataset(label_map={0: '0', 1: '1', 2: '2', 3: '3'})
dispatcher = MetricsExecutor([{'type': 'accuracy_per_class', 'top_k': 3}], dataset)
dispatcher.update_metrics_on_batch(annotation, prediction)
for _, evaluation_result in dispatcher.iterate_metrics(annotation, prediction):
assert evaluation_result.name == 'accuracy_per_class'
assert len(evaluation_result.evaluated_value) == 4
assert evaluation_result.evaluated_value[0] == pytest.approx(0.0)
assert evaluation_result.evaluated_value[1] == pytest.approx(0.0)
assert evaluation_result.evaluated_value[2] == pytest.approx(0.0)
assert evaluation_result.evaluated_value[3] == pytest.approx(0.0)
assert evaluation_result.reference_value is None
assert evaluation_result.threshold is None
def test_classification_per_class_accuracy_prediction_top3(self):
annotation = [ClassificationAnnotation('identifier_1', 1), ClassificationAnnotation('identifier_2', 1)]
prediction = [
ClassificationPrediction('identifier_1', [1.0, 2.0, 3.0, 4.0]),
ClassificationPrediction('identifier_2', [2.0, 1.0, 3.0, 4.0])
]
dataset = DummyDataset(label_map={0: '0', 1: '1', 2: '2', 3: '3'})
dispatcher = MetricsExecutor([{'type': 'accuracy_per_class', 'top_k': 3}], dataset)
dispatcher.update_metrics_on_batch(annotation, prediction)
for _, evaluation_result in dispatcher.iterate_metrics(annotation, prediction):
assert evaluation_result.name == 'accuracy_per_class'
assert len(evaluation_result.evaluated_value) == 4
assert evaluation_result.evaluated_value[0] == pytest.approx(0.0)
assert evaluation_result.evaluated_value[1] == pytest.approx(0.5)
assert evaluation_result.evaluated_value[2] == pytest.approx(0.0)
assert evaluation_result.evaluated_value[3] == pytest.approx(0.0)
assert evaluation_result.reference_value is None
assert evaluation_result.threshold is None
class TestMetricExtraArgs:
def test_all_metrics_raise_config_error_on_extra_args(self):
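        # Every registered metric provider must reject unknown configuration keys.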
for provider in Metric.providers:
adapter_config = {'type': provider, 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide(provider, adapter_config, None)
def test_detection_recall_raise_config_error_on_extra_args(self):
adapter_config = {'type': 'recall', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('recall', adapter_config, None)
def test_detection_miss_rate_raise_config_error_on_extra_args(self):
adapter_config = {'type': 'miss_rate', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('miss_rate', adapter_config, None)
def test_accuracy_raise_config_error_on_extra_args(self):
adapter_config = {'type': 'accuracy', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('accuracy', adapter_config, None)
def test_per_class_accuracy_raise_config_error_on_extra_args(self):
adapter_config = {'type': 'accuracy_per_class', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('accuracy_per_class', adapter_config, None)
def test_character_recognition_accuracy_raise_config_error_on_extra_args(self):
adapter_config = {'type': 'character_recognition_accuracy', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('character_recognition_accuracy', adapter_config, None)
def test_multi_accuracy_raise_config_error_on_extra_args(self):
metric_config = {'type': 'multi_accuracy', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('multi_accuracy', metric_config, None)
def test_multi_precision_raise_config_error_on_extra_args(self):
metric_config = {'type': 'multi_precision', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('multi_precision', metric_config, None)
def test_f1_score_raise_config_error_on_extra_args(self):
metric_config = {'type': 'f1-score', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('f1-score', metric_config, None)
def test_mae_raise_config_error_on_extra_args(self):
metric_config = {'type': 'mae', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('mae', metric_config, None)
def test_mse_raise_config_error_on_extra_args(self):
metric_config = {'type': 'mse', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('mse', metric_config, None)
def test_rmse_raise_config_error_on_extra_args(self):
metric_config = {'type': 'rmse', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('rmse', metric_config, None)
def test_mae_on_interval_raise_config_error_on_extra_args(self):
metric_config = {'type': 'mae_on_interval', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('mae_on_interval', metric_config, None)
def test_mse_on_interval_raise_config_error_on_extra_args(self):
metric_config = {'type': 'mse_on_interval', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('mse_on_interval', metric_config, None)
def test_rmse_on_interval_raise_config_error_on_extra_args(self):
metric_config = {'type': 'rmse_on_interval', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('rmse_on_interval', metric_config, None)
def test_per_point_normed_error_raise_config_error_on_extra_args(self):
metric_config = {'type': 'per_point_normed_error', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('per_point_normed_error', metric_config, None)
def test_average_point_error_raise_config_error_on_extra_args(self):
metric_config = {'type': 'normed_error', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('normed_error', metric_config, None)
def test_reid_cmc_raise_config_error_on_extra_args(self):
metric_config = {'type': 'cmc', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('cmc', metric_config, None)
def test_reid_map_raise_config_error_on_extra_args(self):
adapter_config = {'type': 'reid_map', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('reid_map', adapter_config, None)
def test_pairwise_accuracy_raise_config_error_on_extra_args(self):
metric_config = {'type': 'pairwise_accuracy', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('pairwise_accuracy', metric_config, None)
def test_segmentation_accuracy_raise_config_error_on_extra_args(self):
metric_config = {'type': 'segmentation_accuracy', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('segmentation_accuracy', metric_config, None)
def test_mean_iou_raise_config_error_on_extra_args(self):
metric_config = {'type': 'mean_iou', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('mean_iou', metric_config, None)
def test_mean_accuracy_raise_config_error_on_extra_args(self):
metric_config = {'type': 'mean_accuracy', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('mean_accuracy', metric_config, None)
def test_frequency_weighted_accuracy_raise_config_error_on_extra_args(self):
metric_config = {'type': 'frequency_weighted_accuracy', 'something_extra': 'extra'}
with pytest.raises(ConfigError):
Metric.provide('frequency_weighted_accuracy', metric_config, None)
| 52.457557 | 119 | 0.700399 | 2,804 | 25,337 | 6.025678 | 0.067404 | 0.081439 | 0.085938 | 0.062322 | 0.868549 | 0.84742 | 0.812737 | 0.798118 | 0.77687 | 0.750296 | 0 | 0.020326 | 0.190275 | 25,337 | 482 | 120 | 52.56639 | 0.803227 | 0.02222 | 0 | 0.532086 | 0 | 0 | 0.11435 | 0.009731 | 0 | 0 | 0 | 0 | 0.200535 | 1 | 0.147059 | false | 0 | 0.016043 | 0 | 0.168449 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc80b91e694b9fc02b9d3be0e1d4388ffb0d4608 | 91 | py | Python | experiments/src/training/__init__.py | chrislybaer/huggingmolecules | 210239ac46b467e900a47e8f4520054636744ca6 | [
"Apache-2.0"
] | 60 | 2021-05-07T16:07:26.000Z | 2022-03-26T19:23:54.000Z | experiments/src/training/__init__.py | gabegomes/huggingmolecules | adc581c97fbc21d9967dd9334afa94b22fb77651 | [
"Apache-2.0"
] | 11 | 2021-05-07T16:01:35.000Z | 2022-03-09T13:06:05.000Z | experiments/src/training/__init__.py | gabegomes/huggingmolecules | adc581c97fbc21d9967dd9334afa94b22fb77651 | [
"Apache-2.0"
] | 12 | 2021-05-20T08:02:25.000Z | 2022-03-10T14:11:36.000Z | from .training_train_model import train_model
from .training_utils import get_data_loaders
| 30.333333 | 45 | 0.89011 | 14 | 91 | 5.357143 | 0.642857 | 0.32 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087912 | 91 | 2 | 46 | 45.5 | 0.903614 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5dbc5b6800ffb09218d32abe853be06157bf5176 | 30 | py | Python | src/dataset/__init__.py | getnexar/squeezeDet | 7655afdcd938e0c81991b5bc1aa8ec625d72f26a | [
"BSD-2-Clause"
] | 1 | 2018-01-30T05:17:35.000Z | 2018-01-30T05:17:35.000Z | src/dataset/__init__.py | getnexar/squeezeDet | 7655afdcd938e0c81991b5bc1aa8ec625d72f26a | [
"BSD-2-Clause"
] | 1 | 2017-05-03T14:30:34.000Z | 2017-05-03T14:30:34.000Z | src/dataset/__init__.py | getnexar/squeezeDet | 7655afdcd938e0c81991b5bc1aa8ec625d72f26a | [
"BSD-2-Clause"
] | 1 | 2017-10-01T22:46:02.000Z | 2017-10-01T22:46:02.000Z | from nexarear import nexarear
| 15 | 29 | 0.866667 | 4 | 30 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5dfa8b4be042d693826aeffeec469139eab08a03 | 69 | py | Python | aksolve/client/aksolve/__init__.py | Bears-R-Us/arkouda-contrib | 6965d6f8a274dd633ebf718b56b93c40562627fc | [
"MIT"
] | 2 | 2022-02-09T20:20:47.000Z | 2022-02-10T01:00:14.000Z | aksolve/client/aksolve/__init__.py | Bears-R-Us/arkouda-contrib | 6965d6f8a274dd633ebf718b56b93c40562627fc | [
"MIT"
] | 8 | 2022-02-24T12:52:34.000Z | 2022-03-30T16:51:07.000Z | aksolve/client/aksolve/__init__.py | Bears-R-Us/arkouda-contrib | 6965d6f8a274dd633ebf718b56b93c40562627fc | [
"MIT"
] | 2 | 2022-02-14T23:32:19.000Z | 2022-03-25T14:59:39.000Z | from aksolve.util import *
from aksolve.conjugate_gradients import *
| 23 | 41 | 0.826087 | 9 | 69 | 6.222222 | 0.666667 | 0.392857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115942 | 69 | 2 | 42 | 34.5 | 0.918033 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f8da88defbe5885cffcba3e31b666fd4b91d9131 | 11,806 | py | Python | tests/test_filtering.py | minitriga/netcfgbu | 5c5dbb1936740833f5f535d99e2d6c0a17d38fe6 | [
"Apache-2.0"
] | 83 | 2020-06-02T13:25:33.000Z | 2022-03-07T20:50:36.000Z | tests/test_filtering.py | minitriga/netcfgbu | 5c5dbb1936740833f5f535d99e2d6c0a17d38fe6 | [
"Apache-2.0"
] | 55 | 2020-06-03T17:51:31.000Z | 2021-08-14T14:13:56.000Z | tests/test_filtering.py | minitriga/netcfgbu | 5c5dbb1936740833f5f535d99e2d6c0a17d38fe6 | [
"Apache-2.0"
] | 16 | 2020-06-05T20:32:27.000Z | 2021-11-01T17:06:38.000Z | import pytest # noqa
from netcfgbu.filtering import create_filter
import csv
def test_filtering_pass_include():
"""
Test the use-case where the constraint is a valid set of "limits"
"""
key_values = [("os_name", "eos"), ("host", ".*nyc1")]
constraints = [f"{key}={val}" for key, val in key_values]
field_names = [key for key, _ in key_values]
filter_fn = create_filter(
constraints=constraints, field_names=field_names, include=True
)
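    # With include=True a record must match every constraint to pass the filter.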
assert filter_fn(dict(os_name="eos", host="switch1.nyc1")) is True
assert filter_fn(dict(os_name="ios", host="switch1.nyc1")) is False
assert filter_fn(dict(os_name="eos", host="switch1.dc1")) is False
def test_filtering_pass_exclude():
"""
Test use-case where the constraint is a valid set of "excludes"
"""
key_values = [("os_name", "eos"), ("host", ".*nyc1")]
constraints = [f"{key}={val}" for key, val in key_values]
field_names = [key for key, _ in key_values]
filter_fn = create_filter(
constraints=constraints, field_names=field_names, include=False
)
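    # With include=False a record matching any constraint is filtered out.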
assert filter_fn(dict(os_name="ios", host="switch1.nyc1")) is False
assert filter_fn(dict(os_name="eos", host="switch1.dc1")) is False
assert filter_fn(dict(os_name="ios", host="switch1.dc1")) is True
def test_filtering_fail_constraint_field():
"""
Test the use-case where the constraint form is invalid due to a
field name being incorrect.
"""
key_values = [("os_name2", "eos"), ("host", ".*nyc1")]
constraints = [f"{key}={val}" for key, val in key_values]
field_names = ["os_name", "host"]
with pytest.raises(ValueError) as excinfo:
create_filter(constraints=constraints, field_names=field_names, include=False)
errmsg = excinfo.value.args[0]
assert "Invalid filter expression: os_name2=eos" in errmsg
def test_filtering_fail_constraint_regex():
"""
Test the case where the constraint value is an invalid regular-expression.
"""
with pytest.raises(ValueError) as excinfo:
create_filter(
constraints=["os_name=***"], field_names=["os_name"], include=False
)
errmsg = excinfo.value.args[0]
assert "Invalid filter regular-expression" in errmsg
def test_filtering_pass_filepath(tmpdir):
"""
    Test use-case where a filepath constraint is provided and the file exists.
"""
filename = "failures.csv"
tmpfile = tmpdir.join(filename)
tmpfile.ensure()
abs_filepath = str(tmpfile)
create_filter(constraints=[f"@{abs_filepath}"], field_names=["host"])
def test_filtering_fail_filepath(tmpdir):
"""
    Test use-case where a filepath constraint is provided and the file does not exist.
"""
filename = "failures.csv"
tmpfile = tmpdir.join(filename)
abs_filepath = str(tmpfile)
with pytest.raises(FileNotFoundError) as excinfo:
create_filter(constraints=[f"@{abs_filepath}"], field_names=["host"])
errmsg = excinfo.value.args[0]
assert errmsg == abs_filepath
def test_filtering_pass_csv_filecontents(tmpdir):
"""
Test use-case where the constraint is a valid CSV file.
"""
filename = "failures.csv"
tmpfile = tmpdir.join(filename)
inventory_recs = [
dict(host="swtich1.nyc1", os_name="eos"),
dict(host="switch2.dc1", os_name="ios"),
]
not_inventory_recs = [
dict(host="swtich3.nyc1", os_name="eos"),
dict(host="switch4.dc1", os_name="ios"),
]
with open(tmpfile, "w+") as ofile:
csv_wr = csv.DictWriter(ofile, fieldnames=["host", "os_name"])
csv_wr.writeheader()
csv_wr.writerows(inventory_recs)
abs_filepath = str(tmpfile)
filter_fn = create_filter(constraints=[f"@{abs_filepath}"], field_names=["host"])
for rec in inventory_recs:
assert filter_fn(rec) is True
for rec in not_inventory_recs:
assert filter_fn(rec) is False
filter_fn = create_filter(
constraints=[f"@{abs_filepath}"], field_names=["host"], include=False
)
for rec in inventory_recs:
assert filter_fn(rec) is False
for rec in not_inventory_recs:
assert filter_fn(rec) is True
def test_filtering_fail_csv_missinghostfield(tmpdir):
"""
    Test use-case where the constraint is an invalid CSV file, meaning that
    it does not contain a `host` field.
"""
filename = "failures.csv"
tmpfile = tmpdir.join(filename)
# create an inventory that does not use 'host' as required, but uses
# 'hostname' instead.
inventory_recs = [
dict(hostname="swtich1.nyc1", os_name="eos"),
dict(hostname="switch2.dc1", os_name="ios"),
]
with open(tmpfile, "w+") as ofile:
csv_wr = csv.DictWriter(ofile, fieldnames=["hostname", "os_name"])
csv_wr.writeheader()
csv_wr.writerows(inventory_recs)
abs_filepath = str(tmpfile)
with pytest.raises(ValueError) as excinfo:
create_filter(constraints=[f"@{abs_filepath}"], field_names=["hostname"])
errmsg = excinfo.value.args[0]
assert "does not contain host content as expected" in errmsg
def test_filtering_fail_csv_filecontentsnotcsv(tmpdir):
"""
    Test use-case where the constraint expects a CSV file, but the file
    contents are not CSV; i.e., attempting to read the file as CSV fails to
    load any content.
"""
# rather than provide a CSV file, provide this python file (not a CSV file).
# but call it a CSV file.
filepath = tmpdir.join("dummy.csv")
filepath.mklinkto(__file__)
with pytest.raises(ValueError) as excinfo:
create_filter(constraints=[f"@{filepath}"], field_names=["host"])
errmsg = excinfo.value.args[0]
assert "does not contain host content as expected" in errmsg
def test_filtering_fail_csv_notcsvfile():
"""
    Test use-case where the provided file is not a CSV, as indicated by the
    filename suffix not being '.csv'.
"""
with pytest.raises(ValueError) as excinfo:
create_filter(constraints=[f"@{__file__}"], field_names=["host, os_name"])
errmsg = excinfo.value.args[0]
assert "not a CSV file." in errmsg
def test_filtering_ipaddr_v4_include():
"""
    Test the ipaddr (IPv4) include network address/prefix use-case
"""
filter_fn = create_filter(
constraints=["ipaddr=10.10.0.2"], field_names=["ipaddr"], include=True
)
assert filter_fn(dict(ipaddr="10.10.0.2", host="switch1.nyc1")) is True
assert filter_fn(dict(ipaddr="10.10.0.3", host="switch1.nyc1")) is False
assert filter_fn(dict(ipaddr="10.10.0.4", host="switch1.dc1")) is False
filter_fn = create_filter(
constraints=["ipaddr=10.10.0.2/31"], field_names=["ipaddr"], include=True
)
assert filter_fn(dict(ipaddr="10.10.0.2", host="switch1.nyc1")) is True
assert filter_fn(dict(ipaddr="10.10.0.3", host="switch1.nyc1")) is True
assert filter_fn(dict(ipaddr="10.10.0.4", host="switch1.dc1")) is False
filter_fn = create_filter(
constraints=["ipaddr=10.10.0.0/16"], field_names=["ipaddr"], include=True
)
assert filter_fn(dict(ipaddr="10.10.0.2", host="switch1.nyc1")) is True
assert filter_fn(dict(ipaddr="10.10.0.3", host="switch1.nyc1")) is True
assert filter_fn(dict(ipaddr="10.10.0.4", host="switch1.dc1")) is True
def test_filtering_ipaddr_v4_exclude():
"""
    Test the ipaddr (IPv4) exclude network address/prefix use-case
"""
filter_fn = create_filter(
constraints=["ipaddr=10.10.0.2"], field_names=["ipaddr"], include=False
)
assert filter_fn(dict(ipaddr="10.10.0.2", host="switch1.nyc1")) is False
assert filter_fn(dict(ipaddr="10.10.0.3", host="switch1.nyc1")) is True
assert filter_fn(dict(ipaddr="10.10.0.4", host="switch1.dc1")) is True
filter_fn = create_filter(
constraints=["ipaddr=10.10.0.2/31"], field_names=["ipaddr"], include=False
)
assert filter_fn(dict(ipaddr="10.10.0.2", host="switch1.nyc1")) is False
assert filter_fn(dict(ipaddr="10.10.0.3", host="switch1.nyc1")) is False
assert filter_fn(dict(ipaddr="10.10.0.4", host="switch1.dc1")) is True
filter_fn = create_filter(
constraints=["ipaddr=10.10.0.0/16"], field_names=["ipaddr"], include=False
)
assert filter_fn(dict(ipaddr="10.10.0.2", host="switch1.nyc1")) is False
assert filter_fn(dict(ipaddr="10.10.0.3", host="switch1.nyc1")) is False
assert filter_fn(dict(ipaddr="10.10.0.4", host="switch1.dc1")) is False
def test_filtering_ipaddr_v6_include():
"""
    Test the ipaddr (IPv6) include network address/prefix use-case
"""
filter_fn = create_filter(
constraints=["ipaddr=3001:10:10::2"], field_names=["ipaddr"], include=True
)
assert filter_fn(dict(ipaddr="3001:10:10::2", host="switch1.nyc1")) is True
assert filter_fn(dict(ipaddr="3001:10:10::3", host="switch1.nyc1")) is False
assert filter_fn(dict(ipaddr="3001:10:10::4", host="switch1.dc1")) is False
filter_fn = create_filter(
constraints=["ipaddr=3001:10:10::2/127"], field_names=["ipaddr"], include=True
)
assert filter_fn(dict(ipaddr="3001:10:10::2", host="switch1.nyc1")) is True
assert filter_fn(dict(ipaddr="3001:10:10::3", host="switch1.nyc1")) is True
assert filter_fn(dict(ipaddr="3001:10:10::4", host="switch1.dc1")) is False
filter_fn = create_filter(
constraints=["ipaddr=3001:10:10::0/64"], field_names=["ipaddr"], include=True
)
assert filter_fn(dict(ipaddr="3001:10:10::2", host="switch1.nyc1")) is True
assert filter_fn(dict(ipaddr="3001:10:10::3", host="switch1.nyc1")) is True
assert filter_fn(dict(ipaddr="3001:10:10::4", host="switch1.dc1")) is True
def test_filtering_ipaddr_v6_exclude():
"""
    Test the ipaddr (IPv6) exclude network address/prefix use-case
"""
filter_fn = create_filter(
constraints=["ipaddr=3001:10:10::2"], field_names=["ipaddr"], include=False
)
assert filter_fn(dict(ipaddr="3001:10:10::2", host="switch1.nyc1")) is False
assert filter_fn(dict(ipaddr="3001:10:10::3", host="switch1.nyc1")) is True
assert filter_fn(dict(ipaddr="3001:10:10::4", host="switch1.dc1")) is True
filter_fn = create_filter(
constraints=["ipaddr=3001:10:10::2/127"], field_names=["ipaddr"], include=False
)
assert filter_fn(dict(ipaddr="3001:10:10::2", host="switch1.nyc1")) is False
assert filter_fn(dict(ipaddr="3001:10:10::3", host="switch1.nyc1")) is False
assert filter_fn(dict(ipaddr="3001:10:10::4", host="switch1.dc1")) is True
filter_fn = create_filter(
constraints=["ipaddr=3001:10:10::0/64"], field_names=["ipaddr"], include=False
)
assert filter_fn(dict(ipaddr="3001:10:10::2", host="switch1.nyc1")) is False
assert filter_fn(dict(ipaddr="3001:10:10::3", host="switch1.nyc1")) is False
assert filter_fn(dict(ipaddr="3001:10:10::4", host="switch1.dc1")) is False
def test_filtering_ipaddr_regex_fallback():
"""
Test the use-case of ipaddr filtering when a regex is used
"""
filter_fn = create_filter(
constraints=["ipaddr=3001:10:(10|20)::2"], field_names=["ipaddr"], include=True
)
assert filter_fn(dict(ipaddr="3001:10:10::1", host="switch1.nyc1")) is False
assert filter_fn(dict(ipaddr="3001:10:20::2", host="switch1.nyc1")) is True
assert filter_fn(dict(ipaddr="3001:10:30::3", host="switch1.dc1")) is False
filter_fn = create_filter(
constraints=[r"ipaddr=10.10.10.\d{2}"], field_names=["ipaddr"], include=False
)
assert filter_fn(dict(ipaddr="10.10.10.1", host="switch1.nyc1")) is True
assert filter_fn(dict(ipaddr="10.10.10.10", host="switch1.nyc1")) is False
assert filter_fn(dict(ipaddr="10.10.10.12", host="switch1.nyc1")) is False
| 35.347305 | 87 | 0.669575 | 1,729 | 11,806 | 4.426836 | 0.094274 | 0.073164 | 0.095114 | 0.112882 | 0.837863 | 0.817089 | 0.796969 | 0.758688 | 0.745101 | 0.708518 | 0 | 0.05928 | 0.181264 | 11,806 | 333 | 88 | 35.453453 | 0.732568 | 0.117483 | 0 | 0.543147 | 0 | 0 | 0.198741 | 0.013767 | 0 | 0 | 0 | 0 | 0.294416 | 1 | 0.076142 | false | 0.020305 | 0.015228 | 0 | 0.091371 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d01a4b069935915db4dc001108366c1448c653c | 24,078 | py | Python | mlni/adml_classification.py | anbai106/pyhydra | 1b1060c06b15c02ca417fee13dc7def77b95d4da | [
"MIT"
] | 1 | 2022-03-21T13:18:13.000Z | 2022-03-21T13:18:13.000Z | mlni/adml_classification.py | anbai106/pyhydra | 1b1060c06b15c02ca417fee13dc7def77b95d4da | [
"MIT"
] | 4 | 2021-04-20T13:37:32.000Z | 2021-05-07T01:54:22.000Z | mlni/adml_classification.py | anbai106/pyhydra | 1b1060c06b15c02ca417fee13dc7def77b95d4da | [
"MIT"
] | 2 | 2020-10-24T16:45:07.000Z | 2021-01-11T03:13:17.000Z | from mlni.classification import RB_RepeatedHoldOut_DualSVM_Classification, RB_KFold_DualSVM_Classification, \
VB_RepeatedHoldOut_DualSVM_Classification, VB_KFold_DualSVM_Classification, RB_RepeatedHoldOut_DualSVM_Classification_Nested_Feature_Selection, \
VB_RepeatedHoldOut_DualSVM_Classification_Nested_Feature_Selection
from mlni.base import RB_Input, VB_Input
import os, pickle
import numpy as np
from mlni.utils import make_cv_partition, prepare_opnmf_tsv_voting, voting_system
import pandas as pd
__author__ = "Junhao Wen"
__copyright__ = "Copyright 2019-2020 The CBICA & SBIA Lab"
__credits__ = ["Junhao Wen"]
__license__ = "See LICENSE file"
__version__ = "0.1.0"
__maintainer__ = "Junhao Wen"
__email__ = "junhao.wen89@gmail.com"
__status__ = "Development"
def classification_roi(feature_tsv, output_dir, cv_repetition, cv_strategy='hold_out', class_weight_balanced=True,
n_threads=8, seed=None, verbose=False):
"""
MLNI core function for classification for ROI-based features
Args:
    feature_tsv: str, path to the tsv containing the extracted features, following the BIDS convention.
        The tsv contains the following headers:
        i) the first column is the participant_id;
        ii) the second column should be the session_id;
        iii) the third column should be the diagnosis;
        the following columns should be the extracted features, e.g., the ROI features.
    output_dir: str, path to store the classification results.
    cv_repetition: int, number of repetitions for cross-validation (CV).
    cv_strategy: str, cross-validation strategy used. Default is hold_out. choices=['k_fold', 'hold_out']
    class_weight_balanced: Bool, default is True. If True, balance the class weights (useful when the two groups are imbalanced).
    n_threads: int, default is 8. The number of threads used to run the model in parallel.
    seed: int or None, default is None. Random seed used for the CV data split.
    verbose: Bool, default is False. If True, the output messages are verbose.
Returns: classification outputs.
"""
print('MLNI for a binary classification with nested CV...')
input_data = RB_Input(feature_tsv, standardization_method="minmax")
## data split
print('Data split was performed based on validation strategy: %s...\n' % cv_strategy)
## check if data split has been done, if yes, the pickle file is there
if os.path.isfile(os.path.join(output_dir, 'data_split_stratified_' + str(cv_repetition) + '-holdout.pkl')):
split_index = pickle.load(open(os.path.join(output_dir, 'data_split_stratified_' + str(cv_repetition) + '-holdout.pkl'), 'rb'))
else:
split_index, _ = make_cv_partition(input_data.get_y(), cv_strategy, output_dir, cv_repetition, seed=seed)
print('Data split has been done!\n')
print('Starts binary classification...')
## Here, we perform a nested CV (outer CV with defined CV method, inner CV with 10-fold grid search) for classification.
if cv_strategy == 'hold_out':
wf_classification = RB_RepeatedHoldOut_DualSVM_Classification(input_data, split_index, os.path.join(output_dir, 'classification'),
n_threads=n_threads, n_iterations=cv_repetition, balanced=class_weight_balanced, verbose=verbose)
wf_classification.run()
elif cv_strategy == 'k_fold':
wf_classification = RB_KFold_DualSVM_Classification(input_data, split_index, os.path.join(output_dir, 'classification'),
cv_repetition, n_threads=n_threads, balanced=class_weight_balanced, verbose=verbose)
wf_classification.run()
else:
raise Exception("CV methods have not been implemented")
print('Finish...')
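# Hedged usage sketch (illustrative only; the tsv path, output folder and CV
# settings below are assumptions, not values from this codebase):
#
#     classification_roi("roi_features.tsv", "./results",
#                        cv_repetition=100, cv_strategy="hold_out",
#                        n_threads=8, seed=0)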
def classification_roi_feature_selection(feature_tsv, output_dir, cv_repetition, cv_strategy='hold_out',
class_weight_balanced=True, feature_selection_method='RFE', top_k=50, n_threads=8, seed=None, verbose=False):
"""
MLNI core function for classification for ROI-based features with nested feature selection
Args:
    feature_tsv: str, path to the tsv containing the extracted features, following the BIDS convention.
        The tsv contains the following headers:
        i) the first column is the participant_id;
        ii) the second column should be the session_id;
        iii) the third column should be the diagnosis;
        the following columns should be the extracted features, e.g., the ROI features.
    output_dir: str, path to store the classification results.
    cv_repetition: int, number of repetitions for cross-validation (CV).
    cv_strategy: str, cross-validation strategy used. Default is hold_out. choices=['k_fold', 'hold_out']
    class_weight_balanced: Bool, default is True. If True, balance the class weights (useful when the two groups are imbalanced).
    feature_selection_method: str, default is RFE. choices=['ANOVA', 'RF', 'PCA', 'RFE'].
    top_k: int, default is 50 (i.e., 50%). Percentage of the original features that the method selects.
    n_threads: int, default is 8. The number of threads used to run the model in parallel.
    seed: int or None, default is None. Random seed used for the CV data split.
    verbose: Bool, default is False. If True, the output messages are verbose.
Returns: classification outputs.
"""
print('MLNI for a binary classification with nested CV and nested feature selection method...')
input_data = RB_Input(feature_tsv, standardization_method="minmax")
## data split
print('Data split was performed based on validation strategy: %s...\n' % cv_strategy)
## check if data split has been done, if yes, the pickle file is there
if os.path.isfile(os.path.join(output_dir, 'data_split_stratified_' + str(cv_repetition) + '-holdout.pkl')):
split_index = pickle.load(open(os.path.join(output_dir, 'data_split_stratified_' + str(cv_repetition) + '-holdout.pkl'), 'rb'))
else:
split_index, _ = make_cv_partition(input_data.get_y(), cv_strategy, output_dir, cv_repetition, seed=seed)
print('Data split has been done!\n')
print('Starts binary classification...')
## Here, we perform a nested CV (outer CV with defined CV method, inner CV with 10-fold grid search) for classification.
if cv_strategy == 'hold_out':
wf_classification = RB_RepeatedHoldOut_DualSVM_Classification_Nested_Feature_Selection(input_data, split_index,
os.path.join(output_dir, 'classification'), n_threads=n_threads, n_iterations=cv_repetition,
balanced=class_weight_balanced, feature_selection_method=feature_selection_method, top_k=top_k, verbose=verbose)
wf_classification.run()
elif cv_strategy == 'k_fold':
raise Exception("Non-nested feature selection is currently only supported for repeated hold-out CV")
else:
raise Exception("CV methods have not been implemented")
print('Finish...')
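# Hedged note on top_k (assumed interpretation, consistent with the docstring
# above): with, say, 100 ROI features and top_k=50, roughly 50 features
# survive each feature-selection step before the SVM is fit.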
def classification_voxel(participant_tsv, output_dir, cv_repetition, cv_strategy='hold_out', class_weight_balanced=True, n_threads=8, seed=None, verbose=False):
"""
MLNI core function for classification with voxel-wise features
Args:
    participant_tsv: str, path to the tsv containing the extracted features, following the BIDS convention.
        The tsv contains the following headers:
        i) the first column is the participant_id;
        ii) the second column should be the session_id;
        iii) the third column should be the diagnosis;
        iv) the fourth column should be the path to each image.
    output_dir: str, path to store the classification results.
    cv_repetition: int, number of repetitions for cross-validation (CV).
    cv_strategy: str, cross-validation strategy used. Default is hold_out. choices=['k_fold', 'hold_out']
    class_weight_balanced: Bool, default is True. If True, balance the class weights (useful when the two groups are imbalanced).
    n_threads: int, default is 8. The number of threads used to run the model in parallel.
    seed: int or None, default is None. Random seed used for the CV data split.
    verbose: Bool, default is False. If True, the output messages are verbose.
Returns: classification outputs.
"""
print('MLNI for a binary classification with nested CV...')
    input_data = VB_Input(participant_tsv)
## data split
print('Data split was performed based on validation strategy: %s...\n' % cv_strategy)
## check if data split has been done, if yes, the pickle file is there
if os.path.isfile(os.path.join(output_dir, 'data_split_stratified_' + str(cv_repetition) + '-holdout.pkl')):
split_index = pickle.load(open(os.path.join(output_dir, 'data_split_stratified_' + str(cv_repetition) + '-holdout.pkl'), 'rb'))
else:
split_index, _ = make_cv_partition(input_data.get_y(), cv_strategy, output_dir, cv_repetition, seed=seed)
print('Data split has been done!\n')
print('Starts binary classification...')
## Here, we perform a nested CV (outer CV with defined CV method, inner CV with 10-fold grid search) for classification.
if cv_strategy == 'hold_out':
wf_classification = VB_RepeatedHoldOut_DualSVM_Classification(input_data, split_index, os.path.join(output_dir, 'classification'),
n_threads=n_threads, n_iterations=cv_repetition, balanced=class_weight_balanced, verbose=verbose)
wf_classification.run()
elif cv_strategy == 'k_fold':
wf_classification = VB_KFold_DualSVM_Classification(input_data, split_index, os.path.join(output_dir, 'classification'),
cv_repetition, n_threads=n_threads, balanced=class_weight_balanced, verbose=verbose)
wf_classification.run()
else:
raise Exception("CV methods have not been implemented")
print('Finish...')
def classification_voxel_feature_selection(feature_tsv, output_dir, cv_repetition, cv_strategy='hold_out', class_weight_balanced=True,
feature_selection_method='RFE', top_k=50, n_threads=8, seed=None, verbose=False):
"""
MLNI core function for classification with voxel-wise features
Args:
    feature_tsv: str, path to the tsv containing the extracted features, following the BIDS convention.
        The tsv contains the following headers:
        i) the first column is the participant_id;
        ii) the second column should be the session_id;
        iii) the third column should be the diagnosis;
        iv) the fourth column should be the path to each image.
    output_dir: str, path to store the classification results.
    cv_repetition: int, number of repetitions for cross-validation (CV).
    cv_strategy: str, cross-validation strategy used. Default is hold_out. choices=['k_fold', 'hold_out']
    class_weight_balanced: Bool, default is True. If True, balance the class weights (useful when the two groups are imbalanced).
    feature_selection_method: str, default is RFE. choices=['ANOVA', 'RF', 'PCA', 'RFE'].
    top_k: int, default is 50 (i.e., 50%). Percentage of the original features that the method selects.
    n_threads: int, default is 8. The number of threads used to run the model in parallel.
    seed: int or None, default is None. Random seed used for the CV data split.
    verbose: Bool, default is False. If True, the output messages are verbose.
Returns: classification outputs.
"""
print('MLNI for a binary classification with nested CV and nested feature selection method...')
    input_data = VB_Input(feature_tsv)
## data split
print('Data split was performed based on validation strategy: %s...\n' % cv_strategy)
## check if data split has been done, if yes, the pickle file is there
if os.path.isfile(os.path.join(output_dir, 'data_split_stratified_' + str(cv_repetition) + '-holdout.pkl')):
split_index = pickle.load(open(os.path.join(output_dir, 'data_split_stratified_' + str(cv_repetition) + '-holdout.pkl'), 'rb'))
else:
split_index, _ = make_cv_partition(input_data.get_y(), cv_strategy, output_dir, cv_repetition, seed=seed)
print('Data split has been done!\n')
print('Starts binary classification...')
## Here, we perform a nested CV (outer CV with defined CV method, inner CV with 10-fold grid search) for classification.
if cv_strategy == 'hold_out':
wf_classification = VB_RepeatedHoldOut_DualSVM_Classification_Nested_Feature_Selection(input_data, split_index, os.path.join(output_dir, 'classification'),
n_threads=n_threads, n_iterations=cv_repetition, balanced=class_weight_balanced, feature_selection_method=feature_selection_method, top_k=top_k,
verbose=verbose)
wf_classification.run()
elif cv_strategy == 'k_fold':
raise Exception("Non-nested feature selection is currently only supported for repeated hold-out CV")
else:
raise Exception("CV methods have not been implemented")
print('Finish...')
def classification_multiscale_opnmf_voting(participant_tsv, opnmf_dir, output_dir, components_list, cv_repetition,
cv_strategy='hold_out', voting_method='hard_voting', class_weight_balanced=True,
n_threads=8, verbose=False):
"""
    Classification based on the multi-scale features extracted from opNMF and different voting systems
    Args:
    participant_tsv: str, path to the tsv containing the participants, following the BIDS convention.
        The tsv contains the following headers:
        i) the first column is the participant_id;
        ii) the second column should be the session_id;
        iii) the third column should be the diagnosis.
    opnmf_dir: str, path to the output_dir of opNMF
    output_dir: str, path to store the classification results.
    components_list: list, a list containing all the Cs (numbers of components)
    cv_repetition: int, number of repetitions for cross-validation (CV).
    cv_strategy: str, cross-validation strategy used. Currently only hold_out is supported. choices=['hold_out']
    class_weight_balanced: Bool, default is True. If True, balance the class weights (useful when the two groups are imbalanced).
    n_threads: int, default is 8. The number of threads used to run the model in parallel.
    verbose: Bool, default is False. If True, the output messages are verbose.
    voting_method: str, method for the voting system. Choice: ['hard_voting', 'soft_voting', 'weighted_soft_voting', 'consensus_voting']
        Note: soft voting works "correctly" only when the classifier is calibrated;
        consensus voting assumes that the classifier performs better than chance, i.e., accuracy > 0.5,
        since the order of clustering labels does not mean anything.
Returns:
"""
if cv_strategy != 'hold_out':
raise Exception("Only support repetaed hold-out CV currently!")
### For voxel approach
print('Multi-scale ensemble classification...')
print('Starts classification for each specific scale...')
## read the participant tsv
df_participant = pd.read_csv(participant_tsv, sep='\t')
    ## create a temp folder in the output_dir to save the intermediate tsv files
output_dir_ensemble = os.path.join(output_dir, 'ensemble')
output_dir_intermediate = os.path.join(output_dir, 'intermediate')
if not os.path.exists(output_dir_intermediate):
os.makedirs(output_dir_intermediate)
    ## make the final results folder
if not os.path.exists(output_dir_ensemble):
os.makedirs(output_dir_ensemble)
    ## first, loop over the different numbers of components C.
for i in components_list:
component_output_dir, opnmf_component_tsv = prepare_opnmf_tsv_voting(output_dir, opnmf_dir, i, df_participant)
print('For components == %d' % i)
if os.path.exists(os.path.join(component_output_dir, 'classification', 'mean_results.tsv')):
pass
else:
classification_roi(opnmf_component_tsv, component_output_dir, cv_repetition=cv_repetition, cv_strategy=cv_strategy,
class_weight_balanced=class_weight_balanced, n_threads=n_threads, verbose=verbose, seed=0)
## ensemble soft voting to determine the final classification results
voting_system(voting_method, output_dir, components_list, cv_repetition)
print('Finish...')
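# Hedged illustration of the voting schemes (assumed semantics, consistent
# with the docstring above): given per-scale predicted labels [1, 0, 1], hard
# voting returns the majority label 1; soft voting instead averages predicted
# probabilities, e.g. mean([0.9, 0.4, 0.7]) = 0.667, giving label 1 at a 0.5
# threshold; weighted soft voting applies per-scale weights to that average.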
def classification_multiscale_opnmf_multikernel(participant_tsv, opnmf_dir, output_dir, components_list, cv_repetition,
cv_strategy='hold_out', multikernel_method='AverageMKL', class_weight_balanced=True,
n_threads=8, verbose=False):
"""
    Classification based on the multi-scale features extracted from opNMF and different multi-kernel learning (MKL) strategies.
    Args:
    participant_tsv: str, path to the tsv containing the participants, following the BIDS convention.
        The tsv contains the following headers:
        i) the first column is the participant_id;
        ii) the second column should be the session_id;
        iii) the third column should be the diagnosis.
    opnmf_dir: str, path to the output_dir of opNMF
    output_dir: str, path to store the classification results.
    components_list: list, a list containing all the Cs (numbers of components)
    cv_repetition: int, number of repetitions for cross-validation (CV).
    cv_strategy: str, cross-validation strategy used. Currently only hold_out is supported. choices=['hold_out']
    class_weight_balanced: Bool, default is True. If True, balance the class weights (useful when the two groups are imbalanced).
    n_threads: int, default is 8. The number of threads used to run the model in parallel.
    verbose: Bool, default is False. If True, the output messages are verbose.
    multikernel_method: str, method for the MKL. Choice: ['AverageMKL']
Returns:
"""
if cv_strategy != 'hold_out':
raise Exception("Only support repetaed hold-out CV currently!")
### For voxel approach
print('Multi-scale ensemble classification...')
print('Starts classification for each specific scale...')
## read the participant tsv
df_participant = pd.read_csv(participant_tsv, sep='\t')
    ## create a temp folder in the output_dir to save the intermediate tsv files
output_dir_multikernel = os.path.join(output_dir, 'multikernel')
output_dir_intermediate = os.path.join(output_dir, 'intermediate')
if not os.path.exists(output_dir_intermediate):
os.makedirs(output_dir_intermediate)
    ## make the final results folder
if not os.path.exists(output_dir_multikernel):
os.makedirs(output_dir_multikernel)
def prepare_opnmf_tsv_multikernel(components_list, output_dir, opnmf_dir, df_participant):
"""
This is the function to calculate the multi-kernel for classification.
Args:
components_list:
output_dir:
opnmf_dir:
df_participant:
Returns:
"""
kernel_list = []
        ## first, loop over the different numbers of components C.
for i in components_list:
            ## create a temp folder in the output_dir for this number of components
component_output_dir = os.path.join(output_dir, 'component_' + str(i))
if not os.path.exists(component_output_dir):
os.makedirs(component_output_dir)
### grab the output tsv of each C from opNMF
opnmf_tsv = os.path.join(opnmf_dir, 'NMF', 'component_' + str(i), 'atlas_components_signal.tsv')
df_opnmf = pd.read_csv(opnmf_tsv, sep='\t')
### only take the rows in opnmf_tsv which are in common in participant_tsv
df_opnmf = df_opnmf.loc[df_opnmf['participant_id'].isin(df_participant['participant_id'])]
## now check the dimensions
if df_participant.shape[0] != df_opnmf.shape[0]:
raise Exception("The dimension of the participant_tsv and opNMF are not consistent!")
### make sure the row order is consistent with the participant_tsv
df_opnmf = df_opnmf.set_index('participant_id')
df_opnmf = df_opnmf.reindex(index=df_participant['participant_id'])
df_opnmf = df_opnmf.reset_index()
            ## replace the path column in df_opnmf with the diagnosis, and save it to a temp path for pyHYDRA classification
diagnosis_list = list(df_participant['diagnosis'])
df_opnmf["path"] = diagnosis_list
df_opnmf.rename(columns={'path': 'diagnosis'}, inplace=True)
## save to tsv in a temporal folder
opnmf_component_tsv = os.path.join(output_dir, 'intermediate', 'opnmf_component_' + str(i) + '.tsv')
df_opnmf.to_csv(opnmf_component_tsv, index=False, sep='\t', encoding='utf-8')
## Calculate the linear kernel for each C
input_data = RB_Input(opnmf_component_tsv, standardization_method="minmax")
kernel = input_data.get_kernel()
kernel_list.append(kernel)
## merge the list of kernels based on the weights of number of components
components_list_weight = [i / sum(components_list) for i in components_list]
        kernel_final = np.zeros(kernel.shape)
        for kernel_j, weight_j in zip(kernel_list, components_list_weight):
            kernel_final += kernel_j * weight_j
return kernel_final, input_data
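    # Worked example of the weighting above (values assumed): with
    # components_list = [10, 20, 30], the weights are [10/60, 20/60, 30/60],
    # i.e. [1/6, 1/3, 1/2], so kernels computed at finer scales (more
    # components) contribute proportionally more to the averaged kernel.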
if multikernel_method == 'AverageMKL':
kernel_final, input_data = prepare_opnmf_tsv_multikernel(components_list, output_dir, opnmf_dir, df_participant)
## data split
print('Data split was performed based on validation strategy: %s...\n' % cv_strategy)
## check if data split has been done, if yes, the pickle file is there
if os.path.isfile(os.path.join(output_dir, 'data_split_stratified_' + str(cv_repetition) + '-holdout.pkl')):
split_index = pickle.load(
open(os.path.join(output_dir, 'data_split_stratified_' + str(cv_repetition) + '-holdout.pkl'), 'rb'))
else:
split_index, _ = make_cv_partition(input_data.get_y(), cv_strategy, output_dir, cv_repetition)
print('Data split has been done!\n')
print('Starts binary classification...')
## Here, we perform a nested CV (outer CV with defined CV method, inner CV with 10-fold grid search) for classification.
if cv_strategy == 'hold_out':
wf_classification = RB_RepeatedHoldOut_DualSVM_Classification(input_data, split_index,
os.path.join(output_dir, 'multikernel'),
n_threads=n_threads,
n_iterations=cv_repetition,
balanced=class_weight_balanced,
kernel=kernel_final,
verbose=verbose)
wf_classification.run()
elif cv_strategy == 'k_fold':
wf_classification = RB_KFold_DualSVM_Classification(input_data, split_index,
os.path.join(output_dir, 'multikernel'),
cv_repetition, n_threads=n_threads,
kernel=kernel_final,
balanced=class_weight_balanced, verbose=verbose)
wf_classification.run()
else:
raise Exception("CV methods have not been implemented")
else:
raise Exception("Other MKL methods have not been implemented yet...")
print('Finish...')
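# Hedged usage sketch for the ensemble entry point (all paths and settings
# below are illustrative assumptions):
#
#     classification_multiscale_opnmf_voting(
#         "participants.tsv", "./opnmf_output", "./results",
#         components_list=[10, 20, 30], cv_repetition=100,
#         voting_method="soft_voting", n_threads=8)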
| 58.726829 | 163 | 0.666916 | 3,069 | 24,078 | 5.009449 | 0.104268 | 0.039222 | 0.016912 | 0.024977 | 0.823403 | 0.806166 | 0.787173 | 0.78327 | 0.775595 | 0.770001 | 0 | 0.002997 | 0.251557 | 24,078 | 409 | 164 | 58.870416 | 0.850119 | 0.383462 | 0 | 0.54 | 0 | 0 | 0.188758 | 0.018925 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035 | false | 0.005 | 0.03 | 0 | 0.07 | 0.15 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d0c37c26cd0d11685c84090c0a3617c9d686204 | 41 | py | Python | operadores basicos/potencia1.py | gabys12/portafolio-fundamento-de-programacion | c9b47f32e885ed6ae80b14133a609798ea034e19 | [
"CNRI-Python"
] | null | null | null | operadores basicos/potencia1.py | gabys12/portafolio-fundamento-de-programacion | c9b47f32e885ed6ae80b14133a609798ea034e19 | [
"CNRI-Python"
] | null | null | null | operadores basicos/potencia1.py | gabys12/portafolio-fundamento-de-programacion | c9b47f32e885ed6ae80b14133a609798ea034e19 | [
"CNRI-Python"
] | null | null | null | a = 16
b = 4
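# ** is Python's exponentiation operator, so a ** b computes 16**4 = 65536.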
print("a ** b =", a ** b)
| 8.2 | 26 | 0.341463 | 9 | 41 | 1.555556 | 0.555556 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 0.365854 | 41 | 4 | 27 | 10.25 | 0.423077 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d40b476808a8b97e09351bcf0cec6155319ad9a | 1,358 | py | Python | miniproject/main/dataAPI.py | LokeshShelva/pythonproject | b2cb7e3323fc994473b7ffefc47cc72882e29dab | [
"MIT"
] | null | null | null | miniproject/main/dataAPI.py | LokeshShelva/pythonproject | b2cb7e3323fc994473b7ffefc47cc72882e29dab | [
"MIT"
] | null | null | null | miniproject/main/dataAPI.py | LokeshShelva/pythonproject | b2cb7e3323fc994473b7ffefc47cc72882e29dab | [
"MIT"
] | null | null | null | import ast
def data_API(data, user):
    """Normalize raw graph-form data into the chart payload.
    The "data" field arrives as a string-encoded Python literal, so it is
    parsed with ast.literal_eval (which, unlike eval, only accepts literals).
    The "user" key is included only when a user is supplied.
    """
    cleaned = {
        "name": data['graphTitle'],
        "description": data['graphDescription'],
        "xAxis": {
            "name": data["xAxisName"],
            "unit": data["xAxisUnit"],
            "minRange": data["xAxisMinRange"],
            "maxRange": data["xAxisMaxRange"]
        },
        "yAxis": {
            "name": data["yAxisName"],
            "unit": data["yAxisUnit"],
            "minRange": data["yAxisMinRange"],
            "maxRange": data["yAxisMaxRange"]
        },
        "data": ast.literal_eval(data["data"]),
    }
    if user:
        cleaned["user"] = user
    return cleaned
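# Hedged example (field values assumed): calling data_API with a form payload
# whose "data" field is the string "[[0, 1], [1, 2]]" returns the cleaned
# dict with that field parsed into a real Python list, plus a "user" key when
# a user is supplied.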
| 30.863636 | 52 | 0.427835 | 92 | 1,358 | 6.282609 | 0.315217 | 0.083045 | 0.051903 | 0.086505 | 0.813149 | 0.813149 | 0.813149 | 0.813149 | 0.813149 | 0.813149 | 0 | 0 | 0.411635 | 1,358 | 43 | 53 | 31.581395 | 0.723404 | 0 | 0 | 0.7 | 0 | 0 | 0.290133 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025 | false | 0 | 0.025 | 0 | 0.075 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d52d3b0d437092d045deb335af8d9155f5789d6 | 20 | py | Python | 0x02-python-import_modules/0-import_add.py | C-distin/alx-higher_level_programming | ee018135b24ac07d40f2309a4febf21b8a25aee4 | [
"MIT"
] | null | null | null | 0x02-python-import_modules/0-import_add.py | C-distin/alx-higher_level_programming | ee018135b24ac07d40f2309a4febf21b8a25aee4 | [
"MIT"
] | null | null | null | 0x02-python-import_modules/0-import_add.py | C-distin/alx-higher_level_programming | ee018135b24ac07d40f2309a4febf21b8a25aee4 | [
"MIT"
] | null | null | null | __import__("0-add")
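# __import__ is used because the module name "0-add" starts with a digit and
# contains a hyphen, so a plain `import` statement cannot reference it.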
| 10 | 19 | 0.7 | 3 | 20 | 3.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.05 | 20 | 1 | 20 | 20 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5d67882c3fb215a419558b10f4df40e6d30a6f3e | 23 | py | Python | src/ncon/__init__.py | brucelyu/ncon | 1dc412ed4f6f07c403614e5c09b9856ce0578b5b | [
"MIT"
] | 30 | 2017-07-14T21:56:50.000Z | 2022-01-26T06:02:56.000Z | src/ncon/__init__.py | brucelyu/ncon | 1dc412ed4f6f07c403614e5c09b9856ce0578b5b | [
"MIT"
] | 2 | 2021-03-30T17:00:20.000Z | 2022-03-05T14:07:22.000Z | src/ncon/__init__.py | brucelyu/ncon | 1dc412ed4f6f07c403614e5c09b9856ce0578b5b | [
"MIT"
] | 13 | 2018-06-29T15:57:48.000Z | 2022-02-15T16:40:00.000Z | from .ncon import ncon
| 11.5 | 22 | 0.782609 | 4 | 23 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d7366af356f48e7aa6be76516dc335aeefbb24c | 117 | py | Python | tests/unit/test_package.py | Etiqa/bromine | cabf0931f5a06796c26fdc7fb9f7ecf147554fd5 | [
"BSD-2-Clause"
] | 2 | 2018-09-20T12:37:01.000Z | 2021-08-30T14:44:25.000Z | tests/unit/test_package.py | Etiqa/bromine | cabf0931f5a06796c26fdc7fb9f7ecf147554fd5 | [
"BSD-2-Clause"
] | null | null | null | tests/unit/test_package.py | Etiqa/bromine | cabf0931f5a06796c26fdc7fb9f7ecf147554fd5 | [
"BSD-2-Clause"
] | null | null | null | # pylint: disable=missing-docstring
import bromine
def test_version():
assert hasattr(bromine, '__version__')
| 14.625 | 42 | 0.752137 | 13 | 117 | 6.384615 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145299 | 117 | 7 | 43 | 16.714286 | 0.83 | 0.282051 | 0 | 0 | 0 | 0 | 0.134146 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
538e61cd05f03539a74b55556c3b9e0d634f2b50 | 9,847 | py | Python | tests/test_backport_pr.py | lysnikolaou/miss-islington | 5f462576e9b8a4d9910271a386033a1d764d5a66 | [
"Apache-2.0"
] | 13 | 2021-04-01T20:27:23.000Z | 2021-12-30T17:11:24.000Z | tests/test_backport_pr.py | lysnikolaou/miss-islington | 5f462576e9b8a4d9910271a386033a1d764d5a66 | [
"Apache-2.0"
] | 3 | 2021-02-26T03:06:57.000Z | 2022-01-06T22:52:30.000Z | tests/test_backport_pr.py | ConnectionMaster/miss-islington | ac121b7808929853a99ca22c26eb31f9dde56e8c | [
"Apache-2.0"
] | null | null | null | import os
from unittest import mock
from gidgethub import sansio
import redis
import kombu
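# REDIS_URL must be set before miss_islington is imported, since the package
# presumably reads it from the environment at import time.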
os.environ["REDIS_URL"] = "someurl"
from miss_islington import backport_pr
class FakeGH:
def __init__(self, *, getitem=None, post=None):
self._getitem_return = getitem
self.getitem_url = None
self.getiter_url = None
self._post_return = post
    async def getitem(self, url, url_vars=None):
        self.getitem_url = sansio.format_url(url, url_vars or {})
return self._getitem_return[self.getitem_url]
async def post(self, url, *, data):
self.post_url = url
self.post_data = data
return self._post_return
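# FakeGH above is a minimal GitHub-API stub: it records the last URL and
# payload each call touched so the tests below can assert on them without any
# network I/O.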
async def test_unmerged_pr_is_ignored():
data = {"action": "closed", "pull_request": {"merged": False}}
event = sansio.Event(data, event="pull_request", delivery_id="1")
gh = FakeGH()
await backport_pr.router.dispatch(event, gh)
assert gh.getitem_url is None
async def test_labeled_on_unmerged_pr_is_ignored():
data = {"action": "labeled", "pull_request": {"merged": False}}
event = sansio.Event(data, event="pull_request", delivery_id="1")
gh = FakeGH()
await backport_pr.router.dispatch(event, gh)
assert gh.getitem_url is None
async def test_labeled_on_merged_pr_no_backport_label():
data = {
"action": "labeled",
"pull_request": {
"merged": True,
"number": 1,
"merged_by": {"login": "Mariatta"},
"user": {"login": "Mariatta"},
"merge_commit_sha": "f2393593c99dd2d3ab8bfab6fcc5ddee540518a9",
},
"repository": {
"issues_url": "https://api.github.com/repos/python/cpython/issues{/number}"
},
"label": {"name": "CLA signed"},
}
event = sansio.Event(data, event="pull_request", delivery_id="1")
gh = FakeGH()
await backport_pr.router.dispatch(event, gh)
assert not hasattr(gh, "post_data")
assert not hasattr(gh, "post_url")
async def test_merged_pr_no_backport_label():
data = {
"action": "closed",
"pull_request": {
"merged": True,
"number": 1,
"merged_by": {"login": "Mariatta"},
"user": {"login": "Mariatta"},
"merge_commit_sha": "f2393593c99dd2d3ab8bfab6fcc5ddee540518a9",
},
"repository": {
"issues_url": "https://api.github.com/repos/python/cpython/issues/1"
},
}
event = sansio.Event(data, event="pull_request", delivery_id="1")
getitem = {
"https://api.github.com/repos/python/cpython/issues/1": {
"labels_url": "https://api.github.com/repos/python/cpython/issues/1/labels{/name}"
},
"https://api.github.com/repos/python/cpython/issues/1/labels": [
{"name": "CLA signed"}
],
}
gh = FakeGH(getitem=getitem)
await backport_pr.router.dispatch(event, gh)
assert not hasattr(gh, "post_data")
assert not hasattr(gh, "post_url")
async def test_merged_pr_with_backport_label():
data = {
"action": "closed",
"pull_request": {
"merged": True,
"number": 1,
"merged_by": {"login": "Mariatta"},
"user": {"login": "Mariatta"},
"merge_commit_sha": "f2393593c99dd2d3ab8bfab6fcc5ddee540518a9",
},
"repository": {
"issues_url": "https://api.github.com/repos/python/cpython/issues/1"
},
}
event = sansio.Event(data, event="pull_request", delivery_id="1")
getitem = {
"https://api.github.com/repos/python/cpython/issues/1": {
"labels_url": "https://api.github.com/repos/python/cpython/issues/1/labels{/name}"
},
"https://api.github.com/repos/python/cpython/issues/1/labels": [
{"name": "CLA signed"},
{"name": "needs backport to 3.7"},
],
}
gh = FakeGH(getitem=getitem)
with mock.patch("miss_islington.tasks.backport_task.delay"):
await backport_pr.router.dispatch(event, gh)
assert "I'm working now to backport this PR to: 3.7" in gh.post_data["body"]
assert gh.post_url == "/repos/python/cpython/issues/1/comments"
async def test_merged_pr_with_backport_label_thank_pr_author():
data = {
"action": "closed",
"pull_request": {
"merged": True,
"number": 1,
"merged_by": {"login": "Mariatta"},
"user": {"login": "gvanrossum"},
"merge_commit_sha": "f2393593c99dd2d3ab8bfab6fcc5ddee540518a9",
},
"repository": {
"issues_url": "https://api.github.com/repos/python/cpython/issues/1"
},
}
event = sansio.Event(data, event="pull_request", delivery_id="1")
getitem = {
"https://api.github.com/repos/python/cpython/issues/1": {
"labels_url": "https://api.github.com/repos/python/cpython/issues/1/labels{/name}"
},
"https://api.github.com/repos/python/cpython/issues/1/labels": [
{"name": "CLA signed"},
{"name": "needs backport to 3.7"},
],
}
gh = FakeGH(getitem=getitem)
with mock.patch("miss_islington.tasks.backport_task.delay"):
await backport_pr.router.dispatch(event, gh)
assert "I'm working now to backport this PR to: 3.7" in gh.post_data["body"]
assert "Thanks @gvanrossum for the PR" in gh.post_data["body"]
assert gh.post_url == "/repos/python/cpython/issues/1/comments"
async def test_easter_egg():
data = {
"action": "closed",
"pull_request": {
"merged": True,
"number": 1,
"merged_by": {"login": "Mariatta"},
"user": {"login": "gvanrossum"},
"merge_commit_sha": "f2393593c99dd2d3ab8bfab6fcc5ddee540518a9",
},
"repository": {
"issues_url": "https://api.github.com/repos/python/cpython/issues/1"
},
}
event = sansio.Event(data, event="pull_request", delivery_id="1")
getitem = {
"https://api.github.com/repos/python/cpython/issues/1": {
"labels_url": "https://api.github.com/repos/python/cpython/issues/1/labels{/name}"
},
"https://api.github.com/repos/python/cpython/issues/1/labels": [
{"name": "CLA signed"},
{"name": "needs backport to 3.7"},
],
}
gh = FakeGH(getitem=getitem)
with mock.patch("miss_islington.tasks.backport_task.delay"), mock.patch(
"random.random", return_value=0.1
):
await backport_pr.router.dispatch(event, gh)
assert "I'm working now to backport this PR to: 3.7" in gh.post_data["body"]
assert "Thanks @gvanrossum for the PR" in gh.post_data["body"]
assert "I'm not a witch" not in gh.post_data["body"]
assert gh.post_url == "/repos/python/cpython/issues/1/comments"
with mock.patch("miss_islington.tasks.backport_task.delay"), mock.patch(
"random.random", return_value=0.01
):
await backport_pr.router.dispatch(event, gh)
assert "I'm working now to backport this PR to: 3.7" in gh.post_data["body"]
assert "Thanks @gvanrossum for the PR" in gh.post_data["body"]
assert "I'm not a witch" in gh.post_data["body"]
assert gh.post_url == "/repos/python/cpython/issues/1/comments"
async def test_backport_pr_redis_connection_error():
data = {
"action": "closed",
"pull_request": {
"merged": True,
"number": 1,
"merged_by": {"login": "Mariatta"},
"user": {"login": "gvanrossum"},
"merge_commit_sha": "f2393593c99dd2d3ab8bfab6fcc5ddee540518a9",
},
"repository": {
"issues_url": "https://api.github.com/repos/python/cpython/issues/1"
},
}
event = sansio.Event(data, event="pull_request", delivery_id="1")
getitem = {
"https://api.github.com/repos/python/cpython/issues/1": {
"labels_url": "https://api.github.com/repos/python/cpython/issues/1/labels{/name}"
},
"https://api.github.com/repos/python/cpython/issues/1/labels": [
{"name": "CLA signed"},
{"name": "needs backport to 3.7"},
],
}
gh = FakeGH(getitem=getitem)
with mock.patch("miss_islington.tasks.backport_task.delay") as backport_delay_mock:
backport_delay_mock.side_effect = redis.exceptions.ConnectionError
await backport_pr.router.dispatch(event, gh)
assert "I'm having trouble backporting to `3.7`" in gh.post_data["body"]
async def test_backport_pr_kombu_operational_error():
data = {
"action": "closed",
"pull_request": {
"merged": True,
"number": 1,
"merged_by": {"login": "Mariatta"},
"user": {"login": "gvanrossum"},
"merge_commit_sha": "f2393593c99dd2d3ab8bfab6fcc5ddee540518a9",
},
"repository": {
"issues_url": "https://api.github.com/repos/python/cpython/issues/1"
},
}
event = sansio.Event(data, event="pull_request", delivery_id="1")
getitem = {
"https://api.github.com/repos/python/cpython/issues/1": {
"labels_url": "https://api.github.com/repos/python/cpython/issues/1/labels{/name}"
},
"https://api.github.com/repos/python/cpython/issues/1/labels": [
{"name": "CLA signed"},
{"name": "needs backport to 3.7"},
],
}
gh = FakeGH(getitem=getitem)
with mock.patch("miss_islington.tasks.backport_task.delay") as backport_delay_mock:
backport_delay_mock.side_effect = kombu.exceptions.OperationalError
await backport_pr.router.dispatch(event, gh)
assert "I'm having trouble backporting to `3.7`" in gh.post_data["body"]
| 35.420863 | 94 | 0.598964 | 1,165 | 9,847 | 4.902146 | 0.103863 | 0.055857 | 0.091403 | 0.12187 | 0.893364 | 0.88776 | 0.874103 | 0.867974 | 0.862896 | 0.862896 | 0 | 0.029479 | 0.24901 | 9,847 | 277 | 95 | 35.548736 | 0.742799 | 0 | 0 | 0.687764 | 0 | 0.025316 | 0.390779 | 0.06865 | 0 | 0 | 0 | 0 | 0.088608 | 1 | 0.004219 | false | 0 | 0.025316 | 0 | 0.042194 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
53c6db9ce4b987f213971d2d0382ec5a24356506 | 3,678 | py | Python | src/airfly/_vendor/airflow/providers/google/cloud/operators/pubsub.py | ryanchao2012/airfly | 230ddd88885defc67485fa0c51f66c4a67ae98a9 | [
"MIT"
] | 7 | 2021-09-27T11:38:48.000Z | 2022-02-01T06:06:24.000Z | src/airfly/_vendor/airflow/providers/google/cloud/operators/pubsub.py | ryanchao2012/airfly | 230ddd88885defc67485fa0c51f66c4a67ae98a9 | [
"MIT"
] | null | null | null | src/airfly/_vendor/airflow/providers/google/cloud/operators/pubsub.py | ryanchao2012/airfly | 230ddd88885defc67485fa0c51f66c4a67ae98a9 | [
"MIT"
] | null | null | null | # Auto generated by 'inv collect-airflow'
from airfly._vendor.airflow.models.baseoperator import BaseOperator
class PubSubCreateSubscriptionOperator(BaseOperator):
topic: "str"
project_id: "typing.Union[str, NoneType]"
subscription: "typing.Union[str, NoneType]"
subscription_project_id: "typing.Union[str, NoneType]"
ack_deadline_secs: "int"
fail_if_exists: "bool"
gcp_conn_id: "str"
delegate_to: "typing.Union[str, NoneType]"
push_config: "typing.Union[typing.Dict, google.cloud.pubsub_v1.types.PushConfig, NoneType]"
retain_acked_messages: "typing.Union[bool, NoneType]"
message_retention_duration: "typing.Union[typing.Dict, google.protobuf.duration_pb2.Duration, NoneType]"
labels: "typing.Union[typing.Dict[str, str], NoneType]"
enable_message_ordering: "bool"
expiration_policy: "typing.Union[typing.Dict, google.cloud.pubsub_v1.types.ExpirationPolicy, NoneType]"
filter_: "typing.Union[str, NoneType]"
dead_letter_policy: "typing.Union[typing.Dict, google.cloud.pubsub_v1.types.DeadLetterPolicy, NoneType]"
retry_policy: "typing.Union[typing.Dict, google.cloud.pubsub_v1.types.RetryPolicy, NoneType]"
retry: "typing.Union[google.api_core.retry.Retry, NoneType]"
timeout: "typing.Union[float, NoneType]"
metadata: "typing.Union[typing.Sequence[typing.Tuple[str, str]], NoneType]"
topic_project: "typing.Union[str, NoneType]"
subscription_project: "typing.Union[str, NoneType]"
impersonation_chain: "typing.Union[str, typing.Sequence[str], NoneType]"
class PubSubCreateTopicOperator(BaseOperator):
topic: "str"
project_id: "typing.Union[str, NoneType]"
fail_if_exists: "bool"
gcp_conn_id: "str"
delegate_to: "typing.Union[str, NoneType]"
labels: "typing.Union[typing.Dict[str, str], NoneType]"
message_storage_policy: "typing.Union[typing.Dict, google.cloud.pubsub_v1.types.MessageStoragePolicy]"
kms_key_name: "typing.Union[str, NoneType]"
retry: "typing.Union[google.api_core.retry.Retry, NoneType]"
timeout: "typing.Union[float, NoneType]"
metadata: "typing.Union[typing.Sequence[typing.Tuple[str, str]], NoneType]"
project: "typing.Union[str, NoneType]"
impersonation_chain: "typing.Union[str, typing.Sequence[str], NoneType]"
class PubSubDeleteSubscriptionOperator(BaseOperator):
subscription: "str"
project_id: "typing.Union[str, NoneType]"
fail_if_not_exists: "bool"
gcp_conn_id: "str"
delegate_to: "typing.Union[str, NoneType]"
retry: "typing.Union[google.api_core.retry.Retry, NoneType]"
timeout: "typing.Union[float, NoneType]"
metadata: "typing.Union[typing.Sequence[typing.Tuple[str, str]], NoneType]"
project: "typing.Union[str, NoneType]"
impersonation_chain: "typing.Union[str, typing.Sequence[str], NoneType]"
class PubSubDeleteTopicOperator(BaseOperator):
topic: "str"
project_id: "typing.Union[str, NoneType]"
fail_if_not_exists: "bool"
gcp_conn_id: "str"
delegate_to: "typing.Union[str, NoneType]"
retry: "typing.Union[google.api_core.retry.Retry, NoneType]"
timeout: "typing.Union[float, NoneType]"
metadata: "typing.Union[typing.Sequence[typing.Tuple[str, str]], NoneType]"
project: "typing.Union[str, NoneType]"
impersonation_chain: "typing.Union[str, typing.Sequence[str], NoneType]"
class PubSubPublishMessageOperator(BaseOperator):
topic: "str"
messages: "typing.List"
project_id: "typing.Union[str, NoneType]"
gcp_conn_id: "str"
delegate_to: "typing.Union[str, NoneType]"
project: "typing.Union[str, NoneType]"
impersonation_chain: "typing.Union[str, typing.Sequence[str], NoneType]"
| 45.407407 | 108 | 0.736542 | 447 | 3,678 | 5.908277 | 0.185682 | 0.191594 | 0.132526 | 0.166604 | 0.763347 | 0.74593 | 0.696706 | 0.696706 | 0.696706 | 0.626278 | 0 | 0.00187 | 0.127787 | 3,678 | 80 | 109 | 45.975 | 0.821384 | 0.010604 | 0 | 0.666667 | 1 | 0 | 0.551553 | 0.253506 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.014493 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
53d0d6045eabd60a7f89abf358eef718589f4e31 | 25 | py | Python | continual_learning/methods/task_incremental/multi_task/gg/piggyback/__init__.py | jaryP/ContinualAI | 7d9b7614066d219ebd72049692da23ad6ec132b0 | [
"MIT"
] | null | null | null | continual_learning/methods/task_incremental/multi_task/gg/piggyback/__init__.py | jaryP/ContinualAI | 7d9b7614066d219ebd72049692da23ad6ec132b0 | [
"MIT"
] | null | null | null | continual_learning/methods/task_incremental/multi_task/gg/piggyback/__init__.py | jaryP/ContinualAI | 7d9b7614066d219ebd72049692da23ad6ec132b0 | [
"MIT"
] | null | null | null | from .PB import PiggyBack | 25 | 25 | 0.84 | 4 | 25 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 25 | 1 | 25 | 25 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
53dc518117820338344e4c8149cc7f61ce092ddd | 43 | py | Python | python/packages/isce3/cuda/geocode/__init__.py | isce3-testing/isce3-circleci-poc | ec1dfb6019bcdc7afb7beee7be0fa0ce3f3b87b3 | [
"Apache-2.0"
] | null | null | null | python/packages/isce3/cuda/geocode/__init__.py | isce3-testing/isce3-circleci-poc | ec1dfb6019bcdc7afb7beee7be0fa0ce3f3b87b3 | [
"Apache-2.0"
] | 1 | 2021-12-23T00:00:31.000Z | 2021-12-23T00:00:31.000Z | python/packages/isce3/cuda/geocode/__init__.py | isce3-testing/isce3-circleci-poc | ec1dfb6019bcdc7afb7beee7be0fa0ce3f3b87b3 | [
"Apache-2.0"
] | 1 | 2021-12-02T21:10:11.000Z | 2021-12-02T21:10:11.000Z | from isce3.ext.isce3.cuda.geocode import *
| 21.5 | 42 | 0.790698 | 7 | 43 | 4.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051282 | 0.093023 | 43 | 1 | 43 | 43 | 0.820513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
07060848d2f1291cd59a021526d0664f56dfb94b | 176 | py | Python | Py3DViewer/utils/__init__.py | chiarasharp/py3DViewer | c1bcd32f15e648cb7262ae25ae834d47ed9fd00d | [
"MIT"
] | null | null | null | Py3DViewer/utils/__init__.py | chiarasharp/py3DViewer | c1bcd32f15e648cb7262ae25ae834d47ed9fd00d | [
"MIT"
] | null | null | null | Py3DViewer/utils/__init__.py | chiarasharp/py3DViewer | c1bcd32f15e648cb7262ae25ae834d47ed9fd00d | [
"MIT"
] | null | null | null | from .ColorMap import *
from .IO import *
from .metrics import *
from .Observable import *
from .ObservableArray import *
from .load_operations import *
from .matrices import * | 25.142857 | 30 | 0.767045 | 22 | 176 | 6.090909 | 0.454545 | 0.447761 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153409 | 176 | 7 | 31 | 25.142857 | 0.899329 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
074279a742c1ca6ba48653ebaadc8507f7df174a | 190 | py | Python | test/test_cli.py | GochoMugo/remindme | 6cf2f94ce07ead754f1ee5976a7e7d7cbfa1a2e4 | [
"MIT"
] | 17 | 2015-05-02T22:58:07.000Z | 2017-04-17T06:33:43.000Z | test/test_cli.py | GochoMugo/remindme | 6cf2f94ce07ead754f1ee5976a7e7d7cbfa1a2e4 | [
"MIT"
] | 8 | 2015-02-14T16:22:27.000Z | 2016-10-26T13:15:19.000Z | test/test_cli.py | GochoMugo/remindme | 6cf2f94ce07ead754f1ee5976a7e7d7cbfa1a2e4 | [
"MIT"
] | 2 | 2016-02-26T10:47:56.000Z | 2019-10-09T05:49:51.000Z | '''
Tests against remindme's command-line runner
'''
import unittest
from remindme import cli
class Test_Cli(unittest.TestCase):
'''Tests against the Command-line Runner.'''
pass
| 15.833333 | 48 | 0.726316 | 25 | 190 | 5.48 | 0.64 | 0.175182 | 0.248175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168421 | 190 | 11 | 49 | 17.272727 | 0.867089 | 0.436842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
074af095acfa5bceda50730b1b50492789beab75 | 119 | py | Python | tests/test_testing.py | guiferviz/python_module | 0d4494d6d93fa5e8b0d208e50eb0d3290b1107a9 | [
"MIT"
] | null | null | null | tests/test_testing.py | guiferviz/python_module | 0d4494d6d93fa5e8b0d208e50eb0d3290b1107a9 | [
"MIT"
] | null | null | null | tests/test_testing.py | guiferviz/python_module | 0d4494d6d93fa5e8b0d208e50eb0d3290b1107a9 | [
"MIT"
] | null | null | null |
from .context import mymodule
def test_testing_ok():
assert 2 == 2
def test_testing_fail():
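    # Intentionally failing assertion, presumably kept to verify that the
    # test runner reports failures.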
assert 2 == 12
| 11.9 | 29 | 0.672269 | 18 | 119 | 4.222222 | 0.666667 | 0.184211 | 0.368421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054945 | 0.235294 | 119 | 9 | 30 | 13.222222 | 0.78022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.4 | true | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
ab06371096624015995a728fde4e976186c0671f | 37 | py | Python | sudoku/solver/__init__.py | tateishi/py-sudoku | 3af656b2ebe128543ed2bb9c613a6844eda868ca | [
"MIT"
] | null | null | null | sudoku/solver/__init__.py | tateishi/py-sudoku | 3af656b2ebe128543ed2bb9c613a6844eda868ca | [
"MIT"
] | null | null | null | sudoku/solver/__init__.py | tateishi/py-sudoku | 3af656b2ebe128543ed2bb9c613a6844eda868ca | [
"MIT"
] | null | null | null | from .base import BaseSolver, Solver
| 18.5 | 36 | 0.810811 | 5 | 37 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 37 | 1 | 37 | 37 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ab24f816cbb0b7895be8dff12a12c6008a79147b | 140 | py | Python | compile_ui.py | yamiyukiharu/trafficVehicleCounter | b563cff04d821975c74a4a37ddd784541e0ba44e | [
"MIT"
] | 1 | 2022-02-03T12:00:29.000Z | 2022-02-03T12:00:29.000Z | compile_ui.py | yamiyukiharu/trafficVehicleCounter | b563cff04d821975c74a4a37ddd784541e0ba44e | [
"MIT"
] | null | null | null | compile_ui.py | yamiyukiharu/trafficVehicleCounter | b563cff04d821975c74a4a37ddd784541e0ba44e | [
"MIT"
] | 1 | 2022-01-26T10:00:50.000Z | 2022-01-26T10:00:50.000Z | import os
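# Regenerates the Python UI module from the Qt Designer .ui file; the
# commented line below would likewise compile the .qrc icon resources.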
# os.system('pyside2-rcc -o icons_rc.py ./qt/icons/icons.qrc')
os.system('pyside2-uic -g python -o ./qt/Ui_Form.py ./qt/form.ui') | 35 | 66 | 0.692857 | 28 | 140 | 3.392857 | 0.571429 | 0.168421 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015748 | 0.092857 | 140 | 4 | 66 | 35 | 0.732283 | 0.428571 | 0 | 0 | 0 | 0 | 0.670886 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
dba73eb4547563a42ba13bba8d50327f63aa9356 | 2,237 | py | Python | auth0/v2/test/authentication/test_delegated.py | maronnax/auth0-python | 855e275da1f9fddc851f34df4a6b304eed8abb96 | [
"MIT"
] | null | null | null | auth0/v2/test/authentication/test_delegated.py | maronnax/auth0-python | 855e275da1f9fddc851f34df4a6b304eed8abb96 | [
"MIT"
] | null | null | null | auth0/v2/test/authentication/test_delegated.py | maronnax/auth0-python | 855e275da1f9fddc851f34df4a6b304eed8abb96 | [
"MIT"
] | null | null | null | import unittest
import mock
from ...authentication.delegated import Delegated
class TestDelegated(unittest.TestCase):
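    # Each test patches Delegated.post and checks the request it would send (or the error raised).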
@mock.patch('auth0.v2.authentication.delegated.Delegated.post')
def test_get_token_id_token(self, mock_post):
d = Delegated('my.domain.com')
d.get_token(client_id='cid',
target='tgt',
api_type='apt',
grant_type='gt',
id_token='idt',
scope='openid profile')
args, kwargs = mock_post.call_args
self.assertEqual(args[0], 'https://my.domain.com/delegation')
self.assertEqual(kwargs['data'], {
'client_id': 'cid',
'grant_type': 'gt',
'id_token': 'idt',
'target': 'tgt',
'scope': 'openid profile',
'api_type': 'apt',
})
self.assertEqual(kwargs['headers'], {
'Content-Type': 'application/json'
})
@mock.patch('auth0.v2.authentication.delegated.Delegated.post')
def test_get_token_refresh_token(self, mock_post):
d = Delegated('my.domain.com')
d.get_token(client_id='cid',
target='tgt',
api_type='apt',
grant_type='gt',
refresh_token='rtk')
args, kwargs = mock_post.call_args
self.assertEqual(args[0], 'https://my.domain.com/delegation')
self.assertEqual(kwargs['data'], {
'client_id': 'cid',
'grant_type': 'gt',
'refresh_token': 'rtk',
'target': 'tgt',
'scope': 'openid',
'api_type': 'apt',
})
self.assertEqual(kwargs['headers'], {
'Content-Type': 'application/json'
})
@mock.patch('auth0.v2.authentication.delegated.Delegated.post')
def test_get_token_value_error(self, mock_post):
d = Delegated('my.domain.com')
with self.assertRaises(ValueError):
d.get_token(client_id='cid',
target='tgt',
api_type='apt',
grant_type='gt',
refresh_token='rtk',
id_token='idt')
| 30.643836 | 69 | 0.517658 | 227 | 2,237 | 4.911894 | 0.242291 | 0.043049 | 0.049327 | 0.043049 | 0.799103 | 0.799103 | 0.767713 | 0.767713 | 0.738117 | 0.738117 | 0 | 0.005442 | 0.34287 | 2,237 | 72 | 70 | 31.069444 | 0.753061 | 0 | 0 | 0.701754 | 0 | 0 | 0.236477 | 0.064372 | 0 | 0 | 0 | 0 | 0.122807 | 1 | 0.052632 | false | 0 | 0.052632 | 0 | 0.122807 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dbbe028644c0b2696785941f8f888ec3fa90295f | 270 | py | Python | src/encrypt.py | ksaweryr/Database-Faker | f9322de9a241ee91ed6c13cfb8f08f32a79c5fb5 | [
"MIT"
] | 1 | 2022-02-14T12:06:11.000Z | 2022-02-14T12:06:11.000Z | src/encrypt.py | ksaweryr/Database-Faker | f9322de9a241ee91ed6c13cfb8f08f32a79c5fb5 | [
"MIT"
] | null | null | null | src/encrypt.py | ksaweryr/Database-Faker | f9322de9a241ee91ed6c13cfb8f08f32a79c5fb5 | [
"MIT"
] | null | null | null | import platform
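# Use the stdlib crypt module on Unix-like platforms; fall back to passlib elsewhere.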
if platform.system() in ('Linux', 'Darwin'):
from crypt import crypt, METHOD_SHA512
def encrypt(s):
return crypt(s, salt=METHOD_SHA512)
else:
from passlib.hash import sha512_crypt
def encrypt(s):
return sha512_crypt.encrypt(s, rounds=5000) | 16.875 | 45 | 0.740741 | 40 | 270 | 4.9 | 0.525 | 0.122449 | 0.112245 | 0.173469 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069869 | 0.151852 | 270 | 16 | 45 | 16.875 | 0.786026 | 0 | 0 | 0.222222 | 0 | 0 | 0.04059 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0.111111 | 0.333333 | 0.222222 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 6 |
91952c33d576f6cf924a15bf15fe0f5ef09d4bb2 | 25,596 | py | Python | tests/test_normal.py | laike9m/ezcf | 09b236c0670709f7ab01b17c78c12cec2cdfc779 | [
"MIT"
] | 186 | 2015-03-31T10:49:35.000Z | 2021-11-28T00:46:13.000Z | tests/test_normal.py | laike9m/ezcf | 09b236c0670709f7ab01b17c78c12cec2cdfc779 | [
"MIT"
] | null | null | null | tests/test_normal.py | laike9m/ezcf | 09b236c0670709f7ab01b17c78c12cec2cdfc779 | [
"MIT"
] | 23 | 2015-04-01T01:56:12.000Z | 2018-02-24T09:41:35.000Z | # coding: utf-8
import sys
import unittest
import os
import datetime
from pprint import pprint
try:
import ezcf
except ImportError:
sys.path.append('../')
import ezcf
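# ezcf lets config files (JSON/YAML/INI/XML) be imported as if they were Python modules;
# these tests exercise every import form (import, from-import, aliases, subdirectories).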
class TestProto(unittest.TestCase):
def test_import(self):
import sample_json
self.assertEqual(sample_json.hello, "world")
self.assertEqual(sample_json.a_list, [1, 2, 3])
self.assertEqual(sample_json.a_dict, {
"key1": 1000,
"key2": [u"你好", 100]
})
import sample_yaml
self.assertEqual(sample_yaml.Date,
datetime.datetime(2001, 11, 23, 20, 3, 17))
self.assertEqual(sample_yaml.Fatal, 'Unknown variable "bar"')
self.assertEqual(
sample_yaml.Stack,
[{'code': 'x = MoreObject("345\\n")\n',
'file': 'TopClass.py',
'line': 23},
{'code': 'foo = bar', 'file': 'MoreClass.py', 'line': 58}])
self.assertEqual(sample_yaml.Time,
datetime.datetime(2001, 11, 23, 20, 2, 31))
self.assertEqual(sample_yaml.User, 'ed')
self.assertEqual(sample_yaml.warning,
u'一个 slightly different error message.')
import sample_yml
self.assertEqual(sample_yml.Date,
datetime.datetime(2001, 11, 23, 20, 3, 17))
self.assertEqual(sample_yml.Fatal, 'Unknown variable "bar"')
self.assertEqual(
sample_yml.Stack,
[{'code': 'x = MoreObject("345\\n")\n',
'file': 'TopClass.py',
'line': 23},
{'code': 'foo = bar', 'file': 'MoreClass.py', 'line': 58}])
self.assertEqual(sample_yml.Time,
datetime.datetime(2001, 11, 23, 20, 2, 31))
self.assertEqual(sample_yml.User, 'ed')
self.assertEqual(sample_yml.warning,
'A slightly different error message.')
import sample_ini
self.assertEqual(sample_ini.keyword1, 'value1')
self.assertEqual(sample_ini.keyword2, 'value2')
self.assertEqual(
sample_ini.section1,
{
'keyword1': 'value1', 'keyword2': 'value2',
'sub-section': {
'keyword1': 'value1', 'keyword2': 'value2',
'nested section': {
'keyword1': 'value1', 'keyword2': 'value2',
},
},
'sub-section2': {
'keyword1': 'value1', 'keyword2': 'value2',
},
}
)
self.assertEqual(sample_ini.section2,
{'keyword1': 'value1', 'keyword2': 'value2'})
import sample_xml
self.assertEqual(sample_xml.note, {"to": u"我", "from": "you"})
def test_from_import(self):
from sample_json import a_list, a_dict
self.assertEqual(a_list, [1, 2, 3])
self.assertEqual(a_dict, {
"key1": 1000,
"key2": [u"你好", 100]
})
from sample_yaml import Date, Fatal, Stack, Time, User
self.assertEqual(Date, datetime.datetime(2001, 11, 23, 20, 3, 17))
self.assertEqual(Fatal, 'Unknown variable "bar"')
self.assertEqual(
Stack,
[{'code': 'x = MoreObject("345\\n")\n',
'file': 'TopClass.py',
'line': 23},
{'code': 'foo = bar', 'file': 'MoreClass.py', 'line': 58}])
self.assertEqual(Time, datetime.datetime(2001, 11, 23, 20, 2, 31))
self.assertEqual(User, 'ed')
from sample_ini import keyword1
from sample_ini import section1
from sample_ini import section2
self.assertEqual(keyword1, 'value1')
self.assertEqual(
section1,
{
'keyword1': 'value1', 'keyword2': 'value2',
'sub-section': {
'keyword1': 'value1', 'keyword2': 'value2',
'nested section': {
'keyword1': 'value1', 'keyword2': 'value2',
},
},
'sub-section2': {
'keyword1': 'value1', 'keyword2': 'value2',
},
}
)
self.assertEqual(section2,
{'keyword1': 'value1', 'keyword2': 'value2'})
from sample_xml import note
self.assertEqual(note, {"to": u"我", "from": "you"})
if sys.version_info[:2] > (2, 6):
with self.assertRaises(NameError):
print(hello)
with self.assertRaises(NameError):
print(warning)
with self.assertRaises(NameError):
print(keyword2)
def test_import_as(self):
import sample_json as sj
self.assertEqual(sj.hello, "world")
self.assertEqual(sj.a_list, [1, 2, 3])
self.assertEqual(sj.a_dict, {
"key1": 1000,
"key2": [u"你好", 100]
})
import sample_yaml as sy
self.assertEqual(sy.Date, datetime.datetime(2001, 11, 23, 20, 3, 17))
self.assertEqual(sy.Fatal, 'Unknown variable "bar"')
self.assertEqual(
sy.Stack,
[{'code': 'x = MoreObject("345\\n")\n',
'file': 'TopClass.py',
'line': 23},
{'code': 'foo = bar', 'file': 'MoreClass.py', 'line': 58}])
self.assertEqual(sy.Time, datetime.datetime(2001, 11, 23, 20, 2, 31))
self.assertEqual(sy.User, 'ed')
self.assertEqual(sy.warning, u'一个 slightly different error message.')
import sample_ini as si
self.assertEqual(si.keyword1, 'value1')
self.assertEqual(si.keyword2, 'value2')
self.assertEqual(
si.section1,
{
'keyword1': 'value1', 'keyword2': 'value2',
'sub-section': {
'keyword1': 'value1', 'keyword2': 'value2',
'nested section': {
'keyword1': 'value1', 'keyword2': 'value2',
},
},
'sub-section2': {
'keyword1': 'value1', 'keyword2': 'value2',
},
}
)
self.assertEqual(si.section2,
{'keyword1': 'value1', 'keyword2': 'value2'})
import sample_xml as sx
self.assertEqual(sx.note, {"to": u"我", "from": "you"})
def test_from_import_as(self):
from sample_json import hello as h
from sample_json import a_list as al
from sample_json import a_dict as ad
self.assertEqual(h, "world")
self.assertEqual(al, [1, 2, 3])
self.assertEqual(ad, {
"key1": 1000,
"key2": [u"你好", 100]
})
from sample_yaml import Date as d
from sample_yaml import Fatal as f
from sample_yaml import Stack as s
from sample_yaml import Time as t
from sample_yaml import User as u
from sample_yaml import warning as w
self.assertEqual(d, datetime.datetime(2001, 11, 23, 20, 3, 17))
self.assertEqual(f, 'Unknown variable "bar"')
self.assertEqual(
s,
[{'code': 'x = MoreObject("345\\n")\n',
'file': 'TopClass.py',
'line': 23},
{'code': 'foo = bar', 'file': 'MoreClass.py', 'line': 58}])
self.assertEqual(t, datetime.datetime(2001, 11, 23, 20, 2, 31))
self.assertEqual(u, 'ed')
self.assertEqual(w, u'一个 slightly different error message.')
from sample_ini import keyword1 as k1
from sample_ini import keyword2 as k2
from sample_ini import section1 as s1
from sample_ini import section2 as s2
self.assertEqual(k1, 'value1')
self.assertEqual(k2, 'value2')
self.assertEqual(s1,
{
'keyword1': 'value1', 'keyword2': 'value2',
'sub-section': {
'keyword1': 'value1', 'keyword2': 'value2',
'nested section': {
'keyword1': 'value1', 'keyword2': 'value2',
},
},
'sub-section2': {
'keyword1': 'value1', 'keyword2': 'value2',
},
}
)
self.assertEqual(s2, {'keyword1': 'value1', 'keyword2': 'value2'})
from sample_xml import note as no
self.assertEqual(no, {"to": u"我", "from": "you"})
def test_import_subdir(self):
import subdir.sample_json
self.assertEqual(subdir.sample_json.hello, "world")
self.assertEqual(subdir.sample_json.a_list, [1, 2, 3])
self.assertEqual(subdir.sample_json.a_dict, {
"key1": 1000,
"key2": [u"你好", 100]
})
import subdir.sample_yaml
self.assertEqual(subdir.sample_yaml.Date,
datetime.datetime(2001, 11, 23, 20, 3, 17))
self.assertEqual(subdir.sample_yaml.Fatal, 'Unknown variable "bar"')
self.assertEqual(
subdir.sample_yaml.Stack,
[{'code': 'x = MoreObject("345\\n")\n',
'file': 'TopClass.py',
'line': 23},
{'code': 'foo = bar', 'file': 'MoreClass.py', 'line': 58}])
self.assertEqual(subdir.sample_yaml.Time,
datetime.datetime(2001, 11, 23, 20, 2, 31))
self.assertEqual(subdir.sample_yaml.User, 'ed')
self.assertEqual(subdir.sample_yaml.warning,
'A slightly different error message.')
import subdir.sample_ini
self.assertEqual(subdir.sample_ini.keyword1, 'value1')
self.assertEqual(subdir.sample_ini.keyword2, 'value2')
self.assertEqual(
subdir.sample_ini.section1,
{
'keyword1': 'value1', 'keyword2': 'value2',
'sub-section': {
'keyword1': 'value1', 'keyword2': 'value2',
'nested section': {
'keyword1': 'value1', 'keyword2': 'value2',
},
},
'sub-section2': {
'keyword1': 'value1', 'keyword2': 'value2',
},
}
)
self.assertEqual(subdir.sample_ini.section2,
{'keyword1': 'value1', 'keyword2': 'value2'})
import subdir.sample_xml
self.assertEqual(subdir.sample_xml.note, {"to": u"我", "from": "you"})
def test_from_import_subdir(self):
from subdir.sample_json import a_list, a_dict
self.assertEqual(a_list, [1, 2, 3])
self.assertEqual(a_dict, {
"key1": 1000,
"key2": [u"你好", 100]
})
from subdir.sample_yaml import Date, Fatal, Stack, Time, User
self.assertEqual(Date, datetime.datetime(2001, 11, 23, 20, 3, 17))
self.assertEqual(Fatal, 'Unknown variable "bar"')
self.assertEqual(
Stack,
[{'code': 'x = MoreObject("345\\n")\n',
'file': 'TopClass.py',
'line': 23},
{'code': 'foo = bar', 'file': 'MoreClass.py', 'line': 58}])
self.assertEqual(Time, datetime.datetime(2001, 11, 23, 20, 2, 31))
self.assertEqual(User, 'ed')
from subdir.sample_ini import keyword1
from subdir.sample_ini import section1
from subdir.sample_ini import section2
self.assertEqual(keyword1, 'value1')
self.assertEqual(
section1,
{
'keyword1': 'value1', 'keyword2': 'value2',
'sub-section': {
'keyword1': 'value1', 'keyword2': 'value2',
'nested section': {
'keyword1': 'value1', 'keyword2': 'value2',
},
},
'sub-section2': {
'keyword1': 'value1', 'keyword2': 'value2',
},
}
)
self.assertEqual(section2,
{'keyword1': 'value1', 'keyword2': 'value2'})
from subdir.sample_xml import note
self.assertEqual(note, {"to": u"我", "from": "you"})
if sys.version_info[:2] > (2, 6):
with self.assertRaises(NameError):
print(hello)
with self.assertRaises(NameError):
print(warning)
with self.assertRaises(NameError):
print(keyword2)
def test_import_as_subdir(self):
import subdir.sample_json as sj
self.assertEqual(sj.hello, "world")
self.assertEqual(sj.a_list, [1, 2, 3])
self.assertEqual(sj.a_dict, {
"key1": 1000,
"key2": [u"你好", 100]
})
import subdir.sample_yaml as sy
self.assertEqual(sy.Date, datetime.datetime(2001, 11, 23, 20, 3, 17))
self.assertEqual(sy.Fatal, 'Unknown variable "bar"')
self.assertEqual(
sy.Stack,
[{'code': 'x = MoreObject("345\\n")\n',
'file': 'TopClass.py',
'line': 23},
{'code': 'foo = bar', 'file': 'MoreClass.py', 'line': 58}])
self.assertEqual(sy.Time, datetime.datetime(2001, 11, 23, 20, 2, 31))
self.assertEqual(sy.User, 'ed')
self.assertEqual(sy.warning, 'A slightly different error message.')
import subdir.sample_ini as si
self.assertEqual(si.keyword1, 'value1')
self.assertEqual(si.keyword2, 'value2')
self.assertEqual(
si.section1,
{
'keyword1': 'value1', 'keyword2': 'value2',
'sub-section': {
'keyword1': 'value1', 'keyword2': 'value2',
'nested section': {
'keyword1': 'value1', 'keyword2': 'value2',
},
},
'sub-section2': {
'keyword1': 'value1', 'keyword2': 'value2',
},
}
)
self.assertEqual(si.section2,
{'keyword1': 'value1', 'keyword2': 'value2'})
import subdir.sample_xml as sx
self.assertEqual(sx.note, {"to": u"我", "from": "you"})
def test_from_import_as_subdir(self):
from subdir.sample_json import hello as h
from subdir.sample_json import a_list as al
from subdir.sample_json import a_dict as ad
self.assertEqual(h, "world")
self.assertEqual(al, [1, 2, 3])
self.assertEqual(ad, {
"key1": 1000,
"key2": [u"你好", 100]
})
from subdir.sample_yaml import Date as d
from subdir.sample_yaml import Fatal as f
from subdir.sample_yaml import Stack as s
from subdir.sample_yaml import Time as t
from subdir.sample_yaml import User as u
from subdir.sample_yaml import warning as w
self.assertEqual(d, datetime.datetime(2001, 11, 23, 20, 3, 17))
self.assertEqual(f, 'Unknown variable "bar"')
self.assertEqual(
s,
[{'code': 'x = MoreObject("345\\n")\n',
'file': 'TopClass.py',
'line': 23},
{'code': 'foo = bar', 'file': 'MoreClass.py', 'line': 58}])
self.assertEqual(t, datetime.datetime(2001, 11, 23, 20, 2, 31))
self.assertEqual(u, 'ed')
self.assertEqual(w, 'A slightly different error message.')
from subdir.sample_ini import keyword1 as k1
from subdir.sample_ini import keyword2 as k2
from subdir.sample_ini import section1 as s1
from subdir.sample_ini import section2 as s2
self.assertEqual(k1, 'value1')
self.assertEqual(k2, 'value2')
self.assertEqual(s1, {
'keyword1': 'value1', 'keyword2': 'value2',
'sub-section': {
'keyword1': 'value1', 'keyword2': 'value2',
'nested section': {
'keyword1': 'value1', 'keyword2': 'value2',
},
},
'sub-section2': {
'keyword1': 'value1', 'keyword2': 'value2',
},
}
)
self.assertEqual(s2, {'keyword1': 'value1', 'keyword2': 'value2'})
from subdir.sample_xml import note as no
self.assertEqual(no, {"to": u"我", "from": "you"})
def test_import_subdir2(self):
import subdir.subdir.sample_json
self.assertEqual(subdir.subdir.sample_json.hello, "world")
self.assertEqual(subdir.subdir.sample_json.a_list, [1, 2, 3])
self.assertEqual(subdir.subdir.sample_json.a_dict, {
"key1": 1000,
"key2": [u"你好", 100]
})
import subdir.subdir.sample_yaml
self.assertEqual(subdir.subdir.sample_yaml.Date,
datetime.datetime(2001, 11, 23, 20, 3, 17))
self.assertEqual(subdir.subdir.sample_yaml.Fatal,
'Unknown variable "bar"')
self.assertEqual(
subdir.subdir.sample_yaml.Stack,
[{'code': 'x = MoreObject("345\\n")\n',
'file': 'TopClass.py',
'line': 23},
{'code': 'foo = bar', 'file': 'MoreClass.py', 'line': 58}])
self.assertEqual(subdir.subdir.sample_yaml.Time,
datetime.datetime(2001, 11, 23, 20, 2, 31))
self.assertEqual(subdir.subdir.sample_yaml.User, 'ed')
self.assertEqual(subdir.subdir.sample_yaml.warning,
'A slightly different error message.')
import subdir.subdir.sample_ini
self.assertEqual(subdir.subdir.sample_ini.keyword1, 'value1')
self.assertEqual(subdir.subdir.sample_ini.keyword2, 'value2')
self.assertEqual(
subdir.subdir.sample_ini.section1,
{
'keyword1': 'value1', 'keyword2': 'value2',
'sub-section': {
'keyword1': 'value1', 'keyword2': 'value2',
'nested section': {
'keyword1': 'value1', 'keyword2': 'value2',
},
},
'sub-section2': {
'keyword1': 'value1', 'keyword2': 'value2',
},
}
)
self.assertEqual(subdir.subdir.sample_ini.section2,
{'keyword1': 'value1', 'keyword2': 'value2'})
import subdir.subdir.sample_xml
self.assertEqual(subdir.subdir.sample_xml.note, {"to": u"我", "from": "you"})
def test_from_import_subdir2(self):
from subdir.subdir.sample_json import a_list, a_dict
self.assertEqual(a_list, [1, 2, 3])
self.assertEqual(a_dict, {
"key1": 1000,
"key2": [u"你好", 100]
})
from subdir.subdir.sample_yaml import Date, Fatal, Stack, Time, User
self.assertEqual(Date, datetime.datetime(2001, 11, 23, 20, 3, 17))
self.assertEqual(Fatal, 'Unknown variable "bar"')
self.assertEqual(
Stack,
[{'code': 'x = MoreObject("345\\n")\n',
'file': 'TopClass.py',
'line': 23},
{'code': 'foo = bar', 'file': 'MoreClass.py', 'line': 58}])
self.assertEqual(Time, datetime.datetime(2001, 11, 23, 20, 2, 31))
self.assertEqual(User, 'ed')
from subdir.subdir.sample_ini import keyword1
from subdir.subdir.sample_ini import section1
from subdir.subdir.sample_ini import section2
self.assertEqual(keyword1, 'value1')
self.assertEqual(
section1,
{
'keyword1': 'value1', 'keyword2': 'value2',
'sub-section': {
'keyword1': 'value1', 'keyword2': 'value2',
'nested section': {
'keyword1': 'value1', 'keyword2': 'value2',
},
},
'sub-section2': {
'keyword1': 'value1', 'keyword2': 'value2',
},
}
)
self.assertEqual(section2,
{'keyword1': 'value1', 'keyword2': 'value2'})
from subdir.subdir.sample_xml import note
self.assertEqual(note, {"to": u"我", "from": "you"})
if sys.version_info[:2] > (2, 6):
with self.assertRaises(NameError):
print(hello)
with self.assertRaises(NameError):
print(warning)
with self.assertRaises(NameError):
print(keyword2)
def test_import_as_subdir2(self):
import subdir.subdir.sample_json as config
self.assertEqual(config.hello, "world")
self.assertEqual(config.a_list, [1, 2, 3])
self.assertEqual(config.a_dict, {
"key1": 1000,
"key2": [u"你好", 100]
})
import subdir.subdir.sample_yaml as sy
self.assertEqual(sy.Date, datetime.datetime(2001, 11, 23, 20, 3, 17))
self.assertEqual(sy.Fatal, 'Unknown variable "bar"')
self.assertEqual(
sy.Stack,
[{'code': 'x = MoreObject("345\\n")\n',
'file': 'TopClass.py',
'line': 23},
{'code': 'foo = bar', 'file': 'MoreClass.py', 'line': 58}])
self.assertEqual(sy.Time, datetime.datetime(2001, 11, 23, 20, 2, 31))
self.assertEqual(sy.User, 'ed')
self.assertEqual(sy.warning, 'A slightly different error message.')
import subdir.subdir.sample_ini as si
self.assertEqual(si.keyword1, 'value1')
self.assertEqual(si.keyword2, 'value2')
self.assertEqual(
si.section1,
{
'keyword1': 'value1', 'keyword2': 'value2',
'sub-section': {
'keyword1': 'value1', 'keyword2': 'value2',
'nested section': {
'keyword1': 'value1', 'keyword2': 'value2',
},
},
'sub-section2': {
'keyword1': 'value1', 'keyword2': 'value2',
},
}
)
self.assertEqual(si.section2,
{'keyword1': 'value1', 'keyword2': 'value2'})
import subdir.subdir.sample_xml as sx
self.assertEqual(sx.note, {"to": u"我", "from": "you"})
def test_from_import_as_subdir2(self):
from subdir.sample_json import hello as h
from subdir.subdir.sample_json import a_list as al
from subdir.subdir.sample_json import a_dict as ad
self.assertEqual(h, "world")
self.assertEqual(al, [1, 2, 3])
self.assertEqual(ad, {
"key1": 1000,
"key2": [u"你好", 100]
})
from subdir.subdir.sample_yaml import Date as d
from subdir.subdir.sample_yaml import Fatal as f
from subdir.subdir.sample_yaml import Stack as s
from subdir.subdir.sample_yaml import Time as t
from subdir.subdir.sample_yaml import User as u
from subdir.subdir.sample_yaml import warning as w
self.assertEqual(d, datetime.datetime(2001, 11, 23, 20, 3, 17))
self.assertEqual(f, 'Unknown variable "bar"')
self.assertEqual(
s,
[{'code': 'x = MoreObject("345\\n")\n',
'file': 'TopClass.py',
'line': 23},
{'code': 'foo = bar', 'file': 'MoreClass.py', 'line': 58}])
self.assertEqual(t, datetime.datetime(2001, 11, 23, 20, 2, 31))
self.assertEqual(u, 'ed')
self.assertEqual(w, 'A slightly different error message.')
from subdir.subdir.sample_ini import keyword1 as k1
from subdir.subdir.sample_ini import keyword2 as k2
from subdir.subdir.sample_ini import section1 as s1
from subdir.subdir.sample_ini import section2 as s2
self.assertEqual(k1, 'value1')
self.assertEqual(k2, 'value2')
self.assertEqual(s1, {
'keyword1': 'value1', 'keyword2': 'value2',
'sub-section': {
'keyword1': 'value1', 'keyword2': 'value2',
'nested section': {
'keyword1': 'value1', 'keyword2': 'value2',
},
},
'sub-section2': {
'keyword1': 'value1', 'keyword2': 'value2',
},
}
)
self.assertEqual(s2, {'keyword1': 'value1', 'keyword2': 'value2'})
from subdir.subdir.sample_xml import note as no
self.assertEqual(no, {"to": u"我", "from": "you"})
def test_invalid_json(self):
from ezcf._base import InvalidJsonError
if sys.version_info[:2] > (2, 6):
with self.assertRaises(InvalidJsonError):
import invalid_json
def test_invalid_yaml(self):
from ezcf._base import InvalidYamlError
if sys.version_info[:2] > (2, 6):
with self.assertRaises(InvalidYamlError):
import invalid_yaml
def test_invalid_ini(self):
from ezcf._base import InvalidIniError
if sys.version_info[:2] > (2, 6):
with self.assertRaises(InvalidIniError):
import invalid_ini
def test_invalid_xml(self):
from ezcf._base import InvalidXmlError
if sys.version_info[:2] > (2, 6):
with self.assertRaises(InvalidXmlError):
import invalid_xml
| 40.757962 | 84 | 0.515471 | 2,624 | 25,596 | 4.945122 | 0.047256 | 0.190737 | 0.101726 | 0.12947 | 0.950216 | 0.935496 | 0.906982 | 0.861282 | 0.823443 | 0.77127 | 0 | 0.05722 | 0.349312 | 25,596 | 627 | 85 | 40.822967 | 0.721885 | 0.000508 | 0 | 0.59 | 0 | 0 | 0.166882 | 0.01118 | 0 | 0 | 0 | 0 | 0.296667 | 1 | 0.026667 | false | 0 | 0.188333 | 0 | 0.216667 | 0.016667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
91d76e35a31fab1d697ac6f0237121db38de9f1b | 116 | py | Python | freenom_dns_updater/exception/remove_error.py | anhdhbn/Freenom-dns-updater | ec928755d7e18efa00bcc9aed20ad0b3eb093239 | [
"MIT"
] | 160 | 2016-02-27T15:20:24.000Z | 2022-03-13T17:27:49.000Z | freenom_dns_updater/exception/remove_error.py | anhdhbn/Freenom-dns-updater | ec928755d7e18efa00bcc9aed20ad0b3eb093239 | [
"MIT"
] | 31 | 2016-02-12T21:25:35.000Z | 2022-03-03T19:24:59.000Z | freenom_dns_updater/exception/remove_error.py | anhdhbn/Freenom-dns-updater | ec928755d7e18efa00bcc9aed20ad0b3eb093239 | [
"MIT"
] | 56 | 2016-03-05T14:39:21.000Z | 2022-02-11T01:21:15.000Z | from .dns_record_base_exception import DnsRecordBaseException
class RemoveError(DnsRecordBaseException):
pass
| 19.333333 | 61 | 0.853448 | 11 | 116 | 8.727273 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112069 | 116 | 5 | 62 | 23.2 | 0.932039 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
37df6822636300e0157ff3ac6360e5eda7a37f68 | 35 | py | Python | python/covertable/sorters/__init__.py | walkframe/covertable | 0519c947c88319f51d26949f1b0e3113ae6a90e6 | [
"Apache-2.0"
] | 29 | 2019-06-17T13:33:42.000Z | 2022-03-08T01:16:54.000Z | python/covertable/sorters/__init__.py | walkframe/covertable | 0519c947c88319f51d26949f1b0e3113ae6a90e6 | [
"Apache-2.0"
] | 17 | 2019-10-01T18:09:29.000Z | 2021-10-14T16:45:42.000Z | python/covertable/sorters/__init__.py | walkframe/covertable | 0519c947c88319f51d26949f1b0e3113ae6a90e6 | [
"Apache-2.0"
] | 4 | 2019-06-22T16:59:39.000Z | 2021-11-04T14:50:59.000Z | from . import random, hash # NOQA
| 17.5 | 34 | 0.685714 | 5 | 35 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.228571 | 35 | 1 | 35 | 35 | 0.888889 | 0.114286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
37e0b88c071014703212e1539bdf7cf49481a157 | 34,891 | py | Python | readthedocs/rtd_tests/tests/test_resolver.py | mforbes/readthedocs.org | 92f6224a67648a6d27e7a295973c2718d07cee11 | [
"MIT"
] | 2,092 | 2019-06-29T07:47:30.000Z | 2022-03-31T14:54:59.000Z | readthedocs/rtd_tests/tests/test_resolver.py | mforbes/readthedocs.org | 92f6224a67648a6d27e7a295973c2718d07cee11 | [
"MIT"
] | 2,389 | 2019-06-29T04:22:55.000Z | 2022-03-31T22:57:49.000Z | readthedocs/rtd_tests/tests/test_resolver.py | mforbes/readthedocs.org | 92f6224a67648a6d27e7a295973c2718d07cee11 | [
"MIT"
] | 1,185 | 2019-06-29T21:49:31.000Z | 2022-03-30T09:57:15.000Z | from unittest import mock
import django_dynamic_fixture as fixture
import pytest
from django.test import TestCase, override_settings
from readthedocs.builds.constants import EXTERNAL
from readthedocs.core.resolver import (
Resolver,
resolve,
resolve_domain,
resolve_path,
)
from readthedocs.projects.constants import PRIVATE
from readthedocs.projects.models import Domain, Project, ProjectRelationship
from readthedocs.rtd_tests.utils import create_user
@override_settings(PUBLIC_DOMAIN='readthedocs.org')
class ResolverBase(TestCase):
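    # Shared fixtures: a 'pip' project with a subproject ('sub') and a Japanese translation ('trans').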
def setUp(self):
self.owner = create_user(username='owner', password='test')
self.tester = create_user(username='tester', password='test')
self.pip = fixture.get(
Project,
slug='pip',
users=[self.owner],
main_language_project=None,
)
self.subproject = fixture.get(
Project,
slug='sub',
language='ja',
users=[self.owner],
main_language_project=None,
)
self.translation = fixture.get(
Project,
slug='trans',
language='ja',
users=[self.owner],
main_language_project=None,
)
self.pip.add_subproject(self.subproject)
self.pip.translations.add(self.translation)
class SmartResolverPathTests(ResolverBase):
def test_resolver_filename(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(project=self.pip, filename='/foo/bar/blah.html')
self.assertEqual(url, '/docs/pip/en/latest/foo/bar/blah.html')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(project=self.pip, filename='/foo/bar/blah.html')
self.assertEqual(url, '/en/latest/foo/bar/blah.html')
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(project=self.pip, filename='')
self.assertEqual(url, '/docs/pip/en/latest/')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(project=self.pip, filename='')
self.assertEqual(url, '/en/latest/')
def test_resolver_filename_index(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(project=self.pip, filename='foo/bar/index.html')
self.assertEqual(url, '/docs/pip/en/latest/foo/bar/index.html')
url = resolve_path(
project=self.pip, filename='foo/index/index.html',
)
self.assertEqual(url, '/docs/pip/en/latest/foo/index/index.html')
def test_resolver_filename_false_index(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(project=self.pip, filename='foo/foo_index.html')
self.assertEqual(url, '/docs/pip/en/latest/foo/foo_index.html')
url = resolve_path(
project=self.pip, filename='foo_index/foo_index.html',
)
self.assertEqual(
url, '/docs/pip/en/latest/foo_index/foo_index.html',
)
def test_resolver_filename_sphinx(self):
self.pip.documentation_type = 'sphinx'
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(project=self.pip, filename='foo/bar')
self.assertEqual(url, '/docs/pip/en/latest/foo/bar')
url = resolve_path(project=self.pip, filename='foo/index')
self.assertEqual(url, '/docs/pip/en/latest/foo/index')
def test_resolver_filename_mkdocs(self):
self.pip.documentation_type = 'mkdocs'
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(project=self.pip, filename='foo/bar')
self.assertEqual(url, '/docs/pip/en/latest/foo/bar')
url = resolve_path(project=self.pip, filename='foo/index.html')
self.assertEqual(url, '/docs/pip/en/latest/foo/index.html')
url = resolve_path(project=self.pip, filename='foo/bar.html')
self.assertEqual(url, '/docs/pip/en/latest/foo/bar.html')
def test_resolver_subdomain(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(project=self.pip, filename='index.html')
self.assertEqual(url, '/docs/pip/en/latest/index.html')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(project=self.pip, filename='index.html')
self.assertEqual(url, '/en/latest/index.html')
def test_resolver_domain_object(self):
self.domain = fixture.get(
Domain,
domain='http://docs.foobar.com',
project=self.pip,
canonical=True,
)
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(project=self.pip, filename='index.html')
self.assertEqual(url, '/en/latest/index.html')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(project=self.pip, filename='index.html')
self.assertEqual(url, '/en/latest/index.html')
def test_resolver_domain_object_not_canonical(self):
self.domain = fixture.get(
Domain,
domain='http://docs.foobar.com',
project=self.pip,
canonical=False,
)
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(project=self.pip, filename='')
self.assertEqual(url, '/docs/pip/en/latest/')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(project=self.pip, filename='')
self.assertEqual(url, '/en/latest/')
def test_resolver_subproject_subdomain(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(project=self.subproject, filename='index.html')
self.assertEqual(url, '/docs/pip/projects/sub/ja/latest/index.html')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(project=self.subproject, filename='index.html')
self.assertEqual(url, '/projects/sub/ja/latest/index.html')
def test_resolver_subproject_single_version(self):
self.subproject.single_version = True
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(project=self.subproject, filename='index.html')
self.assertEqual(url, '/docs/pip/projects/sub/index.html')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(project=self.subproject, filename='index.html')
self.assertEqual(url, '/projects/sub/index.html')
def test_resolver_subproject_both_single_version(self):
self.pip.single_version = True
self.subproject.single_version = True
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(project=self.subproject, filename='index.html')
self.assertEqual(url, '/docs/pip/projects/sub/index.html')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(project=self.subproject, filename='index.html')
self.assertEqual(url, '/projects/sub/index.html')
def test_resolver_translation(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(project=self.translation, filename='index.html')
self.assertEqual(url, '/docs/pip/ja/latest/index.html')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(project=self.translation, filename='index.html')
self.assertEqual(url, '/ja/latest/index.html')
def test_resolver_urlconf(self):
url = resolve_path(project=self.translation, filename='index.html', urlconf='$version/$filename')
self.assertEqual(url, 'latest/index.html')
def test_resolver_urlconf_extra(self):
url = resolve_path(project=self.translation, filename='index.html', urlconf='foo/bar/$version/$filename')
self.assertEqual(url, 'foo/bar/latest/index.html')
class ResolverPathOverrideTests(ResolverBase):
"""Tests to make sure we can override resolve_path correctly."""
def test_resolver_force_single_version(self):
self.pip.single_version = False
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(
project=self.pip, filename='index.html', single_version=True,
)
self.assertEqual(url, '/docs/pip/index.html')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(
project=self.pip, filename='index.html', single_version=True,
)
self.assertEqual(url, '/index.html')
def test_resolver_force_domain(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(
project=self.pip, filename='index.html', cname=True,
)
self.assertEqual(url, '/en/latest/index.html')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(
project=self.pip, filename='index.html', cname=True,
)
self.assertEqual(url, '/en/latest/index.html')
def test_resolver_force_domain_single_version(self):
self.pip.single_version = False
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(
project=self.pip, filename='index.html', single_version=True,
cname=True,
)
self.assertEqual(url, '/index.html')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(
project=self.pip, filename='index.html', single_version=True,
cname=True,
)
self.assertEqual(url, '/index.html')
def test_resolver_force_language(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(
project=self.pip, filename='index.html', language='cz',
)
self.assertEqual(url, '/docs/pip/cz/latest/index.html')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(
project=self.pip, filename='index.html', language='cz',
)
self.assertEqual(url, '/cz/latest/index.html')
def test_resolver_force_version(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(
project=self.pip, filename='index.html', version_slug='foo',
)
self.assertEqual(url, '/docs/pip/en/foo/index.html')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(
project=self.pip, filename='index.html', version_slug='foo',
)
self.assertEqual(url, '/en/foo/index.html')
def test_resolver_force_language_version(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve_path(
project=self.pip, filename='index.html', language='cz',
version_slug='foo',
)
self.assertEqual(url, '/docs/pip/cz/foo/index.html')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_path(
project=self.pip, filename='index.html', language='cz',
version_slug='foo',
)
self.assertEqual(url, '/cz/foo/index.html')
class ResolverCanonicalProject(TestCase):
def test_project_with_same_translation_and_main_language(self):
proj1 = fixture.get(Project, main_language_project=None)
proj2 = fixture.get(Project, main_language_project=None)
self.assertFalse(proj1.translations.exists())
self.assertIsNone(proj1.main_language_project)
self.assertFalse(proj2.translations.exists())
self.assertIsNone(proj2.main_language_project)
proj1.translations.add(proj2)
proj1.main_language_project = proj2
proj1.save()
self.assertEqual(
proj1.main_language_project.main_language_project,
proj1,
)
# This tests that we aren't going to re-recurse back to resolving proj1
r = Resolver()
self.assertEqual(r._get_canonical_project(proj1), proj2)
def test_project_with_same_superproject_and_translation(self):
proj1 = fixture.get(Project, main_language_project=None)
proj2 = fixture.get(Project, main_language_project=None)
self.assertFalse(proj1.translations.exists())
self.assertIsNone(proj1.main_language_project)
self.assertFalse(proj2.translations.exists())
self.assertIsNone(proj2.main_language_project)
proj2.translations.add(proj1)
proj2.add_subproject(proj1)
self.assertEqual(
proj1.main_language_project,
proj2,
)
self.assertEqual(
proj1.superprojects.first().parent,
proj2,
)
# This tests that we aren't going to re-recurse back to resolving proj1
r = Resolver()
self.assertEqual(r._get_canonical_project(proj1), proj2)
def test_project_with_same_grandchild_project(self):
# Note: we don't disallow this, but we also don't support this in our
# resolution (yet at least)
proj1 = fixture.get(Project, main_language_project=None)
proj2 = fixture.get(Project, main_language_project=None)
proj3 = fixture.get(Project, main_language_project=None)
self.assertFalse(proj1.translations.exists())
self.assertFalse(proj2.translations.exists())
self.assertFalse(proj3.translations.exists())
self.assertIsNone(proj1.main_language_project)
self.assertIsNone(proj2.main_language_project)
self.assertIsNone(proj3.main_language_project)
proj2.add_subproject(proj1)
proj3.add_subproject(proj2)
proj1.add_subproject(proj3)
self.assertEqual(
proj1.superprojects.first().parent,
proj2,
)
self.assertEqual(
proj2.superprojects.first().parent,
proj3,
)
self.assertEqual(
proj3.superprojects.first().parent,
proj1,
)
# This tests that we aren't going to re-recurse back to resolving proj1
r = Resolver()
self.assertEqual(r._get_canonical_project(proj1), proj3)
class ResolverDomainTests(ResolverBase):
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_domain_resolver(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve_domain(project=self.pip)
self.assertEqual(url, 'readthedocs.org')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_domain(project=self.pip)
self.assertEqual(url, 'pip.readthedocs.org')
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_domain_resolver_with_domain_object(self):
self.domain = fixture.get(
Domain,
domain='docs.foobar.com',
project=self.pip,
canonical=True,
)
with override_settings(USE_SUBDOMAIN=False):
url = resolve_domain(project=self.pip)
self.assertEqual(url, 'docs.foobar.com')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_domain(project=self.pip)
self.assertEqual(url, 'docs.foobar.com')
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_domain_resolver_subproject(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve_domain(project=self.subproject)
self.assertEqual(url, 'readthedocs.org')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_domain(project=self.subproject)
self.assertEqual(url, 'pip.readthedocs.org')
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_domain_resolver_subproject_itself(self):
"""
Test inconsistent project/subproject relationship.
        If a project is a subproject of itself (an inconsistent relationship),
        we still resolve the proper domain.
"""
# remove all possible subproject relationships
self.pip.subprojects.all().delete()
        # add the project as a subproject of itself
self.pip.add_subproject(self.pip)
with override_settings(USE_SUBDOMAIN=False):
url = resolve_domain(project=self.pip)
self.assertEqual(url, 'readthedocs.org')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_domain(project=self.pip)
self.assertEqual(url, 'pip.readthedocs.org')
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_domain_resolver_translation(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve_domain(project=self.translation)
self.assertEqual(url, 'readthedocs.org')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_domain(project=self.translation)
self.assertEqual(url, 'pip.readthedocs.org')
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_domain_resolver_translation_itself(self):
"""
Test inconsistent project/translation relationship.
        If a project is a translation of itself (an inconsistent relationship),
        we still resolve the proper domain.
"""
        # remove all possible translation relationships
        self.pip.translations.all().delete()
        # add the project as a translation of itself
self.pip.translations.add(self.pip)
with override_settings(USE_SUBDOMAIN=False):
url = resolve_domain(project=self.pip)
self.assertEqual(url, 'readthedocs.org')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_domain(project=self.pip)
self.assertEqual(url, 'pip.readthedocs.org')
@override_settings(
PRODUCTION_DOMAIN='readthedocs.org',
PUBLIC_DOMAIN='public.readthedocs.org',
)
def test_domain_public(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve_domain(project=self.translation)
self.assertEqual(url, 'readthedocs.org')
with override_settings(USE_SUBDOMAIN=True):
url = resolve_domain(project=self.translation)
self.assertEqual(url, 'pip.public.readthedocs.org')
@override_settings(
PRODUCTION_DOMAIN='readthedocs.org',
PUBLIC_DOMAIN='public.readthedocs.org',
RTD_EXTERNAL_VERSION_DOMAIN='dev.readthedocs.build',
PUBLIC_DOMAIN_USES_HTTPS=True,
USE_SUBDOMAIN=True,
)
def test_domain_external(self):
latest = self.pip.versions.first()
latest.type = EXTERNAL
latest.save()
url = resolve(project=self.pip)
self.assertEqual(url, 'https://pip--latest.dev.readthedocs.build/en/latest/')
url = resolve(project=self.pip, version_slug=latest.slug)
self.assertEqual(url, 'https://pip--latest.dev.readthedocs.build/en/latest/')
url = resolve(project=self.pip, version_slug='non-external')
self.assertEqual(url, 'https://pip.public.readthedocs.org/en/non-external/')
class ResolverTests(ResolverBase):
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_resolver(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://readthedocs.org/docs/pip/en/latest/')
with override_settings(USE_SUBDOMAIN=True):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://pip.readthedocs.org/en/latest/')
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_resolver_domain(self):
self.domain = fixture.get(
Domain,
domain='docs.foobar.com',
project=self.pip,
canonical=True,
)
with override_settings(USE_SUBDOMAIN=False):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://docs.foobar.com/en/latest/')
with override_settings(USE_SUBDOMAIN=True):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://docs.foobar.com/en/latest/')
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_resolver_domain_https(self):
self.domain = fixture.get(
Domain,
domain='docs.foobar.com',
project=self.pip,
https=True,
canonical=True,
)
with override_settings(USE_SUBDOMAIN=False):
url = resolve(project=self.pip)
self.assertEqual(url, 'https://docs.foobar.com/en/latest/')
with override_settings(USE_SUBDOMAIN=True):
url = resolve(project=self.pip)
self.assertEqual(url, 'https://docs.foobar.com/en/latest/')
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_resolver_subproject(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve(project=self.subproject)
self.assertEqual(
url, 'http://readthedocs.org/docs/pip/projects/sub/ja/latest/',
)
with override_settings(USE_SUBDOMAIN=True):
url = resolve(project=self.subproject)
self.assertEqual(
url, 'http://pip.readthedocs.org/projects/sub/ja/latest/',
)
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_resolver_translation(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve(project=self.translation)
self.assertEqual(url, 'http://readthedocs.org/docs/pip/ja/latest/')
with override_settings(USE_SUBDOMAIN=True):
url = resolve(project=self.translation)
self.assertEqual(url, 'http://pip.readthedocs.org/ja/latest/')
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_resolver_nested_translation_of_a_subproject(self):
"""The project is a translation, and the main translation is a subproject of a project."""
translation = fixture.get(
Project,
slug='api-es',
language='es',
users=[self.owner],
main_language_project=self.subproject,
)
with override_settings(USE_SUBDOMAIN=False):
url = resolve(project=translation)
self.assertEqual(
url, 'http://readthedocs.org/docs/pip/projects/sub/es/latest/',
)
with override_settings(USE_SUBDOMAIN=True):
url = resolve(project=translation)
self.assertEqual(
url, 'http://pip.readthedocs.org/projects/sub/es/latest/',
)
@pytest.mark.xfail(reason='We do not support this for now', strict=True)
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_resolver_nested_subproject_of_a_translation(self):
"""The project is a subproject, and the superproject is a translation of a project."""
project = fixture.get(
Project,
slug='all-docs',
language='en',
users=[self.owner],
main_language_project=None,
)
translation = fixture.get(
Project,
slug='docs-es',
language='es',
users=[self.owner],
main_language_project=project,
)
subproject = fixture.get(
Project,
slug='api-es',
language='es',
users=[self.owner],
main_language_project=None,
)
translation.add_subproject(subproject)
with override_settings(USE_SUBDOMAIN=False):
url = resolve(project=subproject)
self.assertEqual(url, 'http://readthedocs.org/docs/docs-es/projects/api-es/es/latest/')
with override_settings(USE_SUBDOMAIN=True):
url = resolve(project=subproject)
self.assertEqual(url, 'http://docs-es.readthedocs.org/projects/api-es/es/latest/')
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_resolver_single_version(self):
self.pip.single_version = True
with override_settings(USE_SUBDOMAIN=False):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://readthedocs.org/docs/pip/')
with override_settings(USE_SUBDOMAIN=True):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://pip.readthedocs.org/')
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_resolver_subproject_alias(self):
relation = self.pip.subprojects.first()
relation.alias = 'sub_alias'
relation.save()
with override_settings(USE_SUBDOMAIN=False):
url = resolve(project=self.subproject)
self.assertEqual(
url,
'http://readthedocs.org/docs/pip/projects/sub_alias/ja/latest/',
)
with override_settings(USE_SUBDOMAIN=True):
url = resolve(project=self.subproject)
self.assertEqual(
url,
'http://pip.readthedocs.org/projects/sub_alias/ja/latest/',
)
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_resolver_private_project(self):
self.pip.privacy_level = PRIVATE
self.pip.save()
with override_settings(USE_SUBDOMAIN=False):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://readthedocs.org/docs/pip/en/latest/')
with override_settings(USE_SUBDOMAIN=True):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://pip.readthedocs.org/en/latest/')
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_resolver_private_project_override(self):
self.pip.privacy_level = PRIVATE
self.pip.save()
with override_settings(USE_SUBDOMAIN=False):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://readthedocs.org/docs/pip/en/latest/')
url = resolve(project=self.pip)
self.assertEqual(url, 'http://readthedocs.org/docs/pip/en/latest/')
with override_settings(USE_SUBDOMAIN=True):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://pip.readthedocs.org/en/latest/')
url = resolve(project=self.pip)
self.assertEqual(url, 'http://pip.readthedocs.org/en/latest/')
@override_settings(PRODUCTION_DOMAIN='readthedocs.org')
def test_resolver_private_version_override(self):
latest = self.pip.versions.first()
latest.privacy_level = PRIVATE
latest.save()
with override_settings(USE_SUBDOMAIN=False):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://readthedocs.org/docs/pip/en/latest/')
url = resolve(project=self.pip)
self.assertEqual(url, 'http://readthedocs.org/docs/pip/en/latest/')
with override_settings(USE_SUBDOMAIN=True):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://pip.readthedocs.org/en/latest/')
url = resolve(project=self.pip)
self.assertEqual(url, 'http://pip.readthedocs.org/en/latest/')
@override_settings(
PRODUCTION_DOMAIN='readthedocs.org',
PUBLIC_DOMAIN='public.readthedocs.org',
)
def test_resolver_public_domain_overrides(self):
with override_settings(USE_SUBDOMAIN=False):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://readthedocs.org/docs/pip/en/latest/')
url = resolve(project=self.pip)
self.assertEqual(url, 'http://readthedocs.org/docs/pip/en/latest/')
with override_settings(USE_SUBDOMAIN=True):
url = resolve(project=self.pip)
self.assertEqual(
url, 'http://pip.public.readthedocs.org/en/latest/',
)
url = resolve(project=self.pip)
self.assertEqual(
url, 'http://pip.public.readthedocs.org/en/latest/',
)
# Domain overrides PUBLIC_DOMAIN
self.domain = fixture.get(
Domain,
domain='docs.foobar.com',
project=self.pip,
canonical=True,
)
with override_settings(USE_SUBDOMAIN=True):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://docs.foobar.com/en/latest/')
url = resolve(project=self.pip)
self.assertEqual(url, 'http://docs.foobar.com/en/latest/')
with override_settings(USE_SUBDOMAIN=False):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://docs.foobar.com/en/latest/')
url = resolve(project=self.pip)
self.assertEqual(url, 'http://docs.foobar.com/en/latest/')
@override_settings(
PRODUCTION_DOMAIN='readthedocs.org',
PUBLIC_DOMAIN='readthedocs.io',
USE_SUBDOMAIN=True,
)
    # Named distinctly from test_resolver_domain_https above, which this definition would otherwise shadow.
    def test_resolver_public_domain_https(self):
with override_settings(PUBLIC_DOMAIN_USES_HTTPS=True):
url = resolve(project=self.pip)
self.assertEqual(url, 'https://pip.readthedocs.io/en/latest/')
url = resolve(project=self.pip)
self.assertEqual(url, 'https://pip.readthedocs.io/en/latest/')
with override_settings(PUBLIC_DOMAIN_USES_HTTPS=False):
url = resolve(project=self.pip)
self.assertEqual(url, 'http://pip.readthedocs.io/en/latest/')
class ResolverAltSetUp:
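    # Same fixtures as ResolverBase, but the subproject is registered under the alias 'sub'.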
def setUp(self):
self.owner = create_user(username='owner', password='test')
self.tester = create_user(username='tester', password='test')
self.pip = fixture.get(
Project,
slug='pip',
users=[self.owner],
main_language_project=None,
)
self.seed = fixture.get(
Project,
slug='sub',
users=[self.owner],
main_language_project=None,
)
self.subproject = fixture.get(
Project,
slug='subproject',
language='ja',
users=[self.owner],
main_language_project=None,
)
self.translation = fixture.get(
Project,
slug='trans',
language='ja',
users=[self.owner],
main_language_project=None,
)
self.pip.add_subproject(self.subproject, alias='sub')
self.pip.translations.add(self.translation)
@override_settings(PUBLIC_DOMAIN='readthedocs.org')
class ResolverDomainTestsAlt(ResolverAltSetUp, ResolverDomainTests):
pass
@override_settings(PUBLIC_DOMAIN='readthedocs.org')
class SmartResolverPathTestsAlt(ResolverAltSetUp, SmartResolverPathTests):
pass
@override_settings(PUBLIC_DOMAIN='readthedocs.org')
class ResolverTestsAlt(ResolverAltSetUp, ResolverTests):
pass
@override_settings(USE_SUBDOMAIN=True, PUBLIC_DOMAIN='readthedocs.io')
class TestSubprojectsWithTranslations(TestCase):
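    # A superproject/subproject pair, each with an English original and a Spanish translation.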
def setUp(self):
self.subproject_en = fixture.get(
Project,
language='en',
privacy_level='public',
main_language_project=None,
)
self.subproject_es = fixture.get(
Project,
language='es',
privacy_level='public',
main_language_project=self.subproject_en,
)
self.superproject_en = fixture.get(
Project,
language='en',
privacy_level='public',
main_language_project=None,
)
self.superproject_es = fixture.get(
Project,
language='es',
privacy_level='public',
main_language_project=self.superproject_en,
)
self.relation = fixture.get(
ProjectRelationship,
parent=self.superproject_en,
child=self.subproject_en,
alias=None,
)
self.assertIn(self.relation, self.superproject_en.subprojects.all())
self.assertEqual(self.superproject_en.subprojects.count(), 1)
def test_subproject_with_translation_without_custom_domain(self):
url = resolve(self.superproject_en, filename='')
self.assertEqual(
url, 'http://{project.slug}.readthedocs.io/en/latest/'.format(
project=self.superproject_en,
),
)
url = resolve(self.superproject_es, filename='')
self.assertEqual(
url, 'http://{project.slug}.readthedocs.io/es/latest/'.format(
project=self.superproject_en,
),
)
url = resolve(self.subproject_en, filename='')
# yapf: disable
self.assertEqual(
url,
(
'http://{project.slug}.readthedocs.io/projects/'
'{subproject.slug}/en/latest/'
).format(
project=self.superproject_en,
subproject=self.subproject_en,
),
)
url = resolve(self.subproject_es, filename='')
self.assertEqual(
url,
(
'http://{project.slug}.readthedocs.io/projects/'
'{subproject.slug}/es/latest/'
).format(
project=self.superproject_en,
subproject=self.subproject_en,
),
)
# yapf: enable
def test_subproject_with_translation_with_custom_domain(self):
fixture.get(
Domain,
domain='docs.example.com',
canonical=True,
cname=True,
https=False,
project=self.superproject_en,
)
url = resolve(self.superproject_en, filename='')
self.assertEqual(url, 'http://docs.example.com/en/latest/')
url = resolve(self.superproject_es, filename='')
self.assertEqual(url, 'http://docs.example.com/es/latest/')
# yapf: disable
url = resolve(self.subproject_en, filename='')
self.assertEqual(
url,
(
'http://docs.example.com/projects/'
'{subproject.slug}/en/latest/'
).format(
subproject=self.subproject_en,
),
)
url = resolve(self.subproject_es, filename='')
self.assertEqual(
url,
(
'http://docs.example.com/projects/'
'{subproject.slug}/es/latest/'
).format(
subproject=self.subproject_en,
),
)
# yapf: enable
| 39.558957 | 113 | 0.626637 | 3,785 | 34,891 | 5.61638 | 0.051783 | 0.081146 | 0.088908 | 0.101421 | 0.874353 | 0.83672 | 0.813576 | 0.78728 | 0.769122 | 0.74292 | 0 | 0.002248 | 0.260583 | 34,891 | 881 | 114 | 39.603859 | 0.821737 | 0.031469 | 0 | 0.667112 | 0 | 0.001337 | 0.14273 | 0.034317 | 0 | 0 | 0 | 0 | 0.173797 | 1 | 0.066845 | false | 0.009358 | 0.012032 | 0 | 0.093583 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
53237476c747d7889ed6c3cbf249b82cac273a7f | 104 | py | Python | kstore/models/manufacturers.py | KeoH/django-keoh-kstore | 825d7984a06823a4e592265c4e791b455ddbb481 | [
"BSD-2-Clause"
] | null | null | null | kstore/models/manufacturers.py | KeoH/django-keoh-kstore | 825d7984a06823a4e592265c4e791b455ddbb481 | [
"BSD-2-Clause"
] | null | null | null | kstore/models/manufacturers.py | KeoH/django-keoh-kstore | 825d7984a06823a4e592265c4e791b455ddbb481 | [
"BSD-2-Clause"
] | null | null | null | #encoding:utf-8
from .abstracts import BaseCompanyModel
class Manufacturer(BaseCompanyModel):
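    # No fields of its own; everything is inherited from the abstract BaseCompanyModel.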
pass
| 17.333333 | 39 | 0.807692 | 11 | 104 | 7.636364 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010989 | 0.125 | 104 | 5 | 40 | 20.8 | 0.912088 | 0.134615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
725e50d75ac9696d94b02a38fd20e2b04c98dd4b | 157 | py | Python | pre_script_clean.py | tokejepsen/mgear_scripts | 10254ce9cced28fc5cd8b94b34a881ca7075b7d1 | [
"MIT"
] | null | null | null | pre_script_clean.py | tokejepsen/mgear_scripts | 10254ce9cced28fc5cd8b94b34a881ca7075b7d1 | [
"MIT"
] | null | null | null | pre_script_clean.py | tokejepsen/mgear_scripts | 10254ce9cced28fc5cd8b94b34a881ca7075b7d1 | [
"MIT"
] | null | null | null | import pymel.core as pm
# Delete all ngSkin nodes
pm.delete(pm.ls(type="ngSkinLayerData"))
# Delete all controller tags
pm.delete(pm.ls(type="controller"))
| 22.428571 | 40 | 0.757962 | 25 | 157 | 4.76 | 0.56 | 0.201681 | 0.168067 | 0.201681 | 0.268908 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10828 | 157 | 6 | 41 | 26.166667 | 0.85 | 0.318471 | 0 | 0 | 0 | 0 | 0.240385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
7279bf6ed0c7d43cecfc73b94de1a15a42abdfd1 | 26 | py | Python | src/dalijums.py | GundegaDekena/startit-kursu-grupas-darbs | 515a09351fb6f2af7f9f40dfdf569df9db308488 | [
"MIT"
] | null | null | null | src/dalijums.py | GundegaDekena/startit-kursu-grupas-darbs | 515a09351fb6f2af7f9f40dfdf569df9db308488 | [
"MIT"
] | null | null | null | src/dalijums.py | GundegaDekena/startit-kursu-grupas-darbs | 515a09351fb6f2af7f9f40dfdf569df9db308488 | [
"MIT"
] | null | null | null | def dali(a, b):
return | 13 | 15 | 0.576923 | 5 | 26 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.269231 | 26 | 2 | 16 | 13 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
728e0f0b6f1dbe0f68e5c2d1f333aac3fc84f535 | 144 | py | Python | dataset/__init__.py | trungtd-2436/LaTeX-OCR | 4689c0b5669697b675c3698e22ecbaaf458452f3 | [
"MIT"
] | null | null | null | dataset/__init__.py | trungtd-2436/LaTeX-OCR | 4689c0b5669697b675c3698e22ecbaaf458452f3 | [
"MIT"
] | null | null | null | dataset/__init__.py | trungtd-2436/LaTeX-OCR | 4689c0b5669697b675c3698e22ecbaaf458452f3 | [
"MIT"
] | null | null | null | import dataset.arxiv
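# Importing the submodules here makes them available after a plain "import dataset".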
import dataset.dataset
import dataset.extract_latex
import dataset.latex2png
import dataset.render
import dataset.scraping
| 20.571429 | 28 | 0.875 | 19 | 144 | 6.578947 | 0.421053 | 0.624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007576 | 0.083333 | 144 | 6 | 29 | 24 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f44a31795166c8450b3b33b7f016aacb10152cba | 49 | py | Python | loss_landscape/utils/__init__.py | zhfeing/loss-landscape | e59afcf68a8408276e7fa5fe2e8f12218485948c | [
"MIT"
] | null | null | null | loss_landscape/utils/__init__.py | zhfeing/loss-landscape | e59afcf68a8408276e7fa5fe2e8f12218485948c | [
"MIT"
] | null | null | null | loss_landscape/utils/__init__.py | zhfeing/loss-landscape | e59afcf68a8408276e7fa5fe2e8f12218485948c | [
"MIT"
] | null | null | null | from .dist_utils import DistLaunchArgs, LogArgs
| 16.333333 | 47 | 0.836735 | 6 | 49 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122449 | 49 | 2 | 48 | 24.5 | 0.930233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f44ea0c30c88da9148c4bc3676810066e062e84b | 159 | py | Python | discovery/exceptions.py | amenezes/discovery-client | 9c41456d1cc14f4aab34628ad4e13423e00bc4be | [
"Apache-2.0"
] | 2 | 2019-07-18T22:43:49.000Z | 2020-03-09T03:27:41.000Z | discovery/exceptions.py | amenezes/discovery-client | 9c41456d1cc14f4aab34628ad4e13423e00bc4be | [
"Apache-2.0"
] | 20 | 2019-02-27T19:08:03.000Z | 2021-06-22T16:47:32.000Z | discovery/exceptions.py | amenezes/discovery-client | 9c41456d1cc14f4aab34628ad4e13423e00bc4be | [
"Apache-2.0"
] | null | null | null | class ServiceNotFoundException(Exception):
    """Raised when a requested service cannot be found."""
class ClientOperationException(Exception):
    """Raised when an operation against the discovery client fails."""
class NoConsulLeaderException(Exception):
    """Raised when the Consul cluster reports no elected leader."""
| 14.454545 | 42 | 0.786164 | 12 | 159 | 10.416667 | 0.5 | 0.312 | 0.288 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157233 | 159 | 10 | 43 | 15.9 | 0.932836 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
be478e5b6226490d146f1892dcd3ba2477ca06bc | 4,453 | py | Python | tests/commons.py | Commonjava/mrrc-uploader | d07fc6acb1490479718e14bdc9c1b18ded606866 | [
"Apache-2.0"
] | null | null | null | tests/commons.py | Commonjava/mrrc-uploader | d07fc6acb1490479718e14bdc9c1b18ded606866 | [
"Apache-2.0"
] | 27 | 2021-09-14T02:16:18.000Z | 2021-11-03T13:59:24.000Z | tests/commons.py | ligangty/mrrc-uploader | d07fc6acb1490479718e14bdc9c1b18ded606866 | [
"Apache-2.0"
] | 5 | 2021-08-19T01:39:46.000Z | 2021-09-15T15:40:06.000Z | # For maven
TEST_BUCKET = "test_bucket"
COMMONS_CLIENT_456_FILES = [
"org/apache/httpcomponents/httpclient/4.5.6/httpclient-4.5.6.pom.sha1",
"org/apache/httpcomponents/httpclient/4.5.6/httpclient-4.5.6.jar",
"org/apache/httpcomponents/httpclient/4.5.6/httpclient-4.5.6.jar.sha1",
"org/apache/httpcomponents/httpclient/4.5.6/httpclient-4.5.6.pom"
]
COMMONS_CLIENT_459_FILES = [
"org/apache/httpcomponents/httpclient/4.5.9/httpclient-4.5.9.pom.sha1",
"org/apache/httpcomponents/httpclient/4.5.9/httpclient-4.5.9.jar",
"org/apache/httpcomponents/httpclient/4.5.9/httpclient-4.5.9.jar.sha1",
"org/apache/httpcomponents/httpclient/4.5.9/httpclient-4.5.9.pom"
]
COMMONS_CLIENT_METAS = [
"org/apache/httpcomponents/httpclient/maven-metadata.xml",
"org/apache/httpcomponents/httpclient/maven-metadata.xml.md5",
"org/apache/httpcomponents/httpclient/maven-metadata.xml.sha1",
"org/apache/httpcomponents/httpclient/maven-metadata.xml.sha256"
]
COMMONS_LOGGING_FILES = [
"commons-logging/commons-logging/1.2/commons-logging-1.2-sources.jar",
"commons-logging/commons-logging/1.2/commons-logging-1.2-sources.jar.sha1",
"commons-logging/commons-logging/1.2/commons-logging-1.2.jar",
"commons-logging/commons-logging/1.2/commons-logging-1.2.jar.sha1",
"commons-logging/commons-logging/1.2/commons-logging-1.2.pom",
"commons-logging/commons-logging/1.2/commons-logging-1.2.pom.sha1",
]
COMMONS_LOGGING_METAS = [
"commons-logging/commons-logging/maven-metadata.xml",
"commons-logging/commons-logging/maven-metadata.xml.md5",
"commons-logging/commons-logging/maven-metadata.xml.sha1",
"commons-logging/commons-logging/maven-metadata.xml.sha256"
]
ARCHETYPE_CATALOG = "archetype-catalog.xml"
ARCHETYPE_CATALOG_FILES = [
ARCHETYPE_CATALOG,
"archetype-catalog.xml.sha1",
"archetype-catalog.xml.md5",
"archetype-catalog.xml.sha256"
]
NON_MVN_FILES = [
"commons-client-4.5.6/example-settings.xml",
"commons-client-4.5.6/licenses/gnu",
"commons-client-4.5.6/licenses/licenses.txt",
"commons-client-4.5.6/README.md",
"commons-client-4.5.9/example-settings.xml",
"commons-client-4.5.9/licenses/gnu",
"commons-client-4.5.9/licenses/licenses.txt",
"commons-client-4.5.9/README.md"
]
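# Derived totals, presumably consumed by the uploader test assertions.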
COMMONS_CLIENT_456_MVN_NUM = (
len(COMMONS_CLIENT_456_FILES) +
len(COMMONS_LOGGING_FILES))
COMMONS_CLIENT_459_MVN_NUM = (
len(COMMONS_CLIENT_459_FILES) +
len(COMMONS_LOGGING_FILES))
COMMONS_CLIENT_MVN_NUM = (
len(COMMONS_CLIENT_456_FILES) +
len(COMMONS_CLIENT_459_FILES) +
len(COMMONS_LOGGING_FILES))
COMMONS_CLIENT_META_NUM = (
len(COMMONS_CLIENT_METAS) +
len(COMMONS_LOGGING_METAS) +
len(ARCHETYPE_CATALOG_FILES))
# For maven indexes
COMMONS_CLIENT_456_INDEXES = [
"index.html",
"org/index.html",
"org/apache/index.html",
"org/apache/httpcomponents/index.html",
"org/apache/httpcomponents/httpclient/index.html",
"org/apache/httpcomponents/httpclient/4.5.6/index.html",
]
COMMONS_CLIENT_459_INDEXES = [
"index.html",
"org/index.html",
"org/apache/index.html",
"org/apache/httpcomponents/index.html",
"org/apache/httpcomponents/httpclient/index.html",
"org/apache/httpcomponents/httpclient/4.5.9/index.html",
]
COMMONS_LOGGING_INDEXES = [
"commons-logging/index.html",
"commons-logging/commons-logging/index.html",
"commons-logging/commons-logging/1.2/index.html",
]
COMMONS_CLIENT_INDEX = "org/apache/httpcomponents/httpclient/index.html"
COMMONS_CLIENT_456_INDEX = "org/apache/httpcomponents/httpclient/4.5.6/index.html"
COMMONS_LOGGING_INDEX = "commons-logging/commons-logging/index.html"
COMMONS_ROOT_INDEX = "index.html"
# For npm
TEST_NPM_BUCKET = "npm_bucket"
CODE_FRAME_7_14_5_FILES = [
"@babel/code-frame/7.14.5/package.json",
"@babel/code-frame/-/code-frame-7.14.5.tgz",
]
CODE_FRAME_7_15_8_FILES = [
"@babel/code-frame/7.15.8/package.json",
"@babel/code-frame/-/code-frame-7.15.8.tgz",
]
CODE_FRAME_META = "@babel/code-frame/package.json"
# For npm indexes
CODE_FRAME_7_14_5_INDEXES = [
"@babel/code-frame/7.14.5/index.html",
"@babel/code-frame/-/index.html",
]
CODE_FRAME_7_15_8_INDEXES = [
"@babel/code-frame/7.15.8/index.html",
"@babel/code-frame/-/index.html",
]
CODE_FRAME_7_14_5_INDEX = "@babel/code-frame/7.14.5/index.html"
CODE_FRAME_INDEX = "@babel/code-frame/index.html"
COMMONS_ROOT_INDEX = "index.html"  # NOTE: duplicate of the identical assignment in the maven-index section above
| 38.059829 | 82 | 0.735235 | 658 | 4,453 | 4.81003 | 0.088146 | 0.181359 | 0.14534 | 0.187678 | 0.82654 | 0.758926 | 0.707741 | 0.506161 | 0.442022 | 0.419905 | 0 | 0.052566 | 0.102852 | 4,453 | 116 | 83 | 38.387931 | 0.739675 | 0.011453 | 0 | 0.190909 | 0 | 0.127273 | 0.634751 | 0.61451 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
be83eaa7383ad08900194f5fb12b63a3090d60dc | 115 | py | Python | examples/calibration.py | indifferentalex/botticelli | 10895695649996899bb0ab31c2b9dca069e35dbf | [
"MIT"
] | 1 | 2018-06-10T16:34:44.000Z | 2018-06-10T16:34:44.000Z | examples/calibration.py | indifferentalex/botticelli | 10895695649996899bb0ab31c2b9dca069e35dbf | [
"MIT"
] | null | null | null | examples/calibration.py | indifferentalex/botticelli | 10895695649996899bb0ab31c2b9dca069e35dbf | [
"MIT"
] | null | null | null | from context import botticelli
from botticelli.utilities import detector_inspector
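# Run the inspection; the actual behaviour lives in
# botticelli.utilities.detector_inspector (not shown here).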
detector_inspector.inspect() | 28.75 | 52 | 0.86087 | 13 | 115 | 7.461538 | 0.615385 | 0.350515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104348 | 115 | 4 | 53 | 28.75 | 0.941748 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe26f6abed55354b4e3063ca5e736917ab14012c | 4,890 | py | Python | backend/apps/heroes/migrations/0002_auto_20170912_1401.py | migcruz/dota2analytics | 2f2f2da271b025ae148e2a5628253c28e6298eea | [
"MIT"
] | null | null | null | backend/apps/heroes/migrations/0002_auto_20170912_1401.py | migcruz/dota2analytics | 2f2f2da271b025ae148e2a5628253c28e6298eea | [
"MIT"
] | null | null | null | backend/apps/heroes/migrations/0002_auto_20170912_1401.py | migcruz/dota2analytics | 2f2f2da271b025ae148e2a5628253c28e6298eea | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9.4 on 2017-09-12 18:01
from __future__ import unicode_literals
import django.contrib.postgres.fields
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('heroes', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='hero',
name='agi_gain',
field=models.FloatField(default=0.0),
),
migrations.AddField(
model_name='hero',
name='attack_range',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='hero',
name='attack_rate',
field=models.FloatField(default=0.0),
),
migrations.AddField(
model_name='hero',
name='attack_type',
field=models.CharField(default='', max_length=6),
),
migrations.AddField(
model_name='hero',
name='base_agi',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='hero',
name='base_armor',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='hero',
name='base_attack_max',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='hero',
name='base_attack_min',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='hero',
name='base_health',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='hero',
name='base_health_regen',
field=models.FloatField(default=0.0),
),
migrations.AddField(
model_name='hero',
name='base_int',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='hero',
name='base_mana',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='hero',
name='base_mana_regen',
field=models.FloatField(default=0.0),
),
migrations.AddField(
model_name='hero',
name='base_mr',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='hero',
name='base_str',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='hero',
name='cm_enabled',
field=models.BooleanField(default=True),
),
migrations.AddField(
model_name='hero',
name='hero_id',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='hero',
name='icon',
field=models.CharField(default='', max_length=100),
),
migrations.AddField(
model_name='hero',
name='int_gain',
field=models.FloatField(default=0.0),
),
migrations.AddField(
model_name='hero',
name='legs',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='hero',
name='move_speed',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='hero',
name='npc_name',
field=models.CharField(default='', max_length=50),
),
migrations.AddField(
model_name='hero',
name='primary_attr',
field=models.CharField(default='', max_length=3),
),
migrations.AddField(
model_name='hero',
name='projectile_speed',
field=models.IntegerField(default=0),
),
migrations.AddField(
model_name='hero',
name='roles',
field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(default='', max_length=12), default=list, size=None),
),
migrations.AddField(
model_name='hero',
name='str_gain',
field=models.FloatField(default=0.0),
),
migrations.AddField(
model_name='hero',
name='turn_rate',
field=models.FloatField(default=0.0),
),
migrations.AddField(
model_name='hero',
name='webm',
field=models.CharField(default='', max_length=100),
),
migrations.AlterField(
model_name='hero',
name='name',
field=models.CharField(default='', max_length=50),
),
]
| 30.185185 | 141 | 0.522904 | 448 | 4,890 | 5.549107 | 0.189732 | 0.096541 | 0.151649 | 0.198311 | 0.819791 | 0.819791 | 0.695897 | 0.680209 | 0.604586 | 0.604586 | 0 | 0.019602 | 0.35317 | 4,890 | 161 | 142 | 30.372671 | 0.766361 | 0.013701 | 0 | 0.720779 | 1 | 0 | 0.084647 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.019481 | 0 | 0.038961 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fe4b0967e88a5e052662a91d7149d10f038d502e | 6,167 | py | Python | tests/unitary/LiquidityGaugeV3/test_set_rewards.py | AqualisDAO/curve-dao-contracts | beec73a068da8ed01c0f710939dc5adb776d565b | [
"MIT"
] | 217 | 2020-06-24T14:01:21.000Z | 2022-03-29T08:35:24.000Z | tests/unitary/LiquidityGaugeV3/test_set_rewards.py | AqualisDAO/curve-dao-contracts | beec73a068da8ed01c0f710939dc5adb776d565b | [
"MIT"
] | 25 | 2020-06-24T09:39:02.000Z | 2022-03-22T17:03:00.000Z | tests/unitary/LiquidityGaugeV3/test_set_rewards.py | AqualisDAO/curve-dao-contracts | beec73a068da8ed01c0f710939dc5adb776d565b | [
"MIT"
] | 110 | 2020-07-10T22:45:49.000Z | 2022-03-29T02:51:08.000Z | import brownie
import pytest
from brownie import ZERO_ADDRESS
REWARD = 10 ** 20
WEEK = 7 * 86400
LP_AMOUNT = 10 ** 18
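# Autouse fixture: every test in this module starts with alice's LP tokens
# already deposited into the gauge.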
@pytest.fixture(scope="module", autouse=True)
def initial_setup(gauge_v3, mock_lp_token, alice):
mock_lp_token.approve(gauge_v3, LP_AMOUNT, {"from": alice})
gauge_v3.deposit(LP_AMOUNT, {"from": alice})
def test_set_rewards_with_deposit(alice, coin_reward, reward_contract, mock_lp_token, gauge_v3):
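    # set_rewards expects the stake/withdraw/getReward 4-byte selectors packed
    # into one hex blob followed by 20 zero bytes of padding.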
sigs = [
reward_contract.stake.signature[2:],
reward_contract.withdraw.signature[2:],
reward_contract.getReward.signature[2:],
]
sigs = f"0x{sigs[0]}{sigs[1]}{sigs[2]}{'00' * 20}"
gauge_v3.set_rewards(reward_contract, sigs, [coin_reward] + [ZERO_ADDRESS] * 7, {"from": alice})
assert mock_lp_token.balanceOf(reward_contract) == LP_AMOUNT
assert gauge_v3.reward_contract() == reward_contract
assert gauge_v3.reward_tokens(0) == coin_reward
assert gauge_v3.reward_tokens(1) == ZERO_ADDRESS
def test_set_rewards_no_deposit(alice, coin_reward, reward_contract, mock_lp_token, gauge_v3):
sigs = f"0x{'00' * 4}{'00' * 4}{reward_contract.getReward.signature[2:]}{'00' * 20}"
gauge_v3.set_rewards(reward_contract, sigs, [coin_reward] + [ZERO_ADDRESS] * 7, {"from": alice})
assert mock_lp_token.balanceOf(gauge_v3) == LP_AMOUNT
assert gauge_v3.reward_contract() == reward_contract
assert gauge_v3.reward_tokens(0) == coin_reward
assert gauge_v3.reward_tokens(1) == ZERO_ADDRESS
def test_multiple_reward_tokens(alice, coin_reward, coin_a, coin_b, reward_contract, gauge_v3):
sigs = f"0x{'00' * 4}{'00' * 4}{reward_contract.getReward.signature[2:]}{'00' * 20}"
reward_tokens = [coin_reward, coin_a, coin_b] + [ZERO_ADDRESS] * 5
gauge_v3.set_rewards(reward_contract, sigs, reward_tokens, {"from": alice})
assert reward_tokens == [gauge_v3.reward_tokens(i) for i in range(8)]
def test_modify_reward_tokens_less(alice, coin_reward, coin_a, coin_b, reward_contract, gauge_v3):
sigs = f"0x{'00' * 4}{'00' * 4}{reward_contract.getReward.signature[2:]}{'00' * 20}"
gauge_v3.set_rewards(
reward_contract, sigs, [coin_reward, coin_a, coin_b] + [ZERO_ADDRESS] * 5, {"from": alice}
)
reward_tokens = [coin_reward] + [ZERO_ADDRESS] * 7
with brownie.reverts("dev: cannot modify existing reward token"):
gauge_v3.set_rewards(reward_contract, sigs, reward_tokens, {"from": alice})
def test_modify_reward_tokens_different(
alice, coin_reward, coin_a, coin_b, reward_contract, gauge_v3
):
sigs = f"0x{'00' * 4}{'00' * 4}{reward_contract.getReward.signature[2:]}{'00' * 20}"
gauge_v3.set_rewards(
reward_contract, sigs, [coin_reward, coin_a, coin_b] + [ZERO_ADDRESS] * 5, {"from": alice}
)
reward_tokens = [coin_reward, coin_b, coin_a] + [ZERO_ADDRESS] * 5
with brownie.reverts("dev: cannot modify existing reward token"):
gauge_v3.set_rewards(reward_contract, sigs, reward_tokens, {"from": alice})
def test_modify_reward_tokens_more(alice, coin_reward, coin_a, coin_b, reward_contract, gauge_v3):
sigs = f"0x{'00' * 4}{'00' * 4}{reward_contract.getReward.signature[2:]}{'00' * 20}"
gauge_v3.set_rewards(reward_contract, sigs, [coin_a] + [ZERO_ADDRESS] * 7, {"from": alice})
reward_tokens = [coin_a, coin_reward, coin_b] + [ZERO_ADDRESS] * 5
gauge_v3.set_rewards(reward_contract, sigs, reward_tokens, {"from": alice})
assert reward_tokens == [gauge_v3.reward_tokens(i) for i in range(8)]
def test_not_a_contract(alice, coin_reward, gauge_v3):
with brownie.reverts("dev: not a contract"):
gauge_v3.set_rewards(alice, "0x00", [coin_reward] + [ZERO_ADDRESS] * 7, {"from": alice})
def test_deposit_no_withdraw(alice, coin_reward, reward_contract, gauge_v3):
sigs = [
reward_contract.stake.signature[2:],
reward_contract.withdraw.signature[2:],
reward_contract.getReward.signature[2:],
]
sigs = f"0x{sigs[0]}{'00' * 4}{sigs[2]}{'00' * 20}"
with brownie.reverts("dev: failed withdraw"):
gauge_v3.set_rewards(
reward_contract, sigs, [coin_reward] + [ZERO_ADDRESS] * 7, {"from": alice}
)
def test_withdraw_no_deposit(alice, coin_reward, reward_contract, gauge_v3):
sigs = [
reward_contract.stake.signature[2:],
reward_contract.withdraw.signature[2:],
reward_contract.getReward.signature[2:],
]
sigs = f"0x{'00' * 4}{sigs[1]}{sigs[2]}{'00' * 20}"
with brownie.reverts("dev: withdraw without deposit"):
gauge_v3.set_rewards(
reward_contract, sigs, [coin_reward] + [ZERO_ADDRESS] * 7, {"from": alice}
)
def test_bad_deposit_sig(alice, coin_reward, reward_contract, gauge_v3):
sigs = [
"12345678",
reward_contract.withdraw.signature[2:],
reward_contract.getReward.signature[2:],
]
sigs = f"0x{sigs[0]}{'00' * 4}{sigs[2]}{'00' * 20}"
with brownie.reverts("dev: failed deposit"):
gauge_v3.set_rewards(
reward_contract, sigs, [coin_reward] + [ZERO_ADDRESS] * 7, {"from": alice}
)
def test_bad_withdraw_sig(alice, coin_reward, reward_contract, gauge_v3):
sigs = [
reward_contract.stake.signature[2:],
"12345678",
reward_contract.getReward.signature[2:],
]
sigs = f"0x{sigs[0]}{'00' * 4}{sigs[2]}{'00' * 20}"
with brownie.reverts("dev: failed withdraw"):
gauge_v3.set_rewards(
reward_contract, sigs, [coin_reward] + [ZERO_ADDRESS] * 7, {"from": alice}
)
def test_no_reward_token(alice, reward_contract, gauge_v3):
with brownie.reverts("dev: no reward token"):
gauge_v3.set_rewards(reward_contract, "0x00", [ZERO_ADDRESS] * 8, {"from": alice})
def test_bad_claim_sig(alice, coin_reward, reward_contract, gauge_v3):
sigs = [
reward_contract.stake.signature[2:],
reward_contract.withdraw.signature[2:],
]
sigs = f"0x{sigs[0]}{sigs[1]}{'00' * 4}{'00' * 20}"
with brownie.reverts("dev: bad claim sig"):
gauge_v3.set_rewards(
reward_contract, sigs, [coin_reward] + [ZERO_ADDRESS] * 7, {"from": alice}
)
| 38.067901 | 100 | 0.674558 | 860 | 6,167 | 4.543023 | 0.089535 | 0.186332 | 0.040952 | 0.069619 | 0.857947 | 0.831072 | 0.808037 | 0.805221 | 0.786793 | 0.756591 | 0 | 0.045859 | 0.179666 | 6,167 | 161 | 101 | 38.304348 | 0.726428 | 0 | 0 | 0.533898 | 0 | 0.042373 | 0.152749 | 0.052538 | 0 | 0 | 0.001297 | 0 | 0.084746 | 1 | 0.118644 | false | 0 | 0.025424 | 0 | 0.144068 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fe4fb96d69deda278590d08c250c44f96c83e1fc | 100 | py | Python | useful_functions.py | sebball/etsy-shop-manager-api-v3 | a12bb652ff944d13c472afb468d3c5d4377c0369 | [
"MIT"
] | 1 | 2022-02-15T13:38:15.000Z | 2022-02-15T13:38:15.000Z | useful_functions.py | sebball/etsy-shop-manager-api-v3 | a12bb652ff944d13c472afb468d3c5d4377c0369 | [
"MIT"
] | null | null | null | useful_functions.py | sebball/etsy-shop-manager-api-v3 | a12bb652ff944d13c472afb468d3c5d4377c0369 | [
"MIT"
] | null | null | null |
def dict_fresh_id(dictionary):
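    # Returns a shallow copy: a new dict object with the same keys and values.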
return {key: value for (key, value) in dictionary.items()}
| 20 | 63 | 0.68 | 14 | 100 | 4.714286 | 0.785714 | 0.242424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 100 | 4 | 64 | 25 | 0.825 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
fe5c6a4aa77d2a69832227412f4253453c365b82 | 47 | py | Python | recipes/recipes_emscripten/pyyaml/test_import_pyyaml.py | emscripten-forge/recipes | 62cb3e146abc8945ac210f38e4e47c080698eae5 | [
"MIT"
] | 1 | 2022-03-10T16:50:56.000Z | 2022-03-10T16:50:56.000Z | recipes/recipes_emscripten/pyyaml/test_import_pyyaml.py | emscripten-forge/recipes | 62cb3e146abc8945ac210f38e4e47c080698eae5 | [
"MIT"
] | 9 | 2022-03-18T09:26:38.000Z | 2022-03-29T09:21:51.000Z | recipes/recipes_emscripten/pyyaml/test_import_pyyaml.py | emscripten-forge/recipes | 62cb3e146abc8945ac210f38e4e47c080698eae5 | [
"MIT"
] | null | null | null |
def test_import_pyyaml():
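    # Smoke test: passes as long as the yaml module can be imported.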
import yaml
| 11.75 | 25 | 0.659574 | 6 | 47 | 4.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.276596 | 47 | 4 | 26 | 11.75 | 0.852941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 1 | 0 | 1.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
fe72b6a278c15426fa499e23d894445915d540e9 | 66 | py | Python | grammaticon/util.py | blurks/grammaticon | 686ce80c1d600130080074bea2d911a3c5b00bbe | [
"Apache-2.0"
] | null | null | null | grammaticon/util.py | blurks/grammaticon | 686ce80c1d600130080074bea2d911a3c5b00bbe | [
"Apache-2.0"
] | null | null | null | grammaticon/util.py | blurks/grammaticon | 686ce80c1d600130080074bea2d911a3c5b00bbe | [
"Apache-2.0"
] | 1 | 2021-12-07T01:35:42.000Z | 2021-12-07T01:35:42.000Z | from markdown import markdown
def md(s):
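    # Thin convenience wrapper rendering a Markdown string to HTML.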
return markdown(s)
| 11 | 29 | 0.712121 | 10 | 66 | 4.7 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212121 | 66 | 5 | 30 | 13.2 | 0.903846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
feaca22e213d7f5b1830a35afb1aa274b58103e8 | 8,616 | py | Python | tests/test_swaps.py | dquigley-warwick/matador | 729e97efb0865c4fff50af87555730ff4b7b6d91 | [
"MIT"
] | 24 | 2020-01-21T21:40:44.000Z | 2022-03-23T13:37:18.000Z | tests/test_swaps.py | dquigley-warwick/matador | 729e97efb0865c4fff50af87555730ff4b7b6d91 | [
"MIT"
] | 234 | 2020-02-03T15:56:58.000Z | 2022-03-29T21:36:45.000Z | tests/test_swaps.py | dquigley-warwick/matador | 729e97efb0865c4fff50af87555730ff4b7b6d91 | [
"MIT"
] | 15 | 2019-11-29T11:33:32.000Z | 2021-11-02T09:14:08.000Z | #!/usr/bin/env python
import unittest
from matador.swaps import AtomicSwapper
from matador.utils.chem_utils import get_periodic_table
class SwapTest(unittest.TestCase):
""" Test atomic swap functions. """
def test_single_simple_swap(self):
# spoof AtomicSwapper __init__
swap_args = {"swap": ["AsP"], "debug": True}
bare_swap = AtomicSwapper([])
bare_swap.periodic_table = get_periodic_table()
bare_swap.swap_args = swap_args["swap"]
# try to parse swaps
bare_swap.parse_swaps()
self.assertEqual(bare_swap.swap_pairs, [[["As"], ["P"]]])
# set up test data for real swap
doc = dict()
doc["atom_types"] = ["Li", "Li", "As", "As"]
swapped_docs, num_swapped = bare_swap.atomic_swaps(doc)
self.assertEqual(num_swapped, 1)
self.assertEqual(swapped_docs[0]["atom_types"], ["Li", "Li", "P", "P"])
def test_null_self_swap(self):
# spoof AtomicSwapper __init__
swap_args = {"swap": ["KK:PP"], "debug": True}
bare_swap = AtomicSwapper([])
bare_swap.periodic_table = get_periodic_table()
bare_swap.swap_args = swap_args["swap"]
# try to parse swaps
bare_swap.parse_swaps()
self.assertEqual(bare_swap.swap_pairs, [[["K"], ["K"]], [["P"], ["P"]]])
# set up test data for real swap
doc = dict()
doc["atom_types"] = ["K", "K", "P", "P"]
swapped_docs, num_swapped = bare_swap.atomic_swaps(doc)
self.assertEqual(num_swapped, 0)
def test_multiple_simple_swap(self):
# spoof AtomicSwapper __init__
swap_args = {"swap": ["AsP:LiNa"], "debug": True}
bare_swap = AtomicSwapper([])
bare_swap.periodic_table = get_periodic_table()
bare_swap.swap_args = swap_args["swap"]
# try to parse swaps
bare_swap.parse_swaps()
self.assertEqual(bare_swap.swap_pairs, [[["As"], ["P"]], [["Li"], ["Na"]]])
# set up test data for real swap
doc = dict()
doc["atom_types"] = ["Li", "Li", "As", "As"]
swapped_docs, num_swapped = bare_swap.atomic_swaps(doc)
self.assertEqual(num_swapped, 1)
self.assertEqual(swapped_docs[0]["atom_types"], ["Na", "Na", "P", "P"])
def test_one_to_many_swap(self):
# spoof AtomicSwapper __init__
swap_args = {"swap": ["As[P,Sb,Zn,Cu]"], "debug": True}
bare_swap = AtomicSwapper([])
bare_swap.periodic_table = get_periodic_table()
bare_swap.swap_args = swap_args["swap"]
# try to parse swaps
bare_swap.parse_swaps()
self.assertEqual(bare_swap.swap_pairs, [[["As"], ["P", "Sb", "Zn", "Cu"]]])
# set up test data for real swap
doc = dict()
doc["atom_types"] = ["Li", "Li", "As", "As"]
swapped_docs, num_swapped = bare_swap.atomic_swaps(doc)
self.assertEqual(num_swapped, 4)
P_found = False
Sb_found = False
Zn_found = False
Cu_found = False
for new_doc in swapped_docs:
self.assertTrue("As" not in new_doc["atom_types"])
if "P" in new_doc["atom_types"]:
self.assertTrue(
x not in new_doc["atom_types"] for x in ["Sb", "Zn", "Cu"]
)
self.assertEqual(new_doc["atom_types"], ["Li", "Li", "P", "P"])
P_found = True
if "Sb" in new_doc["atom_types"]:
self.assertTrue(
x not in new_doc["atom_types"] for x in ["P", "Zn", "Cu"]
)
self.assertEqual(new_doc["atom_types"], ["Li", "Li", "Sb", "Sb"])
Sb_found = True
if "Zn" in new_doc["atom_types"]:
self.assertTrue(
x not in new_doc["atom_types"] for x in ["P", "Sb", "Cu"]
)
self.assertEqual(new_doc["atom_types"], ["Li", "Li", "Zn", "Zn"])
Zn_found = True
if "Cu" in new_doc["atom_types"]:
self.assertTrue(
x not in new_doc["atom_types"] for x in ["P", "Sb", "Zn"]
)
self.assertEqual(new_doc["atom_types"], ["Li", "Li", "Cu", "Cu"])
Cu_found = True
self.assertTrue(P_found)
self.assertTrue(Sb_found)
self.assertTrue(Zn_found)
self.assertTrue(Cu_found)
def test_mistaken_macro(self):
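        # "VP" looks like it could be the [V]P macro; assert it parses as a
        # plain V -> P swap instead.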
swap_args = {"swap": ["VP"], "debug": True}
bare_swap = AtomicSwapper([], maintain_num_species=False)
bare_swap.periodic_table = get_periodic_table()
bare_swap.swap_args = swap_args["swap"]
# try to parse swaps
bare_swap.parse_swaps()
self.assertEqual(bare_swap.swap_pairs, [[["V"], ["P"]]])
def test_many_to_one_macro(self):
swap_args = {"swap": ["[V]P"], "debug": True}
bare_swap = AtomicSwapper([], maintain_num_species=False)
bare_swap.periodic_table = get_periodic_table()
bare_swap.swap_args = swap_args["swap"]
# try to parse swaps
bare_swap.parse_swaps()
self.assertEqual(bare_swap.swap_pairs, [[["N", "P", "As", "Sb", "Bi"], ["P"]]])
# set up test data for real swap
doc = dict()
doc["atom_types"] = ["P", "Sb", "As", "As"]
swapped_docs, num_swapped = bare_swap.atomic_swaps(doc)
self.assertEqual(num_swapped, 1)
self.assertEqual(swapped_docs[0]["atom_types"], ["P", "P", "P", "P"])
def test_many_to_many_macro(self):
swap_args = {"swap": ["[V][Tc,Mo]"], "debug": True}
bare_swap = AtomicSwapper([], maintain_num_species=False)
bare_swap.periodic_table = get_periodic_table()
bare_swap.swap_args = swap_args["swap"]
# try to parse swaps
bare_swap.parse_swaps()
# set up test data for real swap
doc = dict()
doc["atom_types"] = ["P", "Sb", "As", "As", "Bi"]
self.assertEqual(
bare_swap.swap_pairs, [[["N", "P", "As", "Sb", "Bi"], ["Tc", "Mo"]]]
)
swapped_docs, num_swapped = bare_swap.atomic_swaps(doc)
self.assertEqual(num_swapped, 2)
self.assertEqual(swapped_docs[0]["atom_types"], ["Tc", "Tc", "Tc", "Tc", "Tc"])
self.assertEqual(swapped_docs[1]["atom_types"], ["Mo", "Mo", "Mo", "Mo", "Mo"])
def test_multiple_many_to_one_swap(self):
# spoof AtomicSwapper __init__
swap_args = {"swap": ["[Li,Na]K:[Ru, Rh]La"], "debug": True}
bare_swap = AtomicSwapper([], maintain_num_species=False)
bare_swap.periodic_table = get_periodic_table()
bare_swap.swap_args = swap_args["swap"]
# try to parse swaps
bare_swap.parse_swaps()
self.assertEqual(
bare_swap.swap_pairs, [[["Li", "Na"], ["K"]], [["Ru", "Rh"], ["La"]]]
)
# set up test data for real swap
doc = dict()
doc["atom_types"] = ["Li", "Na", "Ru", "Rh"]
swapped_docs, num_swapped = bare_swap.atomic_swaps(doc)
self.assertEqual(num_swapped, 1)
self.assertEqual(swapped_docs[0]["atom_types"], ["K", "K", "La", "La"])
def test_many_to_many_swap_awkward(self):
# spoof AtomicSwapper __init__
swap_args = {"swap": ["[Li,Na]K:[V,I][I]"], "debug": True}
bare_swap = AtomicSwapper([], maintain_num_species=False)
bare_swap.periodic_table = get_periodic_table()
bare_swap.swap_args = swap_args["swap"]
# try to parse swaps
bare_swap.parse_swaps()
self.assertEqual(
bare_swap.swap_pairs,
[[["Li", "Na"], ["K"]], [["V", "I"], ["Li", "Na", "K", "Rb", "Cs", "Fr"]]],
)
# set up test data for real swap
doc = dict()
doc["atom_types"] = ["Li", "Na", "V", "I"]
swapped_docs, num_swapped = bare_swap.atomic_swaps(doc)
self.assertEqual(num_swapped, 6)
self.assertEqual(swapped_docs[0]["atom_types"], ["K", "K", "Li", "Li"])
def test_maintain_num_species(self):
# spoof AtomicSwapper __init__
swap_args = {"swap": ["[Li,Na]K"], "debug": True}
bare_swap = AtomicSwapper([])
bare_swap.periodic_table = get_periodic_table()
bare_swap.swap_args = swap_args["swap"]
# try to parse swaps
bare_swap.parse_swaps()
# set up test data for real swap
doc = dict()
doc["atom_types"] = ["Li", "Na"]
swapped_docs, num_swapped = bare_swap.atomic_swaps(doc)
print(swapped_docs)
self.assertEqual(num_swapped, 0)
if __name__ == "__main__":
unittest.main()
| 42.44335 | 87 | 0.568825 | 1,106 | 8,616 | 4.140145 | 0.089512 | 0.101332 | 0.07862 | 0.042586 | 0.845599 | 0.820266 | 0.801922 | 0.79406 | 0.767416 | 0.741428 | 0 | 0.002553 | 0.272632 | 8,616 | 202 | 88 | 42.653465 | 0.7281 | 0.083682 | 0 | 0.4625 | 0 | 0 | 0.097049 | 0 | 0 | 0 | 0 | 0 | 0.2375 | 1 | 0.0625 | false | 0 | 0.01875 | 0 | 0.0875 | 0.00625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2294cefeba18af26c9cc21bdeea6d4556e57b791 | 121 | py | Python | poll-activity-data/get_activities.py | pingidentity/pingone-sample-scripts | 49b96c317b2055311b426335a4a3729496a123c8 | [
"Apache-2.0"
] | 1 | 2021-08-24T17:40:38.000Z | 2021-08-24T17:40:38.000Z | poll-activity-data/get_activities.py | pingidentity/pingone-sample-scripts | 49b96c317b2055311b426335a4a3729496a123c8 | [
"Apache-2.0"
] | 1 | 2020-10-02T15:46:57.000Z | 2020-10-02T15:46:57.000Z | poll-activity-data/get_activities.py | pingidentity/pingone-sample-scripts | 49b96c317b2055311b426335a4a3729496a123c8 | [
"Apache-2.0"
] | 1 | 2020-10-20T20:14:59.000Z | 2020-10-20T20:14:59.000Z | from subprocess import call
import sys
call( ["node", "/Applications/Splunk/bin/scripts/poll_activities.js"] + sys.argv ) | 40.333333 | 82 | 0.768595 | 17 | 121 | 5.411765 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 121 | 3 | 82 | 40.333333 | 0.836364 | 0 | 0 | 0 | 0 | 0 | 0.45082 | 0.418033 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
22a9e1146f79d0ace7d1fc10dac9df03044d525e | 33,513 | py | Python | mpf/tests/test_OPP.py | hyphz/mpf | 978a25f62a0f43f481ecb44eaa9f316f52c76a78 | [
"MIT"
] | null | null | null | mpf/tests/test_OPP.py | hyphz/mpf | 978a25f62a0f43f481ecb44eaa9f316f52c76a78 | [
"MIT"
] | null | null | null | mpf/tests/test_OPP.py | hyphz/mpf | 978a25f62a0f43f481ecb44eaa9f316f52c76a78 | [
"MIT"
] | null | null | null | import copy
import logging
from mpf.platforms.opp.opp import OppHardwarePlatform
from unittest.mock import MagicMock
import time
from mpf.platforms.opp import opp
from mpf.platforms.opp.opp_rs232_intf import OppRs232Intf
from mpf.tests.MpfTestCase import MpfTestCase
from mpf.tests.loop import MockSerial
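# Fixed byte lengths of OPP commands keyed by command id; the 0x40 fade
# command is variable-length and handled separately in MockOppSocket.write().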
COMMAND_LENGTH = {
0x00: 7,
0x02: 7,
0x07: 7,
0x08: 7,
0x0d: 7,
0x13: 8,
0x14: 7,
0x17: 5,
0x19: 11,
}
class MockOppSocket(MockSerial):
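    """Serial mock that answers expected OPP messages with canned responses."""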
def read(self, length):
del length
if not self.queue:
return b""
msg = b""
while self.queue:
msg += self.queue.pop(0)
return msg
def read_ready(self):
return bool(self.queue)
def write_ready(self):
return True
def write(self, msg):
"""Handle messages in fake OPP."""
#print("Serial received: " + "".join("\\x%02x" % b for b in msg) + " len: " + str(len(msg)))
total_msg_len = len(msg)
if self.crashed:
return
while msg:
# special case: EOM and inventory map
if msg[0] in (0xff, 0xf0):
self._handle_msg(msg[0:1])
msg = msg[1:]
continue
if len(msg) < 2:
raise AssertionError("Message too short. " + "".join("\\x%02x" % b for b in msg))
command = msg[1]
if command == 0x40:
# special case of variable length message
if len(msg) < 6:
raise AssertionError("Fade too short. " + "".join("\\x%02x" % b for b in msg))
command_len = 9 + msg[5]
else:
if command not in COMMAND_LENGTH:
raise AssertionError("Unknown command. " + "".join("\\x%02x" % b for b in msg))
command_len = COMMAND_LENGTH[command]
if len(msg) < command_len:
raise AssertionError("Command length ({}) does not match message length ({}). {}".format(
command_len, len(msg), "".join("\\x%02x" % b for b in msg)
))
self._handle_msg(msg[0:command_len])
msg = msg[command_len:]
return total_msg_len
def _handle_msg(self, msg):
# print("Handling: " + "".join("\\x%02x" % b for b in msg) + " len: " + str(len(msg)))
if msg in self.permanent_commands:
self.queue.append(self.permanent_commands[msg])
return len(msg)
if msg not in self.expected_commands:
self.crashed = True
remaining_expected_commands = dict(self.expected_commands)
self.expected_commands = {"crashed": True}
raise AssertionError("Unexpected command: " + "".join("\\x%02x" % b for b in msg) +
" len: " + str(len(msg)) + " Remaining expected commands: " +
str(remaining_expected_commands))
if self.expected_commands[msg] is not False:
self.queue.append(self.expected_commands[msg])
del self.expected_commands[msg]
def __init__(self):
super().__init__()
self.name = "SerialMock"
self.expected_commands = {}
self.queue = []
self.permanent_commands = {}
self.crashed = False
class OPPCommon(MpfTestCase):
def __init__(self, methodName):
super().__init__(methodName)
self.expected_duration = 2
self.serialMock = None
def get_machine_path(self):
return 'tests/machine_files/opp/'
def _crc_message(self, msg, term=False):
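        """Append the OPP CRC8 byte (plus optional 0xff terminator) to msg."""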
crc_msg = msg + OppRs232Intf.calc_crc8_part_msg(msg, 0, len(msg))
if term:
crc_msg += b'\xff'
return crc_msg
def _mock_loop(self):
self.clock.mock_serial("com1", self.serialMock)
def tearDown(self):
self.assertFalse(self.serialMock.crashed)
super().tearDown()
def get_platform(self):
return False
def _wait_for_processing(self):
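        # Advance the mock clock until all expected commands have been consumed,
        # giving up after ten seconds.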
start = time.time()
while self.serialMock.expected_commands and not self.serialMock.crashed and time.time() < start + 10:
self.advance_time_and_run(.01)
self.assertFalse(self.serialMock.crashed)
class TestOPPStm32(MpfTestCase):
def __init__(self, methodName):
super().__init__(methodName)
self.expected_duration = 2
self.serialMocks = {}
def get_machine_path(self):
return 'tests/machine_files/opp/'
def _crc_message(self, msg, term=False):
crc_msg = msg + OppRs232Intf.calc_crc8_part_msg(msg, 0, len(msg))
if term:
crc_msg += b'\xff'
return crc_msg
def _mock_loop(self):
self.clock.mock_serial("com1", self.serialMocks["com1"])
self.clock.mock_serial("com2", self.serialMocks["com2"])
def tearDown(self):
for port, mock in self.serialMocks.items():
self.assertFalse(mock.crashed, "Mock {} crashed".format(port))
super().tearDown()
def get_platform(self):
return False
def _wait_for_processing(self):
start = time.time()
while sum([len(mock.expected_commands) for mock in self.serialMocks.values()]) and \
not sum([mock.crashed for mock in self.serialMocks.values()]) and time.time() < start + 10:
self.advance_time_and_run(.01)
self.assertFalse(self.serialMocks["com1"].expected_commands)
self.assertFalse(self.serialMocks["com2"].expected_commands)
def get_config_file(self):
return 'config_stm32.yaml'
def setUp(self):
self.expected_duration = 1.5
opp.serial_imported = True
opp.serial = MagicMock()
self.serialMocks["com1"] = MockOppSocket()
self.serialMocks["com2"] = MockOppSocket()
board1_config = b'\x20\x0d\x01\x02\x03\x08' # wing1: solenoids, wing2: inputs, wing3: lamps, wing4: neo_sol
board2_config = b'\x20\x0d\x0b\x0c\x03\x03' # wing1: lamps, wing2: lamps, wing3: lamps, wing4: lamps
board1_version = b'\x20\x02\x00\x02\x01\x00' # 0.2.1.0
board2_version = b'\x20\x02\x00\x02\x01\x00' # 0.2.1.0
inputs1_message = b"\x20\x08\x00\xff\x00\x0c" # inputs 0+1 off, 2+3 on, 8 on
inputs2_message = b"\x20\x08\x00\x00\x00\x00"
self.serialMocks["com1"].expected_commands = {
b'\xf0': b'\xf0\x20', # boards 20 installed
self._crc_message(b'\x20\x0d\x00\x00\x00\x00'): self._crc_message(board1_config), # get config
self._crc_message(b'\x20\x02\x00\x00\x00\x00'): self._crc_message(board1_version), # get version
self._crc_message(b'\x20\x00\x00\x00\x00\x00'): self._crc_message(b'\x20\x00\x01\x23\x45\x67')
}
self.serialMocks["com1"].permanent_commands = {
b'\xff': b'\xff',
self._crc_message(b'\x20\x08\x00\x00\x00\x00'): self._crc_message(inputs1_message), # read inputs
}
self.serialMocks["com2"].expected_commands = {
b'\xf0': b'\xf0\x20', # boards 20 installed
self._crc_message(b'\x20\x0d\x00\x00\x00\x00'): self._crc_message(board2_config), # get config
self._crc_message(b'\x20\x02\x00\x00\x00\x00'): self._crc_message(board2_version), # get version
self._crc_message(b'\x20\x00\x00\x00\x00\x00'): self._crc_message(b'\x20\x00\x00\x00\x00\x02')
}
self.serialMocks["com2"].permanent_commands = {
b'\xff': b'\xff',
self._crc_message(b'\x20\x08\x00\x00\x00\x00'): self._crc_message(inputs2_message), # read inputs
}
super().setUp()
assert isinstance(self.machine.default_platform, OppHardwarePlatform)
self._wait_for_processing()
self.assertEqual(0x00020100, self.machine.default_platform.min_version["19088743"])
self.assertEqual(0x00020100, self.machine.default_platform.min_version["2"])
self.maxDiff = 100000
# test hardware scan
info_str = """Connected CPUs:
- Port: com1 at 115200 baud. Chain Serial: 19088743
-> Board: 0x20 Firmware: 0x20100
- Port: com2 at 115200 baud. Chain Serial: 2
-> Board: 0x20 Firmware: 0x20100
Incand cards:
- Chain: 19088743 Board: 0x20 Card: 0 Numbers: [16, 17, 18, 19, 20, 21, 22, 23]
- Chain: 2 Board: 0x20 Card: 0 Numbers: [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]
Input cards:
- Chain: 19088743 Board: 0x20 Card: 0 Numbers: [0, 1, 2, 3, 8, 9, 10, 11, 12, 13, 14, 15, 25, 26, 27]
Solenoid cards:
- Chain: 19088743 Board: 0x20 Card: 0 Numbers: [0, 1, 2, 3, 12, 13, 14, 15]
LEDs:
- Chain: 19088743 Board: 0x20 Card: 0
"""
self.assertEqual(info_str, self.machine.default_platform.get_info_string())
def testOpp(self):
# assert that the watchdog does not trigger on incand only boards
with self.assertLogs('OPP', level='WARNING') as cm:
self.advance_time_and_run(1)
self.assertFalse(cm.output)
# log something to prevent the test from breaking
logging.getLogger("OPP").warning("DEBUG")
# set color of neo pixel
self.serialMocks["com1"].expected_commands[
self._crc_message(b'\x20\x40\x00\x00\x00\x06\x00\x64\xff\x00\x00\x00\x00\xff', False)] = False
self.machine.lights["l_neo_0"].color("red", fade_ms=100)
self.machine.lights["l_neo_1"].color("blue", fade_ms=100)
self.advance_time_and_run(.01)
self._wait_for_processing()
self.advance_time_and_run(.15)
self.serialMocks["com1"].expected_commands[
self._crc_message(b'\x20\x40\x00\x00\x00\x06\x00\x64\x00\x00\xff\xff\x00\x00', False)] = False
self.machine.lights["l_neo_0"].color("blue", fade_ms=100)
self.machine.lights["l_neo_1"].color("red", fade_ms=100)
self.advance_time_and_run(.01)
self._wait_for_processing()
self.advance_time_and_run(.15)
self.machine.lights["l_neo_0"].color("blue", fade_ms=100)
self.machine.lights["l_neo_1"].color("red", fade_ms=100)
self.advance_time_and_run(.01)
self._wait_for_processing()
self.serialMocks["com2"].expected_commands[
self._crc_message(b'\x20\x40\x10\x13\x00\x02\x00\x64\x99\xe5', False)] = False
self.machine.lights["l2-3"].color("white%60", fade_ms=100)
self.machine.lights["l2-4"].color("white%90", fade_ms=100)
self.advance_time_and_run(.02)
self._wait_for_processing()
self.serialMocks["com2"].expected_commands[
self._crc_message(b'\x20\x40\x20\x00\x00\x02\x00\x64\x99\xe5', False)] = False
self.machine.lights["m0-0"].color("white%60", fade_ms=100)
self.machine.lights["m0-1"].color("white%90", fade_ms=100)
self.advance_time_and_run(.02)
self._wait_for_processing()
class TestOPPFirmware2(OPPCommon, MpfTestCase):
def get_config_file(self):
return 'config2.yaml'
def setUp(self):
self.expected_duration = 1.5
opp.serial_imported = True
opp.serial = MagicMock()
self.serialMock = MockOppSocket()
board1_config = b'\x20\x0d\x01\x02\x03\x03' # wing1: solenoids, wing2: inputs, wing3: lamps, wing4: lamps
board2_config = b'\x21\x0d\x06\x02\x02\x01' # wing1: neo, wing2: inputs, wing3: inputs, wing4: solenoids
board3_config = b'\x22\x0d\x03\x03\x03\x07' # wing1: lamps, wing2: lamps, wing3: lamps, wing4: hi-side lamps
board4_config = b'\x23\x0d\x01\x01\x04\x05' # wing1: sol, wing2: sol, wing3: matrix_out, wing4: matrix_in
board1_version = b'\x20\x02\x00\x02\x00\x00' # 0.2.0.0
board2_version = b'\x21\x02\x00\x02\x00\x00' # 0.2.0.0
board3_version = b'\x22\x02\x00\x02\x00\x00' # 0.2.0.0
board4_version = b'\x23\x02\x00\x02\x00\x00' # 0.2.0.0
inputs1_message = b"\x20\x08\x00\xff\x00\x0c" # inputs 0+1 off, 2+3 on, 8 on
inputs2_message = b"\x21\x08\x00\x00\x00\x00"
inputs3a_message = b"\x23\x08\x00\x00\x00\x00"
inputs3b_message = b"\x23\x19\x00\x00\x00\x00\x00\x00\x00\x01"
self.serialMock.expected_commands = {
b'\xf0': b'\xf0\x20\x21\x22\x23', # boards 20 + 21 + 22 + 23 installed
self._crc_message(b'\x20\x0d\x00\x00\x00\x00'): self._crc_message(board1_config),
self._crc_message(b'\x21\x0d\x00\x00\x00\x00'): self._crc_message(board2_config),
self._crc_message(b'\x22\x0d\x00\x00\x00\x00'): self._crc_message(board3_config),
self._crc_message(b'\x23\x0d\x00\x00\x00\x00'): self._crc_message(board4_config), # get config
self._crc_message(b'\x20\x02\x00\x00\x00\x00'): self._crc_message(board1_version),
self._crc_message(b'\x21\x02\x00\x00\x00\x00'): self._crc_message(board2_version),
self._crc_message(b'\x22\x02\x00\x00\x00\x00'): self._crc_message(board3_version),
self._crc_message(b'\x23\x02\x00\x00\x00\x00'): self._crc_message(board4_version), # get version
self._crc_message(b'\x20\x14\x00\x02\x17\x00'): False, # configure coil 0
self._crc_message(b'\x20\x14\x01\x04\x17\x00'): False, # configure coil 1
self._crc_message(b'\x20\x14\x02\x04\x0a\x00'): False, # configure coil 2
self._crc_message(b'\x20\x14\x03\x00\x0a\x06'): False, # configure coil 3
self._crc_message(b'\x21\x14\x0c\x00\x0a\x01'): False, # configure coil 1-12
self._crc_message(b'\x23\x14\x00\x02\x2a\x00'): False, # configure coil 3-0
self._crc_message(b'\x20\x13\x07\x00\x00\x00\x00', False): False, # turn off all incands
self._crc_message(b'\x22\x13\x07\x00\x00\x00\x00', False): False, # turn off all incands
}
self.serialMock.permanent_commands = {
b'\xff': b'\xff',
self._crc_message(b'\x20\x08\x00\x00\x00\x00'): self._crc_message(inputs1_message),
self._crc_message(b'\x21\x08\x00\x00\x00\x00'): self._crc_message(inputs2_message),
self._crc_message(b'\x23\x08\x00\x00\x00\x00'): self._crc_message(inputs3a_message),
self._crc_message(b'\x23\x19\x00\x00\x00\x00\x00\x00\x00\x00'): self._crc_message(inputs3b_message), # read inputs
}
super().setUp()
assert isinstance(self.machine.default_platform, OppHardwarePlatform)
self._wait_for_processing()
self.assertEqual(0x00020000, self.machine.default_platform.min_version["com1"])
self.assertFalse(self.serialMock.expected_commands)
self.maxDiff = 100000
# test hardware scan
info_str = """Connected CPUs:
- Port: com1 at 115200 baud. Chain Serial: com1
-> Board: 0x20 Firmware: 0x20000
-> Board: 0x21 Firmware: 0x20000
-> Board: 0x22 Firmware: 0x20000
-> Board: 0x23 Firmware: 0x20000
Incand cards:
- Chain: com1 Board: 0x20 Card: 0 Numbers: [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]
- Chain: com1 Board: 0x22 Card: 2 Numbers: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,\
21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]
Input cards:
- Chain: com1 Board: 0x20 Card: 0 Numbers: [0, 1, 2, 3, 8, 9, 10, 11, 12, 13, 14, 15]
- Chain: com1 Board: 0x21 Card: 1 Numbers: [0, 1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,\
22, 23, 24, 25, 26, 27]
- Chain: com1 Board: 0x23 Card: 3 Numbers: [0, 1, 2, 3, 8, 9, 10, 11]
- Chain: com1 Board: 0x23 Card: 3 Numbers: [32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,\
50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78,\
79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95]
Solenoid cards:
- Chain: com1 Board: 0x20 Card: 0 Numbers: [0, 1, 2, 3]
- Chain: com1 Board: 0x21 Card: 1 Numbers: [12, 13, 14, 15]
- Chain: com1 Board: 0x23 Card: 3 Numbers: [0, 1, 2, 3, 4, 5, 6, 7]
LEDs:
- Chain: com1 Board: 0x21 Card: 1
"""
self.assertEqual(info_str, self.machine.default_platform.get_info_string())
def testOpp(self):
self._test_dual_wound_coils()
self._test_switches()
def _test_switches(self):
# initial switches
self.assertSwitchState("s_test", 1)
self.assertSwitchState("s_test_no_debounce", 1)
self.assertSwitchState("s_test_nc", 1)
self.assertSwitchState("s_flipper", 0)
self.assertSwitchState("s_test_card2", 1)
self.assertSwitchState("s_matrix_test", 1)
self.assertSwitchState("s_matrix_test2", 0)
self.assertSwitchState("s_matrix_test3", 1)
# switch change
permanent_commands = copy.deepcopy(self.serialMock.permanent_commands)
inputs1_message = b"\x20\x08\x00\x00\x01\x08" # inputs 0+1+2 off, 3 on, 8 off
inputs2_message = b'\x21\x08\x00\x00\x00\x00'
inputs3a_message = b"\x23\x08\x00\x00\x00\x00"
inputs3b_message = b"\x23\x19\x80\x00\x00\x00\x00\x01\x00\x00"
self.serialMock.permanent_commands = {
b'\xff': b'\xff',
self._crc_message(b'\x20\x08\x00\x00\x00\x00'): self._crc_message(inputs1_message),
self._crc_message(b'\x21\x08\x00\x00\x00\x00'): self._crc_message(inputs2_message),
self._crc_message(b'\x23\x08\x00\x00\x00\x00'): self._crc_message(inputs3a_message),
self._crc_message(b'\x23\x19\x00\x00\x00\x00\x00\x00\x00\x00'): self._crc_message(inputs3b_message),
}
switch = self.machine.switches["s_test_nc"]
while self.machine.switch_controller.is_active(switch):
self.advance_time_and_run(0.1)
self.assertSwitchState("s_test", 1)
self.assertSwitchState("s_test_no_debounce", 1)
self.assertSwitchState("s_test_nc", 0)
self.assertSwitchState("s_flipper", 0)
self.assertSwitchState("s_test_card2", 0)
self.assertSwitchState("s_matrix_test", 0)
self.assertSwitchState("s_matrix_test2", 1)
self.assertSwitchState("s_matrix_test3", 0)
self.serialMock.permanent_commands = permanent_commands
def _test_dual_wound_coils(self):
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x02\x24\x0a\x00')] = False
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x03\x23\x0a\x00')] = False
self.serialMock.expected_commands[self._crc_message(b'\x20\x17\x03\x03')] = False
self.serialMock.expected_commands[self._crc_message(b'\x20\x17\x03\x02')] = False
self.machine.flippers["f_test_hold"].enable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
# enable a coil (when a rule is active)
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x03\x21\x0a\x06')] = False
self.serialMock.expected_commands[self._crc_message(b'\x20\x07\x00\x08\x00\x08', False)] = False
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x03\x23\x0a\x00')] = False
self.machine.coils["c_flipper_main"].enable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
# pulse it (when rule is active)
self.serialMock.expected_commands[self._crc_message(b'\x20\x07\x00\x08\x00\x08', False)] = False
self.machine.coils["c_flipper_main"].pulse()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
# pulse it with other settings (when rule is active)
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x03\x23\x2a\x00')] = False
self.serialMock.expected_commands[self._crc_message(b'\x20\x07\x00\x08\x00\x08', False)] = False
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x03\x23\x0a\x00')] = False
self.machine.coils["c_flipper_main"].pulse(42)
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x02\x04\x0a\x20')] = False
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x03\x00\x0a\x26')] = False
self.serialMock.expected_commands[self._crc_message(b'\x20\x17\x03\x83')] = False
self.serialMock.expected_commands[self._crc_message(b'\x20\x17\x03\x82')] = False
self.machine.flippers["f_test_hold"].disable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
# enable a coil (which is already configured right)
self.serialMock.expected_commands[self._crc_message(b'\x20\x07\x00\x02\x00\x02', False)] = False
self.machine.coils["c_test_allow_enable"].enable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
# disable it
self.serialMock.expected_commands[self._crc_message(b'\x20\x07\x00\x00\x00\x02', False)] = False
self.machine.coils["c_test_allow_enable"].disable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
# pulse it
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x01\x02\x17\x00')] = False
self.serialMock.expected_commands[self._crc_message(b'\x20\x07\x00\x02\x00\x02', False)] = False
self.machine.coils["c_test_allow_enable"].pulse()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
# pulse it again with same settings (no reconfigure)
self.serialMock.expected_commands[self._crc_message(b'\x20\x07\x00\x02\x00\x02', False)] = False
self.machine.coils["c_test_allow_enable"].pulse()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
# pulse it with other settings (should reconfigure)
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x01\x02\x2a\x00')] = False
self.serialMock.expected_commands[self._crc_message(b'\x20\x07\x00\x02\x00\x02', False)] = False
self.machine.coils["c_test_allow_enable"].pulse(42)
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
class TestOPP(OPPCommon, MpfTestCase):
def get_config_file(self):
return 'config.yaml'
def setUp(self):
self.expected_duration = 1.5
opp.serial_imported = True
opp.serial = MagicMock()
self.serialMock = MockOppSocket()
board1_config = b'\x20\x0d\x01\x02\x03\x03' # wing1: solenoids, wing2: inputs, wing3: lamps, wing4: lamps
board2_config = b'\x21\x0d\x06\x02\x02\x01' # wing1: neo, wing2: inputs, wing3: inputs, wing4: solenoids
board1_version = b'\x20\x02\x00\x01\x01\x00' # 0.1.1.0
board2_version = b'\x21\x02\x00\x01\x01\x00' # 0.1.1.0
inputs1_message = b'\x20\x08\x00\x00\x00\x0c' # inputs 0+1 off, 2+3 on, 8 on
inputs2_message = b'\x21\x08\x00\x00\x00\x00'
self.serialMock.expected_commands = {
b'\xf0': b'\xf0\x20\x21', # boards 20 + 21 installed
self._crc_message(b'\x20\x0d\x00\x00\x00\x00'): self._crc_message(board1_config),
self._crc_message(b'\x21\x0d\x00\x00\x00\x00'): self._crc_message(board2_config), # get config
self._crc_message(b'\x20\x02\x00\x00\x00\x00'): self._crc_message(board1_version),
self._crc_message(b'\x21\x02\x00\x00\x00\x00'): self._crc_message(board2_version), # get version
self._crc_message(b'\x20\x14\x00\x02\x17\x00'): False, # configure coil 0
self._crc_message(b'\x20\x14\x01\x00\x17\x0f'): False, # configure coil 1
self._crc_message(b'\x20\x14\x02\x00\x0a\x0f'): False, # configure coil 2
self._crc_message(b'\x20\x14\x03\x00\x0a\x06'): False, # configure coil 3
self._crc_message(b'\x21\x14\x0c\x00\x0a\x01'): False, # configure coil 1-12
self._crc_message(b'\x20\x13\x07\x00\x00\x00\x00'): False, # turn off all incands
}
self.serialMock.permanent_commands = {
b'\xff': b'\xff',
self._crc_message(b'\x20\x08\x00\x00\x00\x00'): self._crc_message(inputs1_message),
self._crc_message(b'\x21\x08\x00\x00\x00\x00'): self._crc_message(inputs2_message), # read inputs
}
super().setUp()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
def test_opp(self):
self._test_coils()
self._test_leds()
self._test_matrix_lights()
self._test_autofires()
self._test_switches()
self._test_flippers()
# test hardware scan
self.maxDiff = 100000
info_str = """Connected CPUs:
- Port: com1 at 115200 baud. Chain Serial: com1
-> Board: 0x20 Firmware: 0x10100
-> Board: 0x21 Firmware: 0x10100
Incand cards:
- Chain: com1 Board: 0x20 Card: 0 Numbers: [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]
Input cards:
- Chain: com1 Board: 0x20 Card: 0 Numbers: [0, 1, 2, 3, 8, 9, 10, 11, 12, 13, 14, 15]
- Chain: com1 Board: 0x21 Card: 1 Numbers: [0, 1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]
Solenoid cards:
- Chain: com1 Board: 0x20 Card: 0 Numbers: [0, 1, 2, 3]
- Chain: com1 Board: 0x21 Card: 1 Numbers: [12, 13, 14, 15]
LEDs:
- Chain: com1 Board: 0x21 Card: 1
"""
self.assertEqual(info_str, self.machine.default_platform.get_info_string())
def _test_switches(self):
# initial switches
self.assertSwitchState("s_test", 1)
self.assertSwitchState("s_test_no_debounce", 1)
self.assertSwitchState("s_test_nc", 1)
self.assertSwitchState("s_flipper", 0)
self.assertSwitchState("s_test_card2", 1)
# switch change
permanent_commands = copy.deepcopy(self.serialMock.permanent_commands)
inputs1_message = b"\x20\x08\x00\x00\x01\x08" # inputs 0+1+2 off, 3 on, 8 off
inputs2_message = b'\x21\x08\x00\x00\x00\x00'
self.serialMock.permanent_commands = {
b'\xff': b'\xff',
self._crc_message(b'\x20\x08\x00\x00\x00\x00'): self._crc_message(inputs1_message),
self._crc_message(b'\x21\x08\x00\x00\x00\x00'): self._crc_message(inputs2_message)
}
switch = self.machine.switches["s_test_nc"]
while self.machine.switch_controller.is_active(switch):
self.advance_time_and_run(0.1)
self.assertSwitchState("s_test", 1)
self.assertSwitchState("s_test_no_debounce", 1)
self.assertSwitchState("s_test_nc", 0)
self.assertSwitchState("s_flipper", 0)
self.assertSwitchState("s_test_card2", 0)
self.serialMock.permanent_commands = permanent_commands
def _test_coils(self):
self.assertEqual("OPP com1 Board 0x20", self.machine.coils["c_test"].hw_driver.get_board_name())
# pulse coil
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x00\x02\x17\x00')] = False # configure coil 0
self.serialMock.expected_commands[self._crc_message(b'\x20\x07\x00\x01\x00\x01')] = False
self.machine.coils["c_test"].pulse()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
self.serialMock.expected_commands[self._crc_message(b'\x21\x14\x0c\x02\x0a\x00')] = False
self.serialMock.expected_commands[self._crc_message(b'\x21\x07\x10\x00\x10\x00')] = False
self.machine.coils["c_holdpower_16"].pulse(10)
# enable coil (not allowed)
with self.assertRaises(AssertionError):
self.machine.coils["c_test"].enable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
self.assertFalse(self.serialMock.crashed)
# disable coil
self.serialMock.expected_commands[self._crc_message(b'\x20\x07\x00\x00\x00\x01', False)] = False
self.machine.coils["c_test"].disable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
# pulse coil (with allow_enable set)
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x01\x02\x17\x00')] = False
self.serialMock.expected_commands[self._crc_message(b'\x20\x07\x00\x02\x00\x02', False)] = False
self.machine.coils["c_test_allow_enable"].pulse()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
# enable coil (with allow_enable set)
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x01\x00\x17\x0f')] = False
self.serialMock.expected_commands[self._crc_message(b'\x20\x07\x00\x02\x00\x02', False)] = False
self.machine.coils["c_test_allow_enable"].enable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
def _test_matrix_lights(self):
self.serialMock.expected_commands[self._crc_message(b'\x20\x13\x07\x00\x01\x00\x00', False)] = False
self.machine.lights["test_light1"].on()
self.machine.lights["test_light2"].off()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
self.serialMock.expected_commands[self._crc_message(b'\x20\x13\x07\x00\x03\x00\x00', False)] = False
self.machine.lights["test_light1"].on()
self.machine.lights["test_light2"].on()
        # the light matrix only flushes periodically, so wait until the platform has processed the batched update
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
def _test_leds(self):
# set leds 0, 1, 2 to brightness 255
self.serialMock.expected_commands[self._crc_message(b'\x21\x40\x00\x00\x00\x03\x00\x00\xff\xff\xff', False)] = False
self.machine.lights["test_led1"].on()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
# set leds 0, 1, 2 to brightness 0
# set leds 3, 4, 5 to brightness 255
self.serialMock.expected_commands[self._crc_message(b'\x21\x40\x00\x00\x00\x06\x00\x00\x00\x00\x00\xff\xff\xff', False)] = False
self.machine.lights["test_led1"].off()
self.machine.lights["test_led2"].on()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
# align with update task
self.advance_time_and_run(.1)
# two fades which are close enough together are batched
self.serialMock.expected_commands[self._crc_message(b'\x21\x40\x00\x00\x00\x06\x00\x64\xff\x00\x00\xff\x00\x00', False)] = False
self.machine.lights["test_led1"].color("red", fade_ms=100)
self.machine.lights["test_led2"].color("red", fade_ms=95)
# align with update task
self.advance_time_and_run(.1)
# fade leds 3, 4, 5 to brightness 245, 222, 179
self.serialMock.expected_commands[self._crc_message(b'\x21\x40\x00\x03\x00\x03\x07\xd0\xf5\xde\xb3', False)] = False
self.machine.lights["test_led2"].color("wheat", fade_ms=2000)
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
def _test_autofires(self):
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x00\x03\x17\x20')] = False
self.machine.autofires["ac_slingshot_test"].enable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x00\x02\x17\x20')] = False
self.machine.autofires["ac_slingshot_test"].disable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x01\x03\x17\x30')] = False
self.machine.autofires["ac_slingshot_test2"].enable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x01\x00\x17\x3f')] = False
self.machine.autofires["ac_slingshot_test2"].disable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x00\x0b\x17\x14')] = False
self.machine.autofires["ac_delayed_kickback"].enable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x00\x02\x17\x20')] = False
self.machine.autofires["ac_delayed_kickback"].disable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
def _test_flippers(self):
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x03\x21\x0a\x06')] = False
self.machine.flippers["f_test_single"].enable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
self.serialMock.expected_commands[self._crc_message(b'\x20\x14\x03\x00\x0a\x26')] = False
self.machine.flippers["f_test_single"].disable()
self._wait_for_processing()
self.assertFalse(self.serialMock.expected_commands)
| 44.983893 | 143 | 0.647689 | 4,706 | 33,513 | 4.422864 | 0.098173 | 0.053618 | 0.085423 | 0.069905 | 0.83194 | 0.798069 | 0.783607 | 0.751321 | 0.732007 | 0.701883 | 0 | 0.108835 | 0.214782 | 33,513 | 744 | 144 | 45.044355 | 0.68212 | 0.073882 | 0 | 0.530035 | 0 | 0.047703 | 0.240556 | 0.106124 | 0 | 0 | 0.008208 | 0 | 0.139576 | 1 | 0.065371 | false | 0 | 0.021201 | 0.015901 | 0.123675 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
22ae30db54a17dc72aff8eed6f6176cf586c77ac | 14,131 | py | Python | torch_mimicry/datasets/data_utils.py | gengcong940126/mimicry | 30a001426a4685d5a83258f52faa9564cefa9158 | [
"MIT"
] | 1 | 2021-05-24T02:48:33.000Z | 2021-05-24T02:48:33.000Z | torch_mimicry/datasets/data_utils.py | gengcong940126/mimicry | 30a001426a4685d5a83258f52faa9564cefa9158 | [
"MIT"
] | null | null | null | torch_mimicry/datasets/data_utils.py | gengcong940126/mimicry | 30a001426a4685d5a83258f52faa9564cefa9158 | [
"MIT"
] | 1 | 2021-05-24T02:48:34.000Z | 2021-05-24T02:48:34.000Z | """
Script for loading datasets.
"""
import os

import torch  # needed for torch.utils.data.ConcatDataset in the 'all' splits below
import torchvision
from torchvision import transforms
from torch_mimicry.datasets.imagenet import imagenet
def load_dataset(root, name, **kwargs):
"""
Loads different datasets specifically for GAN training.
By default, all images are normalized to values in the range [-1, 1].
Args:
root (str): Path to where datasets are stored.
name (str): Name of dataset to load.
Returns:
Dataset: Torch Dataset object for a specific dataset.
"""
if name == "cifar10":
return load_cifar10_dataset(root, **kwargs)
elif name == "cifar100":
return load_cifar100_dataset(root, **kwargs)
elif name == "imagenet_32":
return load_imagenet_dataset(root, size=32, **kwargs)
elif name == "imagenet_128":
return load_imagenet_dataset(root, size=128, **kwargs)
elif name == "stl10_48":
return load_stl10_dataset(root, size=48, **kwargs)
elif name == "celeba_64":
return load_celeba_dataset(root, size=64, **kwargs)
elif name == "celeba_128":
return load_celeba_dataset(root, size=128, **kwargs)
elif name == "lsun_bedroom_128":
return load_lsun_bedroom_dataset(root, size=128, **kwargs)
elif name == "fake_data":
return load_fake_dataset(root, **kwargs)
else:
raise ValueError("Invalid dataset name {} selected.".format(name))
def load_fake_dataset(root,
transform_data=True,
convert_tensor=True,
**kwargs):
"""
Loads fake dataset for testing.
Args:
root (str): Path to where datasets are stored.
transform_data (bool): If True, preprocesses data.
convert_tensor (bool): If True, converts image to tensor and preprocess
to range [-1, 1].
Returns:
Dataset: Torch Dataset object.
"""
dataset_dir = os.path.join(root, 'fake_data')
if not os.path.exists(dataset_dir):
os.makedirs(dataset_dir)
if transform_data:
transforms_list = []
if convert_tensor:
transforms_list += [
transforms.ToTensor(),
transforms.Normalize((0.5, ), (0.5, ))
]
transform = transforms.Compose(transforms_list)
else:
transform = None
dataset = torchvision.datasets.FakeData(transform=transform, **kwargs)
return dataset
def load_lsun_bedroom_dataset(root,
size=128,
transform_data=True,
convert_tensor=True,
**kwargs):
"""
Loads LSUN-Bedroom dataset.
Args:
root (str): Path to where datasets are stored.
size (int): Size to resize images to.
transform_data (bool): If True, preprocesses data.
convert_tensor (bool): If True, converts image to tensor and preprocess
to range [-1, 1].
Returns:
Dataset: Torch Dataset object.
"""
dataset_dir = os.path.join(root, 'lsun')
if not os.path.exists(dataset_dir):
raise ValueError(
"Missing directory {}. Download the dataset to this directory.".
format(dataset_dir))
if transform_data:
transforms_list = [transforms.CenterCrop(256), transforms.Resize(size)]
if convert_tensor:
transforms_list += [
transforms.ToTensor(),
transforms.Normalize((0.5, ), (0.5, ))
]
transform = transforms.Compose(transforms_list)
else:
transform = None
dataset = torchvision.datasets.LSUN(root=dataset_dir,
classes=['bedroom_train'],
transform=transform,
**kwargs)
return dataset
def load_celeba_dataset(root,
transform_data=True,
convert_tensor=True,
download=True,
split='all',
size=64,
**kwargs):
"""
Loads the CelebA dataset.
Args:
root (str): Path to where datasets are stored.
size (int): Size to resize images to.
transform_data (bool): If True, preprocesses data.
split (str): The split of data to use.
download (bool): If True, downloads the dataset.
convert_tensor (bool): If True, converts image to tensor and preprocess
to range [-1, 1].
Returns:
Dataset: Torch Dataset object.
"""
dataset_dir = os.path.join(root, 'celeba_all')
if not os.path.exists(dataset_dir):
os.makedirs(dataset_dir)
if transform_data:
# Build default transforms for scaling outputs to -1 to 1.
transforms_list = [
transforms.CenterCrop(
178), # Because each image is size (178, 218) spatially.
transforms.Resize(size)
]
if convert_tensor:
transforms_list += [
transforms.ToTensor(),
transforms.Normalize((0.5, ), (0.5, ))
]
transform = transforms.Compose(transforms_list)
else:
transform = None
if download:
print("INFO: download is True. Downloading CelebA images...")
dataset = torchvision.datasets.CelebA(root=dataset_dir,
transform=transform,
download=download,
split=split,
**kwargs)
return dataset
def load_stl10_dataset(root,
size=48,
split='unlabeled',
download=True,
transform_data=True,
convert_tensor=True,
**kwargs):
"""
Loads the STL10 dataset.
Args:
root (str): Path to where datasets are stored.
size (int): Size to resize images to.
transform_data (bool): If True, preprocesses data.
split (str): The split of data to use.
download (bool): If True, downloads the dataset.
convert_tensor (bool): If True, converts image to tensor and preprocess
to range [-1, 1].
Returns:
Dataset: Torch Dataset object.
"""
dataset_dir = os.path.join(root, 'stl10')
if not os.path.exists(dataset_dir):
os.makedirs(dataset_dir)
if transform_data:
transforms_list = [transforms.Resize(size)]
if convert_tensor:
transforms_list += [
transforms.ToTensor(),
transforms.Normalize((0.5, ), (0.5, ))
]
transform = transforms.Compose(transforms_list)
else:
transform = None
dataset = torchvision.datasets.STL10(root=dataset_dir,
split=split,
transform=transform,
download=download,
**kwargs)
return dataset
def load_imagenet_dataset(root,
size=32,
split='train',
download=True,
transform_data=True,
convert_tensor=True,
**kwargs):
"""
Loads the ImageNet dataset.
Args:
root (str): Path to where datasets are stored.
size (int): Size to resize images to.
transform_data (bool): If True, preprocesses data.
split (str): The split of data to use.
download (bool): If True, downloads the dataset.
convert_tensor (bool): If True, converts image to tensor and preprocess
to range [-1, 1].
Returns:
Dataset: Torch Dataset object.
"""
dataset_dir = os.path.join(root, 'imagenet')
if not os.path.exists(dataset_dir):
os.makedirs(dataset_dir)
if transform_data:
transforms_list = [transforms.CenterCrop(224), transforms.Resize(size)]
if convert_tensor:
transforms_list += [
transforms.ToTensor(),
transforms.Normalize((0.5, ), (0.5, ))
]
transform = transforms.Compose(transforms_list)
else:
transform = None
dataset = imagenet.ImageNet(root=dataset_dir,
split=split,
transform=transform,
download=download,
**kwargs)
return dataset
def load_cifar100_dataset(root,
split='train',
download=True,
transform_data=True,
convert_tensor=True,
**kwargs):
"""
Loads the CIFAR-100 dataset.
Args:
root (str): Path to where datasets are stored.
transform_data (bool): If True, preprocesses data.
split (str): The split of data to use.
download (bool): If True, downloads the dataset.
convert_tensor (bool): If True, converts image to tensor and preprocess
to range [-1, 1].
Returns:
Dataset: Torch Dataset object.
"""
dataset_dir = os.path.join(root, 'cifar100')
if not os.path.exists(dataset_dir):
os.makedirs(dataset_dir)
if transform_data:
transforms_list = []
if convert_tensor:
transforms_list += [
transforms.ToTensor(),
transforms.Normalize((0.5, ), (0.5, ))
]
transform = transforms.Compose(transforms_list)
else:
transform = None
# Build datasets
if split == "all":
train_dataset = torchvision.datasets.CIFAR100(root=dataset_dir,
train=True,
transform=transform,
download=download,
**kwargs)
test_dataset = torchvision.datasets.CIFAR100(root=dataset_dir,
train=False,
transform=transform,
download=download,
**kwargs)
# Merge the datasets
dataset = torch.utils.data.ConcatDataset([train_dataset, test_dataset])
elif split == "train":
dataset = torchvision.datasets.CIFAR100(root=dataset_dir,
train=True,
transform=transform,
download=download,
**kwargs)
elif split == "test":
dataset = torchvision.datasets.CIFAR100(root=dataset_dir,
train=False,
transform=transform,
download=download,
**kwargs)
else:
raise ValueError("split argument must one of ['train', 'val', 'all']")
return dataset
def load_cifar10_dataset(root,
split='train',
download=True,
transform_data=True,
**kwargs):
"""
Loads the CIFAR-10 dataset.
Args:
root (str): Path to where datasets are stored.
transform_data (bool): If True, preprocesses data.
split (str): The split of data to use.
download (bool): If True, downloads the dataset.
Returns:
Dataset: Torch Dataset object.
"""
dataset_dir = os.path.join(root, 'cifar10')
if not os.path.exists(dataset_dir):
os.makedirs(dataset_dir)
if transform_data:
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, ), (0.5, ))])
else:
transform = None
# Build datasets
if split == "all":
train_dataset = torchvision.datasets.CIFAR10(root=dataset_dir,
train=True,
transform=transform,
download=download,
**kwargs)
test_dataset = torchvision.datasets.CIFAR10(root=dataset_dir,
train=False,
transform=transform,
download=download,
**kwargs)
# Merge the datasets
dataset = torch.utils.data.ConcatDataset([train_dataset, test_dataset])
elif split == "train":
dataset = torchvision.datasets.CIFAR10(root=dataset_dir,
train=True,
transform=transform,
download=download,
**kwargs)
elif split == "test":
dataset = torchvision.datasets.CIFAR10(root=dataset_dir,
train=False,
transform=transform,
download=download,
**kwargs)
else:
raise ValueError("split argument must one of ['train', 'val', 'all']")
return dataset
| 33.016355 | 80 | 0.49862 | 1,295 | 14,131 | 5.328185 | 0.100386 | 0.047826 | 0.027536 | 0.054203 | 0.826957 | 0.808406 | 0.782754 | 0.745797 | 0.718696 | 0.694203 | 0 | 0.019104 | 0.418442 | 14,131 | 427 | 81 | 33.093677 | 0.820516 | 0.225462 | 0 | 0.683333 | 0 | 0 | 0.042822 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.016667 | 0 | 0.116667 | 0.004167 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
22b63bcafc8f9a231900c2b3ea912e553928712b | 2,454 | py | Python | content/models.py | AbdullahJaswal/Examination | 27f65a2e9630567ec213a13951965bb5b8db375d | [
"MIT"
] | null | null | null | content/models.py | AbdullahJaswal/Examination | 27f65a2e9630567ec213a13951965bb5b8db375d | [
"MIT"
] | null | null | null | content/models.py | AbdullahJaswal/Examination | 27f65a2e9630567ec213a13951965bb5b8db375d | [
"MIT"
] | null | null | null | from django.db import models
# Create your models here.
class Topic(models.Model):
name = models.CharField(max_length=255, blank=False, unique=True)
description = models.TextField(blank=True)
slug = models.SlugField(unique=True, max_length=255, blank=False)
sub_topics = models.ManyToManyField('SubTopic', blank=True)
is_active = models.BooleanField(default=True, blank=False)
order = models.IntegerField(blank=False)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
verbose_name = 'Topic'
verbose_name_plural = 'Topics'
ordering = ('order',)
def __str__(self):
return str(self.name)
class SubTopic(models.Model):
name = models.CharField(max_length=255, blank=False, unique=True)
description = models.TextField(blank=True)
slug = models.SlugField(unique=True, max_length=255, blank=False)
questions = models.ManyToManyField('Question', blank=True)
is_active = models.BooleanField(default=True, blank=False)
order = models.IntegerField(blank=False)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
verbose_name = 'Sub Topic'
verbose_name_plural = 'Sub Topics'
ordering = ('order',)
def __str__(self):
return str(self.name)
class Question(models.Model):
content = models.TextField(blank=False, unique=True)
explanation = models.TextField(blank=False)
answers = models.ManyToManyField('Answer', blank=True)
is_active = models.BooleanField(default=False, blank=False)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
verbose_name = 'Question'
verbose_name_plural = 'Questions'
ordering = ('id',)
def __str__(self):
return str(self.content)
class Answer(models.Model):
content = models.TextField(blank=False)
is_correct = models.BooleanField(default=False, blank=False)
is_active = models.BooleanField(default=False, blank=False)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
verbose_name = 'Answer'
verbose_name_plural = 'Answers'
ordering = ('id',)
def __str__(self):
return str(self.content)
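

# Hedged usage sketch (illustrative only; the names below are assumptions):
# wiring a topic to a sub-topic via the ORM, e.g. from a shell or a data
# migration.
#
#   topic = Topic.objects.create(name="Maths", slug="maths", order=1)
#   sub = SubTopic.objects.create(name="Algebra", slug="algebra", order=1)
#   topic.sub_topics.add(sub)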
| 26.967033 | 69 | 0.696414 | 295 | 2,454 | 5.59661 | 0.19322 | 0.084797 | 0.101757 | 0.121139 | 0.78195 | 0.78195 | 0.757723 | 0.700182 | 0.700182 | 0.651726 | 0 | 0.006073 | 0.194784 | 2,454 | 90 | 70 | 27.266667 | 0.829453 | 0.00978 | 0 | 0.642857 | 0 | 0 | 0.039539 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.017857 | 0.071429 | 0.785714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
a3d0165a8d4f65e4bdf00e5a89fcdbc40a0f6a48 | 62 | py | Python | admin/tests/__init__.py | RichardHirtle/c4all | a09c4b098cf1a58ed5e3ab6116a749a17ec035e0 | [
"MIT"
] | 4 | 2016-09-03T12:43:13.000Z | 2020-04-22T14:49:28.000Z | admin/tests/__init__.py | RichardHirtle/c4all | a09c4b098cf1a58ed5e3ab6116a749a17ec035e0 | [
"MIT"
] | 1 | 2019-09-25T12:49:01.000Z | 2020-08-04T11:33:09.000Z | admin/tests/__init__.py | RichardHirtle/c4all | a09c4b098cf1a58ed5e3ab6116a749a17ec035e0 | [
"MIT"
] | 3 | 2015-03-17T13:38:42.000Z | 2016-05-06T15:06:31.000Z | from thread import *
from comment import *
from user import *
| 15.5 | 21 | 0.758065 | 9 | 62 | 5.222222 | 0.555556 | 0.425532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193548 | 62 | 3 | 22 | 20.666667 | 0.94 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a3ff4f3bce36a4b693268e47baacd7388965b88f | 222 | py | Python | test/conftest.py | angelo-v/docker-testinfra | a49c07f87d4afcdf37f5099d81da649ed1d118dd | [
"MIT"
] | 2 | 2019-03-02T16:31:10.000Z | 2019-03-20T18:15:40.000Z | test/conftest.py | angelo-v/docker-testinfra | a49c07f87d4afcdf37f5099d81da649ed1d118dd | [
"MIT"
] | 2 | 2018-04-06T10:41:43.000Z | 2018-06-01T07:34:59.000Z | test/conftest.py | angelo-v/docker-testinfra | a49c07f87d4afcdf37f5099d81da649ed1d118dd | [
"MIT"
] | 3 | 2018-03-20T15:53:29.000Z | 2019-04-17T19:28:51.000Z | import docker
import pytest
@pytest.fixture(scope="session")
def client():
return docker.from_env()
@pytest.fixture(scope="session")
def image(client):
img, _ = client.images.build(path='./src')
return img
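

# Hedged usage sketch (illustrative, not part of the original file): a test
# module such as test_image.py could consume these fixtures like so; the
# attribute check is an assumption made for illustration.
#
#   def test_image_builds(image):
#       assert image.id is not None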
| 18.5 | 47 | 0.693694 | 29 | 222 | 5.241379 | 0.586207 | 0.171053 | 0.236842 | 0.328947 | 0.368421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153153 | 222 | 11 | 48 | 20.181818 | 0.808511 | 0 | 0 | 0.222222 | 0 | 0 | 0.085586 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0.111111 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
430c99c58df7067d4415877d5d4cd65db7d7ea6f | 34 | py | Python | py/ssd/models/box_head/__init__.py | zjZSTU/SSD | ae137301201b66df8566fd68a617c04a2a2f8576 | [
"Apache-2.0"
] | 9 | 2020-07-24T10:03:38.000Z | 2022-03-06T08:59:51.000Z | py/ssd/models/box_head/__init__.py | zjZSTU/SSD | ae137301201b66df8566fd68a617c04a2a2f8576 | [
"Apache-2.0"
] | 4 | 2021-06-08T21:34:12.000Z | 2022-03-12T00:30:20.000Z | py/ssd/models/box_head/__init__.py | zjZSTU/SSD | ae137301201b66df8566fd68a617c04a2a2f8576 | [
"Apache-2.0"
] | 3 | 2020-07-24T10:03:43.000Z | 2022-03-05T15:26:48.000Z | from .build import build_box_head
| 17 | 33 | 0.852941 | 6 | 34 | 4.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4316f9b6173530f3e1307dc8f2482fbbc7e606a5 | 130 | py | Python | twitter_stream/admin.py | vrtt/django-twitter-stream | 678c69b3a3d21ea8f1f197f7ee25f658852aa44a | [
"MIT"
] | 54 | 2015-01-28T23:13:20.000Z | 2021-04-26T09:28:16.000Z | twitter_stream/admin.py | vrtt/django-twitter-stream | 678c69b3a3d21ea8f1f197f7ee25f658852aa44a | [
"MIT"
] | 11 | 2015-08-29T09:42:42.000Z | 2020-04-19T19:28:16.000Z | twitter_stream/admin.py | vrtt/django-twitter-stream | 678c69b3a3d21ea8f1f197f7ee25f658852aa44a | [
"MIT"
] | 25 | 2015-01-26T19:05:20.000Z | 2020-05-09T10:26:50.000Z | from django.contrib import admin
from . import models
admin.site.register(models.FilterTerm)
admin.site.register(models.ApiKey)
| 18.571429 | 38 | 0.815385 | 18 | 130 | 5.888889 | 0.555556 | 0.169811 | 0.320755 | 0.433962 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092308 | 130 | 6 | 39 | 21.666667 | 0.898305 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
4a2e092fe7525a424dfd5701109f18c64bd13652 | 167 | py | Python | coko/classes/exceptions.py | dante-signal31/coko | c803433f28602b0ecbbd86329d624557e4986a10 | [
"BSD-3-Clause"
] | null | null | null | coko/classes/exceptions.py | dante-signal31/coko | c803433f28602b0ecbbd86329d624557e4986a10 | [
"BSD-3-Clause"
] | null | null | null | coko/classes/exceptions.py | dante-signal31/coko | c803433f28602b0ecbbd86329d624557e4986a10 | [
"BSD-3-Clause"
] | null | null | null |
class CokoException(Exception):
pass
class FolderNotFound(CokoException):
def __init__(self, incorrect_path):
self.incorrect_path = incorrect_path
| 16.7 | 44 | 0.742515 | 17 | 167 | 6.882353 | 0.588235 | 0.333333 | 0.290598 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185629 | 167 | 9 | 45 | 18.555556 | 0.860294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.2 | 0 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
4a3c5eee27473caddbae27946520c6792d571740 | 39 | py | Python | gap_minder_collab.py | damrajac/the_gapminder_project | 5f25fe78965fffa544b633dbd1f641575340dd7b | [
"MIT"
] | null | null | null | gap_minder_collab.py | damrajac/the_gapminder_project | 5f25fe78965fffa544b633dbd1f641575340dd7b | [
"MIT"
] | null | null | null | gap_minder_collab.py | damrajac/the_gapminder_project | 5f25fe78965fffa544b633dbd1f641575340dd7b | [
"MIT"
] | null | null | null | print ("i did it all for the nookie")
| 13 | 37 | 0.666667 | 8 | 39 | 3.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 39 | 2 | 38 | 19.5 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0.710526 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
4a6146b4401418f370438a574f1c66adbf629614 | 12,231 | py | Python | tabular_class.py | hmhyau/rl-intention | 29c84fd9abaa6149fc170531d0ae904fb23047a4 | [
"MIT"
] | 3 | 2020-11-13T19:07:55.000Z | 2021-03-06T10:33:15.000Z | tabular_class.py | hmhyau/rl-intention | 29c84fd9abaa6149fc170531d0ae904fb23047a4 | [
"MIT"
] | null | null | null | tabular_class.py | hmhyau/rl-intention | 29c84fd9abaa6149fc170531d0ae904fb23047a4 | [
"MIT"
] | null | null | null | import numpy as np
from base_class import TabularRLModel
from schedules import LinearSchedule, ExponentialSchedule
from gym.spaces import Tuple, Discrete
import cloudpickle as pickle
remove = ['hvalues', 'qvalues', 'policy']
class QTabularRLModel(TabularRLModel):
def __init__(
self,
policy,
env,
gamma=0.99,
learning_rate=1e-2,
buffer_size=None,
exploration_type='linear',
exploration_frac=None,
exploration_ep=250,
exploration_initial_eps=1.,
exploration_final_eps=0.05,
double_q=False,
policy_kwargs=None,
seed=None,
intent=False
):
super(QTabularRLModel, self).__init__(
policy,
env,
gamma,
learning_rate,
buffer_size,
exploration_type,
exploration_frac,
exploration_ep,
exploration_initial_eps,
exploration_final_eps,
double_q,
policy_kwargs,
seed,
intent
)
self._aliases()
def _aliases(self):
self.qvalues = self.policy.qvalues
if self.policy.intent:
self.hvalues = self.policy.hvalues
def learn(self, total_timesteps=None, total_episodes=None, log_interval=100, ckpt_interval=100, ckpt_path=None):
last_100rewards = np.zeros(100)
        last_100rewards[:] = np.nan  # np.NaN was removed in NumPy 2.0
if total_timesteps and total_episodes:
raise ValueError("Only one of total_timesteps or total_episodes can be specified")
if ckpt_path is None:
print('Checkpoint path is not provided, no intermediate models will be saved')
loop_type = 'episode' if total_episodes else 'timesteps'
loop_var = total_timesteps if total_timesteps is not None else total_episodes
if self.exploration_type == 'linear':
self.exploration = LinearSchedule(
frac=self.exploration_frac * loop_var,
initial=self.exploration_initial_eps,
final=self.exploration_final_eps)
elif self.exploration_type == 'exponential':
self.exploration = ExponentialSchedule(
frac=self.exploration_frac,
initial=self.exploration_initial_eps,
final=self.exploration_final_eps)
train = True
done = False
step = 0
ep_reward = 0
obs = self.env.reset()
while train:
if loop_type == 'episode':
update_eps = self.exploration.value(self.ep_done)
if loop_type == 'timesteps':
update_eps = self.exploration.value(self.elapsed_steps)
if np.random.random_sample() > update_eps:
action, value = self.policy.predict(obs, deterministic=True)
else:
action, value = self.policy.predict(obs, deterministic=False)
next_obs, reward, done, info = self.env.step(action)
argmax_a = np.argmax(self.qvalues[next_obs])
if isinstance(self.observation_space, Tuple):
expected_reward = reward + self.gamma*self.qvalues[next_obs + (argmax_a,)]*(1-int(done))-self.qvalues[obs + (action,)]
self.qvalues[obs + (action,)] += self.learning_rate * expected_reward
if self.policy.intent:
intent_update = np.zeros(self.qvalues.shape)
intent_update[obs + (action,)] += 1
expected_intent = intent_update + self.gamma * self.hvalues[next_obs + (argmax_a,)] * (1-int(done)) - self.hvalues[obs + (action,)]
self.hvalues[obs + (action,)] = self.hvalues[obs + (action,)] + self.learning_rate * expected_intent
if isinstance(self.observation_space, Discrete):
expected_reward = reward + self.gamma*np.max(self.qvalues[next_obs])*(1-int(done))-self.qvalues[obs, action]
self.qvalues[obs, action] += self.learning_rate * expected_reward
if self.policy.intent:
intent_update = np.zeros(self.qvalues.shape)
intent_update[obs, action] += 1
expected_intent = intent_update + self.gamma * self.hvalues[next_obs, argmax_a] * (1-int(done)) - self.hvalues[obs, action]
self.hvalues[obs, action] = self.hvalues[obs, action] + self.learning_rate * expected_intent
obs = next_obs
step += 1
ep_reward += reward
self.elapsed_steps += 1
if loop_type == 'timesteps':
if self.elapsed_steps == total_timesteps:
train = False
if done:
last_100rewards[self.ep_done%100] = ep_reward
print("\rEpisode {}/{}, Average Reward {}".format(self.ep_done,total_episodes, np.nanmean(last_100rewards)),end="")
self.ep_done += 1
step = 0
ep_reward = 0
obs = self.env.reset()
if loop_type == 'episode':
if self.ep_done >= total_episodes:
train = False
if ckpt_path is not None and ckpt_interval:
if loop_type == 'episode':
if self.ep_done % ckpt_interval == 0 and done:
ckpt_str = str(self.ep_done)
full_path = ckpt_path + '/' + ckpt_str
super(QTabularRLModel, self).save(full_path)
if loop_type == 'timesteps':
if self.elapsed_steps % ckpt_interval == 0 and done:
ckpt_str = str(self.ep_done)
full_path = ckpt_path + '/' + ckpt_str
super(QTabularRLModel, self).save(full_path)
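

# Hedged usage sketch (illustrative; the environment name and hyperparameters
# are assumptions, and the concrete policy object comes from elsewhere in
# this project):
#
#   env = gym.make('FrozenLake-v0')
#   model = QTabularRLModel(policy, env, learning_rate=0.1,
#                           exploration_frac=0.5, intent=True)
#   model.learn(total_episodes=500, ckpt_interval=100, ckpt_path='./ckpts')
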
class MCTabularRLModel(TabularRLModel):
def __init__(
self,
policy,
env,
gamma=0.99,
learning_rate=1e-2,
buffer_size=None,
exploration_type='linear',
exploration_frac=None,
exploration_ep=250,
exploration_initial_eps=1.,
exploration_final_eps=0.05,
double_q=False,
policy_kwargs=None,
seed=None,
intent=False
):
super(MCTabularRLModel, self).__init__(
policy,
env,
gamma,
learning_rate,
buffer_size,
exploration_type,
exploration_frac,
exploration_ep,
exploration_initial_eps,
exploration_final_eps,
double_q,
policy_kwargs,
seed,
intent
)
self._aliases()
def _aliases(self):
self.qvalues = self.policy.qvalues
if self.policy.intent:
self.hvalues = self.policy.hvalues
def learn(self, total_timesteps=None, total_episodes=None, log_interval=100, ckpt_interval=100, ckpt_path=None):
def _sample_episode():
sample = []
obs = self.env.reset()
done = False
while not done:
update_eps = self.exploration.value(self.ep_done)
if np.random.random_sample() > update_eps:
action, value = self.policy.predict(obs, deterministic=True)
else:
action, value = self.policy.predict(obs, deterministic=False)
new_obs, reward, done, info = self.env.step(action)
sample.append((obs, action, reward))
obs = new_obs
return sample
        last_100rewards = np.zeros(100)
        last_100rewards[:] = np.nan
loop_var = total_timesteps if total_timesteps is not None else total_episodes
if total_timesteps is not None:
raise ValueError('Only total_episodes can be specified for this class')
if self.exploration_type == 'linear':
self.exploration = LinearSchedule(
frac=self.exploration_frac * loop_var,
initial=self.exploration_initial_eps,
final=self.exploration_final_eps)
elif self.exploration_type == 'exponential':
self.exploration = ExponentialSchedule(
frac=self.exploration_frac,
initial=self.exploration_initial_eps,
final=self.exploration_final_eps)
train = True
ep_reward = 0
while train:
sample = _sample_episode()
obses, actions, rewards = zip(*sample)
            ep_reward = np.sum(rewards)  # recorded into last_100rewards below
for idx in range(len(sample)):
self.elapsed_steps += 1
discounts = np.array([self.gamma**i for i in range(len(obses)+1)])
expected_reward = sum(rewards[idx:]*discounts[:-(1+idx)]) - self.qvalues[obses[idx], actions[idx]]
self.qvalues[obses[idx], actions[idx]] += self.learning_rate * expected_reward
if self.policy.intent:
intent_update = np.zeros(self.qvalues.shape)
for obs, action in zip(obses[idx:], actions[idx:]):
intent_update[obs, action] += self.learning_rate
tmp = self.hvalues[obses[idx], actions[idx]] * (1-self.learning_rate)
tmp += intent_update
self.hvalues[obses[idx], actions[idx]] = tmp
self.ep_done += 1
last_100rewards[self.ep_done%100] = ep_reward
print("\rEpisode {}/{}, Average Reward {}".format(self.ep_done,total_episodes, np.nanmean(last_100rewards)),end="")
ep_reward = 0
if self.ep_done >= total_episodes:
train = False
            if ckpt_path is not None and ckpt_interval:
                # This class is episode-driven (total_timesteps raises above),
                # so checkpoints are taken on episode boundaries only.
                if self.ep_done % ckpt_interval == 0:
                    ckpt_str = str(self.ep_done)
                    full_path = ckpt_path + '/' + ckpt_str
super(MCTabularRLModel, self).save(full_path) | 39.970588 | 151 | 0.547543 | 1,269 | 12,231 | 5.054374 | 0.120567 | 0.095884 | 0.024945 | 0.036171 | 0.805426 | 0.770034 | 0.740879 | 0.740879 | 0.699252 | 0.677736 | 0 | 0.011294 | 0.36293 | 12,231 | 306 | 152 | 39.970588 | 0.811858 | 0.109394 | 0 | 0.72807 | 0 | 0 | 0.036812 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030702 | false | 0 | 0.02193 | 0 | 0.065789 | 0.013158 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4a8d4acd05d1143219e6817dde7486231d642c60 | 198 | py | Python | 3rd_party/occa/scripts/docs/api_docgen/__init__.py | RonRahaman/nekRS | ffc02bca33ece6ba3330c4ee24565b1c6b5f7242 | [
"BSD-3-Clause"
] | 312 | 2015-07-02T09:02:09.000Z | 2022-03-30T16:13:23.000Z | 3rd_party/occa/scripts/docs/api_docgen/__init__.py | neams-th-coe/nekRS | 5d2c8ab3d14b3fb16db35682336a1f96000698bb | [
"BSD-3-Clause"
] | 520 | 2015-07-12T18:32:38.000Z | 2022-03-31T16:15:00.000Z | 3rd_party/occa/scripts/docs/api_docgen/__init__.py | neams-th-coe/nekRS | 5d2c8ab3d14b3fb16db35682336a1f96000698bb | [
"BSD-3-Clause"
] | 79 | 2015-07-22T22:10:56.000Z | 2022-03-17T09:07:01.000Z | from .api_docgen import *
from .constants import *
from .dev_utils import *
from .markdown import *
from .system_commands import *
from .types import *
from .utils import *
from .xml_utils import *
| 22 | 30 | 0.757576 | 28 | 198 | 5.214286 | 0.428571 | 0.479452 | 0.205479 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161616 | 198 | 8 | 31 | 24.75 | 0.879518 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
4abbe259886b39dee3f29987a8a41e77e5509580 | 191 | py | Python | streamlit_custom_slider/app.py | lukexyz/iris | 7290525d15f5283dfdfb6bb9c53c5de479bf30cb | [
"MIT"
] | 1 | 2021-01-04T18:13:28.000Z | 2021-01-04T18:13:28.000Z | streamlit_custom_slider/app.py | lukexyz/iris | 7290525d15f5283dfdfb6bb9c53c5de479bf30cb | [
"MIT"
] | null | null | null | streamlit_custom_slider/app.py | lukexyz/iris | 7290525d15f5283dfdfb6bb9c53c5de479bf30cb | [
"MIT"
] | 1 | 2021-11-08T14:39:57.000Z | 2021-11-08T14:39:57.000Z | # Import the wrapper function from your package
from streamlit_custom_slider import st_custom_slider
import streamlit as st
st.title("Testing Streamlit custom components")
st_custom_slider() | 31.833333 | 52 | 0.848168 | 28 | 191 | 5.571429 | 0.535714 | 0.230769 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109948 | 191 | 6 | 53 | 31.833333 | 0.917647 | 0.235602 | 0 | 0 | 0 | 0 | 0.241379 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
43babda7f408f7d2f840cc5ab2dc130d42ae5845 | 1,924 | py | Python | changelog_generator/tests/test_entry_point.py | nghialt/gitlab-changelog-generator | d9af4baa1d76ab9436548e47842eb80935f9a8bd | [
"MIT"
] | 3 | 2019-11-01T15:13:31.000Z | 2020-02-03T06:27:16.000Z | changelog_generator/tests/test_entry_point.py | nghialt/gitlab-changelog-generator | d9af4baa1d76ab9436548e47842eb80935f9a8bd | [
"MIT"
] | 19 | 2018-06-17T21:07:32.000Z | 2020-06-04T07:07:41.000Z | changelog_generator/tests/test_entry_point.py | nghialt/gitlab-changelog-generator | d9af4baa1d76ab9436548e47842eb80935f9a8bd | [
"MIT"
] | 5 | 2018-11-24T08:04:39.000Z | 2020-10-28T19:54:29.000Z | import sys
import unittest
from changelog_generator.entry_point import process_arguments
class TestGenerator(unittest.TestCase):
def test_process_arguments(self):
sys.argv = [
"script",
"--ip",
"localhost",
"--group",
"test-group",
"--project",
"test-project",
"--branches",
"release",
"master",
"--version",
"1.2.3",
"--token",
"test-token",
]
expected_result = {
"ip_address": "localhost",
"api_version": "4",
"project_group": "test-group",
"project": "test-project",
"branch_one": "release",
"branch_two": "master",
"version": "1.2.3",
"changelog": "N",
"token": "test-token",
"ssl": True,
}
result = process_arguments()
self.assertEqual(result, expected_result)
def test_ssl_false(self):
sys.argv = [
"script",
"--ip",
"localhost",
"--group",
"test-group",
"--project",
"test-project",
"--branches",
"release",
"master",
"--version",
"1.2.3",
"--token",
"test-token",
"--ssl",
"False"
]
expected_result = {
"ip_address": "localhost",
"api_version": "4",
"project_group": "test-group",
"project": "test-project",
"branch_one": "release",
"branch_two": "master",
"version": "1.2.3",
"changelog": "N",
"token": "test-token",
"ssl": False,
}
result = process_arguments()
self.assertEqual(result, expected_result)
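

# Hedged usage sketch (illustrative; the console-script name is an
# assumption): the second test above corresponds to an invocation along the
# lines of
#
#   changelog-generator --ip localhost --group test-group \
#       --project test-project --branches release master \
#       --version 1.2.3 --token test-token --ssl False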
| 25.315789 | 61 | 0.422557 | 150 | 1,924 | 5.266667 | 0.286667 | 0.081013 | 0.070886 | 0.106329 | 0.806329 | 0.789873 | 0.789873 | 0.789873 | 0.64557 | 0.64557 | 0 | 0.012693 | 0.426715 | 1,924 | 75 | 62 | 25.653333 | 0.703536 | 0 | 0 | 0.794118 | 0 | 0 | 0.272349 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 1 | 0.029412 | false | 0 | 0.044118 | 0 | 0.088235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
43d9b5675831adfd072fd306253a7bf37e40ac0c | 2,422 | py | Python | mysite/db_router.py | Duwo/my_registration | fff632d3a64255ef9f53fef4f4dc08b183226fb8 | [
"MIT"
] | null | null | null | mysite/db_router.py | Duwo/my_registration | fff632d3a64255ef9f53fef4f4dc08b183226fb8 | [
"MIT"
] | null | null | null | mysite/db_router.py | Duwo/my_registration | fff632d3a64255ef9f53fef4f4dc08b183226fb8 | [
"MIT"
] | null | null | null | class AuthRouter(object):
"""
A router to control all database operations on models in the
auth application.
"""
def db_for_read(self, model, **hints):
"""
Attempts to read auth models go to auth_db.
"""
if model._meta.app_label == 'auth':
return 'auth_db'
return None
def db_for_write(self, model, **hints):
"""
Attempts to write auth models go to auth_db.
"""
if model._meta.app_label == 'auth':
return 'auth_db'
return None
def allow_relation(self, obj1, obj2, **hints):
"""
Allow relations if a model in the auth app is involved.
"""
if obj1._meta.app_label == 'auth' or \
obj2._meta.app_label == 'auth':
return True
return None
def allow_migrate(self, db, app_label, model_name=None, **hints):
"""
Make sure the auth app only appears in the 'auth_db'
database.
"""
if app_label == 'auth':
return db == 'auth_db'
return None
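

# Hedged configuration sketch (illustrative; the aliases and the module path
# are assumptions): both routers expect Django settings along these lines.
#
#   DATABASES = {
#       'default': {},
#       'auth_db': {...},
#       'portfolio_db': {...},
#   }
#   DATABASE_ROUTERS = ['mysite.db_router.AuthRouter',
#                       'mysite.db_router.PortfolioRouter']
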
class PortfolioRouter(object):
"""
A router to control all database operations on models in the
portfolio application.
"""
def db_for_read(self, model, **hints):
"""
Attempts to read portfolio models go to portfolio_db.
"""
if model._meta.app_label == 'portfolio':
return 'portfolio_db'
return None
def db_for_write(self, model, **hints):
"""
Attempts to write portfolio models go to portfolio_db.
"""
if model._meta.app_label == 'portfolio':
return 'portfolio_db'
if model._meta.app_label == 'photologue':
return 'portfolio_db'
return None
def allow_relation(self, obj1, obj2, **hints):
"""
Allow relations if a model in the portfolio app is involved.
"""
if obj1._meta.app_label == 'portfolio' or \
obj2._meta.app_label == 'portfolio':
return True
return None
def allow_migrate(self, db, app_label, model_name=None, **hints):
"""
Make sure the portfolio app only appears in the 'portfolio_db'
database.
"""
if app_label == 'portfolio':
return db == 'portfolio_db'
        if app_label == 'photologue':
            # photologue models are written to 'portfolio_db' (see
            # db_for_write above), so their migrations belong there too.
            return db == 'portfolio_db'
return None | 29.901235 | 70 | 0.563171 | 290 | 2,422 | 4.527586 | 0.172414 | 0.085301 | 0.082254 | 0.049505 | 0.837014 | 0.740289 | 0.728865 | 0.714395 | 0.667174 | 0.667174 | 0 | 0.004981 | 0.336912 | 2,422 | 81 | 71 | 29.901235 | 0.812578 | 0.253097 | 0 | 0.675 | 0 | 0 | 0.106302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
601b29e291ebaecbfea5e2a4d548c08ddddb7b39 | 2,929 | py | Python | examples/student.py | mojodojo101/TryHarder-InfoSecPrep | 3fd4f96590704ba086335ab847173751ad56f580 | [
"MIT"
] | 5 | 2020-10-28T04:05:10.000Z | 2021-11-30T09:42:16.000Z | examples/student.py | mojodojo101/TryHarder-InfoSecPrep | 3fd4f96590704ba086335ab847173751ad56f580 | [
"MIT"
] | 1 | 2020-10-28T03:45:52.000Z | 2020-10-28T03:45:52.000Z | examples/student.py | mojodojo101/TryHarder-InfoSecPrep | 3fd4f96590704ba086335ab847173751ad56f580 | [
"MIT"
] | 5 | 2020-04-22T08:02:39.000Z | 2021-06-30T06:30:31.000Z | import discord
from discord.ext import commands
# Note: the bot instance is supplied to setup()/teardown() by discord.py's
# extension loader, so no module-level bot import is needed here.
async def _add_student_role(ctx, role_name):
    """Shared body for the student-role commands below (behaviour unchanged)."""
    server = ctx.guild
    channel = discord.utils.get(server.channels, name="student-roles")
    if ctx.channel == channel:
        member = server.get_member(ctx.author.id)
        role = discord.utils.get(server.roles, name=role_name)
        await member.add_roles(role)
        await channel.send("%s - you have been added to %s" % (member.mention, role.mention))


@commands.command()
async def oscp(ctx):
    await _add_student_role(ctx, "OSCP Student")


@commands.command()
async def oswe(ctx):
    await _add_student_role(ctx, "OSWE Student")


@commands.command()
async def osce(ctx):
    await _add_student_role(ctx, "OSCE Student")


@commands.command()
async def oswp(ctx):
    await _add_student_role(ctx, "OSWP Student")


@commands.command()
async def wapt(ctx):
    await _add_student_role(ctx, "WAPT Student")
def setup(bot):
bot.add_command(oscp)
bot.add_command(oswe)
bot.add_command(osce)
bot.add_command(oswp)
bot.add_command(wapt)
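

# Hedged usage sketch (the module path is an assumption made for
# illustration): the main bot file would load this file as an extension with
#
#   bot.load_extension('examples.student')
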
def teardown(bot):
bot.remove_command(oscp)
bot.remove_command(oswe)
bot.remove_command(osce)
bot.remove_command(oswp)
bot.remove_command(wapt) | 35.289157 | 98 | 0.657904 | 372 | 2,929 | 5.126344 | 0.123656 | 0.062926 | 0.078658 | 0.110121 | 0.824331 | 0.824331 | 0.824331 | 0.824331 | 0.824331 | 0.824331 | 0 | 0 | 0.22704 | 2,929 | 83 | 99 | 35.289157 | 0.842314 | 0 | 0 | 0.666667 | 0 | 0 | 0.093857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026667 | false | 0 | 0.04 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
60598a1d290ea74384028a4eaca8bf23827dbfb0 | 96 | py | Python | venv/lib/python3.8/site-packages/pyls/plugins/mccabe_lint.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/pyls/plugins/mccabe_lint.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/pyls/plugins/mccabe_lint.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/ae/2b/85/2cc22118d4ad12a20ce5952e5496f3ae4865be8059a4fcf41db5a31a46 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40625 | 0 | 96 | 1 | 96 | 96 | 0.489583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
60758b9e944f3da2bd4e6b8152561ddec86589a5 | 15,297 | py | Python | tests/test_ordinal_column.py | trungngv/CHAID | 794756560872e944cec6a6dcc780feeeeadc51ed | [
"Apache-2.0"
] | 141 | 2016-06-14T13:38:38.000Z | 2022-02-03T12:01:18.000Z | tests/test_ordinal_column.py | trungngv/CHAID | 794756560872e944cec6a6dcc780feeeeadc51ed | [
"Apache-2.0"
] | 110 | 2016-06-16T14:30:34.000Z | 2022-01-28T19:36:10.000Z | tests/test_ordinal_column.py | trungngv/CHAID | 794756560872e944cec6a6dcc780feeeeadc51ed | [
"Apache-2.0"
] | 47 | 2016-11-27T16:21:43.000Z | 2021-12-28T08:40:51.000Z | """
Testing module for the class OrdinalColumn
"""
from unittest import TestCase
import numpy as np
from numpy import nan
from setup_tests import list_ordered_equal, list_unordered_equal, CHAID
def test_all_ordinal_combinations():
arr = np.array([1.0, 2.0, 3.0, 4.0])
ordinal = CHAID.OrdinalColumn(arr)
assert [
i for i in ordinal.all_combinations()
] == [[[1], [2, 3, 4]],
[[1, 2], [3, 4]],
[[1], [2], [3, 4]],
[[1, 2, 3], [4]],
[[1], [2, 3], [4]],
[[1, 2], [3], [4]],
[[1], [2], [3], [4]]]
def test_all_ordinal_combinations_with_nan():
arr = np.array([1.0, 2.0, 3.0, np.nan])
ordinal = CHAID.OrdinalColumn(arr)
nan_val = np.array([np.nan]).astype(int)[0]
assert [
i for i in ordinal.all_combinations()
] == [[[nan_val], [1, 2, 3]],
[[nan_val, 1], [2, 3]],
[[1], [nan_val, 2, 3]],
[[nan_val], [1], [2, 3]],
[[nan_val, 1, 2], [3]],
[[1, 2], [nan_val, 3]],
[[nan_val], [1, 2], [3]],
[[nan_val, 1], [2], [3]],
[[1], [nan_val, 2], [3]],
[[1], [2], [nan_val, 3]],
[[nan_val], [1], [2], [3]]]
class TestOrdinalDeepCopy(TestCase):
""" Test fixture class for deep copy method """
def setUp(self):
""" Setup for copy tests"""
arr = np.array([1, 2, 3, 3, 3, 3])
self.orig = CHAID.OrdinalColumn(arr)
self.copy = self.orig.deep_copy()
def test_deep_copy_does_copy(self):
""" Ensure a copy actually happens when deep_copy is called """
assert id(self.orig) != id(self.copy), 'The vector objects must be different'
assert list_ordered_equal(self.copy, self.orig), 'Vector contents must be the same'
def test_changing_copy(self):
""" Test that altering the copy doesn't alter the original """
self.copy.arr[0] = 55.0
assert not list_ordered_equal(self.copy, self.orig), 'Altering one vector should not affect the other'
def test_metadata(self):
""" Ensure metadata is copied correctly or deep_copy """
assert self.copy.metadata == self.orig.metadata, 'Copied metadata should be equivilent'
class TestOrdinalGrouping(TestCase):
""" Test fixture class for deep copy method """
def setUp(self):
""" Setup for grouping tests """
arr = np.array([1.0, 2.0, 3.0, 3.0, 3.0, 3.0, 4.0, 5.0, 10.0])
self.col = CHAID.OrdinalColumn(arr)
def test_possible_groups(self):
""" Ensure possible groups are only adjacent numbers """
groupings = list(self.col.possible_groupings())
possible_groupings = [(1, 2), (2, 3), (3, 4), (4, 5)]
        assert list_unordered_equal(possible_groupings, groupings), 'Without NaNs, before any groups are identified, possible groupings are incorrectly identified.'
groups = list(self.col.groups())
actual_groups = [[1], [2], [3], [4], [5], [10]]
assert list_unordered_equal(actual_groups, groups), 'Without NaNs, before any groups are identified, actual groupings are incorrectly reported'
    def test_groups_after_grouping(self):
        """ Ensure possible groups are only adjacent numbers after identifying some groups """
        self.col.group(3, 4)
        self.col.group(3, 2)
        groupings = list(self.col.possible_groupings())
        possible_groupings = [(1, 3), (3, 5)]
        assert list_unordered_equal(possible_groupings, groupings), 'Without NaNs, after groups are identified, possible groupings are incorrectly identified.'
        groups = list(self.col.groups())
        actual_groups = [[1], [2, 3, 4], [5], [10]]
        assert list_unordered_equal(actual_groups, groups), 'Without NaNs, after groups are identified, actual groupings are incorrectly reported'
    def test_groups_after_copy(self):
        """ Ensure identified groupings survive a deep_copy """
        self.col.group(3, 4)
        self.col.group(3, 2)
        col = self.col.deep_copy()
        groupings = list(col.possible_groupings())
        possible_groupings = [(1, 3), (3, 5)]
        assert list_unordered_equal(possible_groupings, groupings), 'Without NaNs, after groups are identified, possible groupings are incorrectly identified.'
        groups = list(col.groups())
        actual_groups = [[1], [2, 3, 4], [5], [10]]
        assert list_unordered_equal(actual_groups, groups), 'Without NaNs, after groups are identified, actual groupings are incorrectly reported'
class TestOrdinalWithObjects(TestCase):
""" Test fixture class for deep copy method """
def setUp(self):
""" Setup for grouping tests """
arr = np.array(
[1.0, 2.0, 3.0, 3.0, 3.0, 3.0, 4.0, 5.0, 10.0, None],
dtype=object
)
self.col = CHAID.OrdinalColumn(arr)
def test_possible_groups(self):
""" Ensure possible groups are only adjacent numbers """
metadata = self.col.metadata
groupings = [(metadata[x], metadata[y]) for x, y in self.col.possible_groupings()]
possible_groupings = [
(1.0, 2.0), (2.0, 3.0), (3.0, 4.0), (4.0, 5.0), (1.0, '<missing>'), (2.0, '<missing>'), (3.0, '<missing>'),
(4.0, '<missing>'), (5.0, '<missing>'), (10.0, '<missing>')
]
        assert list_unordered_equal(possible_groupings, groupings), 'With NaNs, before any groups are identified, possible groupings are incorrectly calculated.'
        groups = [[self.col.metadata[i] for i in group] for group in self.col.groups()]
actual_groups = [[1.0], [2.0], [3.0], [4.0], [5.0], ['<missing>'], [10.0]]
assert list_unordered_equal(actual_groups, groups), 'With NaNs, before any groups are identified, actual groupings are incorrectly reported'
def test_groups_after_grouping(self):
""" Ensure possible groups are only adjacent numbers after identifing some groups """
self.col.group(3.0, 4.0)
self.col.group(3.0, 2.0)
groupings = [(self.col.metadata[x], self.col.metadata[y]) for x, y in self.col.possible_groupings()]
possible_groupings = [
(1.0, 3.0), (3.0, 5.0), (1.0, '<missing>'), (3.0, '<missing>'), (5.0, '<missing>'), (10.0, '<missing>')
]
        assert list_unordered_equal(possible_groupings, groupings), 'With NaNs, after groups are identified, possible groupings are incorrectly identified.'
groups = [[self.col.metadata[i] for i in group] for group in self.col.groups()]
actual_groups = [[1.0], [2.0, 3.0, 4.0], [5.0], [10.0], ['<missing>']]
assert list_unordered_equal(actual_groups, groups), 'With NaNs, with groups identified, actual groupings are incorrectly reported'
def test_groups_grouping_with_nan(self):
""" Ensure possible groups are only adjacent numbers after identifing some groups containing nans"""
self.col.group(4.0, self.col._nan)
self.col.group(3.0, 4.0)
self.col.group(3.0, 2.0)
groupings = [(self.col.metadata[x], self.col.metadata[y]) for x, y in self.col.possible_groupings()]
possible_groupings = [
(1.0, 3.0), (3.0, 5.0)
]
        assert list_unordered_equal(possible_groupings, groupings), 'With NaNs, after groups containing NaN are identified, possible groupings are incorrectly identified.'
groups = [[self.col.metadata[i] for i in group] for group in self.col.groups()]
actual_groups = [[1.0], [2.0, 3.0, 4.0, '<missing>'], [5.0], [10.0]]
assert list_unordered_equal(actual_groups, groups), 'With NaNs, with groups containing nan identified, actual groupings are incorrectly reported'
def test_groups_after_copy(self):
""" Ensure possible groups are only adjacent numbers after identifing some groups """
self.col.group(3.0, 4.0)
self.col.group(3.0, 2.0)
col = self.col.deep_copy()
groupings = [(col.metadata[x], col.metadata[y]) for x, y in col.possible_groupings()]
possible_groupings = [
(1.0, 3.0), (3.0, 5.0), (1.0, '<missing>'), (3.0, '<missing>'), (5.0, '<missing>'), (10.0, '<missing>')
]
        assert list_unordered_equal(possible_groupings, groupings), 'With NaNs, after groups are identified, possible groupings are incorrectly identified.'
groups = [[col.metadata[i] for i in group] for group in col.groups()]
actual_groups = [[1.0], [2.0, 3.0, 4.0], [5.0], [10.0], ['<missing>']]
assert list_unordered_equal(actual_groups, groups), 'With NaNs, with groups identified, actual groupings are incorrectly reported'
def test_groups_after_copy_with_nan(self):
""" Ensure possible groups are only adjacent numbers after identifing some groups containing nans"""
self.col.group(3.0, 4.0)
self.col.group(3.0, self.col._nan)
self.col.group(3.0, 2.0)
col = self.col.deep_copy()
groupings = [(col.metadata[x], col.metadata[y]) for x, y in col.possible_groupings()]
possible_groupings = [
(1.0, 3.0), (3.0, 5.0)
]
        assert list_unordered_equal(possible_groupings, groupings), 'With NaNs, after groups containing NaN are identified, possible groupings are incorrectly identified.'
groups = [[col.metadata[i] for i in group] for group in col.groups()]
actual_groups = [[1.0], [2.0, 3.0, 4.0, '<missing>'], [5.0], [10.0]]
assert list_unordered_equal(actual_groups, groups), 'With NaNs, with groups containing nan identified, actual groupings are incorrectly reported'
class TestOrdinalGroupingWithnan(TestCase):
""" Test fixture class for deep copy method """
def setUp(self):
""" Setup for grouping tests """
arr = np.array([1.0, 2.0, nan, 3.0, 3.0, nan, 3.0, 3.0, nan, 4.0, 5.0, 10.0])
self.col = CHAID.OrdinalColumn(arr)
def test_possible_groups(self):
""" Ensure possible groups are only adjacent numbers """
metadata = self.col.metadata
groupings = [(metadata[x], metadata[y]) for x, y in self.col.possible_groupings()]
possible_groupings = [
(1.0, 2.0), (2.0, 3.0), (3.0, 4.0), (4.0, 5.0), (1.0, '<missing>'), (2.0, '<missing>'), (3.0, '<missing>'),
(4.0, '<missing>'), (5.0, '<missing>'), (10.0, '<missing>')
]
assert list_unordered_equal(possible_groupings, groupings), 'With NaNs, before any groups are identified, possible grouping are incorrectly calculated.'
        groups = [[self.col.metadata[i] for i in group] for group in self.col.groups()]
actual_groups = [[1.0], [2.0], [3.0], [4.0], [5.0], ['<missing>'], [10.0]]
assert list_unordered_equal(actual_groups, groups), 'With NaNs, before any groups are identified, actual groupings are incorrectly reported'
def test_groups_after_grouping(self):
""" Ensure possible groups are only adjacent numbers after identifing some groups """
self.col.group(3.0, 4.0)
self.col.group(3.0, 2.0)
groupings = [(self.col.metadata[x], self.col.metadata[y]) for x, y in self.col.possible_groupings()]
possible_groupings = [
(1.0, 3.0), (3.0, 5.0), (1.0, '<missing>'), (3.0, '<missing>'), (5.0, '<missing>'), (10.0, '<missing>')
]
        assert list_unordered_equal(possible_groupings, groupings), 'With NaNs, with groups identified, possible groupings incorrectly identified.'
groups = [[self.col.metadata[i] for i in group] for group in self.col.groups()]
actual_groups = [[1.0], [2.0, 3.0, 4.0], [5.0], [10.0], ['<missing>']]
assert list_unordered_equal(actual_groups, groups), 'With NaNs, with groups identified, actual groupings are incorrectly reported'
def test_groups_grouping_with_nan(self):
""" Ensure possible groups are only adjacent numbers after identifing some groups containing nans"""
self.col.group(4.0, self.col._nan)
self.col.group(3.0, 4.0)
self.col.group(3.0, 2.0)
groupings = [(self.col.metadata[x], self.col.metadata[y]) for x, y in self.col.possible_groupings()]
possible_groupings = [
(1.0, 3.0), (3.0, 5.0)
]
        assert list_unordered_equal(possible_groupings, groupings), 'With NaNs, with groups containing NaN identified, possible groupings incorrectly identified.'
groups = [[self.col.metadata[i] for i in group] for group in self.col.groups()]
actual_groups = [[1.0], [2.0, 3.0, 4.0, '<missing>'], [5.0], [10.0]]
assert list_unordered_equal(actual_groups, groups), 'With NaNs, with groups containing nan identified, actual groupings are incorrectly reported'
def test_groups_after_copy(self):
""" Ensure possible groups are only adjacent numbers after identifing some groups """
self.col.group(3.0, 4.0)
self.col.group(3.0, 2.0)
col = self.col.deep_copy()
groupings = [(col.metadata[x], col.metadata[y]) for x, y in col.possible_groupings()]
possible_groupings = [
(1.0, 3.0), (3.0, 5.0), (1.0, '<missing>'), (3.0, '<missing>'), (5.0, '<missing>'), (10.0, '<missing>')
]
        assert list_unordered_equal(possible_groupings, groupings), 'With NaNs, with groups identified, possible groupings incorrectly identified.'
groups = [[col.metadata[i] for i in group] for group in col.groups()]
actual_groups = [[1.0], [2.0, 3.0, 4.0], [5.0], [10.0], ['<missing>']]
assert list_unordered_equal(actual_groups, groups), 'With NaNs, with groups identified, actual groupings are incorrectly reported'
def test_groups_after_copy_with_nan(self):
""" Ensure possible groups are only adjacent numbers after identifing some groups containing nans"""
self.col.group(3.0, 4.0)
self.col.group(3.0, self.col._nan)
self.col.group(3.0, 2.0)
col = self.col.deep_copy()
groupings = [(col.metadata[x], col.metadata[y]) for x, y in col.possible_groupings()]
possible_groupings = [
(1.0, 3.0), (3.0, 5.0)
]
        assert list_unordered_equal(possible_groupings, groupings), 'With NaNs, with groups containing NaN identified, possible groupings incorrectly identified.'
groups = [[col.metadata[i] for i in group] for group in col.groups()]
actual_groups = [[1.0], [2.0, 3.0, 4.0, '<missing>'], [5.0], [10.0]]
assert list_unordered_equal(actual_groups, groups), 'With NaNs, with groups containing nan identified, actual groupings are incorrectly reported'
class TestOrdinalConstructor(TestCase):
""" Test fixture class for testing external Ordinal contruction """
def setUp(self):
""" Setup for tests """
arr_with_nan = np.array([1.0, 2.0, nan, 3.0, 3.0, nan, 3.0])
self.col_with_nan = CHAID.OrdinalColumn(arr_with_nan, {1.0: 'first', 2.0: 'second', 3.0: 'third'})
def test_correctly_subs_nan_values(self):
assert self.col_with_nan.arr[2] == self.col_with_nan._nan
def test_correctly_subs_floats_for_ints(self):
assert np.issubdtype(self.col_with_nan.arr.dtype, np.integer)
def test_correctly_subs_floated_metadata(self):
assert self.col_with_nan.metadata == {self.col_with_nan._nan: '<missing>', 1: 'first', 2: 'second', 3: 'third'}
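if __name__ == '__main__':
    # Illustrative sketch (an assumption, not part of the original test suite):
    # exercises the OrdinalColumn grouping API that the tests above verify.
    demo = CHAID.OrdinalColumn(np.array([1.0, 2.0, nan, 3.0, 4.0]))
    demo.group(2.0, 3.0)  # merge two adjacent ordinal values into one group
    # remaining adjacent merge candidates, mapped through the metadata labels
    print([(demo.metadata[x], demo.metadata[y]) for x, y in demo.possible_groupings()])
    # current groups, e.g. [[1.0], [2.0, 3.0], [4.0], ['<missing>']]
    print([[demo.metadata[i] for i in group] for group in demo.groups()])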
| 51.160535 | 161 | 0.627378 | 2,187 | 15,297 | 4.277549 | 0.058528 | 0.056868 | 0.013789 | 0.066702 | 0.881668 | 0.862961 | 0.854944 | 0.847034 | 0.839337 | 0.82822 | 0 | 0.047679 | 0.22122 | 15,297 | 298 | 162 | 51.332215 | 0.737598 | 0.098385 | 0 | 0.672811 | 0 | 0 | 0.202964 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 1 | 0.119816 | false | 0 | 0.018433 | 0 | 0.16129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
607dc82019a6232aa95e95e2d90bcc5b15a81ab3 | 31 | py | Python | pycrobit/examples/test.py | MrGallo/pycrob | 5d41ec54191bb31048dcb69374efd26c99e06844 | [
"MIT"
] | null | null | null | pycrobit/examples/test.py | MrGallo/pycrob | 5d41ec54191bb31048dcb69374efd26c99e06844 | [
"MIT"
] | 9 | 2019-12-23T15:43:36.000Z | 2022-03-12T00:16:32.000Z | pycrobit/examples/test.py | MrGallo/pycrob | 5d41ec54191bb31048dcb69374efd26c99e06844 | [
"MIT"
] | 1 | 2019-05-21T18:46:57.000Z | 2019-05-21T18:46:57.000Z | from pycrobit import Microbit
| 10.333333 | 29 | 0.83871 | 4 | 31 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 31 | 2 | 30 | 15.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6081f3bf1c72b479f3aecc271838fa5e9761d5ea | 183 | py | Python | Coloring/learning/utils/__init__.py | zarahz/MARL-and-Markets | 3591a160e098e7251b9e7c7b59c6d0ab08ba0779 | [
"MIT"
] | 1 | 2022-03-12T09:17:32.000Z | 2022-03-12T09:17:32.000Z | Coloring/learning/utils/__init__.py | zarahz/MARL-and-Markets | 3591a160e098e7251b9e7c7b59c6d0ab08ba0779 | [
"MIT"
] | null | null | null | Coloring/learning/utils/__init__.py | zarahz/MARL-and-Markets | 3591a160e098e7251b9e7c7b59c6d0ab08ba0779 | [
"MIT"
] | null | null | null |
from .other import *
from .storage import *
from .dictlist import DictList
from .format import *
from .penv import *
from .env import *
from .arguments import *
from .agent import *
| 18.3 | 30 | 0.737705 | 25 | 183 | 5.4 | 0.4 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180328 | 183 | 9 | 31 | 20.333333 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7156ad3357c083be84bc85aaf5af2b6d77ffa22e | 5,736 | py | Python | modules/employee_management/employee_info/code/employee_delete.py | xuhuiliang-maybe/ace_office | 07fae18676a193206802e8fb9aa32a805b1da24c | [
"Apache-2.0"
] | 1 | 2018-11-27T08:08:07.000Z | 2018-11-27T08:08:07.000Z | modules/employee_management/employee_info/code/employee_delete.py | xuhuiliang-maybe/ace_office | 07fae18676a193206802e8fb9aa32a805b1da24c | [
"Apache-2.0"
] | null | null | null | modules/employee_management/employee_info/code/employee_delete.py | xuhuiliang-maybe/ace_office | 07fae18676a193206802e8fb9aa32a805b1da24c | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
import json
import traceback
from django.contrib import messages
from django.contrib.auth.decorators import login_required
from django.contrib.auth.decorators import permission_required
from django.contrib.messages.views import SuccessMessageMixin
from django.core.urlresolvers import reverse
from django.http import HttpResponse
from django.views.generic import View
from django.views.generic.edit import DeleteView
from modules.employee_management.employee_info.models import Employee
from modules.share_module.check_decorator import check_principal, check_user_is_songxiaodan
from modules.share_module.permissionMixin import class_view_decorator
# Employee info - delete
@class_view_decorator(login_required)
@class_view_decorator(permission_required('employee_info.delete_employee', raise_exception=True))
@class_view_decorator(check_principal)  # verify the user is in charge of this project
@class_view_decorator(check_user_is_songxiaodan)  # verify the user is songxiaodan
class EmployeeDelete(SuccessMessageMixin, DeleteView):
model = Employee
template_name = "base/confirm_delete.html"
success_message = u"%(name)s 删除创建"
def get_success_url(self):
self.url = reverse('employee_info:list', args=("employee",))
referrer = self.request.POST.get("referrer", "")
if referrer:
self.url = referrer
return self.url
def get_context_data(self, **kwargs):
context = super(EmployeeDelete, self).get_context_data(**kwargs)
referrer = self.request.META.get('HTTP_REFERER', "")
context["referrer"] = referrer
return context
# Employee info - batch delete
@class_view_decorator(login_required)
@class_view_decorator(permission_required('employee_info.delete_employee', raise_exception=True))
class EmployeesBatchDelete(SuccessMessageMixin, View):
def post(self, request, *args, **kwargs):
try:
ids = request.POST.get('ids').split(",")
            employee_obj = Employee.objects.filter(is_temporary=False)  # query all regular (non-temporary) employees
if ids[0] == "all":
if request.user.is_superuser:
employee_obj.all().delete()
messages.success(self.request, u"成功删除")
result = {"code": -1, "msg": "成功删除"}
return HttpResponse(json.dumps(result), content_type="application/json")
else:
emp_obj_list = employee_obj.all()
else:
emp_obj_list = employee_obj.filter(id__in=ids)
for one_obj in emp_obj_list:
try:
try:
project_principal = one_obj.project_name.principal
except:
project_principal = None
if project_principal == request.user or request.user.is_superuser:
one_obj.delete()
except:
traceback.print_exc()
messages.success(self.request, u"成功删除")
result = {"code": -1, "msg": "成功删除"}
except:
traceback.print_exc()
messages.warning(self.request, u"删除操作异常")
result = {"code": -1, "msg": "删除异常"}
return HttpResponse(json.dumps(result), content_type="application/json")
# Temporary worker - delete
@class_view_decorator(login_required)
@class_view_decorator(permission_required('employee_info.delete_temporary', raise_exception=True))
@class_view_decorator(check_principal)  # verify the user is in charge of this project
class TemporaryDelete(SuccessMessageMixin, DeleteView):
model = Employee
template_name = "base/confirm_delete.html"
success_message = u"%(name)s 删除创建"
def get_success_url(self):
self.url = reverse('employee_info:list', args=("temporary",))
referrer = self.request.POST.get("referrer", "")
if referrer:
self.url = referrer
return self.url
def get_context_data(self, **kwargs):
context = super(TemporaryDelete, self).get_context_data(**kwargs)
referrer = self.request.META.get('HTTP_REFERER', "")
context["referrer"] = referrer
return context
# Temporary worker - batch delete
@class_view_decorator(login_required)
@class_view_decorator(permission_required('employee_info.delete_temporary', raise_exception=True))
class TemporaryBatchDelete(SuccessMessageMixin, View):
def post(self, request, *args, **kwargs):
try:
ids = request.POST.get('ids').split(",")
            employee_obj = Employee.objects.filter(is_temporary=True)  # query all temporary workers
if ids[0] == "all":
if request.user.is_superuser:
employee_obj.all().delete()
messages.success(self.request, u"成功删除")
result = {"code": -1, "msg": "成功删除"}
return HttpResponse(json.dumps(result), content_type="application/json")
else:
emp_obj_list = employee_obj.all()
else:
emp_obj_list = employee_obj.filter(id__in=ids)
for one_obj in emp_obj_list:
try:
try:
project_principal = one_obj.project_name.principal
except:
project_principal = None
if project_principal == request.user or request.user.is_superuser:
one_obj.delete()
except:
traceback.print_exc()
messages.success(self.request, u"成功删除")
result = {"code": -1, "msg": "成功删除"}
except:
traceback.print_exc()
messages.warning(self.request, u"删除操作异常")
result = {"code": -1, "msg": "删除异常"}
return HttpResponse(json.dumps(result), content_type="application/json")
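# --- Usage sketch (illustrative assumption, not part of the original module) ---
# One way these class-based views might be routed in a urls.py; the URL names
# and patterns below are hypothetical, only the view classes come from this file:
#
#   from django.conf.urls import url
#   from modules.employee_management.employee_info.code import employee_delete
#
#   urlpatterns = [
#       url(r'^employee/(?P<pk>\d+)/delete/$', employee_delete.EmployeeDelete.as_view(), name='employee_delete'),
#       url(r'^employee/batch_delete/$', employee_delete.EmployeesBatchDelete.as_view(), name='employees_batch_delete'),
#   ]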
| 39.833333 | 98 | 0.631799 | 624 | 5,736 | 5.592949 | 0.192308 | 0.030946 | 0.061891 | 0.024069 | 0.797708 | 0.797708 | 0.776504 | 0.776504 | 0.776504 | 0.776504 | 0 | 0.002135 | 0.265167 | 5,736 | 143 | 99 | 40.111888 | 0.82586 | 0.017434 | 0 | 0.79661 | 0 | 0 | 0.084089 | 0.029511 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050847 | false | 0 | 0.110169 | 0 | 0.313559 | 0.033898 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7178fbb50d7eada1885b03a72fda0aedc48d34d0 | 30 | py | Python | RubiksBlindfolded/__init__.py | mn-banjar/RubiksBlindfolded | 6e642a5f7b07605a6c33e60fdc5a36e509966f85 | [
"MIT"
] | null | null | null | RubiksBlindfolded/__init__.py | mn-banjar/RubiksBlindfolded | 6e642a5f7b07605a6c33e60fdc5a36e509966f85 | [
"MIT"
] | null | null | null | RubiksBlindfolded/__init__.py | mn-banjar/RubiksBlindfolded | 6e642a5f7b07605a6c33e60fdc5a36e509966f85 | [
"MIT"
] | null | null | null | from .algorithm import *
| 7.5 | 25 | 0.633333 | 3 | 30 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.3 | 30 | 3 | 26 | 10 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
71b3cfb7564e16721c999b09f6ff56cbf49bf021 | 1,227 | py | Python | utils.py | tkit1994/cat-vs-dog | 1f1d49b2114ec3e4ba9a5a26eeba8ebbec7be3bf | [
"MIT"
] | null | null | null | utils.py | tkit1994/cat-vs-dog | 1f1d49b2114ec3e4ba9a5a26eeba8ebbec7be3bf | [
"MIT"
] | null | null | null | utils.py | tkit1994/cat-vs-dog | 1f1d49b2114ec3e4ba9a5a26eeba8ebbec7be3bf | [
"MIT"
] | null | null | null | import tensorflow as tf
def preprocess(filename, label):
'''
    Preprocess one image. Called by dataset.map, which behaves much like
    Python's built-in map; data-augmentation ops can also live here, or you
    can write another function and chain a further map.
'''
    # full path to the file
fullpath = tf.string_join(["data/train/", filename])
    # read the raw file bytes
img = tf.read_file(fullpath)
    # decode bytes -> Tensor; which decode op to use depends on the file format
img = tf.image.decode_jpeg(img, channels=3)
    # resize so every image has the same size
img = tf.image.resize_images(img, (224, 224))
    # Normalize to (-1, 1). A common alternative is subtracting the per-channel
    # mean; here we skip computing the mean and simply map pixels into (-1, 1)
    # (or (0, 1)); skipping normalization entirely should also work. Caffe
    # seems to just subtract the mean, while PyTorch subtracts the mean and
    # then normalizes.
img -= 127
img /= 128
return img, label
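# --- Usage sketch (illustrative assumption, not part of the original module) ---
# How preprocess is typically wired into a tf.data input pipeline; the file
# names and labels below are hypothetical placeholders for files under data/train/:
#
#   filenames = tf.constant(['cat.0.jpg', 'dog.0.jpg'])
#   labels = tf.constant([0, 1])
#   dataset = (tf.data.Dataset.from_tensor_slices((filenames, labels))
#              .map(preprocess)
#              .shuffle(buffer_size=100)
#              .batch(32))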
def preprocess_test(filename):
'''
    Preprocess one image. Called by dataset.map, which behaves much like
    Python's built-in map; data-augmentation ops can also live here, or you
    can write another function and chain a further map.
'''
    # full path to the file
fullpath = tf.string_join([filename,])
    # read the raw file bytes
img = tf.read_file(fullpath)
    # decode bytes -> Tensor; which decode op to use depends on the file format
img = tf.image.decode_jpeg(img, channels=3)
    # resize so every image has the same size
img = tf.image.resize_images(img, (224, 224))
    # Normalize to (-1, 1). A common alternative is subtracting the per-channel
    # mean; here we skip computing the mean and simply map pixels into (-1, 1)
    # (or (0, 1)); skipping normalization entirely should also work. Caffe
    # seems to just subtract the mean, while PyTorch subtracts the mean and
    # then normalizes.
img -= 127
img /= 128
return img | 27.886364 | 56 | 0.665037 | 141 | 1,227 | 5.723404 | 0.390071 | 0.037175 | 0.049566 | 0.099133 | 0.894672 | 0.894672 | 0.894672 | 0.894672 | 0.894672 | 0.894672 | 0 | 0.038776 | 0.201304 | 1,227 | 44 | 57 | 27.886364 | 0.784694 | 0.443358 | 0 | 0.588235 | 0 | 0 | 0.017405 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.058824 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
71e545be68fc1de9080a658b78d9330e9cc2e533 | 75,033 | py | Python | likeyoubot_l2r_scene.py | dogfooter-master/dogfooter | e1e39375703fe3019af7976f97c44cf2cb7ca0fa | [
"MIT"
] | null | null | null | likeyoubot_l2r_scene.py | dogfooter-master/dogfooter | e1e39375703fe3019af7976f97c44cf2cb7ca0fa | [
"MIT"
] | null | null | null | likeyoubot_l2r_scene.py | dogfooter-master/dogfooter | e1e39375703fe3019af7976f97c44cf2cb7ca0fa | [
"MIT"
] | null | null | null | import likeyoubot_resource as lybrsc
import likeyoubot_message
import cv2
import sys
import numpy as np
from matplotlib import pyplot as plt
import pyautogui
import operator
import random
import likeyoubot_game as lybgame
import likeyoubot_l2r as lybgamel2r
from likeyoubot_configure import LYBConstant as lybconstant
import likeyoubot_scene
import time
import traceback
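# Scene handlers for Lineage 2: Revolution. Each *_scene method below is a small
# state machine: self.status tracks progress through the scene, and pixel-box
# template matching against the captured emulator window decides what to click.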
class LYBL2rScene(likeyoubot_scene.LYBScene):
def __init__(self, scene_name):
likeyoubot_scene.LYBScene.__init__(self, scene_name)
def process(self, window_image, window_pixels):
super(LYBL2rScene, self).process(window_image, window_pixels)
rc = 0
if self.scene_name == 'init_screen_scene':
rc = self.init_screen_scene()
elif self.scene_name == 'google_play_store_scene':
rc = self.google_play_store_scene()
elif self.scene_name == 'main_scene':
rc = self.main_scene()
elif self.scene_name == 'login_scene':
rc = self.login_scene()
elif self.scene_name == 'bosang_scene':
rc = self.bosang_scene()
elif self.scene_name == 'onulhwaldong_scene':
rc = self.onulhwaldong_scene()
elif self.scene_name == 'sangjeom_scene':
rc = self.sangjeom_scene()
elif self.scene_name == 'omantap_scene':
rc = self.omantap_scene()
elif self.scene_name == 'ilil_quest_scene':
rc = self.ilil_quest_scene()
elif self.scene_name == 'jugan_quest_scene':
rc = self.jugan_quest_scene()
elif self.scene_name == 'quest_scroll_scene':
rc = self.quest_scroll_scene()
elif self.scene_name == 'quest_scroll_limit_scene':
rc = self.quest_scroll_limit_scene()
elif self.scene_name == 'npc_talk_scene':
rc = self.npc_talk_scene()
elif self.scene_name == 'yoil_dungeon_scene':
rc = self.yoil_dungeon_scene()
elif self.scene_name == 'social_scene':
rc = self.social_scene()
elif self.scene_name == 'dungeon_list_scene':
rc = self.dungeon_list_scene()
elif self.scene_name == 'gabang_scene':
rc = self.gabang_scene()
elif self.scene_name == 'gyeoltoojang_scene':
rc = self.gyeoltoojang_scene()
elif self.scene_name == 'hyeolmeng_scene':
rc = self.hyeolmeng_scene()
elif self.scene_name == 'hyeolmeng_chulseok_check_scene':
rc = self.hyeolmeng_chulseok_check_scene()
elif self.scene_name == 'azit_scene':
rc = self.azit_scene()
elif self.scene_name == 'azit_manmulsang_scene':
rc = self.azit_manmulsang_scene()
elif self.scene_name == 'hyeolmeng_give_scene':
rc = self.hyeolmeng_give_scene()
elif self.scene_name == 'mail_scene':
rc = self.mail_scene()
elif self.scene_name == 'jeongye_dungeon_scene':
rc = self.jeongye_dungeon_scene()
elif self.scene_name == 'jangbi_dungeon_scene':
rc = self.jangbi_dungeon_scene()
elif self.scene_name == 'jadong_chamga_scene':
rc = self.jadong_chamga_scene()
elif self.scene_name == 'adena_dungeon_scene':
rc = self.adena_dungeon_scene()
elif self.scene_name == 'bosang_hesu_scene':
rc = self.bosang_hesu_scene()
elif self.scene_name == 'dungeon_clear_scene':
rc = self.dungeon_clear_scene()
elif self.scene_name == 'dungeon_clear_2_scene':
rc = self.dungeon_clear_scene()
elif self.scene_name == 'dungeon_clear_3_scene':
rc = self.dungeon_clear_scene()
elif self.scene_name == 'experience_dungeon_scene':
rc = self.experience_dungeon_scene()
elif self.scene_name == 'sohwanseok_dungeon_scene':
rc = self.sohwanseok_dungeon_scene()
else:
rc = self.else_scene()
return rc
def else_scene(self):
if self.status == 0:
self.logger.info('unknown scene: ' + self.scene_name)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def sohwanseok_dungeon_scene(self):
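		# Summoning-stone (소환석) dungeon: statuses 1-6 walk the difficulty list
		# from the last entry to the first; status 10 waits for the green
		# "enterable" marker, 11-13 click enter, and 99999 backs out (also
		# reached when the daily limit is detected).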
pb_name = 'bosang_hesu'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(560, 70, 620, 120)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.game_object.get_scene('bosang_hesu_scene').status = 0
self.lyb_mouse_click_location(loc_x, loc_y)
return self.status
pb_name = 'sohwanseok_dungeon_scene_limit'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate > 0.9:
self.status = 99999
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status >= 1 and self.status < 7:
pb_name = 'sohwanseok_dungeon_scene_difficulty_list_' + str(6 - self.status)
self.lyb_mouse_click(pb_name, custom_threshold=0)
self.set_option('last_status', self.status + 1)
self.status = 10
elif self.status == 7:
self.set_option('last_status', 99999)
self.status = 10
elif self.status == 10:
pb_name = 'sohwanseok_dungeon_scene_green'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate > 0.9:
self.status += 1
else:
self.status = self.get_option('last_status')
elif self.status >= 11 and self.status < 14:
self.lyb_mouse_click('sohwanseok_dungeon_scene_ipjang', custom_threshold=0)
self.status += 1
elif self.status == 14:
self.status = 99999
elif self.status == 99999:
self.lyb_mouse_click('back', custom_threshold=0)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def experience_dungeon_scene(self):
pb_name = 'bosang_hesu'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(560, 70, 620, 120)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.game_object.get_scene('bosang_hesu_scene').status = 0
self.lyb_mouse_click_location(loc_x, loc_y)
return self.status
pb_name = 'experience_dungeon_scene_limit'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate > 0.9:
self.status = 99999
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status >= 1 and self.status < 8:
pb_name = 'experience_dungeon_scene_difficulty_list_' + str(7 - self.status)
self.lyb_mouse_click(pb_name, custom_threshold=0)
self.set_option('last_status', self.status + 1)
self.status = 10
elif self.status == 8:
self.set_option('last_status', 99999)
self.status = 10
elif self.status == 10:
pb_name = 'experience_dungeon_scene_green'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate > 0.9:
self.status += 1
else:
self.status = self.get_option('last_status')
elif self.status >= 11 and self.status < 14:
self.lyb_mouse_click('experience_dungeon_scene_ipjang', custom_threshold=0)
self.status += 1
elif self.status == 14:
self.status = 99999
elif self.status == 99999:
self.lyb_mouse_click('back', custom_threshold=0)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def dungeon_clear_scene(self):
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def bosang_hesu_scene(self):
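		# Reward-collection (보상 회수) popup: claim the gold reward, then close.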
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status == 1:
self.lyb_mouse_click('bosang_hesu_scene_gold', custom_threshold=0)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def adena_dungeon_scene(self):
pb_name = 'bosang_hesu'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(560, 70, 620, 120)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.game_object.get_scene('bosang_hesu_scene').status = 0
self.lyb_mouse_click_location(loc_x, loc_y)
return self.status
pb_name = 'adena_dungeon_scene_limit'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate > 0.9:
self.status = 99999
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status >= 1 and self.status < 7:
pb_name = 'adena_dungeon_scene_difficulty_list_' + str(6 - self.status)
self.lyb_mouse_click(pb_name, custom_threshold=0)
self.set_option('last_status', self.status + 1)
self.status = 10
elif self.status == 7:
self.set_option('last_status', 99999)
self.status = 10
elif self.status == 10:
pb_name = 'adena_dungeon_scene_green'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate > 0.9:
self.status += 1
else:
self.status = self.get_option('last_status')
elif self.status >= 11 and self.status < 14:
self.lyb_mouse_click('adena_dungeon_scene_ipjang', custom_threshold=0)
self.status += 1
elif self.status == 14:
self.status = 99999
elif self.status == 99999:
self.lyb_mouse_click('back', custom_threshold=0)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
		return self.status
def jadong_chamga_scene(self):
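		# Auto-participation (자동 참가) wait screen: idle for up to 300 ticks,
		# logging every 10th, then close the popup.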
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status >= 1 and self.status < 300:
if self.status % 10 == 0:
				self.logger.info('Waiting for auto-participation...(' + str(self.status) + '/300)')
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def jangbi_dungeon_scene(self):
pb_name = 'bosang_hesu'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(560, 70, 620, 120)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.game_object.get_scene('bosang_hesu_scene').status = 0
self.lyb_mouse_click_location(loc_x, loc_y)
return self.status
pb_name = 'jangbi_dungeon_scene_limit'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate > 0.9:
self.status = 99999
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status >= 1 and self.status < 8:
pb_name = 'jangbi_dungeon_scene_difficulty_list_' + str(7 - self.status)
self.lyb_mouse_click(pb_name, custom_threshold=0)
self.set_option('last_status', self.status + 1)
self.status = 10
elif self.status == 8:
self.set_option('last_status', 99999)
self.status = 10
elif self.status == 10:
pb_name = 'jangbi_dungeon_scene_green'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate > 0.9:
self.status += 1
else:
self.status = self.get_option('last_status')
elif self.status >= 11 and self.status < 14:
self.lyb_mouse_click('jangbi_dungeon_scene_ipjang', custom_threshold=0)
self.status += 1
elif self.status == 14:
self.status = 99999
elif self.status == 99999:
self.lyb_mouse_click('back', custom_threshold=0)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def jeongye_dungeon_scene(self):
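		# Elite (정예) dungeon list: scroll to the top, scan up to three visible
		# rows for an unlocked entry and enter it when the room is available;
		# otherwise scroll down, backing out once a locked row is hit or the
		# scrolling budget is spent.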
pb_name = 'jeongye_dungeon_scene_bosang_5'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate > 0.95:
self.lyb_mouse_click('jeongye_dungeon_scene_bosang', custom_threshold=0)
return self.status
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.set_option('row', 0)
self.set_option('drag_count', 0)
self.status += 1
elif self.status >= 1 and self.status < 3:
self.lyb_mouse_drag('jeongye_dungeon_scene_drag_top', 'jeongye_dungeon_scene_drag_bot')
self.status += 1
elif self.status == 3:
row = self.get_option('row')
if row >= 3:
self.set_option('row', 0)
self.set_option('last_status', self.status)
self.status = 10
return self.status
pb_name = 'jeongye_dungeon_scene_lock'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(30, 130 + (60 * row) - 70, 70, 130 + (60 * row) + 70)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.status = 99999
return self.status
resource_name = 'jeongye_dungeon_scene_need_loc'
(loc_x, loc_y), match_rate = self.game_object.locationResourceOnWindowPart(
self.window_image,
resource_name,
custom_threshold=0.7,
custom_top_level=(120, 135, 150),
custom_below_level=(60, 80, 100),
custom_flag=1,
custom_rect=(30, 130 + (60 * row) - 70, 300, 130 + (60 * row) + 70)
)
self.logger.debug(resource_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
self.set_option('last_status', self.status)
self.status += 1
self.set_option('row', row + 1)
elif self.status >= 4 and self.status < 10:
self.status += 1
pb_name = 'jeongye_dungeon_scene_available'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate < 0.9:
self.status = self.get_option('last_status')
return self.status
pb_name = 'jeongye_dungeon_scene_ipjang'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate > 0.9:
self.lyb_mouse_click(pb_name, custom_threshold=0)
self.set_option('row', 0)
return self.status
elif self.status == 10:
drag_count = self.get_option('drag_count')
if drag_count > 2:
self.status = 99999
return self.status
self.set_option('drag_count', drag_count + 1)
self.lyb_mouse_drag('jeongye_dungeon_scene_drag_bot', 'jeongye_dungeon_scene_drag_top', stop_delay=1)
self.status += 1
elif self.status == 11:
self.status = self.get_option('last_status')
elif self.status == 99999:
self.lyb_mouse_click('back', custom_threshold=0)
self.status = 0
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def mail_scene(self):
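		# Mailbox: open each tab that shows a "new" badge, click receive-all,
		# then back out.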
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status >= 1 and self.status < 10:
for i in range(3):
pb_name = 'mail_scene_new_' + str(i)
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(90, 60, 350, 100)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x - 10, loc_y + 10)
self.status += 1
self.set_option('last_status', self.status)
self.status += 10
return self.status
self.status = 10
elif self.status == 10:
self.status = 99999
elif self.status >= 11 and self.status < 20:
self.lyb_mouse_click('mail_scene_receive_all', custom_threshold=0)
self.status = self.get_option('last_status')
elif self.status == 99999:
self.lyb_mouse_click('back', custom_threshold=0)
self.status = 0
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def hyeolmeng_give_scene(self):
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status == 1:
self.lyb_mouse_click('hyeolmeng_give_scene_give', custom_threshold=0)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def azit_manmulsang_scene(self):
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status == 1:
self.lyb_mouse_click('azit_manmulsang_scene_gift')
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def azit_scene(self):
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status >= 1 and self.status < 3:
self.status += 1
pb_name = 'azit_scene_new'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(600, 320, 635, 370)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
self.game_object.get_scene('main_scene').set_option('from_azit_scene', True)
self.game_object.get_scene('azit_manmulsang_scene').status = 0
else:
self.status = 99999
elif self.status == 99999:
self.lyb_mouse_click('back', custom_threshold=0)
self.status = 0
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def hyeolmeng_chulseok_check_scene(self):
self.lyb_mouse_click('hyeolmeng_chulseok_check_scene_bosang', custom_threshold=0)
return self.status
def hyeolmeng_scene(self):
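		# Clan (혈맹) menu: open the first tab and follow any "new" badges
		# (clan hall, donations); back out once nothing new remains.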
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status == 1:
self.lyb_mouse_click('hyeolmeng_scene_tab_0', custom_threshold=0)
self.status += 1
elif self.status == 2:
for i in range(4):
pb_name = 'hyeolmeng_scene_new_' + str(i)
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(280, 70, 635, 370)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
self.game_object.get_scene('azit_scene').status = 0
self.game_object.get_scene('hyeolmeng_give_scene').status = 0
self.status += 1
return self.status
self.status = 99999
elif self.status == 3:
for i in range(4):
pb_name = 'hyeolmeng_scene_new_' + str(i)
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(480, 340, 560, 380)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
return self.status
self.status = 1
elif self.status == 99999:
self.lyb_mouse_click('back', custom_threshold=0)
self.status = 0
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def gyeoltoojang_scene(self):
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status >= 1 and self.status < 10:
self.status += 1
pb_name = 'gyeoltoojang_scene_limit'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate > 0.9:
self.status = 10
return self.status
pb_name = 'gyeoltoojang_scene_bosang'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(20, 270, 160, 320)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
return self.status
pb_name = 'gyeoltoojang_scene_sijak'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(540, 110, 630, 370)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
return self.status
self.status += 1
elif self.status == 10:
self.lyb_mouse_click('back', custom_threshold=0)
self.status = 0
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def gabang_scene(self):
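		# Bag (가방): keep clicking sell / sell-all while offered; back out when
		# the sell limit is reached or nothing is left.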
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status >= 1 and self.status < 10:
self.status += 1
pb_name = 'gabang_scene_limit'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate > 0.9:
self.status = 10
return self.status
pb_name = 'gabang_scene_sell_all'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate > 0.9:
self.lyb_mouse_click(pb_name)
return self.status
pb_name = 'gabang_scene_sell'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate > 0.9:
self.lyb_mouse_click(pb_name)
return self.status
elif self.status == 10:
self.status = 99999
elif self.status == 99999:
self.lyb_mouse_click('back', custom_threshold=0)
self.status = 0
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def dungeon_list_scene(self):
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
else:
self.lyb_mouse_click('back', custom_threshold=0)
self.status = 0
return self.status
def social_scene(self):
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status >= 1 and self.status < 3:
pb_name = 'social_scene_new'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(460, 340, 560, 380)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
return self.status
self.status += 1
elif self.status == 3:
self.status = 99999
elif self.status == 99999:
self.lyb_mouse_click('back', custom_threshold=0)
self.status = 0
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def yoil_dungeon_scene(self):
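		# Day-of-week (요일) dungeon: take a free sweep if offered, then walk
		# the difficulty list from the last entry to the first; on a green
		# "enterable" marker, sweep and enter, otherwise back out at the limit.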
pb_name = 'yoil_dungeon_scene_free_sotang'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate > 0.9:
self.lyb_mouse_click(pb_name)
return self.status
pb_name = 'yoil_dungeon_scene_sotang_cancel'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate > 0.9:
self.lyb_mouse_click(pb_name)
return self.status
pb_name = 'yoil_dungeon_scene_limit'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate > 0.9:
self.status = 99999
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status >= 1 and self.status < 8:
pb_name = 'yoil_dungeon_scene_difficulty_list_' + str(7 - self.status)
self.lyb_mouse_click(pb_name, custom_threshold=0)
self.set_option('last_status', self.status + 1)
self.status = 10
elif self.status == 8:
self.set_option('last_status', 99999)
self.status = 10
elif self.status == 10:
pb_name = 'yoil_dungeon_scene_green'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate > 0.9:
self.lyb_mouse_click('yoil_dungeon_scene_sotang', custom_threshold=0)
self.status += 1
else:
self.status = self.get_option('last_status')
elif self.status >= 11 and self.status < 14:
self.lyb_mouse_click('yoil_dungeon_scene_ipjang', custom_threshold=0)
self.status += 1
elif self.status == 14:
self.status = 99999
elif self.status == 99999:
self.lyb_mouse_click('back', custom_threshold=0)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def npc_talk_scene(self):
match_rate = self.game_object.rateMatchedResource(self.window_pixels, self.scene_name)
if match_rate < 0.9:
return self.status
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def quest_scroll_limit_scene(self):
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.set_option('limit', True)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def quest_scroll_scene(self):
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status == 1:
pb_name = 'quest_scroll_scene_do'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate > 0.9:
self.lyb_mouse_click(pb_name, custom_threshold=0)
self.status = 0
return self.status
self.lyb_mouse_click('quest_scroll_scene_list_0', custom_threshold=0)
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def jugan_quest_scene(self):
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status >= 1 and self.status < 5:
self.status += 1
pb_name_list = [
'jugan_quest_scene_do',
'jugan_quest_scene_move',
'jugan_quest_scene_complete',
]
for pb_name in pb_name_list:
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate > 0.9:
self.lyb_mouse_click(pb_name, custom_threshold=0)
return self.status
elif self.status == 5:
for i in range(7):
self.lyb_mouse_click('jugan_quest_scene_progress_bosang_' + str(i), custom_threshold=0)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def ilil_quest_scene(self):
if self.status == 0:
self.logger.info('scene: ' + self.scene_name)
self.status += 1
elif self.status >= 1 and self.status < 5:
self.status += 1
for i in range(3):
pb_name = 'ilil_quest_scene_bosang_' + str(i)
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(120, 240, 630, 300)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
return self.status
for i in range(3):
pb_name = 'ilil_quest_scene_do_' + str(i)
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(120, 240, 630, 300)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
return self.status
elif self.status == 5:
for i in range(2):
self.lyb_mouse_click('ilil_quest_scene_progress_bosang_' + str(i), custom_threshold=0)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def omantap_scene(self):
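		# Tower of Insolence (오만의 탑): make sure auto-next-floor is enabled,
		# then enter; once the daily limit is hit, fall back to a sweep and exit.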
if self.status == 0:
self.logger.warn('scene: ' + self.scene_name)
self.status += 1
elif self.status == 1:
pb_name = 'omantap_scene_auto_next'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate < 0.9:
self.lyb_mouse_click(pb_name, custom_threshold=0)
return self.status
pb_name = 'omantap_scene_limit'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate > 0.9:
self.lyb_mouse_click('omantap_scene_sotang', custom_threshold=0)
self.status = 99998
return self.status
self.lyb_mouse_click('omantap_scene_enter', custom_threshold=0)
elif self.status == 99998:
self.status += 1
elif self.status == 99999:
self.lyb_mouse_click('back', custom_threshold=0)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def sangjeom_scene(self):
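		# Shop (상점): open the general tab, follow category rows flagged with a
		# "new" badge, claim the free items inside, then back out.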
if self.status == 0:
self.logger.warn('scene: ' + self.scene_name)
self.status += 1
elif self.status == 1:
self.status += 1
elif self.status == 2:
pb_name = 'sangjeom_scene_ilban'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate > 0.9:
self.lyb_mouse_click(pb_name)
self.status = 1
return self.status
for i in range(2):
pb_name = 'sangjeom_scene_new_' + str(i)
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(90, 100, 125, 380)
)
self.logger.warn(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
self.status += 1
return self.status
self.status = 99999
elif self.status == 3:
self.status += 1
elif self.status == 4:
for i in range(2):
pb_name = 'sangjeom_scene_inner_new_' + str(i)
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(125, 190, 630, 220)
)
self.logger.warn(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
self.game_object.get_scene('onulhwaldong_scene').set_option('activity_completed', True)
return self.status
			# second row of the list (lower rect); mirrors the loop above
			for i in range(2):
				pb_name = 'sangjeom_scene_inner_new_' + str(i)
				(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
					self.window_image,
					self.game_object.resource_manager.pixel_box_dic[pb_name],
					custom_threshold=0.8,
					custom_flag=1,
					custom_rect=(125, 325, 630, 355)
				)
				self.logger.warn(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
				if loc_x != -1:
					self.lyb_mouse_click_location(loc_x, loc_y)
					self.game_object.get_scene('onulhwaldong_scene').set_option('activity_completed', True)
					return self.status
self.status = 1
elif self.status == 99999:
self.lyb_mouse_click('back', custom_threshold=0)
self.status += 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def onulhwaldong_scene(self):
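		# Today's activities (오늘의 활동): claim finished rewards first, then
		# open each pending (수행) activity in turn, resetting the matching
		# scene handlers; progress rewards are collected before closing.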
if self.status == 0:
self.logger.warn('scene: ' + self.scene_name)
self.set_option('quest_index', 0)
self.status += 1
elif self.status == 1:
pb_name = 'onulhwaldong_scene_bosang'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(20, 240, 630, 270)
)
self.logger.warn(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
return self.status
quest_index = self.get_option('quest_index')
pb_name = 'onulhwaldong_scene_sugeng'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(20 + (155 * quest_index), 240, 170 + (155 * quest_index), 270)
)
self.logger.warn(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
self.game_object.get_scene('sangjeom_scene').status = 0
self.game_object.get_scene('omantap_scene').status = 0
self.game_object.get_scene('yoil_dungeon_scene').status = 0
self.game_object.get_scene('gyeoltoojang_scene').status = 0
self.game_object.get_scene('social_scene').status = 0
self.game_object.get_scene('ilil_quest_scene').status = 0
self.game_object.get_scene('jugan_quest_scene').status = 0
self.game_object.get_scene('jeongye_dungeon_scene').status = 0
self.set_option('activity_completed', False)
self.status += 1
else:
self.status = 99999
elif self.status >= 2 and self.status < 5:
self.status += 1
elif self.status == 5:
quest_index = self.get_option('quest_index')
if self.get_option('activity_completed') == False:
if quest_index >= 3:
self.set_option('quest_index', 0)
else:
self.set_option('quest_index', quest_index + 1)
for i in range(3):
pb_name = 'onulhwaldong_scene_progress_bosang_' + str(i)
self.lyb_mouse_click(pb_name, custom_threshold=0)
self.status = 1
else:
				if self.game_object.get_scene('main_scene').current_work == '메인 퀘스트':  # '메인 퀘스트' = "Main quest"
self.game_object.get_scene('main_scene').set_option('메인 퀘스트' + '_end_flag', True)
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon', custom_threshold=0)
self.status = 0
return self.status
def bosang_scene(self):
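		# Reward (보상) notifications: scan the notification list for "new"
		# badges and claim each reward found.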
if self.status == 0:
self.logger.warn('scene: ' + self.scene_name)
self.status += 1
elif self.status == 1:
self.status += 1
elif self.status == 2:
for i in range(4):
pb_name = 'bosang_scene_new_' + str(i)
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.7,
custom_flag=1,
custom_rect=(145, 80, 190, 350))
self.logger.warn(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
					self.logger.info('Reward notification detected')
self.lyb_mouse_click_location(loc_x, loc_y)
self.status += 1
return self.status
self.status = 99999
elif self.status == 3:
self.status += 1
elif self.status == 4:
pb_name = 'bosang_scene_bosang'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate > 0.9:
self.lyb_mouse_click(pb_name)
else:
self.lyb_mouse_click(pb_name, custom_threshold=0)
self.status = 1
else:
if self.scene_name + '_close_icon' in self.game_object.resource_manager.pixel_box_dic:
self.lyb_mouse_click(self.scene_name + '_close_icon')
self.status = 0
return self.status
def google_play_store_scene(self):
elapsed_time = time.time() - self.get_checkpoint('start')
if elapsed_time > 120 and elapsed_time < 180:
self.set_checkpoint('start')
self.game_object.terminate_application()
self.status = 0
return self.status
if self.status == 0:
self.set_checkpoint('start')
self.status += 1
elif self.status == 1:
pb_name = self.scene_name + '_open'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate > 0.9:
self.lyb_mouse_click(pb_name)
else:
self.status += 1
elif self.status == 2:
pb_name = self.scene_name + '_update'
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate > 0.9:
self.lyb_mouse_click(pb_name)
else:
self.status = 1
else:
self.status = 0
return self.status
def login_scene(self):
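		# Title screen: tap to log in; if the screen is still stuck after about
		# 30 ticks, treat it as a hang and restart the application.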
self.schedule_list = self.get_game_config('schedule_list')
if not '로그인' in self.schedule_list:
return 0
elapsedTime = time.time() - self.get_checkpoint('start')
if elapsedTime > 120:
self.status = 0
if self.status == 0:
self.set_checkpoint('start')
self.status += 1
elif self.status == 1:
self.lyb_mouse_click('login_scene_touch', custom_threshold=0)
self.game_object.interval = self.period_bot(5)
self.status += 1
elif self.status == 2:
self.status += 1
elif self.status >= 2 and self.status < 30:
if self.status % 5 == 0:
self.lyb_mouse_click('login_scene_touch', custom_threshold=0)
				self.logger.info('Login screen lag detected: ' + str(self.status) + '/30')
self.status += 1
elif self.status == 30:
self.game_object.terminate_application()
self.status += 1
else:
# self.lyb_mouse_click(self.scene_name + '_close_icon')
self.status = 0
return self.status
def init_screen_scene(self):
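		# Emulator home screen (Nox or MoMo): scan the expected icon region for
		# the L2R app icon and tap it to launch the game.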
self.schedule_list = self.get_game_config('schedule_list')
if not '게임 시작' in self.schedule_list:
return 0
loc_x = -1
loc_y = -1
if self.game_object.player_type == 'nox':
for each_icon in lybgamel2r.LYBL2r.l2r_icon_list:
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[each_icon],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(80, 110, 570, 300)
)
# self.logger.debug(match_rate)
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
break
elif self.game_object.player_type == 'momo':
for each_icon in lybgamel2r.LYBL2r.l2r_icon_list:
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[each_icon],
custom_threshold=0.8,
custom_flag=1,
custom_rect=(30, 10, 610, 300)
)
# self.logger.debug(match_rate)
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
break
# if loc_x == -1:
		# 	self.loggingToGUI('Tera icon not found')
return 0
#########################################
# #
# #
# MAIN #
# #
# #
#########################################
def main_scene(self):
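		# Scheduler hub: statuses 1-999 resolve the configured schedule into
		# per-work status numbers (see get_work_status); each scheduled work
		# then runs until its _end_flag is set or its time budget expires.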
if self.game_object.current_schedule_work != self.current_work:
self.game_object.current_schedule_work = self.current_work
self.game_object.main_scene = self
is_clicked = self.pre_process_main_scene()
if is_clicked == True:
return self.status
self.schedule_list = self.get_game_config('schedule_list')
if len(self.schedule_list) == 1:
			self.logger.warn('No scheduled work in the schedule list; stopping.')
return -1
if self.status == 0:
self.status += 1
elif self.status >= 1 and self.status < 1000:
self.set_schedule_status()
		elif self.status == self.get_work_status('메인 퀘스트'):  # '메인 퀘스트' = "Main quest"
elapsed_time = self.get_elapsed_time()
cfg_main_quest_duration = int(
self.get_game_config(lybconstant.LYB_DO_STRING_L2R_WORK + 'main_quest_duration'))
if elapsed_time > self.period_bot(cfg_main_quest_duration):
self.set_option(self.current_work + '_end_flag', True)
else:
self.loggingElapsedTime("[메인 퀘스트] 작업 경과 시간", elapsed_time, cfg_main_quest_duration, period=10)
if self.get_option(self.current_work + '_end_flag') == True:
self.set_option(self.current_work + '_end_flag', False)
self.set_option(self.current_work + '_inner_status', None)
self.status = self.last_status[self.current_work] + 1
return self.status
inner_status = self.get_option(self.current_work + '_inner_status')
if inner_status == None:
inner_status = 0
if inner_status == 0:
self.game_object.get_scene('quest_scroll_limit_scene').set_option('limit', False)
self.set_option(self.current_work + '_inner_status', 10)
else:
if self.isAutoMainQuest() == True:
return self.status
else:
if self.main_scene_process_main_quest() == True:
return self.status
		elif self.status == self.get_work_status('알림'):  # '알림' = "Notification"
try:
self.game_object.telegram_send(str(self.get_game_config(lybconstant.LYB_DO_STRING_NOTIFY_MESSAGE)))
self.status = self.last_status[self.current_work] + 1
except Exception:
recovery_count = self.get_option(self.current_work + 'recovery_count')
if recovery_count is None:
recovery_count = 0
if recovery_count > 2:
self.status = self.last_status[self.current_work] + 1
self.set_option(self.current_work + 'recovery_count', 0)
else:
self.logger.error(traceback.format_exc())
self.set_option(self.current_work + 'recovery_count', recovery_count + 1)
elif self.status == self.get_work_status('[작업 예약]'):
self.logger.warn('[작업 예약]')
self.game_object.wait_for_start_reserved_work = False
self.status = self.last_status[self.current_work] + 1
elif self.status == self.get_work_status('[작업 대기]'):
elapsed_time = self.get_elapsed_time()
limit_time = int(self.get_game_config(lybconstant.LYB_DO_STRING_WAIT_FOR_NEXT))
if elapsed_time > limit_time:
self.set_option(self.current_work + '_end_flag', True)
else:
self.loggingElapsedTime('[작업 대기]', int(elapsed_time), limit_time, period=10)
if self.get_option(self.current_work + '_end_flag') == True:
self.set_option(self.current_work + '_end_flag', False)
self.status = self.last_status[self.current_work] + 1
return self.status
elif self.status == self.get_work_status('[반복 시작]'):
self.set_option('loop_start', self.last_status[self.current_work])
self.status = self.last_status[self.current_work] + 1
elif self.status == self.get_work_status('[반복 종료]'):
loop_count = self.get_option('loop_count')
if loop_count is None:
loop_count = 1
self.logger.debug('[반복 종료] ' + str(loop_count) + ' iterations completed, ' +
str(int(
self.get_game_config(lybconstant.LYB_DO_STRING_COUNT_LOOP)) - loop_count) + ' remaining')
if loop_count >= int(self.get_game_config(lybconstant.LYB_DO_STRING_COUNT_LOOP)):
self.status = self.last_status[self.current_work] + 1
self.set_option('loop_count', 1)
self.set_option('loop_start', None)
else:
self.status = self.get_option('loop_start')
# print('DEBUG LOOP STATUS = ', self.status )
if self.status is None:
self.logger.debug('[반복 시작] marker not found; moving on to the next task')
self.status = self.last_status[self.current_work] + 1
self.set_option('loop_count', loop_count + 1)
else:
self.status = self.last_status[self.current_work] + 1
return self.status
def pre_process_main_scene(self):
pb_name_list = [
'main_scene_base_open',
'main_scene_base_close'
]
is_field = False
for pb_name in pb_name_list:
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate > 0.9:
is_field = True
break
# When a dungeon is in progress:
# 1. [Dungeon] quest
# 2. Day-of-week dungeon - time limit
if is_field == False:
self.logger.debug('not field')
# Click the [Dungeon] quest entry at most once every 60 seconds...
pb_name = 'main_scene_quest_dungeon'  # set before the checkpoint lookup so the 60s throttle reads the key that is actually written below
elapsed_time = time.time() - self.get_checkpoint(pb_name + '_last_clicked')
if elapsed_time > 60:
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.7,
custom_top_level=(250, 80, 115),
custom_below_level=(145, 50, 60),
custom_flag=1,
custom_rect=(5, 95, 140, 240)
)
# self.logger.warn(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
self.set_checkpoint(pb_name + '_last_clicked')
return True
# If there is nothing left to do in the dungeon...
if self.main_scene_is_dungeon() == False:
check_count = self.get_option(pb_name + '_check')
if check_count is None:
check_count = 0
self.logger.debug('Dungeon exit check...(' + str(check_count) + '/3)')
if check_count > 2:
pb_name = 'main_scene_out'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.7,
custom_flag=1,
custom_top_level=(255, 255, 255),
custom_below_level=(190, 190, 190),
custom_rect=(400, 50, 540, 90)
)
self.logger.warn(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
self.set_option(pb_name + '_check', 0)
return True
self.set_option(pb_name + '_check', check_count + 1)
else:
self.set_option(pb_name + '_check', 0)
cfg_lag_check_period = int(self.get_game_config(lybconstant.LYB_DO_STRING_L2R_ETC + 'lag_check_period'))
if cfg_lag_check_period != 0:
elapsed_time = time.time() - self.get_checkpoint('last_lag_check')
if elapsed_time > cfg_lag_check_period:
random_direction = int(random.random() * 8)
self.logger.warn('Anti-lag movement: ' + str(lybgamel2r.LYBL2r.character_move_list[random_direction]))
self.lyb_mouse_drag('character_move_direction_center',
'character_move_direction_' + str(random_direction), stop_delay=5)
self.set_checkpoint('last_lag_check')
return True
if self.get_option('from_azit_scene') == True:
pb_name = 'main_scene_new'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.6,
custom_flag=1,
custom_rect=(480, 80, 510, 110)
)
self.logger.warn(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
return True
pb_name = 'main_scene_out'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.7,
custom_flag=1,
custom_top_level=(255, 255, 255),
custom_below_level=(210, 210, 210),
custom_rect=(400, 50, 540, 90)
)
self.logger.warn(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.lyb_mouse_click_location(loc_x, loc_y)
self.set_option('from_azit_scene', False)
return True
pb_name_list = [
'main_scene_equip',
'main_scene_use',
]
for pb_name in pb_name_list:
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
if match_rate > 0.9:
self.logger.debug(pb_name + ' ' + str(match_rate))
self.lyb_mouse_click(pb_name)
self.game_object.get_scene('mail_scene').status = 0
return True
pb_name = 'main_scene_mail_new'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.7,
custom_flag=1,
custom_rect=(510, 50, 540, 90)
)
if loc_x != -1:
self.logger.info('Checking mailbox')
self.lyb_mouse_click_location(loc_x - 5, loc_y + 5)
return True
if self.isFull() == True:
self.lyb_mouse_click('main_scene_gabang', custom_threshold=0)
self.game_object.get_scene('gabang_scene').status = 0
return True
if self.isHorseOn() == True:
self.logger.info('Moving...')
self.set_option('moving', True)
return True
pb_name = 'main_scene_potion_empty'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.7,
custom_flag=1,
custom_top_level=(255, 255, 255),
custom_below_level=(130, 130, 130),
custom_rect=(480, 210, 635, 250)
)
if loc_x != -1:
self.logger.info('Potion check ' + str(round(match_rate, 2)))
self.lyb_mouse_click_location(loc_x, loc_y - 5)
return True
pb_name = 'main_scene_distance'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.7,
custom_top_level=(255, 255, 255),
custom_below_level=(110, 110, 110),
custom_flag=1,
custom_rect=(250, 200, 400, 280)
)
if loc_x != -1:
self.logger.info('Tracking target...')
self.set_option('moving', True)
return True
# After movement completes, press the auto button
if self.get_option('moving') == True:
if self.isAutoCombat(limit_count=1) == False:
self.lyb_mouse_click('auto', custom_threshold=0)
self.set_option('moving', False)
return self.status
if is_field == False:
if self.isAutoCombat() == False:
self.lyb_mouse_click('auto', custom_threshold=0)
return True
return False
def main_scene_process_main_quest(self):
pb_name = 'main_quest_completed'
# self.game_object.getImagePixelBox(pb_name).save(pb_name + '.png')
match_rate = self.game_object.rateMatchedPixelBox(self.window_pixels, pb_name)
# self.logger.debug(pb_name + ' ' + str(match_rate))
if match_rate > 0.9:
self.lyb_mouse_click(pb_name)
return True
pb_name_list = [
['main_quest_completed', -1, -1, (5, 125, 25, 240)],
['quest_complete', (110, 200, 235), (15, 60, 90), (100, 125, 140, 240)],
['main_quest', (250, 175, 60), (130, 80, 0), (5, 125, 25, 240)],
]
if self.get_game_config(lybconstant.LYB_DO_STRING_L2R_WORK + 'main_quest_sub') == True:
if self.game_object.get_scene('quest_scroll_limit_scene').get_option('limit') == False:
pb_name_list.insert(2, ['main_quest_sub', (80, 200, 235), (45, 130, 170), (5, 125, 25, 240)])
for each_pb in pb_name_list:
pb_name = each_pb[0]
custom_top_level = each_pb[1]
custom_below_level = each_pb[2]
custom_rect = each_pb[3]
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.65,
custom_top_level=custom_top_level,
custom_below_level=custom_below_level,
custom_flag=1,
custom_rect=custom_rect
)
self.logger.warn(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.game_object.get_scene('ilil_quest_scene').status = 0
self.game_object.get_scene('quest_scroll_scene').status = 0
self.lyb_mouse_click_location(loc_x, loc_y)
return True
else:
drag_start = 'bot'
drag_end = 'top'
drag_direction = self.get_option('drag_direction')
if drag_direction is None:
drag_direction = 0
if drag_direction % 6 < 3:
drag_start = 'top'
drag_end = 'bot'
resource_name = 'main_quest_limit_loc'
(loc_x, loc_y), match_rate = self.game_object.locationResourceOnWindowPart(
self.window_image,
resource_name,
custom_threshold=0.7,
custom_top_level=(250, 175, 60),
custom_below_level=(130, 80, 0),
custom_flag=1,
custom_rect=(5, 120, 140, 240)
)
self.logger.debug(resource_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.set_option('메인 퀘스트' + '_end_flag', True)
return True
resource_name = 'main_quest_complete_loc'
(loc_x, loc_y), match_rate = self.game_object.locationResourceOnWindowPart(
self.window_image,
resource_name,
custom_threshold=0.7,
custom_flag=1,
custom_rect=(5, 120, 140, 240)
)
self.logger.debug(resource_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.logger.info('Auto quest progress complete')
self.lyb_mouse_click_location(loc_x, loc_y)
return True
self.lyb_mouse_drag('main_scene_quest_drag_' + drag_start, 'main_scene_quest_drag_' + drag_end)
return True
def isAutoCombat(self, limit_count=-1):
return self.isStatusByResource(
'[Auto combat active]',
'main_scene_auto_loc',
0.7,
(255, 255, 255), (100, 100, 100),
(250, 200, 400, 320),
limit_count=limit_count
)
def isAutoMainQuest(self, limit_count=-1):
return self.isStatusByResource(
'[Auto quest active]',
'main_scene_auto_quest_loc',
0.7,
(255, 255, 255), (100, 100, 100),
(250, 200, 400, 320),
limit_count=limit_count
)
def isFull(self):
check_count = self.get_option('check_count')
if check_count is None:
check_count = 0
pb_name = 'main_scene_full'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.8,
custom_top_level=(80, 255, 255),
custom_below_level=(30, 75, 110),
custom_flag=1,
custom_rect=(470, 30, 510, 70)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
if check_count > 2:
self.set_option('check_count', 0)
return True
self.set_option('check_count', check_count + 1)
return False
self.set_option('check_count', 0)
return False
def isStatusByResource(self, log_message, resource_name, custom_threshold, custom_top_level, custom_below_level,
custom_rect, limit_count=-1):
if limit_count == -1:
limit_count = int(self.get_game_config(lybconstant.LYB_DO_STRING_L2R_ETC + 'auto_limit'))
(loc_x, loc_y), match_rate = self.game_object.locationResourceOnWindowPart(
self.window_image,
resource_name,
custom_threshold=custom_threshold,
custom_top_level=custom_top_level,
custom_below_level=custom_below_level,
custom_flag=1,
custom_rect=custom_rect,
average=True
)
self.logger.debug(resource_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
self.set_option(resource_name + 'check_count', 0)
return True
check_count = self.get_option(resource_name + 'check_count')
if check_count is None:
check_count = 0
if check_count > limit_count:
self.set_option(resource_name + 'check_count', 0)
return False
self.logger.debug(log_message + '..(' + str(check_count) + '/' + str(limit_count) + ')')
self.set_option(resource_name + 'check_count', check_count + 1)
return True
def isHorseOn(self):
count = 0
for i in range(4):
pb_name = 'horse_on_' + str(i)
# self.game_object.getImagePixelBox(pb_name).save(pb_name + '.png')
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.2,
custom_top_level=(30, 120, 180),
custom_below_level=(10, 70, 60),
custom_flag=1,
custom_rect=(445, 335, 480, 370)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if match_rate > 0.5:
return True
else:
if loc_x != -1:
count += 1
if count > 2:
return True
return False
def main_scene_is_dungeon(self):
pb_name = 'main_scene_quest_dungeon'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.7,
custom_top_level=(250, 80, 115),
custom_below_level=(145, 50, 60),
custom_flag=1,
custom_rect=(5, 95, 140, 240)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
return True
pb_name = 'main_scene_dungeon_gold'
(loc_x, loc_y), match_rate = self.game_object.locationOnWindowPart(
self.window_image,
self.game_object.resource_manager.pixel_box_dic[pb_name],
custom_threshold=0.7,
custom_flag=1,
custom_rect=(5, 170, 35, 255)
)
self.logger.debug(pb_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
return True
resource_name = 'main_scene_dungeon_time_loc'
(loc_x, loc_y), match_rate = self.game_object.locationResourceOnWindowPart(
self.window_image,
resource_name,
custom_threshold=0.7,
custom_top_level=(255, 90, 125),
custom_below_level=(125, 45, 60),
custom_flag=1,
custom_rect=(530, 130, 590, 160)
)
self.logger.debug(resource_name + ' ' + str((loc_x, loc_y)) + ' ' + str(match_rate))
if loc_x != -1:
return True
return False
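# Status codes are allocated in blocks of 1000 per scheduled work item:
# the n-th entry of work_list owns status (n + 1) * 1000; unknown work names map to 99999.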
def get_work_status(self, work_name):
if work_name in lybgamel2r.LYBL2r.work_list:
return (lybgamel2r.LYBL2r.work_list.index(work_name) + 1) * 1000
else:
return 99999
| 40.146067 | 118 | 0.566551 | 9,202 | 75,033 | 4.302108 | 0.042599 | 0.107103 | 0.060473 | 0.051101 | 0.84801 | 0.809488 | 0.775134 | 0.738734 | 0.720142 | 0.694933 | 0 | 0.035223 | 0.332934 | 75,033 | 1,868 | 119 | 40.167559 | 0.755714 | 0.013567 | 0 | 0.682662 | 0 | 0 | 0.078746 | 0.029217 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028151 | false | 0 | 0.008957 | 0.00128 | 0.111324 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e0eb9b9655759c5a4673366dfb90d35636b4a4bd | 48 | py | Python | api/startup.py | JohnLest/three-tiers_template_Python | ee655c28f9f2ee6d1143aaec0ed3c6bc942481d4 | ["MIT"] | null | null | null | api/startup.py | JohnLest/three-tiers_template_Python | ee655c28f9f2ee6d1143aaec0ed3c6bc942481d4 | ["MIT"] | null | null | null | api/startup.py | JohnLest/three-tiers_template_Python | ee655c28f9f2ee6d1143aaec0ed3c6bc942481d4 | ["MIT"] | null | null | null | import sys
print (f"Work with {sys.version}")
| 12 | 34 | 0.6875 | 8 | 48 | 4.125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 48 | 3 | 35 | 16 | 0.825 | 0 | 0 | 0 | 0 | 0 | 0.479167 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
e0f81cecbca9ca5a9b8c3302d79c9300f49e4d9e | 4,152 | py | Python | opflexagent/test/test_rpc.py | elastx/python-opflex-agent | 955f5fa66ee52c1fc58aded06eef1fe735b86bc6 | ["Apache-2.0"] | 7 | 2015-09-04T06:18:11.000Z | 2017-07-12T07:35:35.000Z | opflexagent/test/test_rpc.py | elastx/python-opflex-agent | 955f5fa66ee52c1fc58aded06eef1fe735b86bc6 | ["Apache-2.0"] | 86 | 2015-04-10T15:53:47.000Z | 2021-08-18T10:31:09.000Z | opflexagent/test/test_rpc.py | elastx/python-opflex-agent | 955f5fa66ee52c1fc58aded06eef1fe735b86bc6 | ["Apache-2.0"] | 17 | 2015-04-10T15:41:45.000Z | 2021-08-30T10:23:34.000Z | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
from unittest import mock
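# Stub heavyweight optional dependencies in sys.modules before the opflexagent
# imports below, so these tests can run without apicapi/pyinotify installed.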
sys.modules["apicapi"] = mock.Mock() # noqa
sys.modules["pyinotify"] = mock.Mock() # noqa
from opflexagent import rpc
from opflexagent.test import base
class TestOpflexRpc(base.OpflexTestBase):
def setUp(self):
super(TestOpflexRpc, self).setUp()
self.callback = rpc.GBPServerRpcCallback(mock.Mock(), mock.Mock())
def test_request_endpoint_details(self):
result = {'device': 'someid'}
self.callback.gbp_driver.request_endpoint_details = mock.Mock(
return_value=result)
self.callback.request_endpoint_details(mock.ANY, host='h1')
(self.callback.agent_notifier.opflex_endpoint_update.
assert_called_once_with(mock.ANY, [result], host='h1'))
# Test None return
self.callback.agent_notifier.opflex_endpoint_update.reset_mock()
result = None
self.callback.gbp_driver.request_endpoint_details = mock.Mock(
return_value=result)
self.callback.request_endpoint_details(mock.ANY, host='h1')
self.assertFalse(
self.callback.agent_notifier.opflex_endpoint_update.called)
def test_request_vrf_details(self):
result = {'device': 'someid'}
self.callback.gbp_driver.request_vrf_details = mock.Mock(
return_value=result)
self.callback.request_vrf_details(mock.ANY, host='h1')
(self.callback.agent_notifier.opflex_vrf_update.
assert_called_once_with(mock.ANY, [result], host='h1'))
# Test None return
self.callback.agent_notifier.opflex_vrf_update.reset_mock()
result = None
self.callback.gbp_driver.request_vrf_details = mock.Mock(
return_value=result)
self.callback.request_vrf_details(mock.ANY, host='h1')
self.assertFalse(
self.callback.agent_notifier.opflex_vrf_update.called)
def test_request_endpoint_details_list(self):
result = {'device': 'someid'}
self.callback.gbp_driver.request_endpoint_details = mock.Mock(
return_value=result)
self.callback.request_endpoint_details_list(
mock.ANY, host='h1', requests=range(3))
(self.callback.agent_notifier.opflex_endpoint_update.
assert_called_once_with(mock.ANY, [result] * 3, host='h1'))
# Test None return
self.callback.agent_notifier.opflex_endpoint_update.reset_mock()
result = None
self.callback.gbp_driver.request_endpoint_details = mock.Mock(
return_value=result)
self.callback.request_endpoint_details_list(
mock.ANY, host='h1', requests=range(3))
self.assertFalse(
self.callback.agent_notifier.opflex_endpoint_update.called)
def test_request_vrf_details_list(self):
result = {'device': 'someid'}
self.callback.gbp_driver.request_vrf_details = mock.Mock(
return_value=result)
self.callback.request_vrf_details_list(
mock.ANY, host='h1', requests=range(3))
(self.callback.agent_notifier.opflex_vrf_update.
assert_called_once_with(mock.ANY, [result] * 3, host='h1'))
# Test None return
self.callback.agent_notifier.opflex_vrf_update.reset_mock()
result = None
self.callback.gbp_driver.request_vrf_details = mock.Mock(
return_value=result)
self.callback.request_vrf_details_list(
mock.ANY, host='h1', requests=range(3))
self.assertFalse(
self.callback.agent_notifier.opflex_vrf_update.called)
| 41.52 | 78 | 0.684489 | 516 | 4,152 | 5.27907 | 0.209302 | 0.127753 | 0.07489 | 0.110132 | 0.748164 | 0.732012 | 0.732012 | 0.732012 | 0.732012 | 0.732012 | 0 | 0.00678 | 0.218449 | 4,152 | 99 | 79 | 41.939394 | 0.832666 | 0.150289 | 0 | 0.8 | 0 | 0 | 0.025093 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 1 | 0.071429 | false | 0 | 0.057143 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1cb1a3630c7f289d55ac069e1f6b854c6181a944 | 402 | py | Python | venv/Lib/site-packages/observer/__main__.py | rexliu3/FileOrganizer | a7d8443c64b05eae237272c30e6de3bfc21012fb | ["MIT"] | null | null | null | venv/Lib/site-packages/observer/__main__.py | rexliu3/FileOrganizer | a7d8443c64b05eae237272c30e6de3bfc21012fb | ["MIT"] | 1 | 2020-06-16T03:05:44.000Z | 2020-06-16T03:06:04.000Z | venv/Lib/site-packages/observer/__main__.py | rexliu3/FileOrganizer | a7d8443c64b05eae237272c30e6de3bfc21012fb | ["MIT"] | null | null | null | # -*- coding:utf-8 -*-
# Created by Hans-Thomas on 2011-12-11.
#=============================================================================
# __main__.py ---
#=============================================================================
import sys
from observer.main import main
main(sys.argv[1:])
#.............................................................................
# __main__.py
| 26.8 | 78 | 0.263682 | 26 | 402 | 3.769231 | 0.730769 | 0.122449 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027174 | 0.084577 | 402 | 14 | 79 | 28.714286 | 0.23913 | 0.800995 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1cb35aa5cbad7d7ab17d19229048f2c98d7f8388 | 7,676 | py | Python | _package/xms/mesher/meshing/mesh_utils.py | kwryankrattiger/xmsmesher | 66a25fd1164ae997242d2bf11f04c9b8e0e82f17 | ["BSD-2-Clause"] | null | null | null | _package/xms/mesher/meshing/mesh_utils.py | kwryankrattiger/xmsmesher | 66a25fd1164ae997242d2bf11f04c9b8e0e82f17 | ["BSD-2-Clause"] | 2 | 2021-04-20T23:11:40.000Z | 2021-11-29T20:27:41.000Z | _package/xms/mesher/meshing/mesh_utils.py | kwryankrattiger/xmsmesher | 66a25fd1164ae997242d2bf11f04c9b8e0e82f17 | ["BSD-2-Clause"] | 1 | 2021-04-14T21:24:46.000Z | 2021-04-14T21:24:46.000Z | """Meshing utility methods."""
from .._xmsmesher.meshing import mesh_utils
__all__ = ['size_function_from_depth', 'smooth_size_function', 'smooth_elev_by_slope', 'generate_mesh', 'generate_2dm',
'check_mesh_input_topology', 'redistribute_poly_line']
def size_function_from_depth(depths, min_size, max_size):
"""Creates a size at each point.
Based on the depth at the point and the min and max sizes the
equation is min_depth + ( (depth - min_depth) / (max_depth - min_depth) ) * (max_size - min_size).
This is often useful for coastal numerical model simulations.
Args:
depths (iterable): The measured depths at point locations.
min_size (float): The minimum element edge size.
max_size (float): The maximum element edge size.
Returns:
Array of sizes based on depth.
"""
return mesh_utils.SizeFunctionFromDepth(depths, min_size, max_size)
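# Illustrative sketch (hypothetical values, not part of the library API): with depths
# of 2, 6 and 10 and sizes between 10 and 50, the interpolation above yields sizes of
# 10, 30 and 50 respectively.
def _size_function_from_depth_example():
    depths = [2.0, 6.0, 10.0]
    return size_function_from_depth(depths, min_size=10.0, max_size=50.0)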
def size_function_from_edge_lengths(ugrid):
"""Creates a size at each point based on the average length of the connected edges to the point.
Args:
ugrid (UGrid): The unstructured grid
Returns:
Array of sizes based on the average connected edge lengths.
"""
return mesh_utils.SizeFunctionFromEdgeLengths(ugrid._instance)
def smooth_size_function(tin, sizes, size_ratio, min_size, anchor_to, points_flag):
"""Smooths a size function.
Ensures that the size function transitions over a sufficient distance so that the area
change of adjacent elements meets the size ratio passed in.
Args:
tin (tin): Points and triangles defining the connectivity of the size function.
sizes (iterable): Array of the current sizes.
size_ratio (float): Allowable size difference between adjacent elements.
min_size (float): Minimum specified element size.
anchor_to (str): Option to anchor to the minimum or maximum size ('min' or 'max')
points_flag (iterable): Flag to indicate if the value at the point should be adjusted (a value of True will skip
the point). Leave the bitset empty to process all points.
Returns:
Array of smoothed sizes.
"""
anchor_types = {'min': 0, 'max': 1}
if anchor_to not in anchor_types.keys():
raise ValueError("anchor_to must be one of 'min' or 'max', not {}".format(anchor_to))
return mesh_utils.SmoothSizeFunction(tin._instance, sizes, size_ratio, min_size, anchor_types[anchor_to],
points_flag)
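# Illustrative sketch: ``tin`` is assumed to be an already-constructed xms TIN whose
# points correspond one-to-one with ``sizes``; an empty points_flag processes every point.
def _smooth_size_function_example(tin):
    sizes = [10.0, 12.0, 40.0, 10.0]
    return smooth_size_function(tin, sizes, size_ratio=0.5, min_size=10.0,
                                anchor_to='min', points_flag=[])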
def smooth_size_function_ugrid(ugrid, sizes, size_ratio, min_size, anchor_to, points_flag):
"""Smooths a size function.
Ensures that the size function transitions over a sufficient distance so that the area
change of adjacent elements meets the size ratio passed in.
Args:
ugrid (UGrid): Unstructured grid defining the connectivity of the size function.
sizes (iterable): Array of the current sizes.
size_ratio (float): Allowable size difference between adjacent elements.
min_size (float): Minimum specified element size.
anchor_to (str): Option to anchor to the minimum or maximum size ('min' or 'max')
points_flag (iterable): Flag to indicate if the value at the point should be adjusted (a value of True will skip
the point). Leave the bitset empty to process all points.
Returns:
Array of smoothed sizes.
"""
anchor_types = {'min': 0, 'max': 1}
if anchor_to not in anchor_types.keys():
raise ValueError("anchor_to must be one of 'min' or 'max', not {}".format(anchor_to))
return mesh_utils.SmoothSizeFunctionUGrid(ugrid._instance, sizes, size_ratio, min_size, anchor_types[anchor_to],
points_flag)
def smooth_elev_by_slope(tin, elevations, max_slope, anchor_to, points_flag):
"""Smooths a elevations based on max specified slope (max_slope).
Preserves either the min or max based on anchor_type.
Args:
tin (tin): Points and triangles defining the connectivity of the elevations.
elevations (iterable): Array of the current elevations.
max_slope (float): Maximum allowable slope.
anchor_to (str): Option to anchor to the minimum or maximum size ('min' or 'max')
points_flag (iterable): Flag to indicate if the value at the point should be adjusted (a value of True will
skip the point). Leave the bitset empty to process all points.
Returns:
Array of smoothed elevations.
"""
anchor_types = {'min': 0, 'max': 1}
if anchor_to not in anchor_types.keys():
raise ValueError("anchor_to must be one of 'min' or 'max', not {}".format(anchor_to))
return mesh_utils.SmoothElevBySlope(tin._instance, elevations, max_slope, anchor_types[anchor_to], points_flag)
def smooth_elev_by_slope_ugrid(ugrid, elevations, max_slope, anchor_to, points_flag):
"""Smooths a elevations based on max specified slope (max_slope).
Preserves either the min or max based on anchor_type.
Args:
ugrid (UGrid): Unstructured grid defining the connectivity of the elevations.
elevations (iterable): Array of the current elevations.
max_slope (float): Maximum allowable slope.
anchor_to (str): Option to anchor to the minimum or maximum size ('min' or 'max')
points_flag (iterable): Flag to indicate if the value at the point should be adjusted (a value of True will
skip the point). Leave the bitset empty to process all points.
Returns:
Array of smoothed elevations.
"""
anchor_types = {'min': 0, 'max': 1}
if anchor_to not in anchor_types.keys():
raise ValueError("anchor_to must be one of 'min' or 'max', not {}".format(anchor_to))
return mesh_utils.SmoothElevBySlopeUGrid(ugrid._instance, elevations, max_slope, anchor_types[anchor_to],
points_flag)
def generate_mesh(mesh_io):
"""Creates a mesh from the input polygons.
Args:
mesh_io (MultiPolyMesherIo): Input polygons and options for generating a mesh.
Returns:
True if the mesh was generated successfully, False otherwise, and a string of messages.
"""
return mesh_utils.generate_mesh(mesh_io._instance)
def generate_2dm(mesh_io, file_name, precision=15):
"""Creates a mesh from the input polygons and writes it to a 2dm file.
Args:
mesh_io (MultiPolyMesherIo): Input polygons and options for generating a mesh.
file_name (str): The file name of the output 2dm file.
precision (int, optional): The decimal point precision of the resulting mesh.
Returns:
True if the mesh was generated successfully, False otherwise, and a string of messages.
"""
return mesh_utils.generate_2dm(mesh_io._instance, file_name, precision)
def check_mesh_input_topology(mesh_io):
"""Checks if the input polygons intersect one another.
Args:
mesh_io (MultiPolyMesherIo): Input polygons and options for generating a mesh.
Returns:
True if mesh inputs are topologically correct, and a string of messages.
"""
return mesh_utils.check_mesh_input_topology(mesh_io._instance)
def redistribute_poly_line(polyline, size):
"""Redistributes the points along a line to a constant spacing.
Args:
polyline (iterable): Input poly line locations.
size (float): The desired spacing for point redistribution.
Returns:
redistributed poly line locations.
"""
return mesh_utils.redistribute_poly_line(polyline, size)
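# Illustrative sketch (hypothetical polyline): respace the points of a straight
# 10-unit segment at a 2-unit interval; locations are assumed to be (x, y, z) tuples.
def _redistribute_poly_line_example():
    polyline = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
    return redistribute_poly_line(polyline, size=2.0)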
| 42.175824 | 119 | 0.689812 | 1,051 | 7,676 | 4.880114 | 0.166508 | 0.043673 | 0.029245 | 0.028076 | 0.742835 | 0.722558 | 0.71164 | 0.699162 | 0.692338 | 0.67947 | 0 | 0.002562 | 0.237363 | 7,676 | 181 | 120 | 42.40884 | 0.873591 | 0.601876 | 0 | 0.394737 | 1 | 0 | 0.131519 | 0.026833 | 0 | 0 | 0 | 0 | 0 | 1 | 0.263158 | false | 0 | 0.026316 | 0 | 0.552632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
1cb84829e00c91918677c90bde004db15c9ae926 | 8,534 | py | Python | src/napari_lf/lfa/lflib/solvers/mrnsd.py | PolarizedLightFieldMicroscopy/napari-LF | b8b16e21424a1fc3a3fdd7f79099aa252480d75a | ["BSD-3-Clause"] | null | null | null | src/napari_lf/lfa/lflib/solvers/mrnsd.py | PolarizedLightFieldMicroscopy/napari-LF | b8b16e21424a1fc3a3fdd7f79099aa252480d75a | ["BSD-3-Clause"] | 1 | 2022-03-22T15:57:27.000Z | 2022-03-22T15:57:27.000Z | src/napari_lf/lfa/lflib/solvers/mrnsd.py | PolarizedLightFieldMicroscopy/napari-LF | b8b16e21424a1fc3a3fdd7f79099aa252480d75a | ["BSD-3-Clause"] | null | null | null | import numpy as np
import time
# ----------------------------------------------------------------------------------------
# Modified Residual Norm Steepest Descent SOLVER
# ----------------------------------------------------------------------------------------
def mrnsd_reconstruction(A, b, Rtol = 1e-6, NE_Rtol = 1e-6, max_iter = 100, x0 = None):
'''
Modified Residual Norm Steepest Descent
Nonnegatively constrained steepest descent method.
Ported from the RestoreTools MATLAB package available at:
http://www.mathcs.emory.edu/~nagy/RestoreTools/
Input: A - object defining the coefficient matrix.
b - Right hand side vector.
Optional Inputs:
options - Structure that can have:
x0 - initial guess (must be strictly positive); default is x0 = 1
max_iter - integer specifying maximum number of iterations;
default is 100
Rtol - stopping tolerance for the relative residual,
norm(b - A*x)/norm(b)
default is 1e-6
NE_Rtol - stopping tolerance for the relative residual,
norm(A.T*b - A.T*A*x)/norm(A.T*b)
default is 1e-6
Output:
x - solution
Original MATLAB code by J. Nagy, August, 2011
References:
[1] J. Nagy, Z. Strakos.
"Enforcing nonnegativity in image reconstruction algorithms"
in Mathematical Modeling, Estimation, and Imaging,
David C. Wilson, et.al., Eds., 4121 (2000), pg. 182--190.
[2] L. Kaufman.
"Maximum likelihood, least squares and penalized least squares for PET",
IEEE Trans. Med. Imag. 12 (1993) pp. 200--214.
'''
# The A operator represents a large, sparse matrix that has dimensions [ nrays x nvoxels ]
nrays = A.shape[0]
nvoxels = A.shape[1]
# Pre-compute some values for use in stopping criteria below
b_norm = np.linalg.norm(b)
trAb = A.rmatvec(b)
trAb_norm = np.linalg.norm(trAb)
# Start the optimization from the initial volume of a focal stack.
if x0 is not None:
x = x0
else:
x = np.ones(nvoxels)
Rnrm = np.zeros(max_iter+1);
Xnrm = np.zeros(max_iter+1);
NE_Rnrm = np.zeros(max_iter+1);
eps = np.spacing(1)
tau = np.sqrt(eps);
sigsq = tau;
minx = x.min()
# If initial guess has negative values, compensate
if minx < 0:
x = x - min(0,minx) + sigsq;
# Initialize some values before iterations begin.
Rnrm = np.zeros((max_iter+1, 1))
Xnrm = np.zeros((max_iter+1, 1))
# Initial Iteration
r = b - A.matvec(x)
g = -(A.rmatvec(r));
xg = x * g;
gamma = np.dot(g.T, xg);
for i in range(max_iter):
tic = time.time()
x_prev = x
# STEP 1: MRNSD Update step
s = - x * g;
u = A.matvec(s);
theta = gamma / np.dot(u.T, u);
neg_ind = np.nonzero(s < 0)
zero_ratio = -x[neg_ind] / s[neg_ind]
if zero_ratio.shape[0] == 0:
alpha = theta
else:
alpha = min( theta, zero_ratio.min() );
x = x + alpha*s;
g = g + alpha * A.rmatvec(u);
xg = x * g;
gamma = np.dot(g.T, xg);
# STEP 2: Compute residuals and check stopping criteria
Rnrm[i] = np.sqrt(gamma) / b_norm
Xnrm[i] = np.linalg.norm(x - x_prev) / nvoxels
toc = time.time()
print('\t--> [ MRNSD Iteration %d (%0.2f seconds) ] ' % (i, toc-tic))
print('\t Residual Norm: %0.4g (tol = %0.2e) ' % (Rnrm[i], Rtol))
print('\t Update Norm: %0.4g ' % (Xnrm[i]))
# stop because residual satisfies ||b-A*x|| / ||b||<= Rtol
if Rnrm[i] <= Rtol:
break
return x.astype(np.float32)
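# ----------------------------------------------------------------------------------------
# Illustrative usage sketch (not part of the original solver): MRNSD only needs an
# operator exposing .shape, .matvec and .rmatvec, so a scipy LinearOperator wrapped
# around a small nonnegative matrix is enough to exercise the solver.
# ----------------------------------------------------------------------------------------
def _mrnsd_usage_example():
    from scipy.sparse.linalg import aslinearoperator
    A = aslinearoperator(np.random.rand(50, 20))  # nonnegative 50x20 system matrix
    b = A.matvec(np.random.rand(20))              # synthetic right-hand side
    return mrnsd_reconstruction(A, b, max_iter=25)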
# ----------------------------------------------------------------------------------------
# Weighted Modified Residual Norm Steepest Descent SOLVER
# ----------------------------------------------------------------------------------------
def wmrnsd_reconstruction(A, b,
Rtol = 1e-6, NE_Rtol = 1e-6, max_iter = 100, x0 = None,
sigmaSq = 0.0, beta = 0.0):
'''
Modified Residual Norm Steepest Descent
Nonnegatively constrained steepest descent method.
Ported from the RestoreTools MATLAB package available at:
http://www.mathcs.emory.edu/~nagy/RestoreTools/
Input: A - object defining the coefficient matrix.
b - Right hand side vector.
Optional Inputs:
options - Structure that can have:
x0 - initial guess (must be strictly positive); default is x0 = 1
sigmaSq - the square of the standard deviation for the
white Gaussian read noise (variance)
beta - Poisson parameter for background light level
max_iter - integer specifying maximum number of iterations;
default is 100
Rtol - stopping tolerance for the relative residual,
norm(b - A*x)/norm(b)
default is 1e-6
NE_Rtol - stopping tolerance for the relative residual,
norm(A.T*b - A.T*A*x)/norm(A.T*b)
default is 1e-6
Output:
x - solution
Original MATLAB code by J. Nagy, August, 2011
References:
[1] J. Nagy, Z. Strakos.
"Enforcing nonnegativity in image reconstruction algorithms"
in Mathematical Modeling, Estimation, and Imaging,
David C. Wilson, et.al., Eds., 4121 (2000), pg. 182--190.
[2] L. Kaufman.
"Maximum likelihood, least squares and penalized least squares for PET",
IEEE Trans. Med. Imag. 12 (1993) pp. 200--214.
'''
# The A operator represents a large, sparse matrix that has dimensions [ nrays x nvoxels ]
nrays = A.shape[0]
nvoxels = A.shape[1]
# Pre-compute some values for use in stopping criteria below
b_norm = np.linalg.norm(b)
trAb = A.rmatvec(b)
trAb_norm = np.linalg.norm(trAb)
# Start the optimization from the initial volume of a focal stack.
if x0 is not None:
x = x0
else:
x = np.ones(nvoxels)
Rnrm = np.zeros(max_iter+1);
Xnrm = np.zeros(max_iter+1);
NE_Rnrm = np.zeros(max_iter+1);
eps = np.spacing(1)
tau = np.sqrt(eps);
sigsq = tau;
minx = x.min()
# If initial guess has negative values, compensate
if minx < 0:
x = x - min(0,minx) + sigsq;
# Initialize some values before iterations begin.
Rnrm = np.zeros((max_iter+1, 1))
Xnrm = np.zeros((max_iter+1, 1))
# Initial Iteration
c = b + sigmaSq;
b = b - beta;
r = b - A.matvec(x);
trAr = A.rmatvec(r);
wt = np.sqrt(c);
for i in range(max_iter):
tic = time.time()
x_prev = x
# STEP 1: WMRNSD Update step
v = A.rmatvec(r/c)
d = x * v;
w = A.matvec(d);
w = w/wt;
tau_uc = np.dot(d.T,v) / np.dot(w.T,w);
neg_ind = np.nonzero(d < 0)
zero_ratio = -x[neg_ind] / d[neg_ind]
if zero_ratio.shape[0] == 0:
tau = tau_uc;
else:
tau_bd = np.min( zero_ratio );
tau = min(tau_uc, tau_bd);
x = x + tau*d;
w = w * wt;
r = r - tau*w;
trAr = A.rmatvec(r);
# STEP 2: Compute residuals and check stopping criteria
Rnrm[i] = np.linalg.norm(r) / b_norm
Xnrm[i] = np.linalg.norm(x - x_prev) / nvoxels
NE_Rnrm[i] = np.linalg.norm(trAr) / trAb_norm
toc = time.time()
print('\t--> [ MRNSD Iteration %d (%0.2f seconds) ] ' % (i, toc-tic))
print('\t Residual Norm: %0.4g (tol = %0.2e) ' % (Rnrm[i], Rtol))
print('\t Error Norm: %0.4g (tol = %0.2e) ' % (NE_Rnrm[i], NE_Rtol))
print('\t Update Norm: %0.4g ' % (Xnrm[i]))
# stop because residual satisfies ||b-A*x|| / ||b||<= Rtol
if Rnrm[i] <= Rtol:
break
# stop because normal equations residual satisfies ||A'*b-A'*A*x|| / ||A'b||<= NE_Rtol
if NE_Rnrm[i] <= NE_Rtol:
break
return x.astype(np.float32)
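# Illustrative usage sketch (hypothetical noise parameters): the weighted variant
# additionally takes the Gaussian read-noise variance (sigmaSq) and the Poisson
# background light level (beta).
def _wmrnsd_usage_example():
    from scipy.sparse.linalg import aslinearoperator
    A = aslinearoperator(np.random.rand(50, 20))
    b = A.matvec(np.random.rand(20)) + 0.1  # add a small background level
    return wmrnsd_reconstruction(A, b, max_iter=25, sigmaSq=0.01, beta=0.1)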
#---------------------------------------------------------------------------------------
if __name__ == "__main__":
pass
| 32.082707 | 99 | 0.517577 | 1,111 | 8,534 | 3.916292 | 0.215122 | 0.025741 | 0.022983 | 0.032177 | 0.860722 | 0.845783 | 0.83498 | 0.802574 | 0.791542 | 0.783268 | 0 | 0.028735 | 0.323061 | 8,534 | 265 | 100 | 32.203774 | 0.724424 | 0.502344 | 0 | 0.640351 | 0 | 0 | 0.098965 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017544 | false | 0.008772 | 0.017544 | 0 | 0.052632 | 0.061404 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e81684984e6da00aec8da30c65a7a82541a23bfc | 113 | py | Python | src/chapter2/exercise2.py | ManuelA1000/BCS-2021 | 0bdf8a165b6e9e79c33257919a44b4be3cd49a57 | ["MIT"] | null | null | null | src/chapter2/exercise2.py | ManuelA1000/BCS-2021 | 0bdf8a165b6e9e79c33257919a44b4be3cd49a57 | ["MIT"] | null | null | null | src/chapter2/exercise2.py | ManuelA1000/BCS-2021 | 0bdf8a165b6e9e79c33257919a44b4be3cd49a57 | ["MIT"] | null | null | null | user_name = input("Type in your name:")
#print("You're welcome," + user_name)
print(f"You're welcome",user_name)
| 28.25 | 39 | 0.716814 | 20 | 113 | 3.9 | 0.55 | 0.307692 | 0.307692 | 0.410256 | 0.512821 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106195 | 113 | 3 | 40 | 37.666667 | 0.772277 | 0.318584 | 0 | 0 | 0 | 0 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
e82f7e86518798b0b2d9d6cc3e767d6119bf4c84 | 21 | py | Python | ms_api_rest/vistas/__init__.py | leojim06/miso-cloud-api | 4ed507f65fb777edf8b4edb4a2e6e7807e62745c | ["MIT"] | null | null | null | ms_api_rest/vistas/__init__.py | leojim06/miso-cloud-api | 4ed507f65fb777edf8b4edb4a2e6e7807e62745c | ["MIT"] | null | null | null | ms_api_rest/vistas/__init__.py | leojim06/miso-cloud-api | 4ed507f65fb777edf8b4edb4a2e6e7807e62745c | [
"MIT"
] | null | null | null | from .vistas import * | 21 | 21 | 0.761905 | 3 | 21 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 21 | 1 | 21 | 21 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1c07d3d48cc42fb656d45e1902c30e25679d93f8 | 119 | py | Python | HackerRank/Python/Built-Ins/ginortS.py | TISparta/competitive-programming-solutions | 31987d4e67bb874bf15653565c6418b5605a20a8 | ["MIT"] | 1 | 2018-01-30T13:21:30.000Z | 2018-01-30T13:21:30.000Z | HackerRank/Python/Built-Ins/ginortS.py | TISparta/competitive-programming-solutions | 31987d4e67bb874bf15653565c6418b5605a20a8 | ["MIT"] | null | null | null | HackerRank/Python/Built-Ins/ginortS.py | TISparta/competitive-programming-solutions | 31987d4e67bb874bf15653565c6418b5605a20a8 | ["MIT"] | 1 | 2018-08-29T13:26:50.000Z | 2018-08-29T13:26:50.000Z | print(*sorted(input(),key = lambda x: (x.isdigit() and int(x)&1==0, x.isdigit(), x.isupper(), x.islower(), x)),sep="")
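# The sort key orders characters as: lowercase letters, then uppercase letters, then odd digits, then even digits.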
| 59.5 | 118 | 0.596639 | 21 | 119 | 3.380952 | 0.666667 | 0.225352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018519 | 0.092437 | 119 | 1 | 119 | 119 | 0.638889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
1c15bf736e55a2fa37df79643b4120a17e411204 | 130,513 | py | Python | test/pharma/supply_chain/pharma/pharmaplotlymanager.py | IBM/dse-do-dashboard | 30348da5414ef03890e6f3b92a36afc77757a021 | ["Apache-2.0"] | 1 | 2022-03-24T13:05:22.000Z | 2022-03-24T13:05:22.000Z | test/pharma/supply_chain/pharma/pharmaplotlymanager.py | IBM/dse-do-dashboard | 30348da5414ef03890e6f3b92a36afc77757a021 | ["Apache-2.0"] | 18 | 2022-01-13T15:14:52.000Z | 2022-03-09T22:58:36.000Z | test/pharma/supply_chain/pharma/pharmaplotlymanager.py | IBM/dse-do-dashboard | 30348da5414ef03890e6f3b92a36afc77757a021 | ["Apache-2.0"] | 2 | 2022-01-19T21:34:58.000Z | 2022-01-19T21:35:20.000Z | from typing import List, Dict, Tuple, Optional
import pandas as pd
from supply_chain.pharma.pharmadatamanager import PharmaDataManager
from supply_chain.scnfo.scnfoplotlymanager import ScnfoPlotlyManager
import plotly.express as px
import plotly.graph_objs as go
import numpy as np
from dse_do_dashboard.utils.dash_common_utils import plotly_figure_exception_handler
#######################################################################################
# Pharma
#######################################################################################
class PharmaPlotlyManager(ScnfoPlotlyManager):
def __init__(self, dm:PharmaDataManager):
super().__init__(dm)
# self.line_name_category_orders = ['Abbott_Weesp_Line','Abbott_Olst_Granulate_Line',
# 'Abbott_Olst_Packaging_Line_5','Abbott_Olst_Packaging_Line_6']
# self.plant_name_category_orders = ['Abbott_Weesp_Plant', 'Abbott_Olst_Plant']
self.line_name_category_orders = ['API_Line','Granulate_Line', 'Packaging_Line_1','Packaging_Line_2']
self.plant_name_category_orders = ['API_Plant', 'Packaging_Plant']
def describe_demand(self):
"""Print summary of demand statistics."""
super().describe_demand()
df = (self.dm.demand
.join(self.dm.products[['productGroup']])
.reset_index())
print(f"Num product types = {len(df.productGroup.unique()):,}")
# def plotly_demand_bars(self):
# """Product demand over time. Colored by productGroup."""
# product_aggregation_column = 'productGroup'
# df = (self.dm.demand
# .join(self.dm.products[['productGroup']])
# ).groupby(['timePeriodSeq', product_aggregation_column]).sum()
# # display(df.head())
#
# labels = {'timePeriodSeq': 'Time Period', 'quantity': 'Demand', 'productName': 'Product Name',
# 'productGroup': 'Product Group'}
# fig = px.bar(df.reset_index(), x="timePeriodSeq", y="quantity", color=product_aggregation_column,
# title='Total Product Demand', labels=labels)
# fig.update_layout(
# # title={
# # 'text': f"Total product demand",
# # # 'y': 0.9,
# # # 'x': 0.5,
# # 'xanchor': 'center',
# # 'yanchor': 'top'},
# legend={'orientation': 'v'},
# # legend_title_text=product_aggregation_column,
# )
#
# return fig
def gen_color_col(self, catSeries = None):
'''Converts a series into a set of color codes
NEEDS TO BE CALLED ON ENTIRE SERIES, NOT SUBSETTED VERSION
'''
cmap = ["#004172", "#08539d", "#2e64c7", "#be35a0", "#e32433", "#eb6007",
"#fb8b00", "#c19f00", "#5c9c00", "#897500", "#cb0049", "#7746ba", "#0080d1",
"#3192d2", "#ac6ac0", "#e34862", "#c57e00", "#71a500", "#ad6e00", "#b82e2e",]
color_dict = {
'Other': "#7FB3D5",
'API': "#B03A2E",
' - API': "#B03A2E",
'Granulate': "#1F618D",
'Tablet': "#117A65",
'Package': "#B7950B",
}
if catSeries is not None:
catSeries = catSeries.dropna() # some NAs get introduced for some reason
labels = list(catSeries.unique())
if ' - API' not in labels or 'API' not in labels:
labels.append(' - API')
labels = sorted(labels)
cmap_ix = 0
for ix in range(len(labels)):
if cmap_ix == len(cmap):
cmap_ix = 0
# no else here: after the palette wraps, the current label must still be assigned a color below
if 'Granulate' in labels[ix]:
color_dict[labels[ix]] = "#1F618D"
elif 'Tablet' in labels[ix]:
color_dict[labels[ix]] = "#117A65"
elif 'Package' in labels[ix]:
color_dict[labels[ix]] = "#B7950B"
elif 'API' in labels[ix]:
color_dict[labels[ix]] = "#B03A2E"
if labels[ix] not in color_dict:
color_dict[labels[ix]] = cmap[cmap_ix]
cmap_ix += 1
return color_dict
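# Illustrative usage sketch (hypothetical labels): given a country-product series such as
# pd.Series(['NL - Tablet 10mg', 'DE - Granulate', 'US - API']), gen_color_col returns a
# dict mapping each label to a color, with fixed colors for API/Granulate/Tablet/Package labels.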
def plotly_demand_bars(self, query=None, title='Total Product Demand', view = "All"):
"""Product demand over time. Colored by productGroup."""
product_aggregation_column = 'productName'
df = (self.dm.demand
.join(self.dm.products[['productGroup', 'productCountry']])
)
# df = (self.dm.demand # will return two dfs
# .join(self.dm.products[['productGroup', 'productCountry']])
# )
df = df.reset_index()
df['productCountry'] = np.where(pd.isnull(df.productCountry), '', df.productCountry)
df['location_product'] = df.productCountry + " - " + df.productName
color_discrete_map = self.gen_color_col(df.location_product)
if query is not None:
df = df.query(query).copy()
# Set location_product name
df = df.reset_index()
df = (df
.groupby(['timePeriodSeq', 'location_product']).sum()
.sort_values('quantity', ascending=False)
)
df['demand_proportion'] = df.groupby(['timePeriodSeq'])['quantity'].apply(lambda x: x/x.sum())
df = df.reset_index()
df['new_labels'] = np.where(df['demand_proportion'] < 0.015, 'Other', df['location_product'])
# cmap = px.colors.qualitative.Light24
new_labels = df['new_labels'].unique()
labels = {'timePeriodSeq': 'Time Period', 'quantity': 'Demand', 'productName': 'Product Name',
'productGroup': 'Product Group'}
if view == "All":
color = "location_product"
elif view == "Compact":
color = "new_labels"
fig = px.bar(df.reset_index(), x="timePeriodSeq", y="quantity",
color= color,
title=title, labels=labels,
# color_discrete_sequence=px.colors.qualitative.Light24,
color_discrete_map=color_discrete_map,
height=600,
hover_name="location_product",
# hover_data=["quantity"]
)
fig.update_layout(
legend={
'title': f"Total Product Demand",
'bgcolor':'rgba(0,0,0,0)', # transparent background? not sure if this works
'x': 1,
'orientation': 'v'},
margin = {'l':80,'t':50},
hovermode="closest",
)
return fig
def plotly_utilization_multi_facet_bars(self):
"""Line utilization colored by groupID.
Shows which groupIDs claim how much capacity on which lines.
Could be used to analyze why certain lines cannot produce enough of a given product,
i.e. that they are busy with other products."""
product_aggregation_column = 'productGroup'
df = (self.dm.production_activities[['line_capacity_utilization']]
.join(self.dm.products[['productGroup']])
).groupby(['timePeriodSeq', 'lineName', product_aggregation_column]).sum()
labels = {'timePeriodSeq': 'Time Period', 'var_name': 'Utilization Type', 'lineName': 'Line Name',
'line_capacity_utilization': 'Line Capacity Utilization'}
fig = px.bar(df.reset_index(), x="lineName", y="line_capacity_utilization", color=product_aggregation_column,
title='Line Utilization', labels=labels,
facet_col="timePeriodSeq",
)
# get rid of duplicated X-axis labels
for axis in fig.layout:
if type(fig.layout[axis]) == go.layout.XAxis:
fig.layout[axis].title.text = ''
# fig.for_each_trace(lambda t: t.update(name=t.name.split()[-1]))
fig.for_each_annotation(lambda a: a.update(text=a.text.split()[-1]))
fig.update_layout(yaxis=dict(tickformat="%", ))
fig.update_layout(hovermode="closest") # Is supposed to be the default, but in DE we get multiple. Setting 'closest' explicitly is a work-around
fig.update_layout(
legend=
dict( # change legend location
title = "Product Group",
orientation="h",
yanchor="top",
y=1.3,
xanchor="right",
x=0.95),
# legend_title_text=None # this doesn't align the legend still
)
return fig
def plotly_excess_utilization_line_time_bars(self):
"""Line utilization bar per line over time, clustered by time-period.
Excess utilization over 100% is clearly colored as red.
Good initial view of utilization and excess utilization.
"""
df = (self.dm.line_utilization.copy()
)
df['Regular Capacity'] = df.utilization.clip(0, 1)  # portion of utilization up to 100%
df['Over Capacity'] = (df.utilization - 1).clip(0)  # portion of utilization above 100%
df = df[['Regular Capacity', 'Over Capacity']]
df = (df.stack()
.rename_axis(index={None: 'var_name'})
.to_frame(name='Utilization')
.reset_index()
)
labels = {'timePeriodSeq': 'Time Period', 'var_name': 'Utilization Type', 'lineName': 'Line Name'}
fig = px.bar(df.reset_index(), x="timePeriodSeq", y="Utilization", color='var_name', title='Line Utilization',
labels=labels,
facet_row="lineName",
# width = 2000
color_discrete_map = {'Regular Capacity':'green', 'Over Capacity':'red'},
height = 800,
)
fig.update_layout(
legend=
dict( # change legend location
title = "Utilization Type",
orientation="h",
yanchor="top",
y=1.05,
xanchor="right",
x=0.95),
)
fig.update_layout(hovermode="closest") # Is supposed to be the default, but in DE we get multiple. Setting 'closest' explicitly is a work-around
### This gets rid of the duplicated Y Axis labels caused by the facet_row argument
for axis in fig.layout:
if type(fig.layout[axis]) == go.layout.YAxis:
fig.layout[axis].title.text = ''
fig.layout[axis].tickformat = '%'
fig.for_each_annotation(lambda a: a.update(text=a.text.split("Line Name=")[-1]))
fig.for_each_annotation(lambda a: a.update(text=a.text.replace("_", " ")))
fig.for_each_annotation(lambda a: a.update(text=a.text.replace("Olst", "Olst<br>")))
fig.for_each_annotation(lambda a: a.update(x = a.x-1.07, textangle = 270))
fig.update_layout(
legend=
dict( # change legend location
title = "Product Group",
orientation="v",
x=1.05,
yanchor="top"
),
margin = {'l' : 130, 't':80}
)
return fig
def plotly_utilization_line_time_bars(self):
"""Line utilization colored by groupID.
Shows which groupIDs claim how much capacity on which lines.
Could be used to analyze why certain lines cannot produce enough of a given product,
i.e. that they are busy with other products."""
product_aggregation_column = 'productGroup'
df = (self.dm.production_activities[['line_capacity_utilization']]
.join(self.dm.products[['productGroup']])
).groupby(['timePeriodSeq', 'lineName', product_aggregation_column]).sum()
color_discrete_map = self.gen_color_col()
labels = {'timePeriodSeq': 'Time Period', 'var_name': 'Utilization Type', 'lineName': 'Line Name',
'line_capacity_utilization':'Line Capacity Utilization'}
fig = px.bar(df.reset_index(), x="timePeriodSeq", y="line_capacity_utilization", color=product_aggregation_column,
title='Line Utilization', labels=labels, facet_row = 'lineName',
color_discrete_map=color_discrete_map,
category_orders={
product_aggregation_column: ['API', 'Granulate', 'Tablet', 'Package'],
# 'lineName': ['Abbott_Weesp_Line', 'Abbott_Olst_Granulate_Line',
# 'Abbott_Olst_Packaging_Line_5', 'Abbott_Olst_Packaging_Line_6' ],
'lineName' : self.line_name_category_orders,
'timePeriodSeq': df.reset_index().timePeriodSeq.sort_values().unique() },
height=800,
)
fig.update_layout(
legend=
dict( # change legend location
title = "Product Group",
orientation="v",
yanchor="top",
y=1.1,
xanchor="right",
x=1.05),
margin = {'l': 130,'t':80}
)
fig.update_layout(hovermode="closest") # Is supposed to be the default, but in DE we get multiple. Setting 'closest' explicitly is a work-around
### This gets rid of the duplicated Y Axis labels caused by the facet_row argument
for axis in fig.layout:
if type(fig.layout[axis]) == go.layout.YAxis:
fig.layout[axis].title.text = ''
fig.layout[axis].tickformat = '%'
fig.for_each_annotation(lambda a: a.update(x = a.x -1.07, textangle = 270))
fig.for_each_annotation(lambda a: a.update(text=a.text.split("Line Name=")[-1]))
fig.for_each_annotation(lambda a: a.update(text=a.text.replace("_", " ")))
fig.for_each_annotation(lambda a: a.update(text=a.text.replace("Olst", "Olst<br>")))
return fig
def plotly_line_utilization_heatmap_v2(self):
"""
Trying multiple traces to see if we can get a more clear color difference for utilization > 100%
Can't get the hover to work with multiple traces
"""
# product_aggregation_column = 'groupID'
df = ((self.dm.production_activities
)
)
df = df.pivot_table(values='line_capacity_utilization', index=['lineName'], columns=['timePeriodSeq'], aggfunc=np.sum)
hovertemplate ='<b>Utilization: %{z:.1%}</b><br>Line: %{y} <br>Time Period: %{x} '
trace = go.Heatmap(z=df.values, x=df.columns, y=df.index, colorscale='Portland', zmin = 0, zmid =1, hovertemplate=hovertemplate) #colorscale='rdbu',
fig = go.Figure(data=[trace], layout=go.Layout(width=1000, height=600))
return fig
def plotly_demand_fullfilment(self, mode=None):
"""Demand, Fulfilled, Unfulfilled, Backlog, BacklogResupply and Inventory over time, grouped by time-period.
Colored by groupID.
Very useful graph since it contains all critical variables at the demand locations. Good for explanation.
"""
# Collect transportation activities into a destination location.
# (later we'll do a left join to only select transportation into a demand location and ignore all other transportation activities)
df0 = (self.dm.transportation_activities[['xTransportationSol']]
.groupby(['productName', 'destinationLocationName', 'timePeriodSeq']).sum()
.rename_axis(index={'destinationLocationName': 'locationName'})
.rename(columns={'xTransportationSol':'Transportation'})
)
# display(df0.head())
product_aggregation_column = 'productGroup'
df = (self.dm.demand_inventories[['quantity','xFulfilledDemandSol','xUnfulfilledDemandSol','xBacklogSol','xBacklogResupplySol','xInvSol']]
.join(self.dm.products[['productGroup']])
# .join(self.dm.locations)
.join(df0, how='left')
).groupby(['timePeriodSeq', product_aggregation_column]).sum()
if 'relative_week' in df.columns: # TODO: remove if not relevant anymore
df = df.drop(columns=['relative_week'])
# display(df.head())
df = (df
# .drop(columns=['relative_week'])
.rename(
columns={'quantity': 'Demand', 'xFulfilledDemandSol': 'Fulfilled', 'xUnfulfilledDemandSol': 'Unfulfilled',
'xBacklogSol': 'Backlog', 'xBacklogResupplySol': 'Backlog Resupply', 'xInvSol': 'Inventory'})
)
df = (df.stack()
.rename_axis(index={None: 'var_name'})
.to_frame(name='quantity')
.reset_index()
)
labels = {'timePeriodSeq': 'Time Period', 'quantity': 'Demand', 'productName': 'Product Name', 'productGroup':'Product Group',
'var_name': 'Var'}
if mode is None: #'bar_subplot_by_time'
fig = px.bar(df, x="var_name", y="quantity", color=product_aggregation_column, title="Demand", labels=labels,
facet_col="timePeriodSeq",
category_orders={
'var_name': ['Demand', 'Transportation', 'Fulfilled', 'Unfulfilled', 'Backlog', 'Backlog Resupply',
'Inventory']},
height=700
)
elif mode == 'multi_line':
fig = px.line(df, x="timePeriodSeq", y="quantity", color='var_name', title="Demand", labels=labels,
facet_row=product_aggregation_column,
height=700
)
elif mode == 'animated_horizontal_bars':
fig = px.bar(df, y="var_name", x="quantity", color=product_aggregation_column, title="Demand", labels=labels,
# facet_col="timePeriodSeq",
animation_frame="timePeriodSeq",
category_orders={
'var_name': ['Demand', 'Transportation', 'Fulfilled', 'Unfulfilled', 'Backlog', 'Backlog Resupply',
'Inventory']},
height=700
)
elif mode == 'animated_vertical_bars':
fig = px.bar(df, x="timePeriodSeq", y="quantity", color=product_aggregation_column, title="Demand", labels=labels,
# facet_col="timePeriodSeq",
animation_frame="timePeriodSeq",
facet_row = 'var_name',
category_orders={
'var_name': ['Demand', 'Transportation', 'Fulfilled', 'Unfulfilled', 'Backlog', 'Backlog Resupply',
'Inventory']},
height=700
)
fig.update_layout(hovermode="closest") # Is supposed to be the default, but in DE we get multiple. Setting 'closest' explicitly is a work-around
return fig
def plotly_demand_fullfilment_multi_plot(self, mode=None, var_names=None):
"""Demand, Fulfilled, Unfulfilled, Backlog, BacklogResupply and Inventory over time, grouped by time-period.
Colored by country-product combination.
Very useful graph since it contains all critical variables at the demand locations. Good for explanation.
"""
# Collect transportation activities into a destination location.
# (later we'll do a left join to only select transportation into a demand location and ignore all other transportation activities)
df0 = (self.dm.transportation_activities[['xTransportationSol']]
.groupby(['productName', 'destinationLocationName', 'timePeriodSeq']).sum()
.rename_axis(index={'destinationLocationName': 'locationName'})
.rename(columns={'xTransportationSol':'Transportation'})
)
# display(df0.head())
# print(f"products in demand = {self.dm.demand_inventories.index.get_level_values('productName').unique()}")
# print(f"products = {self.dm.products[['productGroup', 'productCountry']]}")
product_aggregation_column = 'productName'
df = (self.dm.demand_inventories[['quantity','xFulfilledDemandSol','xUnfulfilledDemandSol','xBacklogSol','xBacklogResupplySol','xInvSol']]
.join(self.dm.products[['productGroup', 'productCountry']], how='left')
# .join(self.dm.locations)
.join(df0, how='left')
).groupby(['timePeriodSeq', product_aggregation_column, "productCountry"]).sum()
# print(f"products = {df.index.get_level_values('productName').unique()}")
if 'relative_week' in df.columns: # TODO: remove if not relevant anymore
df = df.drop(columns=['relative_week'])
df = (df
.rename(
columns={'quantity': 'Demand', 'xFulfilledDemandSol': 'Fulfilled', 'xUnfulfilledDemandSol': 'Unfulfilled',
'xBacklogSol': 'Backlog', 'xBacklogResupplySol': 'Backlog Resupply', 'xInvSol': 'Inventory'})
)
df = (df.stack()
.rename_axis(index={None: 'var_name'})
.to_frame(name='quantity')
.reset_index()
)
var_name_category_order = ['Demand', 'Transportation', 'Fulfilled', 'Unfulfilled', 'Backlog', 'Backlog Resupply', 'Inventory']
num_vars = 6
if var_names is not None:
df = df.query("var_name in @var_names").copy()
num_vars = len(var_names)
var_name_category_order = var_names
df['location_product'] = df.productCountry + " - " + df.productName
color_discrete_map = self.gen_color_col(df['location_product'])
# print(f"color_discrete_map={color_discrete_map}")
# print(f"location_product = {df['location_product'].unique()}")
labels = {'timePeriodSeq': 'Time Period', 'quantity': 'Quantity', 'productName': 'Product Name', 'productGroup':'Product Group',
'var_name': 'Var', 'location_product': 'Product Country'}
active_var_names = []
if mode == 'columns':
fig = px.bar(df, x="timePeriodSeq", y="quantity",
# color=product_aggregation_column,
title="Fulfillment",
labels=labels,
facet_col="var_name",
color = "location_product",
color_discrete_map= color_discrete_map,
category_orders={
# 'var_name': ['Demand', 'Transportation', 'Fulfilled', 'Unfulfilled', 'Backlog', 'Backlog Resupply',
# 'Inventory'],
'var_name': var_name_category_order
},
height=400
)
for axis in fig.layout:
if type(fig.layout[axis]) == go.layout.XAxis:
fig.layout[axis].title.text = ''
fig.update_layout(
# keep the original annotations and add a list of new annotations:
annotations = list(fig.layout.annotations) +
[go.layout.Annotation(
x=0.55,
y=-0.15,
font=dict(
size=14
),
showarrow=False,
text="Time Period",
textangle=0,
xref="paper",
yref="paper"
)
]
)
else: # e.g. None
fig = px.bar(df, x="timePeriodSeq", y="quantity",
# color=product_aggregation_column,
title="Fulfillment", labels=labels,
facet_row="var_name",
color = "location_product",
color_discrete_map= color_discrete_map,
category_orders={
# 'var_name': ['Demand', 'Transportation', 'Fulfilled', 'Unfulfilled', 'Backlog', 'Backlog Resupply',
# 'Inventory'],
'var_name': var_name_category_order
},
height=250*num_vars
)
fig.for_each_annotation(lambda a: a.update(x = a.x -1.045, textangle = 270))
# get rid of duplicated Y-axis labels
for axis in fig.layout:
if type(fig.layout[axis]) == go.layout.YAxis:
fig.layout[axis].title.text = ''
fig.update_layout(hovermode="closest",legend = {'orientation': 'v'}) # Is supposed to be the default, but in DE we get multiple. Setting 'closest' explicitly is a work-around
fig.for_each_annotation(lambda a: a.update(text=a.text.split("Var=")[-1]))
fig.update_layout(legend =
{'orientation': 'v',
'x': 1,
},
margin = {'l': 75}
)
# fig.layout.yaxis2.update(matches=None)
# fig.layout.yaxis3.update(matches=None)
fig.layout.yaxis4.update(matches=None)
fig.update_yaxes(showticklabels=True, col=4) #, col=2
fig.update_layout(
margin={'l': 80, 't': 50, 'r': 20, 'b': 60})
return fig
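# Illustrative sketch (uses the plotly sample 'tips' dataset; hypothetical
# helper, not called by this class): the loop above that blanks per-facet
# axis titles plus a single paper-coordinate annotation is how a faceted
# figure gets one shared axis label instead of N duplicates.
def _sketch_shared_facet_axis_title():
    import plotly.express as px
    import plotly.graph_objects as go
    df = px.data.tips()
    fig = px.bar(df, x='day', y='total_bill', facet_col='sex')
    for axis in fig.layout:
        if isinstance(fig.layout[axis], go.layout.XAxis):
            fig.layout[axis].title.text = ''  # drop the duplicated facet titles
    fig.add_annotation(x=0.5, y=-0.15, xref='paper', yref='paper',
                       showarrow=False, text='Day of Week')  # one shared label
    return fig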
def plotly_wh_inventory(self, mode:str='bar', query=None):
"""Warehouse inventory stacked bar chart by productName.
TODO: remove products that have no inventory over the whole time-line."""
df = (self.dm.warehouse_inventories[['xInvSol']]
# .query("xInvSol > 0")
.join(self.dm.products[['productGroup', 'productCountry']])
.sort_index()
.sort_values(['xInvSol'], ascending=False)
)
if query is not None:
df = df.query(query)
df = df.reset_index()
df['productCountry'] = df['productCountry'].fillna("")
df['location_product'] = df['productCountry'] + " - " + df['productName']
df['location_product'] = df['location_product'].fillna('API')
color_discrete_map = self.gen_color_col(df['location_product'])
labels = {'timePeriodSeq': 'Time Period', 'days_of_supply':'Days of Supply', 'quantity': 'Inventory', 'productName': 'Product Name',
'productGroup': 'Product Group',
'location_product': 'Product Location',
"xInvSol": "Inventory"}
if mode == 'bar':
fig = px.bar(df, x="timePeriodSeq", y="xInvSol",
color='location_product',
color_discrete_map = color_discrete_map,
height=600,
title='Warehouse Inventory', labels=labels)
elif mode == 'area':
fig = px.area(df, x="timePeriodSeq", y="xInvSol",
color='location_product',
color_discrete_map = color_discrete_map,
height=600,
title='Warehouse Inventory', labels=labels)
else:
fig = px.line(df, x="timePeriodSeq", y="xInvSol",
color='location_product',
color_discrete_map = color_discrete_map,
height=600,
title='Warehouse Inventory', labels=labels)
fig.update_layout(
hovermode="closest",
legend={'orientation': 'v',
# 'yanchor': 'middle',
'x': 1.05,
},
margin = {'l': 80,'t':80}
# legend_title_text=product_aggregation_column,
)
return fig
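# Illustrative sketch (synthetic categories; gen_color_col is this class's
# own helper, so a plain dict is built here instead): passing the same
# color_discrete_map to every chart keeps each category's color stable
# across figures, which is why these methods all build one up front.
def _sketch_color_discrete_map():
    import plotly.express as px
    categories = ['NL - ProductA', 'PE - ProductB', 'US - ProductC']  # hypothetical
    palette = px.colors.qualitative.Plotly
    return {c: palette[i % len(palette)] for i, c in enumerate(categories)}
# usage: px.bar(df, ..., color='location_product', color_discrete_map=_sketch_color_discrete_map())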
def plotly_plant_inventory(self, mode:str='bar', query=None):
"""Plant inventory stacked bar chart by productName.
TODO: remove products that have no inventory over the whole time-line."""
df = (self.dm.plant_inventories[['xInvSol']]
# .query("xInvSol > 0") # Doesn't work well: will reduce the number of entries in the horizon
.join(self.dm.products[['productGroup', 'productCountry']])
.sort_index()
.sort_values(['xInvSol'], ascending=False)
)
if query is not None:
df = df.query(query)
df = df.reset_index()
# df = df[df.xInvSol > 0]
df.productCountry = df['productCountry'].fillna("")
df['location_product'] = df['productCountry'] + " - " + df['productName']
# df['location_product'] = df['location_product'].fillna('API')
color_discrete_map = self.gen_color_col(df['location_product'])
labels = {'timePeriodSeq': 'Time Period', 'days_of_supply':'Days of Supply', 'quantity': 'Inventory', 'productName': 'Product Name',
'productGroup': 'Product Group',
'location_product': 'Product Location'}
category_orders = {
# 'locationName': ['Abbott_Weesp_Plant', 'Abbott_Olst_Plant'],
'locationName': self.plant_name_category_orders
}
if mode == 'bar':
fig = px.bar(df, x="timePeriodSeq", y="xInvSol",
facet_row='locationName',
color='location_product',
color_discrete_map = color_discrete_map,
category_orders=category_orders,
height=600,
title='Plant Inventory', labels=labels)
fig.for_each_annotation(lambda a: a.update(x = a.x-1.04, textangle = 270))
elif mode == 'area':
fig = px.area(df, x="timePeriodSeq", y="xInvSol",
facet_row='locationName',
color='location_product',
# color='productName',
color_discrete_map = color_discrete_map,
category_orders=category_orders,
height=600,
title='Plant Inventory', labels=labels)
fig.for_each_annotation(lambda a: a.update(x = a.x-1.08, textangle = 270))
else:
fig = px.line(df, x="timePeriodSeq", y="xInvSol",
color='location_product',
color_discrete_map = color_discrete_map,
category_orders=category_orders,
height=600,
title='Plant Inventory', labels=labels)
fig.update_layout(
hovermode="closest",
legend={'orientation': 'v',
'x': 1.05,},
margin = {'l': 80, 't':80}
)
for axis in fig.layout:
if type(fig.layout[axis]) == go.layout.YAxis:
fig.layout[axis].title.text = ''
fig.for_each_annotation(lambda a: a.update(text=a.text.split("locationName=")[-1]))
fig.for_each_annotation(lambda a: a.update(text=a.text.replace("_", " ")))
return fig
def plotly_demand_inventory(self, mode:str='bar', query=None):
"""Plant inventory stacked bar chart by productName.
TODO: remove products that have no inventory over the whole time-line."""
df = (self.dm.demand_inventories[['xInvSol']]
# .query("xInvSol > 0") # Doesn't work well: will reduce the number of entries in the horizon
.join(self.dm.products[['productGroup', 'productCountry']])
.sort_index()
.sort_values(['xInvSol'], ascending=False)
)
if query is not None:
df = df.query(query)
df = df.reset_index()
df['productCountry'] = df['productCountry'].fillna('')
df['location_product'] = df.productCountry + " - " + df.productName
color_discrete_map = self.gen_color_col(df['location_product'])
labels = {'timePeriodSeq': 'Time Period', 'days_of_supply':'Days of Supply', 'quantity': 'Inventory', 'productName': 'Product Name',
'productGroup': 'Product Group', 'location_product': 'Product Location', 'xInvSol': 'Inventory'}
if mode == 'bar':
fig = px.bar(df, x="timePeriodSeq", y="xInvSol",
color_discrete_map=color_discrete_map,
color='location_product',
height=600,
title='Demand Inventory', labels=labels)
else:
fig = px.line(df, x="timePeriodSeq", y="xInvSol",
color_discrete_map=color_discrete_map,
color='location_product',
height=600,
title='Demand Inventory', labels=labels)
fig.update_layout(
hovermode="closest",
legend={'orientation': 'v',
'x': 1.05},
margin={'l':80,'t':80},
# legend_title_text=product_aggregation_column,
)
return fig
def plotly_line_product_capacity_heatmap(self):
"""Heatmap of capacity as line vs product. Good insight on line specialization/recipe-properties.
Input tables: ['RecipeProperties', 'Line', 'Product']
Output tables: []
"""
df = (self.dm.recipe_properties[['capacity']]
.join(self.dm.lines)
.join(self.dm.products[['productGroup']])
# .join(self.dm.plants.rename(columns={'locationDescr':'plantDescr'}), on='plantName')
# .join(self.dm.locations, on='locationName')
) # .groupby(['lineName','productType']).max()
df = df.reset_index()
# display(df.head())
# df = df.pivot_table(values='capacity', index=['lineDescr'], columns=['productType'], aggfunc=np.max)
df = df.pivot_table(values='capacity', index=['lineName'], columns=['productGroup'], aggfunc=np.max)
df = df.reset_index()
cols = ["API", "Granulate", "Tablet", "Package"]
df= df[cols]
labels = dict(x="Product Group", y="Line", color="Capacity")
# labels = dict(x=["1","2","3","4"], y="Line", color="Capacity")
fig = px.imshow(df, labels=labels, width=1000,
color_continuous_scale='YlOrRd',
# labels = {
# 'x':["1","2","3","4"]
# },
# y = ["Abbott Olst<br>Granulate Line", "Abbott Olst<br>Packaging Line 5", "Abbott Olst<br>Packaging Line 6", "Abbott<br>Weesp Line"],
y = ["Granulate Line", "Packaging Line 1", "Packaging Line 2", "API Line"],
# y = ["API Line", "Granulate Line", "Packaging Line 1", "Packaging Line 2"],
# x = ["API", "Granulate", "Tablet", "Package"],
# template="ggplot2",
)
# for i, label in enumerate(['orignal', 'clean', '3', '4']):
# fig.layout.annotations[i]['text'] = label
# fig.update_xaxes(showticklabels=False).update_yaxes(showticklabels=False)
fig.update_layout(
hovermode="closest",
title={
'text': "Maximum Line Capacity by Product Type",
# 'y': 0.92,
'x': 0.5,
'xanchor': 'center',
'yanchor': 'top'},
margin = {'l': 60,'t':80,'b':60})
return fig
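# Illustrative sketch (synthetic data): the heatmap above is a pivot_table
# (index=line, columns=product group, aggfunc=max) handed to px.imshow;
# any long-format capacity table can be rendered the same way.
def _sketch_capacity_heatmap():
    import numpy as np
    import pandas as pd
    import plotly.express as px
    df = pd.DataFrame({'lineName': ['L1', 'L1', 'L2'],
                       'productGroup': ['API', 'Tablet', 'Tablet'],
                       'capacity': [100, 80, 120]})
    mat = df.pivot_table(values='capacity', index='lineName',
                         columns='productGroup', aggfunc=np.max)
    return px.imshow(mat, labels=dict(x='Product Group', y='Line', color='Capacity'),
                     color_continuous_scale='YlOrRd')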
def plotly_line_package_capacity_heatmap(self):
"""Heatmap of capacity as line vs product. Good insight on line specialization/recipe-properties.
Input tables: ['RecipeProperties', 'Line', 'Product']
Output tables: []
"""
df = (self.dm.recipe_properties[['capacity']]
.join(self.dm.lines)
.join(self.dm.products[['productGroup', 'productCountry']])
# .join(self.dm.plants.rename(columns={'locationDescr':'plantDescr'}), on='plantName')
# .join(self.dm.locations, on='locationName')
.query("productGroup == 'Package'")
) # .groupby(['lineName','productType']).max()
df = df.reset_index()
df.productName = df.productName.astype(str)
df['location_product'] = df['productCountry'] + ' - ' + df['productName']
df['location_product'] = df['location_product'].fillna('API')
# df = df.pivot_table(values='capacity', index=['lineDescr'], columns=['productType'], aggfunc=np.max)
df = df.pivot_table(values='capacity', index= ['lineName'],
columns=['location_product'] , aggfunc=np.max)
labels = dict(x="Line", y="Product", color="Max Capacity")
fig = px.imshow(df,
aspect = 'auto',
labels=labels,
# height = 800,
# width=1000,
color_continuous_scale='YlOrRd',
# y = ["Abbott Olst<br>Packaging Line 5", "Abbott Olst<br>Packaging Line 6"],
# y = ["Packaging Line 1", "Packaging Line 2"],
)
fig.update_layout(
title={
'text': "Maximum Packaging Line Capacity by Product",
# 'y': 0.92,
'x': 0.5,
'xanchor': 'center',
'yanchor': 'top'},
margin = {'l': 140,'t':40,'b':100})
fig.update_xaxes(tickfont={'size':11})
return fig
def plotly_time_product_group_capacity_heatmap(self):
"""Heatmap of capacity over time.
Good for detecting time-variation in capacity.
Input tables: ['RecipeProperties', 'Line', 'Product']
Output tables: []
"""
df = (self.dm.recipe_properties[['capacity']]
.join(self.dm.lines)
.join(self.dm.products[['productGroup']]))
df = df.reset_index()
cols = ["API", "Granulate", "Tablet", "Package"]
# df= df[cols]
df = df.pivot_table(values='capacity', index=['productGroup'], columns=['timePeriodSeq'], aggfunc=np.max)
df= df.reindex(cols)
# print(df.index)
labels = dict(x="Time Period", y="Product Group", color="Capacity")
fig = px.imshow(df, labels=labels,
color_continuous_scale='YlOrRd',
# y = ["API", "Granulate", "Tablet", "Package"]
)
fig.update_layout(
title={
'text': "Maximum Line Capacity by Product Group and Time Period",
'y': 0.95,
'x': 0.5,
'xanchor': 'center',
'yanchor': 'top'},
margin = {'l': 90,'t':80,'b':60})
return fig
def plotly_time_package_capacity_heatmap(self):
"""Heatmap of capacity over time.
Good for detecting time-variation in capacity.
Input tables: ['RecipeProperties', 'Line', 'Product']
Output tables: []
"""
df = (self.dm.recipe_properties[['capacity']]
.join(self.dm.lines)
.join(self.dm.products[['productGroup']]))
df = df.reset_index()
df = df.query("productGroup == 'Package'")
# display(df.head())
df = df.pivot_table(values='capacity', index=['productName'], columns=['timePeriodSeq'], aggfunc=np.max)
labels = dict(x="Time Period", y="Product Name", color="Capacity")
fig = px.imshow(df, labels=labels,
# color_discrete_sequence=px.colors.qualitative.G10 # Doesn't work!
# color_continuous_scale='Turbo',
# color_continuous_scale='YlOrBr',
color_continuous_scale='YlOrRd',
height = 1000,
)
fig.update_layout(
title={
'text': "Maximum Line Capacity by Product and Time Period",
# 'y': 0.95,
'x': 0.5,
'xanchor': 'center',
'yanchor': 'top'},
margin = {'l': 90,'t':80,'b':60})
return fig
# def plotly_time_product_capacity_bars(self):
# """Heatmap of capacity over time.
# Good to detect and time-variation in capacity.
# Input tables: ['RecipeProperties', 'Line', 'Product']
# Output tables: []
# """
# df = (self.dm.recipe_properties[['capacity']]
# .join(self.dm.lines)
# .join(self.dm.products[['productGroup']]))
# # display(df.head())
# df = df[['capacity','productGroup']].groupby(['lineName','timePeriodSeq','productName']).max()
# # display(df.head())
# labels = {'lineName': 'Line', 'productGroup': 'Product Group', 'productName': 'Product Name', 'timePeriodSeq':'Time Period', 'capacity':'Capacity'}
# # labels = dict(x="Time Period", y="Product Group", color="Capacity")
# fig = px.bar(df.reset_index(), x='timePeriodSeq', y='capacity', color='productName',labels=labels,
# facet_col='productGroup',
# category_orders={
# "productGroup": ["API", "Granulate", "Tablet", "Package"]
# },
# # facet_row = 'lineName',
# )
# fig.update_layout(
# hovermode="closest",
# title={
# 'text': "Maximum Line Capacity by Product and Time Period",
# 'y': 0.95,
# 'x': 0.5,
# 'xanchor': 'center',
# 'yanchor': 'top'})
# fig.update_layout(legend=dict(
# yanchor="top",
# y=0.99,
# xanchor="right",
# x=1.15,
# orientation="v"
# ))
# fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1]))
# return fig
def plotly_time_product_group_capacity_bars(self):
"""Heatmap of capacity over time.
Good to detect and time-variation in capacity.
Input tables: ['RecipeProperties', 'Line', 'Product']
Output tables: []
"""
df = (self.dm.recipe_properties[['capacity']]
.join(self.dm.lines)
.join(self.dm.products[['productGroup']]))
# display(df.head())
df = df[['capacity','productGroup']].groupby(['lineName','timePeriodSeq','productGroup']).max()
# display(df.head())
color_discrete_map = self.gen_color_col()
labels = {'lineName': 'Line', 'productGroup': 'Product Group', 'productName': 'Product Name', 'timePeriodSeq':'Time Period', 'capacity':'Capacity'}
# labels = dict(x="Time Period", y="Product Group", color="Capacity")
fig = px.bar(df.reset_index(), x='timePeriodSeq', y='capacity', color='productGroup',labels=labels,
facet_col='productGroup',
category_orders={
"productGroup": ["API", "Granulate", "Tablet", "Package"]
},
color_discrete_map= color_discrete_map
)
fig.update_layout(
hovermode="closest",
title={
'text': "Maximum Line Capacity by Product Group and Time Period",
'y': 0.95,
'x': 0.5,
'xanchor': 'center',
'yanchor': 'top'}
)
fig.update_layout(
legend=dict(
yanchor="top",
y=0.99,
xanchor="right",
x=1.15,
orientation="v"
),
margin = {'l': 60,'t':80,'b':60},
)
fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1]))
for axis in fig.layout:
if type(fig.layout[axis]) == go.layout.XAxis:
fig.layout[axis].title.text = ''
return fig
def plotly_demand_inventory_bar_subplot_by_time(self):
"""Demand, Fulfilled, Unfulfilled, Backlog, BacklogResupply and Inventory over time, grouped by time-period.
Colored by groupID.
Very useful graph since it contains all critical variables at the demand locations. Good for explanation.
"""
product_aggregation_column = 'groupID'
df1 = (self.dm.demand_inventories
.join(self.dm.products[['productType', 'subgroupID', 'groupID']])
# .join(self.locations)
).groupby(['timePeriodSeq', product_aggregation_column]).sum()
if 'relative_week' in df1.columns: # TODO: remove if not relevant anymore
df1 = df1.drop(columns=['relative_week'])
df1 = (df1
# .drop(columns=['relative_week'])
.rename(
columns={'quantity': 'Demand', 'xFulfilledDemandSol': 'Fulfilled', 'xUnfulfilledDemandSol': 'Unfulfilled',
'xBacklogSol': 'Backlog', 'xBacklogResupplySol': 'Backlog Resupply', 'xDemandInvSol': 'Inventory'})
)
df1 = (df1.stack()
.rename_axis(index={None: 'var_name'})
.to_frame(name='quantity')
.reset_index()
)
# Inflows from plants:
df2 = (self.dm.plant_to_demand_transportation[['xTransportationSol']]
.join(self.dm.products[['productType', 'subgroupID', 'groupID']])
.groupby(['timePeriodSeq', product_aggregation_column]).sum()
.rename(columns={'xTransportationSol': 'Production'})
)
df2 = (df2.stack()
.rename_axis(index={None: 'var_name'})
.to_frame(name='quantity')
.reset_index()
)
df = pd.concat([df1, df2])
labels = {'timePeriodSeq': 'Time Period', 'quantity': 'Demand', 'productName': 'Product Name',
'var_name': 'Var'}
fig = go.Figure()
fig.update_layout(
template="simple_white",
xaxis=dict(title_text="Time"),
yaxis=dict(title_text="Quantity"),
barmode="stack",
)
colors = ["#6495ED", "#FFBF00", "#FF7F50", "#DE3163", "#9FE2BF"]
for p, c in zip(df.groupID.unique(), colors):
plot_df = df[df.groupID == p]
fig.add_trace(
go.Bar(x=[plot_df.timePeriodSeq, plot_df.var_name], y=plot_df.quantity, name=p, marker_color=c),
)
fig.update_xaxes(
rangeslider_visible=True,
rangeselector=dict(
buttons=list([
dict(count=1, label="1m", step="month", stepmode="backward"),
dict(count=6, label="6m", step="month", stepmode="backward"),
dict(count=1, label="YTD", step="year", stepmode="todate"),
dict(count=1, label="1y", step="year", stepmode="backward"),
dict(step="all")
])
)
)
fig.update_layout(hovermode="closest") # Is supposed to be the default, but in DE we get multiple. Setting 'closest' explicitly is a work-around
return fig
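# Illustrative sketch (synthetic data): passing x=[outer, inner] to go.Bar,
# as above, produces a two-level categorical x-axis (time period outside,
# variable name inside); the range slider then lets the user scroll it.
def _sketch_multicategory_bars():
    import pandas as pd
    import plotly.graph_objects as go
    df = pd.DataFrame({'timePeriodSeq': [1, 1, 2, 2],
                       'var_name': ['Demand', 'Fulfilled'] * 2,
                       'quantity': [10, 9, 12, 12]})
    fig = go.Figure(go.Bar(x=[df.timePeriodSeq, df.var_name], y=df.quantity))
    fig.update_layout(barmode='stack')
    fig.update_xaxes(rangeslider_visible=True)
    return fig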
#######################################################################################
# Inventory
#######################################################################################
def plotly_inventory_days_of_supply_line(self, mode:str='line', query=None):
"""Demand inventory, normalized by days-of-supply.
Args:
mode (str): line (default) or bar. Bar will result in a stacked bar.
Input tables: ['Demand', 'Product']
Output tables: ['DemandInventory']
"""
num_days = len(self.dm.demand.index.unique(level='timePeriodSeq')) * 30  # Assumes ~30-day buckets. TODO: derive the bucket length from the time-period data
df1 = (self.dm.demand[['quantity']]
.join(self.dm.products['productGroup'])
.groupby(['productGroup','productName','locationName']).sum()
)
df1['demand_per_day'] = df1.quantity / num_days
df1 = df1.drop(columns=['quantity'])
df = (self.dm.demand_inventories[['xInvSol']]
.join(df1)
.reset_index()
.set_index(['locationName','productGroup','productName'])
.sort_index()
)
if query is not None:
df = df.query(query).copy()
df['days_of_supply'] = df.xInvSol / df.demand_per_day
df = df.reset_index()
df['location_product'] = df.locationName + " - " + df.productName
color_discrete_map = self.gen_color_col(df['location_product'])
labels = {'timePeriodSeq': 'Time Period', 'quantity': 'Inventory', 'productName': 'Product Name',
'productGroup': 'Product Group', "days_of_supply": "Days of Supply", 'days_of_supply_smoothed': 'Days of Supply'}
df['days_of_supply'] = df['days_of_supply'].clip(upper = 100)
df = df.sort_values('timePeriodSeq')
df['days_of_supply_smoothed'] = df['days_of_supply'].rolling(window=5).mean()
if mode == 'bar':
fig = px.bar(df, x="timePeriodSeq", y="days_of_supply",
color='location_product',
color_discrete_map=color_discrete_map,
height=600,
title='Demand Inventory (days-of-supply)', labels=labels)
else:
fig = px.line(df, x="timePeriodSeq", y="days_of_supply",
color='location_product',
color_discrete_map=color_discrete_map,
height=600,
title='Demand Inventory (days-of-supply)', labels=labels)
fig.update_layout(
hovermode="closest",
legend={'orientation': 'v',
"title": 'Product Location',
'x': 1.05},
margin={'l':80,'t':60, 'r':0},
)
return fig
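# Illustrative sketch (synthetic numbers; the ~30-day bucket length is the
# same assumption made above): days-of-supply divides inventory by average
# daily demand, then clips so a few extreme values don't flatten the chart.
def _sketch_days_of_supply():
    import pandas as pd
    total_demand = pd.Series({'P1': 300.0, 'P2': 600.0})  # demand over the horizon
    num_days = 2 * 30                                     # two ~30-day periods
    demand_per_day = total_demand / num_days
    inventory = pd.Series({'P1': 150.0, 'P2': 100.0})
    return (inventory / demand_per_day).clip(upper=100)   # P1 -> 30.0, P2 -> 10.0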
def plotly_inventory_days_of_supply_slack_line(self, mode:str='line', query=None):
"""Demand inventory, days-of-supply slack.
Args:
mode (str): line (default) or bar. Bar will result in a stacked bar.
Input tables: ['Demand', 'Product']
Output tables: ['DemandInventory']
"""
num_days = len(self.dm.demand.index.unique(level='timePeriodSeq')) * 30  # Assumes ~30-day buckets. TODO: derive the bucket length from the time-period data
df1 = (self.dm.demand[['quantity']]
.join(self.dm.products['productGroup'])
.groupby(['productGroup','productName','locationName']).sum()
)
df1['demand_per_day'] = df1.quantity / num_days
df1 = df1.drop(columns=['quantity'])
df = (self.dm.demand_inventories[['xDOSSlackSol']]
.join(df1)
.reset_index()
.set_index(['locationName','productGroup','productName'])
.sort_index()
)
if query is not None:
df = df.query(query).copy()
df['dosSlack'] = df.xDOSSlackSol / df.demand_per_day
df = df.reset_index()
df['location_product'] = df.locationName + " - " + df.productName
color_discrete_map = self.gen_color_col(df['location_product'])
labels = {'timePeriodSeq': 'Time Period', 'quantity': 'Inventory', 'productName': 'Product Name',
'productGroup': 'Product Group', "days_of_supply": "Days of Supply"}
df['dosSlack'] = df['dosSlack'].clip(upper = 100)
if mode == 'bar':
fig = px.bar(df, x="timePeriodSeq", y="dosSlack",
color='location_product',
color_discrete_map=color_discrete_map,
height=600,
title='Demand Inventory Slack (days-of-supply)', labels=labels)
else:
fig = px.line(df, x="timePeriodSeq", y="dosSlack",
color='location_product',
color_discrete_map=color_discrete_map,
height=600,
title='Demand Inventory Slack (days-of-supply)', labels=labels)
fig.update_layout(
hovermode="closest",
legend={'orientation': 'v',
"title": 'Product Location',
'x': 1.05},
margin={'l': 80, 't': 60, 'r': 0},
)
return fig
def plotly_wh_inventory_days_of_supply_line(self, mode:str='line', query=None):
"""Warehouse inventory, normalized by days-of-supply."""
num_days = len(self.dm.demand.index.unique(level='timePeriodSeq')) * 30  # Assumes ~30-day buckets. TODO: derive the bucket length from the time-period data
df1 = (self.dm.demand[['quantity']]
.join(self.dm.products[['productGroup', 'productCountry']])
.groupby(['productGroup','productName', 'productCountry']).sum()
)
df1['demand_per_day'] = df1.quantity / num_days
df1 = df1.drop(columns=['quantity'])
df = (self.dm.warehouse_inventories[['xInvSol']]
.join(df1)
.reset_index()
.set_index(['locationName','productGroup','productName', 'productCountry'])
.sort_index()
)
if query is not None:
df = df.query(query).copy()
df['days_of_supply'] = (df.xInvSol / df.demand_per_day)
df = df.reset_index()
df.productCountry = df.productCountry.fillna("")
df['location_product'] = df.productCountry + " - " + df.productName
df.days_of_supply = df.days_of_supply.clip(upper = 100)
color_discrete_map = self.gen_color_col(df['location_product'])
labels = {'timePeriodSeq': 'Time Period', 'days_of_supply':'Days of Supply', 'quantity': 'Inventory', 'productName': 'Product Name',
'productGroup': 'Product Group', 'location_product': 'Product Location', 'xInvSol': 'Inventory'}
if mode == 'bar':
fig = px.bar(df, x="timePeriodSeq", y="days_of_supply",
color='location_product',
color_discrete_map=color_discrete_map,
height=600,
title='Warehouse Inventory (days-of-supply)', labels=labels)
elif mode == 'area':
fig = px.area(df, x="timePeriodSeq", y="days_of_supply",  # was y="xInvSol", color='productName': inconsistent with this chart's scale and color map
color='location_product',
color_discrete_map=color_discrete_map,
height=600,
title='Warehouse Inventory (days-of-supply)', labels=labels)
else:
fig = px.line(df, x="timePeriodSeq", y="days_of_supply",
color='location_product',
color_discrete_map=color_discrete_map,
height=600,
title='Warehouse Inventory (days-of-supply)',
labels=labels)
fig.update_layout(
hovermode="closest",
legend={'orientation': 'v',
"x": 1.05},
margin={'l': 80, 't': 60, 'r': 0},
)
return fig
def plotly_package_demand_bars(self, query=None):
"""Product demand over time. Colored by productGroup.
Input tables: ['Demand', 'Product']
Output tables: []
"""
df = (self.dm.demand
.join(self.dm.products[['productGroup']])
.query("productGroup == 'Package'")
)
if query is not None:
df = df.query(query)
aggregation_column = 'locationName'
df = df.groupby(['timePeriodSeq', aggregation_column]).sum()
labels = {'timePeriodSeq': 'Time Period', 'quantity': 'Demand', 'productName': 'Product Name',
'productGroup': 'Product Group'}
fig = px.bar(df.reset_index(), x="timePeriodSeq", y="quantity", color=aggregation_column,
title='Total Package Demand', labels=labels)
fig.update_layout(
hovermode="closest",
legend={'orientation': 'v'},
)
return fig
def plotly_package_demand_lines(self, query=None):
"""Product demand over time. Colored by productGroup.
Input tables: ['Demand', 'Product']
Output tables: []
"""
df = (self.dm.demand
.join(self.dm.products[['productGroup']])
.query("productGroup == 'Package'")
)
if query is not None:
df = df.query(query)
aggregation_column = 'locationName'
df = df.groupby(['timePeriodSeq', aggregation_column]).sum()
labels = {'timePeriodSeq': 'Time Period', 'quantity': 'Demand', 'productName': 'Product Name',
'productGroup': 'Product Group'}
fig = px.line(df.reset_index(), x="timePeriodSeq", y="quantity", color=aggregation_column,
title='Total Package Demand', labels=labels)
fig.update_layout(
hovermode="closest",
legend={'orientation': 'v'},
)
return fig
def plotly_demand_fullfilment_scroll(self):
"""Demand, Fulfilled, Unfulfilled, Backlog, BacklogResupply and Inventory over time, grouped by time-period.
Colored by productGroup.
Very useful graph since it contains all critical variables at the demand locations. Good for explanation.
"""
# Collect transportation activities into a destination location.
# (later we'll do a left join to only select transportation into a demand location and ignore all other transportation activities)
df0 = self.dm.transportation_activities
df0['destinationTimePeriodSeq'] = df0.index.get_level_values('timePeriodSeq') + df0.transitTime
df0 = (df0[['xTransportationSol', 'destinationTimePeriodSeq']]
.groupby(['productName', 'destinationLocationName', 'destinationTimePeriodSeq']).sum()
.rename_axis(index={'destinationLocationName': 'locationName', 'destinationTimePeriodSeq':'timePeriodSeq'})
.rename(columns={'xTransportationSol':'Transportation'})
)
# display(df0.head())
product_aggregation_column = 'productGroup'
df = (self.dm.demand_inventories[['quantity','xFulfilledDemandSol','xUnfulfilledDemandSol','xBacklogSol','xBacklogResupplySol','xInvSol']]
.join(self.dm.products[['productGroup']])
# .join(self.dm.locations)
.join(df0, how='left')
).groupby(['timePeriodSeq', product_aggregation_column]).sum()
if 'relative_week' in df.columns: # TODO: remove if not relevant anymore
df = df.drop(columns=['relative_week'])
# display(df.head())
df = (df
# .drop(columns=['relative_week'])
.rename(
columns={'quantity': 'Demand', 'xFulfilledDemandSol': 'Fulfilled', 'xUnfulfilledDemandSol': 'Unfulfilled',
'xBacklogSol': 'Backlog', 'xBacklogResupplySol': 'Backlog Resupply', 'xInvSol': 'Inventory'})
)
# display(df.head())
df = (df.stack()
.rename_axis(index={None: 'var_name'})
.to_frame(name='quantity')
.reset_index()
)
# display(df.head())
labels = {'timePeriodSeq': 'Time Period', 'quantity': 'Demand', 'productName': 'Product Name', 'productGroup':'Product Group',
'var_name': 'Var'}
fig = go.Figure()
fig.update_layout(
template="simple_white",
xaxis=dict(title_text="Time"),
yaxis=dict(title_text="Quantity"),
barmode="stack",
height=700
)
colors = self.gen_color_col()
# One stacked-bar trace per product group, colored via the shared color map:
for p in df.productGroup.unique():
plot_df = df[df.productGroup == p]
fig.add_trace(go.Bar(x=[plot_df.timePeriodSeq, plot_df.var_name], y=plot_df.quantity, name=p, marker_color = colors[p]))
fig.update_xaxes(
rangeslider_visible=True,
rangeselector=dict(
buttons=list([
dict(count=1, label="1m", step="month", stepmode="backward"),
dict(count=6, label="6m", step="month", stepmode="backward"),
dict(count=1, label="YTD", step="year", stepmode="todate"),
dict(count=1, label="1y", step="year", stepmode="backward"),
dict(step="all")
])
)
)
fig.update_layout(hovermode="closest") # Is supposed to be the default, but in DE we get multiple. Setting 'closest' explicitly is a work-around
fig.update_layout(
xaxis = dict(
tickfont = dict(size=9)))
fig.update_layout(
margin={'l': 10, 't': 10, 'r': 0, 'b':10})
return fig
def plotly_demand_fullfilment_scroll_product(self):
"""Demand, Fulfilled, Unfulfilled, Backlog, BacklogResupply and Inventory over time, grouped by time-period.
Colored by product country and name (location_product).
Very useful graph since it contains all critical variables at the demand locations. Good for explanation.
"""
# Collect transportation activities into a destination location.
# (later we'll do a left join to only select transportation into a demand location and ignore all other transportation activities)
# df0 = (self.dm.transportation_activities[['xTransportationSol']]
# .groupby(['productName', 'destinationLocationName', 'timePeriodSeq']).sum()
# .rename_axis(index={'destinationLocationName': 'locationName'})
# .rename(columns={'xTransportationSol':'Transportation'})
# )
df0 = self.dm.transportation_activities
df0['destinationTimePeriodSeq'] = df0.index.get_level_values('timePeriodSeq') + df0.transitTime
df0 = (df0[['xTransportationSol', 'destinationTimePeriodSeq']]
.groupby(['productName', 'destinationLocationName', 'destinationTimePeriodSeq']).sum()
.rename_axis(index={'destinationLocationName': 'locationName', 'destinationTimePeriodSeq':'timePeriodSeq'})
.rename(columns={'xTransportationSol':'Transportation'})
)
# display(df0.head())
# product_aggregation_column = 'productGroup'
product_aggregation_column = 'productName'
df = (self.dm.demand_inventories[['quantity','xFulfilledDemandSol','xUnfulfilledDemandSol','xBacklogSol','xBacklogResupplySol','xInvSol']]
.join(self.dm.products[['productGroup', 'productCountry']])
# .join(self.dm.locations)
.join(df0, how='left')
).groupby(['timePeriodSeq', product_aggregation_column, 'productCountry']).sum()
# print(df.head())
if 'relative_week' in df.columns: # TODO: remove if not relevant anymore
df = df.drop(columns=['relative_week'])
# display(df.head())
df = (df
# .drop(columns=['relative_week'])
.rename(
columns={'quantity': 'Demand', 'xFulfilledDemandSol': 'Fulfilled', 'xUnfulfilledDemandSol': 'Unfulfilled',
'xBacklogSol': 'Backlog', 'xBacklogResupplySol': 'Backlog Resupply', 'xInvSol': 'Inventory'})
)
df = (df.stack()
.rename_axis(index={None: 'var_name'})
.to_frame(name='quantity')
.reset_index()
)
labels = {'timePeriodSeq': 'Time Period', 'quantity': 'Demand', 'productName': 'Product Name', 'productGroup':'Product Group',
'var_name': 'Var'}
fig = go.Figure()
fig.update_layout(
template="simple_white",
xaxis=dict(title_text="Time"),
yaxis=dict(title_text="Quantity"),
barmode="stack",
height=900,
# width = 2000
)
df = df.reset_index()
df['location_product'] = df['productCountry'] + ' - ' + df['productName']
df['location_product'] = df['location_product'].fillna('API')
colors = self.gen_color_col(df['location_product'])
# One stacked-bar trace per location-product, colored via the shared color map:
for p in df['location_product'].unique():
# print(f"p = {p}")
plot_df = df[df['location_product'] == p]
try:
fig.add_trace(go.Bar(x=[plot_df.timePeriodSeq, plot_df.var_name], y=plot_df.quantity, name=p,
marker_color = colors[p]
))
except KeyError:
pass # skip categories that did not get a color assigned
fig.update_xaxes(
rangeslider_visible=True,
rangeselector=dict(
buttons=list([
dict(count=1, label="1m", step="month", stepmode="backward"),
dict(count=6, label="6m", step="month", stepmode="backward"),
dict(count=1, label="YTD", step="year", stepmode="todate"),
dict(count=1, label="1y", step="year", stepmode="backward"),
dict(step="all")
])
)
)
fig.update_layout(hovermode="closest") # Is supposed to be the default, but in DE we get multiple. Setting 'closest' explicitly is a work-around
fig.update_layout(
xaxis = dict(
tickfont = dict(size=9)),
legend = {'orientation': 'v', 'x': 1})
fig.update_layout(
margin={'l': 10, 't': 10, 'r': 0, 'b': 30})
return fig
def plotly_production_activities_bars(self, query=None, title='Production'):
"""Production activity over time, colored by productGroup.
Input tables: ['Product', 'Location']
Output tables: ['ProductionActivity']
"""
product_aggregation_column = 'productName'
df = (self.dm.production_activities
.join(self.dm.products[['productGroup', 'productCountry']]))
df = df.reset_index()
df.productCountry = df.productCountry.fillna('')
df['location_product'] = df.productCountry + " - " + df.productName
color_discrete_map = self.gen_color_col(df['location_product'])
if query is not None:
df = df.query(query)
df = (df
.reset_index()
.merge(self.dm.locations.reset_index(), on='locationName')
).groupby(['timePeriodSeq', product_aggregation_column, 'lineName', 'location_product']).sum()
active_line_name_category_orders = [l for l in self.line_name_category_orders if l in df.index.unique(level='lineName')] # Avoids empty spaces in Plotly chart
labels = {'timePeriodSeq': 'Time Period', 'xProdSol': 'Production', 'productName': 'Product Name', 'location_product': 'Product Location'}
category_orders = {
# 'lineName' : ['Abbott_Weesp_Line','Abbott_Olst_Granulate_Line', 'Abbott_Olst_Packaging_Line_5','Abbott_Olst_Packaging_Line_6'],
# 'lineName' : self.line_name_category_orders,
'lineName' : active_line_name_category_orders,
# 'timePeriodSeq': df.reset_index().timePeriodSeq.sort_values().unique(),
'timePeriodSeq': df.index.unique(level='timePeriodSeq').sort_values()
}
fig = px.bar(df.reset_index(), x="timePeriodSeq", y="xProdSol", color='location_product',
color_discrete_map= color_discrete_map,
title=title, labels=labels,
facet_row = 'lineName',
category_orders=category_orders,
height=800,
)
fig.update_layout(legend =
{'orientation': 'v',
'x': 1.05,
}
)
fig.update_layout(margin = {'l': 85, 't':80})
fig.for_each_annotation(lambda a: a.update(x = a.x-1.04, textangle = 270))
fig.for_each_annotation(lambda a: a.update(text=a.text.split("lineName=")[-1]))
fig.for_each_annotation(lambda a: a.update(text=a.text.replace("_", " ")))
fig.for_each_annotation(lambda a: a.update(text=a.text.replace("Olst", "Olst<br>")))
# get rid of duplicated X-axis labels
for axis in fig.layout:
if type(fig.layout[axis]) == go.layout.YAxis:
fig.layout[axis].title.text = ''
fig.update_xaxes(type='category')
fig.update_layout(hovermode="closest",legend = {'orientation': 'v'}) # Is supposed to be the default, but in DE we get multiple. Setting 'closest' explicitly is a work-around
return fig
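# Illustrative sketch: filtering a preferred ordering down to the values
# actually present, as done above with active_line_name_category_orders,
# keeps Plotly from rendering empty facets for absent categories.
def _sketch_active_category_orders(preferred, present):
    present = set(present)
    return [v for v in preferred if v in present]
# e.g. _sketch_active_category_orders(['API_Line', 'Granulate_Line'], {'Granulate_Line'})
# -> ['Granulate_Line']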
def plotly_planned_production_activities_bars(self, query=None, title='Production'):
"""Production activity over time, colored by productGroup.
Input tables: ['Product', 'Location']
Output tables: ['ProductionActivity']
"""
product_aggregation_column = 'productName'
df = (self.dm.planned_production_activity
.join(self.dm.products[['productGroup', 'productCountry']])
# .sort_index()
)
df = df.reset_index()
df['productCountry'] = np.where(pd.isnull(df.productCountry), '', df.productCountry)
df['location_product'] = df.productCountry + " - " + df.productName
color_discrete_map = self.gen_color_col(df['location_product'])
if query is not None:
df = df.query(query)
df = (df
).groupby(['timePeriodSeq', product_aggregation_column, 'lineName', 'productCountry', 'location_product']).sum()
active_line_name_category_orders = [l for l in self.line_name_category_orders if l in df.index.unique(level='lineName')] # Avoids empty spaces in Plotly chart
labels = {'timePeriodSeq': 'Time Period', 'xProdSol': 'Production', 'productName': 'Product Name',
'location_product': 'Product Location'}
# df = (df.reset_index())
category_orders = {
# 'lineName' : ['Abbott_Weesp_Line','Abbott_Olst_Granulate_Line', 'Abbott_Olst_Packaging_Line_5','Abbott_Olst_Packaging_Line_6'],
# 'lineName' : self.line_name_category_orders,
'lineName' : active_line_name_category_orders,
# 'timePeriodSeq': df.reset_index().timePeriodSeq.sort_values().unique()
# 'timePeriodSeq': df.timePeriodSeq.sort_values().unique()
'timePeriodSeq': df.index.unique(level='timePeriodSeq').sort_values()
}
fig = px.bar(df.reset_index(), x="timePeriodSeq", y="quantity", color='location_product',
color_discrete_map=color_discrete_map,
title=title, labels=labels,
facet_row = 'lineName',
category_orders = category_orders,
height=800,
)
fig.update_layout(legend =
{'orientation': 'v',
'x': 1.05,
}
)
fig.update_layout(margin = {'l': 90,'t':60})
# fig.for_each_annotation(lambda a: a.update(x = a.x -1., y = a.y-0.15, textangle = 0,
# font = {'size':16}
# ))
fig.for_each_annotation(lambda a: a.update(x = a.x-1.055, textangle = 270))
fig.for_each_annotation(lambda a: a.update(text=a.text.split("lineName=")[-1]))
fig.for_each_annotation(lambda a: a.update(text=a.text.replace("_", " ")))
fig.for_each_annotation(lambda a: a.update(text=a.text.replace("Olst", "Olst<br>")))
# get rid of duplicated X-axis labels
for axis in fig.layout:
if type(fig.layout[axis]) == go.layout.YAxis:
fig.layout[axis].title.text = ''
fig.update_xaxes(type='category')
fig.update_layout(hovermode="closest",legend = {'orientation': 'v'})
return fig
def plotly_production_slack_bars(self, query=None, title='Production Slack'):
"""Production activity slack over time, colored by productName.
Input tables: ['Product']
Output tables: ['ProductionActivity']
"""
product_aggregation_column = 'productName'
df = (self.dm.production_activities
.join(self.dm.products[['productGroup', 'productCountry']]))
df = df.reset_index()
df['productCountry'] = np.where(pd.isnull(df.productCountry), '', df.productCountry)
df['location_product'] = df.productCountry + " - " + df.productName
color_discrete_map = self.gen_color_col(df['location_product'])
if query is not None:
df = df.query(query)
df = (df
.reset_index()
# .merge(self.dm.locations.reset_index(), on='locationName')
).groupby(['timePeriodSeq', product_aggregation_column, 'lineName', 'productCountry', 'location_product']).sum()
labels = {'timePeriodSeq': 'Time Period', 'xProdSol': 'Production', 'productName': 'Product Name',
'location_product': 'Product Location'}
active_line_name_category_orders = [l for l in self.line_name_category_orders if l in df.index.unique(level='lineName')] # Avoids empty spaces in Plotly chart
category_orders = {
# 'lineName' : ['Abbott_Weesp_Line','Abbott_Olst_Granulate_Line',
# 'Abbott_Olst_Packaging_Line_5','Abbott_Olst_Packaging_Line_6'],
# 'lineName' : self.line_name_category_orders,
'lineName' : active_line_name_category_orders,
# 'timePeriodSeq': df.reset_index().timePeriodSeq.sort_values().unique()
'timePeriodSeq': df.index.unique(level='timePeriodSeq').sort_values(),
}
fig = px.bar(df.reset_index(), x="timePeriodSeq", y="xProdSlackSol", color='location_product',
color_discrete_map=color_discrete_map,
title=title, labels=labels,
facet_row = 'lineName',
category_orders=category_orders,
height=800,
)
fig.update_layout(legend =
{'orientation': 'v',
'x': 1.05,
}
)
fig.update_layout(margin = {'l': 85, 't':60})
fig.for_each_annotation(lambda a: a.update(x = a.x-1.05, textangle = 270))
fig.for_each_annotation(lambda a: a.update(text=a.text.split("lineName=")[-1]))
fig.for_each_annotation(lambda a: a.update(text=a.text.replace("_", " ")))
fig.for_each_annotation(lambda a: a.update(text=a.text.replace("Olst", "Olst<br>")))
# get rid of duplicated X-axis labels
for axis in fig.layout:
if type(fig.layout[axis]) == go.layout.YAxis:
fig.layout[axis].title.text = ''
fig.update_xaxes(type='category')
fig.update_layout(hovermode="closest",legend = {'orientation': 'v'})
return fig
def plotly_production_excess_bars(self, query=None, title='Production Plan Difference', mode = None):
"""Production activity excess (compared to plan) over time, colored by productName.
Default mode returns excess as a substraction, percentage returns as percentage
Input tables: ['Product', 'PlannedproductionActivity']
Output tables: ['ProductionActivity']
"""
product_aggregation_column = 'productName'
planned_production = (self.dm.planned_production_activity
.reset_index()
# .astype({'planId': int})
# .query("planId == 1") # HACK!!!! Need to filter on planId
# .reset_index()
.set_index(['productName','lineName','timePeriodSeq','recipeId'], verify_integrity = True)
)
df = (self.dm.production_activities
.join(self.dm.products[['productGroup', 'productCountry']])
.join(planned_production, how = 'left')
.rename(columns={'quantity':'plannedProductionQuantity'})
)
df.plannedProductionQuantity = df.plannedProductionQuantity.fillna(0)
if mode == 'percentage':
df['planExcessQuantity'] = ((df.xProdSol - df.plannedProductionQuantity) / df.plannedProductionQuantity)  # NB: yields inf/NaN where the planned quantity is 0
else:
df['planExcessQuantity'] = df.xProdSol - df.plannedProductionQuantity
df = df.reset_index()
df['productCountry'] = np.where(pd.isnull(df.productCountry), '', df.productCountry)
df['location_product'] = df.productCountry + " - " + df.productName
color_discrete_map = self.gen_color_col(df['location_product'])
if query is not None:
df = df.query(query)
df = (df
.reset_index()
).groupby(['timePeriodSeq', product_aggregation_column, 'lineName', 'productCountry', 'location_product']).sum()
labels = {'timePeriodSeq': 'Time Period', 'xProdSol': 'Production', 'productName': 'Product Name',
'location_product': 'Product Location', 'planExcessQuantity':'Plan Difference'}
active_line_name_category_orders = [l for l in self.line_name_category_orders if l in df.index.unique(level='lineName')] # Avoids empty spaces in Plotly chart
# df = df.reset_index()
category_orders = {
# 'lineName' : ['Abbott_Weesp_Line','Abbott_Olst_Granulate_Line',
# 'Abbott_Olst_Packaging_Line_5','Abbott_Olst_Packaging_Line_6'],
# 'lineName' : self.line_name_category_orders,
'lineName' : active_line_name_category_orders,
# 'timePeriodSeq': [df.timePeriodSeq.sort_values().unique()]
'timePeriodSeq': df.index.unique(level='timePeriodSeq').sort_values()
}
fig = px.bar(df.reset_index(), x="timePeriodSeq", y="planExcessQuantity", color='location_product',
color_discrete_map=color_discrete_map,
title=title, labels=labels,
facet_row = 'lineName',
category_orders=category_orders,
height=800,
)
fig.update_layout(legend =
{'orientation': 'v',
'x': 1.05,
}
)
if mode is not None:
for axis in fig.layout:
if type(fig.layout[axis]) == go.layout.YAxis:
fig.layout[axis].title.text = ''
fig.layout[axis].tickformat = '%'
fig.update_layout(margin = {'l': 85})
fig.for_each_annotation(lambda a: a.update(x = a.x-1.05, textangle = 270))
fig.for_each_annotation(lambda a: a.update(text=a.text.split("lineName=")[-1]))
fig.for_each_annotation(lambda a: a.update(text=a.text.replace("_", " ")))
fig.for_each_annotation(lambda a: a.update(text=a.text.replace("Olst", "Olst<br>")))
fig.update_layout(hovermode="closest") # Is supposed to be the default, but in DE we get multiple. Setting 'closest' explicitly is a work-around
# get rid of duplicated X-axis labels
for axis in fig.layout:
if type(fig.layout[axis]) == go.layout.YAxis:
fig.layout[axis].title.text = ''
fig.update_xaxes(type='category')
fig.update_layout(hovermode="closest",legend = {'orientation': 'v'},
margin= {'l':85,'t':60})
return fig
def plotly_inventory_flow_sankey_test(self, include_wip=True):
"""Sankey diagram of transportation activities.
See https://stackoverflow.com/questions/50486767/plotly-how-to-draw-a-sankey-diagram-from-a-dataframe
"""
aggregation_column = 'productName'
# Collect inventories (location-product):
# for these groupby productGroup instead of productName
df1 = self.dm.plant_inventories[[]].groupby(['locationName',aggregation_column]).sum().copy()
df1['type'] = 'plant'
df2 = self.dm.warehouse_inventories[[]].groupby(['locationName',aggregation_column]).sum().copy()
df2['type'] = 'warehouse'
df3 = self.dm.demand_inventories[[]].groupby(['locationName',aggregation_column]).sum().copy()
df3['type'] = 'demand'
df4 = pd.DataFrame([{'locationName':'External',aggregation_column:'None', 'type':'external'}]).set_index(['locationName',aggregation_column])
df5 = self.dm.WIP[[]].groupby(['locationName',aggregation_column]).sum().copy()
df5 = df5.reset_index()
df5['locationName'] = df5.locationName + "_wip"
df5 = df5.set_index(['locationName',aggregation_column])
df5['type'] = 'wip'
df6 = self.dm.plant_inventories[[]].groupby(['locationName']).sum().copy()
df6[aggregation_column] = 'None'
df6 = df6.reset_index().set_index(['locationName',aggregation_column])
df6['type'] = 'source'
product_locations = pd.concat([df5, df4, df1, df2, df3, df6]) # should be same dataframes with same keys
product_locations = product_locations.reset_index()
product_locations = product_locations.merge(self.dm.products[['productGroup']], on = 'productName')
# Create locationName vs id
inventory_labels_df = (product_locations.reset_index()
.reset_index().rename(columns={'index': 'id'})
)
inventory_labels_df['label'] = inventory_labels_df.locationName + " - " +inventory_labels_df['productGroup']
#Collect inventory flows - transportation
df1 = (self.dm.transportation_activities[['xTransportationSol']].join(self.dm.products[['productGroup']])
.query("xTransportationSol > 0")
.groupby(['originLocationName', 'destinationLocationName','shippingMode','productGroup']).sum()
.rename(columns={'xTransportationSol':'quantity'})
)
df1 = df1.reset_index()
df1 = (df1.merge(inventory_labels_df[['locationName','productGroup','id']], left_on=['originLocationName','productGroup'], right_on=['locationName','productGroup'])
.rename(columns={'id': 'source'})
.drop(columns=['locationName'])
)
df1 = (df1.merge(inventory_labels_df[['locationName','productGroup','id']], left_on=['destinationLocationName','productGroup'], right_on=['locationName','productGroup'])
.rename(columns={'id': 'target'})
.drop(columns=['locationName'])
)
df1['label'] = df1.shippingMode + " - " + df1['productGroup'] + " from " + df1.originLocationName + " to " + df1.destinationLocationName
df1 = df1.drop(columns=['originLocationName','destinationLocationName','shippingMode'])
df1['color'] = 'rosybrown'
aggregation_column = 'productGroup'
#Collect inventory flows - Production
df2 = (self.dm.production_activities[['xProdSol']].join(self.dm.products[['productGroup']])
.join(self.dm.bom_items[['quantity']].rename(columns={'quantity':'component_bom_quantity'}), how='left')
.join(self.dm.lines[['plantName']])
.join(self.dm.plants[['locationName']], on='plantName')
.query("xProdSol > 0")
.reset_index()
)
df2.componentName.fillna('None',inplace=True) # For any product without components
df2['component_quantity'] = df2.xProdSol * df2.component_bom_quantity
df2 = (df2
.drop(columns=['component_bom_quantity','recipeId','timePeriodSeq'])
.groupby(['componentName', aggregation_column,'lineName','plantName','locationName']).sum()
.rename(columns={'xProdSol':'quantity'})
)
df2 = df2.reset_index()
df2 = (df2.merge(inventory_labels_df[['locationName',aggregation_column,'id','type']], left_on=['locationName',aggregation_column], right_on=['locationName',aggregation_column])
.rename(columns={'id': 'target'})
)
df2 = (df2.merge(inventory_labels_df[['locationName',aggregation_column,'id','type']], left_on=['locationName','componentName'], right_on=['locationName',aggregation_column], suffixes=[None,'_y'])
.rename(columns={'id': 'source'})
.drop(columns=[aggregation_column+'_y'])
)
df2['label'] = df2.type + " - " + df2.componentName + " to " + df2[aggregation_column]
df2 = df2[[aggregation_column, 'quantity', 'source', 'target', 'label']]
df2['color'] = 'olive'
# Collect inventory flows - WIP
df3 = (self.dm.WIP[['wipQuantity']].join(self.dm.products[['productGroup']])
.query("wipQuantity > 0")
.rename(columns={'wipQuantity':'quantity'})
)
df3 = df3.reset_index()
df3['locationNameWip'] = df3.locationName + '_wip'
# display(df3.head())
df3 = (df3.merge(inventory_labels_df[['locationName',aggregation_column,'id']], left_on=['locationName',aggregation_column], right_on=['locationName',aggregation_column])
.rename(columns={'id': 'target'})
# .drop(columns=['locationName'])
)
df3 = (df3.merge(inventory_labels_df[['locationName',aggregation_column,'id']], left_on=['locationNameWip',aggregation_column], right_on=['locationName',aggregation_column], suffixes=[None,'_y'])
.rename(columns={'id': 'source'})
.drop(columns=['locationName_y'])
)
# display(df3.head())
df3['label'] = "wip - " + df3[aggregation_column] + " to " + df3.locationName
# df1 = df1.drop(columns=['locationNameWip','locationName','shippingMode'])
# display(df3.head())
df3['color'] = 'lightsalmon'
if include_wip:
df = pd.concat([df1, df2, df3])
else:
df = pd.concat([df1, df2])
# df = df.merge(self.dm.products[['productGroup']], on = 'productName')
# Set pop-up text
# df['color'] = 'aquamarine'
fig = go.Figure(data=[go.Sankey(
# valueformat = ".0f",
# valuesuffix = "TWh",
# Define nodes
node=dict(
pad=15,
thickness=15,
line=dict(color="black", width=0.5),
label=inventory_labels_df.label.array,
),
# Add links
link=dict(
source=df.source.array,
target=df.target.array,
value=df.quantity.array,
label=df.label.array,
color = df.color.array,
))])
fig.update_layout(title_text="",
font_size=10,
height=1000)
return fig
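# Illustrative sketch (synthetic flows): the Sankey above assigns each
# location-product node an integer id, then passes parallel source/target/
# value arrays; this is the minimal version of that dataframe-to-Sankey
# pattern.
def _sketch_sankey_from_dataframe():
    import pandas as pd
    import plotly.graph_objects as go
    flows = pd.DataFrame({'origin': ['Plant', 'Warehouse'],
                          'dest': ['Warehouse', 'Demand'],
                          'quantity': [100, 90]})
    names = pd.unique(flows[['origin', 'dest']].values.ravel())
    ids = {name: i for i, name in enumerate(names)}
    return go.Figure(go.Sankey(
        node=dict(label=list(names), pad=15, thickness=15),
        link=dict(source=flows.origin.map(ids),
                  target=flows.dest.map(ids),
                  value=flows.quantity)))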
def plotly_inventory_flow_sankey(self, include_wip=True):
"""Sankey diagram of transportation activities.
See https://stackoverflow.com/questions/50486767/plotly-how-to-draw-a-sankey-diagram-from-a-dataframe
"""
# Collect inventories (location-product):
df1 = self.dm.plant_inventories[[]].groupby(['locationName','productName']).sum().copy()
df1['type'] = 'plant'
df2 = self.dm.warehouse_inventories[[]].groupby(['locationName','productName']).sum().copy()
df2['type'] = 'warehouse'
df3 = self.dm.demand_inventories[[]].groupby(['locationName','productName']).sum().copy()
df3['type'] = 'demand'
df4 = pd.DataFrame([{'locationName':'External','productName':'None', 'type':'external'}]).set_index(['locationName','productName'])
df5 = self.dm.WIP[[]].groupby(['locationName','productName']).sum().copy()
df5 = df5.reset_index()
df5['locationName'] = df5.locationName + "_wip"
df5 = df5.set_index(['locationName','productName'])
df5['type'] = 'wip'
df6 = self.dm.plant_inventories[[]].groupby(['locationName']).sum().copy()
df6['productName'] = 'None'
df6 = df6.reset_index().set_index(['locationName','productName'])
df6['type'] = 'source'
product_locations = pd.concat([df5, df4, df1, df2, df3, df6])
# display(product_locations.head())
# Create locationName vs id
inventory_labels_df = (product_locations.reset_index()
.reset_index().rename(columns={'index': 'id'})
)
inventory_labels_df['label'] = inventory_labels_df.locationName + " - " +inventory_labels_df.productName
# display(inventory_labels_df.head())
#Collect inventory flows - transportation
df1 = (self.dm.transportation_activities[['xTransportationSol']]
.query("xTransportationSol > 0")
.groupby(['originLocationName', 'destinationLocationName','shippingMode','productName']).sum()
.rename(columns={'xTransportationSol':'quantity'})
)
df1 = df1.reset_index()
# display(df1.head())
df1 = (df1.merge(inventory_labels_df[['locationName','productName','id']], left_on=['originLocationName','productName'], right_on=['locationName','productName'])
.rename(columns={'id': 'source'})
.drop(columns=['locationName'])
)
# display(df1.head())
df1 = (df1.merge(inventory_labels_df[['locationName','productName','id']], left_on=['destinationLocationName','productName'], right_on=['locationName','productName'])
.rename(columns={'id': 'target'})
.drop(columns=['locationName'])
)
df1['label'] = df1.shippingMode + " - " + df1.productName + " from " + df1.originLocationName + " to " + df1.destinationLocationName
df1 = df1.drop(columns=['originLocationName','destinationLocationName','shippingMode'])
df1['color'] = 'rosybrown'
# display(df1.head())
#Collect inventory flows - Production
df2 = (self.dm.production_activities[['xProdSol']]
.join(self.dm.bom_items[['quantity']].rename(columns={'quantity':'component_bom_quantity'}), how='left')
.join(self.dm.lines[['plantName']])
.join(self.dm.plants[['locationName']], on='plantName')
.query("xProdSol > 0")
.reset_index()
# .groupby(['locationName', 'plantName','lineName', 'productName']).sum()
)
df2['componentName'] = df2.componentName.fillna('None') # For any product without components
df2['component_quantity'] = df2.xProdSol * df2.component_bom_quantity
df2 = (df2
.drop(columns=['component_bom_quantity','recipeId','timePeriodSeq'])
.groupby(['componentName', 'productName','lineName','plantName','locationName']).sum()
.rename(columns={'xProdSol':'quantity'})
)
df2 = df2.reset_index()
# display(df2.head())
df2 = (df2.merge(inventory_labels_df[['locationName','productName','id','type']], left_on=['locationName','productName'], right_on=['locationName','productName'])
.rename(columns={'id': 'target'})
)
df2 = (df2.merge(inventory_labels_df[['locationName','productName','id','type']], left_on=['locationName','componentName'], right_on=['locationName','productName'], suffixes=[None,'_y'])
.rename(columns={'id': 'source'})
.drop(columns=['productName_y'])
)
df2['label'] = df2.type + " - " + df2.componentName + " to " + df2.productName
df2 = df2[['productName', 'quantity', 'source', 'target', 'label']]
df2['color'] = 'olive'
# display(df2.head())
# Collect inventory flows - WIP
df3 = (self.dm.WIP[['wipQuantity']]
.query("wipQuantity > 0")
.rename(columns={'wipQuantity':'quantity'})
)
df3 = df3.reset_index()
df3['locationNameWip'] = df3.locationName + '_wip'
# display(df3.head())
df3 = (df3.merge(inventory_labels_df[['locationName','productName','id']], left_on=['locationName','productName'], right_on=['locationName','productName'])
.rename(columns={'id': 'target'})
# .drop(columns=['locationName'])
)
df3 = (df3.merge(inventory_labels_df[['locationName','productName','id']], left_on=['locationNameWip','productName'], right_on=['locationName','productName'], suffixes=[None,'_y'])
.rename(columns={'id': 'source'})
.drop(columns=['locationName_y'])
)
# display(df3.head())
df3['label'] = "wip - " + df3.productName + " to " + df3.locationName
# df1 = df1.drop(columns=['locationNameWip','locationName','shippingMode'])
# display(df3.head())
df3['color'] = 'lightsalmon'
if include_wip:
df = pd.concat([df1, df2, df3])
else:
df = pd.concat([df1, df2])
# Set pop-up text
# display(df.head())
# df['color'] = 'aquamarine'
fig = go.Figure(data=[go.Sankey(
# valueformat = ".0f",
# valuesuffix = "TWh",
# Define nodes
node=dict(
pad=15,
thickness=15,
line=dict(color="black", width=0.5),
label=inventory_labels_df.label.array,
),
# Add links
link=dict(
source=df.source.array,
target=df.target.array,
value=df.quantity.array,
label=df.label.array,
color = df.color.array,
))])
fig.update_layout(title_text="",
font_size=10,
height=1000,
margin={'l':40, 'r':40, 't':40})
return fig
def plotly_line_product_group_capacity_heatmap(self):
"""Heatmap of capacity as line vs product. Good insight on line specialization/recipe-properties.
Input tables: ['RecipeProperties', 'Line', 'Product']
Output tables: []
"""
df = (self.dm.recipe_properties[['capacity']]
.join(self.dm.lines)
.join(self.dm.products[['productGroup']])
# .join(self.dm.plants.rename(columns={'locationDescr':'plantDescr'}), on='plantName')
# .join(self.dm.locations, on='locationName')
) # .groupby(['lineName','productType']).max()
df = df.reset_index()
# df = df.pivot_table(values='capacity', index=['lineDescr'], columns=['productType'], aggfunc=np.max)
df = df.pivot_table(values='capacity', index=['lineName'], columns=['productGroup'], aggfunc=np.max)
labels = dict(x="Product Group", y="Line", color="Capacity")
fig = px.imshow(df, labels=labels, width=1000,
color_continuous_scale='YlOrRd',
# y = ["Abbott Olst<br>Granulate Line", "Abbott Olst<br>Packaging Line 5",
# "Abbott Olst<br>Packaging Line 6", "Abbott<br>Weesp Line"],
y = ["Granulate Line", "Packaging Line 1", "Packaging Line 2", "API Line"],
x = ["API", "Granulate", "Tablet", "Package"],
)
fig.update_layout(
title={
'text': "Maximum Line Capacity by Product Type",
# 'y': 0.92,
'x': 0.5,
'xanchor': 'center',
'yanchor': 'top'})
return fig
@plotly_figure_exception_handler
def plotly_transportation_bar(self, query = None, title = 'Transportation Activity'):
"""
"""
df = self.dm.transportation_activities[['xTransportationSol']].query("xTransportationSol > 0")\
.join(self.dm.products[['productGroup', 'productCountry']])\
.groupby(['timePeriodSeq', 'originLocationName', 'destinationLocationName','shippingMode','productName']).\
sum().rename(columns={'xTransportationSol':'quantity'})
if query is not None:
df = df.query(query).copy()
# title = "Departing From: " + query.split("originLocationName == ")[-1].replace("_", " ").replace("'","")
df = df.join(self.dm.products[['productGroup', 'productCountry']])
df = df.reset_index()
df.productCountry = df.productCountry.fillna("")
df['location_product'] = df['productCountry'] + " - " + df['productName']
df['location_product'] = df['location_product'].fillna('API')
color_discrete_map = self.gen_color_col(df['location_product'])
labels = {'location_product': 'Product Location', 'timePeriodSeq': 'Time Period', "quantity": 'Quantity'}
if len(df.shippingMode.unique()) < 2:
fct = None
else:
fct = "shippingMode"
active_shipping_mode_category_orders = [sm for sm in ['Air', 'Sea', 'Truck', 'Rail'] if sm in df.shippingMode.unique()]
fig = px.bar(data_frame = df, x = "timePeriodSeq", y = "quantity", color = "location_product",
labels = labels,
facet_col = fct,
# category_orders = category_orders,
category_orders = {'shippingMode': active_shipping_mode_category_orders},
color_discrete_map=color_discrete_map)
fig.update_layout(title = title, legend = {'orientation': 'v', 'x': 1.05},
margin = {'l':80, 't':80})
if len(df.shippingMode.unique()) > 1:
fig.for_each_annotation(lambda a: a.update(text=a.text.split("shippingMode=")[-1].capitalize()))
fig.update_layout(hovermode="closest")
return fig
def demand_choropleth_map(self):
""""""
df = (self.dm.demand
.join(self.dm.products[['productGroup', 'productCountry']]))
# Set location_product name
df = df.reset_index()
df['location_product'] = df.locationName + " - " + df.productName
df = (df
.groupby(['timePeriodSeq', 'location_product', 'productCountry']).sum()
.sort_values('quantity', ascending=False))
# locs = pd.read_csv('/workspace/geocode_abbott_locations_fixed.csv')
# print(self.dm.locations.head())
# print(locs.head())
locs = self.dm.locations.reset_index()
df = df.reset_index()
df = df.merge(locs[["locationName", "latitude", "longitude", "countryIso"]], left_on = "productCountry", right_on = "locationName")
df_gby = df.groupby("countryIso")['quantity'].mean().reset_index()
fig = px.choropleth(df_gby,
locations = "countryIso",
color = "quantity",
width = 1200,
title = "Demand Choropleth Map")
fig.update_layout(paper_bgcolor='#edf3f4',
geo=dict(bgcolor= '#edf3f4', showframe = False),
margin = {'b': 0, 't':50},
title = {'y': 0.95},
coloraxis_colorbar=dict(title="Quantity")
)
return fig
def unfulfilled_demand_choropleth_map(self, animation_col = None):
"""
"""
df0 = (self.dm.transportation_activities[['xTransportationSol']]
.groupby(['productName', 'destinationLocationName', 'timePeriodSeq']).sum()
.rename_axis(index={'destinationLocationName': 'locationName'})
.rename(columns={'xTransportationSol':'Transportation'})
)
product_aggregation_column = 'productName'
df = (self.dm.demand_inventories[['quantity','xFulfilledDemandSol','xUnfulfilledDemandSol','xBacklogSol','xBacklogResupplySol','xInvSol']]
.join(self.dm.products[['productGroup', 'productCountry']])
# .join(self.dm.locations)
.join(df0, how='left')
).groupby(['timePeriodSeq', 'productCountry', product_aggregation_column]).sum()
df = df.reset_index()
# locs = pd.read_csv('/workspace/geocode_abbott_locations_fixed.csv')
locs = self.dm.locations.reset_index()
df = df.merge(locs[["locationName", "latitude", "longitude", "countryIso"]], left_on = "productCountry", right_on = "locationName")
if animation_col is not None:
df_gby = df.groupby(["countryIso", animation_col])['xUnfulfilledDemandSol'].mean().reset_index()
title = "Animated Unfulfilled Demand Choropleth Map"
width = 2000
else:
df_gby = df.groupby("countryIso")['xUnfulfilledDemandSol'].mean().reset_index()
title = "Unfulfilled Demand Choropleth Map"
width = 1000
fig = px.choropleth(df_gby,
locations = "countryIso",
color = "xUnfulfilledDemandSol",
animation_frame=animation_col,
animation_group = animation_col,
# facet_col = "productGroup",
width = width,
# height = 1000,
title = title)
fig.update_layout(legend = {'title': 'Quantity'},
paper_bgcolor='#edf3f4',
geo=dict(bgcolor= '#edf3f4', showframe = False),
margin = {'b': 0, 't':50},
title = {'y': 0.95},
width = width,
coloraxis_colorbar=dict(title="Quantity")
)
return fig
def line_map(self):
"""
"""
import math
import random
aggregation_column = 'productName'
#Collect inventory flows - transportation
df1 = (self.dm.transportation_activities[['xTransportationSol']].join(self.dm.products[['productGroup']])
.query("xTransportationSol > 0")
.groupby(['originLocationName', 'destinationLocationName','shippingMode','productGroup']).sum()
.rename(columns={'xTransportationSol':'quantity'})
)
df1 = df1.reset_index()
# locs = pd.read_csv('/workspace/geocode_abbott_locations_fixed.csv')
locs = self.dm.locations.reset_index()
map_locs = df1.drop_duplicates(['originLocationName', 'destinationLocationName', 'shippingMode', 'productGroup', 'quantity'])
df6 = map_locs.merge(locs[["locationName", "latitude", "longitude", "countryIso"]], left_on = "originLocationName", right_on = "locationName")
df6 = df6.rename({'latitude': 'origin_lat', 'longitude':'origin_lon', 'countryIso':'origin_iso3'}, axis = 1)
df6 = df6.merge(locs[["locationName", "latitude", "longitude", "countryIso"]], left_on = "destinationLocationName", right_on = "locationName")
df6 = df6.rename({'latitude': 'destination_lat', 'longitude':'destination_lon', 'countryIso':'destination_iso3'}, axis = 1)
fig = go.Figure()
fig = fig.add_trace(go.Scattergeo(
# locationmode = 'USA-states',
lon = df6['origin_lon'],
lat = df6['origin_lat'],
hoverinfo = 'text',
text = df6['originLocationName'],
name = "Supply Chain Origin",
# showlegend = False,
mode = 'markers',
marker = dict(
size = 8,
color = 'rgb(255, 0, 0)',
)))
df6 = df6.reset_index().copy()
# add some jitter to prevent overlays
random.seed(42)
df6['destination_lat'] = df6['destination_lat'].apply(lambda x : x + random.uniform(-0.75, 0.75))
df6['destination_lon'] = df6['destination_lon'].apply(lambda x : x + random.uniform(-0.75, 0.75))
fig = fig.add_trace(go.Scattergeo(
# locationmode = 'USA-states',
lon = df6['destination_lon'],
lat = df6['destination_lat'],
hoverinfo = 'text',
text = df6['destinationLocationName'],
name = "Supply Chain Destination",
mode = 'markers',
marker = dict(
size = df6.quantity.apply(math.log).clip(lower = 2)*2,
color = "blue",
)))
color_dict = {'sea': 'darkblue', 'truck': 'darkgreen', 'air': 'darkred',
'Sea': 'darkblue', 'Truck': 'darkgreen', 'Air': 'darkred'}
df6['showlegend'] = False
df6['linetype'] = "solid"
# Show each shipping mode in the legend only once, on its first occurrence
ix = df6.groupby('shippingMode').first()['index'].values
df6.loc[ix, 'showlegend'] = True
for i in range(len(df6)):
fig.add_trace(
go.Scattergeo(
lon = [df6['origin_lon'][i], df6['destination_lon'][i]],
lat = [df6['origin_lat'][i], df6['destination_lat'][i]],
mode = 'lines',
name = df6['shippingMode'][i],
showlegend = bool(df6['showlegend'][i]),
# showlegend = False,
line_dash= df6['linetype'][i],
line = dict(width = 1,color = color_dict[df6['shippingMode'][i]]),
# opacity = float(df_flight_paths['cnt'][i]) / float(df_flight_paths['cnt'].max()),
)
)
# adding a choropleth on top
df = (self.dm.demand
.join(self.dm.products[['productGroup', 'productCountry']])
)
# Set location_product name
df = df.reset_index()
df['location_product'] = df.locationName + " - " + df.productName
df = (df
.groupby(['timePeriodSeq', 'location_product', 'productCountry']).sum()
.sort_values('quantity', ascending=False))
df = df.reset_index()
df = df.merge(locs[["locationName", "latitude", "longitude", "countryIso"]], left_on = "productCountry", right_on = "locationName")
df = df.sort_values('timePeriodSeq')
df_gby = df.groupby("countryIso")['quantity'].mean().reset_index()
fig = fig.add_trace(
go.Choropleth(
locations = df_gby['countryIso'],
z = df_gby['quantity'],
colorscale = "Reds",
colorbar_title = "Quantity"
)
)
fig.update_layout(coloraxis_colorbar_x=-1)
fig.update_layout(
width = 1000,
# height = 1000,
legend = {
'title': 'Transportation Type',
'orientation': 'v',
'x': 0.85,
'y': 0.9,
},
title = {'text': "Supply Chain Overview", 'y': 0.95},
margin = {
't': 50,
'b': 0,
},
paper_bgcolor='#edf3f4',
geo=dict(bgcolor= '#edf3f4', showframe = False),
)
return fig
def percent_unfullfilleddemand(self):
# product_aggregation_column = 'productGroup' # potentially for further unpacking
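# Unfulfilled % per period = unfulfilled(t) / (total demand / number of periods) * 100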
df = (self.dm.demand_inventories[['quantity','xUnfulfilledDemandSol']])
# .join(self.dm.products[['productGroup']])
# ).groupby(['timePeriodSeq']).sum()
unfulfilled_demand = df.xUnfulfilledDemandSol.groupby(['timePeriodSeq']).sum()
num_tp = len(self.dm.demand.index.unique(level='timePeriodSeq'))
average_monthly_demand = df.quantity.sum()/num_tp
final_df = (unfulfilled_demand/average_monthly_demand).replace([np.inf, -np.inf], np.nan).fillna(0).round(4)*100
# final_df = final_df.groupby('timePeriodSeq').mean()
return final_df
def percent_backlog(self):
# product_aggregation_column = 'productGroup'
df = (self.dm.demand_inventories[['quantity','xBacklogSol']])
# .join(self.dm.products[['productGroup']])
# ).groupby(['timePeriodSeq']).sum()
backlog = df.xBacklogSol.groupby('timePeriodSeq').sum()
num_tp = len(self.dm.demand.index.unique(level='timePeriodSeq'))
average_monthly_demand = df.quantity.sum()/num_tp
# final_df = (df.xBacklogSol/df.quantity).replace([np.inf, -np.inf], np.nan).fillna(0).round(4)*100
final_df = (backlog / average_monthly_demand).replace([np.inf, -np.inf], np.nan).fillna(0).round(4)*100
# final_df = final_df.groupby('timePeriodSeq').mean()
return final_df
def dos_inv(self):
# product_aggregation_column = 'productGroup'
# print(self.dm.products)
# can feed it plant or warehouse inventories
df_demand = (self.dm.demand_inventories[['quantity','xFulfilledDemandSol','xUnfulfilledDemandSol','xBacklogSol','xBacklogResupplySol','xInvSol']])
# .join(self.dm.products[['productGroup']])
# ).groupby(['timePeriodSeq']).sum()
df_inv = (self.dm.demand_inventories[['quantity','xFulfilledDemandSol','xUnfulfilledDemandSol','xBacklogSol','xBacklogResupplySol','xInvSol']])
# .join(self.dm.products[['productGroup']])
# ).groupby(['timePeriodSeq']).sum()
final_df = (df_demand.groupby(["productName", "timePeriodSeq"]).xInvSol.sum()
/ df_inv.groupby(["productName", "timePeriodSeq"]).quantity.sum()
).replace([np.inf, -np.inf], np.nan).fillna(0).round(4)
final_df = pd.Series(final_df.groupby('timePeriodSeq').mean().values)
t = self.get_demand_location_dos(30).groupby(['timePeriodSeq']).agg({'dosQuantity': 'sum'})
t = pd.Series(t.reset_index()['dosQuantity'])
final_dos = (final_df/t)
return final_dos
def average_inv(self):
# product_aggregation_column = 'productGroup'
# num_timeperiods = self.dm.active_timeperiods.max()
num_timeperiods = 30
df_inv = (self.dm.demand_inventories[['xInvSol']]
# .join(self.dm.products[['productGroup']])
).groupby(['timePeriodSeq']).sum()
final_df = (df_inv.xInvSol/num_timeperiods).round(4)
# final_df = final_df.groupby('timePeriodSeq').mean()
return final_df
def get_demand_location_dos(self, dos:int):
"""Compute the quantity of product at the end of a time-period that represents the
Days-Of-Supply computed using the actual demand in the following time-periods.
The quantity can be used in a days-of-supply inventory constraint or objective.
For the last time-periods, assume demand remains constant with the value of the last time-period.
Args:
dos (int): Days-Of-Supply. Number of days.
Note: use dm.demand_inventories. Is has already expanded to all time-periods.
"""
# num_tps = 24 # Number of time-periods
# Number of days per time-period. To keep it simple, use 30 per month. HARD-CODED for now. TODO: put in parameter, or add as column in TimePeriods
num_days_tp = 30
# print(self.dm.demand_inventories.head())
df = (self.dm.demand_inventories[['quantity']]
.sort_index() # sort index so the shift will work right
).fillna(0)
num_tps = len(df.index.unique(level='timePeriodSeq'))-1
# df['numDays'] = num_days_tp
df['demandPerDay'] = df.quantity / num_days_tp #df.numDays
df['nextDemandPerDay'] = df.demandPerDay # Note we are shifting the nextDemandPerDay, so initialize once
df['dosQuantity'] = 0 # We are incrementing the dosQuantity, so initialize
remaining_dos = dos # Remaining DOS in each iteration, initialize with all DOS
shift = 0 # Only for debuging
# Iterate over the next time-periods until it covers all requested dos days
# Sum the DOS quantity
# Assume demand is constant throughout the time-period
while remaining_dos > 0:
# print(remaining_dos)
shift = shift + 1
# print(shift)
next_dos = min(remaining_dos, num_days_tp)
# print(f"Shift = {shift}, remaining_dos = {remaining_dos}, next_dos={next_dos}")
df['nextDemandPerDay'] = df.groupby(['locationName','productName'])['nextDemandPerDay'].shift(-1) #, fill_value=0)
# print(df.head())
# print(num_tps)
# print(df.loc[pd.IndexSlice[:,:,num_tps],'demandPerDay'])
df.loc[pd.IndexSlice[:,:,num_tps],'nextDemandPerDay'] = df.loc[pd.IndexSlice[:,:,num_tps],'demandPerDay'] # Fill gap from the shift with last demand
# print("test")
df['dosQuantity'] = df.dosQuantity + df.nextDemandPerDay * next_dos
remaining_dos = remaining_dos - next_dos
# print("test")
# display(df.query("locationName=='NAMIBIA'").head(24))
df = df.drop(columns=['demandPerDay', 'nextDemandPerDay'])
# print(df)
return df
def kpi_heatmap(self):
'''Heatmap of the main KPIs (unfulfilled demand %, backlog %, inventory
days-of-supply, air shipping %, line utilization %) per time period,
color-coded green/orange/red by status.
'''
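# Each KPI row is discretized into a status code: 0 = green (OK),
# 1 = orange (warning), 2 = red (critical). The raw values are kept in
# final_df and surfaced in the hover text via customdata.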
cols = [self.percent_unfullfilleddemand(), self.percent_backlog(), self.dos_kpi(as_time = True),
self.calc_air_pct(as_time = True), self.utilization_kpi(as_time = True)]
final_df = pd.DataFrame(data= cols).\
rename({'xUnfulfilledDemandSol': 'unfulfilled_demand', 'xBacklogSol': 'backlog', 'xInvSol': 'dos_inv',
'Unnamed 0': 'air_sea_ratio', 'line_capacity_utilization': 'utilization'},
axis = 0)
heatmap_df = final_df.copy()
# DOS bands: green between 20 and 40 days, orange for 10-20 and 40-60, red at <= 10 or >= 60
heatmap_df.loc['dos_inv'] = np.where((heatmap_df.loc['dos_inv'] <= 10) | (heatmap_df.loc['dos_inv'] >= 60), 2,
np.where((heatmap_df.loc['dos_inv'] >= 20) & (heatmap_df.loc['dos_inv'] < 40), 0, 1))
heatmap_df.loc['unfulfilled_demand'] = np.where(heatmap_df.loc['unfulfilled_demand'] > 5, 2,
np.where(heatmap_df.loc['unfulfilled_demand'] < 2, 0, 1))
heatmap_df.loc['backlog'] = np.where(heatmap_df.loc['backlog'] > 10, 2,
np.where(heatmap_df.loc['backlog'] < 5, 0, 1))
heatmap_df.loc['air_sea_ratio'] = np.where(heatmap_df.loc['air_sea_ratio'] > 50, 2,
np.where(heatmap_df.loc['air_sea_ratio'] < 20, 0, 1))
heatmap_df.loc['utilization'] = np.where(heatmap_df.loc['utilization'] > 95, 2,
np.where(heatmap_df.loc['utilization'] < 85, 0, 1))
# final_df = final_df.apply(lambda x:(x - x.min())/(x.max() - x.min()), axis = 0)
fig = px.imshow(heatmap_df,
color_continuous_scale =["green", "orange", "red"],
y = ["Unfulfilled Demand %", "Backlog %", "Inventory<br>Days of Supply", "Air Shipping %", "Utilization %"]
)
# customdata allows to add an "invisible" dataset that is not being plotted but whose values can be used for reference
fig.update_traces(customdata= final_df,
hovertemplate =
"%{y}: %{customdata: .3f}"+
"<br>Time Period %{x}"+
'<extra></extra>')
fig.update(layout_coloraxis_showscale=False) # hide colorbar
fig.update_layout(margin = {'b':40, 'l':140, 'r':10, 't':20})
return fig
def make_gauge(self, value: float, title: str, orange_threshold: float, red_threshold: float, max_val: float):
"""
"""
steps = [
{'range': [0, orange_threshold], 'color': 'green'},
{'range': [orange_threshold, red_threshold], 'color': 'orange'},
{'range': [red_threshold, max_val], 'color': 'red'},
]
fig = go.Figure(go.Indicator(
mode = "gauge+number",
value = value,
domain = {'x': [0, 1], 'y': [0, .75]},
title = {'text': title, 'font': {'color': 'black', 'size': 18}},
gauge = {'axis': {'range': [None, max_val], 'tickfont': {'color': 'black'}},
'threshold' : {'line': {'color': "darkred", 'width': 4}, 'thickness': 0.75, 'value': red_threshold},
'steps': steps,
'bar': {'color': "darkblue"},},
)
)
fig.update_layout(font = {'color': 'green' if value < orange_threshold else 'orange' if value < red_threshold else 'red', 'family': "Arial"},
margin={'t':10,'b':30},
)
return fig
def make_gauge_dos(self, value: float, title: str, max_val: float, type = None):
''' Standalone gauge for the Days-Of-Supply KPI, with fixed red/orange/green bands.
'''
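# Band layout mirrors the DOS KPI thresholds: red below 10 and above 60 days,
# orange for 10-20 and 40-60, green for the 20-40 target zone.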
steps = [
{'range': [0, 10], 'color': 'red'},
{'range': [60, max_val], 'color': 'red'},
{'range': [10, 20], 'color': 'orange'},
{'range': [40, 60], 'color': 'orange'},
{'range': [20, 40], 'color': 'green'},
]
fig = go.Figure(go.Indicator(
mode = "gauge+number",
value = value,
domain = {'x': [0, 1], 'y': [0, .75]},
title = {'text': title, 'font': {'color': 'black', 'size': 18}},
gauge = {'axis': {'range': [None, max_val], 'tickfont': {'color': 'black'}},
'threshold' : {'line': {'color': "darkred", 'width': 4}, 'thickness': 0.75, 'value': 60},
'steps': steps,
'bar': {'color': "darkblue"},},
)
)
fig.update_layout(font = {'color': 'green' if 20 < value < 40 else 'orange' if (10 < value < 20) or (40 < value < 60) else 'red', 'family': "Arial"},
margin={'t': 10, 'b': 30})
return fig
def calc_air_pct(self, as_time = False):
"""
When setting as_time = True, returns a vector with a value at each time index.
The issue is that not all time indices have a value for air or sea shipping.
A hacky solution: create a df initialized to 0 with all combinations of timePeriodSeq and shippingMode (i.e. 21 time periods * 3 shippingModes)
then iterate over the original df that was grouped by timePeriodSeq and shippingMode,
check if the grouped data has a value for that time/shippingMode combination,
if yes then copy/paste that value
if no, then keep 0 as the value
TODO: Probably a better way to write that code
"""
df = self.dm.transportation_activities[['xTransportationSol']].query("xTransportationSol > 0")\
.join(self.dm.products[['productGroup', 'productCountry']])\
.groupby(['timePeriodSeq', 'originLocationName', 'destinationLocationName','shippingMode','productName']).\
sum().rename(columns={'xTransportationSol':'quantity'})
if 'Air' not in df.index.get_level_values('shippingMode') and as_time:
num_tp = len(self.dm.demand.index.unique(level='timePeriodSeq'))
return pd.Series(index = range(num_tp+1), data = 0)
elif 'Air' not in df.index.get_level_values('shippingMode') and not as_time:
return 0
if as_time:
df = df.reset_index()
from itertools import product
df_gby = df.groupby(['shippingMode', 'timePeriodSeq'])[['quantity']].sum().reset_index()
# Build the full (shippingMode, timePeriodSeq) grid so that periods without
# a shipment for a mode still get a row, then fill missing quantities with 0
dft = pd.DataFrame(product(df['shippingMode'].unique(),
df['timePeriodSeq'].unique()), columns = ['shippingMode', 'timePeriodSeq'])
dft = dft.merge(df_gby, on = ['shippingMode', 'timePeriodSeq'], how = 'left')
dft['quantity'] = dft['quantity'].fillna(0)
air = dft.loc[dft.shippingMode == 'Air'].quantity.values
sea = dft.loc[dft.shippingMode == 'Sea'].quantity.values
ratio = pd.Series(air/(air+sea)).replace([np.inf, -np.inf], np.nan).fillna(0).round(3)
else:
df_gby = df.groupby('shippingMode').sum()
air = df_gby.loc[df_gby.index == 'Air'].quantity.values
sea = df_gby.loc[df_gby.index == 'Sea'].quantity.values
ratio = air/(sea+air)
ratio = np.round(ratio, 3)
return ratio*100
def utilization_kpi(self, as_time = False):
"""
"""
product_aggregation_column = 'productGroup'
df = (self.dm.production_activities[['line_capacity_utilization']]
.join(self.dm.products[['productGroup']])
).groupby(['timePeriodSeq', 'lineName', product_aggregation_column]).sum().reset_index()
# df = df[df['lineName'] == 'Abbott_Olst_Packaging_Line_5']
df = df[df['lineName'].isin(['Abbott_Olst_Packaging_Line_5', 'Packaging_Line_1'])] # works both for Client and Pharma
# df = df[df['lineName'] == 'Packaging_Line_1'] # Ony for Pharma
df['line_capacity_utilization'] = (df['line_capacity_utilization'].replace(0, np.nan)*100)
# VT notes 20211122: why the replace 0 with Nan? Probably to force the mean() to ignore months that have zero utilization?
# TODO: why not filter?
# df['line_capacity_utilization'] = (df['line_capacity_utilization']*100)
if as_time:
return df.set_index('timePeriodSeq')['line_capacity_utilization'].sort_index()
else:
return float(df.groupby('lineName')['line_capacity_utilization'].mean())
def dos_kpi(self, as_time = False):
'''Demand-location inventory expressed in days-of-supply, per time period
(as_time = True) or averaged over the horizon.
'''
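# dos(t) = inventory(t) / (total demand / total days in horizon), i.e. how many
# days the end-of-period inventory would cover at the average daily demand rate.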
df = self.dm.demand_inventories[['quantity', 'xInvSol']]
num_days = len(self.dm.demand.index.unique(level='timePeriodSeq')) * 30
demand_inv = df.groupby('timePeriodSeq')['xInvSol'].sum()
total_demand = df['quantity'].sum()
demand_dos = demand_inv / (total_demand / num_days)
if as_time:
return demand_dos
else:
return float(demand_dos.mean())
| 44.635089 | 204 | 0.550857 | 13,137 | 130,513 | 5.327929 | 0.064474 | 0.015002 | 0.010715 | 0.012858 | 0.808024 | 0.774277 | 0.746989 | 0.725415 | 0.703928 | 0.690969 | 0 | 0.015171 | 0.308069 | 130,513 | 2,923 | 205 | 44.650359 | 0.759894 | 0.213534 | 0 | 0.5961 | 0 | 0.000557 | 0.206708 | 0.016556 | 0 | 0 | 0 | 0.004447 | 0 | 1 | 0.026741 | false | 0.001114 | 0.007242 | 0 | 0.062396 | 0.001114 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1c220487a5ecd6a0ace19bd41e7559a91f6d226c | 279 | py | Python | social_network/management/commands/configuresocialnetwork.py | diana-gv/django-social-network | 48bafca81f28874ceead59e263ce5b7e3853dbfb | [
"BSD-3-Clause"
] | 3 | 2015-01-13T05:45:04.000Z | 2020-01-10T19:05:35.000Z | social_network/management/commands/configuresocialnetwork.py | diana-gv/django-social-network | 48bafca81f28874ceead59e263ce5b7e3853dbfb | [
"BSD-3-Clause"
] | null | null | null | social_network/management/commands/configuresocialnetwork.py | diana-gv/django-social-network | 48bafca81f28874ceead59e263ce5b7e3853dbfb | [
"BSD-3-Clause"
] | 6 | 2015-01-13T04:40:53.000Z | 2021-08-13T01:07:40.000Z | # coding=utf-8
from django.core.management.base import NoArgsCommand
class Command(NoArgsCommand):
def handle_noargs(self, **options):
from ...management import configure_notifications, create_edge_types
configure_notifications()
create_edge_types() | 31 | 76 | 0.749104 | 31 | 279 | 6.516129 | 0.709677 | 0.217822 | 0.277228 | 0.316832 | 0.366337 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00431 | 0.168459 | 279 | 9 | 77 | 31 | 0.866379 | 0.043011 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1c3a9e2d07809822b70de13a2212e36d403b00b3 | 149 | py | Python | Muta3DMaps/core/AsyncV/__init__.py | NatureGeorge/SIFTS_Plus_Muta_Maps | 60f84e6024508e65ee3791103762b95666d3c646 | [
"MIT"
] | null | null | null | Muta3DMaps/core/AsyncV/__init__.py | NatureGeorge/SIFTS_Plus_Muta_Maps | 60f84e6024508e65ee3791103762b95666d3c646 | [
"MIT"
] | null | null | null | Muta3DMaps/core/AsyncV/__init__.py | NatureGeorge/SIFTS_Plus_Muta_Maps | 60f84e6024508e65ee3791103762b95666d3c646 | [
"MIT"
] | null | null | null | # @Date: 2019-11-20T22:46:50+08:00
# @Email: 1730416009@stu.suda.edu.cn
# @Filename: __init__.py
# @Last modified time: 2019-11-24T23:13:42+08:00
| 29.8 | 48 | 0.691275 | 27 | 149 | 3.666667 | 0.851852 | 0.121212 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.348485 | 0.114094 | 149 | 4 | 49 | 37.25 | 0.401515 | 0.939597 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
98e2e7d75d86b8ab47a6484e478dfaddff2953b4 | 329 | py | Python | pywidget/decode.py | jiajia15401/pywidget | ed4296aab0ce1a5ec01ef1dedaf3a1cec53ad0d3 | [
"MIT"
] | 1 | 2018-12-08T18:14:53.000Z | 2018-12-08T18:14:53.000Z | pywidget/decode.py | jiajia15401/pywidget | ed4296aab0ce1a5ec01ef1dedaf3a1cec53ad0d3 | [
"MIT"
] | null | null | null | pywidget/decode.py | jiajia15401/pywidget | ed4296aab0ce1a5ec01ef1dedaf3a1cec53ad0d3 | [
"MIT"
] | null | null | null | from pywidget.save import *
class decode(use_to_decode):
def __init__(self):
pass
###################
#. @ #
#@@@ #
#. @ #
#. @ @ . #
# @ #
###################
| 27.416667 | 40 | 0.197568 | 13 | 329 | 4.538462 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.595745 | 329 | 11 | 41 | 29.909091 | 0.443609 | 0.458967 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
98f2c730707e8e81e60e918912cc23aa7d5329c0 | 683 | py | Python | workdays/views.py | anhtran304/workdays | c0d3d6dac80e3cac140c769f2c7e751bf26d054a | [
"MIT"
] | null | null | null | workdays/views.py | anhtran304/workdays | c0d3d6dac80e3cac140c769f2c7e751bf26d054a | [
"MIT"
] | null | null | null | workdays/views.py | anhtran304/workdays | c0d3d6dac80e3cac140c769f2c7e751bf26d054a | [
"MIT"
] | null | null | null | from django.shortcuts import render
def index(request):
return render(request, "index.html")
def farmerportal(request):
return render(request, "farmerportal.html")
def workerportal(request):
return render(request, "workerportal.html")
def workerindex(request):
return render(request, "workerindex.html")
def farmerindex(request):
return render(request, "farmerindex.html")
def farmerpublic(request):
return render(request, "farmerpublic.html")
def workerpublic(request):
return render(request, "workerpublic.html")
def howitworks(request):
return render(request, "howitworks.html")
def about(request):
return render(request, "about.html") | 24.392857 | 47 | 0.743777 | 77 | 683 | 6.597403 | 0.233766 | 0.230315 | 0.336614 | 0.46063 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136164 | 683 | 28 | 48 | 24.392857 | 0.861017 | 0 | 0 | 0 | 0 | 0 | 0.197368 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.473684 | false | 0 | 0.052632 | 0.473684 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
c72bcaac393039783c699d40e07f46b2e08c5218 | 2,617 | py | Python | smarthome_migration/ac_device.py | johnklee/learn_dp_from_bad_smell_design | 88506487ce64a1b9492ec28fe235ae596ddf1472 | [
"MIT"
] | null | null | null | smarthome_migration/ac_device.py | johnklee/learn_dp_from_bad_smell_design | 88506487ce64a1b9492ec28fe235ae596ddf1472 | [
"MIT"
] | 4 | 2022-01-02T06:49:43.000Z | 2022-02-15T12:36:41.000Z | smarthome_migration/ac_device.py | johnklee/learn_dp_from_bad_smell_design | 88506487ce64a1b9492ec28fe235ae596ddf1472 | [
"MIT"
] | null | null | null | """Air condition controller."""
import enum
import device_api
from log_utils import get_logger
class ACControllerV1(device_api.ACInterface):
def __init__(self):
self.on_state = False
self.log = get_logger(self)
self.degree = 20
def is_on(self):
return self.on_state
def get_degree(self):
return self.degree
def turn_on(self, degree=None):
if not self.on_state:
if degree is None:
degree = self.degree
else:
self.degree = degree
self.log.info('\tTurn on AC at degree=%d', self.degree)
self.on_state = True
def turn_off(self):
if self.on_state:
self.log.info('\tTurn off AC...')
self.on_state = False
def turn_degree(self, degree):
if self.on_state:
self.log.info('\tTurning degree to be %d', degree)
self.degree = degree
else:
self.log.warning('\tPlease turn on AC first!')
class ACControllerV2(device_api.ACInterface):
def __init__(self):
self.on_state = False
self.log = get_logger(self)
self._degree = 20
def is_on(self):
return self.on_state
def get_degree(self):
return self.degree
def on(self, degree=None):
if not self.on_state:
self.log.info('\tTurn on AC')
self.on_state = True
degree = self._degree
if degree is not None:
self.degree = degree
def off(self):
if self.on_state:
self.log.info('\tTurn off AC...')
self.on_state = False
@property
def degree(self):
return self._degree
@degree.setter
def degree(self, val):
if self.on_state:
self.log.info('\tTurning degree to be %d', val)
self._degree = val
else:
self.log.warning('\tPlease turn on AC first!')
class ACControllerV3(device_api.ACInterface):
def __init__(self):
self._state = device_api.PowerState.OFF
self.log = get_logger(self)
self._degree = 20
def is_on(self):
return self._state == device_api.PowerState.ON
def get_degree(self):
return self.degree
def on(self, degree=None):
if not self.is_on():
self.log.info('\tTurn on AC')
self._state = device_api.PowerState.ON
degree = self._degree
if degree is not None:
self.degree = degree
def off(self):
if self.is_on():
self.log.info('\tTurn off AC...')
self._state = device_api.PowerState.OFF
@property
def degree(self):
return self._degree
@degree.setter
def degree(self, val):
if self.on_state:
self.log.info('\tTurning degree to be %d', val)
self._degree = val
else:
self.log.warning('\tPlease turn on AC first!')
| 22.560345 | 61 | 0.643485 | 385 | 2,617 | 4.21039 | 0.127273 | 0.135719 | 0.101789 | 0.059223 | 0.832202 | 0.832202 | 0.754473 | 0.682912 | 0.682912 | 0.658853 | 0 | 0.004534 | 0.241498 | 2,617 | 115 | 62 | 22.756522 | 0.812091 | 0.009553 | 0 | 0.8 | 0 | 0 | 0.096674 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.033333 | 0.088889 | 0.377778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c747c95762064fa839b70121bc7b98e9140010d2 | 41 | py | Python | malcolm/modules/system/__init__.py | dinojugosloven/pymalcolm | 0b856ee1113efdb42f2f3b15986f8ac5f9e1b35a | [
"Apache-2.0"
] | null | null | null | malcolm/modules/system/__init__.py | dinojugosloven/pymalcolm | 0b856ee1113efdb42f2f3b15986f8ac5f9e1b35a | [
"Apache-2.0"
] | null | null | null | malcolm/modules/system/__init__.py | dinojugosloven/pymalcolm | 0b856ee1113efdb42f2f3b15986f8ac5f9e1b35a | [
"Apache-2.0"
] | null | null | null | from . import defines, parts, controllers | 41 | 41 | 0.804878 | 5 | 41 | 6.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121951 | 41 | 1 | 41 | 41 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.