hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
747d6e9a269040ab426472ee4f6b1ee1174c111d | 10,841 | py | Python | tests/interpretation/test_ablation.py | qcri/NeuroX | a56528231f6514412f3703af48effce1404cb069 | [
"BSD-3-Clause"
] | 87 | 2018-12-12T11:58:21.000Z | 2022-03-26T19:19:46.000Z | tests/interpretation/test_ablation.py | qcri/NeuroX | a56528231f6514412f3703af48effce1404cb069 | [
"BSD-3-Clause"
] | 16 | 2019-07-08T23:45:18.000Z | 2022-03-30T14:46:40.000Z | tests/interpretation/test_ablation.py | qcri/NeuroX | a56528231f6514412f3703af48effce1404cb069 | [
"BSD-3-Clause"
] | 15 | 2019-02-12T08:52:35.000Z | 2022-03-15T13:13:32.000Z | import unittest
from unittest.mock import MagicMock, patch
import numpy as np
import neurox.interpretation.ablation as ablation
class TestFilterActivationsKeepNeurons(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.total_examples = 10
cls.total_neurons = 100
cls.neurons_to_keep = 3
def setUp(self):
activations = np.random.random((self.total_examples, self.total_neurons))
useful_activations = activations[:, :self.neurons_to_keep].copy()
shuffled_idx = np.random.permutation(np.arange(self.total_neurons))
activations = activations[:, shuffled_idx]
neuron_idx_to_keep = np.concatenate(
[np.where(shuffled_idx==i)[0] for i in range(self.neurons_to_keep)]
)
self.activations = activations
self.useful_activations = useful_activations
self.neuron_idx_to_keep = neuron_idx_to_keep
def test_filter_activations_keep_neurons(self):
"Filter activations (keep neurons)"
# Test if the correct activations are returned
filtered_activations = ablation.filter_activations_keep_neurons(self.activations, self.neuron_idx_to_keep)
np.testing.assert_array_almost_equal(filtered_activations, self.useful_activations)
def test_filter_activations_keep_neurons_view(self):
"Filter activations (keep neurons) view"
# Test if changing the returned view changes the original matrix
filtered_activations = ablation.filter_activations_keep_neurons(self.activations, self.neuron_idx_to_keep)
filtered_activations[:, :] = 0
np.testing.assert_array_almost_equal(filtered_activations, np.zeros((self.total_examples, self.neurons_to_keep)))
def test_keep_specific_neurons(self):
"Filter activations (keep neurons) - alternative function"
# Test if the correct activations are returned
filtered_activations = ablation.keep_specific_neurons(self.activations, self.neuron_idx_to_keep)
np.testing.assert_array_almost_equal(filtered_activations, self.useful_activations)
def test_keep_specific_neurons_view(self):
"Filter activations (keep neurons) view - alternative function"
# Test if changing the returned view changes the original matrix
filtered_activations = ablation.keep_specific_neurons(self.activations, self.neuron_idx_to_keep)
filtered_activations[:, :] = 0
np.testing.assert_array_almost_equal(filtered_activations, np.zeros((self.total_examples, self.neurons_to_keep)))
class TestFilterActivationsRemoveNeurons(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.total_examples = 10
cls.total_neurons = 100
cls.neurons_to_remove = 3
def setUp(self):
activations = np.random.random((self.total_examples, self.total_neurons))
useful_activations = np.sort(activations[:, self.neurons_to_remove:].copy())
shuffled_idx = np.random.permutation(np.arange(self.total_neurons))
activations = activations[:, shuffled_idx]
neuron_idx_to_remove = np.concatenate(
[np.where(shuffled_idx==i)[0] for i in range(self.neurons_to_remove)]
)
self.activations = activations
self.useful_activations = useful_activations
self.neuron_idx_to_remove = neuron_idx_to_remove
def test_filter_activations_remove_neurons(self):
"Filter activations (remove neurons)"
# Test if the correct activations are returned
filtered_activations = ablation.filter_activations_remove_neurons(self.activations, self.neuron_idx_to_remove)
np.testing.assert_array_almost_equal(np.sort(filtered_activations), self.useful_activations)
def test_filter_activations_remove_neurons_view(self):
"Filter activations (remove neurons) view"
# Test if changing the returned view changes the original matrix
filtered_activations = ablation.filter_activations_remove_neurons(self.activations, self.neuron_idx_to_remove)
filtered_activations[:, :] = 0
np.testing.assert_array_almost_equal(filtered_activations, np.zeros((self.total_examples, self.total_neurons - self.neurons_to_remove)))
class TestZeroOutActivationsKeepNeurons(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.total_examples = 10
cls.total_neurons = 100
cls.neurons_to_keep = 3
def test_zero_out_activations_keep_neurons(self):
"Zero out activations (keep neurons)"
# Test if the correct activations are returned
activations = np.random.random((self.total_examples, self.total_neurons))
expected_activations = activations.copy()
expected_activations[:, self.neurons_to_keep:] = 0
shuffled_idx = np.random.permutation(np.arange(self.total_neurons))
activations = activations[:, shuffled_idx]
expected_activations = expected_activations[:, shuffled_idx]
neuron_idx_to_keep = np.concatenate(
[np.where(shuffled_idx==i)[0] for i in range(self.neurons_to_keep)]
)
filtered_activations = ablation.zero_out_activations_keep_neurons(activations, neuron_idx_to_keep)
np.testing.assert_array_almost_equal(filtered_activations, expected_activations)
class TestZeroOutActivationsRemoveNeurons(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.total_examples = 10
cls.total_neurons = 100
cls.neurons_to_remove = 3
def test_zero_out_activations_remove_neurons(self):
"Zero out activations (remove neurons)"
# Test if the correct activations are returned
activations = np.random.random((self.total_examples, self.total_neurons))
expected_activations = activations.copy()
expected_activations[:, :self.neurons_to_remove] = 0
shuffled_idx = np.random.permutation(np.arange(self.total_neurons))
activations = activations[:, shuffled_idx]
expected_activations = expected_activations[:, shuffled_idx]
neuron_idx_to_remove = np.concatenate(
[np.where(shuffled_idx==i)[0] for i in range(self.neurons_to_remove)]
)
filtered_activations = ablation.zero_out_activations_remove_neurons(activations, neuron_idx_to_remove)
np.testing.assert_array_almost_equal(filtered_activations, expected_activations)
class TestFilterActivationsByLayers(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.num_examples = 10
cls.num_neurons_per_layer = 72
cls.num_layers = 4
def setUp(self):
layer_activations = []
for l in range(self.num_layers):
layer_activations.append(np.random.random((self.num_examples, self.num_neurons_per_layer)))
self.layer_activations = layer_activations
self.activations = np.concatenate(layer_activations, axis=1)
def test_filter_activations_by_layers_start_layer(self):
"Filter activations by layer (Start layer)"
selected_layer = 0
filtered_activations = ablation.filter_activations_by_layers(
self.activations, [selected_layer], self.num_layers
)
np.testing.assert_array_almost_equal(filtered_activations, self.layer_activations[selected_layer])
def test_filter_activations_by_layers_middle_layer(self):
"Filter activations by layer (Middle layer)"
selected_layer = 1
filtered_activations = ablation.filter_activations_by_layers(
self.activations, [selected_layer], self.num_layers
)
np.testing.assert_array_almost_equal(filtered_activations, self.layer_activations[selected_layer])
def test_filter_activations_by_layers_last_layer(self):
"Filter activations by layer (Last layer)"
selected_layer = self.num_layers - 1
filtered_activations = ablation.filter_activations_by_layers(
self.activations, [selected_layer], self.num_layers
)
np.testing.assert_array_almost_equal(filtered_activations, self.layer_activations[selected_layer])
def test_filter_activations_by_layers_multiple(self):
"Filter activations by layers (Multiple layers)"
selected_layers = [1, 3]
filtered_activations = ablation.filter_activations_by_layers(
self.activations, selected_layers, self.num_layers
)
expected_output = np.concatenate(
[self.layer_activations[s_l] for s_l in selected_layers],
axis=1
)
np.testing.assert_array_almost_equal(filtered_activations, expected_output)
def test_filter_activations_by_layers_bidi_forward(self):
"Filter activations by layer (Bi-directional forward)"
selected_layer = 2
filtered_activations = ablation.filter_activations_by_layers(
self.activations, [selected_layer], self.num_layers,
bidirectional_filtering="forward"
)
np.testing.assert_array_almost_equal(filtered_activations, self.layer_activations[selected_layer][:, :self.num_neurons_per_layer//2])
def test_filter_activations_by_layers_bidi_backward(self):
"Filter activations by layer (Bi-directional backward)"
selected_layer = 2
filtered_activations = ablation.filter_activations_by_layers(
self.activations, [selected_layer], self.num_layers,
bidirectional_filtering="backward"
)
np.testing.assert_array_almost_equal(filtered_activations, self.layer_activations[selected_layer][:, self.num_neurons_per_layer//2:])
def test_filter_activations_by_layers_bidi_forward_multiple(self):
"Filter activations by layers (Bi-directional forward)"
selected_layers = [1, 3]
filtered_activations = ablation.filter_activations_by_layers(
self.activations, selected_layers, self.num_layers,
bidirectional_filtering="forward"
)
expected_output = np.concatenate(
[self.layer_activations[s_l][:, :self.num_neurons_per_layer//2] for s_l in selected_layers],
axis=1
)
np.testing.assert_array_almost_equal(filtered_activations, expected_output)
def test_filter_activations_by_layers_bidi_backward_multiple(self):
"Filter activations by layers (Bi-directional backward)"
selected_layers = [1, 3]
filtered_activations = ablation.filter_activations_by_layers(
self.activations, selected_layers, self.num_layers,
bidirectional_filtering="backward"
)
expected_output = np.concatenate(
[self.layer_activations[s_l][:, self.num_neurons_per_layer//2:] for s_l in selected_layers],
axis=1
)
np.testing.assert_array_almost_equal(filtered_activations, expected_output) | 44.983402 | 144 | 0.72521 | 1,271 | 10,841 | 5.844217 | 0.080252 | 0.086968 | 0.061389 | 0.063947 | 0.901319 | 0.862547 | 0.796311 | 0.779079 | 0.750404 | 0.732903 | 0 | 0.006554 | 0.197768 | 10,841 | 241 | 145 | 44.983402 | 0.847534 | 0.10571 | 0 | 0.55615 | 0 | 0 | 0.071593 | 0 | 0 | 0 | 0 | 0 | 0.085562 | 1 | 0.128342 | false | 0 | 0.02139 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
777915dc659d9baa14226c7edb10de87e4db9050 | 244 | py | Python | src/productrec/pipelines/cleaning/__init__.py | HSV-AI/product-recommendation | 6e5fabce4f7e579e78a3c59730024d221169e3c4 | [
"Apache-2.0"
] | 2 | 2021-06-04T20:04:17.000Z | 2022-02-18T05:23:55.000Z | src/productrec/pipelines/cleaning/__init__.py | HSV-AI/product-recommendation | 6e5fabce4f7e579e78a3c59730024d221169e3c4 | [
"Apache-2.0"
] | 28 | 2021-06-10T00:36:58.000Z | 2022-03-14T20:21:48.000Z | src/productrec/pipelines/cleaning/__init__.py | HSV-AI/product-recommendation | 6e5fabce4f7e579e78a3c59730024d221169e3c4 | [
"Apache-2.0"
] | 1 | 2022-02-18T05:23:58.000Z | 2022-02-18T05:23:58.000Z | from .nodes import clean_brazillian
from .nodes import clean_electronics
from .nodes import clean_ecommerce
from .nodes import clean_jewelry
from .nodes import clean_journey
from .nodes import clean_retailrocket
from .nodes import clean_vipin20 | 34.857143 | 37 | 0.860656 | 35 | 244 | 5.8 | 0.314286 | 0.310345 | 0.517241 | 0.689655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009217 | 0.110656 | 244 | 7 | 38 | 34.857143 | 0.926267 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
77868b1f0f907a4cba9f27f9515aaa76107533ca | 41 | py | Python | command_handlers/__init__.py | darkracer/sound-bot | 7ce72a912c13361b411107361c3cf37c86e56cf4 | [
"MIT"
] | null | null | null | command_handlers/__init__.py | darkracer/sound-bot | 7ce72a912c13361b411107361c3cf37c86e56cf4 | [
"MIT"
] | null | null | null | command_handlers/__init__.py | darkracer/sound-bot | 7ce72a912c13361b411107361c3cf37c86e56cf4 | [
"MIT"
] | null | null | null | from .command_handlers import handle_play | 41 | 41 | 0.902439 | 6 | 41 | 5.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 41 | 1 | 41 | 41 | 0.921053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
77c8080f46ccbb3c5e8312043601f324f6fa0c45 | 120 | py | Python | tests/py/map.py | HappyFacade/micropython-wrap | 26fcd5a7e845e76cb004925ae53b284f0ad7abca | [
"MIT"
] | 92 | 2015-02-09T06:47:05.000Z | 2022-03-03T01:57:51.000Z | tests/py/map.py | HappyFacade/micropython-wrap | 26fcd5a7e845e76cb004925ae53b284f0ad7abca | [
"MIT"
] | 8 | 2016-12-08T16:30:47.000Z | 2022-02-22T14:53:14.000Z | tests/py/map.py | HappyFacade/micropython-wrap | 26fcd5a7e845e76cb004925ae53b284f0ad7abca | [
"MIT"
] | 18 | 2016-06-12T10:10:55.000Z | 2022-03-12T14:51:45.000Z | import upywraptest
print(upywraptest.Map1({'a': 1, 'b': 2, 'def': 444}))
print(upywraptest.Map2({'a': [1], 'b': [2]}))
| 24 | 53 | 0.591667 | 18 | 120 | 3.944444 | 0.611111 | 0.450704 | 0.084507 | 0.112676 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084112 | 0.108333 | 120 | 4 | 54 | 30 | 0.579439 | 0 | 0 | 0 | 0 | 0 | 0.058333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
7ad93a7eb406d3fe33ffc8207a9e6886c39c1059 | 3,826 | py | Python | orders/migrations/0005_auto_20191222_2252.py | yun-mh/uniwalk | f5307f6970b24736d13b56b4792c580398c35b3a | [
"Apache-2.0"
] | null | null | null | orders/migrations/0005_auto_20191222_2252.py | yun-mh/uniwalk | f5307f6970b24736d13b56b4792c580398c35b3a | [
"Apache-2.0"
] | 9 | 2020-01-10T14:10:02.000Z | 2022-03-12T00:08:19.000Z | orders/migrations/0005_auto_20191222_2252.py | yun-mh/uniwalk | f5307f6970b24736d13b56b4792c580398c35b3a | [
"Apache-2.0"
] | null | null | null | # Generated by Django 2.2.5 on 2019-12-22 13:52
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('orders', '0004_auto_20191213_2253'),
]
operations = [
migrations.RemoveField(
model_name='order',
name='email',
),
migrations.AlterField(
model_name='order',
name='address_city_orderer',
field=models.CharField(max_length=40, verbose_name='市区町村(お届け先)'),
),
migrations.AlterField(
model_name='order',
name='address_city_recipient',
field=models.CharField(max_length=40, verbose_name='市区町村(ご請求書先)'),
),
migrations.AlterField(
model_name='order',
name='address_detail_orderer',
field=models.CharField(max_length=40, verbose_name='建物名・部屋番号(お届け先)'),
),
migrations.AlterField(
model_name='order',
name='address_detail_recipient',
field=models.CharField(max_length=40, verbose_name='建物名・部屋番号(ご請求書先)'),
),
migrations.AlterField(
model_name='order',
name='first_name_orderer',
field=models.CharField(max_length=30, verbose_name='名(お届け先)'),
),
migrations.AlterField(
model_name='order',
name='first_name_orderer_kana',
field=models.CharField(max_length=30, verbose_name='名(カナ, お届け先)'),
),
migrations.AlterField(
model_name='order',
name='first_name_recipient',
field=models.CharField(max_length=30, verbose_name='名(ご請求書先)'),
),
migrations.AlterField(
model_name='order',
name='first_name_recipient_kana',
field=models.CharField(max_length=30, verbose_name='名(ご請求書先,カナ)'),
),
migrations.AlterField(
model_name='order',
name='last_name_orderer',
field=models.CharField(max_length=30, verbose_name='姓(お届け先)'),
),
migrations.AlterField(
model_name='order',
name='last_name_orderer_kana',
field=models.CharField(max_length=30, verbose_name='姓(カナ, お届け先)'),
),
migrations.AlterField(
model_name='order',
name='last_name_recipient',
field=models.CharField(max_length=30, verbose_name='姓(ご請求書先)'),
),
migrations.AlterField(
model_name='order',
name='last_name_recipient_kana',
field=models.CharField(max_length=30, verbose_name='姓(ご請求書先,カナ)'),
),
migrations.AlterField(
model_name='order',
name='phone_number_orderer',
field=models.CharField(max_length=15, verbose_name='電話番号(お届け先)'),
),
migrations.AlterField(
model_name='order',
name='phone_number_recipient',
field=models.CharField(max_length=15, verbose_name='電話番号(ご請求書先)'),
),
migrations.AlterField(
model_name='order',
name='postal_code_orderer',
field=models.CharField(max_length=7, verbose_name='郵便番号(お届け先)'),
),
migrations.AlterField(
model_name='order',
name='postal_code_recipient',
field=models.CharField(max_length=7, verbose_name='郵便番号(ご請求書先)'),
),
migrations.AlterField(
model_name='order',
name='prefecture_orderer',
field=models.CharField(max_length=2, verbose_name='都道府県(お届け先)'),
),
migrations.AlterField(
model_name='order',
name='prefecture_recipient',
field=models.CharField(max_length=2, verbose_name='都道府県(ご請求書先)'),
),
]
| 35.425926 | 82 | 0.576581 | 388 | 3,826 | 5.443299 | 0.167526 | 0.080966 | 0.125947 | 0.161932 | 0.90625 | 0.90625 | 0.883523 | 0.825284 | 0.631155 | 0.27178 | 0 | 0.02349 | 0.299007 | 3,826 | 107 | 83 | 35.757009 | 0.763236 | 0.011762 | 0 | 0.554455 | 1 | 0 | 0.183117 | 0.060333 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.009901 | 0 | 0.039604 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bb25b7adb53b0b986f3cae0c523893e2f1186f90 | 398 | py | Python | geospace/__init__.py | xiejx5/GeoSpace | 935ebb0593e367c008cc3cc42b2e20ed921b95b1 | [
"MIT"
] | 1 | 2021-02-04T08:48:29.000Z | 2021-02-04T08:48:29.000Z | geospace/__init__.py | xiejx5/geospace | 935ebb0593e367c008cc3cc42b2e20ed921b95b1 | [
"MIT"
] | null | null | null | geospace/__init__.py | xiejx5/geospace | 935ebb0593e367c008cc3cc42b2e20ed921b95b1 | [
"MIT"
] | 2 | 2020-07-05T15:56:48.000Z | 2020-08-05T04:33:52.000Z | from geospace._const import *
from geospace.boundary import *
from geospace.ras_to_shp import *
from geospace.raster import *
from geospace.shape import *
from geospace.shp_to_ras import *
from geospace.statistics import *
from geospace.utils import *
from geospace.gdal_calc import Calc
from geospace.map_calc import map_calc
try:
from geospace.gee_export import *
except Exception:
pass
| 24.875 | 38 | 0.801508 | 58 | 398 | 5.344828 | 0.362069 | 0.425806 | 0.464516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.143216 | 398 | 15 | 39 | 26.533333 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.071429 | 0.785714 | 0 | 0.785714 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
bb336d71e806ae5bed862ce1fa4f6f36c1a0801d | 7,758 | py | Python | tests/test_api_gateway/test_rest/test_authorization.py | minos-framework/minos-api-gateway | 4be8564311260be274e91b2a31e18c5d0feef1ab | [
"MIT"
] | 3 | 2022-02-01T14:42:11.000Z | 2022-02-05T11:01:09.000Z | tests/test_api_gateway/test_rest/test_authorization.py | minos-framework/minos-api-gateway | 4be8564311260be274e91b2a31e18c5d0feef1ab | [
"MIT"
] | 8 | 2022-02-03T08:05:57.000Z | 2022-03-21T09:57:14.000Z | tests/test_api_gateway/test_rest/test_authorization.py | minos-framework/minos-api-gateway | 4be8564311260be274e91b2a31e18c5d0feef1ab | [
"MIT"
] | 1 | 2022-03-03T10:32:52.000Z | 2022-03-03T10:32:52.000Z | import json
import os
import unittest
from unittest import (
mock,
)
from uuid import (
uuid4,
)
from aiohttp.test_utils import (
AioHTTPTestCase,
unittest_run_loop,
)
from werkzeug.exceptions import (
abort,
)
from minos.api_gateway.rest import (
ApiGatewayConfig,
ApiGatewayRestService,
)
from tests.mock_servers.server import (
MockServer,
)
from tests.utils import (
BASE_PATH,
)
class TestApiGatewayAuthorization(AioHTTPTestCase):
CONFIG_FILE_PATH = BASE_PATH / "config.yml"
@mock.patch.dict(os.environ, {"API_GATEWAY_REST_CORS_ENABLED": "true"})
def setUp(self) -> None:
self.config = ApiGatewayConfig(self.CONFIG_FILE_PATH)
self.discovery = MockServer(host=self.config.discovery.host, port=self.config.discovery.port,)
self.discovery.add_json_response(
"/microservices", {"address": "localhost", "port": "5568", "status": True},
)
self.microservice = MockServer(host="localhost", port=5568)
self.microservice.add_json_response(
"/order/5", "Microservice call correct!!!", methods=("GET", "PUT", "PATCH", "DELETE",)
)
self.microservice.add_json_response(
"/autz-merchants/5", "Microservice call correct!!!", methods=("GET", "PUT", "PATCH", "DELETE",)
)
self.microservice.add_json_response(
"/autz-merchants-2/5", "Microservice call correct!!!", methods=("GET", "PUT", "PATCH", "DELETE",)
)
self.microservice.add_json_response("/categories/5", "Microservice call correct!!!", methods=("GET",))
self.microservice.add_json_response("/order", "Microservice call correct!!!", methods=("POST",))
self.authentication_service = MockServer(host=self.config.rest.auth.host, port=self.config.rest.auth.port)
self.authentication_service.add_json_response("/auth/credentials", {"uuid": uuid4()}, methods=("POST",))
self.authentication_service.add_json_response(
"/auth/credentials/login", {"token": "credential-token-test"}, methods=("POST",)
)
self.authentication_service.add_json_response("/auth/credentials", {"uuid": uuid4()}, methods=("GET",))
self.authentication_service.add_json_response(
"/auth/token", {"uuid": uuid4(), "token": "token-test"}, methods=("POST",)
)
self.authentication_service.add_json_response("/auth/token/login", {"token": "token-test"}, methods=("POST",))
self.authentication_service.add_json_response("/auth/token", {"uuid": uuid4()}, methods=("GET",))
self.authentication_service.add_json_response(
"/auth/validate-token", {"uuid": uuid4(), "role": 3}, methods=("POST",)
)
self.authentication_service.add_json_response("/auth", {"uuid": uuid4()}, methods=("POST", "GET",))
self.discovery.start()
self.microservice.start()
self.authentication_service.start()
super().setUp()
def tearDown(self) -> None:
self.discovery.shutdown_server()
self.microservice.shutdown_server()
self.authentication_service.shutdown_server()
super().tearDown()
async def get_application(self):
"""
Override the get_app method to return your application.
"""
rest_service = ApiGatewayRestService(
address=self.config.rest.host, port=self.config.rest.port, config=self.config
)
return await rest_service.create_application()
@unittest_run_loop
async def test_auth_unauthorized(self):
await self.client.post(
"/admin/rules",
data=json.dumps(
{"service": "autz-merchants", "rule": "*://*/autz-merchants/*", "methods": ["GET", "POST"]}
),
)
await self.client.post(
"/admin/autz-rules",
data=json.dumps(
{
"service": "autz-merchants",
"roles": [2],
"rule": "*://*/autz-merchants/*",
"methods": ["GET", "POST"],
}
),
)
url = "/autz-merchants/5"
headers = {"Authorization": "Bearer credential-token-test"}
response = await self.client.request("POST", url, headers=headers)
self.assertEqual(401, response.status)
self.assertIn("401: Unauthorized", await response.text())
async def test_authorized(self):
await self.client.post(
"/admin/rules",
data=json.dumps(
{"service": "autz-merchants-2", "rule": "*://*/autz-merchants-2/*", "methods": ["GET", "POST"]}
),
)
await self.client.post(
"/admin/autz-rules",
data=json.dumps(
{
"service": "autz-merchants-2",
"roles": [3],
"rule": "*://*/autz-merchants-2/*",
"methods": ["GET", "POST"],
}
),
)
url = "/autz-merchants-2/5"
headers = {"Authorization": "Bearer credential-token-test"}
response = await self.client.request("GET", url, headers=headers)
self.assertEqual(200, response.status)
self.assertIn("Microservice call correct!!!", await response.text())
class TestAutzFailed(AioHTTPTestCase):
CONFIG_FILE_PATH = BASE_PATH / "config.yml"
@mock.patch.dict(os.environ, {"API_GATEWAY_REST_CORS_ENABLED": "true"})
def setUp(self) -> None:
self.config = ApiGatewayConfig(self.CONFIG_FILE_PATH)
self.discovery = MockServer(host=self.config.discovery.host, port=self.config.discovery.port,)
self.discovery.add_json_response(
"/microservices", {"address": "localhost", "port": "5568", "status": True},
)
self.microservice = MockServer(host="localhost", port=5568)
self.microservice.add_json_response(
"/order/5", "Microservice call correct!!!", methods=("GET", "PUT", "PATCH", "DELETE",)
)
self.microservice.add_json_response("/order", "Microservice call correct!!!", methods=("POST",))
self.authentication_service = MockServer(host=self.config.rest.auth.host, port=self.config.rest.auth.port)
self.authentication_service.add_json_response("/auth/validate-token", lambda: abort(400), methods=("POST",))
self.discovery.start()
self.microservice.start()
self.authentication_service.start()
super().setUp()
def tearDown(self) -> None:
self.discovery.shutdown_server()
self.microservice.shutdown_server()
self.authentication_service.shutdown_server()
super().tearDown()
async def get_application(self):
"""
Override the get_app method to return your application.
"""
rest_service = ApiGatewayRestService(
address=self.config.rest.host, port=self.config.rest.port, config=self.config
)
return await rest_service.create_application()
@unittest_run_loop
async def test_auth_unauthorized(self):
await self.client.post(
"/admin/autz-rules",
data=json.dumps(
{"service": "merchants", "roles": ["Customer"], "rule": "*://*/merchants/*", "methods": ["GET", "POST"]}
),
)
url = "/merchants/jksdksdjskd"
headers = {"Authorization": "Bearer credential-token-test_01"}
response = await self.client.request("POST", url, headers=headers)
self.assertEqual(401, response.status)
self.assertIn("The given request does not have authorization to be forwarded", await response.text())
if __name__ == "__main__":
unittest.main()
| 36.767773 | 120 | 0.606213 | 793 | 7,758 | 5.79319 | 0.160151 | 0.039182 | 0.058772 | 0.054854 | 0.842185 | 0.82956 | 0.79495 | 0.784066 | 0.783631 | 0.765999 | 0 | 0.00953 | 0.242588 | 7,758 | 210 | 121 | 36.942857 | 0.772294 | 0 | 0 | 0.517857 | 0 | 0 | 0.199474 | 0.037105 | 0 | 0 | 0 | 0 | 0.035714 | 1 | 0.02381 | false | 0 | 0.059524 | 0 | 0.119048 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bb36f6ac91719dc1841a1f4596da20f867233842 | 5,002 | py | Python | tests/normalization/counts/test_random_selection.py | motleystate/moonstone | 37c38fabf361722f7002626ef13c68c443ace4ac | [
"MIT"
] | null | null | null | tests/normalization/counts/test_random_selection.py | motleystate/moonstone | 37c38fabf361722f7002626ef13c68c443ace4ac | [
"MIT"
] | 84 | 2020-07-27T13:01:12.000Z | 2022-03-16T17:10:23.000Z | tests/normalization/counts/test_random_selection.py | motleystate/moonstone | 37c38fabf361722f7002626ef13c68c443ace4ac | [
"MIT"
] | null | null | null | from unittest import TestCase
import pandas as pd
from moonstone.normalization.counts.random_selection import (
RandomSelection, TaxonomyRandomSelection
)
class TestRandomSelection(TestCase):
def setUp(self):
self.raw_data = [
[199, 1, 48, 75],
[0, 24, 1, 0],
[1, 25, 1, 25],
]
self.column_names = ['Sample_1', 'Sample_2', 'Sample_3', 'Sample_4']
self.index = ['Gen_1', 'Gen_2', 'Gen_3']
self.raw_df = pd.DataFrame(self.raw_data, columns=self.column_names, index=self.index)
def test_normalize_default_threshold(self):
expected_data = [
[50, 1, 48, 40],
[0, 24, 1, 0],
[0, 25, 1, 10],
]
expected_df = pd.DataFrame(expected_data, columns=self.column_names, index=self.index).astype(float)
tested_normalization = RandomSelection(self.raw_df, random_seed=2935)
pd.testing.assert_frame_equal(tested_normalization.normalized_df, expected_df)
def test_normalize_threshold_20(self):
expected_data = [
[20, 0, 20, 16],
[0, 9, 0, 0],
[0, 11, 0, 4],
]
expected_df = pd.DataFrame(expected_data, columns=self.column_names, index=self.index).astype(float)
tested_normalization = RandomSelection(self.raw_df, threshold=20, random_seed=2935)
pd.testing.assert_frame_equal(tested_normalization.normalized_df, expected_df)
def test_normalize_threshold_100(self):
expected_data = [
[100, 75],
[0, 0],
[0, 25],
]
expected_columns = ['Sample_1', 'Sample_4']
expected_df = pd.DataFrame(expected_data, columns=expected_columns, index=self.index).astype(float)
tested_normalization = RandomSelection(self.raw_df, threshold=100, random_seed=2935)
pd.testing.assert_frame_equal(tested_normalization.normalized_df, expected_df)
def test_normalize_threshold_150(self):
expected_data = [
[150],
]
expected_columns = ['Sample_1']
expected_df = pd.DataFrame(expected_data, columns=expected_columns, index=['Gen_1']).astype(float)
tested_normalization = RandomSelection(self.raw_df, threshold=150, random_seed=2935)
pd.testing.assert_frame_equal(tested_normalization.normalized_df, expected_df)
def test_normalize_float_default_threshold(self):
raw_data = [
[199, 1.1, 48, 75],
[0, 24, 1, 0],
[1, 25, 1, 25],
]
self.raw_df = pd.DataFrame(raw_data, columns=self.column_names, index=self.index)
expected_data = [
[50, 1.1, 48, 40],
[0, 24, 1, 0],
[0, 25, 1, 10],
]
expected_df = pd.DataFrame(expected_data, columns=self.column_names, index=self.index).astype(float)
tested_normalization = RandomSelection(self.raw_df, random_seed=2935)
pd.testing.assert_frame_equal(tested_normalization.normalized_df, expected_df)
class TestTaxonomyRandomSelection(TestCase):
def setUp(self):
self.raw_data = [
[199, 1, 48, 75],
[0, 24, 1, 0],
[1, 25, 1, 25],
]
self.column_names = ['Sample_1', 'Sample_2', 'Sample_3', 'Sample_4']
tuples = list(zip(*[
['group_1', 'group_1', 'group_2'],
['Gen_1', 'Gen_2', 'Gen_3']
]))
self.index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
self.raw_df = pd.DataFrame(self.raw_data, columns=self.column_names, index=self.index)
def test_normalize_default_threshold(self):
expected_data = [
[50, 1, 48, 40],
[0, 24, 1, 0],
[0, 25, 1, 10],
]
expected_df = pd.DataFrame(expected_data, columns=self.column_names, index=self.index).astype(float)
tested_normalization = TaxonomyRandomSelection(self.raw_df, random_seed=2935)
pd.testing.assert_frame_equal(tested_normalization.normalized_df, expected_df)
def test_normalize_threshold_20(self):
expected_data = [
[20, 0, 20, 16],
[0, 9, 0, 0],
[0, 11, 0, 4],
]
expected_df = pd.DataFrame(expected_data, columns=self.column_names, index=self.index).astype(float)
tested_normalization = TaxonomyRandomSelection(self.raw_df, threshold=20, random_seed=2935)
pd.testing.assert_frame_equal(tested_normalization.normalized_df, expected_df)
def test_normalize_threshold_100(self):
expected_data = [
[100, 75],
[0, 0],
[0, 25],
]
expected_columns = ['Sample_1', 'Sample_4']
expected_df = pd.DataFrame(expected_data, columns=expected_columns, index=self.index).astype(float)
tested_normalization = TaxonomyRandomSelection(self.raw_df, threshold=100, random_seed=2935)
pd.testing.assert_frame_equal(tested_normalization.normalized_df, expected_df)
| 40.016 | 108 | 0.627149 | 615 | 5,002 | 4.835772 | 0.115447 | 0.03766 | 0.033289 | 0.05649 | 0.87727 | 0.866846 | 0.866846 | 0.857431 | 0.857431 | 0.821789 | 0 | 0.066595 | 0.252499 | 5,002 | 124 | 109 | 40.33871 | 0.728804 | 0 | 0 | 0.638889 | 0 | 0 | 0.034186 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 1 | 0.092593 | false | 0 | 0.027778 | 0 | 0.138889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2470b6f266b6ddaace06512b1118369f876390e3 | 23 | py | Python | db/__init__.py | yasinasama/guime | 00caac105becf6df4d873fdbba4542711307e383 | [
"MIT"
] | null | null | null | db/__init__.py | yasinasama/guime | 00caac105becf6df4d873fdbba4542711307e383 | [
"MIT"
] | null | null | null | db/__init__.py | yasinasama/guime | 00caac105becf6df4d873fdbba4542711307e383 | [
"MIT"
] | null | null | null | from .db import DB_CONN | 23 | 23 | 0.826087 | 5 | 23 | 3.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 23 | 1 | 23 | 23 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
24757786a857b9111499204e215fe379d9c8e8bf | 3,828 | py | Python | dataloader/stereo_msi.py | mehrdadshoeiby/MAML-Robust-learning | 4dd6fc2655f567e83f0dbcb87e8e4f8c15ef6ae6 | [
"MIT"
] | null | null | null | dataloader/stereo_msi.py | mehrdadshoeiby/MAML-Robust-learning | 4dd6fc2655f567e83f0dbcb87e8e4f8c15ef6ae6 | [
"MIT"
] | null | null | null | dataloader/stereo_msi.py | mehrdadshoeiby/MAML-Robust-learning | 4dd6fc2655f567e83f0dbcb87e8e4f8c15ef6ae6 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
define data loader class
"""
from __future__ import print_function, division
import os
import glob
import torch
import pandas as pd
from skimage import transform
from skimage.io import imread
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
#import spectral.io.envi as envi
#from dataloader.tools import np.load
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
class StereoMSITrainDataset(Dataset):
"""
all the training data should be stored in the same folder
format for lr image ==> "image_{}_lr2".format(idx)
"""
def __init__(self, args, mytransform):
self.root_dir = args.root_dir
self.mytransform = mytransform
def __len__(self):
# find the number of labels, hence the length of the dataset.
return len(glob.glob1(os.path.join(self.root_dir + 'data/train'),
'*.tiff'))
def __getitem__(self, idx):
# dataset idx starts from 1
im_lr = os.path.join(self.root_dir,
"data/train/{}.npy".format(idx+1))
im_hr = os.path.join(self.root_dir,
"data/train/{}.tiff".format(idx+1))
im_lr = np.array(np.load(im_lr), dtype=float)
im_hr = np.array(imread(im_hr), dtype=float)
sample_train = {'im_lr': im_lr, 'im_hr': im_hr}
if self.mytransform:
sample_train = self.mytransform(sample_train)
return sample_train
class StereoMSIValidDataset(Dataset):
"""
all the training data should be stored in the same folder
format for lr image ==> "image_{}_lr2".format(idx)
"""
def __init__(self, args, mytransform):
self.root_dir = args.root_dir
self.mytransform = mytransform
def __len__(self):
# find the number of labels, hence the length of the dataset.
return len(glob.glob1(os.path.join(self.root_dir + 'data/valid'),
'*.tiff'))
def __getitem__(self, idx):
# validation dataset file names start from 251
im_lr = os.path.join(self.root_dir,
"data/valid/{}.npy".format(str(idx+251)))
im_hr = os.path.join(self.root_dir,
"data/valid/{}.tiff".format(str(idx+251)))
im_lr = np.array(np.load(im_lr), dtype=float)
im_hr = np.array(imread(im_hr), dtype=float)
sample_valid = {'im_lr': im_lr, 'im_hr': im_hr}
if self.mytransform:
sample_valid = self.mytransform(sample_valid)
# read validation/testing dataset
return sample_valid
class StereoMSITestDataset(Dataset):
"""
all the training data should be stored in the same folder
format for lr image ==> "image_{}_lr2".format(idx)
"""
def __init__(self, args, mytransform):
self.root_dir = args.root_dir
self.mytransform = mytransform
def __len__(self):
# find the number of labels, hence the length of the dataset.
return len(glob.glob1(os.path.join(self.root_dir + 'data/test'),
'*.tiff'))
def __getitem__(self, idx):
# test dataset file names start from 276
im_lr = os.path.join(self.root_dir,
"data/test/{}.npy".format(str(idx+276)))
im_hr = os.path.join(self.root_dir,
"data/test/{}.tiff".format(str(idx+276)))
im_lr = np.array(np.load(im_lr), dtype=float)
im_hr = np.array(imread(im_hr), dtype=float)
sample_valid = {'im_lr': im_lr, 'im_hr': im_hr}
if self.mytransform:
sample_valid = self.mytransform(sample_valid)
# read validation/testing dataset
return sample_valid
| 32.168067 | 74 | 0.605538 | 501 | 3,828 | 4.419162 | 0.191617 | 0.047425 | 0.059621 | 0.056911 | 0.756098 | 0.721319 | 0.721319 | 0.721319 | 0.714995 | 0.642728 | 0 | 0.010208 | 0.283438 | 3,828 | 118 | 75 | 32.440678 | 0.796938 | 0.104754 | 0 | 0.58209 | 0 | 0 | 0.062083 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.179104 | null | null | 0.014925 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
24b14127c772895cfb0f163b6b14bcb45dd79203 | 157 | py | Python | pyleecan/Methods/Slot/SlotCirc/__init__.py | IrakozeFD/pyleecan | 5a93bd98755d880176c1ce8ac90f36ca1b907055 | [
"Apache-2.0"
] | 95 | 2019-01-23T04:19:45.000Z | 2022-03-17T18:22:10.000Z | pyleecan/Methods/Slot/SlotCirc/__init__.py | IrakozeFD/pyleecan | 5a93bd98755d880176c1ce8ac90f36ca1b907055 | [
"Apache-2.0"
] | 366 | 2019-02-20T07:15:08.000Z | 2022-03-31T13:37:23.000Z | pyleecan/Methods/Slot/SlotCirc/__init__.py | IrakozeFD/pyleecan | 5a93bd98755d880176c1ce8ac90f36ca1b907055 | [
"Apache-2.0"
] | 74 | 2019-01-24T01:47:31.000Z | 2022-02-25T05:44:42.000Z | from ....Methods.Slot.Slot import SlotCheckError
class SC_WHCheckError(SlotCheckError):
"""Raised when a SlotCirc has self.H0 < self.W0/2"""
pass
| 19.625 | 56 | 0.713376 | 21 | 157 | 5.285714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022901 | 0.165605 | 157 | 7 | 57 | 22.428571 | 0.824427 | 0.292994 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
24b534058273a7e997e810b91c7080b98aca5ce5 | 94 | py | Python | teste.py | vanessanunes/ES-17-2 | 706cde6dbbf21512e0ac50f19a8bb74572cac715 | [
"Apache-2.0"
] | null | null | null | teste.py | vanessanunes/ES-17-2 | 706cde6dbbf21512e0ac50f19a8bb74572cac715 | [
"Apache-2.0"
] | 6 | 2019-08-01T00:53:09.000Z | 2019-08-01T00:57:53.000Z | teste.py | vanessanunes/ES-17-2 | 706cde6dbbf21512e0ac50f19a8bb74572cac715 | [
"Apache-2.0"
] | null | null | null | import pytest
from principal import somar
def test_somar():
assert somar(2, 3) == 5
| 10.444444 | 27 | 0.670213 | 14 | 94 | 4.428571 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042254 | 0.244681 | 94 | 9 | 28 | 10.444444 | 0.830986 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
24b834e707755661a705ceb7195e1ef30ce2c05f | 34 | py | Python | discord/types/voice.py | Harukomaze/disnake | 541f5c9623a02be894cd1015dbb344070700cb87 | [
"MIT"
] | null | null | null | discord/types/voice.py | Harukomaze/disnake | 541f5c9623a02be894cd1015dbb344070700cb87 | [
"MIT"
] | null | null | null | discord/types/voice.py | Harukomaze/disnake | 541f5c9623a02be894cd1015dbb344070700cb87 | [
"MIT"
] | null | null | null | from disnake.types.voice import *
| 17 | 33 | 0.794118 | 5 | 34 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
700bd68afd2159400a46983e299d928640a38caa | 22,309 | py | Python | dataworkspace/dataworkspace/tests/request_access/test_views.py | uktrade/jupyterhub-data-auth-admin | 91544f376209a201531f4dbfb8faad1b8ada18c9 | [
"MIT"
] | 1 | 2019-06-10T08:22:56.000Z | 2019-06-10T08:22:56.000Z | dataworkspace/dataworkspace/tests/request_access/test_views.py | uktrade/jupyterhub-data-auth-admin | 91544f376209a201531f4dbfb8faad1b8ada18c9 | [
"MIT"
] | 2 | 2019-05-17T13:10:42.000Z | 2019-06-17T10:48:46.000Z | dataworkspace/dataworkspace/tests/request_access/test_views.py | uktrade/jupyterhub-data-auth-admin | 91544f376209a201531f4dbfb8faad1b8ada18c9 | [
"MIT"
] | null | null | null | from unittest import mock
import pytest
from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType
from django.core.files.uploadedfile import SimpleUploadedFile
from django.urls import reverse
from dataworkspace.apps.applications.models import ApplicationInstance
from dataworkspace.apps.datasets.constants import DataSetType, UserAccessType
from dataworkspace.apps.request_access.models import AccessRequest
from dataworkspace.tests.datasets.test_views import DatasetsCommon
from dataworkspace.tests.factories import DataSetFactory
from dataworkspace.tests.request_access import factories
from dataworkspace.apps.core.storage import ClamAVResponse
class TestDatasetAccessOnly:
def test_user_sees_appropriate_message_on_dataset_page(self, client, user, metadata_db):
dataset = DatasetsCommon()._create_master(
user_access_type=UserAccessType.REQUIRES_AUTHORIZATION
)
permission = Permission.objects.get(
codename="start_all_applications",
content_type=ContentType.objects.get_for_model(ApplicationInstance),
)
user.user_permissions.add(permission)
resp = client.get(dataset.get_absolute_url())
assert resp.status_code == 200
assert "You need to request access to view this data." in resp.content.decode(resp.charset)
assert (
"We will ask you some questions so we can give you access to the tools you need to analyse this data."
not in resp.content.decode(resp.charset)
)
def test_request_access_form_is_single_page(self, client, user, metadata_db):
dataset = DatasetsCommon()._create_master(
user_access_type=UserAccessType.REQUIRES_AUTHORIZATION
)
permission = Permission.objects.get(
codename="start_all_applications",
content_type=ContentType.objects.get_for_model(ApplicationInstance),
)
user.user_permissions.add(permission)
resp = client.get(reverse("request_access:dataset", kwargs={"dataset_uuid": dataset.id}))
assert resp.status_code == 200
assert "Submit" in resp.content.decode(resp.charset)
def test_user_redirected_to_confirmation_page_after_form_submission(
self, client, user, metadata_db
):
dataset = DatasetsCommon()._create_master(
user_access_type=UserAccessType.REQUIRES_AUTHORIZATION
)
permission = Permission.objects.get(
codename="start_all_applications",
content_type=ContentType.objects.get_for_model(ApplicationInstance),
)
user.user_permissions.add(permission)
resp = client.post(
reverse("request_access:dataset", kwargs={"dataset_uuid": dataset.id}),
{"contact_email": "test@example.com", "reason_for_access": "I need it"},
)
access_requests = AccessRequest.objects.all()
# Ensure summary page is shown
assert resp.status_code == 302
assert resp.url == reverse(
"request_access:summary-page", kwargs={"pk": access_requests[0].pk}
)
assert len(access_requests) == 1
assert access_requests[0].contact_email == "test@example.com"
assert access_requests[0].reason_for_access == "I need it"
assert access_requests[0].journey == AccessRequest.JOURNEY_DATASET_ACCESS
# Submit summary page
resp = client.post(
reverse("request_access:summary-page", kwargs={"pk": access_requests[0].pk})
)
assert resp.status_code == 302
assert resp.url == reverse(
"request_access:confirmation-page", kwargs={"pk": access_requests[0].pk}
)
@pytest.mark.django_db
@mock.patch("dataworkspace.apps.request_access.views.zendesk.Zenpy")
@mock.patch("dataworkspace.apps.core.storage._upload_to_clamav")
def test_zendesk_ticket_created_after_form_submission(
self, mock_upload_to_clamav, mock_zendesk_client, client, user, metadata_db
):
class MockTicket:
@property
def ticket(self):
return type("ticket", (object,), {"id": 1})()
mock_zenpy_client = mock.MagicMock()
mock_zenpy_client.tickets.create.return_value = MockTicket()
mock_zendesk_client.return_value = mock_zenpy_client
mock_upload_to_clamav.return_value = ClamAVResponse({"malware": False})
dataset = DatasetsCommon()._create_master(
user_access_type=UserAccessType.REQUIRES_AUTHORIZATION
)
permission = Permission.objects.get(
codename="start_all_applications",
content_type=ContentType.objects.get_for_model(ApplicationInstance),
)
user.user_permissions.add(permission)
resp = client.post(
reverse("request_access:dataset", kwargs={"dataset_uuid": dataset.id}),
{"contact_email": "test@example.com", "reason_for_access": "I need it"},
)
access_requests = AccessRequest.objects.all()
# Ensure summary page is shown
assert resp.status_code == 302
assert resp.url == reverse(
"request_access:summary-page", kwargs={"pk": access_requests[0].pk}
)
# Submit summary page
client.post(
reverse("request_access:summary-page", kwargs={"pk": access_requests[0].pk}),
{"contact_email": "test@example.com", "reason_for_access": "I need it"},
follow=True,
)
assert len(mock_zenpy_client.tickets.create.call_args_list) == 1
call_args, _ = mock_zenpy_client.tickets.create.call_args_list[0]
ticket = call_args[0]
assert ticket.subject == "Access Request for A master"
assert (
ticket.description
== f"""Access request for
Username: Frank Exampleson
Journey: Dataset access
Dataset: A master
SSO Login: frank.exampleson@test.com
People search: https://people.trade.gov.uk/search?search_filters[]=people&query=Frank%20Exampleson
Details for the request can be found at
http://testserver/admin/request_access/accessrequest/{access_requests[0].pk}/change/
"""
)
class TestToolsAccessOnly:
@pytest.mark.parametrize(
"access_type", (UserAccessType.REQUIRES_AUTHENTICATION, UserAccessType.OPEN)
)
def test_user_sees_appropriate_message_on_dataset_page(self, access_type, client, metadata_db):
dataset = DatasetsCommon()._create_master(user_access_type=access_type)
resp = client.get(dataset.get_absolute_url())
assert resp.status_code == 200
assert "You need to request access to tools to analyse this data." in resp.content.decode(
resp.charset
)
assert (
"We will ask you some questions so we can give you access to the tools you need to analyse this data."
in resp.content.decode(resp.charset)
)
@pytest.mark.parametrize(
"access_type", (UserAccessType.REQUIRES_AUTHENTICATION, UserAccessType.OPEN)
)
def test_request_access_form_is_multipage_form(self, access_type, client, metadata_db):
dataset = DatasetsCommon()._create_master(user_access_type=access_type)
resp = client.get(reverse("request_access:dataset", kwargs={"dataset_uuid": dataset.id}))
access_requests = AccessRequest.objects.all()
assert resp.status_code == 302
assert resp.url == reverse("request_access:tools-1", kwargs={"pk": access_requests[0].pk})
resp = client.get(reverse("request_access:tools-1", kwargs={"pk": access_requests[0].pk}))
assert "Continue" in resp.content.decode(resp.charset)
@pytest.mark.parametrize(
"access_type", (UserAccessType.REQUIRES_AUTHENTICATION, UserAccessType.OPEN)
)
@mock.patch("dataworkspace.apps.core.storage._upload_to_clamav")
@mock.patch("dataworkspace.apps.core.boto3_client.boto3.client")
def test_user_redirected_to_step_2_after_step_1_form_submission(
self, mock_boto, _upload_to_clamav, access_type, client, metadata_db
):
_upload_to_clamav.return_value = ClamAVResponse({"malware": False})
dataset = DatasetsCommon()._create_master(user_access_type=access_type)
client.get(reverse("request_access:dataset", kwargs={"dataset_uuid": dataset.id}))
access_requests = AccessRequest.objects.all()
screenshot = SimpleUploadedFile("file.txt", b"file_content")
resp = client.post(
reverse("request_access:tools-1", kwargs={"pk": access_requests[0].pk}),
{"training_screenshot": screenshot},
)
assert len(access_requests) == 1
assert access_requests[0].training_screenshot.name.startswith("file.txt")
assert resp.status_code == 302
assert resp.url == reverse("request_access:tools-2", kwargs={"pk": access_requests[0].pk})
@pytest.mark.parametrize(
"access_type", (UserAccessType.REQUIRES_AUTHENTICATION, UserAccessType.OPEN)
)
def test_user_redirected_to_step_3_after_responding_yes_in_step_2(
self, access_type, client, metadata_db
):
dataset = DatasetsCommon()._create_master(user_access_type=access_type)
client.get(reverse("request_access:dataset", kwargs={"dataset_uuid": dataset.id}))
access_requests = AccessRequest.objects.all()
resp = client.post(
reverse("request_access:tools-2", kwargs={"pk": access_requests[0].pk}),
{"spss_and_stata": True},
)
assert len(access_requests) == 1
assert access_requests[0].spss_and_stata is True
assert resp.status_code == 302
assert resp.url == reverse("request_access:tools-3", kwargs={"pk": access_requests[0].pk})
@pytest.mark.parametrize(
"access_type", (UserAccessType.REQUIRES_AUTHENTICATION, UserAccessType.OPEN)
)
def test_user_redirected_to_summary_page_after_responding_no_in_step_2(
self, access_type, client, metadata_db
):
dataset = DatasetsCommon()._create_master(user_access_type=access_type)
client.get(reverse("request_access:dataset", kwargs={"dataset_uuid": dataset.id}))
access_requests = AccessRequest.objects.all()
resp = client.post(
reverse("request_access:tools-2", kwargs={"pk": access_requests[0].pk}),
)
assert len(access_requests) == 1
assert access_requests[0].spss_and_stata is False
assert resp.status_code == 302
assert resp.url == reverse(
"request_access:summary-page", kwargs={"pk": access_requests[0].pk}
)
@pytest.mark.parametrize(
"access_type", (UserAccessType.REQUIRES_AUTHENTICATION, UserAccessType.OPEN)
)
def test_user_redirected_to_summary_page_after_step_3_form_submission(
self, access_type, client, metadata_db
):
dataset = DatasetsCommon()._create_master(user_access_type=access_type)
client.get(reverse("request_access:dataset", kwargs={"dataset_uuid": dataset.id}))
access_requests = AccessRequest.objects.all()
resp = client.post(
reverse("request_access:tools-3", kwargs={"pk": access_requests[0].pk}),
{
"line_manager_email_address": "manager@example.com",
"reason_for_spss_and_stata": "I want it",
},
)
assert len(access_requests) == 1
assert access_requests[0].line_manager_email_address == "manager@example.com"
assert access_requests[0].reason_for_spss_and_stata == "I want it"
assert resp.status_code == 302
assert resp.url == reverse(
"request_access:summary-page", kwargs={"pk": access_requests[0].pk}
)
@pytest.mark.django_db
@pytest.mark.parametrize(
"access_type", (UserAccessType.REQUIRES_AUTHENTICATION, UserAccessType.OPEN)
)
@mock.patch("dataworkspace.apps.core.boto3_client.boto3.client")
@mock.patch("dataworkspace.apps.request_access.views.zendesk.Zenpy")
@mock.patch("dataworkspace.apps.core.storage._upload_to_clamav")
def test_zendesk_ticket_created_after_form_submission(
self,
mock_upload_to_clamav,
mock_zendesk_client,
mock_boto,
client,
metadata_db,
access_type,
):
class MockTicket:
@property
def ticket(self):
return type("ticket", (object,), {"id": 1})()
mock_zenpy_client = mock.MagicMock()
mock_zenpy_client.tickets.create.return_value = MockTicket()
mock_zendesk_client.return_value = mock_zenpy_client
mock_upload_to_clamav.return_value = ClamAVResponse({"malware": False})
dataset = DatasetsCommon()._create_master(user_access_type=access_type)
client.get(reverse("request_access:dataset", kwargs={"dataset_uuid": dataset.id}))
access_requests = AccessRequest.objects.all()
screenshot = SimpleUploadedFile("file.txt", b"file_content")
client.post(
reverse("request_access:tools-1", kwargs={"pk": access_requests[0].pk}),
{"training_screenshot": screenshot},
)
client.post(
reverse("request_access:tools-2", kwargs={"pk": access_requests[0].pk}),
follow=True,
)
client.post(
reverse("request_access:summary-page", kwargs={"pk": access_requests[0].pk}),
follow=True,
)
assert len(mock_zenpy_client.tickets.create.call_args_list) == 1
call_args, _ = mock_zenpy_client.tickets.create.call_args_list[0]
ticket = call_args[0]
assert ticket.subject == "Access Request for A master"
assert (
ticket.description
== f"""Access request for
Username: Frank Exampleson
Journey: Tools access
Dataset: A master
SSO Login: frank.exampleson@test.com
People search: https://people.trade.gov.uk/search?search_filters[]=people&query=Frank%20Exampleson
Details for the request can be found at
http://testserver/admin/request_access/accessrequest/{access_requests[0].pk}/change/
"""
)
class TestDatasetAndToolsAccess:
def test_user_sees_appropriate_message_on_dataset_page(self, client, metadata_db):
dataset = DatasetsCommon()._create_master(
user_access_type=UserAccessType.REQUIRES_AUTHORIZATION
)
resp = client.get(dataset.get_absolute_url())
assert resp.status_code == 200
assert "You need to request access to view this data." in resp.content.decode(resp.charset)
assert (
"We will ask you some questions so we can give you access to the tools you need to analyse this data."
in resp.content.decode(resp.charset)
)
def test_request_access_form_is_multipage_form(self, client, metadata_db):
dataset = DatasetsCommon()._create_master(
user_access_type=UserAccessType.REQUIRES_AUTHORIZATION
)
resp = client.get(reverse("request_access:dataset", kwargs={"dataset_uuid": dataset.id}))
assert "Continue" in resp.content.decode(resp.charset)
def test_user_redirected_to_tools_form_after_dataset_request_access_form_submission(
self, client, metadata_db
):
dataset = DatasetsCommon()._create_master(
user_access_type=UserAccessType.REQUIRES_AUTHORIZATION
)
resp = client.post(
reverse("request_access:dataset", kwargs={"dataset_uuid": dataset.id}),
{"contact_email": "test@example.com", "reason_for_access": "I need it"},
)
access_requests = AccessRequest.objects.all()
assert len(access_requests) == 1
assert access_requests[0].contact_email == "test@example.com"
assert access_requests[0].reason_for_access == "I need it"
assert access_requests[0].journey == AccessRequest.JOURNEY_DATASET_ACCESS
assert resp.status_code == 302
assert resp.url == reverse("request_access:tools-1", kwargs={"pk": access_requests[0].pk})
def test_tools_not_required_for_data_cut(self, client, metadata_db):
datacut = DataSetFactory.create(
published=True,
type=DataSetType.DATACUT,
name="A datacut",
user_access_type="REQUIRES_AUTHORIZATION",
)
resp = client.post(
reverse("request_access:dataset", kwargs={"dataset_uuid": datacut.id}),
{"contact_email": "test@example.com", "reason_for_access": "I need it"},
)
access_requests = AccessRequest.objects.all()
assert len(access_requests) == 1
assert access_requests[0].contact_email == "test@example.com"
assert access_requests[0].reason_for_access == "I need it"
assert access_requests[0].journey == AccessRequest.JOURNEY_DATASET_ACCESS
assert resp.status_code == 302
assert resp.url == reverse(
"request_access:summary-page", kwargs={"pk": access_requests[0].pk}
)
class TestNoAccessRequired:
@pytest.mark.parametrize(
"access_type", (UserAccessType.REQUIRES_AUTHENTICATION, UserAccessType.OPEN)
)
def test_user_sees_appropriate_message_on_request_access_page(
self, access_type, client, user, metadata_db
):
DatasetsCommon()._create_master(user_access_type=access_type)
permission = Permission.objects.get(
codename="start_all_applications",
content_type=ContentType.objects.get_for_model(ApplicationInstance),
)
user.user_permissions.add(permission)
resp = client.get(reverse("request_access:index"))
assert resp.status_code == 200
assert "You have access to our tools" in resp.content.decode(resp.charset)
class TestEditAccessRequest:
def test_edit_eligibility_criteria(self, client):
dataset = DatasetsCommon()._create_master(
user_access_type=UserAccessType.REQUIRES_AUTHORIZATION
)
access_request = factories.AccessRequestFactory(
catalogue_item_id=dataset.id,
contact_email="original@example.com",
reason_for_access="I need it",
)
resp = client.post(
reverse(
"datasets:eligibility_criteria",
kwargs={"dataset_uuid": dataset.id},
)
+ f"?access_request={access_request.id}",
{"meet_criteria": "yes"},
)
assert resp.status_code == 302
assert resp.url == reverse("request_access:dataset", kwargs={"dataset_uuid": dataset.id})
assert access_request.id == AccessRequest.objects.latest("created_date").id
def test_edit_dataset_request_fields(self, client, user):
dataset = DatasetsCommon()._create_master(
user_access_type=UserAccessType.REQUIRES_AUTHORIZATION
)
access_request = factories.AccessRequestFactory(
catalogue_item_id=dataset.id,
contact_email="original@example.com",
reason_for_access="I need it",
requester=user,
)
resp = client.post(
reverse(
"request_access:dataset-request-update",
kwargs={"pk": access_request.id},
),
{
"contact_email": "updated@example.com",
"reason_for_access": "I still need it",
},
)
assert resp.status_code == 302
access_request.refresh_from_db()
assert access_request.catalogue_item_id == dataset.id
assert access_request.contact_email == "updated@example.com"
assert access_request.reason_for_access == "I still need it"
@mock.patch("dataworkspace.apps.core.boto3_client.boto3.client")
@mock.patch("dataworkspace.apps.core.storage._upload_to_clamav")
def test_edit_training_screenshot(self, mock_upload_to_clamav, mock_boto, client, user):
mock_upload_to_clamav.return_value = ClamAVResponse({"malware": False})
screenshot1 = SimpleUploadedFile("original-file.txt", b"file_content")
access_request = factories.AccessRequestFactory(
contact_email="testy-mctestface@example.com",
training_screenshot=screenshot1,
requester=user,
)
# Ensure the original file name is displayed in the form
resp = client.get(reverse("request_access:tools-1", kwargs={"pk": access_request.pk}))
assert "original-file.txt" in resp.content.decode(resp.charset)
# Ensure the file can be updated
screenshot2 = SimpleUploadedFile("new-file.txt", b"file_content")
resp = client.post(
reverse("request_access:tools-1", kwargs={"pk": access_request.pk}),
{"training_screenshot": screenshot2},
)
assert resp.status_code == 302
access_request.refresh_from_db()
assert access_request.training_screenshot.name.split("!")[0] == "new-file.txt"
@mock.patch("dataworkspace.apps.core.boto3_client.boto3.client")
@mock.patch("dataworkspace.apps.core.storage._upload_to_clamav")
def test_cannot_access_other_users_access_request(
self, mock_upload_to_clamav, mock_boto, client, user
):
mock_upload_to_clamav.return_value = ClamAVResponse({"malware": False})
screenshot1 = SimpleUploadedFile("original-file.txt", b"file_content")
access_request = factories.AccessRequestFactory(
contact_email="testy-mctestface@example.com",
training_screenshot=screenshot1,
)
# client.post results in the request.user being set to the user fixture, whereas
# the AccessRequestFactory will create a new user object and assign it as the
# request access requester. Therefore trying to edit the access request should
# raise a 404
resp = client.post(
reverse(
"request_access:dataset-request-update",
kwargs={"pk": access_request.id},
),
{
"contact_email": "updated@example.com",
"reason_for_access": "I still need it",
},
)
assert resp.status_code == 404
| 41.160517 | 114 | 0.670671 | 2,539 | 22,309 | 5.635289 | 0.097282 | 0.049063 | 0.053117 | 0.026139 | 0.858121 | 0.846869 | 0.838622 | 0.829606 | 0.821708 | 0.803257 | 0 | 0.008637 | 0.226725 | 22,309 | 541 | 115 | 41.236599 | 0.820764 | 0.01914 | 0 | 0.644944 | 0 | 0.011236 | 0.196059 | 0.082026 | 0 | 0 | 0 | 0 | 0.164045 | 1 | 0.049438 | false | 0 | 0.029213 | 0.004494 | 0.098876 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
704cb11867beace4fb3c9b2ba46351caa91e9e55 | 195 | py | Python | src/main/aoc/modules/__init__.py | lhalla/advent-of-code-2019 | d0c88385ae2132637b2685af3a9da901142b423d | [
"MIT"
] | null | null | null | src/main/aoc/modules/__init__.py | lhalla/advent-of-code-2019 | d0c88385ae2132637b2685af3a9da901142b423d | [
"MIT"
] | null | null | null | src/main/aoc/modules/__init__.py | lhalla/advent-of-code-2019 | d0c88385ae2132637b2685af3a9da901142b423d | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from . import utils
from . import fuel
from . import intcode
from . import wire
from . import password
from . import icpp
from . import diagnostics
from . import computer
| 17.727273 | 25 | 0.753846 | 28 | 195 | 5.25 | 0.5 | 0.544218 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00625 | 0.179487 | 195 | 10 | 26 | 19.5 | 0.9125 | 0.107692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.125 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
70886498983e66d9f7a473eebd555c7c5052783f | 9,300 | py | Python | com/beiwei/indicators/rsi.py | wtwong316/tsd_analysis_with_es | 04e9667b5fb6c5052e55053fe7838a79aab24fb5 | [
"Apache-2.0"
] | null | null | null | com/beiwei/indicators/rsi.py | wtwong316/tsd_analysis_with_es | 04e9667b5fb6c5052e55053fe7838a79aab24fb5 | [
"Apache-2.0"
] | null | null | null | com/beiwei/indicators/rsi.py | wtwong316/tsd_analysis_with_es | 04e9667b5fb6c5052e55053fe7838a79aab24fb5 | [
"Apache-2.0"
] | null | null | null | from com.beiwei.client.config.hlclient import HLClient
from elasticsearch_dsl import Search
from elasticsearch_dsl.query import Q, Bool, Range, Term
from elasticsearch_dsl.aggs import A, DateHistogram, ScriptedMetric, Avg, Min, MovingFn, BucketScript, \
BucketSelector, Derivative
import pandas as pd
from datetime import datetime
hl_es = HLClient.get_instance()
def rsi_bb(index, ts_code, start_date, end_date, rsi_period, sma_period):
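    """Build RSI and Bollinger Band series for one fund (``ts_code``) between ``start_date``
    and ``end_date`` using Elasticsearch pipeline aggregations: daily average price -> price
    derivative -> gain/loss -> moving averages over ``rsi_period`` -> RSI, plus SMA/stdDev
    bands over ``sma_period``.
    """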
search = Search(index=index, using=hl_es)[0:0]
search.query = Q(Bool(must=[Range(end_date={'gte': start_date, 'lte': end_date}), Term(ts_code=ts_code)]))
aggs = A(DateHistogram(field='end_date', fixed_interval='1d', format='yyyyMMdd'))
aggs_price = A(Avg(field='adj_nav'))
aggs_datestr = A(Min(field='end_date'))
aggs_stp = A(BucketSelector(buckets_path={'count': '_count'}, script='params.count > 0'))
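    # note: aggs_stp is defined here but, unlike in iex_rsi_bb below, never wired into the pipeline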
aggs_price_diff = A(Derivative(buckets_path='Daily'))
aggs_gain = A(BucketScript(buckets_path={'Diff': 'Diff'}, script='(params.Diff > 0) ? params.Diff : 0'))
aggs_loss = A(BucketScript(buckets_path={'Diff': 'Diff'}, script='(params.Diff < 0) ? -params.Diff : 0'))
aggs_gain_sma = A(MovingFn(script='MovingFunctions.unweightedAvg(values)', window=rsi_period, buckets_path='Gain', shift=1))
aggs_loss_sma = A(MovingFn(script='MovingFunctions.unweightedAvg(values)', window=rsi_period, buckets_path='Loss', shift=1))
#aggs_gain_ewma = A(MovingFn(script='MovingFunctions.ewma(values, 2/(14+1))', window=rsi_period-1, buckets_path='Gain'))
#aggs_loss_ewma = A(MovingFn(script='MovingFunctions.ewma(values, 2/(14+1))', window=rsi_period-1, buckets_path='Loss'))
aggs_rsi = A(BucketScript(buckets_path={'GainMA': 'GainMA', 'LossMA': 'LossMA'},
script='100 - 100/(1+params.GainMA/params.LossMA)'))
aggs_sma = A(MovingFn(script='MovingFunctions.unweightedAvg(values)', window=sma_period, buckets_path='Daily'))
aggs_sd = A(MovingFn(script='MovingFunctions.stdDev(values, MovingFunctions.unweightedAvg(values))',
window=sma_period, buckets_path='Daily'))
aggs_bbu = A(BucketScript(buckets_path={'SMA': 'SMA', 'SD': 'SD'}, script='params.SMA + 2*params.SD'))
aggs_bbl = A(BucketScript(buckets_path={'SMA': 'SMA', 'SD': 'SD'}, script='params.SMA - 2*params.SD'))
aggs_rsi_sma = A(MovingFn(script='MovingFunctions.unweightedAvg(values)', window=sma_period, buckets_path='RSI'))
aggs_rsi_sd = A(MovingFn(script='MovingFunctions.stdDev(values, MovingFunctions.unweightedAvg(values))',
window=sma_period, buckets_path='RSI'))
aggs_rsi_bbu = A(BucketScript(buckets_path={'SMA': 'RSI_SMA', 'SD': 'RSI_SD'}, script='params.SMA + 2* params.SD'))
aggs_rsi_bbl = A(BucketScript(buckets_path={'SMA': 'RSI_SMA', 'SD': 'RSI_SD'}, script='params.SMA - 2* params.SD'))
aggs_rsi_diff = A(Derivative(buckets_path='RSI'))
aggs_rsi_type = A(BucketScript(buckets_path={'RSI': 'RSI', 'RSI_Diff': 'RSI_Diff'},
script='(params.RSI >= 70) ? (params.RSI_Diff > 0 ? 3:4) : '
'(params.RSI <= 30) ? (params.RSI_Diff > 0 ? 2:1):0'))
start_epoch_milli = int(datetime.strptime(start_date, '%Y%m%d').timestamp()*1000)
aggs_rsi_bb = A(BucketSelector(buckets_path={'DateStr': 'DateStr'},
script='params.DateStr >= {}L'.format(start_epoch_milli)))
search.aggs.bucket('RSI_BB', aggs).metric('Daily', aggs_price).\
metric('DateStr', aggs_datestr).pipeline('Diff', aggs_price_diff).\
pipeline('Gain', aggs_gain).pipeline('Loss', aggs_loss).\
pipeline('GainMA', aggs_gain_sma).pipeline('LossMA', aggs_loss_sma).pipeline('RSI', aggs_rsi). \
pipeline('SMA', aggs_sma).pipeline('SD', aggs_sd). \
pipeline('BBU', aggs_bbu).pipeline('BBL', aggs_bbl). \
pipeline('RSI_SMA', aggs_rsi_sma).pipeline('RSI_SD', aggs_rsi_sd).\
pipeline('RSI_BBU', aggs_rsi_bbu).pipeline('RSI_BBL', aggs_rsi_bbl).\
pipeline('RSI_Diff', aggs_rsi_diff).pipeline('RSIType', aggs_rsi_type).pipeline('SRSI_BB', aggs_rsi_bb)
response = search.execute()
return response
def iex_rsi_bb(index, symbol, start_date, end_date, rsi_period, sma_period):
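    """Same RSI/Bollinger pipeline as ``rsi_bb`` but for IEX-style daily bars: filters on
    ``symbol``, buckets the ``date`` field and averages the ``close`` price.
    """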
search = Search(index=index, using=hl_es)[0:0]
search.query = Q(Bool(must=[Range(date={'gte': start_date, 'lte': end_date}), Term(symbol=symbol)]))
aggs = A(DateHistogram(field='date', fixed_interval='1d', format='yyyy-MM-dd'))
aggs_price = A(Avg(field='close'))
aggs_datestr = A(Min(field='date'))
aggs_stp = A(BucketSelector(buckets_path={'count': '_count'}, script='params.count > 0'))
aggs_price_diff = A(Derivative(buckets_path='Daily'))
aggs_gain = A(BucketScript(buckets_path={'Diff': 'Diff'}, script='(params.Diff > 0) ? params.Diff : 0'))
aggs_loss = A(BucketScript(buckets_path={'Diff': 'Diff'}, script='(params.Diff < 0) ? -params.Diff : 0'))
aggs_gain_sma = A(MovingFn(script='MovingFunctions.unweightedAvg(values)', window=rsi_period, buckets_path='Gain', shift=1))
aggs_loss_sma = A(MovingFn(script='MovingFunctions.unweightedAvg(values)', window=rsi_period, buckets_path='Loss', shift=1))
#aggs_gain_ewma = A(MovingFn(script='MovingFunctions.ewma(values, 2/(14+1))', window=rsi_period-1, buckets_path='Gain'))
#aggs_loss_ewma = A(MovingFn(script='MovingFunctions.ewma(values, 2/(14+1))', window=rsi_period-1, buckets_path='Loss'))
aggs_rsi = A(BucketScript(buckets_path={'GainMA': 'GainMA', 'LossMA': 'LossMA'},
script='100 - 100/(1+params.GainMA/params.LossMA)'))
aggs_sma = A(MovingFn(script='MovingFunctions.unweightedAvg(values)', window=sma_period, buckets_path='Daily'))
aggs_sd = A(MovingFn(script='MovingFunctions.stdDev(values, MovingFunctions.unweightedAvg(values))',
window=sma_period, buckets_path='Daily'))
aggs_bbu = A(BucketScript(buckets_path={'SMA': 'SMA', 'SD': 'SD'}, script='params.SMA + 2*params.SD'))
aggs_bbl = A(BucketScript(buckets_path={'SMA': 'SMA', 'SD': 'SD'}, script='params.SMA - 2*params.SD'))
aggs_rsi_sma = A(MovingFn(script='MovingFunctions.unweightedAvg(values)', window=sma_period, buckets_path='RSI'))
aggs_rsi_sd = A(MovingFn(script='MovingFunctions.stdDev(values, MovingFunctions.unweightedAvg(values))',
window=sma_period, buckets_path='RSI'))
aggs_rsi_bbu = A(BucketScript(buckets_path={'SMA': 'RSI_SMA', 'SD': 'RSI_SD'}, script='params.SMA + 2* params.SD'))
aggs_rsi_bbl = A(BucketScript(buckets_path={'SMA': 'RSI_SMA', 'SD': 'RSI_SD'}, script='params.SMA - 2* params.SD'))
aggs_rsi_diff = A(Derivative(buckets_path='RSI'))
aggs_rsi_type = A(BucketScript(buckets_path={'RSI': 'RSI', 'RSI_Diff': 'RSI_Diff'},
script='(params.RSI >= 70) ? (params.RSI_Diff > 0 ? 3:4) : '
'(params.RSI <= 30) ? (params.RSI_Diff > 0 ? 2:1):0'))
start_epoch_milli = int(datetime.strptime(start_date, '%Y%m%d').timestamp()*1000)
aggs_rsi_bb = A(BucketSelector(buckets_path={'DateStr': 'DateStr'},
script='params.DateStr >= {}L'.format(start_epoch_milli)))
search.aggs.bucket('RSI_BB', aggs).metric('Daily', aggs_price).\
metric('DateStr', aggs_datestr).pipeline('STP', aggs_stp).pipeline('Diff', aggs_price_diff).\
pipeline('Gain', aggs_gain).pipeline('Loss', aggs_loss).\
pipeline('GainMA', aggs_gain_sma).pipeline('LossMA', aggs_loss_sma).pipeline('RSI', aggs_rsi). \
pipeline('SMA', aggs_sma).pipeline('SD', aggs_sd). \
pipeline('BBU', aggs_bbu).pipeline('BBL', aggs_bbl). \
pipeline('RSI_SMA', aggs_rsi_sma).pipeline('RSI_SD', aggs_rsi_sd).\
pipeline('RSI_BBU', aggs_rsi_bbu).pipeline('RSI_BBL', aggs_rsi_bbl).\
pipeline('RSI_Diff', aggs_rsi_diff).pipeline('RSIType', aggs_rsi_type).pipeline('SRSI_BB', aggs_rsi_bb)
response = search.execute()
return response
def convert_rsi_bb_response_to_df(response, date_name, start_date):
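    """Flatten the RSI_BB date-histogram buckets of an Elasticsearch response into a pandas
    DataFrame, keeping only buckets on or after ``start_date`` that carry an RSI value.
    """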
    rows = []
if len(response['aggregations']['RSI_BB']['buckets']) > 0:
buckets = response['aggregations']['RSI_BB']['buckets']
for bucket in buckets:
if 'RSI' in bucket:
if bucket['key_as_string'] >= start_date:
row_list = dict()
row_list[date_name] = bucket['key_as_string']
row_list['BBL'] = bucket['BBL'].value
row_list['BBU'] = bucket['BBU'].value
row_list['SMA'] = bucket['SMA'].value
row_list['SD'] = bucket['SD'].value
row_list['RSI'] = bucket['RSI'].value
row_list['RSI_BBL'] = bucket['RSI_BBL'].value
row_list['RSI_BBU'] = bucket['RSI_BBU'].value
row_list['RSI_SMA'] = bucket['RSI_SMA'].value
row_list['RSI_SD'] = bucket['RSI_SD'].value
row_list['Daily'] = bucket['Daily'].value
row_list['RSIType'] = bucket['RSIType'].value
                    rows.append(row_list)
    return pd.DataFrame(rows)
| 72.65625 | 128 | 0.647097 | 1,218 | 9,300 | 4.699507 | 0.109195 | 0.076869 | 0.055905 | 0.067086 | 0.831586 | 0.798393 | 0.798393 | 0.798393 | 0.787911 | 0.787911 | 0 | 0.012251 | 0.183763 | 9,300 | 127 | 129 | 73.228346 | 0.7418 | 0.051613 | 0 | 0.631579 | 0 | 0 | 0.237976 | 0.072368 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026316 | false | 0 | 0.052632 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
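The pipelines in rsi.py above compute the simple-moving-average flavour of RSI entirely inside Elasticsearch. As a rough cross-check of the same formula (a minimal sketch, not part of this module; the 'close' column name is only an assumption), the calculation can be reproduced directly in pandas:

import pandas as pd

def rsi_sma(prices: pd.Series, period: int = 14) -> pd.Series:
    # day-over-day change, split into gains and losses
    diff = prices.diff()
    gain = diff.clip(lower=0).rolling(period).mean()
    loss = (-diff.clip(upper=0)).rolling(period).mean()
    # classic RSI, matching the '100 - 100/(1+params.GainMA/params.LossMA)' bucket script above
    return 100 - 100 / (1 + gain / loss)

# usage (hypothetical DataFrame): rsi_sma(df['close'], period=14)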
709d2eaa0d06c40b3877992df2cfd2d60c456b44 | 18 | py | Python | __init__.py | ixokai/ccf | 90f0997d4bd555ce4d137d85b5043c94abac9eda | [
"MIT"
] | null | null | null | __init__.py | ixokai/ccf | 90f0997d4bd555ce4d137d85b5043c94abac9eda | [
"MIT"
] | null | null | null | __init__.py | ixokai/ccf | 90f0997d4bd555ce4d137d85b5043c94abac9eda | [
"MIT"
] | null | null | null | from .ccf import * | 18 | 18 | 0.722222 | 3 | 18 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 18 | 1 | 18 | 18 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
561df9b76b5ea9996337e434316479db2c59e7fd | 96 | py | Python | venv/lib/python3.8/site-packages/pip/_vendor/pep517/envbuild.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/pip/_vendor/pep517/envbuild.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/pip/_vendor/pep517/envbuild.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/2b/87/48/1806e45dfdd1a105fff5116966f32f3eb5521ed71b1fba3d552acd9e53 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40625 | 0 | 96 | 1 | 96 | 96 | 0.489583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5632543921cebc7cef4979600b58e10ba2f4437c | 113 | py | Python | tuneit/tools/__init__.py | sbacchio/tuneit | 8d4771acbb56e07336e2aae66160a1f62777cb01 | [
"BSD-3-Clause"
] | null | null | null | tuneit/tools/__init__.py | sbacchio/tuneit | 8d4771acbb56e07336e2aae66160a1f62777cb01 | [
"BSD-3-Clause"
] | null | null | null | tuneit/tools/__init__.py | sbacchio/tuneit | 8d4771acbb56e07336e2aae66160a1f62777cb01 | [
"BSD-3-Clause"
] | null | null | null | "Highlevel tools for analyzing the tunable graphs"
from .base import *
from .check import *
from .time import *
| 18.833333 | 50 | 0.752212 | 16 | 113 | 5.3125 | 0.75 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176991 | 113 | 5 | 51 | 22.6 | 0.913978 | 0.424779 | 0 | 0 | 0 | 0 | 0.424779 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
567955c5edc110ccaf51f35c4b61b2de2a802ea0 | 150 | py | Python | tests/test_tests.py | makefu/hydra-check | 21aa981f545769d10707b40f4a42e13172b72bad | [
"MIT"
] | 1 | 2020-03-10T20:32:26.000Z | 2020-03-10T20:32:26.000Z | tests/test_tests.py | makefu/hydra-check | 21aa981f545769d10707b40f4a42e13172b72bad | [
"MIT"
] | 2 | 2020-03-07T18:23:02.000Z | 2020-03-19T10:17:30.000Z | tests/test_tests.py | makefu/hydra-check | 21aa981f545769d10707b40f4a42e13172b72bad | [
"MIT"
] | null | null | null | import pytest
from hydra_check import cli
def test_get_url() -> None:
assert cli.get_url("unstable") == "https://hydra.nixos.org/job/unstable"
| 18.75 | 76 | 0.72 | 23 | 150 | 4.521739 | 0.73913 | 0.115385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14 | 150 | 7 | 77 | 21.428571 | 0.806202 | 0 | 0 | 0 | 0 | 0 | 0.293333 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
567d17494561a7698fbccee32b23d78446fd074c | 6,990 | py | Python | .c9/metadata/environment/NodaSF_project/settings.py | bopopescu/nodasf | 32718c9ba606a7373b20c77710fd3706fc583396 | [
"MIT"
] | null | null | null | .c9/metadata/environment/NodaSF_project/settings.py | bopopescu/nodasf | 32718c9ba606a7373b20c77710fd3706fc583396 | [
"MIT"
] | 9 | 2019-12-05T20:37:07.000Z | 2022-02-10T12:34:48.000Z | .c9/metadata/environment/NodaSF_project/settings.py | bopopescu/nodasf | 32718c9ba606a7373b20c77710fd3706fc583396 | [
"MIT"
] | 1 | 2020-07-25T23:37:21.000Z | 2020-07-25T23:37:21.000Z | {"filter":false,"title":"settings.py","tooltip":"/NodaSF_project/settings.py","undoManager":{"mark":22,"position":22,"stack":[[{"start":{"row":41,"column":33},"end":{"row":42,"column":0},"action":"insert","lines":["",""],"id":2},{"start":{"row":42,"column":0},"end":{"row":42,"column":4},"action":"insert","lines":[" "]}],[{"start":{"row":42,"column":4},"end":{"row":42,"column":6},"action":"insert","lines":["''"],"id":3}],[{"start":{"row":42,"column":5},"end":{"row":42,"column":6},"action":"insert","lines":["n"],"id":4},{"start":{"row":42,"column":6},"end":{"row":42,"column":7},"action":"insert","lines":["o"]},{"start":{"row":42,"column":7},"end":{"row":42,"column":8},"action":"insert","lines":["d"]},{"start":{"row":42,"column":8},"end":{"row":42,"column":9},"action":"insert","lines":["a"]},{"start":{"row":42,"column":9},"end":{"row":42,"column":10},"action":"insert","lines":["s"]},{"start":{"row":42,"column":10},"end":{"row":42,"column":11},"action":"insert","lines":["f"]}],[{"start":{"row":42,"column":12},"end":{"row":42,"column":13},"action":"insert","lines":[","],"id":5}],[{"start":{"row":56,"column":16},"end":{"row":56,"column":34},"action":"remove","lines":["{{ project_name }}"],"id":6},{"start":{"row":56,"column":16},"end":{"row":56,"column":17},"action":"insert","lines":["n"]},{"start":{"row":56,"column":17},"end":{"row":56,"column":18},"action":"insert","lines":["o"]},{"start":{"row":56,"column":18},"end":{"row":56,"column":19},"action":"insert","lines":["d"]}],[{"start":{"row":56,"column":16},"end":{"row":56,"column":17},"action":"remove","lines":["n"],"id":7}],[{"start":{"row":56,"column":16},"end":{"row":56,"column":17},"action":"insert","lines":["N"],"id":8}],[{"start":{"row":56,"column":19},"end":{"row":56,"column":20},"action":"insert","lines":["a"],"id":9},{"start":{"row":56,"column":20},"end":{"row":56,"column":21},"action":"insert","lines":["S"]},{"start":{"row":56,"column":21},"end":{"row":56,"column":22},"action":"insert","lines":["F"]}],[{"start":{"row":75,"column":37},"end":{"row":75,"column":38},"action":"remove","lines":["}"],"id":10},{"start":{"row":75,"column":36},"end":{"row":75,"column":37},"action":"remove","lines":["}"]},{"start":{"row":75,"column":35},"end":{"row":75,"column":36},"action":"remove","lines":[" 
"]},{"start":{"row":75,"column":34},"end":{"row":75,"column":35},"action":"remove","lines":["e"]},{"start":{"row":75,"column":33},"end":{"row":75,"column":34},"action":"remove","lines":["m"]},{"start":{"row":75,"column":32},"end":{"row":75,"column":33},"action":"remove","lines":["a"]},{"start":{"row":75,"column":31},"end":{"row":75,"column":32},"action":"remove","lines":["n"]},{"start":{"row":75,"column":30},"end":{"row":75,"column":31},"action":"remove","lines":["_"]},{"start":{"row":75,"column":29},"end":{"row":75,"column":30},"action":"remove","lines":["t"]},{"start":{"row":75,"column":28},"end":{"row":75,"column":29},"action":"remove","lines":["c"]},{"start":{"row":75,"column":27},"end":{"row":75,"column":28},"action":"remove","lines":["e"]},{"start":{"row":75,"column":26},"end":{"row":75,"column":27},"action":"remove","lines":["j"]},{"start":{"row":75,"column":25},"end":{"row":75,"column":26},"action":"remove","lines":["o"]},{"start":{"row":75,"column":24},"end":{"row":75,"column":25},"action":"remove","lines":["r"]},{"start":{"row":75,"column":23},"end":{"row":75,"column":24},"action":"remove","lines":["p"]},{"start":{"row":75,"column":22},"end":{"row":75,"column":23},"action":"remove","lines":[" "]},{"start":{"row":75,"column":21},"end":{"row":75,"column":22},"action":"remove","lines":["{"]},{"start":{"row":75,"column":20},"end":{"row":75,"column":21},"action":"remove","lines":["{"]}],[{"start":{"row":75,"column":20},"end":{"row":75,"column":21},"action":"insert","lines":["N"],"id":11},{"start":{"row":75,"column":21},"end":{"row":75,"column":22},"action":"insert","lines":["o"]},{"start":{"row":75,"column":22},"end":{"row":75,"column":23},"action":"insert","lines":["d"]},{"start":{"row":75,"column":23},"end":{"row":75,"column":24},"action":"insert","lines":["a"]}],[{"start":{"row":75,"column":24},"end":{"row":75,"column":25},"action":"insert","lines":["S"],"id":12},{"start":{"row":75,"column":25},"end":{"row":75,"column":26},"action":"insert","lines":["F"]},{"start":{"row":75,"column":26},"end":{"row":75,"column":27},"action":"insert","lines":["_"]},{"start":{"row":75,"column":27},"end":{"row":75,"column":28},"action":"insert","lines":["p"]},{"start":{"row":75,"column":28},"end":{"row":75,"column":29},"action":"insert","lines":["r"]},{"start":{"row":75,"column":29},"end":{"row":75,"column":30},"action":"insert","lines":["o"]},{"start":{"row":75,"column":30},"end":{"row":75,"column":31},"action":"insert","lines":["j"]}],[{"start":{"row":75,"column":31},"end":{"row":75,"column":32},"action":"insert","lines":["e"],"id":13},{"start":{"row":75,"column":32},"end":{"row":75,"column":33},"action":"insert","lines":["c"]},{"start":{"row":75,"column":33},"end":{"row":75,"column":34},"action":"insert","lines":["t"]}],[{"start":{"row":1,"column":20},"end":{"row":1,"column":38},"action":"remove","lines":["{{ project_name 
}}"],"id":14},{"start":{"row":1,"column":20},"end":{"row":1,"column":21},"action":"insert","lines":["N"]},{"start":{"row":1,"column":21},"end":{"row":1,"column":22},"action":"insert","lines":["o"]},{"start":{"row":1,"column":22},"end":{"row":1,"column":23},"action":"insert","lines":["d"]},{"start":{"row":1,"column":23},"end":{"row":1,"column":24},"action":"insert","lines":["a"]}],[{"start":{"row":1,"column":20},"end":{"row":1,"column":24},"action":"remove","lines":["Noda"],"id":15},{"start":{"row":1,"column":20},"end":{"row":1,"column":34},"action":"insert","lines":["NodaSF_project"]}],[{"start":{"row":12,"column":18},"end":{"row":12,"column":19},"action":"remove","lines":["_"],"id":16}],[{"start":{"row":12,"column":18},"end":{"row":12,"column":19},"action":"insert","lines":["-"],"id":17}],[{"start":{"row":12,"column":9},"end":{"row":12,"column":10},"action":"remove","lines":["_"],"id":18}],[{"start":{"row":12,"column":9},"end":{"row":12,"column":10},"action":"insert","lines":["-"],"id":19}],[{"start":{"row":12,"column":9},"end":{"row":12,"column":10},"action":"remove","lines":["-"],"id":20}],[{"start":{"row":12,"column":9},"end":{"row":12,"column":10},"action":"insert","lines":["_"],"id":21}],[{"start":{"row":12,"column":18},"end":{"row":12,"column":19},"action":"remove","lines":["-"],"id":22}],[{"start":{"row":12,"column":18},"end":{"row":12,"column":19},"action":"insert","lines":["_"],"id":23}],[{"start":{"row":42,"column":0},"end":{"row":42,"column":13},"action":"remove","lines":[" 'nodasf',"],"id":24}]]},"ace":{"folds":[],"scrolltop":0,"scrollleft":0,"selection":{"start":{"row":12,"column":12},"end":{"row":12,"column":12},"isBackwards":false},"options":{"guessTabSize":true,"useWrapMode":false,"wrapToView":true},"firstLineState":0},"timestamp":1562791514323,"hash":"309c3f39d51932a606e02cf0045a2df22ed3ff28"} | 6,990 | 6,990 | 0.554363 | 1,015 | 6,990 | 3.807882 | 0.093596 | 0.14075 | 0.182147 | 0.132471 | 0.741268 | 0.699353 | 0.592497 | 0.569728 | 0.496766 | 0.441397 | 0 | 0.085292 | 0.002003 | 6,990 | 1 | 6,990 | 6,990 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0.475325 | 0.009584 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8e7a8f28e701925665cd19f3ee831313fbd511d4 | 6,574 | py | Python | algorithm_Python/heap_sort.py | enihsyou/Sorting-algorithm | 09a109bb26e0d8d165a4d1bbe18ec7b4e538b364 | [
"MIT"
] | null | null | null | algorithm_Python/heap_sort.py | enihsyou/Sorting-algorithm | 09a109bb26e0d8d165a4d1bbe18ec7b4e538b364 | [
"MIT"
] | null | null | null | algorithm_Python/heap_sort.py | enihsyou/Sorting-algorithm | 09a109bb26e0d8d165a4d1bbe18ec7b4e538b364 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Heap sort
File name: heap_sort
Reference: https://en.wikipedia.org/wiki/Heapsort
Introduction: heap sort, O(n*log(n))
Date: 2016-05-22
Last modified: 2016-06-01
Author: enihsyou
"""
from count_time import count_time_debug, count_time
@count_time
def heap_sort(data):
"""Heap sort
    Args:
        data (List[int]): the list to sort; must not be None
    Returns:
        List[int]: the sorted list
"""
length = len(data)
    def _build_max_heap(_start, _end):
        """Rearrange the data between ``start`` and ``end`` so it satisfies the heap property.
        Sift the subtree rooted at ``_start`` down so that the heap order between
        parents and children is restored.
        Args:
            _start (int): start position of the range to heapify
            _end (int): end position of the range to heapify
        Returns:
            None: only the ``data`` list is modified in place
        """
        root = _start  # position of the subtree root
        while True:
            child = 2 * root + 1  # index of the first (left) child
            if child > _end: break
            # compare the two children and pick the smaller one
if child + 1 <= _end and data[child] > data[child + 1]:
child += 1
if data[root] > data[child]:
data[root], data[child] = data[child], data[root]
                root = child  # keep sifting down
            else:  # heap order already holds, stop
break
    # build the initial heap
    # (length - 2) // 2 is the last parent node; heapify from right to left
    for start in range((length - 2) // 2, -1, -1):
        _build_max_heap(start, length - 1)  # length - 1: stay within the array bounds
    # sorting phase: swap the root to the end of the unsorted region and re-heapify
for end in range(length - 1, 0, -1):
data[0], data[end] = data[end], data[0]
_build_max_heap(0, end - 1)
return data
@count_time
def heap_sort_reverse(data, reverse=False):
"""Heap sort ver.reverse
    Supports sorting from smallest to largest as well; slicing with [::-1] could be used instead for better efficiency.
    Args:
        data (List[int]): the list to sort; must not be None
reverse (bool): whether to sort descending (default: False)
Returns:
List[int]: ordered list
"""
length = len(data)
    def _build_max_heap(_start, _end):
        """Rearrange the data between ``start`` and ``end`` so it satisfies the heap property.
        Sift the subtree rooted at ``_start`` down so that the heap order between
        parents and children is restored.
        Hoisting the ``reverse`` check out of the loop would make this a little faster.
        Args:
            _start (int): start position of the range to heapify
            _end (int): end position of the range to heapify
        Returns:
            None: only the ``data`` list is modified in place
        """
        root = _start  # position of the subtree root
        while True:
            child = 2 * root + 1  # index of the first (left) child
            if child > _end: break
            # compare the two children: follow the larger when reverse, otherwise the smaller
if child + 1 <= _end:
if reverse: # descending
if data[child] < data[child + 1]:
child += 1
else: # ascending
if data[child] > data[child + 1]:
child += 1
            # swap root and child if the heap order is violated
if reverse: # descending
if data[root] < data[child]:
data[root], data[child] = data[child], data[root]
                    root = child  # keep sifting down
                else:  # heap order already holds, stop
break
else: # ascending
if data[root] > data[child]:
data[root], data[child] = data[child], data[root]
                    root = child  # keep sifting down
                else:  # heap order already holds, stop
break
    # build the initial heap
    # (length - 2) // 2 is the last parent node; heapify from right to left
    for start in range((length - 2) // 2, -1, -1):
        _build_max_heap(start, length - 1)  # length - 1: stay within the array bounds
    # sorting phase: swap the root to the end of the unsorted region and re-heapify
for end in range(length - 1, 0, -1):
data[0], data[end] = data[end], data[0]
_build_max_heap(0, end - 1)
return data
@count_time_debug
def heap_sort_debug(data, reverse=False, print_step=False):
"""Heap sort ver.debug
    Args:
        data (List[int]): the list to sort; must not be None
reverse (bool): whether to sort descending (default: False)
print_step (bool) : whether to show sorting steps (default: False)
Returns:
List[int]: ordered list
"""
length = len(data)
    steps = 0  # number of loop iterations
    comps = 0  # number of comparisons
    swaps = 0  # number of swaps
    def _build_max_heap(_start, _end):
        """Rearrange the data between ``start`` and ``end`` so it satisfies the heap property.
        Sift the subtree rooted at ``_start`` down so that the heap order between
        parents and children is restored.
        Hoisting the ``reverse`` check out of the loop would make this a little faster.
        Args:
            _start (int): start position of the range to heapify
            _end (int): end position of the range to heapify
        Returns:
            None: only the ``data`` list is modified in place
        """
        nonlocal steps, swaps, comps
        root = _start  # position of the subtree root
        while True:
            steps += 1
            child = 2 * root + 1  # index of the first (left) child
            if child > _end: break
            # compare the two children: follow the larger when reverse, otherwise the smaller
if child + 1 <= _end:
if reverse: # descending
comps += 1
if data[child] < data[child + 1]:
child += 1
else: # ascending
comps += 1
if data[child] > data[child + 1]:
child += 1
            # swap root and child if the heap order is violated
if reverse: # descending
comps += 1
if data[root] < data[child]:
swaps += 1
data[root], data[child] = data[child], data[root]
                    root = child  # keep sifting down
                else:  # heap order already holds, stop
break
else: # ascending
comps += 1
if data[root] > data[child]:
swaps += 1
data[root], data[child] = data[child], data[root]
                    root = child  # keep sifting down
                else:  # heap order already holds, stop
break
if print_step: print(data)
    # build the initial heap
    # (length - 2) // 2 is the last parent node; heapify from right to left
    for start in range((length - 2) // 2, -1, -1):
        _build_max_heap(start, length - 1)  # length - 1: stay within the array bounds
    # sorting phase: swap the root to the end of the unsorted region and re-heapify
for end in range(length - 1, 0, -1):
swaps += 1
data[0], data[end] = data[end], data[0]
_build_max_heap(0, end - 1)
    print("input length:", length,
          "loop iterations:", steps,
          "comparisons:", comps,
          "swaps:", swaps)
return data
#############
# Test Part #
#############
# basic call test
# print(heap_sort([3, 5, 4, 8, 2, 7, 6, 0, 9, 1]))
# print(heap_sort([3, 5, 4, 8, 2, 7, 6, 0, 9, 1], True))
# timing test
# heap_sort_debug([3, 5, 4, 8, 2, 7, 6, 0, 9, 1])
# heap_sort_debug([3, 5, 4, 8, 2, 7, 6, 0, 9, 1], True)
# step-printing test
# heap_sort_debug([3, 5, 4, 8, 2, 7, 6, 0, 9, 1], print_step=True)
# heap_sort_debug([3, 5, 4, 8, 2, 7, 6, 0, 9, 1], True, print_step=True)
if __name__ == "__main__":
    print("Heap sort: enter a list to test")
while True:
inp = input()
print(heap_sort_debug(eval(inp), print_step=True))
| 27.278008 | 74 | 0.488135 | 785 | 6,574 | 3.975796 | 0.171975 | 0.072092 | 0.074976 | 0.057674 | 0.799744 | 0.782442 | 0.782442 | 0.774431 | 0.774431 | 0.774431 | 0 | 0.037946 | 0.386675 | 6,574 | 240 | 75 | 27.391667 | 0.736111 | 0.338302 | 0 | 0.818182 | 0 | 0 | 0.010622 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054545 | false | 0 | 0.009091 | 0 | 0.090909 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
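The heap_sort module above implements the sift-down by hand. For comparison, a minimal sketch (not part of the repository) of the same sort using Python's standard-library heapq:

import heapq

def heapq_sort(data):
    heapq.heapify(data)  # O(n) in-place min-heap construction
    # pop the minimum repeatedly; len(data) is captured before the pops start
    return [heapq.heappop(data) for _ in range(len(data))]

# usage: heapq_sort([3, 5, 4, 8, 2, 7, 6, 0, 9, 1]) returns ascending order; the input list is emptied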
8ebea9bd9a36436d97a39d084b3e9f5efd6b672d | 48 | py | Python | item_s/app.py | augustocarrlos10/Flask_TecWeb | 5e99f20d61c42a9a1b621abe40c7410f8ac9890f | [
"MIT"
] | null | null | null | item_s/app.py | augustocarrlos10/Flask_TecWeb | 5e99f20d61c42a9a1b621abe40c7410f8ac9890f | [
"MIT"
] | null | null | null | item_s/app.py | augustocarrlos10/Flask_TecWeb | 5e99f20d61c42a9a1b621abe40c7410f8ac9890f | [
"MIT"
] | null | null | null | from yourapplication import app as application
| 16 | 46 | 0.854167 | 6 | 48 | 6.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 48 | 2 | 47 | 24 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d91d9c7ed29a69a5366d7b5dad33b9b80409c4b3 | 96 | py | Python | venv/lib/python3.8/site-packages/yarg/package.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/yarg/package.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/yarg/package.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/37/28/98/75cee92af7cb65c02b31d951959cbf85df7edff9219f99df9d60762c79 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 0 | 96 | 1 | 96 | 96 | 0.479167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d9925ffb3873d78bb7c57816683316ad608d9519 | 34 | py | Python | authman/resource/__init__.py | masoudn84/authman | 411a5461e52410ab9ec11e99285f27296d381c2c | [
"Apache-2.0"
] | null | null | null | authman/resource/__init__.py | masoudn84/authman | 411a5461e52410ab9ec11e99285f27296d381c2c | [
"Apache-2.0"
] | null | null | null | authman/resource/__init__.py | masoudn84/authman | 411a5461e52410ab9ec11e99285f27296d381c2c | [
"Apache-2.0"
] | null | null | null | from authman.resource import apiv1 | 34 | 34 | 0.882353 | 5 | 34 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0.088235 | 34 | 1 | 34 | 34 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
79402936ee8c57ef0f611a9d88693dd7d9f2c397 | 29 | py | Python | db/search/resolvers/__init__.py | matchd-ch/matchd-backend | 84be4aab1b4708cae50a8988301b15df877c8db0 | [
"Apache-2.0"
] | 1 | 2022-03-03T09:55:57.000Z | 2022-03-03T09:55:57.000Z | db/search/resolvers/__init__.py | matchd-ch/matchd-backend | 84be4aab1b4708cae50a8988301b15df877c8db0 | [
"Apache-2.0"
] | 7 | 2022-02-09T10:44:53.000Z | 2022-03-28T03:29:43.000Z | db/search/resolvers/__init__.py | matchd-ch/matchd-backend | 84be4aab1b4708cae50a8988301b15df877c8db0 | [
"Apache-2.0"
] | null | null | null | from .hit import HitResolver
| 14.5 | 28 | 0.827586 | 4 | 29 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
79439689139bc78e7fafa036aa9680c5c06bf3ab | 3,646 | py | Python | src/zen/tests/gml.py | wangyiranamy/Testing | 2a729d1f73b6df69150807b965b8fedbb7661c04 | [
"BSD-3-Clause"
] | 41 | 2015-01-13T19:49:50.000Z | 2021-05-02T04:11:19.000Z | src/zen/tests/gml.py | wangyiranamy/Testing | 2a729d1f73b6df69150807b965b8fedbb7661c04 | [
"BSD-3-Clause"
] | 9 | 2015-01-28T10:46:27.000Z | 2022-03-12T06:32:39.000Z | src/zen/tests/gml.py | wangyiranamy/Testing | 2a729d1f73b6df69150807b965b8fedbb7661c04 | [
"BSD-3-Clause"
] | 19 | 2015-01-27T12:19:42.000Z | 2019-07-20T21:30:56.000Z | from zen import *
import unittest
import os
import os.path as path
import tempfile
class GMLReadTestCase(unittest.TestCase):
def test_read_directed_test1(self):
fname = path.join(path.dirname(__file__),'test1.gml')
G = gml.read(fname)
self.assertEqual(len(G),3)
self.assertEqual(G.size(),2)
self.assertEqual(type(G),DiGraph)
self.assertTrue(G.has_edge('N1','N2'))
self.assertTrue(G.has_edge('N2','N3'))
self.assertFalse(G.has_edge('N1','N3'))
self.assertFalse(G.has_edge('N3','N2'))
self.assertEqual(G.node_idx('N1'),1)
self.assertEqual(G.node_idx('N2'),2)
self.assertEqual(G.node_idx('N3'),3)
self.assertEqual(G.node_data('N1')['sampleOne'],42)
self.assertEqual(G.node_data('N2')['sampleTwo'],42.1)
self.assertEqual(G.node_data('N3')['sampleThree'],'HELLO WORLD')
self.assertEqual(G.edge_data('N1','N2')['label'],
'Edge from node 1 to node 2')
def test_read_undirected_test1(self):
fname = path.join(path.dirname(__file__),'test2.gml')
G = gml.read(fname)
self.assertEqual(len(G),3)
self.assertEqual(G.size(),2)
self.assertEqual(type(G),Graph)
self.assertTrue(G.has_edge('N1','N2'))
self.assertTrue(G.has_edge('N2','N3'))
self.assertFalse(G.has_edge('N1','N3'))
self.assertTrue(G.has_edge('N3','N2'))
self.assertEqual(G.node_idx('N1'),1)
self.assertEqual(G.node_idx('N2'),2)
self.assertEqual(G.node_idx('N3'),3)
self.assertEqual(G.node_data('N1')['sampleOne'],42)
self.assertEqual(G.node_data('N2')['sampleTwo'],42.1)
self.assertEqual(G.node_data('N3')['sampleThree'],'HELLO WORLD')
self.assertEqual(G.edge_data('N1','N2')['label'],
'Edge from node 1 to node 2')
def test_list_variables(self):
fname = path.join(path.dirname(__file__),'test3.gml')
G = gml.read(fname)
self.assertEqual(len(G),3)
self.assertEqual(G.size(),2)
self.assertEqual(G.node_data('N1')['listVar'],
[1,'a',3.2])
def test_weight_fxn(self):
fname = path.join(path.dirname(__file__),'test3.gml')
G = gml.read(fname,weight_fxn=lambda data:data['value'])
self.assertEqual(len(G),3)
self.assertEqual(G.size(),2)
self.assertEqual(G.weight('N1','N2'),2)
self.assertEqual(G.weight('N2','N3'),3)
def test_non_asci_char(self):
G = Graph()
G.add_node(u'\u2660')
G.add_node(u'\u2663')
G.add_node(u'\u2665')
G.add_node(u'\u2666')
G.add_edge(u'\u2663', u'\u2665')
G.add_edge(u'\u2660', u'\u2666')
G.add_edge(u'\u2665', u'\u2666')
G.add_edge(u'\u2660', u'\u2663')
gml.write(G, 'test4.gml')
H = gml.read('test4.gml')
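        # round-trip check: node and edge indices must be identical after write + read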
for nobj in G.nodes():
self.assertEqual(H.node_idx(nobj), G.node_idx(nobj))
for nobj1, nobj2 in G.edges():
self.assertEqual(H.edge_idx(nobj1, nobj2),
G.edge_idx(nobj1, nobj2))
self.assertEqual(G.size(), H.size())
self.assertEqual(len(G), len(H))
def test_tuple_node_objects(self):
G = Graph()
G.add_node((1,2))
G.add_node((2,3))
G.add_edge((1,2),(2,3))
gml.write(G, 'test5.gml')
H = gml.read('test5.gml')
for nobj in G.nodes():
self.assertEqual(H.node_idx(nobj), G.node_idx(nobj))
for nobj1, nobj2 in G.edges():
self.assertEqual(H.edge_idx(nobj1, nobj2),
G.edge_idx(nobj1, nobj2))
self.assertEqual(G.size(), H.size())
self.assertEqual(len(G), len(H))
def test_no_node_data(self):
G = Graph()
G.add_node()
G.add_node()
G.add_edge_(0,1)
gml.write(G, 'test5.gml')
H = gml.read('test5.gml')
for edge_idx in G.edges_():
node_idx1, node_idx2 = H.endpoints_(edge_idx)
H.has_edge_(node_idx1, node_idx2)
self.assertEqual(G.size(), H.size())
self.assertEqual(len(G), len(H))
if __name__ == '__main__':
unittest.main()
| 24.469799 | 66 | 0.665661 | 607 | 3,646 | 3.833608 | 0.149918 | 0.238505 | 0.165019 | 0.111732 | 0.78771 | 0.764933 | 0.70477 | 0.70477 | 0.672969 | 0.672969 | 0 | 0.049134 | 0.129183 | 3,646 | 148 | 67 | 24.635135 | 0.68378 | 0 | 0 | 0.582524 | 0 | 0 | 0.109435 | 0 | 0 | 0 | 0 | 0 | 0.436893 | 1 | 0.067961 | false | 0 | 0.048544 | 0 | 0.126214 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
795a3f029819973136f92b1daa9efc9b0d762635 | 25 | py | Python | src/wta/__init__.py | kaist-irnlp/SparseColBERT | f0f0ed4acff5dc3c747f13315de0fe7ea50b5b70 | [
"MIT"
] | null | null | null | src/wta/__init__.py | kaist-irnlp/SparseColBERT | f0f0ed4acff5dc3c747f13315de0fe7ea50b5b70 | [
"MIT"
] | null | null | null | src/wta/__init__.py | kaist-irnlp/SparseColBERT | f0f0ed4acff5dc3c747f13315de0fe7ea50b5b70 | [
"MIT"
] | null | null | null | from .wta import WTAModel | 25 | 25 | 0.84 | 4 | 25 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 25 | 1 | 25 | 25 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
798cada0852ba30d1888c3df99ad144d9bda1e89 | 205 | py | Python | secure_emailing/docker/common/smtp_proxy_v2/__init__.py | guadaltech/kubernetes-containers-tools | 5d4d9da6bf95162404b927cf23178d86dd99dd85 | [
"Apache-2.0"
] | 81 | 2015-01-25T13:16:31.000Z | 2021-05-29T13:35:53.000Z | secure_emailing/docker/common/smtp_proxy_v2/__init__.py | guadaltech/kubernetes-containers-tools | 5d4d9da6bf95162404b927cf23178d86dd99dd85 | [
"Apache-2.0"
] | 16 | 2015-01-01T13:52:23.000Z | 2020-06-12T01:13:18.000Z | secure_emailing/docker/common/smtp_proxy_v2/__init__.py | guadaltech/kubernetes-containers-tools | 5d4d9da6bf95162404b927cf23178d86dd99dd85 | [
"Apache-2.0"
] | 38 | 2015-01-16T10:23:07.000Z | 2021-08-24T14:14:27.000Z | import secure_smtpd.config
from secure_smtpd.config import LOG_NAME
from .smtp_server import SMTPServer
from .fake_credential_validator import FakeCredentialValidator
from .proxy_server import ProxyServer
| 34.166667 | 62 | 0.887805 | 27 | 205 | 6.481481 | 0.592593 | 0.125714 | 0.194286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087805 | 205 | 5 | 63 | 41 | 0.935829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
798e96a75fa3e2265dfc0f81881663d98d36cb9f | 35 | py | Python | testsqrt.py | UncleEngineer/PythonForBeginners | 042388fba66688988367dd2f338393e18ea645b2 | [
"MIT"
] | 3 | 2020-09-27T14:37:43.000Z | 2021-04-20T01:55:31.000Z | testsqrt.py | ktanan/PythonForBeginners | 042388fba66688988367dd2f338393e18ea645b2 | [
"MIT"
] | null | null | null | testsqrt.py | ktanan/PythonForBeginners | 042388fba66688988367dd2f338393e18ea645b2 | [
"MIT"
] | 3 | 2020-09-27T14:38:14.000Z | 2020-09-29T14:35:22.000Z | import math
print(math.sqrt(32))
| 11.666667 | 21 | 0.714286 | 6 | 35 | 4.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.142857 | 35 | 2 | 22 | 17.5 | 0.766667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
7999a010f4f658b3f1d157f47aa536df3bbb4bb7 | 6,912 | py | Python | needed_libs.py | PHP-Programmer-HUN/AI-cam | 095470e347fff1d740871728dfaced2d455a865c | [
"MIT"
] | 1 | 2021-06-27T23:38:21.000Z | 2021-06-27T23:38:21.000Z | needed_libs.py | PHP-Programmer-HUN/AI-cam | 095470e347fff1d740871728dfaced2d455a865c | [
"MIT"
] | null | null | null | needed_libs.py | PHP-Programmer-HUN/AI-cam | 095470e347fff1d740871728dfaced2d455a865c | [
"MIT"
] | null | null | null | vho65FiUoGAGUbiPPgrvITZQhaDN17sY7vFriy8U26SoghwJiGnOm5iIgHocdrIGo4nB3t4CrzFMZbdeGgplN6shRKqJqLkxUhFLorsAuqmBg4lM60aHq4lvepX9gDFk+Bgh6n2+4dqNgl744RrK8umHW9ZUMa7ABzvrbSOMJH4EDBLogL9fSgehmLp0mPiRM8JTQux4K+Yp/NM96kw/Kz77FvD2102AccXdzr2iZQtVt3RO4sgHBcfTrX/m979qjPAy2yMKvU/KGA+C9wfzpaT9kJP/wzQGP8EMK8AxSm3500EKXNoXrPHGGVLlqpb+WtmIRBngeMGvHsXWGJiIUV7+pxbhzaPdCxiGNwYw6TeMEB/ne1yL+ixJx0mqWYS0KRctfyqlYtTjPRugIfg0tzeb6uEV08KA3QPmzWBKrAk0Rid/huG7dA143RD/w3NdHn/X5Fw9Qwafl0Ckqa0eH21e3n30YNr+d+mb2ZmZfjhQRdvU3YOUMP3H9smfKaE4fUQrhrUVK2Zut2tCfYr6VYYqG+EuV2LMS/rel7wYJTS3jykiNPUiPgnzOBeZ8cOSbkh4hENDWz2TBXrgGWJBIhmIEsYr/hY82YGiWIxsMu7v3mXVrndehcrwBwq8O9Er9t+Bk+B8mNQyTetIrNrGZtb0DN9g0cu6s8yoAJiJX/GoKin7UKYAzs9tb32/SbHe/L/E3KG7GaJFUClTr3YS2ukDffwA4rVvTNQTarK00C/bBU5w5itn67a0NCf7hHzfN/RoJ/TEHES769zoXqAsD3W68pR7WR7Tqx6mxxg9bAy3iPXGE8gWU2ttejQy3rQ24QZ8SCmhSCLZGfEQRpu8mGdwMZTnjE25TT+JTUix1c0CvNXcKnpId/xooUgCULQHiXrWiGNgEI73bL9RgBPFWIpJ+NlruGqN4xuVi4yRE5jLLi6gpSZtFC2x8M5BgaVOTk8dmwNkNYowzwQDlob6qv4wnjbddLVll5ZYY+1IrKJYqyVJrDQsdVhGyq6Cka0YnEeNeE4Xdbj4aUJYDCpA3g0cYyY1peuE/F4VnoWX5zpTauPeTnhKOcLjn6TPre6odNg6MQo/hWclgcQgulvCZDV65/QPyzPXVan8vFPQUh1ONXpF6NSBuA1624g0hMbEZownLOKoJbw8AECyh6mhN1/ebroh10NSb13kPgg4DAABFN13pdufRQKUlMPcg8kRT7uFFZaCldjMsIaWlvQYXEhEB03KWi3HoFOOWCN0DBUlmwS2YSmF0aUexO2ICESDHzTiWaB2bKOi0YFpht6bv3tFug6/J1t9VWqVVMwd5O0kUJdukZSv560YMyeTJylz5sYAgfDBG90Hc6zkahAzEhEabpDZ8Eqy9d3zPYfH2HAPP9KqW3S7ZDArsGSzbZw74807Im+hmq1qWN+lmhFMhsGkxtjtE1dFXoEP4+/8gbd8eMErG76QryPNi33U5LIIYqKNhL71DESg1R1q322JDt83mawW2dvOtck7FRf/s9KMR9W9aOMwGPu5OFciV1ZbymfFa1DyfFpoWjdkWpDGCHqJEW1kf//SxpwQbRdRRChs1wKwGfGeKRQdX5k08s1wKarg4F5BsFogq3lqpZiHN/HbGxd4t42VbXfCkzLBYlDTOUQJ1kifBbnGHmeGpUw8YN8WXD6/9C5CyKBmnzn0q2M03om0/kcgJ4cyk/Hl0/ejpJrQx/wPDzO/9MeUX7Og+UjndS/VmVU0IkvUn0aLRyAEAymdhsNmDzc8viESIQqRCQdJAQT+HmEsCOXY8G1uFjOvohnNk09fd3ya/OEEfUZGnymtqd1jk/GvOZ4z7Tk+WV9NoC22vwKrp9hZ7d2MTKsq1P4EPEA+KV0o2ekExlQAIKn45gl9jEhQnyDMO+q9JOUN6siWCLe587WberPVOposWtMS1RIcKlmWLzVgY5rHPILpnMLLAQvSxBToIFgVD9JL2uhL7qY7QVrcCYnjIlKeNRHXW6CX4PGaVntm73TMVmBPbMormd2vzNkkmkhszqZKiIYjDPCAbD11oVX5aXOwZq0UUYP/2XagSCboCla3M19bwpFy7bT4RnYHalSBNtpYLdfVIr8Gez4SyiCyQO8FynDElPSKQdrrV4VEGLwZ/HN1X/ItXTUEXwKnsV0EBReslNLKlfREg8/HzNgVvp31HUOxRCPGZLggTRFYxGcoruRuItwAMgUTJdbsMjsypGxGHQb4dBajlLf/YI1975WRj9lZmVqHKsYB6VEvkfXYiIXaCZ5MLDUaE1/Ty41z9yxSvUTMts2mqE7YMZKACAVFjpiYken4x1UGGELVt9ph6sg+3xOOaYJSWzpEkGCYZg8jumS9N9k9H9Tke0r9YwOlJpJinyIIstJMikz7BpO4HgeE8on5Nb0KFhsb/U8XCR6JSFblfvxSxIIDDLTaHyLiR761CieP0Ly62gX7pYR1+0gQrbUARJCkCeaKnP8r+bx828oZPSpPw7+Jbq9TVCxdfvUSw+DpPdTyJ61c07Owk2shRhtnDVoxRJr0Yoq6UO6mYLAMtxSIqpVXr0gEZz05U5fCEkR/Sg/8/4hgFUuLHne+tEjvoM2TDO87FXkFRg3CLV3ImIkcGc8mA1seGcvwU7gAW0O4ccCpCVk3ug75bxSiw+rXZ5sAtIBhIn64acyRLZJ8Jv5iJjbiattalqYLhPnag/WSCiBh/0YGbQdhtUW1Q8tVxu8350GC9JG930Doa0Ag1S95MKBKtt+k/QaVpUDPbzV3r8ox0qZjQ2d/Kjojkep9qeF4e3/OMYEkhdRkM5cR3fLhOzSHTpEZ/wU6ZmXVS27Mn5icvKFNmQc8tILJngselbyzcgmEXQ+qpqlgYr6kHVLczBAFXLjEch69sLC9gDh1y8Ia2+nrr5M1gJ2Bjr72p10w821q1+faNaJHrEX1SKz+7WyrQBiVOPyx8pMDcIY6AnnjLMaPQeQBGi3u8iFsnGnOzLV4w2wC9xqPHvpRMxKq8ztB3te0G0mwhjjeSceARb/VyhKYkRiyNWaqI45DBa4x7Vtsm12fxqHWDVe7iOIVzrd/jBgrwWFy766XnDFzXRkmc/y0jGVcBW6N9F5ZwGw+PGui1+8pe+2q6Vqs3aXbKhVTtP74Koe4GAfKVZPzkJtNrP4gcxUFp9Xmrtx8D5QEPmFJGt3NpgYs3WhEHez6xkS/r/Msz/dhLSdWOhNh9vRn5gMf1VTIRNjlSb1CzNtC+R6QsdcEkH0aV3c9illdkOnyPkwK5Pl6NC/IRBZkpYGdYzFNgLMhd2Hpw/SZuTY6YH/+fZeMwhq1QEGQFZJr175jMmf5HSQET8VvqzggY3AVsAEfea68+OkBZy/H+jcFmuhCzIvIg/0R0D/CAQlWBtLp1hA9QcTrj9KJlCTYOUJOas2ldtPw/ywjEIc8hIkLe5wf7WEozzXEIHxWvrNi/rOl/kTbbP/0yNkdL/NFMU21cl64mSqooTitIlBu1faW7o1iTLdfAgSmCmiccp7Kr7DtZnKZ9K9PuBXos1
Eh6hCDc6VPAskzEUuUFhmaOrwvaBtEmlO76Sta1MA4Sx/5vEV03qmhLE4M/XfKC1OH2uxqlgF6dLZTh2AabGoXeDbNyCVeDC9Q29gZjwWHbz3e4kVpeU9iORbN7YPdD5ZRVG5e2QcW/llYCsBFAhHVoJz8Jhg7H/aiQ2oMHjTlK06Y3O4ZQ30E6xmq1L5eLIaUqMBvSk8XazfZLnwyDEjNPjBHZZSJxLQ2vZx5ZZOJS3rsYf722C8n0AF7o55NoqBc3L2nMXd4aI/WBs0UFP+dQI4E2FzesMCluVLnVGeIycoiMTDuQh2aaWUk4Qx6kC+8d7I2TwhbF59HWen/ibmzjDEQnfFlfaEGhPh42zkN5he6NYnobTFmcpOMLH7mp9qp3OQ5BxykWLqbov7l+Jjp7tzjANRTaWEAzz6L0F1SgnF/BfaZ5fL0kz+jSVSHptag4bkjzpLOTeEqzJVhr9mr4CrLoIICiJO6p+ys/R6kV3ZdwHuf1rCO2A0OyCvX2WhKdyoWyGvK1m2T+ZLJCZ5hUGzb4ncLrsCOTPX9dKXnyliXILEQTOrlToxtrfu8Iii/KXuPG4k6WgNSmpEg9+J/KnDwa+avplstl5Ss72KlmqeXdqmgqQ6F8qBLaD2T8lcfmSPTybqB826ZWTfSAe1RE//KWOlejh9MxNnb7M9aQjb2k4HO+desVwvGok067/Nhrdzxrvp62J8wbyR3gCX6SOHzjfy68sLwByyt32XJ+3oLSiNNNKoN488Z52szSLQ/jat2SaB8Op3Ihf+vxfn3eWOs5EW+/jNyTey3Xoj5v5LOvVCUcCyCDMcpVvcdu/UXD3wunW9AJfTkIPfwLFsyZQn/zetcas7N/jUrNsZpYRE2INUYYACAfSg1wKPHqHXuMDexd6IcLpw9tTY6CL0M9/C+NusVswpSWdJvZqysUJGBqAy/o3dtDdSFP0FNO5o9NoXwGr9L3ET4yDEFSpiFYEHxPwRhzK8z2+EuzcZsXyY/ePBmQ+3SYiPtEDD+6jq3nJDkPQ+CJBMrZbJoG0E1Ijxf0gwlwut3aZzCX+G9R2KXJ1YZ6paKiTVkBjN8Je+H7FQaDTWQpGQVvW2Fxpc5p5INqkg2fstFgs4/9z/SMtQYfgBDLVGmYLzC1LSeOfnwonlD/52G4P/bYMONu/1UEZWK7zWfflrtuEQasbmhIHv1xnfkk69J4qOFimzBXd2R1skw00uvZzBSKDDV5ElOpK5VzHn8EKK+mkIeY0s3KYZWLNtmzGj4hieUfhw0CXBBlhkByP9apgKeZyGDsAxqhFq1C4FqGtI8IBmO9KiEpy5wk5VVtowxSH8v1CEF4N4IhLNGaS9J/q9INgCAhyp3nN7w1qIW0ocBEzQmR1NVC6za6L9FL+EY/2KQ7YkmMyX/qaG6URht/rvDthGeUB63wxEVWMCWglky+wWHPj4Fw5vEnsFZs2TQu7iJX1IXyNRd7XxLLgJ+O+NB5YwfoUn2DB0myBqopF+Zs+vADBY57xfGgzakTBxuzy7uIu0lc4ynrZcYuftpoaVobcvdkjk4OI0pvZAyr7DoIf8D8X5NGKt2Cvnj585jBHgJW1KYnLcyu1WYHH5t39YrlPatzIe9hVMnnXEM7c5mzmzEYR0R3fpmQVKOL93rEcrek3LJsnK3xChQxIlNQ1m4I0u+sZb2CIiXYEYh/AAGyQVnH88HNOlAubHLOh2WeF7qSCH4RWxMTE+bTVhRTIvelc7IJQiSHZpIzsy14ExCpYbiK2ilFHccSSsuAkNXOcl4UYr+LvlfFtOuzWkJ2s6Vbxqjs/5I6wOgK8iMHz6M97mVd21ZvFLuIiIA6J0isu+wCPgruyxjOABDLgWNe3rSJOBNd8/hQGJMMtpr8rfGWQCtTk4DO/pZU9iPBl4X1U1f/zy1pOcqAt4XAIrSbmS53KKbeIJe4aeTu1H0p/WvklelcRH+JboiR5583fU0pne/VKjY2r92lVchgMpj1SqJmdlqlq6pudx4Yd/XojwP1Po6PT9u6Kao794Wv4PJRiBGuem7JY8usUMJh+xudS3aOgVJqQ9c37mjaEWSZzRLAVJo0BLhFi/TPK932fmzyPfcdiRq9dC8j6/P3b4XmfA7mRUfJAPKymqwv6vBQIj9yIIl7AOwdl5UZOOb1xddYTp7EgmwRos6mYjY/aS5NJ03sNmo/tpYnfG4M3ae6sRUlZvd34jlaPsG2mcMqau7U8SXxmIxcgKTRv0a6i01gun8c+A3GuM2KKYWYt03b9VEL/H/noXIQXf+EIEAvtHh9p25t62IfkkuJrQhu830jjTdSwDcGoYmGeKc4DM5KNxNwhEyom2oZXdTrjkWZKweMvIwUDN6ghzBfmSQACKvTzr7skBVIyEIxhITpKLFKQn+X3rk4uuTbvU1TKdzRiYRhy5RYOVVlZj/58YolRHksMdBBFNYQuJx9A2c6Jdb9XBjHMbRdTJRP1uN0ft2Qtx6rerV3Jf8/We+nb5ZbGhuYkD3FN+a9u2zOagcQXTbK0rLAhj+eNlxLE7QJBqhSVT6eEAWXvF/hFUYFKppeLcqhrOd7LdhqH53qIDvJceQiorQluS9Lro8JxT54BrmPik/46/+pS9twnplj2ajd4Ukr/o11dMt4DbrogbzMP8odq7EgZqTZTF+jC1NCTGnwbQ7mCD0aL0hEUkq6U33HtpbPk41zxAeuVr03gW158k1bhkyR6jTV4kllZ954qRb3DNMrDAczBP9ZhRwjbFyTvx7ejpHLgCZBDKTEjm8B1UMldu7knaiS2NHJqRiZAje7L30AQfZS3BLYPd0Fk6vw+7rufx1/rHYSqGHBhDCpynkZcOlVwFlzJ44EVP3TEEMLYLDdaqsFGs3h5ewcgsbolKocHADy83qdtBlSungjAaDM4YACYoHqNFbz/0WJJ+WJ6la99u4mwvK9DbYc6Hsw9O6442k8Ec0WvKkfZ5fwUAAirE4jbpa5yI9W9FxTIZ08yvFJ7m1Fo9X0cUDYOjhLo/U5DhFANZANn0mSBOvpP/tNu7IvP2e/6c9v5CfcNaHl0UdlTLVQ7e+PEjnNQ42U931jug+mAtwhJHMOrsmf5HvP62p3HZf9kkIzO0VMM4lV4sEJRCkLEtpeOKzTFP9dlQdhzUt3Lt9gGU0h51qk6eMMxagQJKFy/BbHOVyP9diRr0LLiTMoozT+CzcRGkMWIXtd/2zGx7/6BTKwuGxEwtpJ2JiFyuKD1GX+lMSyPGqx8HXwKp6Hjkp0juR1EToJ4lGHI/+gvM6K/I2BIXHfc1YfesxtxCX+xADeLDmloSUsgTjrFHp4uJt3OQR6HcA/8hDAeUlIperYVsF8ceqqqBJGQMgrbANQPkzBzpSJ3sekWv0 | 6,912 | 6,912 | 0.967882 | 216 | 6,912 | 30.972222 | 0.99537 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157986 | 0 | 6,912 | 1 | 6,912 | 6,912 | 0.809896 | 0 | 0 | 0 | 
0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
79c4079e6bd384f6950dee3772f51610558a3f9d | 783 | py | Python | mapboxgl/__init__.py | lucasvw/mapboxgl-jupyter | e2dc343737014d5d26274c0dcdcb106b495d5a66 | [
"MIT"
] | null | null | null | mapboxgl/__init__.py | lucasvw/mapboxgl-jupyter | e2dc343737014d5d26274c0dcdcb106b495d5a66 | [
"MIT"
] | null | null | null | mapboxgl/__init__.py | lucasvw/mapboxgl-jupyter | e2dc343737014d5d26274c0dcdcb106b495d5a66 | [
"MIT"
] | null | null | null | from .viz import (
CircleViz,
GraduatedCircleViz,
HeatmapViz,
ClusteredCircleViz,
ImageViz,
RasterTilesViz,
ChoroplethViz,
LinestringViz,
)
from .layers import (
CircleLayer,
GraduatedCircleLayer,
HeatmapLayer,
ClusteredCircleLayer,
ImageLayer,
RasterTilesLayer,
ChoroplethLayer,
LinestringLayer,
)
from .map import Map
__version__ = "0.9.0"
__all__ = [
'CircleViz',
'GraduatedCircleViz',
'HeatmapViz',
'ClusteredCircleViz',
'ImageViz',
'RasterTilesViz',
'ChoroplethViz',
'LinestringViz',
'CircleLayer',
'GraduatedCircleLayer',
'HeatmapLayer',
'ClusteredCircleLayer',
'ImageLayer',
'RasterTilesLayer',
'ChoroplethLayer',
'LinestringLayer',
'Map',
] | 17.795455 | 27 | 0.655172 | 48 | 783 | 10.520833 | 0.520833 | 0.106931 | 0.146535 | 0.217822 | 0.879208 | 0.879208 | 0.879208 | 0.879208 | 0 | 0 | 0 | 0.005042 | 0.240102 | 783 | 44 | 28 | 17.795455 | 0.843697 | 0 | 0 | 0 | 0 | 0 | 0.293367 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.073171 | 0 | 0.073171 | 0 | 0 | 0 | 1 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8de765d9fd0e070aa1975360e2663ca4e1c106a6 | 19 | py | Python | optflow/__init__.py | czming/optflow | 6a6d24efbaac162f1d3da5d26430f9ea9e60bbad | [
"MIT"
] | 12 | 2021-07-02T15:27:04.000Z | 2021-12-28T05:59:04.000Z | zhusuan/flows/__init__.py | thu-ml/Zhusuan-Jittor | e73c6e3081afde305b9caba80858543abf168466 | [
"MIT"
] | 1 | 2021-07-29T08:50:00.000Z | 2021-07-29T08:50:00.000Z | zhusuan/flows/__init__.py | thu-ml/Zhusuan-Jittor | e73c6e3081afde305b9caba80858543abf168466 | [
"MIT"
] | 2 | 2021-07-20T11:04:29.000Z | 2021-11-11T09:02:35.000Z | from .flow import * | 19 | 19 | 0.736842 | 3 | 19 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 19 | 1 | 19 | 19 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
30d0a802d613a990c2ed4e800f9d5e13501901cc | 88 | py | Python | milk/nfoldcrossvalidation.py | luispedro/milk | abc2a28b526c199414d42c0a26092938968c3caf | [
"MIT"
] | 284 | 2015-01-21T09:07:55.000Z | 2022-03-19T07:39:17.000Z | milk/nfoldcrossvalidation.py | pursh2002/milk | abc2a28b526c199414d42c0a26092938968c3caf | [
"MIT"
] | 6 | 2015-04-22T15:17:44.000Z | 2018-04-22T16:06:24.000Z | milk/nfoldcrossvalidation.py | pursh2002/milk | abc2a28b526c199414d42c0a26092938968c3caf | [
"MIT"
] | 109 | 2015-02-03T07:39:59.000Z | 2022-01-16T00:16:13.000Z | from .measures.nfoldcrossvalidation import foldgenerator, getfold, nfoldcrossvalidation
| 44 | 87 | 0.886364 | 7 | 88 | 11.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 88 | 1 | 88 | 88 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
30e9da30b771f8da41affdb230d245b8038d55b9 | 21,961 | py | Python | pero/glyphs/shapes.py | xxao/pero | a7f0c84fae0b21fe120204e798bd61cdab3a125d | [
"MIT"
] | 13 | 2019-07-15T17:51:21.000Z | 2022-03-15T06:13:43.000Z | pero/glyphs/shapes.py | xxao/pero | a7f0c84fae0b21fe120204e798bd61cdab3a125d | [
"MIT"
] | 1 | 2021-12-29T00:46:44.000Z | 2022-01-21T16:18:48.000Z | pero/glyphs/shapes.py | xxao/pero | a7f0c84fae0b21fe120204e798bd61cdab3a125d | [
"MIT"
] | 3 | 2020-09-27T14:31:45.000Z | 2022-01-22T14:28:15.000Z | # Created byMartin.cz
# Copyright (c) Martin Strohalm. All rights reserved.
import math
from .. enums import *
from .. properties import *
from .. drawing import Path
from . glyph import Glyph
class Annulus(Glyph):
"""
Defines a ring-like glyph.
Properties:
x: int, float or callable
Specifies the x-coordinate of the center.
y: int, float or callable
Specifies the y-coordinate of the center.
inner_radius: int, float or callable
Specifies the inner radius.
outer_radius: int, float or callable
Specifies the outer radius.
line properties:
Includes pero.LineProperties to specify the glyph outline.
fill properties:
Includes pero.FillProperties to specify the glyph fill.
"""
x = NumProperty(0)
y = NumProperty(0)
inner_radius = NumProperty(UNDEF)
outer_radius = NumProperty(UNDEF)
line = Include(LineProperties)
fill = Include(FillProperties)
def draw(self, canvas, source=UNDEF, **overrides):
"""Uses given canvas to draw glyph."""
# check if visible
if not self.is_visible(source, overrides):
return
# get properties
x = self.get_property('x', source, overrides)
y = self.get_property('y', source, overrides)
inner_radius = self.get_property('inner_radius', source, overrides)
outer_radius = self.get_property('outer_radius', source, overrides)
# make path
path = Path().circle(x, y, inner_radius)
if outer_radius:
path.circle(x, y, outer_radius)
# set pen and brush
canvas.set_pen_by(self, source=source, overrides=overrides)
canvas.set_brush_by(self, source=source, overrides=overrides)
# draw path
canvas.draw_path(path)
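# Minimal usage sketch (illustrative only; "canvas" stands for any pero canvas
# backend and the property values are made-up examples, not part of this module):
#
#     ring = Annulus(x=100, y=100, inner_radius=20, outer_radius=50)
#     ring.draw(canvas)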
class Arc(Glyph):
"""
Defines an arc glyph.
Properties:
x: int, float or callable
Specifies the x-coordinate of the center.
y: int, float or callable
Specifies the y-coordinate of the center.
radius: int, float or callable
Specifies the arc radius.
clockwise: bool or callable
Specifies the drawing direction. If set to True the arc is drawn
clockwise, otherwise anti-clockwise.
start_angle properties:
Includes pero.AngleProperties to specify the start angle.
end_angle properties:
Includes pero.AngleProperties to specify the end angle.
line properties:
Includes pero.LineProperties to specify the glyph outline.
fill properties:
Includes pero.FillProperties to specify the glyph fill.
"""
x = NumProperty(0)
y = NumProperty(0)
radius = NumProperty(UNDEF)
start_angle = Include(AngleProperties, prefix="start_")
end_angle = Include(AngleProperties, prefix="end_")
clockwise = BoolProperty(True)
line = Include(LineProperties)
fill = Include(FillProperties)
def draw(self, canvas, source=UNDEF, **overrides):
"""Uses given canvas to draw glyph."""
# check if visible
if not self.is_visible(source, overrides):
return
# get properties
x = self.get_property('x', source, overrides)
y = self.get_property('y', source, overrides)
start_angle = AngleProperties.get_angle(self, 'start_', ANGLE_RAD, source, overrides)
end_angle = AngleProperties.get_angle(self, 'end_', ANGLE_RAD, source, overrides)
radius = self.get_property('radius', source, overrides)
        clockwise = self.get_property('clockwise', source, overrides)
# set pen and brush
canvas.set_pen_by(self, source=source, overrides=overrides)
canvas.set_brush_by(self, source=source, overrides=overrides)
# draw
canvas.draw_arc(x, y, radius, start_angle, end_angle, clockwise)
class Bow(Glyph):
"""
Defines an arc-like glyph specified by radius and end-point coordinates.
One of the four existing solutions is chosen according to the 'large' and
'clockwise' parameters.
Properties:
x1: int, float, callable
Specifies the x-coordinate of the arc start.
y1: int, float, callable
Specifies the y-coordinate of the arc start.
x2: int, float, callable
Specifies the x-coordinate of the arc end.
y2: int, float, callable
Specifies the y-coordinate of the arc end.
radius: int, float, callable
Specifies the arc radius.
large: bool
Specifies which of the possible arcs will be drawn according to
its length.
clockwise: bool, callable
Specifies which of the possible arcs will be drawn according to
drawing direction. If set to True the clockwise arc is drawn,
otherwise the anti-clockwise.
"""
x1 = NumProperty(0)
y1 = NumProperty(0)
x2 = NumProperty(0)
y2 = NumProperty(0)
radius = NumProperty(0)
large = BoolProperty(False)
clockwise = BoolProperty(True)
line = Include(LineProperties)
fill = Include(FillProperties)
def draw(self, canvas, source=UNDEF, **overrides):
"""Uses given canvas to draw glyph."""
# check if visible
if not self.is_visible(source, overrides):
return
# get properties
x1 = self.get_property('x1', source, overrides)
y1 = self.get_property('y1', source, overrides)
x2 = self.get_property('x2', source, overrides)
y2 = self.get_property('y2', source, overrides)
radius = self.get_property('radius', source, overrides)
large = self.get_property('large', source, overrides)
clockwise = self.get_property('clockwise', source, overrides)
# set pen and brush
canvas.set_pen_by(self, source=source, overrides=overrides)
canvas.set_brush_by(self, source=source, overrides=overrides)
# draw
canvas.draw_bow(x1, y1, x2, y2, radius, large, clockwise)
class Bar(Glyph):
"""
Defines a bar glyph.
Properties:
left: int, float or callable
Specifies the x-coordinate of the left edge.
right: int, float or callable
Specifies the x-coordinate of the right edge.
top: int, float or callable
Specifies the y-coordinate of the top edge.
bottom: int, float or callable
Specifies the y-coordinate of the bottom edge.
radius: int, float, (int,), (float,) callable or UNDEF
Specifies the corner radius as a single value or values for
individual corners starting from top-left.
line properties:
Includes pero.LineProperties to specify the glyph outline.
fill properties:
Includes pero.FillProperties to specify the glyph fill.
"""
left = NumProperty(0)
right = NumProperty(0)
top = NumProperty(0)
bottom = NumProperty(0)
radius = QuadProperty(0)
line = Include(LineProperties)
fill = Include(FillProperties)
def draw(self, canvas, source=UNDEF, **overrides):
"""Uses given canvas to draw glyph."""
# check if visible
if not self.is_visible(source, overrides):
return
# get properties
left = self.get_property('left', source, overrides)
right = self.get_property('right', source, overrides)
top = self.get_property('top', source, overrides)
bottom = self.get_property('bottom', source, overrides)
radius = self.get_property('radius', source, overrides)
# check coords
if right < left:
left, right = right, left
if bottom < top:
top, bottom = bottom, top
# set pen and brush
canvas.set_pen_by(self, source=source, overrides=overrides)
canvas.set_brush_by(self, source=source, overrides=overrides)
# draw
canvas.draw_rect(left, top, right-left, bottom-top, radius)
class Ellipse(Glyph):
"""
Defines an ellipse glyph.
Properties:
x: int, float or callable
Specifies the x-coordinate of the center.
y: int, float or callable
Specifies the y-coordinate of the center.
width: int, float or callable
Specifies the full width.
height: int, float or callable
Specifies the full height.
line properties:
Includes pero.LineProperties to specify the glyph outline.
fill properties:
Includes pero.FillProperties to specify the glyph fill.
"""
x = NumProperty(0)
y = NumProperty(0)
width = NumProperty(UNDEF)
height = NumProperty(UNDEF)
line = Include(LineProperties)
fill = Include(FillProperties)
def draw(self, canvas, source=UNDEF, **overrides):
"""Uses given canvas to draw glyph."""
# check if visible
if not self.is_visible(source, overrides):
return
# get properties
x = self.get_property('x', source, overrides)
y = self.get_property('y', source, overrides)
width = self.get_property('width', source, overrides)
height = self.get_property('height', source, overrides)
# set pen and brush
canvas.set_pen_by(self, source=source, overrides=overrides)
canvas.set_brush_by(self, source=source, overrides=overrides)
# draw
canvas.draw_ellipse(x, y, width, height)
class Line(Glyph):
"""
Defines a line glyph.
Properties:
x1: int, float or callable
Specifies the x-coordinate of the start.
y1: int, float or callable
Specifies the y-coordinate of the start.
x2: int, float or callable
Specifies the x-coordinate of the end.
y2: int, float or callable
Specifies the y-coordinate of the end.
line properties:
Includes pero.LineProperties to specify the line.
"""
x1 = NumProperty(0)
y1 = NumProperty(0)
x2 = NumProperty(0)
y2 = NumProperty(0)
line = Include(LineProperties)
def draw(self, canvas, source=UNDEF, **overrides):
"""Uses given canvas to draw glyph."""
# check if visible
if not self.is_visible(source, overrides):
return
# get properties
x1 = self.get_property('x1', source, overrides)
y1 = self.get_property('y1', source, overrides)
x2 = self.get_property('x2', source, overrides)
y2 = self.get_property('y2', source, overrides)
# set pen
canvas.set_pen_by(self, source=source, overrides=overrides)
# draw
canvas.draw_line(x1, y1, x2, y2)
class Polygon(Glyph):
"""
Defines a poly-line glyph.
Properties:
points: ((int, int),), ((float, float),), callable, None or UNDEF
Specifies the points as a sequence of (x,y) coordinates.
line properties:
Includes pero.LineProperties to specify the glyph outline.
fill properties:
Includes pero.FillProperties to specify the glyph fill.
"""
points = TupleProperty(UNDEF, nullable=True)
line = Include(LineProperties)
fill = Include(FillProperties)
def draw(self, canvas, source=UNDEF, **overrides):
"""Uses given canvas to draw glyph."""
# check if visible
if not self.is_visible(source, overrides):
return
# get properties
points = self.get_property('points', source, overrides)
# check data
if not points:
return
# set pen and brush
canvas.set_pen_by(self, source=source, overrides=overrides)
canvas.set_brush_by(self, source=source, overrides=overrides)
# draw
canvas.draw_polygon(points)
class Ray(Glyph):
"""
Defines a ray glyph.
Properties:
x: int, float or callable
Specifies the x-coordinate of the origin.
y: int, float or callable
Specifies the y-coordinate of the origin.
length: int, float or callable
Specifies the line length.
offset: int, float, callable or UNDEF
Specifies the shift from the origin while keeping the length and
angle.
angle properties:
Includes pero.AngleProperties to specify the line angle.
line properties:
Includes pero.LineProperties to specify the glyph outline.
"""
x = NumProperty(0)
y = NumProperty(0)
length = NumProperty(UNDEF)
offset = NumProperty(0)
angle = Include(AngleProperties)
line = Include(LineProperties)
def draw(self, canvas, source=UNDEF, **overrides):
"""Uses given canvas to draw glyph."""
# check if visible
if not self.is_visible(source, overrides):
return
# get properties
x = self.get_property('x', source, overrides)
y = self.get_property('y', source, overrides)
length = self.get_property('length', source, overrides)
offset = self.get_property('offset', source, overrides)
angle = AngleProperties.get_angle(self, '', ANGLE_RAD, source, overrides)
# set pen
canvas.set_pen_by(self, source=source, overrides=overrides)
# draw
canvas.draw_ray(x, y, angle, length, offset)
class Rect(Glyph):
"""
Defines a rectangle glyph.
Properties:
x: int, float or callable
Specifies the x-coordinate of the anchor.
y: int, float or callable
Specifies the y-coordinate of the anchor.
width: int, float or callable
Specifies the full width.
height: int, float or callable
Specifies the full height.
radius: int, float, (int,), (float,) callable or UNDEF
Specifies the corner radius as a single value or values for
individual corners starting from top-left.
anchor: pero.POSITION_COMPASS or callable
Specifies the anchor position as any item from the
pero.POSITION_COMPASS enum.
line properties:
Includes pero.LineProperties to specify the glyph outline.
fill properties:
Includes pero.FillProperties to specify the glyph fill.
"""
x = NumProperty(0)
y = NumProperty(0)
width = NumProperty(UNDEF)
height = NumProperty(UNDEF)
radius = QuadProperty(0)
anchor = EnumProperty(UNDEF, enum=POSITION_COMPASS)
line = Include(LineProperties)
fill = Include(FillProperties)
def draw(self, canvas, source=UNDEF, **overrides):
"""Uses given canvas to draw glyph."""
# check if visible
if not self.is_visible(source, overrides):
return
# get properties
x = self.get_property('x', source, overrides)
y = self.get_property('y', source, overrides)
anchor = self.get_property('anchor', source, overrides)
width = self.get_property('width', source, overrides)
height = self.get_property('height', source, overrides)
radius = self.get_property('radius', source, overrides)
# shift anchor
if anchor is UNDEF or anchor == POS_NW:
pass
elif anchor == POS_N:
x -= 0.5 * width
elif anchor == POS_NE:
x -= width
elif anchor == POS_E:
x -= width
y -= 0.5 * height
elif anchor == POS_SE:
x -= width
y -= height
elif anchor == POS_S:
x -= 0.5 * width
y -= height
elif anchor == POS_SW:
y -= height
elif anchor == POS_W:
y -= 0.5 * height
elif anchor == POS_C:
x -= 0.5 * width
y -= 0.5 * height
# set pen and brush
canvas.set_pen_by(self, source=source, overrides=overrides)
canvas.set_brush_by(self, source=source, overrides=overrides)
# draw
canvas.draw_rect(x, y, width, height, radius)
class Shape(Glyph):
"""
Defines a path-based glyph.
Properties:
path: pero.Path, callable, None or UNDEF
Specifies the path to be drawn.
line properties:
Includes pero.LineProperties to specify the glyph outline.
fill properties:
Includes pero.FillProperties to specify the glyph fill.
"""
path = Property(UNDEF, types=(Path,), nullable=True)
line = Include(LineProperties)
fill = Include(FillProperties)
def draw(self, canvas, source=UNDEF, **overrides):
"""Uses given canvas to draw glyph."""
# check if visible
if not self.is_visible(source, overrides):
return
# get properties
path = self.get_property('path', source, overrides)
# check data
if not path:
return
# set pen and brush
canvas.set_pen_by(self, source=source, overrides=overrides)
canvas.set_brush_by(self, source=source, overrides=overrides)
# draw
canvas.draw_path(path)
class Wedge(Glyph):
"""
Defines a wedge glyph.
Properties:
x: int, float or callable
Specifies the x-coordinate of the center.
y: int, float or callable
Specifies the y-coordinate of the center.
offset: int, float, callable or UNDEF
Specifies the shift from the center in the direction of an angle
between start angle and end angle.
inner_radius: int, float or callable
Specifies the inner radius.
outer_radius: int, float or callable
Specifies the outer radius.
start_angle properties:
Includes pero.AngleProperties to specify the start angle.
end_angle properties:
Includes pero.AngleProperties to specify the end angle.
clockwise: bool or callable
Specifies the drawing direction. If set to True the arc is drawn
clockwise, otherwise anti-clockwise.
line properties:
Includes pero.LineProperties to specify the glyph outline.
fill properties:
Includes pero.FillProperties to specify the glyph fill.
"""
x = NumProperty(0)
y = NumProperty(0)
offset = NumProperty(0)
inner_radius = NumProperty(UNDEF)
outer_radius = NumProperty(UNDEF)
start_angle = Include(AngleProperties, prefix="start_")
end_angle = Include(AngleProperties, prefix="end_")
clockwise = BoolProperty(True)
line = Include(LineProperties)
fill = Include(FillProperties)
def draw(self, canvas, source=UNDEF, **overrides):
"""Uses given canvas to draw glyph."""
# check if visible
if not self.is_visible(source, overrides):
return
# get properties
x = self.get_property('x', source, overrides)
y = self.get_property('y', source, overrides)
start_angle = AngleProperties.get_angle(self, 'start_', ANGLE_RAD, source, overrides)
end_angle = AngleProperties.get_angle(self, 'end_', ANGLE_RAD, source, overrides)
inner_radius = self.get_property('inner_radius', source, overrides)
outer_radius = self.get_property('outer_radius', source, overrides)
clockwise = self.get_property('clockwise', source, overrides)
offset = self.get_property('offset', source, overrides)
# apply offset
if offset:
offset_angle = (end_angle + start_angle) * 0.5
x += math.cos(offset_angle) * offset
y += math.sin(offset_angle) * offset
# init path
path = Path()
# skip drawing
if start_angle == end_angle:
pass
# draw as annulus
elif start_angle % (2 * math.pi) == end_angle % (2 * math.pi):
path.circle(x, y, outer_radius)
if inner_radius:
path.circle(x, y, inner_radius)
# draw as wedge
else:
path.arc(x, y, outer_radius, start_angle, end_angle, clockwise)
path.line_to(x + inner_radius*math.cos(end_angle), y + inner_radius*math.sin(end_angle))
if inner_radius:
path.arc_around(x, y, start_angle, not clockwise)
path.close()
# set pen and brush
canvas.set_pen_by(self, source=source, overrides=overrides)
canvas.set_brush_by(self, source=source, overrides=overrides)
# draw path
canvas.draw_path(path)
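# Minimal usage sketch for the wedge above (illustrative only; "canvas" stands
# for any pero canvas backend and the values below are made-up examples):
#
#     wedge = Wedge(x=100, y=100, inner_radius=20, outer_radius=60,
#                   start_angle=0, end_angle=math.pi / 3, offset=5)
#     wedge.draw(canvas)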
| 30.543811 | 100 | 0.581485 | 2,447 | 21,961 | 5.139354 | 0.076011 | 0.097805 | 0.054866 | 0.057729 | 0.805821 | 0.792382 | 0.761371 | 0.750716 | 0.742287 | 0.71223 | 0 | 0.005855 | 0.338965 | 21,961 | 718 | 101 | 30.586351 | 0.860439 | 0.383726 | 0 | 0.694444 | 0 | 0 | 0.019766 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043651 | false | 0.011905 | 0.019841 | 0 | 0.440476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
30eb2eb00770a03abeb20449af46632dbf3e8137 | 2,322 | py | Python | method 2.py | PiyushBL45t/Image-Resizing- | a8c7e431b53d622129b2e4dd0e38189c92274202 | [
"MIT"
] | null | null | null | method 2.py | PiyushBL45t/Image-Resizing- | a8c7e431b53d622129b2e4dd0e38189c92274202 | [
"MIT"
] | null | null | null | method 2.py | PiyushBL45t/Image-Resizing- | a8c7e431b53d622129b2e4dd0e38189c92274202 | [
"MIT"
] | null | null | null | import cv2
import sys
def change_height(scale):
image = cv2.imread('mountain.jpg') # takes the image as an input
cv2.imshow("Mountain", image) # shows the image
print("The shape of the image is: ",image.shape)
    width = int(image.shape[1])         # keep the original width (image.shape is (height, width, channels))
    height = int(image.shape[0]*scale)  # scale the height by the given factor
# storing them in a tuple
resolution = (width, height)
print("Resolution of the converted image is: ",resolution)
new_image = cv2.resize(image, resolution, interpolation = cv2.INTER_AREA)
cv2.imshow('New Image', new_image)
cv2.imwrite('converted.jpg', new_image) # to save the image in the current working directory
print(new_image.shape)
cv2.waitKey(0)
cv2.destroyAllWindows()
def change_width(scale):
image = cv2.imread('mountain.jpg') # takes the image as an input
cv2.imshow("Mountain", image) # shows the image
print("The shape of the image is: ",image.shape)
width = int(image.shape[1]*scale) # change the width of the image according to the scale
    height = int(image.shape[0])  # keep the original height
# storing them in a tuple
resolution = (width, height)
print("Resolution of the converted image is: ",resolution)
new_image = cv2.resize(image, resolution, interpolation = cv2.INTER_AREA)
cv2.imshow('New Image', new_image)
cv2.imwrite('converted.jpg', new_image) # to save the image in the current working directory
print("The new shape of the image is: ", new_image.shape)
cv2.waitKey(0)
cv2.destroyAllWindows()
def main():
print()
choice = 1
while choice != 0:
print("1. Change the width of the image")
print("2. Change the height of the image")
print("3. Exit")
choice = int(input("Enter your choice: "))
if choice == 1:
scale = float(input("Enter the scale: "))
change_width(scale)
elif choice == 2:
scale = float(input("Enter the scale: "))
change_height(scale)
elif choice == 3:
sys.exit()
else:
print('Invalid choice')
main() | 30.96 | 97 | 0.625323 | 314 | 2,322 | 4.579618 | 0.191083 | 0.083449 | 0.062587 | 0.052851 | 0.851182 | 0.83936 | 0.803894 | 0.748261 | 0.748261 | 0.682893 | 0 | 0.018332 | 0.271748 | 2,322 | 75 | 98 | 30.96 | 0.832052 | 0.194229 | 0 | 0.44898 | 0 | 0 | 0.215126 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061224 | false | 0 | 0.040816 | 0 | 0.102041 | 0.22449 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
eba31f4ddfab40c60e256bb212e92a3d6903fba8 | 32,694 | py | Python | e2e/test_dataset_manager.py | allegro/biggerquery | bcb5f8f0a6807f2ef2213383373a783f08799cd3 | [
"Apache-2.0"
] | 25 | 2019-07-26T11:53:46.000Z | 2020-07-19T18:27:31.000Z | e2e/test_dataset_manager.py | allegro/biggerquery | bcb5f8f0a6807f2ef2213383373a783f08799cd3 | [
"Apache-2.0"
] | 7 | 2019-10-07T06:53:23.000Z | 2020-08-06T06:45:58.000Z | e2e/test_dataset_manager.py | allegro/biggerquery | bcb5f8f0a6807f2ef2213383373a783f08799cd3 | [
"Apache-2.0"
] | 5 | 2019-10-11T14:25:49.000Z | 2020-07-16T10:07:40.000Z | import uuid
import tempfile
import json
import pandas as pd
from pytz import UTC
from datetime import datetime, timedelta
from unittest import TestCase
from unittest import main
from pathlib import Path
from google.cloud.bigquery import Table, TimePartitioning
from bigflow.bigquery.dataset_manager import create_dataset_manager
from . import config
def df_to_collections(df):
return [r for _, r in df.iterrows()]
class DatasetManagerBaseTestCase(TestCase):
TEST_PARTITION_DT = (datetime.utcnow() - timedelta(days=-1)).replace(hour=0, minute=0, second=0, microsecond=0, tzinfo=UTC)
TEST_PARTITION = TEST_PARTITION_DT.isoformat()[:10]
TEST_PARTITION_PLUS_ONE = datetime.now().isoformat()[:10]
def setUp(self):
self.dataset_uuid = str(uuid.uuid4()).replace('-', '')
self.internal_tables = [
'fake_target_table',
'partitioned_fake_target_table',
'loaded_table',
'example_test_table'
]
self.external_tables = {'some_external': 'table'}
self.test_dataset_id, self.dataset_manager = create_dataset_manager(
config.PROJECT_ID,
self.TEST_PARTITION,
dataset_name=self.dataset_uuid,
internal_tables=self.internal_tables,
external_tables=self.external_tables,
dataset_labels={'test_label_key': 'test_label_value', 'another_test_label_key': 'another_test_label_value'},
tables_labels={'labeled_table': {'labeled_table_key': 'labeled_table_value', 'another_labeled_table_key': 'another_labeled_table_value'}})
self.dataset_manager.create_table('''
CREATE TABLE IF NOT EXISTS fake_target_table (
first_name STRING,
last_name STRING)
''')
self.dataset_manager.create_table('''
CREATE TABLE IF NOT EXISTS partitioned_fake_target_table (
batch_date TIMESTAMP,
first_name STRING,
last_name STRING)
PARTITION BY DATE(batch_date)
''')
def tearDown(self):
self.dataset_manager.remove_dataset()
class DatasetManagerTestCase(DatasetManagerBaseTestCase):
def test_should_add_labels(self):
# expect
self.assertEqual(self.dataset_manager.client.get_dataset(self.test_dataset_id).labels,
{'test_label_key': 'test_label_value', 'another_test_label_key': 'another_test_label_value'})
def test_should_not_add_labels(self):
# when
self.test_dataset_id, self.dataset_manager = create_dataset_manager(
config.PROJECT_ID,
self.TEST_PARTITION,
dataset_name="without_labels"+self.dataset_uuid,
internal_tables=self.internal_tables,
external_tables=self.external_tables)
# then
self.assertEqual(self.dataset_manager.client.get_dataset(self.test_dataset_id).labels, {})
def test_should_upsert_labels(self):
# given
self.assertEqual(self.dataset_manager.client.get_dataset(self.test_dataset_id).labels,
{'test_label_key': 'test_label_value', 'another_test_label_key': 'another_test_label_value'})
# when
new_test_dataset_id, new_dataset_manager = create_dataset_manager(
config.PROJECT_ID,
self.TEST_PARTITION,
dataset_name=self.dataset_uuid,
internal_tables=self.internal_tables,
external_tables=self.external_tables,
dataset_labels={'test_label_key': 'updated_test_label_value', 'new_test_label_key': 'new_test_label_value'},
tables_labels={'labeled_table': {'labeled_table_key': 'labeled_table_value'}})
# then
self.assertEqual(new_test_dataset_id, self.test_dataset_id)
self.assertEqual(new_dataset_manager.client.get_dataset(self.test_dataset_id).labels,
{'test_label_key': 'updated_test_label_value', 'new_test_label_key': 'new_test_label_value'})
class PartitionedDatasetManagerPropertiesTestCase(DatasetManagerBaseTestCase):
def test_should_expose_project_id_as_property(self):
# expect
self.assertEqual(self.dataset_manager.project_id, config.PROJECT_ID)
def test_should_expose_dataset_name_as_property(self):
# expect
self.assertEqual(self.dataset_manager.dataset_name, self.dataset_uuid)
def test_should_expose_internal_tables_as_property(self):
# expect
self.assertEqual(self.dataset_manager.internal_tables, self.internal_tables)
def test_should_expose_external_tables_as_property(self):
# expect
self.assertEqual(self.dataset_manager.external_tables, self.external_tables)
class WriteTruncateTestCase(DatasetManagerBaseTestCase):
def test_should_save_records_to_non_partitioned_table(self):
# when
self.dataset_manager.write_truncate('fake_target_table', '''
SELECT 'John' AS first_name, 'Smith' AS last_name
''', partitioned=False)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{fake_target_table}`
''')))
def test_should_override_old_records_in_non_partitioned_table(self):
# given
self.dataset_manager.write_truncate('fake_target_table', '''
SELECT 'Thomas' AS first_name, 'Anderson' AS last_name
''', partitioned=False)
# when
self.dataset_manager.write_truncate('fake_target_table', '''
SELECT 'Neo' AS first_name, 'Neo' AS last_name
''', partitioned=False)
final_rows = df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{fake_target_table}`
'''))
# then
self.assertEqual(len(final_rows), 1)
self.assertEqual(final_rows[0]['first_name'], 'Neo')
self.assertEqual(final_rows[0]['last_name'], 'Neo')
def test_should_save_records_to_partitioned_table(self):
# when
self.dataset_manager.write_truncate('partitioned_fake_target_table', '''
SELECT TIMESTAMP('{dt}') AS batch_date, 'John' AS first_name, 'Smith' AS last_name
''')
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{partitioned_fake_target_table}`
WHERE DATE(batch_date) = '{dt}'
''')))
def test_should_override_old_records_in_partitioned_table(self):
# given
self.dataset_manager.write_truncate('partitioned_fake_target_table', '''
SELECT TIMESTAMP('{dt}') AS batch_date, 'John' AS first_name, 'Smith' AS last_name
''')
# when
self.dataset_manager.write_truncate('partitioned_fake_target_table', '''
SELECT TIMESTAMP('{dt}') AS batch_date, 'Neo' AS first_name, 'Neo' AS last_name
''')
final_rows = df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{partitioned_fake_target_table}`
WHERE DATE(batch_date) = '{dt}'
'''))
# then
self.assertEqual(len(final_rows), 1)
self.assertEqual(final_rows[0]['first_name'], 'Neo')
self.assertEqual(final_rows[0]['last_name'], 'Neo')
def test_should_write_to_custom_partition(self):
# when
self.dataset_manager.write_truncate('partitioned_fake_target_table', '''
SELECT TIMESTAMP('{dt}') AS batch_date, 'John' AS first_name, 'Smith' AS last_name
''', custom_run_datetime=self.TEST_PARTITION_PLUS_ONE)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT *
FROM `{partitioned_fake_target_table}`
WHERE DATE(batch_date) = '{dt}'
''', custom_run_datetime=self.TEST_PARTITION_PLUS_ONE)))
def test_should_return_error_when_trying_to_write_to_nonexistent_table(self):
with self.assertRaises(ValueError):
self.dataset_manager.write_truncate('nonexistent_table', '''
SELECT 'John' AS first_name, 'Smith' AS last_name
''')
class CreateTableTestCase(DatasetManagerBaseTestCase):
def test_should_create_table(self):
# when
self.dataset_manager.create_table('''
CREATE TABLE new_table (
batch_date TIMESTAMP,
first_name STRING,
last_name STRING)
PARTITION BY DATE(batch_date)
''')
# then
self.assertTrue(self.dataset_manager._table_exists('new_table'))
def test_should_add_labels(self):
# given
self.dataset_manager.create_table('''
CREATE TABLE labeled_table (
id STRING
)''')
# when
test_dataset_id, dataset_manager = create_dataset_manager(
config.PROJECT_ID,
self.TEST_PARTITION,
dataset_name=self.dataset_uuid,
internal_tables=self.internal_tables,
external_tables=self.external_tables,
dataset_labels={'test_label_key': 'test_label_value', 'another_test_label_key': 'another_test_label_value'},
tables_labels={'labeled_table': {'labeled_table_key': 'labeled_table_value', 'another_labeled_table_key': 'another_labeled_table_value'}})
# then
self.assertEqual(
dataset_manager.client.get_table(test_dataset_id + '.' + 'labeled_table').labels,
{'labeled_table_key': 'labeled_table_value', 'another_labeled_table_key': 'another_labeled_table_value'}
)
def test_should_upsert_labels(self):
# given
self.dataset_manager.create_table('''
CREATE TABLE IF NOT EXISTS labeled_table (
id STRING
)''')
test_dataset_id, dataset_manager = create_dataset_manager(
config.PROJECT_ID,
self.TEST_PARTITION,
dataset_name=self.dataset_uuid,
internal_tables=self.internal_tables,
external_tables=self.external_tables,
dataset_labels={'test_label_key': 'test_label_value', 'another_test_label_key': 'another_test_label_value'},
tables_labels={'labeled_table': {'labeled_table_key': 'labeled_table_value', 'another_labeled_table_key': 'another_labeled_table_value'}})
self.assertEqual(
dataset_manager.client.get_table(test_dataset_id + '.' + 'labeled_table').labels,
{'labeled_table_key': 'labeled_table_value', 'another_labeled_table_key': 'another_labeled_table_value'}
)
# when
test_dataset_id, dataset_manager = create_dataset_manager(
config.PROJECT_ID,
self.TEST_PARTITION,
dataset_name=self.dataset_uuid,
internal_tables=self.internal_tables,
external_tables=self.external_tables,
dataset_labels={'test_label_key': 'test_label_value', 'another_test_label_key': 'another_test_label_value'},
tables_labels={'labeled_table': {'labeled_table_key': 'updated_labeled_table_value', 'new_labeled_table_key': 'new_labeled_table_value'}})
# then
self.assertEqual(
dataset_manager.client.get_table(test_dataset_id + '.' + 'labeled_table').labels,
{'labeled_table_key': 'updated_labeled_table_value', 'new_labeled_table_key': 'new_labeled_table_value'})
class WriteAppendTestCase(DatasetManagerBaseTestCase):
def test_should_write_append_to_non_partitioned_table(self):
# when
self.dataset_manager.write_append('fake_target_table', '''
SELECT 'John' AS first_name, 'Smith' AS last_name
''', partitioned=False)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{fake_target_table}`
''')))
# when
self.dataset_manager.write_append('fake_target_table', '''
SELECT 'John' AS first_name, 'Smith' AS last_name
''', partitioned=False)
# then
results = df_to_collections(self.dataset_manager.collect('SELECT * FROM `{fake_target_table}`'))
for r in results:
self.assertEqual(r['first_name'], 'John')
self.assertEqual(r['last_name'], 'Smith')
self.assertEqual(len(results), 2)
def test_should_write_append_to_partitioned_table(self):
# when
self.dataset_manager.write_append('partitioned_fake_target_table', '''
SELECT TIMESTAMP('{dt}') AS batch_date, 'John' AS first_name, 'Smith' AS last_name
''')
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{partitioned_fake_target_table}`
WHERE DATE(batch_date) = '{dt}'
''')))
# when
self.dataset_manager.write_append('partitioned_fake_target_table', '''
SELECT TIMESTAMP('{dt}') AS batch_date, 'John' AS first_name, 'Smith' AS last_name
''')
# then
results = df_to_collections(self.dataset_manager.collect('SELECT * FROM `{partitioned_fake_target_table}`'))
for r in results:
self.assertEqual(r['first_name'], 'John')
self.assertEqual(r['last_name'], 'Smith')
self.assertEqual(len(results), 2)
def test_should_return_error_when_trying_to_write_to_nonexistent_table(self):
# when
with self.assertRaises(ValueError):
self.dataset_manager.write_append('nonexistent_table', '''
SELECT 'John' AS first_name, 'Smith' AS last_name
''')
def test_should_write_to_custom_partition(self):
# when
self.dataset_manager.write_append('partitioned_fake_target_table', '''
SELECT TIMESTAMP('{dt}') AS batch_date, 'John' AS first_name, 'Smith' AS last_name
''', custom_run_datetime=self.TEST_PARTITION_PLUS_ONE)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT *
FROM `{partitioned_fake_target_table}`
WHERE DATE(batch_date) = '{dt}'
''', custom_run_datetime=self.TEST_PARTITION_PLUS_ONE)))
class WriteToTemporaryTableTestCase(DatasetManagerBaseTestCase):
def test_should_create_temporary_table_from_query_results_if_table_not_exists(self):
# when
self.dataset_manager.write_tmp('tmp_table', '''
SELECT 'John' AS first_name, 'Smith' AS last_name
''')
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{tmp_table}`
''')))
def test_should_override_existing_temporary_table_content(self):
# given
self.dataset_manager.write_tmp('tmp_table', '''
SELECT 'John' AS first_name, 'Smith' AS last_name
''')
# when
self.dataset_manager.write_tmp('tmp_table', '''
SELECT 'Neo' AS first_name, 'Neo' AS last_name
''')
# then
results = df_to_collections(self.dataset_manager.collect('SELECT * FROM `{tmp_table}`'))
self.assertEqual(len(results), 1)
self.assertEqual(results[0]['first_name'], 'Neo')
self.assertEqual(results[0]['last_name'], 'Neo')
def test_should_write_to_custom_partition(self):
# when
self.dataset_manager.write_tmp('tmp_table', '''
SELECT TIMESTAMP('{dt}') AS batch_date, 'John' AS first_name, 'Smith' AS last_name
''', custom_run_datetime=self.TEST_PARTITION_PLUS_ONE)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT *
FROM `{tmp_table}`
WHERE DATE(batch_date) = '{dt}'
''', custom_run_datetime=self.TEST_PARTITION_PLUS_ONE)))
class QueryTemplatingTestCase(DatasetManagerBaseTestCase):
def setUp(self):
external_test_dataset_id, self.external_dataset_manager = create_dataset_manager(
config.PROJECT_ID,
self.TEST_PARTITION,
internal_tables=['external_source_table'])
self.external_dataset_manager.create_table('''
CREATE TABLE IF NOT EXISTS external_source_table (
first_name STRING,
last_name STRING)
''')
self.external_dataset_manager.write_truncate('external_source_table', '''
SELECT 'John' AS first_name, 'Smith' AS last_name
''', partitioned=False)
self.test_dataset_id, self.dataset_manager = create_dataset_manager(
config.PROJECT_ID,
self.TEST_PARTITION,
internal_tables=['fake_target_table', 'fake_source_table', 'fake_source_table_another_partition', 'fake_partitioned_target_table'],
external_tables={
'external_source_table': external_test_dataset_id + '.' + 'external_source_table'
},
extras={
'first_name': 'John',
'last_name': 'Smith'
})
self.dataset_manager.create_table('''
CREATE TABLE IF NOT EXISTS fake_target_table (
first_name STRING,
last_name STRING)
''')
self.dataset_manager.create_table('''
CREATE TABLE IF NOT EXISTS fake_partitioned_target_table (
batch_date TIMESTAMP,
first_name STRING,
last_name STRING)
PARTITION BY DATE(batch_date)
''')
self.dataset_manager.create_table('''
CREATE TABLE IF NOT EXISTS fake_source_table (
batch_date TIMESTAMP,
first_name STRING,
last_name STRING)
PARTITION BY DATE(batch_date)
''')
self.dataset_manager.create_table('''
CREATE TABLE IF NOT EXISTS fake_source_table_another_partition (
batch_date TIMESTAMP,
first_name STRING,
last_name STRING)
PARTITION BY DATE(batch_date)
''')
self.dataset_manager.write_truncate('fake_source_table_another_partition', '''
SELECT 'Custom' AS first_name, 'Partition' AS last_name, TIMESTAMP('{partition_plus_one}') as batch_date
'''.format(partition_plus_one=self.TEST_PARTITION_PLUS_ONE), custom_run_datetime=self.TEST_PARTITION_PLUS_ONE)
self.dataset_manager.write_truncate('fake_source_table', '''
SELECT 'John' AS first_name, 'Smith' AS last_name, TIMESTAMP('{partition}') as batch_date
'''.format(partition=self.TEST_PARTITION))
def tearDown(self):
self.dataset_manager.remove_dataset()
self.external_dataset_manager.remove_dataset()
def test_should_resolve_internal_table_name(self):
# when
self.dataset_manager.write_truncate('fake_target_table', '''
SELECT * FROM `{fake_source_table}`
''', partitioned=False)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{fake_target_table}`
'''.format(
fake_target_table=self.test_dataset_id + '.' + 'fake_target_table'))))
def test_should_resolve_external_table_name(self):
# when
self.dataset_manager.write_truncate('fake_target_table', '''
SELECT * FROM `{external_source_table}`
''', partitioned=False)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{fake_target_table}`
''')))
def test_should_resolve_partition(self):
# when
self.dataset_manager.write_truncate('fake_target_table', '''
SELECT * FROM `{fake_source_table}`
WHERE DATE(batch_date) = '{dt}'
''', partitioned=False)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{fake_target_table}`
''')))
def test_should_resolve_tmp_table_name(self):
# given
self.dataset_manager.write_tmp('fake_tmp_table', '''
SELECT * FROM `{fake_source_table}`
''')
# when
self.dataset_manager.write_truncate('fake_target_table', '''
SELECT * FROM `{fake_tmp_table}`
''', partitioned=False)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{fake_target_table}`
''')))
def test_should_resolve_extras(self):
# when
self.dataset_manager.write_truncate('fake_target_table', '''
SELECT * FROM `{external_source_table}`
WHERE first_name = '{first_name}'
AND last_name = '{last_name}'
''', partitioned=False)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{fake_target_table}`
''')))
def test_should_resolve_custom_partition(self):
# when
self.dataset_manager.write_truncate('fake_partitioned_target_table', '''
SELECT * FROM `{fake_source_table_another_partition}`
WHERE DATE(batch_date) = '{dt}'
''', custom_run_datetime=self.TEST_PARTITION_PLUS_ONE)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{fake_partitioned_target_table}`
''', custom_run_datetime=self.TEST_PARTITION_PLUS_ONE)))
class CollectTestCase(DatasetManagerBaseTestCase):
def test_should_collect_records(self):
# given
self.dataset_manager.write_tmp('tmp_table', '''
SELECT TIMESTAMP('{dt}') AS batch_date, 'John' AS first_name, 'Smith' AS last_name
''')
# when
records = df_to_collections(self.dataset_manager.collect('''
SELECT *
FROM `{tmp_table}`
WHERE DATE(batch_date) = '{dt}'
'''))
# then
self.assertEqual(len(records), 1)
self.assertEqual(records[0]['first_name'], 'John')
self.assertEqual(records[0]['last_name'], 'Smith')
def test_should_collect_list_records(self):
# given
self.dataset_manager.write_tmp('tmp_table', '''
SELECT TIMESTAMP('{dt}') AS batch_date, 'John' AS first_name, 'Smith' AS last_name
''')
# when
records = self.dataset_manager.collect_list('''
SELECT *
FROM `{tmp_table}`
WHERE DATE(batch_date) = '{dt}'
''')
# then
assert isinstance(records, list)
self.assertEqual(len(records), 1)
self.assertEqual(records[0]['first_name'], 'John')
self.assertEqual(records[0]['last_name'], 'Smith')
# when
records = self.dataset_manager.collect_list('''
SELECT *
FROM `{tmp_table}`
WHERE DATE(batch_date) = '{dt}'
''', record_as_dict=True)
# then
self.assertEqual(records, [{
'first_name': 'John',
'last_name': 'Smith',
'batch_date': self.TEST_PARTITION_DT
}])
def test_should_collect_records_from_custom_partition(self):
# given
self.dataset_manager.write_tmp('tmp_table', '''
SELECT TIMESTAMP('{dt}') AS batch_date, 'John' AS first_name, 'Smith' AS last_name
''', custom_run_datetime=self.TEST_PARTITION_PLUS_ONE)
self.dataset_manager.write_append('tmp_table', '''
SELECT TIMESTAMP('{dt}') AS batch_date, 'Thomas' AS first_name, 'Anderson' AS last_name
''', partitioned=False)
# when
records = df_to_collections(self.dataset_manager.collect('''
SELECT *
FROM `{tmp_table}`
WHERE DATE(batch_date) = '{dt}'
''', custom_run_datetime=self.TEST_PARTITION_PLUS_ONE))
# then
self.assertEqual(len(records), 1)
self.assertEqual(records[0]['first_name'], 'John')
self.assertEqual(records[0]['last_name'], 'Smith')
class RunDryTestCase(DatasetManagerBaseTestCase):
def test_should_dry_run(self):
# given
self.dataset_manager.write_tmp('tmp_table', '''
SELECT TIMESTAMP('{dt}') AS batch_date, 'John' AS first_name, 'Smith' AS last_name
''')
# when
costs = self.dataset_manager.dry_run('''
SELECT *
FROM `{tmp_table}`
WHERE DATE(batch_date) = '{dt}'
''')
# then
self.assertTrue(costs, 'This query will process 21.0 B and cost 0.0 USD.')
def test_should_dry_run_with_custom_partition(self):
# given
self.dataset_manager.write_tmp('tmp_table', '''
SELECT TIMESTAMP('{dt}') AS batch_date, 'John' AS first_name, 'Smith' AS last_name
''')
# when
costs = self.dataset_manager.dry_run('''
SELECT *
FROM `{tmp_table}`
WHERE DATE(batch_date) = '{dt}'
''', custom_run_datetime=self.TEST_PARTITION_PLUS_ONE)
# then
self.assertTrue(costs, 'This query will process 21.0 B and cost 0.0 USD.')
class LoadTableFromDataFrameTestCase(DatasetManagerBaseTestCase):
def test_should_load_df_to_non_partitioned_table(self):
# given
df = pd.DataFrame([['John', 'Smith']], columns=['first_name', 'last_name'])
# when
self.dataset_manager.load_table_from_dataframe('fake_target_table', df, partitioned=False)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{fake_target_table}`
''')))
# when
self.dataset_manager.load_table_from_dataframe('fake_target_table', df, partitioned=False)
# then
results = df_to_collections(self.dataset_manager.collect('SELECT * FROM `{fake_target_table}`'))
for r in results:
self.assertEqual(r['first_name'], 'John')
self.assertEqual(r['last_name'], 'Smith')
self.assertEqual(len(results), 2)
def test_should_load_df_to_partitioned_table(self):
# given
df = pd.DataFrame([['John', 'Smith', pd.Timestamp(self.TEST_PARTITION, tz='utc')]], columns=['first_name', 'last_name', 'batch_date'])
# when
self.dataset_manager.load_table_from_dataframe('partitioned_fake_target_table', df)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{partitioned_fake_target_table}`
WHERE DATE(batch_date) = '{dt}'
''')))
# when
self.dataset_manager.load_table_from_dataframe('partitioned_fake_target_table', df)
# then
results = df_to_collections(self.dataset_manager.collect('SELECT * FROM `{partitioned_fake_target_table}`'))
for r in results:
self.assertEqual(r['first_name'], 'John')
self.assertEqual(r['last_name'], 'Smith')
self.assertEqual(len(results), 2)
def test_should_create_table_when_loading_df_to_nonexistent_table(self):
# given
df = pd.DataFrame([['John', 'Smith']], columns=['first_name', 'last_name'])
# when
self.dataset_manager.load_table_from_dataframe('loaded_table', df, partitioned=False)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{loaded_table}`
''')))
def test_should_load_df_to_custom_partition(self):
# given
df = pd.DataFrame([['John', 'Smith', pd.Timestamp(self.TEST_PARTITION_PLUS_ONE, tz='utc')]],
columns=['first_name', 'last_name', 'batch_date'])
# when
self.dataset_manager.load_table_from_dataframe(
'partitioned_fake_target_table', df, custom_run_datetime=self.TEST_PARTITION_PLUS_ONE)
# then
self.assertTrue(df_to_collections(self.dataset_manager.collect('''
SELECT * FROM `{partitioned_fake_target_table}`
WHERE DATE(batch_date) = '{dt}'
''', custom_run_datetime=self.TEST_PARTITION_PLUS_ONE)))
class CreateTableFromSchemaTestCase(DatasetManagerBaseTestCase):
def test_should_create_table_from_dict_schema(self):
# when
self.dataset_manager.create_table_from_schema('example_test_table', [
{
"mode": "NULLABLE",
"name": "example_field",
"type": "STRING"
},
])
# then
self.table_should_exists()
def test_should_create_table_json_file_schema(self):
with tempfile.NamedTemporaryFile() as f:
# given
f.write(json.dumps([
{
"mode": "NULLABLE",
"name": "example_field",
"type": "STRING"
}
]).encode('utf-8'))
f.seek(0)
# when
self.dataset_manager.create_table_from_schema('example_test_table', Path(f.name))
def test_should_create_table_from_table_object(self):
# given
table_id = f'{self.dataset_manager.project_id}.{self.dataset_manager.dataset_name}.example_test_table'
table = Table(table_id, schema=[
{
"mode": "NULLABLE",
"name": "example_field",
"type": "STRING"
},
])
table.time_partitioning = TimePartitioning()
# when
self.dataset_manager.create_table_from_schema('example_test_table', schema=None, table=table)
# then
self.table_should_exists()
def test_should_throw_an_exception_when_invalid_argument_combination_provided(self):
# given
schema = [
{
"mode": "NULLABLE",
"name": "example_field",
"type": "STRING"
},
]
table = Table('bla.bla.example_test_table', schema=schema)
# then
with self.assertRaises(ValueError) as e:
# when both arguments provided in the same time
self.dataset_manager.create_table_from_schema('example_test_table', schema, table)
# then
with self.assertRaises(ValueError) as e:
# when non of the required arguments provided
self.dataset_manager.create_table_from_schema('example_test_table', None, None)
def table_should_exists(self):
self.assertTrue(self.dataset_manager._table_exists('example_test_table'))
self.dataset_manager.write_truncate('example_test_table', '''
SELECT 'John' AS example_field
''')
self.assertTrue(self.dataset_manager.collect_list('''
SELECT *
FROM `{example_test_table}`
WHERE _PARTITIONTIME = TIMESTAMP('{dt}')
'''))
class InsertTestCase(DatasetManagerBaseTestCase):
def test_should_insert_records_to_partitioned_table(self):
with tempfile.NamedTemporaryFile() as f:
# given
f.write(json.dumps([
{
"example_field": "example_field_value"
}
]).encode('utf-8'))
f.seek(0)
# and
self.dataset_manager.create_table_from_schema('example_test_table', [
{
"mode": "NULLABLE",
"name": "example_field",
"type": "STRING"
},
])
# when adding record from file
self.dataset_manager.insert('example_test_table', Path(f.name))
# and from memory
self.dataset_manager.insert('example_test_table', [{
"example_field": "example_field_value"
}])
# then
expected_result = [
{"example_field": "example_field_value"},
{"example_field": "example_field_value"},
]
actual_result = self.dataset_manager.collect('''
SELECT *
FROM `{example_test_table}`
WHERE _PARTITIONTIME = TIMESTAMP('{dt}')
''').to_dict(orient='records')
self.assertEqual(expected_result, actual_result)
def test_should_insert_records_to_non_partitioned_table(self):
# given
table_id = f'{self.dataset_manager.project_id}.{self.dataset_manager.dataset_name}.example_test_table'
table = Table(table_id, schema=[
{
"mode": "NULLABLE",
"name": "example_field",
"type": "STRING"
},
])
self.dataset_manager.create_table_from_schema('example_test_table', table=table)
self.dataset_manager.insert('example_test_table', [{
"example_field": "example_field_value"
}], partitioned=False)
# then
expected_result = [
{"example_field": "example_field_value"}
]
actual_result = self.dataset_manager.collect('''
SELECT *
FROM `{example_test_table}`
''').to_dict(orient='records')
self.assertEqual(expected_result, actual_result)
if __name__ == '__main__':
main()
| 36.046307 | 150 | 0.63868 | 3,663 | 32,694 | 5.317499 | 0.059514 | 0.095595 | 0.102577 | 0.040148 | 0.858456 | 0.820721 | 0.791611 | 0.770356 | 0.738782 | 0.709262 | 0 | 0.00188 | 0.251422 | 32,694 | 906 | 151 | 36.086093 | 0.793985 | 0.021686 | 0 | 0.692691 | 0 | 0 | 0.342412 | 0.075586 | 0 | 0 | 0 | 0 | 0.121262 | 1 | 0.083056 | false | 0 | 0.019934 | 0.001661 | 0.131229 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ebba783392623f8f2962c0159b1aae5d22e903c1 | 3,282 | py | Python | gan/dcgan/modules.py | valentingol/GANJax | ebcb8f4412277da2d9bda80282c2842d111bf393 | [
"MIT"
] | 9 | 2021-11-20T18:25:37.000Z | 2021-12-13T23:32:35.000Z | gan/dcgan/modules.py | valentingol/GANJax | ebcb8f4412277da2d9bda80282c2842d111bf393 | [
"MIT"
] | 4 | 2021-12-04T15:30:58.000Z | 2022-01-20T13:13:32.000Z | gan/dcgan/modules.py | valentingol/GANJax | ebcb8f4412277da2d9bda80282c2842d111bf393 | [
"MIT"
] | 3 | 2022-01-18T00:02:30.000Z | 2022-03-10T09:22:43.000Z | import haiku as hk
from haiku.initializers import Constant, RandomNormal
import jax
from jax import numpy as jnp
class DCGenerator(hk.Module):
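    """DCGAN generator: reshapes the latent vector into a 1x1 feature map and
    upsamples it with a stack of transposed convolutions, using BatchNorm + ReLU
    between layers and tanh on the output."""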
def __init__(self, channels, ker_shapes, strides, padding, name=None):
super().__init__(name=name)
self.name = name
self.channels = channels
self.ker_shapes = ker_shapes
self.strides = strides
self.padding = padding
self.n_layers = len(channels)
if isinstance(ker_shapes, int):
ker_shapes = [ker_shapes] * self.n_layers
if isinstance(strides, int):
strides = [strides] * self.n_layers
if isinstance(padding, int):
padding = [padding] * self.n_layers
self.layers = [
hk.Conv2DTranspose(
channels[i],
kernel_shape=ker_shapes[i],
stride=strides[i],
padding='VALID' if padding[i]==0 else 'SAME',
with_bias=False,
w_init=RandomNormal(stddev=0.02, mean=0.0)
)
for i in range(self.n_layers)
]
self.batch_norms = [
hk.BatchNorm(False, False, 0.99) for _ in range(self.n_layers - 1)
]
def forward(self, z, is_training):
x = jnp.reshape(z, (-1, 1, 1, z.shape[-1]))
for i in range(self.n_layers - 1):
x = self.layers[i](x)
x = self.batch_norms[i](x, is_training)
x = jax.nn.relu(x)
x = self.layers[-1](x)
x = jnp.tanh(x)
return x
def __call__(self, z, is_training=True):
return self.forward(z, is_training)
class DCDiscriminator(hk.Module):
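    """DCGAN discriminator: downsamples the input with strided convolutions,
    using BatchNorm + leaky ReLU between layers, and squeezes the final feature
    map into per-example logits."""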
def __init__(self, channels, ker_shapes, strides, padding, name=None):
super().__init__(name=name)
self.name = name
self.channels = channels
self.ker_shapes = ker_shapes
self.strides = strides
self.padding = padding
self.n_layers = len(channels)
if isinstance(ker_shapes, int):
ker_shapes = [ker_shapes] * self.n_layers
if isinstance(strides, int):
strides = [strides] * self.n_layers
if isinstance(padding, int):
padding = [padding] * self.n_layers
self.layers = [
hk.Conv2D(channels[i],
kernel_shape=ker_shapes[i],
stride=strides[i],
padding='VALID' if padding[i]==0 else 'SAME',
w_init=RandomNormal(stddev=0.02, mean=0.0),
b_init=Constant(0.0)
)
for i in range(self.n_layers)
]
self.batch_norms = [
hk.BatchNorm(True, True, 0.99) for _ in range(self.n_layers - 1)
]
def forward(self, x, is_training):
if x.ndim == 3:
x = jnp.expand_dims(x, axis=-1)
for i in range(self.n_layers - 1):
x = self.layers[i](x)
x = self.batch_norms[i](x, is_training)
x = jax.nn.leaky_relu(x, 0.2)
x = self.layers[-1](x)
x = jnp.squeeze(x)
return x
def __call__(self, z, is_training=True):
return self.forward(z, is_training)
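# Minimal usage sketch (illustrative only; the channel/stride/padding values
# below are made-up assumptions, not taken from this repository). Because the
# modules use hk.BatchNorm, they have to be wrapped with hk.transform_with_state:
#
#     def gen_fn(z, is_training=True):
#         gen = DCGenerator(channels=[256, 128, 64, 1], ker_shapes=4,
#                           strides=[1, 2, 2, 2], padding=[0, 1, 1, 1])
#         return gen(z, is_training)
#
#     gen = hk.transform_with_state(gen_fn)
#     z = jnp.zeros((8, 100))
#     params, state = gen.init(jax.random.PRNGKey(0), z)
#     images, state = gen.apply(params, state, None, z)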
| 33.151515 | 78 | 0.531688 | 410 | 3,282 | 4.07561 | 0.190244 | 0.075404 | 0.09216 | 0.043088 | 0.824656 | 0.824656 | 0.824656 | 0.804309 | 0.804309 | 0.767205 | 0 | 0.017094 | 0.358318 | 3,282 | 98 | 79 | 33.489796 | 0.776353 | 0 | 0 | 0.642857 | 0 | 0 | 0.005484 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.047619 | 0.02381 | 0.190476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cce0ba85995b3b7802974d57f021bf5060ed20fc | 160 | py | Python | elm-finder/apps/package/tests/__init__.py | martin-jahn/elm-finder | 7510e38d52eebaf462ae5c5ce961e4884b1709bd | [
"MIT"
] | 2 | 2019-04-28T21:32:46.000Z | 2019-05-13T05:27:09.000Z | elm-finder/apps/package/tests/__init__.py | martin-jahn/elm-finder | 7510e38d52eebaf462ae5c5ce961e4884b1709bd | [
"MIT"
] | null | null | null | elm-finder/apps/package/tests/__init__.py | martin-jahn/elm-finder | 7510e38d52eebaf462ae5c5ce961e4884b1709bd | [
"MIT"
] | null | null | null | # @@from package.tests.test_sourceforge import *
from apps.package import *
from apps.package.tests.test_signals import SignalTests
from .test_models import *
| 26.666667 | 55 | 0.80625 | 22 | 160 | 5.727273 | 0.454545 | 0.190476 | 0.253968 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1125 | 160 | 5 | 56 | 32 | 0.887324 | 0.2875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ccf73860d755b2c911ef1eefcc231fd9a81c642e | 93 | py | Python | sources/algorithms/__init__.py | tipech/OverlapGraph | 0aa132802f2e174608ce33c6bfc24ff14551bf4a | [
"MIT"
] | null | null | null | sources/algorithms/__init__.py | tipech/OverlapGraph | 0aa132802f2e174608ce33c6bfc24ff14551bf4a | [
"MIT"
] | 1 | 2018-10-07T08:06:01.000Z | 2018-10-07T08:06:01.000Z | sources/algorithms/__init__.py | tipech/OverlapGraph | 0aa132802f2e174608ce33c6bfc24ff14551bf4a | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from .sweepln import *
from .rigctor import *
from .queries import *
| 13.285714 | 22 | 0.709677 | 13 | 93 | 5.076923 | 0.692308 | 0.30303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172043 | 93 | 6 | 23 | 15.5 | 0.857143 | 0.215054 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c610055354bef29c52e0a905439189df6e84da40 | 1,122 | py | Python | util/decode.py | hazelcast-incubator/hazelcast-python-client | 5ec6c908916a6adef648059314923c0dbf71557b | [
"Apache-2.0"
] | null | null | null | util/decode.py | hazelcast-incubator/hazelcast-python-client | 5ec6c908916a6adef648059314923c0dbf71557b | [
"Apache-2.0"
] | null | null | null | util/decode.py | hazelcast-incubator/hazelcast-python-client | 5ec6c908916a6adef648059314923c0dbf71557b | [
"Apache-2.0"
] | null | null | null | __author__ = 'jonathanbrodie'
import struct
'''
To-do: properly decode bytes
'''
def decodeboolean(bytesobject):
    if not bytesobject:
        return bytearray(ctypes.c_uint8(0))
    else:
        return bytearray(ctypes.c_uint8(1))
def decodeuint8(target):
return bytearray(ctypes.c_uint8(target))
def decodeuint16(target):
return bytearray(ctypes.c_uint16(target))
def decodeuint32(target):
return bytearray(ctypes.c_uint32(target))
def decodeuint64(target):
return bytearray(ctypes.c_uint64(target))
def decodeint8(target):
return bytearray(ctypes.c_int8(target))
def decodeint16(target):
return bytearray(ctypes.c_int16(target))
def decodeint32(target):
return bytearray(ctypes.c_int32(target))
def decodeint64(target):
return bytearray(ctypes.c_int64(target))
def decodefloat(target):
return bytearray(ctypes.c_float(target))
def decodedouble(target):
return bytearray(ctypes.c_double(target))
def decodestring(string):
    newstring = string.decode("UTF8")
    return decodeuint32(len(newstring)) + newstring.encode("UTF8")
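# Note: despite their "decode" names, the helpers in this module serialize
# Python values into byte arrays using the platform's native ctypes layout.
# Rough illustration (assuming a little-endian platform):
#
#     decodeint32(7)      -> bytearray(b'\x07\x00\x00\x00')
#     decodestring(b"hi") -> 4-byte length prefix followed by b'hi'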
def decodebytes(bytes):
return decodeuint32(len(bytes))+bytes | 29.526316 | 49 | 0.753119 | 139 | 1,122 | 5.964029 | 0.352518 | 0.217129 | 0.303981 | 0.318456 | 0.408926 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036999 | 0.132799 | 1,122 | 38 | 50 | 29.526316 | 0.815005 | 0 | 0 | 0 | 0 | 0 | 0.016559 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.40625 | false | 0 | 0.03125 | 0.34375 | 0.875 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
c63a2566a9dae25b976b0ae8d4c09070d3344b16 | 2,942 | py | Python | tests/test_physics.py | buxx/synergine2 | 843988df5e653a413eca8c486ee93f5e9e884f37 | [
"MIT"
] | 1 | 2021-02-26T15:36:04.000Z | 2021-02-26T15:36:04.000Z | tests/test_physics.py | buxx/synergine2 | 843988df5e653a413eca8c486ee93f5e9e884f37 | [
"MIT"
] | 182 | 2017-03-06T10:20:19.000Z | 2021-06-10T14:12:36.000Z | tests/test_physics.py | buxx/synergine2 | 843988df5e653a413eca8c486ee93f5e9e884f37 | [
"MIT"
] | 1 | 2018-01-01T15:38:24.000Z | 2018-01-01T15:38:24.000Z | # coding: utf-8
import pytest
from synergine2_xyz.physics import Matrixes
from tests import BaseTest
class TestVisibilityMatrix(BaseTest):
def test_initialize_empty_matrix(self):
visibility = Matrixes()
visibility.initialize_empty_matrix('testing', matrix_width=3, matrix_height=2, value_structure=['opacity'])
matrix = visibility.get_matrix('testing')
assert isinstance(matrix, list)
assert [(0.0,), (0.0,), (0.0,)] == matrix[0]
assert [(0.0,), (0.0,), (0.0,)] == matrix[1]
def test_update_matrix(self):
visibility = Matrixes()
visibility.initialize_empty_matrix('testing', matrix_width=3, matrix_height=2, value_structure=['opacity'])
visibility.update_matrix('testing', x=2, y=1, value=(0.5,))
visibility.update_matrix('testing', x=0, y=0, value=(0.7,))
matrix = visibility.get_matrix('testing')
assert [(0.7,), (0.0,), (0.0,)] == matrix[0]
assert [(0.0,), (0.0,), (0.5,)] == matrix[1]
def test_get_path_positions(self):
visibility = Matrixes()
visibility.initialize_empty_matrix('testing', matrix_width=3, matrix_height=2, value_structure=['opacity'])
visibility.update_matrix('testing', x=2, y=1, value=(0.5,))
visibility.update_matrix('testing', x=0, y=0, value=(0.7,))
path_positions = visibility.get_path_positions(from_=(0, 0), to=(2, 1))
assert [(0, 0), (1, 0), (2, 1)] == path_positions
def test_get_path_values(self):
visibility = Matrixes()
visibility.initialize_empty_matrix('testing', matrix_width=3, matrix_height=2, value_structure=['opacity'])
visibility.update_matrix('testing', x=2, y=1, value=(0.5,))
visibility.update_matrix('testing', x=0, y=0, value=(0.7,))
path_positions = visibility.get_path_positions(from_=(0, 0), to=(2, 1))
path_values = visibility.get_values_for_path('testing', path_positions=path_positions)
assert [(0.7,), (0.0,), (0.5,)] == path_values
def test_get_path_value(self):
visibility = Matrixes()
visibility.initialize_empty_matrix('testing', matrix_width=3, matrix_height=2, value_structure=['opacity'])
visibility.update_matrix('testing', x=2, y=1, value=(0.5,))
visibility.update_matrix('testing', x=0, y=0, value=(0.7,))
path_positions = visibility.get_path_positions(from_=(0, 0), to=(2, 1))
path_values = visibility.get_values_for_path('testing', path_positions=path_positions, value_name='opacity')
assert [0.7, 0.0, 0.5] == path_values
def test_get_value(self):
visibility = Matrixes()
visibility.initialize_empty_matrix('testing', matrix_width=3, matrix_height=2, value_structure=['opacity'])
visibility.update_matrix('testing', x=2, y=1, value=(0.5,))
value = visibility.get_value('testing', x=2, y=1, value_name='opacity')
assert 0.5 == value
| 45.96875 | 116 | 0.653637 | 403 | 2,942 | 4.55335 | 0.114144 | 0.027248 | 0.024523 | 0.019619 | 0.846866 | 0.828338 | 0.777112 | 0.777112 | 0.768937 | 0.768937 | 0 | 0.046589 | 0.182869 | 2,942 | 63 | 117 | 46.698413 | 0.716722 | 0.004419 | 0 | 0.541667 | 0 | 0 | 0.066963 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 1 | 0.125 | false | 0 | 0.0625 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d6ae293e57a3ddd4f4cc766f48e9d365214cc3aa | 3,690 | py | Python | nexus-ingest/nexus-xd-python-modules/tests/processorchain_test.py | dataplumber/nexus | f25a89e85eba098da9c6db1ff3d408dae8a6b310 | [
"Apache-2.0"
] | 23 | 2016-08-09T22:45:14.000Z | 2020-02-17T08:18:29.000Z | nexus-ingest/nexus-xd-python-modules/tests/processorchain_test.py | lewismc/incubator-sdap-nexus | ff98fa346303431542b8391cc2a1bf7561d1bd03 | [
"Apache-2.0"
] | 6 | 2017-04-27T21:22:17.000Z | 2021-06-01T21:45:52.000Z | nexus-ingest/nexus-xd-python-modules/tests/processorchain_test.py | dataplumber/nexus | f25a89e85eba098da9c6db1ff3d408dae8a6b310 | [
"Apache-2.0"
] | 5 | 2016-08-31T13:47:29.000Z | 2017-11-14T21:45:22.000Z | """
Copyright (c) 2016 Jet Propulsion Laboratory,
California Institute of Technology. All rights reserved
"""
import unittest
from os import environ, path
import nexusproto.NexusContent_pb2 as nexusproto
class TestRunChainMethod(unittest.TestCase):
def test_run_chain_read_filter_all(self):
environ['CHAIN'] = 'nexusxd.tilereadingprocessor.read_grid_data:nexusxd.emptytilefilter.filter_empty_tiles'
environ['INBOUND_PORT'] = '7890'
environ['VARIABLE'] = 'analysed_sst'
environ['OUTBOUND_PORT'] = '7891'
environ['LONGITUDE'] = 'lon'
environ['TIME'] = 'time'
environ['READER'] = 'GRIDTILE'
environ['LATITUDE'] = 'lat'
test_file = path.join(path.dirname(__file__), 'datafiles', 'empty_mur.nc4')
from nexusxd import processorchain
gen = processorchain.run_chain(None, "time:0:1,lat:0:1,lon:0:1;time:0:1,lat:1:2,lon:0:1;file://%s" % test_file)
for message in gen:
self.fail("Should not produce any messages. Message: %s" % message)
def test_run_chain_read_filter_none(self):
environ['CHAIN'] = 'nexusxd.tilereadingprocessor.read_grid_data:nexusxd.emptytilefilter.filter_empty_tiles'
environ['INBOUND_PORT'] = '7890'
environ['VARIABLE'] = 'analysed_sst'
environ['OUTBOUND_PORT'] = '7891'
environ['LONGITUDE'] = 'lon'
environ['TIME'] = 'time'
environ['READER'] = 'GRIDTILE'
environ['LATITUDE'] = 'lat'
test_file = path.join(path.dirname(__file__), 'datafiles', 'not_empty_mur.nc4')
from nexusxd import processorchain
results = list(processorchain.run_chain(None, "time:0:1,lat:0:1,lon:0:1;time:0:1,lat:1:2,lon:0:1;file://%s" % test_file))
self.assertEquals(2, len(results))
def test_run_chain_read_filter_kelvin_summarize(self):
environ['CHAIN'] = 'nexusxd.tilereadingprocessor.read_grid_data:nexusxd.emptytilefilter.filter_empty_tiles:nexusxd.kelvintocelsius.transform:nexusxd.tilesumarizingprocessor.summarize_nexustile'
environ['INBOUND_PORT'] = '7890'
environ['VARIABLE'] = 'analysed_sst'
environ['OUTBOUND_PORT'] = '7891'
environ['LONGITUDE'] = 'lon'
environ['TIME'] = 'time'
environ['READER'] = 'GRIDTILE'
environ['LATITUDE'] = 'lat'
test_file = path.join(path.dirname(__file__), 'datafiles', 'not_empty_mur.nc4')
from nexusxd import processorchain
results = list(
processorchain.run_chain(None, "time:0:1,lat:0:1,lon:0:1;time:0:1,lat:1:2,lon:0:1;file://%s" % test_file))
self.assertEquals(2, len(results))
def test_run_chain_partial_empty(self):
environ[
'CHAIN'] = 'nexusxd.tilereadingprocessor.read_grid_data:nexusxd.emptytilefilter.filter_empty_tiles:nexusxd.kelvintocelsius.transform:nexusxd.tilesumarizingprocessor.summarize_nexustile'
environ['INBOUND_PORT'] = '7890'
environ['VARIABLE'] = 'analysed_sst'
environ['OUTBOUND_PORT'] = '7891'
environ['LONGITUDE'] = 'lon'
environ['TIME'] = 'time'
environ['READER'] = 'GRIDTILE'
environ['LATITUDE'] = 'lat'
test_file = path.join(path.dirname(__file__), 'datafiles', 'partial_empty_mur.nc4')
from nexusxd import processorchain
results = list(
processorchain.run_chain(None, "time:0:1,lat:0:10,lon:0:10;time:0:1,lat:489:499,lon:0:10;file://%s" % test_file))
self.assertEquals(1, len(results))
tile = nexusproto.NexusTile.FromString(results[0])
self.assertTrue(tile.summary.HasField('bbox'), "bbox is missing")
if __name__ == '__main__':
unittest.main()
| 41.931818 | 201 | 0.665312 | 443 | 3,690 | 5.329571 | 0.230248 | 0.014401 | 0.02033 | 0.030496 | 0.802626 | 0.802626 | 0.76493 | 0.747141 | 0.747141 | 0.747141 | 0 | 0.033602 | 0.193496 | 3,690 | 87 | 202 | 42.413793 | 0.759745 | 0.027642 | 0 | 0.625 | 0 | 0.09375 | 0.375978 | 0.217877 | 0 | 0 | 0 | 0 | 0.0625 | 1 | 0.0625 | false | 0 | 0.109375 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d6bfa04314249e774ddf8aef090c60fc970b7154 | 4,675 | py | Python | christmas_lights/test_christmas_lights.py | ivandelcz/katas | b7d9a979181681db01c48ddd3d4e591b0dcc8791 | [
"MIT"
] | null | null | null | christmas_lights/test_christmas_lights.py | ivandelcz/katas | b7d9a979181681db01c48ddd3d4e591b0dcc8791 | [
"MIT"
] | null | null | null | christmas_lights/test_christmas_lights.py | ivandelcz/katas | b7d9a979181681db01c48ddd3d4e591b0dcc8791 | [
"MIT"
] | null | null | null | from christmas_lights import (
Panel,
Light,
Coordinate
)
def test_given_a_light_then_it_has_power_status():
power_state = Light.OFF
light = Light(power_state)
assert isinstance(light, Light)
assert light.status == power_state
def test_given_a_turn_off_light_then_it_has_turn_on_action():
light = Light(Light.OFF)
light.turn_on()
assert light.status == Light.ON
def test_given_a_turn_off_light_then_it_has_turn_off_action():
light = Light(Light.OFF)
light.turn_off()
assert light.status == Light.OFF
def test_given_a_turn_off_light_then_it_has_toggle_to_on_action():
light = Light(Light.OFF)
light.toggle()
assert light.status == Light.ON
def test_given_a_turn_on_light_then_it_has_toggle_to_off_action():
light = Light(Light.ON)
light.toggle()
assert light.status == Light.OFF
def test_given_a_turn_on_light_then_it_has_turn_off_action():
light = Light(Light.ON)
light.turn_off()
assert light.status == Light.OFF
def test_given_a_coordinate_it_has_x_and_y():
x = 0
y = 1
coord = Coordinate(x, y)
assert coord.x == x
assert coord.y == y
def test_given_a_panel_then_it_is_panel_with_set_size():
size = 1000
panel = Panel(size)
assert isinstance(panel, Panel)
assert panel.size == size
def test_given_one_cell_panel_then_it_has_one_OFF_light():
panel = Panel(1)
assert panel.coordinate(x=0, y=0).status == Light.OFF
def test_given_one_cell_panel_then_it_can_be_ON_light():
panel = Panel(1)
start_coordinate = Coordinate(0, 0)
end_coordinate = Coordinate(0, 0)
panel.turn_on(start_coordinate, end_coordinate)
assert panel.coordinate(x=0, y=0).status == Light.ON
def test_given_four_cell_panel_then_it_has_four_OFF_lights():
panel = Panel(4)
assert panel.coordinate(0, 0).status == Light.OFF
assert panel.coordinate(0, 1).status == Light.OFF
assert panel.coordinate(1, 0).status == Light.OFF
assert panel.coordinate(1, 1).status == Light.OFF
def test_given_four_cell_panel_then_power_on_first_row():
panel = Panel(4)
start_coordinate = Coordinate(0, 0)
end_coordinate = Coordinate(1, 0)
panel.turn_on(start_coordinate, end_coordinate)
assert panel.coordinate(1, 1).status == Light.OFF
assert panel.coordinate(0, 1).status == Light.OFF
assert panel.coordinate(1, 0).status == Light.ON
assert panel.coordinate(0, 0).status == Light.ON
def test_given_four_cell_panel_then_power_second_row():
panel = Panel(4)
start_coordinate = Coordinate(0, 1)
end_coordinate = Coordinate(1, 1)
panel.turn_on(start_coordinate, end_coordinate)
assert panel.coordinate(1, 1).status == Light.ON
assert panel.coordinate(0, 1).status == Light.ON
assert panel.coordinate(1, 0).status == Light.OFF
assert panel.coordinate(0, 0).status == Light.OFF
def test_given_four_cell_panel_then_power_both_rows():
panel = Panel(4)
start_coordinate = Coordinate(0, 0)
end_coordinate = Coordinate(1, 1)
panel.turn_on(start_coordinate, end_coordinate)
assert panel.coordinate(0, 0).status == Light.ON
assert panel.coordinate(0, 1).status == Light.ON
assert panel.coordinate(1, 0).status == Light.ON
assert panel.coordinate(1, 1).status == Light.ON
def test_given_four_cell_panel_then_power_on_first_row_and_toggle_all_panel():
panel = Panel(4)
panel.turn_on(Coordinate(0, 0), Coordinate(1, 0))
panel.toggle(Coordinate(0, 0), Coordinate(1, 1))
assert panel.coordinate(1, 1).status == Light.ON
assert panel.coordinate(0, 1).status == Light.ON
assert panel.coordinate(1, 0).status == Light.OFF
assert panel.coordinate(0, 0).status == Light.OFF
def test_given_four_cell_panel_then_power_on_and_then_off():
panel = Panel(4)
panel.turn_on(Coordinate(0, 0), Coordinate(1, 0))
panel.turn_off(Coordinate(0, 0), Coordinate(1, 1))
assert panel.coordinate(0, 0).status == Light.OFF
assert panel.coordinate(0, 1).status == Light.OFF
assert panel.coordinate(1, 0).status == Light.OFF
assert panel.coordinate(1, 1).status == Light.OFF
def test_given_four_cell_panel_and_power_on_second_row_then_two_lights_on():
panel = Panel(4)
start_coordinate = Coordinate(0, 1)
end_coordinate = Coordinate(1, 1)
panel.turn_on(start_coordinate, end_coordinate)
assert panel.lights_on() == 2
def test_given_four_cell_panel_and_power_on_all_then_four_lights_on():
panel = Panel(4)
start_coordinate = Coordinate(0, 0)
end_coordinate = Coordinate(1, 1)
panel.turn_on(start_coordinate, end_coordinate)
assert panel.lights_on() == 4
| 27.662722 | 78 | 0.721283 | 722 | 4,675 | 4.354571 | 0.072022 | 0.108461 | 0.173664 | 0.083969 | 0.840649 | 0.836832 | 0.827926 | 0.794529 | 0.756997 | 0.709924 | 0 | 0.028336 | 0.169626 | 4,675 | 168 | 79 | 27.827381 | 0.781556 | 0 | 0 | 0.594595 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.351351 | 1 | 0.162162 | false | 0 | 0.009009 | 0 | 0.171171 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ba3a704cd570ac368b7d07967c41f359477e03a6 | 119 | py | Python | double9.py | TuxStory/Python3 | 4c1b2291d1613b32aa36b62b0b881ea40b423cce | [
"MIT"
] | null | null | null | double9.py | TuxStory/Python3 | 4c1b2291d1613b32aa36b62b0b881ea40b423cce | [
"MIT"
] | null | null | null | double9.py | TuxStory/Python3 | 4c1b2291d1613b32aa36b62b0b881ea40b423cce | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
#-*- coding: utf-8 -*-
for j in range(1,10):
for i in range(1,10):
print (i*j,end=" ")
print ()
| 14.875 | 23 | 0.537815 | 23 | 119 | 2.782609 | 0.652174 | 0.21875 | 0.25 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094737 | 0.201681 | 119 | 7 | 24 | 17 | 0.578947 | 0.310924 | 0 | 0 | 0 | 0 | 0.0125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
ba7b1c9d9fd0790e1815bcea71f0b0fb7f689013 | 11,606 | py | Python | tests/test_mesh.py | Daiver/pygeom_tools | ed89d8cab2d5956c7c680da1ce6335f4c0a31c70 | [
"MIT"
] | 9 | 2019-10-29T18:39:47.000Z | 2022-03-18T11:44:12.000Z | tests/test_mesh.py | Daiver/pygeom_tools | ed89d8cab2d5956c7c680da1ce6335f4c0a31c70 | [
"MIT"
] | null | null | null | tests/test_mesh.py | Daiver/pygeom_tools | ed89d8cab2d5956c7c680da1ce6335f4c0a31c70 | [
"MIT"
] | 1 | 2021-06-24T08:34:56.000Z | 2021-06-24T08:34:56.000Z | import unittest
import numpy as np
from geom_tools import Mesh
from geom_tools.bounding_box import BoundingBox
class TestMesh(unittest.TestCase):
def test_mesh_has_uv01(self):
mesh1 = Mesh(vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2]])
self.assertFalse(mesh1.has_uv())
def test_mesh_has_uv02(self):
mesh1 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2]],
texture_vertices=np.arange(3), texture_polygon_vertex_indices=[[2, 1, 0]])
self.assertTrue(mesh1.has_uv())
def test_mesh_has_normals01(self):
mesh1 = Mesh(vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2]])
self.assertFalse(mesh1.has_normals())
def test_mesh_has_normals02(self):
mesh1 = Mesh(vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2]], normals=np.array([1, 2, 3]))
self.assertTrue(mesh1.has_normals())
# TODO: cover all comparison branches
def test_mesh_comparison01(self):
mesh1 = Mesh(vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2]])
mesh2 = Mesh(vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2]])
self.assertTrue(mesh1 == mesh2)
def test_mesh_comparison02(self):
mesh1 = Mesh(vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2]])
mesh2 = Mesh(vertices=np.arange(2), polygon_vertex_indices=[[0, 1, 2]])
self.assertTrue(mesh1 != mesh2)
def test_mesh_comparison03(self):
mesh1 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4]],
triangle_vertex_indices=np.array([[0, 1, 2], [0, 2, 4]]))
mesh2 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4]],
triangle_vertex_indices=np.array([[0, 1, 2], [0, 2, 4]]))
self.assertTrue(mesh1 == mesh2)
def test_mesh_comparison04(self):
mesh1 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4]],
triangle_vertex_indices=np.array([[0, 1, 2], [0, 2, 4]]))
mesh2 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4]],
triangle_vertex_indices=np.array([[0, 1, 2], [0, 6, 4]]))
self.assertTrue(mesh1 != mesh2)
def test_mesh_comparison05(self):
mesh1 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4]],
triangle_vertex_indices=np.array([[0, 1, 2], [0, 2, 4]]),
polygon_groups=[-1],
)
mesh2 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4]],
triangle_vertex_indices=np.array([[0, 1, 2], [0, 2, 4]]),
)
self.assertTrue(mesh1 == mesh2)
self.assertTrue(mesh2 == mesh1)
def test_mesh_comparison06(self):
mesh1 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4]],
triangle_vertex_indices=np.array([[0, 1, 2], [0, 2, 4]]),
polygon_groups=[-1],
)
mesh2 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4]],
triangle_vertex_indices=np.array([[0, 1, 2], [0, 2, 4]]),
polygon_groups=[-1],
)
self.assertTrue(mesh1 == mesh2)
self.assertTrue(mesh2 == mesh1)
def test_mesh_comparison07(self):
mesh1 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4]],
triangle_vertex_indices=np.array([[0, 1, 2], [0, 2, 4]]),
polygon_groups=[0],
)
mesh2 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4]],
triangle_vertex_indices=np.array([[0, 1, 2], [0, 2, 4]]),
polygon_groups=[-1],
)
self.assertTrue(mesh1 != mesh2)
self.assertTrue(mesh2 != mesh1)
def test_mesh_comparison08(self):
mesh1 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4], [0, 1, 2, 4]],
polygon_groups=[1, 0],
)
mesh2 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4], [0, 1, 2, 4]],
polygon_groups=[1, 0],
)
self.assertTrue(mesh1 == mesh2)
self.assertTrue(mesh2 == mesh1)
def test_mesh_comparison09(self):
mesh1 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4], [0, 1, 2, 4]],
polygon_groups=[1, 0], group_names=["R", "Left"]
)
mesh2 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4], [0, 1, 2, 4]],
polygon_groups=[1, 0], group_names=["R", "Left"]
)
self.assertTrue(mesh1 == mesh2)
self.assertTrue(mesh2 == mesh1)
def test_mesh_comparison10(self):
mesh1 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4], [0, 1, 2, 4]],
polygon_groups=[1, 0], group_names=["R", "Left"]
)
mesh2 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4], [0, 1, 2, 4]],
polygon_groups=[1, 0], group_names=["Left"]
)
self.assertTrue(mesh1 != mesh2)
self.assertTrue(mesh2 != mesh1)
def test_mesh_n_vertices01(self):
mesh1 = Mesh(vertices=np.arange(5), polygon_vertex_indices=[[0, 1, 2]])
self.assertEqual(mesh1.n_vertices(), 5)
def test_mesh_n_texture_vertices01(self):
mesh1 = Mesh(vertices=np.arange(5), polygon_vertex_indices=[[0, 1, 2]])
self.assertEqual(mesh1.n_texture_vertices(), 0)
def test_mesh_n_texture_vertices02(self):
mesh1 = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2]],
texture_vertices=np.arange(6), texture_polygon_vertex_indices=[[2, 1, 0]])
self.assertTrue(mesh1.n_texture_vertices(), 6)
def test_mesh_n_polygons01(self):
mesh1 = Mesh(vertices=np.arange(5), polygon_vertex_indices=[[0, 1, 2]])
self.assertEqual(mesh1.n_polygons(), 1)
def test_mesh_n_polygons02(self):
mesh1 = Mesh(vertices=np.arange(5), polygon_vertex_indices=[
[0, 1, 2],
[4, 5, 6],
])
self.assertEqual(mesh1.n_polygons(), 2)
def test_mesh_n_triangles01(self):
mesh1 = Mesh(
vertices=np.arange(5), polygon_vertex_indices=[
[0, 1, 2],
[4, 5, 6, 8],
],
triangle_vertex_indices=np.array([
[0, 1, 2],
[4, 5, 6],
[4, 6, 8],
]))
self.assertEqual(mesh1.n_triangles(), 3)
self.assertTrue(mesh1.is_triangulated())
def test_mesh_n_triangles02(self):
mesh1 = Mesh(
vertices=np.arange(5), polygon_vertex_indices=[
[0, 1, 2],
[4, 5, 6, 8],
])
self.assertIsNone(mesh1.n_triangles())
self.assertFalse(mesh1.is_triangulated())
def test_mesh_n_groups01(self):
mesh1 = Mesh(vertices=np.arange(5), polygon_vertex_indices=[[0, 1, 2]])
self.assertEqual(mesh1.n_groups(), 0)
def test_mesh_n_groups02(self):
mesh1 = Mesh(vertices=np.arange(5), polygon_vertex_indices=[[0, 1, 2]], group_names=["F"])
self.assertEqual(mesh1.n_groups(), 1)
def test_mesh_n_groups03(self):
mesh1 = Mesh(vertices=np.arange(5), polygon_vertex_indices=[[0, 1, 2]], group_names=["F", "U", "U"])
self.assertEqual(mesh1.n_groups(), 3)
def test_mesh_bbox01(self):
mesh1 = Mesh(
vertices=np.array([[0, 1, 2], [-5, 2, 3]]), polygon_vertex_indices=[
[0, 1, 2],
[4, 5, 6, 8],
])
bbox = mesh1.bbox()
self.assertEqual(BoundingBox([-5, 1, 2], [0, 2, 3]), bbox)
def test_set_vertices_and_compute_normals01(self):
mesh = Mesh(
vertices=np.array([
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
]),
polygon_vertex_indices=[
[0, 1, 2]
],
triangle_vertex_indices=np.array([
[0, 1, 2]
]),
)
new_vertices = np.array([
[0, 0, 0],
[1, 0, 0],
[0, 1, 0],
])
ans = np.array([
[0, 0, 1],
[0, 0, 1],
[0, 0, 1],
], dtype=np.float32)
self.assertFalse(mesh.has_normals())
mesh.set_vertices_and_compute_normals(new_vertices)
res = mesh.normals
self.assertEqual(ans.shape, res.shape)
self.assertTrue(np.allclose(ans, res))
def test_set_vertices_and_compute_normals02(self):
new_vertices = np.array([
[0.1, 0, 0],
[0, 50, 0],
[1, 0, 0],
[0, 0, 1],
[0, 0, 0],
], dtype=np.float32)
mesh = Mesh(
vertices=np.zeros_like(new_vertices),
polygon_vertex_indices=[
[1, 0, 4],
[4, 2, 3],
],
triangle_vertex_indices=np.array([
[1, 0, 4],
[4, 2, 3],
], dtype=np.int32),
)
ans = np.array([
[0, 0, -1],
[0, 0, -1],
[0, -1, 0],
[0, -1, 0],
[0, -0.70710678, -0.70710678],
], dtype=np.float32)
self.assertFalse(mesh.has_normals())
mesh.set_vertices_and_compute_normals(new_vertices)
res = mesh.normals
self.assertEqual(ans.shape, res.shape)
self.assertTrue(np.allclose(ans, res))
def test_set_vertices_and_compute_normals03(self):
mesh = Mesh(
vertices=np.array([
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
]),
polygon_vertex_indices=[
[0, 1, 2]
],
triangle_vertex_indices=np.array([
[0, 1, 2]
]),
)
new_vertices = np.array([
[0, 0, 0],
[1, 0, 0],
[0, 1, 0],
])
ans = np.array([
[0, 0, 1],
[0, 0, 1],
[0, 0, 1],
], dtype=np.float32)
self.assertFalse(mesh.has_normals())
mesh2 = mesh.set_vertices_and_compute_normals(new_vertices)
res = mesh2.normals
self.assertEqual(ans.shape, res.shape)
self.assertTrue(np.allclose(ans, res))
def test_clone01(self):
mesh_input = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4], [0, 1, 2, 4]],
polygon_groups=[1, 0], group_names=["R", "Left"]
)
mesh_ans = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4], [0, 1, 2, 4]],
polygon_groups=[1, 0], group_names=["R", "Left"]
)
mesh_res = mesh_input.clone()
mesh_input.vertices[:] = 0
self.assertEqual(mesh_ans, mesh_res)
self.assertNotEqual(mesh_input, mesh_res)
def test_clone02(self):
mesh_input = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4]],
triangle_vertex_indices=np.array([[0, 1, 2], [0, 2, 4]])
)
mesh_ans = Mesh(
vertices=np.arange(3), polygon_vertex_indices=[[0, 1, 2, 4]],
triangle_vertex_indices=np.array([[0, 1, 2], [0, 2, 4]])
)
mesh_res = mesh_input.clone()
mesh_input.vertices[:] = 0
self.assertEqual(mesh_ans, mesh_res)
self.assertNotEqual(mesh_input, mesh_res)
| 35.710769 | 108 | 0.534293 | 1,482 | 11,606 | 3.988529 | 0.071525 | 0.027745 | 0.032989 | 0.145661 | 0.853324 | 0.809339 | 0.801895 | 0.783285 | 0.76992 | 0.762138 | 0 | 0.08011 | 0.311649 | 11,606 | 324 | 109 | 35.820988 | 0.659782 | 0.003016 | 0 | 0.675958 | 0 | 0 | 0.002852 | 0 | 0 | 0 | 0 | 0.003086 | 0.160279 | 1 | 0.10453 | false | 0 | 0.013937 | 0 | 0.121951 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
baacafe5aadb417d074717e17ad28e615a3c707a | 265 | py | Python | blog/admin.py | Jafte/jasn.ru | 09d2945825c5bbfa2896c4acfc30acb6138c9ac9 | [
"MIT"
] | null | null | null | blog/admin.py | Jafte/jasn.ru | 09d2945825c5bbfa2896c4acfc30acb6138c9ac9 | [
"MIT"
] | 4 | 2021-06-04T21:30:01.000Z | 2021-09-22T17:39:12.000Z | blog/admin.py | Jafte/jasn.ru | 09d2945825c5bbfa2896c4acfc30acb6138c9ac9 | [
"MIT"
] | null | null | null | from django.contrib import admin
from blog.models import Blog, Post, PostImage
from guardian.admin import GuardedModelAdmin
admin.site.register(Blog, GuardedModelAdmin)
admin.site.register(Post, GuardedModelAdmin)
admin.site.register(PostImage, GuardedModelAdmin)
| 33.125 | 49 | 0.845283 | 32 | 265 | 7 | 0.40625 | 0.294643 | 0.348214 | 0.455357 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079245 | 265 | 7 | 50 | 37.857143 | 0.918033 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
baad532d1d286aaf9b188562fb56df4f63cbad49 | 47 | py | Python | examples/datasets/__init__.py | ulises-jeremias/tf2-tools | 24c2eb1ab0cd81dca8b31055902c0bd05b795492 | [
"MIT"
] | 1 | 2020-05-24T01:52:52.000Z | 2020-05-24T01:52:52.000Z | examples/datasets/__init__.py | ulises-jeremias/tf2-tools | 24c2eb1ab0cd81dca8b31055902c0bd05b795492 | [
"MIT"
] | 1 | 2020-05-23T19:31:12.000Z | 2020-05-23T19:31:12.000Z | examples/datasets/__init__.py | ulises-jeremias/tf2-tools | 24c2eb1ab0cd81dca8b31055902c0bd05b795492 | [
"MIT"
] | null | null | null | """Datasets logic"""
from .loader import load
| 11.75 | 24 | 0.702128 | 6 | 47 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148936 | 47 | 3 | 25 | 15.666667 | 0.825 | 0.297872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bac54a6b749783088c51cc670f61267463f161e4 | 3,657 | py | Python | tests/test_transform_rotate.py | mrstephenneal/pdfwatermarker | 55934803efd91b6b456985be7df93c03d24747c7 | [
"Apache-2.0"
] | 9 | 2018-08-28T14:08:19.000Z | 2019-08-22T07:33:14.000Z | tests/test_transform_rotate.py | mrstephenneal/pdfwatermarker | 55934803efd91b6b456985be7df93c03d24747c7 | [
"Apache-2.0"
] | 15 | 2018-08-28T14:08:17.000Z | 2019-07-08T01:29:34.000Z | tests/test_transform_rotate.py | mrstephenneal/pdfwatermarker | 55934803efd91b6b456985be7df93c03d24747c7 | [
"Apache-2.0"
] | 1 | 2020-08-10T00:14:43.000Z | 2020-08-10T00:14:43.000Z | import os
import unittest
from tempfile import TemporaryDirectory
from looptools import Timer
from pdfconduit import Info, Rotate
from tests import *
class TestRotate(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.pdf_path = pdf_path
def setUp(self):
self.temp = TemporaryDirectory()
def tearDown(self):
self.temp.cleanup()
@Timer.decorator
def test_rotate_pdfrw_90(self):
"""Rotate a PDF file by 90 degrees using the `pdfrw` library."""
rotation = 90
rotated = Rotate(self.pdf_path, rotation, suffix='rotated_pdfrw', tempdir=self.temp.name, method='pdfrw').file
# Assert rotated pdf file exists
self.assertTrue(os.path.isfile(rotated))
# Assert pdf file was rotated by the correct amount of degrees
self.assertEqual(Info(rotated).rotate, rotation)
return rotated
@Timer.decorator
def test_rotate_pdfrw_180(self):
"""Rotate a PDF file by 180 degrees using the `pdfrw` library."""
rotation = 180
rotated = Rotate(self.pdf_path, rotation, suffix='rotated_180_pdfrw', tempdir=self.temp.name,
method='pdfrw').file
# Assert rotated pdf file exists
self.assertTrue(os.path.isfile(rotated))
# Assert pdf file was rotated by the correct amount of degrees
self.assertEqual(Info(rotated).rotate, rotation)
return rotated
@Timer.decorator
def test_rotate_pdfrw_270(self):
"""Rotate a PDF file by 270 degrees using the `pdfrw` library."""
rotation = 270
rotated = Rotate(self.pdf_path, rotation, suffix='rotated_270_pdfrw', tempdir=self.temp.name,
method='pdfrw').file
# Assert rotated pdf file exists
self.assertTrue(os.path.isfile(rotated))
# Assert pdf file was rotated by the correct amount of degrees
self.assertEqual(Info(rotated).rotate, rotation)
return rotated
@Timer.decorator
def test_rotate_pypdf3_90(self):
"""Rotate a PDF file by 90 degrees using the `pypdf3` library."""
rotation = 90
rotated = Rotate(self.pdf_path, rotation, suffix='rotated_pdfrw', tempdir=self.temp.name, method='pypdf3').file
# Assert rotated pdf file exists
self.assertTrue(os.path.isfile(rotated))
# Assert pdf file was rotated by the correct amount of degrees
self.assertEqual(Info(rotated).rotate, rotation)
return rotated
@Timer.decorator
def test_rotate_pypdf3_180(self):
"""Rotate a PDF file by 180 degrees using the `pypdf3` library."""
rotation = 180
rotated = Rotate(self.pdf_path, rotation, suffix='rotated_180_pdfrw', tempdir=self.temp.name,
method='pypdf3').file
# Assert rotated pdf file exists
self.assertTrue(os.path.isfile(rotated))
# Assert pdf file was rotated by the correct amount of degrees
self.assertEqual(Info(rotated).rotate, rotation)
return rotated
@Timer.decorator
def test_rotate_pypdf3_270(self):
"""Rotate a PDF file by 270 degrees using the `pypdf3` library."""
rotation = 270
rotated = Rotate(self.pdf_path, rotation, suffix='rotated_270_pdfrw', tempdir=self.temp.name,
method='pypdf3').file
# Assert rotated pdf file exists
self.assertTrue(os.path.isfile(rotated))
# Assert pdf file was rotated by the correct amount of degrees
self.assertEqual(Info(rotated).rotate, rotation)
return rotated
if __name__ == '__main__':
unittest.main()
| 34.17757 | 119 | 0.657096 | 456 | 3,657 | 5.173246 | 0.127193 | 0.053412 | 0.043239 | 0.053412 | 0.876219 | 0.876219 | 0.848665 | 0.848665 | 0.848665 | 0.848665 | 0 | 0.025265 | 0.253213 | 3,657 | 106 | 120 | 34.5 | 0.838521 | 0.249658 | 0 | 0.612903 | 0 | 0 | 0.049963 | 0 | 0 | 0 | 0 | 0 | 0.193548 | 1 | 0.145161 | false | 0 | 0.096774 | 0 | 0.354839 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bafcf7582ebeb5663b464a49606784d9fd54aa22 | 168 | py | Python | boa3_test/test_sc/native_test/stdlib/Base58CheckDecode.py | OnBlockIO/neo3-boa | cb317292a67532a52ed26f2b0f0f7d0b10ac5f5f | [
"Apache-2.0"
] | 25 | 2020-07-22T19:37:43.000Z | 2022-03-08T03:23:55.000Z | boa3_test/test_sc/native_test/stdlib/Base58CheckDecode.py | OnBlockIO/neo3-boa | cb317292a67532a52ed26f2b0f0f7d0b10ac5f5f | [
"Apache-2.0"
] | 419 | 2020-04-23T17:48:14.000Z | 2022-03-31T13:17:45.000Z | boa3_test/test_sc/native_test/stdlib/Base58CheckDecode.py | OnBlockIO/neo3-boa | cb317292a67532a52ed26f2b0f0f7d0b10ac5f5f | [
"Apache-2.0"
] | 15 | 2020-05-21T21:54:24.000Z | 2021-11-18T06:17:24.000Z | from boa3.builtin import public
from boa3.builtin.nativecontract.stdlib import StdLib
@public
def main(key: str) -> bytes:
return StdLib.base58_check_decode(key)
| 21 | 53 | 0.785714 | 24 | 168 | 5.416667 | 0.666667 | 0.123077 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027397 | 0.130952 | 168 | 7 | 54 | 24 | 0.863014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
242317878ec7dc031243f0c8f89ad4084c5a49d9 | 92 | py | Python | util.py | tedkyi/covid_rapid_test_simulation2 | ff390aceab3cbf53f96c502167df463fc197d5fc | [
"MIT"
] | null | null | null | util.py | tedkyi/covid_rapid_test_simulation2 | ff390aceab3cbf53f96c502167df463fc197d5fc | [
"MIT"
] | 21 | 2021-12-07T06:24:27.000Z | 2021-12-21T07:51:20.000Z | util.py | tedkyi/covid_rapid_test_simulation2 | ff390aceab3cbf53f96c502167df463fc197d5fc | [
"MIT"
] | 1 | 2021-12-10T07:00:43.000Z | 2021-12-10T07:00:43.000Z |
from scipy.stats import norm
def gaussianRandom(mu,var=1.0):
return norm.rvs(mu,var)
| 13.142857 | 31 | 0.717391 | 16 | 92 | 4.125 | 0.8125 | 0.151515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025974 | 0.163043 | 92 | 6 | 32 | 15.333333 | 0.831169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
2464ce63aa37586de5c206d68765e175d094f18c | 29 | py | Python | apps/external_apps/app_plugins/__init__.py | indro/t2c | 56482ad4aed150f29353e054db2c97b567243bf8 | [
"MIT"
] | 3 | 2015-12-25T14:45:36.000Z | 2016-11-28T09:58:03.000Z | apps/external_apps/app_plugins/__init__.py | indro/t2c | 56482ad4aed150f29353e054db2c97b567243bf8 | [
"MIT"
] | null | null | null | apps/external_apps/app_plugins/__init__.py | indro/t2c | 56482ad4aed150f29353e054db2c97b567243bf8 | [
"MIT"
] | null | null | null | from library import Library
| 9.666667 | 27 | 0.827586 | 4 | 29 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172414 | 29 | 2 | 28 | 14.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
030fa5d72dc25e29dfd18778a1dbc38e1f48c420 | 141 | py | Python | roombapy/__init__.py | shenxn/roombapy | 93fb4a2d4e3975cfece2ac93c4627bc4093c9617 | [
"MIT"
] | null | null | null | roombapy/__init__.py | shenxn/roombapy | 93fb4a2d4e3975cfece2ac93c4627bc4093c9617 | [
"MIT"
] | 30 | 2021-01-26T07:16:13.000Z | 2022-03-29T02:07:38.000Z | roombapy/__init__.py | shenxn/roombapy | 93fb4a2d4e3975cfece2ac93c4627bc4093c9617 | [
"MIT"
] | null | null | null | from .discovery import RoombaDiscovery
from .getpassword import RoombaPassword
from .roomba import Roomba, RoombaConnectionError, RoombaInfo
| 35.25 | 61 | 0.865248 | 14 | 141 | 8.714286 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099291 | 141 | 3 | 62 | 47 | 0.96063 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
301b8e3befb78a74b8623edb9757f99f12f8ccbf | 7,926 | py | Python | tests/p2p/netfilter/test_match.py | jansegre/hathor-core | 22b3de6be2518e7a0797edbf0e4f6eb1cf28d6fd | [
"Apache-2.0"
] | null | null | null | tests/p2p/netfilter/test_match.py | jansegre/hathor-core | 22b3de6be2518e7a0797edbf0e4f6eb1cf28d6fd | [
"Apache-2.0"
] | 12 | 2020-10-04T17:14:52.000Z | 2022-03-03T22:21:56.000Z | tests/p2p/netfilter/test_match.py | jansegre/hathor-core | 22b3de6be2518e7a0797edbf0e4f6eb1cf28d6fd | [
"Apache-2.0"
] | null | null | null | from twisted.internet.address import HostnameAddress, IPv4Address, IPv6Address, UNIXAddress
from hathor.p2p.netfilter.context import NetfilterContext
from hathor.p2p.netfilter.matches import (
NetfilterMatch,
NetfilterMatchAll,
NetfilterMatchAnd,
NetfilterMatchIPAddress,
NetfilterMatchOr,
NetfilterMatchPeerId,
)
from hathor.p2p.peer_id import PeerId
from hathor.simulator import FakeConnection
from tests import unittest
class NetfilterNeverMatch(NetfilterMatch):
def match(self, context: 'NetfilterContext') -> bool:
return False
class NetfilterMatchTest(unittest.TestCase):
def test_match_all(self):
matcher = NetfilterMatchAll()
context = NetfilterContext()
self.assertTrue(matcher.match(context))
def test_never_match(self):
matcher = NetfilterNeverMatch()
context = NetfilterContext()
self.assertFalse(matcher.match(context))
def test_match_and_success(self):
m1 = NetfilterMatchAll()
m2 = NetfilterMatchAll()
matcher = NetfilterMatchAnd(m1, m2)
context = NetfilterContext()
self.assertTrue(matcher.match(context))
def test_match_and_fail_01(self):
m1 = NetfilterNeverMatch()
m2 = NetfilterMatchAll()
matcher = NetfilterMatchAnd(m1, m2)
context = NetfilterContext()
self.assertFalse(matcher.match(context))
def test_match_and_fail_10(self):
m1 = NetfilterMatchAll()
m2 = NetfilterNeverMatch()
matcher = NetfilterMatchAnd(m1, m2)
context = NetfilterContext()
self.assertFalse(matcher.match(context))
def test_match_and_fail_00(self):
m1 = NetfilterNeverMatch()
m2 = NetfilterNeverMatch()
matcher = NetfilterMatchAnd(m1, m2)
context = NetfilterContext()
self.assertFalse(matcher.match(context))
def test_match_or_success_11(self):
m1 = NetfilterMatchAll()
m2 = NetfilterMatchAll()
matcher = NetfilterMatchOr(m1, m2)
context = NetfilterContext()
self.assertTrue(matcher.match(context))
def test_match_or_success_10(self):
m1 = NetfilterMatchAll()
m2 = NetfilterNeverMatch()
matcher = NetfilterMatchOr(m1, m2)
context = NetfilterContext()
self.assertTrue(matcher.match(context))
def test_match_or_success_01(self):
m1 = NetfilterNeverMatch()
m2 = NetfilterMatchAll()
matcher = NetfilterMatchOr(m1, m2)
context = NetfilterContext()
self.assertTrue(matcher.match(context))
def test_match_or_fail_00(self):
m1 = NetfilterNeverMatch()
m2 = NetfilterNeverMatch()
matcher = NetfilterMatchOr(m1, m2)
context = NetfilterContext()
self.assertFalse(matcher.match(context))
def test_match_ip_address_empty_context(self):
matcher = NetfilterMatchIPAddress('192.168.0.0/24')
context = NetfilterContext()
self.assertFalse(matcher.match(context))
def test_match_ip_address_ipv4_net(self):
matcher = NetfilterMatchIPAddress('192.168.0.0/24')
context = NetfilterContext(addr=IPv4Address('TCP', '192.168.0.10', 1234))
self.assertTrue(matcher.match(context))
context = NetfilterContext(addr=IPv4Address('TCP', '192.168.1.10', 1234))
self.assertFalse(matcher.match(context))
context = NetfilterContext(addr=IPv4Address('TCP', '127.0.0.1', 1234))
self.assertFalse(matcher.match(context))
context = NetfilterContext(addr=IPv4Address('TCP', '', 1234))
self.assertFalse(matcher.match(context))
def test_match_ip_address_ipv4_ip(self):
matcher = NetfilterMatchIPAddress('192.168.0.1/32')
context = NetfilterContext(addr=IPv4Address('TCP', '192.168.0.1', 1234))
self.assertTrue(matcher.match(context))
context = NetfilterContext(addr=IPv4Address('TCP', '192.168.0.10', 1234))
self.assertFalse(matcher.match(context))
context = NetfilterContext(addr=IPv4Address('TCP', '', 1234))
self.assertFalse(matcher.match(context))
def test_match_ip_address_ipv4_hostname(self):
matcher = NetfilterMatchIPAddress('192.168.0.1/32')
context = NetfilterContext(addr=HostnameAddress('hathor.network', 80))
self.assertFalse(matcher.match(context))
def test_match_ip_address_ipv4_unix(self):
matcher = NetfilterMatchIPAddress('192.168.0.1/32')
context = NetfilterContext(addr=UNIXAddress('/unix.sock'))
self.assertFalse(matcher.match(context))
def test_match_ip_address_ipv4_ipv6(self):
matcher = NetfilterMatchIPAddress('192.168.0.1/32')
context = NetfilterContext(addr=IPv6Address('TCP', '2001:db8::', 80))
self.assertFalse(matcher.match(context))
context = NetfilterContext(addr=IPv6Address('TCP', '', 80))
self.assertFalse(matcher.match(context))
def test_match_ip_address_ipv6_net(self):
matcher = NetfilterMatchIPAddress('2001:0db8:0:f101::/64')
context = NetfilterContext(addr=IPv6Address('TCP', '2001:db8::8a2e:370:7334', 1234))
self.assertFalse(matcher.match(context))
context = NetfilterContext(addr=IPv6Address('TCP', '2001:db8:0:f101:2::7334', 1234))
self.assertTrue(matcher.match(context))
def test_match_ip_address_ipv6_ip(self):
matcher = NetfilterMatchIPAddress('2001:0db8:0:f101::1/128')
context = NetfilterContext(addr=IPv6Address('TCP', '2001:db8:0:f101::1', 1234))
self.assertTrue(matcher.match(context))
context = NetfilterContext(addr=IPv6Address('TCP', '2001:db8::8a2e:370:7334', 1234))
self.assertFalse(matcher.match(context))
context = NetfilterContext(addr=IPv6Address('TCP', '2001:db8:0:f101:2::7334', 1234))
self.assertFalse(matcher.match(context))
def test_match_ip_address_ipv6_hostname(self):
matcher = NetfilterMatchIPAddress('2001:0db8:0:f101::1/128')
context = NetfilterContext(addr=HostnameAddress('hathor.network', 80))
self.assertFalse(matcher.match(context))
def test_match_ip_address_ipv6_unix(self):
matcher = NetfilterMatchIPAddress('2001:0db8:0:f101::1/128')
context = NetfilterContext(addr=UNIXAddress('/unix.sock'))
self.assertFalse(matcher.match(context))
def test_match_ip_address_ipv6_ipv4(self):
matcher = NetfilterMatchIPAddress('2001:0db8:0:f101::1/128')
context = NetfilterContext(addr=IPv4Address('TCP', '192.168.0.1', 1234))
self.assertFalse(matcher.match(context))
def test_match_peer_id_empty_context(self):
matcher = NetfilterMatchPeerId('123')
context = NetfilterContext()
self.assertFalse(matcher.match(context))
def test_match_peer_id(self):
network = 'testnet'
peer_id1 = PeerId()
peer_id2 = PeerId()
manager1 = self.create_peer(network, peer_id=peer_id1)
manager2 = self.create_peer(network, peer_id=peer_id2)
conn = FakeConnection(manager1, manager2)
self.assertTrue(conn.proto2.is_state(conn.proto2.PeerState.HELLO))
matcher = NetfilterMatchPeerId(str(peer_id1.id))
context = NetfilterContext(protocol=conn.proto2)
self.assertFalse(matcher.match(context))
conn.run_one_step()
self.assertTrue(conn.proto2.is_state(conn.proto2.PeerState.PEER_ID))
self.assertFalse(matcher.match(context))
# Success because the connection is ready and proto2 is connected to proto1.
conn.run_one_step()
conn.run_one_step()
self.assertTrue(conn.proto2.is_state(conn.proto2.PeerState.READY))
self.assertTrue(matcher.match(context))
# Fail because proto1 is connected to proto2, and the peer id cannot match.
context = NetfilterContext(protocol=conn.proto1)
self.assertFalse(matcher.match(context))
| 40.85567 | 92 | 0.688241 | 860 | 7,926 | 6.205814 | 0.127907 | 0.080944 | 0.124602 | 0.126476 | 0.800637 | 0.775342 | 0.766348 | 0.722503 | 0.706389 | 0.648304 | 0 | 0.064455 | 0.197451 | 7,926 | 193 | 93 | 41.067358 | 0.774564 | 0.018673 | 0 | 0.616352 | 0 | 0 | 0.064695 | 0.026367 | 0 | 0 | 0 | 0 | 0.238994 | 1 | 0.150943 | false | 0 | 0.037736 | 0.006289 | 0.207547 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
063df7ccb49e4e0a38e0e416517f82c4214a9389 | 2,967 | py | Python | djpp/constants.py | paramono/djpp | ba70c212595d37e2d3ffbd7313c879979d9d4f3e | [
"MIT"
] | 1 | 2020-07-28T19:08:20.000Z | 2020-07-28T19:08:20.000Z | djpp/constants.py | paramono/djpaypal_subs | ba70c212595d37e2d3ffbd7313c879979d9d4f3e | [
"MIT"
] | null | null | null | djpp/constants.py | paramono/djpaypal_subs | ba70c212595d37e2d3ffbd7313c879979d9d4f3e | [
"MIT"
] | null | null | null | from django.utils.translation import gettext as _
APIMODE_SANDBOX = False
APIMODE_LIVE = True
APIMODE_CHOICES = [
(APIMODE_SANDBOX, _('Sandbox')),
(APIMODE_LIVE, _('Live')),
]
# Products
PRODUCTS_ENDPOINT = '/v1/catalogs/products'
PRODUCT_TYPE_PHYSICAL = 'PHYSICAL'
PRODUCT_TYPE_DIGITAL = 'DIGITAL'
PRODUCT_TYPE_SERVICE = 'SERVICE'
PRODUCT_TYPES = [
(PRODUCT_TYPE_PHYSICAL, _('Physical')),
(PRODUCT_TYPE_DIGITAL, _('Digital')),
(PRODUCT_TYPE_SERVICE, _('Service')),
]
# Plans
PLANS_ENDPOINT = '/v1/billing/plans'
PLAN_STATUS_CREATED = _('CREATED')
PLAN_STATUS_INACTIVE = _('INACTIVE')
PLAN_STATUS_ACTIVE = _('ACTIVE')
PLAN_STATUS_CHOICES = [
(PLAN_STATUS_CREATED, _('Created')),
(PLAN_STATUS_INACTIVE, _('Inactive')),
(PLAN_STATUS_ACTIVE, _('Active')),
]
# Subscriptions
SUBSCRIPTIONS_ENDPOINT = '/v1/billing/subscriptions'
SUBSCRIPTION_STATUS_APPROVAL_PENDING = 'APPROVAL_PENDING'
SUBSCRIPTION_STATUS_APPROVED = 'APPROVED'
SUBSCRIPTION_STATUS_ACTIVE = 'ACTIVE'
SUBSCRIPTION_STATUS_SUSPENDED = 'SUSPENDED'
SUBSCRIPTION_STATUS_CANCELLED = 'CANCELLED'
SUBSCRIPTION_STATUS_EXPIRED = 'EXPIRED'
SUBSCRIPTION_STATUS_CHOICES = [
(SUBSCRIPTION_STATUS_APPROVAL_PENDING, _('Approval pending')),
(SUBSCRIPTION_STATUS_APPROVED, _('Approved')),
(SUBSCRIPTION_STATUS_ACTIVE, _('Active')),
(SUBSCRIPTION_STATUS_SUSPENDED, _('Suspended')),
(SUBSCRIPTION_STATUS_CANCELLED, _('Cancelled')),
(SUBSCRIPTION_STATUS_EXPIRED, _('Expired')),
]
ORDER_INTENT_CAPTURE = 'CAPTURE'
ORDER_INTENT_AUTHORIZE = 'AUTHORIZE'
ORDER_INTENT_CHOICES = (
(ORDER_INTENT_CAPTURE, _('Capture')),
(ORDER_INTENT_AUTHORIZE, _('Authorize')),
)
ORDER_STATUS_CREATED = 'CREATED'
ORDER_STATUS_SAVED = 'SAVED'
ORDER_STATUS_APPROVED = 'APPROVED'
ORDER_STATUS_VOIDED = 'VOIDED'
ORDER_STATUS_COMPLETED = 'COMPLETED'
ORDER_STATUS_CHOICES = (
(ORDER_STATUS_CREATED, _('Created')),
(ORDER_STATUS_SAVED, _('Saved')),
(ORDER_STATUS_APPROVED, _('Approved')),
(ORDER_STATUS_VOIDED, _('Voided')),
(ORDER_STATUS_COMPLETED, _('Completed')),
)
CAPTURE_STATUS_COMPLETED = 'COMPLETED'
CAPTURE_STATUS_DECLINED = 'DECLINED'
CAPTURE_STATUS_PARTIALLY_REFUNDED = 'PARTIALLY_REFUNDED'
CAPTURE_STATUS_PENDING = 'PENDING'
CAPTURE_STATUS_REFUNDED = 'REFUNDED'
CAPTURE_STATUS_CHOICES = (
(CAPTURE_STATUS_COMPLETED, _('Completed')),
(CAPTURE_STATUS_DECLINED, _('Declined')),
(CAPTURE_STATUS_PARTIALLY_REFUNDED, _('Partially refunded')),
(CAPTURE_STATUS_PENDING, _('Pending')),
(CAPTURE_STATUS_REFUNDED, _('Refunded')),
)
DISBURSEMENT_MODE_INSTANT = 'INSTANT'
DISBURSEMENT_MODE_DELAYED = 'DELAYED'
DISBURSEMENT_MODE_CHOICES = (
(DISBURSEMENT_MODE_INSTANT, _('Instant')),
(DISBURSEMENT_MODE_DELAYED, _('Delayed')),
)
| 29.969697 | 66 | 0.70273 | 275 | 2,967 | 6.989091 | 0.189091 | 0.121748 | 0.041623 | 0.048387 | 0.800728 | 0.800728 | 0.800728 | 0.800728 | 0.738293 | 0.679501 | 0 | 0.001229 | 0.176946 | 2,967 | 98 | 67 | 30.27551 | 0.785831 | 0.009437 | 0 | 0 | 0 | 0 | 0.170358 | 0.015673 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.013158 | 0 | 0.013158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
065b1c2737baa9e4d1ad6e7058a99e3efe51c0d9 | 177 | py | Python | metanic/accounts/admin.py | LimpidTech/melody | a00b99f9b697864a078e2cb886be4d75c10458a9 | [
"BSD-3-Clause"
] | null | null | null | metanic/accounts/admin.py | LimpidTech/melody | a00b99f9b697864a078e2cb886be4d75c10458a9 | [
"BSD-3-Clause"
] | 1 | 2020-02-11T21:34:24.000Z | 2020-02-11T21:34:24.000Z | metanic/accounts/admin.py | LimpidTech/melody | a00b99f9b697864a078e2cb886be4d75c10458a9 | [
"BSD-3-Clause"
] | null | null | null | from django.contrib import admin
from django.contrib.auth import admin as auth_admin
from metanic.accounts import models
admin.site.register(models.User, auth_admin.UserAdmin) | 29.5 | 54 | 0.841808 | 27 | 177 | 5.444444 | 0.518519 | 0.136054 | 0.231293 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096045 | 177 | 6 | 54 | 29.5 | 0.91875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ebe932a6280448325a43ae27b79c87e165d738db | 25 | py | Python | dtspec/__init__.py | fuchsst/dtspec | c7972f627607ddc9e821ffb35197f5d0d280c8fb | [
"MIT"
] | 30 | 2019-12-05T15:46:46.000Z | 2021-12-20T19:20:05.000Z | dtspec/__init__.py | fuchsst/dtspec | c7972f627607ddc9e821ffb35197f5d0d280c8fb | [
"MIT"
] | 9 | 2020-02-28T15:25:01.000Z | 2021-07-20T22:15:21.000Z | dtspec/__init__.py | fuchsst/dtspec | c7972f627607ddc9e821ffb35197f5d0d280c8fb | [
"MIT"
] | 6 | 2020-01-13T22:35:10.000Z | 2021-11-12T11:12:17.000Z | import dtspec.api as api
| 12.5 | 24 | 0.8 | 5 | 25 | 4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
230864de0276d7649feedc112a04bd2bca59d051 | 98 | py | Python | hw/andrey_yelin/lesson_02/test_lesson2.py | alexander-sidorov/qap-05 | 6db7c0a1eeadd15f7d3f826e7f0ac4be3949ec8c | [
"MIT"
] | 9 | 2021-12-10T21:30:07.000Z | 2022-02-25T21:32:34.000Z | hw/andrey_yelin/lesson_02/test_lesson2.py | alexander-sidorov/qap-05 | 6db7c0a1eeadd15f7d3f826e7f0ac4be3949ec8c | [
"MIT"
] | 22 | 2021-12-11T08:46:58.000Z | 2022-02-02T15:56:37.000Z | hw/andrey_yelin/lesson_02/test_lesson2.py | alexander-sidorov/qap-05 | 6db7c0a1eeadd15f7d3f826e7f0ac4be3949ec8c | [
"MIT"
] | 8 | 2021-12-11T09:15:45.000Z | 2022-02-02T08:09:09.000Z | from hw.andrey_yelin.lesson_02.lesson2 import example
def test_example() -> None:
example()
| 16.333333 | 53 | 0.744898 | 14 | 98 | 5 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036145 | 0.153061 | 98 | 5 | 54 | 19.6 | 0.807229 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
000832a328c50c67049f2c10841db85feb714a3d | 1,036 | py | Python | pyclesperanto_prototype/_tier4/__init__.py | haesleinhuepf/pyclesperanto_prototype | 65bc3035d3b2b61a2722c93b95bae310bfbd190e | [
"BSD-3-Clause"
] | 1 | 2021-01-15T15:32:19.000Z | 2021-01-15T15:32:19.000Z | pyclesperanto_prototype/_tier4/__init__.py | haesleinhuepf/pyclesperanto_prototype | 65bc3035d3b2b61a2722c93b95bae310bfbd190e | [
"BSD-3-Clause"
] | null | null | null | pyclesperanto_prototype/_tier4/__init__.py | haesleinhuepf/pyclesperanto_prototype | 65bc3035d3b2b61a2722c93b95bae310bfbd190e | [
"BSD-3-Clause"
] | null | null | null | from ._connected_components_labeling_box import connected_components_labeling_box
from ._extend_labeling_via_voronoi import extend_labeling_via_voronoi
from ._extend_labels_with_maximum_radius import extend_labels_with_maximum_radius
from ._mean_squared_error import mean_squared_error
from ._local_maximum_touching_neighbor_count_map import local_maximum_touching_neighbor_count_map
from ._local_mean_touching_neighbor_count_map import local_mean_touching_neighbor_count_map
from ._local_median_touching_neighbor_count_map import local_median_touching_neighbor_count_map
from ._local_minimum_touching_neighbor_count_map import local_minimum_touching_neighbor_count_map
from ._local_standard_deviation_touching_neighbor_count_map import local_standard_deviation_touching_neighbor_count_map
from ._merge_touching_labels import merge_touching_labels
from ._sorensen_dice_coefficient import sorensen_dice_coefficient
from ._spots_to_pointlist import spots_to_pointlist
from ._touching_neighbor_count_map import touching_neighbor_count_map
| 74 | 119 | 0.937259 | 146 | 1,036 | 5.917808 | 0.219178 | 0.222222 | 0.291667 | 0.333333 | 0.590278 | 0.483796 | 0.106481 | 0 | 0 | 0 | 0 | 0 | 0.050193 | 1,036 | 13 | 120 | 79.692308 | 0.878049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
002217556f9b7866aff4deaf2b8eda223aa9ba3e | 24 | py | Python | metrics/__init__.py | sanchezirina/defeatcovid19-net-pytorch | eadfec212ade7724688e4455e59157c9c53f0c89 | [
"MIT"
] | 4 | 2019-10-12T04:55:03.000Z | 2019-11-25T22:30:41.000Z | metrics/__init__.py | sanchezirina/defeatcovid19-net-pytorch | eadfec212ade7724688e4455e59157c9c53f0c89 | [
"MIT"
] | null | null | null | metrics/__init__.py | sanchezirina/defeatcovid19-net-pytorch | eadfec212ade7724688e4455e59157c9c53f0c89 | [
"MIT"
] | null | null | null | from .accuracy import *
| 12 | 23 | 0.75 | 3 | 24 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aeae7a0be4300f395aae0415517a6f15df8c8716 | 84 | py | Python | vstreamer/application/__init__.py | artudi54/video-streamer | 66e5e722ed66abe5877488f177c0ac4f13325382 | [
"MIT"
] | 2 | 2019-10-08T10:49:52.000Z | 2021-10-01T11:26:31.000Z | vstreamer/application/__init__.py | artudi54/video-streamer | 66e5e722ed66abe5877488f177c0ac4f13325382 | [
"MIT"
] | 1 | 2019-05-16T13:48:29.000Z | 2019-05-16T13:48:49.000Z | vstreamer/application/__init__.py | artudi54/video-streamer | 66e5e722ed66abe5877488f177c0ac4f13325382 | [
"MIT"
] | 1 | 2019-10-08T10:49:56.000Z | 2019-10-08T10:49:56.000Z | from vstreamer.application.VideoStreamerApplication import VideoStreamerApplication
| 42 | 83 | 0.928571 | 6 | 84 | 13 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 84 | 1 | 84 | 84 | 0.975 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aed0b994b2122aeed0f53408fe72a094aab473a8 | 2,502 | py | Python | karp5/tests/unit_tests/domain/services/test_search_service.py | spraakbanken/karp-backend-v5 | bfca9d0f29a1243ee8d817c6a7db8b30a7da1097 | [
"MIT"
] | 4 | 2018-01-09T10:20:22.000Z | 2019-11-21T12:26:56.000Z | karp5/tests/unit_tests/domain/services/test_search_service.py | spraakbanken/karp-backend-v5 | bfca9d0f29a1243ee8d817c6a7db8b30a7da1097 | [
"MIT"
] | 44 | 2018-03-23T13:59:13.000Z | 2022-03-29T06:03:17.000Z | karp5/tests/unit_tests/domain/services/test_search_service.py | spraakbanken/karp-backend-v5 | bfca9d0f29a1243ee8d817c6a7db8b30a7da1097 | [
"MIT"
] | 2 | 2018-01-07T12:08:32.000Z | 2019-08-21T08:05:17.000Z | from unittest import mock
import pytest
import elasticsearch_dsl as es_dsl
from karp5.domain.services import search_service
@pytest.mark.parametrize(
"from_,size,expected_calls",
[
# (None, None, [mock.call.extra(from_=0, size=0), mock.call.execute().to_dict()]),
(
0,
25,
[
mock.call.extra(size=25),
mock.call.extra().execute(),
mock.call.extra().execute().to_dict(),
],
),
(
0,
25,
[
mock.call.extra(from_=0, size=25),
mock.call.extra().execute(),
mock.call.extra().execute().to_dict(),
],
),
],
)
def test_execute_query_execute(from_, size, expected_calls):
scan_limit = 1000
es_search_mock = mock.Mock(name="es_search", spec=es_dsl.Search)
# es_search_params_mock = mock.Mock(name="es_search_params", spec=es_dsl.Search)
# es_search_mock.params.return_value = es_search_params_mock
# es_search_params_mock.scan.return_value = []
with mock.patch("karp5.config.conf_mgr") as conf_mgr_mock:
conf_mgr_mock.app_config.return_value.SCAN_LIMIT = scan_limit
# conf_mgr_mock.__getitem__.return_value = conf_mgr_mock
search_service.execute_query(es_search_mock, from_=from_, size=size)
# assert es_search_mock.mock_calls == expected_calls
@pytest.mark.parametrize(
"from_,size,expected_calls",
[
(
5,
1200,
[
mock.call.extra(from_=0, size=0),
mock.call.extra().execute(),
mock.call.params(preserve_order=True, scroll="5m"),
],
),
],
)
def test_execute_query_scan(from_, size, expected_calls):
scan_limit = 1000
es_search_mock = mock.Mock(name="es_search", spec=es_dsl.Search)
es_search_params_mock = mock.Mock(name="es_search_params", spec=es_dsl.Search)
es_search_mock.params.return_value = es_search_params_mock
es_search_params_mock.scan.return_value = []
with mock.patch("karp5.config.conf_mgr") as conf_mgr_mock:
conf_mgr_mock.app_config.return_value.SCAN_LIMIT = scan_limit
# conf_mgr_mock.__getitem__.return_value = conf_mgr_mock
search_service.execute_query(es_search_mock, from_=from_, size=size)
# assert es_search_mock.mock_calls == expected_calls
# assert es_search_params_mock.mock_calls == [mock.call.scan()]
| 33.36 | 90 | 0.631894 | 324 | 2,502 | 4.490741 | 0.16358 | 0.104467 | 0.080412 | 0.086598 | 0.84811 | 0.83299 | 0.815808 | 0.758076 | 0.727148 | 0.684536 | 0 | 0.017121 | 0.252998 | 2,502 | 74 | 91 | 33.810811 | 0.76137 | 0.214628 | 0 | 0.553571 | 0 | 0 | 0.065473 | 0.047059 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.071429 | 0 | 0.107143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aeec03f1b68a67a7fc8a39be46d8330c0e24c222 | 198 | py | Python | 01-Contracts/scripts/deploy.py | iamsahu/bonsai-banks | 2728bf17ea37c2b82216fd4bad0d45d79af06742 | [
"MIT"
] | 4 | 2021-06-02T15:44:04.000Z | 2021-07-13T12:23:48.000Z | 01-Contracts/scripts/deploy.py | iamsahu/bonsai-banks | 2728bf17ea37c2b82216fd4bad0d45d79af06742 | [
"MIT"
] | 1 | 2021-06-10T00:16:45.000Z | 2021-06-10T00:16:45.000Z | 01-Contracts/scripts/deploy.py | iamsahu/bonsai-banks | 2728bf17ea37c2b82216fd4bad0d45d79af06742 | [
"MIT"
] | 2 | 2021-06-08T18:03:09.000Z | 2021-06-29T09:49:36.000Z | from brownie import BonsaiBank, accounts
def main():
# accounts[0].deploy(AdvisoryToken)
# acct = accounts.load('test_account')
BonsaiBank.deploy(accounts[0],{'from': accounts[0]})
| 28.285714 | 56 | 0.686869 | 23 | 198 | 5.869565 | 0.608696 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018072 | 0.161616 | 198 | 7 | 57 | 28.285714 | 0.795181 | 0.353535 | 0 | 0 | 0 | 0 | 0.031746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4e1e965d9c707810c3f4f74a37c7b3f72c5d6142 | 25 | py | Python | things/__init__.py | ryancollingwood/testfps | d234737e900c4b1904ff62ff579cb4016ed2cde9 | [
"MIT"
] | null | null | null | things/__init__.py | ryancollingwood/testfps | d234737e900c4b1904ff62ff579cb4016ed2cde9 | [
"MIT"
] | null | null | null | things/__init__.py | ryancollingwood/testfps | d234737e900c4b1904ff62ff579cb4016ed2cde9 | [
"MIT"
] | null | null | null | from .thing import Thing
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9d7b2c59d2759527115620fc84e08badb1c39361 | 85 | py | Python | tests/test_djangito.py | mechanicbuddy/djangito | 07c08a83c57577cbf945bba461219bc0ef2a7695 | [
"Apache-2.0"
] | null | null | null | tests/test_djangito.py | mechanicbuddy/djangito | 07c08a83c57577cbf945bba461219bc0ef2a7695 | [
"Apache-2.0"
] | null | null | null | tests/test_djangito.py | mechanicbuddy/djangito | 07c08a83c57577cbf945bba461219bc0ef2a7695 | [
"Apache-2.0"
] | null | null | null | from djangito import djangito
def test_djangito():
assert djangito is not None
| 14.166667 | 31 | 0.764706 | 12 | 85 | 5.333333 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 85 | 5 | 32 | 17 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9df5aff66335b456ccace608fb61c09e77ef4706 | 216 | py | Python | gens/__init__.py | kosyachniy/gens | 832d677f6c0d91c072119db4c407298c00036437 | [
"MIT"
] | null | null | null | gens/__init__.py | kosyachniy/gens | 832d677f6c0d91c072119db4c407298c00036437 | [
"MIT"
] | null | null | null | gens/__init__.py | kosyachniy/gens | 832d677f6c0d91c072119db4c407298c00036437 | [
"MIT"
] | null | null | null | """
Initializing the Python package
"""
from .main import generate, generate_id, generate_password
__version__ = '0.1'
__all__ = (
'__version__',
'generate',
'generate_id',
'generate_password',
)
| 13.5 | 58 | 0.671296 | 22 | 216 | 5.863636 | 0.636364 | 0.248062 | 0.27907 | 0.403101 | 0.527132 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011561 | 0.199074 | 216 | 15 | 59 | 14.4 | 0.734104 | 0.143519 | 0 | 0 | 1 | 0 | 0.282486 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.25 | 0.125 | 0 | 0.125 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
d19306cd3fa5ab8cc2282b6579adf60ec91c7d0d | 101 | py | Python | src/desymbolize.py | humble-goat/csv-bridge-to-erp | 9a7ca78f5e49b5a1a465dd65e9a98a0c50e5293c | [
"MIT"
] | null | null | null | src/desymbolize.py | humble-goat/csv-bridge-to-erp | 9a7ca78f5e49b5a1a465dd65e9a98a0c50e5293c | [
"MIT"
] | null | null | null | src/desymbolize.py | humble-goat/csv-bridge-to-erp | 9a7ca78f5e49b5a1a465dd65e9a98a0c50e5293c | [
"MIT"
] | null | null | null | def lower_remover(input):
return input.replace('/', '').replace('%', '').replace('.', '').lower() | 50.5 | 75 | 0.574257 | 10 | 101 | 5.7 | 0.6 | 0.491228 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09901 | 101 | 2 | 75 | 50.5 | 0.626374 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
d1c64552f4dc5c44b2f08705cccc50e3359d88e9 | 39 | py | Python | json_schema_to_dash_forms/__init__.py | catalystneuro/json-schema-to-dash-forms | a6e83a02f3ac2dd0ec2c7d09ec4327f3c6512ee4 | [
"MIT"
] | 6 | 2020-11-11T12:05:47.000Z | 2022-03-16T21:33:18.000Z | json_schema_to_dash_forms/__init__.py | catalystneuro/json-schema-to-dash-forms | a6e83a02f3ac2dd0ec2c7d09ec4327f3c6512ee4 | [
"MIT"
] | 10 | 2020-10-23T15:31:35.000Z | 2021-08-09T09:06:59.000Z | json_schema_to_dash_forms/__init__.py | catalystneuro/json-schema-to-dash-forms | a6e83a02f3ac2dd0ec2c7d09ec4327f3c6512ee4 | [
"MIT"
] | null | null | null | from .forms import SchemaFormContainer
| 19.5 | 38 | 0.871795 | 4 | 39 | 8.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
06182fc6a105d13a39e8b7e516711a34d9105430 | 3,755 | py | Python | tests/test_advanced_config.py | simonwiles/xmltotabular | 13c8b60e383aa16db662c744e57578f2b7e2ed57 | [
"MIT"
] | 2 | 2020-09-26T23:57:00.000Z | 2021-06-16T23:59:46.000Z | tests/test_advanced_config.py | simonwiles/xmltotabular | 13c8b60e383aa16db662c744e57578f2b7e2ed57 | [
"MIT"
] | 23 | 2020-11-09T05:43:50.000Z | 2021-10-21T19:29:47.000Z | tests/test_advanced_config.py | simonwiles/xmltotabular | 13c8b60e383aa16db662c744e57578f2b7e2ed57 | [
"MIT"
] | null | null | null | import yaml
from xmltotabular import XmlDocToTabular
def test_concatenation_of_multiple_results():
"""Tests that the <joiner> syntax is properly implemented."""
config = yaml.safe_load(
r"""
album:
<entity>: album
<fields>:
name: name
artist: artist
released: released
label: label
genre: genre
description/p:
<fieldname>: description
<joiner>: "\n"
"""
)
xml = """\
<?xml version="1.0" encoding="UTF-8"?>
<album>
<name>Five Leaves Left</name>
<artist>Nick Drake</artist>
<released>1969</released>
<label>Island</label>
<genre>Folk</genre>
<description>
<p>
Five Leaves Left was recorded between July 1968 and June 1969 at Sound Techniques
in London, England. Engineer John Wood recalled that "[Drake] would track live,
singing and playing along with the string section" without the use of any
overdubbing. For the song "River Man", producer Joe Boyd described Drake playing
on a stool in the centre of the studio while surrounded by a semi-circle of
instruments. The studio's environment was also an important factor as it had
multiple levels to it which enabled the creation of interesting sounds and
atmospheres.
</p>
<p>
Among his various backing musicians, Drake was accompanied by Richard Thompson
from Fairport Convention and Danny Thompson of Pentangle. Robert Kirby, a friend
of Drake's from his youth, arranged the string instruments for several tracks
while Harry Robinson arranged the strings for "River Man". The title of the album
is a reference to the old Rizla cigarette papers packet, which used to contain a
printed note near the end saying "Only five leaves left".
</p>
</description>
</album>
"""
docTransformer = XmlDocToTabular(config)
assert docTransformer.process_doc(xml) == {
"album": [
{
"id": "None_0",
"name": "Five Leaves Left",
"artist": "Nick Drake",
"released": "1969",
"label": "Island",
"genre": "Folk",
"description": (
"Five Leaves Left was recorded between July 1968 and June "
"1969 at Sound Techniques in London, England. Engineer John Wood "
'recalled that "[Drake] would track live, singing and playing along '
'with the string section" without the use of any overdubbing. For '
'the song "River Man", producer Joe Boyd described Drake playing on '
"a stool in the centre of the studio while surrounded by a semi-"
"circle of instruments. The studio's environment was also an "
"important factor as it had multiple levels to it which enabled the "
"creation of interesting sounds and atmospheres.\nAmong his various "
"backing musicians, Drake was accompanied by Richard Thompson from "
"Fairport Convention and Danny Thompson of Pentangle. Robert Kirby, "
"a friend of Drake's from his youth, arranged the string "
"instruments for several tracks while Harry Robinson arranged the "
'strings for "River Man". The title of the album is a reference to '
"the old Rizla cigarette papers packet, which used to contain a "
'printed note near the end saying "Only five leaves left".'
),
}
]
}
| 42.670455 | 89 | 0.595206 | 441 | 3,755 | 5.052154 | 0.362812 | 0.02693 | 0.037702 | 0.016158 | 0.736086 | 0.736086 | 0.736086 | 0.736086 | 0.736086 | 0.736086 | 0 | 0.01124 | 0.336618 | 3,755 | 87 | 90 | 43.16092 | 0.883179 | 0.014647 | 0 | 0.059701 | 0 | 0 | 0.74145 | 0.013561 | 0 | 0 | 0 | 0 | 0.014925 | 1 | 0.014925 | false | 0 | 0.059701 | 0 | 0.074627 | 0.029851 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ae22897cf751d0f1e94673659541453e6e24e05d | 2,541 | py | Python | parkkeeper/migrations/0007_auto_20151207_0812.py | telminov/django-park-keeper | a79489e2cd584e60fb1c49f849aa7bdbc1dfb4bc | [
"MIT"
] | 4 | 2015-11-07T11:07:37.000Z | 2017-07-03T06:24:04.000Z | parkkeeper/migrations/0007_auto_20151207_0812.py | telminov/django-park-keeper | a79489e2cd584e60fb1c49f849aa7bdbc1dfb4bc | [
"MIT"
] | null | null | null | parkkeeper/migrations/0007_auto_20151207_0812.py | telminov/django-park-keeper | a79489e2cd584e60fb1c49f849aa7bdbc1dfb4bc | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9 on 2015-12-07 08:12
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('parkkeeper', '0006_auto_20151115_1303'),
]
operations = [
migrations.AddField(
model_name='monitschedule',
name='cron',
field=models.CharField(default='', help_text='* * * * *', max_length=50, verbose_name='Cron-style schedule'),
),
migrations.AddField(
model_name='workschedule',
name='cron',
field=models.CharField(default='', help_text='* * * * *', max_length=50, verbose_name='Cron-style schedule'),
),
migrations.AlterField(
model_name='monit',
name='worker_type',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='monits', to='parkkeeper.WorkerType'),
),
migrations.AlterField(
model_name='monitschedule',
name='count',
field=models.IntegerField(blank=True, default=1, null=True),
),
migrations.AlterField(
model_name='monitschedule',
name='interval',
field=models.IntegerField(blank=True, default=1, null=True),
),
migrations.AlterField(
model_name='monitschedule',
name='period',
field=models.IntegerField(editable=False, help_text='in seconds', null=True),
),
migrations.AlterField(
model_name='work',
name='worker_type',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='works', to='parkkeeper.WorkerType'),
),
migrations.AlterField(
model_name='workertype',
name='port',
field=models.IntegerField(unique=True),
),
migrations.AlterField(
model_name='workschedule',
name='count',
field=models.IntegerField(blank=True, default=1, null=True),
),
migrations.AlterField(
model_name='workschedule',
name='interval',
field=models.IntegerField(blank=True, default=1, null=True),
),
migrations.AlterField(
model_name='workschedule',
name='period',
field=models.IntegerField(editable=False, help_text='in seconds', null=True),
),
]
| 35.291667 | 132 | 0.586383 | 243 | 2,541 | 5.995885 | 0.304527 | 0.067948 | 0.154427 | 0.179135 | 0.741935 | 0.741935 | 0.713109 | 0.612217 | 0.612217 | 0.612217 | 0 | 0.021488 | 0.285714 | 2,541 | 71 | 133 | 35.788732 | 0.781267 | 0.02558 | 0 | 0.75 | 1 | 0 | 0.142742 | 0.026284 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.046875 | 0 | 0.09375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8818426d42cf53123d8bc008f561d224997fa6d6 | 28 | py | Python | 6 kyu/A disguised sequence I.py | mwk0408/codewars_solutions | 9b4f502b5f159e68024d494e19a96a226acad5e5 | [
"MIT"
] | 6 | 2020-09-03T09:32:25.000Z | 2020-12-07T04:10:01.000Z | 6 kyu/A disguised sequence I.py | mwk0408/codewars_solutions | 9b4f502b5f159e68024d494e19a96a226acad5e5 | [
"MIT"
] | 1 | 2021-12-13T15:30:21.000Z | 2021-12-13T15:30:21.000Z | 6 kyu/A disguised sequence I.py | mwk0408/codewars_solutions | 9b4f502b5f159e68024d494e19a96a226acad5e5 | [
"MIT"
] | null | null | null | def fcn (n):
return 2**n | 14 | 15 | 0.535714 | 6 | 28 | 2.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0.285714 | 28 | 2 | 15 | 14 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
8870d16a6991828cc0e5f60be3461c08726b9245 | 50 | py | Python | PlaylistDownloader/__init__.py | justin-gerhardt/PlaylistDownloader | e9d5ae849c440f9c6e569e5ede65996238428650 | [
"Unlicense"
] | null | null | null | PlaylistDownloader/__init__.py | justin-gerhardt/PlaylistDownloader | e9d5ae849c440f9c6e569e5ede65996238428650 | [
"Unlicense"
] | null | null | null | PlaylistDownloader/__init__.py | justin-gerhardt/PlaylistDownloader | e9d5ae849c440f9c6e569e5ede65996238428650 | [
"Unlicense"
] | null | null | null | from .PlaylistDownloader import PlaylistDownloader | 50 | 50 | 0.92 | 4 | 50 | 11.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06 | 50 | 1 | 50 | 50 | 0.978723 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ee810abb0fde0e8e765cab3ad6c072a5cfeacf14 | 11,621 | py | Python | security_monkey/tests/sso/test_okta.py | boladmin/security_monkey | c28592ffd518fa399527d26262683fc860c30eef | [
"Apache-2.0"
] | 4,258 | 2015-01-04T22:06:10.000Z | 2022-03-31T23:40:27.000Z | security_monkey/tests/sso/test_okta.py | boladmin/security_monkey | c28592ffd518fa399527d26262683fc860c30eef | [
"Apache-2.0"
] | 1,013 | 2015-01-12T02:31:03.000Z | 2021-09-16T19:09:03.000Z | security_monkey/tests/sso/test_okta.py | boladmin/security_monkey | c28592ffd518fa399527d26262683fc860c30eef | [
"Apache-2.0"
] | 965 | 2015-01-11T21:06:07.000Z | 2022-03-17T16:53:57.000Z | import mock
from security_monkey.sso.views import Okta
from security_monkey.tests import SecurityMonkeyTestCase
RETURN_TO = 'http://localhost:5000'
VALID_OKTA_AUTH_RESPONSE = {
'code': 'somecode',
'state': 'clientId,0oagdrujcfsH6mYQz0h7,redirectUri,http://localhost:5000/api/1/auth/okta,return_to,{}'.format(RETURN_TO),
}
INVALID_OKTA_AUTH_RESPONSE = {
'code': 'somecode',
'state': 'clientId,<invalid>,redirectUri,http://localhost:5000/api/1/auth/okta,return_to,http://localhost:5000',
}
VALID_JWKS_RESPONSE = {'keys': [{'use': 'sig', 'e': 'AQAB', 'kty': 'RSA', 'alg': 'RS256', 'n': 'h3E3aglIzKuXZOtH-_SM1gbtBE1A76kmIyqx6bnSUoOUZQOfP3QjguZGOzMCRjbt2Q3MqZyQWiJ-m99yIzoGyA0hQ-TerEmBxaXrqyPBg_ApG4skGhVTzpZtds2cqLQCb1LXuIc9gD41KTJDSmzhNui9GwHcIrpGQ8uEQNxCjikIKSYflZsr6rBLP7pbSx0ApFdrmNZuQFJwaTF2XIxEmZ3uHPKfERdZFZyFyWjtv-jG_DvPLBNFS6teRx-xeGoSiC-8uVf9zPWLiu0vSKGInQKoQ4iJg38qqcCCV1jNzIs15m3ApJshdcNyxTz7uUtrK2ZW9lj1rL6jA4-RICpG-Q', 'kid': 't_ox-6D8CnBOEzY8OAYVySBWicE4FRlrMkFWqaP7bxI'}, {'use': 'sig', 'e': 'AQAB', 'kty': 'RSA', 'alg': 'RS256', 'n': 'iVJQL1Mjj1_7pe8RvVeNJKt_8h_o00MUZBAsY9MhEaAcMDrpE77bd5Y-kG4ybpg1syPlm_SF-eZ0sm7PQXNJsvNhZcCnBToU7zDHUzTg5j2bsClB_ydLKlb33_ZzkEJC34g_H4VBmkHrpv98elXhIvLyCfPpRbqEzWIEeIj5tWVgfQHrnbiejb27ji9fhJX5u89f5M0yZL9s-S72PUtEkLgeklpjV1vM2zxZHfjez1zw9T7_mGLnaO_hF2EtWhFgjg__lTCWOY0nDVY4Ev-MjM2ayqDU7LRHoglwpF7YazVdyj79MUhl1nrlNSMjeP5muymjJF5M_vmpbH9vDqxq5Q', 'kid': 'uCnP2kWY8wysUZItLUzNW-Xkcv2t8mKL4W8ffAHc69Q'}]}
VALID_HEADER_DATA = [{'alg': 'RS256', 'kid': 't_ox-6D8CnBOEzY8OAYVySBWicE4FRlrMkFWqaP7bxI'}]
INVALID_ACCESS_TOKEN_RESPONSE = {'access_token': 'eyJraWQiO0X294LTZEOENuQk9Felk4T0FZVnlTQldpY0U0RlJsck1rRldxYVA3YnhJIiwiYWxnIjoiUlMyNTYifQ.eyJ2ZXIiOjEsImp0aSI6IkFULqVmJnOTZzN05iMHNzSlNmM2NDeTAiLCJpc3MiOiJodHRwczovL2Rldi00MzM0OTkub2t0YXByZXZpZXcuY29tL29hdXRoMi9kZWZhdWx0IiwiYXVkIjoiYXBpOi8vZGVmYXVsdCIsImlhdCI6MTUzODA3MzUxMiwiZXhwIjoxNTM4MDc3MTEyLCJjaWQiOiIwb2FnZHJ1amNmc0g2bVlRejBoNyIsInVpZCI6IjAwdWViZGI4MWRETWVNdGRCMGg3Iiwic2NwIjpbIm9wZW5pZCIsImVtYWlsIl0sInN1YiI6InRlc3R0ZXN0QGV4YW1wbGUuY29tIn0.b7x93CrD9JfxGn89wIsXFrFM9x-SJERlMzFmh5-FZKOnKRYoQZ5phN4V_rHfnysCwKIGkn0gyZ10znA5gRvdDxbROlm07YbZ5zbs9bis2gjoAVmSEwsqHbEHi7rD9k0lRJ-u1QiuhkMpm7Uhi3j_-DlXF4fnL7StB6MxJe00dVzdr5n-M6qt1KIWAjn6LinG0_0ndSbe1bHl4hMOPER-z-gHAh0QdKEszv89tuFYuK9upvafI5Hv0NYQZG6STVjNBRYO6kGt6X7Lto7dUjnMdXiwD93M8Bt1vNNfX6uvufX4qQH49Q9y7kwv0C6eu4fXESiGmP9jQMbK6Nh2Bcr12Q', 'token_type': 'Bearer', 'expires_in': '3600', 'id_token': 'eyJraWQiOiJ0X294LTZEOENuQk9Felk4T0FZVnlTQldpY0U0RlJsck1rRldxYVA3YnhJIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiIwMHVlYmRiODFkRE1lTXRkQjBoNyIsImVtYWlsIjoidGVzdHRlc3RAZXhhbXBsZS5jb20iLCJ2ZXIiOjEsImlzcyI6Imh0dHBzOi8vZGV2LTQzMzQ5OS5va3RhcHJldmlldy5jb20vb2F1dGgyL2RlZmF1bHQiLCJhdWQiOiIwb2FnZHJ1amNmc0g2bVlRejBoNyIsImlhdCI6MTUzODA3MzUxMywiZXhwIjoxNTM4MDc3MTEzLCJqdGkiOiJJRC40R0Q3NnVNTEJhU1pIbnVDTVB1a1JNV2NnNGNUYlZSZzNPVHZMTWtMSTM0IiwiYW1yIjpbInB3ZCJdLCJpZHAiOiIwMG9kbGkyZmxuTHhGSHJUMjBoNyIsIm5vbmNlIjoiNjkyOTAwMGY1YzNiNDIxNWI5NzM4YmJiMzYyNWIwNDAiLCJhdXRoX3RpbWUiOjE1MzgwNzE1MDUsImF0X2hhc2giOiJNRjY1U1o1Xy1yZXVaOUJoWGFtbWhnIiwiZmlyc3ROYW1lIjoiVGVzdCIsInVpZCI6IjAwdWViZGI4MWRETWVNdGRCMGg3IiwibGFzdE5hbWUiOiJUZXN0In0.ANqUGlxrcWLPrs8VZTMdvW8VbtITqfhK4OQqtZ5EB7YYv6wN2YtGhuBy1dD4OXBk0Die_5ykcdLSHNT5GQSd3QpQDFxsRe7b30y7hw3OgRHH8zp0jCrX-NVAvJYAFfBc2hh7Q3RXipl4xXxNZqJIA', 'scope': 'openid email'}
VALID_EXPIRED_ACCESS_TOKEN_RESPONSE = {'access_token': 'eyJraWQiOiJ0X294LTZEOENuQk9Felk4T0FZVnlTQldpY0U0RlJsck1rRldxYVA3YnhJIiwiYWxnIjoiUlMyNTYifQ.eyJ2ZXIiOjEsImp0aSI6IkFULkZTamk4d001dGN6VDQ4bXlvTHJSTDNqVmJnOTZzN05iMHNzSlNmM2NDeTAiLCJpc3MiOiJodHRwczovL2Rldi00MzM0OTkub2t0YXByZXZpZXcuY29tL29hdXRoMi9kZWZhdWx0IiwiYXVkIjoiYXBpOi8vZGVmYXVsdCIsImlhdCI6MTUzODA3MzUxMiwiZXhwIjoxNTM4MDc3MTEyLCJjaWQiOiIwb2FnZHJ1amNmc0g2bVlRejBoNyIsInVpZCI6IjAwdWViZGI4MWRETWVNdGRCMGg3Iiwic2NwIjpbIm9wZW5pZCIsImVtYWlsIl0sInN1YiI6InRlc3R0ZXN0QGV4YW1wbGUuY29tIn0.b7x93CrD9JfxGn89wIsXFrFM9x-SJERlMzFmh5-FZKOnKRYoQZ5phN4V_rHfnysCwKIGkn0gyZ10znA5gRvdDxbROlm07YbZ5zbs9bis2gjoAVmSEwsqHbEHi7rD9k0lRJ-u1QiuhkMpm7Uhi3j_-DlXF4fnL7StB6MxJe00dVzdr5n-M6qt1KIWAjn6LinG0_0ndSbe1bHl4hMOPER-z-gHAh0QdKEszv89tuFYuK9upvafI5Hv0NYQZG6STVjNBRYO6kGt6X7Lto7dUjnMdXiwD93M8Bt1vNNfX6uvufX4qQH49Q9y7kwv0C6eu4fXESiGmP9jQMbK6Nh2Bcr12Q', 'token_type': 'Bearer', 'expires_in': '3600', 'id_token': 'eyJraWQiOiJ0X294LTZEOENuQk9Felk4T0FZVnlTQldpY0U0RlJsck1rRldxYVA3YnhJIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiIwMHVlYmRiODFkRE1lTXRkQjBoNyIsImVtYWlsIjoidGVzdHRlc3RAZXhhbXBsZS5jb20iLCJ2ZXIiOjEsImlzcyI6Imh0dHBzOi8vZGV2LTQzMzQ5OS5va3RhcHJldmlldy5jb20vb2F1dGgyL2RlZmF1bHQiLCJhdWQiOiIwb2FnZHJ1amNmc0g2bVlRejBoNyIsImlhdCI6MTUzODA3MzUxMywiZXhwIjoxNTM4MDc3MTEzLCJqdGkiOiJJRC40R0Q3NnVNTEJhU1pIbnVDTVB1a1JNV2NnNGNUYlZSZzNPVHZMTWtMSTM0IiwiYW1yIjpbInB3ZCJdLCJpZHAiOiIwMG9kbGkyZmxuTHhGSHJUMjBoNyIsIm5vbmNlIjoiNjkyOTAwMGY1YzNiNDIxNWI5NzM4YmJiMzYyNWIwNDAiLCJhdXRoX3RpbWUiOjE1MzgwNzE1MDUsImF0X2hhc2giOiJNRjY1U1o1Xy1yZXVaOUJoWGFtbWhnIiwiZmlyc3ROYW1lIjoiVGVzdCIsInVpZCI6IjAwdWViZGI4MWRETWVNdGRCMGg3IiwibGFzdE5hbWUiOiJUZXN0In0.ANqUGlxrcWLPrs8VZTMdvW8VbtITqfhK4OQqtZ5EB7YYv6wN2YtGhuBy1dD4OXBk0Die_5ykcdLSHNT5GQSd3QpQDFxsRe7b30y7hw3OgRHH8zp0jCrX-NVAvJYAFfBc2hh7Q3RXipl4xXxG3zZ9w2M5sTjJp3dSgWVlv9k6ADVN_MFg8EIaf6ivfkj9D_OMSJ_S6v23Zg87sInlk7KdCP8rdcYkrMuaiJdozfpCJJsRthzql9rJZbgjYToL7_zzAGWdWg7a4JWqtob4s0kkZij_W0Eu0_C9qWekkfiUAJhE1Rl08JPS_5AA5_Uivw2hQewvvDpneUHkVGp3NZqJIA', 'scope': 'openid email'}
VALID_EXPIRED_ENCODED_ID_TOKEN = 'eyJraWQiOiJ0X294LTZEOENuQk9Felk4T0FZVnlTQldpY0U0RlJsck1rRldxYVA3YnhJIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiIwMHVlYmRiODFkRE1lTXRkQjBoNyIsImVtYWlsIjoidGVzdHRlc3RAZXhhbXBsZS5jb20iLCJ2ZXIiOjEsImlzcyI6Imh0dHBzOi8vZGV2LTQzMzQ5OS5va3RhcHJldmlldy5jb20vb2F1dGgyL2RlZmF1bHQiLCJhdWQiOiIwb2FnZHJ1amNmc0g2bVlRejBoNyIsImlhdCI6MTUzODA3MzUxMywiZXhwIjoxNTM4MDc3MTEzLCJqdGkiOiJJRC40R0Q3NnVNTEJhU1pIbnVDTVB1a1JNV2NnNGNUYlZSZzNPVHZMTWtMSTM0IiwiYW1yIjpbInB3ZCJdLCJpZHAiOiIwMG9kbGkyZmxuTHhGSHJUMjBoNyIsIm5vbmNlIjoiNjkyOTAwMGY1YzNiNDIxNWI5NzM4YmJiMzYyNWIwNDAiLCJhdXRoX3RpbWUiOjE1MzgwNzE1MDUsImF0X2hhc2giOiJNRjY1U1o1Xy1yZXVaOUJoWGFtbWhnIiwiZmlyc3ROYW1lIjoiVGVzdCIsInVpZCI6IjAwdWViZGI4MWRETWVNdGRCMGg3IiwibGFzdE5hbWUiOiJUZXN0In0.ANqUGlxrcWLPrs8VZTMdvW8VbtITqfhK4OQqtZ5EB7YYv6wN2YtGhuBy1dD4OXBk0Die_5ykcdLSHNT5GQSd3QpQDFxsRe7b30y7hw3OgRHH8zp0jCrX-NVAvJYAFfBc2hh7Q3RXipl4xXxG3zZ9w2M5sTjJp3dSgWVlv9k6ADVN_MFg8EIaf6ivfkj9D_OMSJ_S6v23Zg87sInlk7KdCP8rdcYkrMuaiJdozfpCJJsRthzql9rJZbgjYToL7_zzAGWdWg7a4JWqtob4s0kkZij_W0Eu0_C9qWekkfiUAJhE1Rl08JPS_5AA5_Uivw2hQewvvDpneUHkVGp3NZqJIA'
VALID_EXPIRED_DECODED_ID_TOKEN = {'nonce': '6929000f5c3b4215b9738bbb3625b040', 'ver': '1', 'aud': '0oagdrujcfsH6mYQz0h7', 'firstName': 'Test', 'iss': 'https://dev-433499.oktapreview.com/oauth2/default', 'lastName': 'Test', 'idp': '00odli2flnLxFHrT20h7', 'at_hash': 'MF65SZ5_-reuZ9BhXammhg', 'jti': 'ID.4GD76uMLBaSZHnuCMPukRMWcg4cTbVRg3OTvLMkLI34', 'exp': '1538077113', 'auth_time': '1538071505', 'uid': '00uebdb81dDMeMtdB0h7', 'iat': '1538073513', 'amr': ['pwd'], 'email': 'testtest@example.com', 'sub': '00uebdb81dDMeMtdB0h7'}
class OktaTestCase(SecurityMonkeyTestCase):
@mock.patch('flask_restful.reqparse.RequestParser.parse_args')
@mock.patch('security_monkey.sso.views.requests.post')
@mock.patch('security_monkey.sso.views.validate_redirect_url')
@mock.patch('security_monkey.sso.views.requests.get')
@mock.patch('security_monkey.sso.views.fetch_token_header_payload')
@mock.patch('security_monkey.sso.views.jwt.decode')
@mock.patch('security_monkey.sso.views.login_user')
def test_successful_authenticatoin_redirects(self, mock_login, mock_jwt_decode, mock_header_payload, mock_get_jwks, mock_redirect_validation, mock_fetch_token, mock_parse_args):
"""Test that given the Okta tokens are valid and the flow completes, it returns a 302 to the return to"""
mock_parse_args.return_value = INVALID_OKTA_AUTH_RESPONSE
mock_fetch_token.return_value.json.return_value = INVALID_ACCESS_TOKEN_RESPONSE
mock_get_jwks.return_value.json.return_value = VALID_JWKS_RESPONSE
mock_header_payload.return_value = VALID_HEADER_DATA
mock_jwt_decode.return_value = VALID_EXPIRED_DECODED_ID_TOKEN # Serves the purpose, 'validated' at this point.
config_patches = {
'ACTIVE_PROVIDERS': ['okta'],
'OKTA_CLIENT_SECERT': '5SaHHXe8bHlxjjjcpM7n8j7DEjil7IAkUfsOfeSd',
'WEB_PATH': 'http://localhost:5000'
}
with mock.patch.dict(self.app.config, config_patches):
with self.app.app_context():
response = Okta().post()
self.assertEqual(response.status_code, 302)
self.assertEqual(response.location, RETURN_TO)
def test_okta_not_enabled_in_config(self):
config_patches = {
'ACTIVE_PROVIDERS': [''],
}
with mock.patch.dict(self.app.config, config_patches):
with self.app.app_context():
response = Okta().post()
self.assertEqual(response, ('Okta is not enabled in the config. See the ACTIVE_PROVIDERS section.', 404))
@mock.patch('flask_restful.reqparse.RequestParser.parse_args')
@mock.patch('security_monkey.sso.views.requests.post')
@mock.patch('security_monkey.sso.views.validate_redirect_url')
@mock.patch('security_monkey.sso.views.requests.get')
@mock.patch('security_monkey.sso.views.fetch_token_header_payload')
def test_okta_invalid_id_token_expired(self, mock_header_payload, mock_get_jwks, mock_redirect_validation, mock_fetch_token, mock_parse_args):
"""Test that given an expired token, the expected error is returned."""
mock_parse_args.return_value = VALID_OKTA_AUTH_RESPONSE
mock_fetch_token.return_value.json.return_value = VALID_EXPIRED_ACCESS_TOKEN_RESPONSE
mock_get_jwks.return_value.json.return_value = VALID_JWKS_RESPONSE
mock_header_payload.return_value = VALID_HEADER_DATA
config_patches = {
'ACTIVE_PROVIDERS': ['okta'],
'OKTA_CLIENT_SECERT': '5SaHHXe8bHlxjjjcpM7n8j7DEjil7IAkUfsOfeSd',
'WEB_PATH': 'http://localhost:5000'
}
with mock.patch.dict(self.app.config, config_patches):
with self.app.app_context():
response = Okta().post()
self.assertEqual(response, ({"message": "Token has expired"}, 403))
@mock.patch('flask_restful.reqparse.RequestParser.parse_args')
@mock.patch('security_monkey.sso.views.requests.post')
@mock.patch('security_monkey.sso.views.validate_redirect_url')
@mock.patch('security_monkey.sso.views.requests.get')
@mock.patch('security_monkey.sso.views.fetch_token_header_payload')
def test_okta_bad_secret_causes_invalid_id_token_decode_error(self, mock_header_payload, mock_get_jwks, mock_redirect_validation, mock_fetch_token, mock_parse_args):
"""Test that given a bad decode the expected error is returned (causing this by having a garbage token."""
mock_parse_args.return_value = INVALID_OKTA_AUTH_RESPONSE
mock_fetch_token.return_value.json.return_value = INVALID_ACCESS_TOKEN_RESPONSE
mock_get_jwks.return_value.json.return_value = VALID_JWKS_RESPONSE
mock_header_payload.return_value = VALID_HEADER_DATA
config_patches = {
'ACTIVE_PROVIDERS': ['okta'],
'OKTA_CLIENT_SECERT': '5SaHHXe8bHlxjjjcpM7n8j7DEjil7IAkUfsOfeSd',
'WEB_PATH': 'http://localhost:5000'
}
with mock.patch.dict(self.app.config, config_patches):
with self.app.app_context():
response = Okta().post()
self.assertEqual(response, ({"message": "Token is invalid"}, 403))
| 108.607477 | 2,007 | 0.837794 | 850 | 11,621 | 11.121176 | 0.274118 | 0.019994 | 0.026976 | 0.03491 | 0.697662 | 0.67111 | 0.658521 | 0.649847 | 0.649847 | 0.644346 | 0 | 0.085243 | 0.087428 | 11,621 | 106 | 2,008 | 109.632075 | 0.806129 | 0.02702 | 0 | 0.590909 | 0 | 0.022727 | 0.652405 | 0.559306 | 0 | 0 | 0 | 0 | 0.056818 | 1 | 0.045455 | false | 0 | 0.034091 | 0 | 0.090909 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ee820769901c3158687c3be439238ccd481ce292 | 2,573 | py | Python | api/tests/test_message_validator.py | mmedum/limbo | 8aff181a1616bc4941b29fcd75e33e7943a105ea | [
"MIT"
] | 1 | 2019-08-22T03:59:27.000Z | 2019-08-22T03:59:27.000Z | api/tests/test_message_validator.py | mmedum/limbo | 8aff181a1616bc4941b29fcd75e33e7943a105ea | [
"MIT"
] | 1 | 2018-12-23T14:18:40.000Z | 2018-12-23T14:18:40.000Z | api/tests/test_message_validator.py | mmedum/limbo | 8aff181a1616bc4941b29fcd75e33e7943a105ea | [
"MIT"
] | null | null | null | import unittest
import message_validator
class TestMessageValidator(unittest.TestCase):
def test_subject_not_defined(self):
expected_output = {
'Message': 'not submitted',
'Problem': 'no subject defined'
}
expected_status_code = 400
expected_should_send = False
test_body = {
'from': 'test@test.com',
'to': ['test@test.com'],
'message': 'message_test'
}
should_send, msg, status_code = message_validator.validate_body(test_body)
self.assertEqual(should_send, expected_should_send)
self.assertEqual(status_code, expected_status_code)
self.assertEqual(msg, expected_output)
def test_message_not_defined(self):
expected_output = {
'Message': 'not submitted',
'Problem': 'no message defined'
}
expected_status_code = 400
expected_should_send = False
test_body = {
'from': 'test@test.com',
'to': ['test@test.com'],
'subject': 'test'
}
should_send, msg, status_code = message_validator.validate_body(test_body)
self.assertEqual(should_send, expected_should_send)
self.assertEqual(status_code, expected_status_code)
self.assertEqual(msg, expected_output)
def test_to_not_defined(self):
expected_output = {
'Message': 'not submitted',
'Problem': 'no receivers defined'
}
expected_status_code = 400
expected_should_send = False
test_body = {
'from': 'test@test.com',
'subject': 'test',
'message': 'test'
}
should_send, msg, status_code = message_validator.validate_body(test_body)
self.assertEqual(should_send, expected_should_send)
self.assertEqual(status_code, expected_status_code)
self.assertEqual(msg, expected_output)
def test_from_not_defined(self):
expected_output = {
'Message': 'not submitted',
'Problem': 'no from defined'
}
expected_status_code = 400
expected_should_send = False
test_body = {
'to': ['test@test.com'],
'subject': 'test',
'message': 'test'
}
should_send, msg, status_code = message_validator.validate_body(test_body)
self.assertEqual(should_send, expected_should_send)
self.assertEqual(status_code, expected_status_code)
self.assertEqual(msg, expected_output)
| 29.918605 | 82 | 0.607462 | 266 | 2,573 | 5.552632 | 0.120301 | 0.108328 | 0.097495 | 0.05958 | 0.906567 | 0.906567 | 0.900474 | 0.900474 | 0.900474 | 0.900474 | 0 | 0.006619 | 0.295375 | 2,573 | 85 | 83 | 30.270588 | 0.808053 | 0 | 0 | 0.686567 | 0 | 0 | 0.135639 | 0 | 0 | 0 | 0 | 0 | 0.179104 | 1 | 0.059701 | false | 0 | 0.029851 | 0 | 0.104478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ee8c69e1d8e15a0ed00e68e84248539974f3ebc2 | 104 | py | Python | npload.py | dmsehf804/3stream_ROI | 140b77aff56e234f152e01ea87dee24faafd1161 | [
"MIT"
] | 1 | 2019-12-06T09:27:01.000Z | 2019-12-06T09:27:01.000Z | npload.py | dmsehf804/3stream_ROI | 140b77aff56e234f152e01ea87dee24faafd1161 | [
"MIT"
] | null | null | null | npload.py | dmsehf804/3stream_ROI | 140b77aff56e234f152e01ea87dee24faafd1161 | [
"MIT"
] | null | null | null | import numpy as np
print(np.load('data/UCF101_train_rgb/ApplyEyeMakeup/v_ApplyEyeMakeup_g08.npy').shape) | 52 | 85 | 0.846154 | 17 | 104 | 4.941176 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0.038462 | 104 | 2 | 85 | 52 | 0.79 | 0 | 0 | 0 | 0 | 0 | 0.580952 | 0.580952 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
ee8f73e95e871df8b32429393694e108ca1c38a7 | 72 | py | Python | hello.py | dick7/test-git | 359d897d3edd390b14afc28bd9bc50502e087e20 | [
"MIT"
] | null | null | null | hello.py | dick7/test-git | 359d897d3edd390b14afc28bd9bc50502e087e20 | [
"MIT"
] | null | null | null | hello.py | dick7/test-git | 359d897d3edd390b14afc28bd9bc50502e087e20 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# coding=utf-8
print("Hello,world!\nHello,Git!")
| 14.4 | 33 | 0.680556 | 12 | 72 | 4.083333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015152 | 0.083333 | 72 | 4 | 34 | 18 | 0.727273 | 0.458333 | 0 | 0 | 0 | 0 | 0.648649 | 0.648649 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
c9a142a26ecbb10cbfee6297b6accce8219500c7 | 69 | py | Python | zhangzhen/20180326/text8.py | python20180319howmework/homework | c826d7aa4c52f8d22f739feb134d20f0b2c217cd | [
"Apache-2.0"
] | null | null | null | zhangzhen/20180326/text8.py | python20180319howmework/homework | c826d7aa4c52f8d22f739feb134d20f0b2c217cd | [
"Apache-2.0"
] | null | null | null | zhangzhen/20180326/text8.py | python20180319howmework/homework | c826d7aa4c52f8d22f739feb134d20f0b2c217cd | [
"Apache-2.0"
] | null | null | null |
#print(not 1 or 0 and 1 or 3 and 4 or 5 and 6 or 7 and 8 and 9 )
#4
| 17.25 | 64 | 0.608696 | 22 | 69 | 1.909091 | 0.590909 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.23913 | 0.333333 | 69 | 3 | 65 | 23 | 0.673913 | 0.927536 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4e61541c20589b17b2a96ec323f6a9e363ec9b7b | 2,036 | py | Python | tests/test_frontend/test_tensorflow.py | uTensor/utensor_cgen | eccd6859028d0b6a350dced25ea72ff02faaf9ad | [
"Apache-2.0"
] | 49 | 2018-01-06T12:57:56.000Z | 2021-09-03T09:48:32.000Z | tests/test_frontend/test_tensorflow.py | uTensor/utensor_cgen | eccd6859028d0b6a350dced25ea72ff02faaf9ad | [
"Apache-2.0"
] | 101 | 2018-01-16T19:24:21.000Z | 2021-11-10T19:39:33.000Z | tests/test_frontend/test_tensorflow.py | uTensor/utensor_cgen | eccd6859028d0b6a350dced25ea72ff02faaf9ad | [
"Apache-2.0"
] | 32 | 2018-02-15T19:39:50.000Z | 2020-11-26T22:32:05.000Z | import numpy as np
import tensorflow.compat.v1 as tf
def test_scalar_shape():
from utensor_cgen.frontend.tensorflow import GraphDefParser
graph = tf.Graph()
with graph.as_default():
tf.constant(1, dtype=tf.float32, name='x')
parser = GraphDefParser({})
ugraph = parser.parse(graph.as_graph_def(), output_nodes=['x'])
# shape of scalar tensor should be empty list
out_tensor = ugraph.ops_info['x'].output_tensors[0]
assert out_tensor.shape == []
assert out_tensor.dtype is np.dtype('float32')
def test_placeholder_shape():
from utensor_cgen.frontend.tensorflow import GraphDefParser
graph = tf.Graph()
with graph.as_default():
tf.placeholder(dtype=tf.float32, name='x')
parser = GraphDefParser({})
ugraph = parser.parse(graph.as_graph_def(), output_nodes=['x'])
# nondeterministic shape, can be any shape
out_tensor = ugraph.ops_info['x'].output_tensors[0]
assert out_tensor.shape is None
assert out_tensor.dtype is np.dtype('float32')
graph = tf.Graph()
with graph.as_default():
tf.placeholder(dtype=tf.float32, name='x', shape=[None, 5])
parser = GraphDefParser({})
ugraph = parser.parse(graph.as_graph_def(), output_nodes=['x'])
# nondeterministic dimension
out_tensor = ugraph.ops_info['x'].output_tensors[0]
assert out_tensor.shape == [None, 5]
assert out_tensor.dtype is np.dtype('float32')
def test_normal_tensor_shape():
from utensor_cgen.frontend.tensorflow import GraphDefParser
shape = np.random.randint(1, 10, size=(10,)).tolist()
graph = tf.Graph()
with graph.as_default():
tf.constant(np.random.rand(*shape), dtype=tf.float32, name='x')
parser = GraphDefParser({})
ugraph = parser.parse(graph.as_graph_def(), output_nodes=['x'])
# deterministic shape
out_tensor = ugraph.ops_info['x'].output_tensors[0]
assert out_tensor.shape == shape, 'expecting {}, get {}'.format(shape, out_tensor.shape)
assert out_tensor.dtype is np.dtype('float32')
| 37.703704 | 92 | 0.694008 | 279 | 2,036 | 4.896057 | 0.222222 | 0.085652 | 0.087848 | 0.046852 | 0.81552 | 0.81552 | 0.81552 | 0.81552 | 0.746706 | 0.711567 | 0 | 0.01717 | 0.170432 | 2,036 | 53 | 93 | 38.415094 | 0.791593 | 0.064342 | 0 | 0.658537 | 0 | 0 | 0.031579 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 1 | 0.073171 | false | 0 | 0.121951 | 0 | 0.195122 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4e6664d8bc0d6427843b4700dcb4ea38876252b6 | 36 | py | Python | devkit/data/transforms/__init__.py | gwanglee/VisDA2020 | 21fec160b54dafae4ee6ca33d16a24ac307501ae | [
"MIT"
] | 360 | 2020-03-30T07:15:45.000Z | 2022-03-04T14:08:04.000Z | data/transforms/__init__.py | Mawandasmat/APNet | 8ba6e078ff062415b2b2b34115bbadb4cfd6e827 | [
"MIT"
] | 30 | 2020-05-12T11:12:20.000Z | 2021-12-31T05:49:10.000Z | data/transforms/__init__.py | Mawandasmat/APNet | 8ba6e078ff062415b2b2b34115bbadb4cfd6e827 | [
"MIT"
] | 38 | 2020-05-12T05:33:46.000Z | 2022-01-25T22:27:45.000Z | from .build import build_transforms
| 18 | 35 | 0.861111 | 5 | 36 | 6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4ecd133c9ae89a7f3bf354eee21d302b6ba656d2 | 1,111 | py | Python | sls/utils/__init__.py | aws-samples/amazon-ec2-rds-scheduling-tags-api-sls | 89bec948dc1b838e5518eb76ad571da8dea693f4 | [
"MIT-0"
] | null | null | null | sls/utils/__init__.py | aws-samples/amazon-ec2-rds-scheduling-tags-api-sls | 89bec948dc1b838e5518eb76ad571da8dea693f4 | [
"MIT-0"
] | null | null | null | sls/utils/__init__.py | aws-samples/amazon-ec2-rds-scheduling-tags-api-sls | 89bec948dc1b838e5518eb76ad571da8dea693f4 | [
"MIT-0"
] | null | null | null | """
base init file
"""
# pylint:disable=wrong-import-position,wrong-import-order, import-error, W0703
import boto3
def get_boto3_client(service_name, secret_region, credentials):
"""
:param service_name:
:param secret_region:
:param credentials:
:return:
"""
return boto3.client(service_name, secret_region,
aws_access_key_id=credentials.get('AccessKeyId', ''),
aws_secret_access_key=credentials.get('SecretAccessKey', ''),
aws_session_token=credentials.get('SessionToken', '')
)
def get_boto3_resource(service_name, secret_region, credentials):
"""
:param service_name:
:param secret_region:
:param credentials:
:return:
"""
return boto3.resource(service_name, secret_region,
aws_access_key_id=credentials.get('AccessKeyId', ''),
aws_secret_access_key=credentials.get('SecretAccessKey', ''),
aws_session_token=credentials.get('SessionToken', '')
)
| 30.861111 | 87 | 0.60216 | 106 | 1,111 | 6.009434 | 0.301887 | 0.103611 | 0.10675 | 0.144427 | 0.844584 | 0.844584 | 0.784929 | 0.784929 | 0.784929 | 0.784929 | 0 | 0.011364 | 0.287129 | 1,111 | 35 | 88 | 31.742857 | 0.792929 | 0.212421 | 0 | 0.461538 | 0 | 0 | 0.093711 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.076923 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0902e3a8f7b9fc510433891a948a7aa07bd44180 | 135 | py | Python | attacks/__init__.py | sukrutrao/Adversarial-Patch-Training | b7322e6f4d94029ceb0dcb946d2b6852c795990f | [
"Unlicense"
] | 21 | 2020-08-04T12:47:03.000Z | 2022-03-22T09:34:29.000Z | attacks/__init__.py | sukrutrao/Adversarial-Patch-Training | b7322e6f4d94029ceb0dcb946d2b6852c795990f | [
"Unlicense"
] | 3 | 2021-06-08T22:13:00.000Z | 2022-03-12T00:45:16.000Z | attacks/__init__.py | sukrutrao/Adversarial-Patch-Training | b7322e6f4d94029ceb0dcb946d2b6852c795990f | [
"Unlicense"
] | 6 | 2020-08-04T12:47:05.000Z | 2022-02-13T00:58:03.000Z | """
Attacks.
"""
from .attack import *
from .norms import *
from .objectives import *
from .adversarial_patch import AdversarialPatch
| 15 | 47 | 0.748148 | 15 | 135 | 6.666667 | 0.6 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 135 | 8 | 48 | 16.875 | 0.869565 | 0.059259 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
095704b67a31c3c7ae5560a39c309fe1c5f99efc | 64,190 | py | Python | Software/Network plotters/elec_heat_v2g50_OMEGA_plot_v2.py | JonasVind/Master_Project_Code-Plots | f3efea1a30738b119bf6958cc490b940c90e2909 | [
"CC-BY-4.0"
] | null | null | null | Software/Network plotters/elec_heat_v2g50_OMEGA_plot_v2.py | JonasVind/Master_Project_Code-Plots | f3efea1a30738b119bf6958cc490b940c90e2909 | [
"CC-BY-4.0"
] | null | null | null | Software/Network plotters/elec_heat_v2g50_OMEGA_plot_v2.py | JonasVind/Master_Project_Code-Plots | f3efea1a30738b119bf6958cc490b940c90e2909 | [
"CC-BY-4.0"
] | null | null | null | # Import libraries
import os
import sys
import pypsa
import numpy as np
import pandas as pd
import time
import math
# Timer
t0 = time.time() # Start a timer
# Import functions file
sys.path.append(os.path.split(os.getcwd())[0])
from functions_file import *
# Directory of file
directory = os.path.split(os.path.split(os.getcwd())[0])[0] + "\\Data\\elec_heat_v2g50\\"
# File name
file = "postnetwork-elec_heat_v2g50_0.125_0.05.h5"
# Generic file name
titleFileName = file
# Figure path
figurePath = os.path.split(os.path.split(os.getcwd())[0])[0] + "\\Figures\\elec_heat_v2g50\\"
##############################################################################
##############################################################################
################################# Pre Analysis ###############################
##############################################################################
##############################################################################
# ------------------- Curtailment - CO2 constraints (Elec) ------------------#
# Path to save files
path = figurePath + "Pre Analysis\\"
# List of file names
filename_CO2 = ["postnetwork-elec_heat_v2g50_0.125_0.6.h5",
"postnetwork-elec_heat_v2g50_0.125_0.5.h5",
"postnetwork-elec_heat_v2g50_0.125_0.4.h5",
"postnetwork-elec_heat_v2g50_0.125_0.3.h5",
"postnetwork-elec_heat_v2g50_0.125_0.2.h5",
"postnetwork-elec_heat_v2g50_0.125_0.1.h5",
"postnetwork-elec_heat_v2g50_0.125_0.05.h5"]
# List of constraints
constraints = ["40%", "50%", "60%", "70%", "80%", "90%", "95%"]
title= "" #("Electricity Curtailment - " + file[12:-14])
fig = Curtailment(directory=directory, files=filename_CO2, title=title, constraints=constraints, figsize=[10,3], ylim=[-1,24])
SavePlot(fig, path, title=(file[12:-14] + " - Curtailment Elec (CO2)"))
title = "" #("Heat Curtailment - " + file[12:-14])
fig = CurtailmentHeat(directory=directory, files=filename_CO2, title=title, constraints=constraints, figsize=[10,3], ylim=[-1,20], legendLoc="upper left")
SavePlot(fig, path, title=(file[12:-14] + " - Curtailment Heat (CO2)"))
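# --- Hedged sketch (assumption, not the functions_file implementation) ---
# Curtailment() above is imported from functions_file. As an illustration only, a
# curtailment share per variable-renewable carrier can be derived from a solved PyPSA
# network by comparing dispatched energy with the energy that was available from the
# optimised capacities (carrier names such as "solar" are assumptions):
def curtailment_share_sketch(network, carrier="solar"):
    gens = network.generators.index[network.generators.carrier == carrier]
    dispatched = network.generators_t.p[gens].sum().sum()                # MWh dispatched
    available = (network.generators_t.p_max_pu[gens]
                 * network.generators.p_nom_opt[gens]).sum().sum()       # MWh available
    return 100 * (1 - dispatched / available) if available > 0 else 0.0  # curtailed share [%]
# e.g. curtailment_share_sketch(pypsa.Network(directory + filename_CO2[-1]), "solar")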
##############################################################################
##############################################################################
################################### MISMATCH #################################
##############################################################################
##############################################################################
# ------- Electricity produced by technology (Elec CO2 and Trans) -----------#
# Path to save files
path = figurePath + "Mismatch\\"
# Name of file (must be in correct folder location)
filenames = ["postnetwork-elec_heat_v2g50_0.125_0.6.h5",
"postnetwork-elec_heat_v2g50_0.125_0.5.h5",
"postnetwork-elec_heat_v2g50_0.125_0.4.h5",
"postnetwork-elec_heat_v2g50_0.125_0.3.h5",
"postnetwork-elec_heat_v2g50_0.125_0.2.h5",
"postnetwork-elec_heat_v2g50_0.125_0.1.h5",
"postnetwork-elec_heat_v2g50_0.125_0.05.h5"]
# List of constraints
constraints = ["40%", "50%", "60%", "70%", "80%", "90%", "95%"]
fig = ElecProductionOvernight(directory=directory, filenames=filenames, constraints=constraints, figsize=[10,4])
SavePlot(fig, path, title=(file[12:-14] + " - total elec generation (CO2)"))
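# --- Hedged sketch (assumption, not the functions_file implementation) ---
# ElecProductionOvernight() is imported from functions_file. A minimal sketch of the
# underlying aggregation: sum each generator's hourly dispatch over the year and group
# it by carrier (valid for hourly snapshots with unit weighting):
def yearly_generation_per_carrier_sketch(network):
    energyPerGenerator = network.generators_t.p.sum(axis=0)                    # MWh per generator
    return energyPerGenerator.groupby(network.generators.carrier).sum() / 1e6  # TWh per carrier
# e.g. yearly_generation_per_carrier_sketch(pypsa.Network(directory + filenames[-1]))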
# ------------------ Total Installed Heating Capacity (CO2) -----------------#
# Path to save files
path = figurePath + "Mismatch\\"
fig = OvernightHeatCapacityInstalled(path=directory, filenames=filenames, constraints=constraints)
SavePlot(fig, path, title=(file[12:-14] + " - Total Heat Cap Inst (CO2)"))
for files in filenames[-1:]:
# ------------------ Map Capacity Plots (Elec + Heat +Transport) ------------------#
# Path to save files
path = figurePath + "Mismatch\\Map Capacity\\"
# --- Elec ---
# Import network
network = pypsa.Network(directory+files)
fig1, fig2, fig3 = MapCapacityOriginal(network, files, ncol=3)
SavePlot(fig1, path, title=(files[12:-3] + " - Map Capacity Elec Generator"))
SavePlot(fig2, path, title=(files[12:-3] + " - Map Capacity Elec Storage Energy"))
SavePlot(fig3, path, title=(files[12:-3] + " - Map Capacity Elec Storage Power"))
# --- Heat ---
# Import network
network = pypsa.Network(directory+files)
fig4, fig5, fig6 = MapCapacityHeat(network, files, ncol=2)
SavePlot(fig4, path, title=(files[12:-3] + " - Map Capacity Heat Generator"))
SavePlot(fig5, path, title=(files[12:-3] + " - Map Capacity Heat Storage Energy"))
SavePlot(fig6, path, title=(files[12:-3] + " - Map Capacity Heat Storage Power"))
# -------------------- Map Energy Plot (Elec + Heat + Transport) -------------------#
# Path for saving file
path = figurePath + "Mismatch\\Map Energy Distribution\\"
# Import network
network = pypsa.Network(directory+files)
# --- Elec ---
figElec = MapCapacityElectricityEnergy(network, files)
SavePlot(figElec, path, title=(files[12:-3] + " - Elec Energy Production"))
# --- Heat ---
# Heating energy
figHeat = MapCapacityHeatEnergy(network, files)
SavePlot(figHeat, path, title=(files[12:-3] + " - Heat Energy Production"))
#%%
# --------------------- Map PC Plot (Elec + Heat + Transport) ----------------------#
# Path to save plots
path = figurePath + "Mismatch\\Map PC\\"
# Import network
network = pypsa.Network(directory+file)
# Get the names of the data
dataNames = network.buses.index.str.slice(0,2).unique()
# Get time stamps
timeIndex = network.loads_t.p_set.index
# --- Elec ---
# Electricity load for each country
loadElec = network.loads_t.p_set[dataNames]
# Solar PV generation
generationSolar = network.generators_t.p[dataNames + " solar"]
generationSolar.columns = generationSolar.columns.str.slice(0,2)
# Onshore wind generation
generationOnwind = network.generators_t.p[[country for country in network.generators_t.p.columns if "onwind" in country]].groupby(network.generators.bus.str.slice(0,2),axis=1).sum()
# Offshore wind generation
# Because offwind exists for only 21 countries, additional steps are needed to make it an 8760 x 30 matrix
# Create empty array of 8760 x 30, add the offwind generation and remove 'NaN' values.
generationOffwind = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
generationOffwind += network.generators_t.p[[country for country in network.generators_t.p.columns if "offwind" in country]].groupby(network.generators.bus.str.slice(0,2),axis=1).sum()
generationOffwind = generationOffwind.replace(np.nan,0)
# RoR generations
# Because RoR exists for only 27 countries, additional steps are needed to make it an 8760 x 30 matrix
# Create empty array of 8760 x 30, add the RoR generation and remove 'NaN' values.
generationRoR = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
generationRoR += network.generators_t.p[[country for country in network.generators_t.p.columns if "ror" in country]].groupby(network.generators.bus.str.slice(0,2),axis=1).sum()
generationRoR = generationRoR.replace(np.nan,0)
# Combined generation for electricity
generationElec = generationSolar + generationOnwind + generationOffwind + generationRoR
# Mismatch electricity
mismatchElec = generationElec - loadElec
# PCA on mismatch for electricity
eigenValuesElec, eigenVectorsElec, varianceExplainedElec, normConstElec, TElec = PCA(mismatchElec)
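# --- Hedged sketch of the PCA step (assumption; PCA() itself comes from functions_file) ---
# One common convention: centre the mismatch, normalise by a constant, eigen-decompose the
# 30 x 30 covariance matrix and project the time series onto the eigenvectors. The
# normalisation constant used here (mean absolute mismatch) is an illustrative choice only.
def PCA_sketch(mismatch):
    X = mismatch.values                               # 8760 x 30 mismatch matrix
    normConst = np.mean(np.abs(X))                    # normalisation constant (one possible choice)
    X_cent = (X - X.mean(axis=0)) / normConst         # centred, normalised mismatch
    eigVal, eigVec = np.linalg.eigh(np.cov(X_cent, rowvar=False))  # 30 x 30 eigen-decomposition
    order = np.argsort(eigVal)[::-1]                  # sort PCs by decreasing variance
    eigVal, eigVec = eigVal[order], eigVec[:, order]
    varianceExplained = eigVal / eigVal.sum() * 100   # variance explained per PC [%]
    T = X_cent @ eigVec                               # amplitude time series a_k(t), 8760 x 30
    return eigVal, eigVec, varianceExplained, normConst, T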
# Plot map PC for mismatch electricity
titlePlotElec = "Mismatch for electricity only"
for i in np.arange(6):
fig = MAP(eigenVectorsElec, eigenValuesElec, dataNames, (i + 1)) #, titlePlotElec, titleFileName)
title = (file[12:-3] + " - Map PC Elec Mismatch (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# --- Heat ---
# Heat load for each country
loadHeat = network.loads_t.p_set[[country for country in network.loads_t.p_set.columns if "heat" in country]].groupby(network.loads.bus.str.slice(0,2),axis=1).sum()
# Heat generators for each country (solar collectors)
# Because some countries have urban collectors while others have central collectors,
# additional steps are needed to make it an 8760 x 30 matrix
# Create empty array of 8760 x 30, add the heat generators and remove 'NaN' values.
generationHeatSolar = network.generators_t.p[dataNames + " solar thermal collector"]
generationHeatSolar.columns = generationHeatSolar.columns.str.slice(0,2)
# Urban heat
generationHeatUrbanSingle = network.generators_t.p[[country for country in network.generators_t.p.columns if "urban" in country]]
generationHeatUrbanSingle.columns = generationHeatUrbanSingle.columns.str.slice(0,2)
generationHeatUrban = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
generationHeatUrban += generationHeatUrbanSingle
generationHeatUrban = generationHeatUrban.replace(np.nan,0)
# Central heat
generationHeatCentralSingle = network.generators_t.p[[country for country in network.generators_t.p.columns if "central" in country]]
generationHeatCentralSingle.columns = generationHeatCentralSingle.columns.str.slice(0,2)
generationHeatCentral = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
generationHeatCentral += generationHeatCentralSingle
generationHeatCentral = generationHeatCentral.replace(np.nan,0)
# Combined generation for heat
generationHeat = generationHeatSolar + generationHeatUrban + generationHeatCentral
# Mismatch heat
mismatchHeat = generationHeat - loadHeat
# PCA on mismatch for Heating
eigenValuesHeat, eigenVectorsHeat, varianceExplainedHeat, normConstHeat, THeat = PCA(mismatchHeat)
# Plot map PC for mismatch heat
titlePlotHeat = "Mismatch for heating only"
for i in np.arange(6):
fig = MAP(eigenVectorsHeat, eigenValuesHeat, dataNames, (i + 1)) #, titlePlotHeat, titleFileName)
title = (file[12:-3] + " - Map PC Heat Mismatch (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# --- Transport ---
# Transport load for each country
loadTransport = network.loads_t.p_set[dataNames + ' transport']
# Transport generation (there are no transport generators, so a zero matrix is used and the mismatch equals minus the load)
generationTransport = pd.DataFrame(data=np.zeros([8760,30]), index=timeIndex, columns=(dataNames + ' transport'))
# Mismatch transport
mismatchTransport = generationTransport - loadTransport
# PCA on mismatch for transport
eigenValuesTrans, eigenVectorsTrans, varianceExplainedTrans, normConstTrans, TTrans = PCA(mismatchTransport)
# Plot map PC for mismatch transport
titlePlotTrans = "Mismatch for transport only"
for i in np.arange(6):
fig = MAP(eigenVectorsTrans, eigenValuesTrans, dataNames, (i + 1)) #, titlePlotTrans, titleFileName)
title = (file[12:-3] + " - Map PC v2g Mismatch (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# ----------------------- FFT Plot (Elec + Heat + Transport) -----------------------#
# Path to save FFT plots
path = figurePath + "Mismatch\\FFT\\"
# --- Elec ---
file_name = "Electricity mismatch - " + file
for i in np.arange(6):
fig = FFTPlot(TElec.T, varianceExplainedElec, title=file_name, PC_NO = (i+1))
title = (file[12:-3] + " - FFT Elec Mismatch (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# --- Heat ---
file_name = "Heating mismatch - " + file
for i in np.arange(6):
fig = FFTPlot(THeat.T, varianceExplainedHeat, title=file_name, PC_NO = (i+1))
title = (file[12:-3] + " - FFT Heat Mismatch (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# --- Transport ---
file_name = "Transport mismatch - " + file
for i in np.arange(6):
fig = FFTPlot(TTrans.T, varianceExplainedTrans, title=file_name, PC_NO = (i+1))
title = (file[12:-3] + " - FFT v2g Mismatch (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
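# --- Hedged sketch of the spectral analysis (assumption; FFTPlot comes from functions_file) ---
# The power spectrum of one principal-component amplitude can be obtained with a plain FFT
# of the hourly series; peaks typically appear at daily, half-year and yearly periods.
# T is assumed to have shape (hours, components), as in the PCA sketch above.
def power_spectrum_sketch(T, PC_NO):
    a_k = np.asarray(T)[:, PC_NO - 1]                 # hourly amplitude of the chosen PC
    power = np.abs(np.fft.rfft(a_k))**2               # one-sided power spectrum
    freq = np.fft.rfftfreq(len(a_k), d=1.0)           # frequency in 1/hours (hourly sampling)
    return freq, power
# e.g. freq, power = power_spectrum_sketch(TElec, 1); periods_in_hours = 1 / freq[1:]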
# -------------------- Seasonal Plot (Elec + Heat + Transport) ---------------------#
# Path to save seasonal plots
path = figurePath + "Mismatch\\Seasonal\\"
# --- Elec ---
file_name = "Electricity mismatch - " + file
for i in np.arange(6):
fig = seasonPlot(TElec, timeIndex, title=file_name, PC_NO=(i+1), PC_amount=6)
title = (file[12:-3] + " - Seasonal Plot Elec Mismatch (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# --- Heat ---
file_name = "Heating mismatch - " + file
for i in np.arange(6):
fig = seasonPlot(THeat, timeIndex, title=file_name, PC_NO=(i+1), PC_amount=6)
title = (file[12:-3] + " - Seasonal Plot Heat Mismatch (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# --- Transport ---
file_name = "Transport mismatch - " + file
for i in np.arange(6):
fig = seasonPlot(TTrans, timeIndex, title=file_name, PC_NO=(i+1), PC_amount=6)
title = (file[12:-3] + " - Seasonal Plot v2g Mismatch (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
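# --- Hedged sketch of the seasonal aggregation (assumption; seasonPlot comes from functions_file) ---
# The seasonal behaviour of a principal component can be summarised by averaging its hourly
# amplitude per day of the year, and its diurnal behaviour by averaging per hour of the day:
def seasonal_profile_sketch(T, timeIndex, PC_NO):
    a_k = pd.Series(np.asarray(T)[:, PC_NO - 1], index=timeIndex)  # amplitude of the chosen PC
    daily = a_k.groupby(timeIndex.dayofyear).mean()                # 365-value seasonal profile
    diurnal = a_k.groupby(timeIndex.hour).mean()                   # 24-value average day
    return daily, diurnal
# e.g. dailyElec1, diurnalElec1 = seasonal_profile_sketch(TElec, timeIndex, 1)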
# -------------------- FFT + Seasonal Plot (Elec) ---------------------#
# Path to save seasonal plots
path = figurePath + "Mismatch\\Timeseries\\"
# --- Elec ---
file_name = "Electricity mismatch - " + file
for i in np.arange(6):
fig = FFTseasonPlot(TElec, timeIndex, varianceExplainedElec, PC_NO=(i+1), PC_amount=6,dpi=200)
title = (file[12:-3] + " - Timeseries Plot Elec Mismatch (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# --- Heat ---
file_name = "Heating mismatch - " + file
for i in np.arange(6):
fig = FFTseasonPlot(THeat, timeIndex, varianceExplainedHeat, PC_NO=(i+1), PC_amount=6,dpi=200)
title = (file[12:-3] + " - Timeseries Plot Heat Mismatch (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# --- Transport ---
file_name = "Transport mismatch - " + file
for i in np.arange(6):
fig = FFTseasonPlot(TTrans, timeIndex, varianceExplainedTrans, PC_NO=(i+1), PC_amount=6,dpi=200)
title = (file[12:-3] + " - Timeseries Plot v2g Mismatch (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# ----------------- Contribution plot (Elec + Heat + Transport) ------------------- #
# Path to save contribution plots
path = figurePath + "Mismatch\\Contribution\\"
# --- Elec ---
# Contribution
dircConElec = Contribution(network, "elec")
lambdaCollectedConElec = ConValueGenerator(normConstElec, dircConElec, eigenVectorsElec)
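# --- Hedged sketch of a contribution projection (assumption; Contribution/ConValueGenerator come from functions_file) ---
# The idea is to project each technology's centred per-country time series onto eigenvector k,
# so the PC amplitude splits into per-technology contributions. A minimal, illustrative version:
def contribution_sketch(component, eigenVectors, normConst, PC_NO):
    # component: 8760 x 30 DataFrame for one technology (e.g. solar generation per country)
    X = (component.values - component.values.mean(axis=0)) / normConst
    return X @ eigenVectors[:, PC_NO - 1]             # contribution time series of this technology to a_k(t)
# e.g. solarContributionPC1 = contribution_sketch(generationSolar, eigenVectorsElec, normConstElec, 1)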
for i in range(6):
fig = ConPlot(eigenValuesElec,lambdaCollectedConElec,i+1,10,suptitle=("Electricity Contribution - " + file[12:-3]),dpi=200)
title = (file[12:-3] + " - Contribution Plot Elec (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# --- Heat ---
# Contribution
dircConHeat = Contribution(network, "heat")
lambdaCollectedConHeat = ConValueGenerator(normConstHeat, dircConHeat, eigenVectorsHeat)
for i in range(6):
fig = ConPlot(eigenValuesHeat,lambdaCollectedConHeat,i+1,10,suptitle=("Heating Contribution - " + file[12:-3]),dpi=200)
title = (file[12:-3] + " - Contribution Plot Heat (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# ------------------- Response plot (Elec + Heat + Transport) -------------------- #
# Path to save contribution plots
path = figurePath + "Mismatch\\Response\\"
# --- Elec ---
# Response
dircResElec = ElecResponse(network,True)
lambdaCollectedResElec = ConValueGenerator(normConstElec, dircResElec, eigenVectorsElec)
for i in range(6):
fig = ConPlot(eigenValuesElec,lambdaCollectedResElec,i+1,10,suptitle=("Electricity Response - " + file[12:-3]),dpi=200)
title = (file[12:-3] + " - Response Plot Elec (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# --- Heat ---
# Response
dircResHeat = HeatResponse(network,True)
lambdaCollectedResHeat = ConValueGenerator(normConstHeat, dircResHeat, eigenVectorsHeat)
for i in range(6):
fig = ConPlot(eigenValuesHeat,lambdaCollectedResHeat,i+1,10,suptitle=("Heating Response - " + file[12:-3]),dpi=100)
title = (file[12:-3] + " - Response Plot Heat (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# ------------------- Covariance plot (Elec + Heat + Transport) -------------------- #
# Path to save contribution plots
path = figurePath + "Mismatch\\Covariance\\"
# --- Elec ---
# Covariance
covMatrixElec = CovValueGenerator(dircConElec, dircResElec , True, normConstElec,eigenVectorsElec).T
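# --- Hedged toy example of why covariance terms appear (assumption about the decomposition) ---
# When two projected series are combined (e.g. a generation contribution minus a balancing
# response), the variance of the combination contains cross-covariance terms,
# var(x - y) = var(x) + var(y) - 2*cov(x, y); such cross terms are what the covariance
# matrix above collects. A small self-contained numeric check with toy data:
_rng = np.random.default_rng(0)
_toy_x = _rng.normal(size=1000)                            # toy projected contribution
_toy_y = 0.5 * _toy_x + _rng.normal(scale=0.5, size=1000)  # toy projected response, correlated with _toy_x
_lhs = np.var(_toy_x - _toy_y)
_rhs = np.var(_toy_x) + np.var(_toy_y) - 2 * np.cov(_toy_x, _toy_y, bias=True)[0, 1]
assert math.isclose(_lhs, _rhs, rel_tol=1e-9)              # the identity holds numerically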
for i in range(6):
fig = ConPlot(eigenValuesElec,covMatrixElec,i+1,10,suptitle=("Electricity Covariance - " + file[12:-3]),dpi=200)
title = (file[12:-3] + " - Covariance Plot Elec (lambda " + str(i+1) + ")")
SavePlot(fig, path, title)
# --- Heat ---
# Covariance
covMatrixHeat = CovValueGenerator(dircConHeat, dircResHeat , True, normConstHeat, eigenVectorsHeat).T
for i in range(6):
    fig = ConPlot(eigenValuesHeat,covMatrixHeat,i+1,10,suptitle=("Heating Covariance - " + file[12:-3]),dpi=200)
    title = (file[12:-3] + " - Covariance Plot Heat (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# ------------------- Combined Projection plot (Elec + Heat + Transport) -------------------- #
# Path to save projection plots
path = figurePath + "Mismatch\\Projection\\"
# --- Elec ---
for i in range(6):
    fig = CombConPlot(eigenValuesElec, lambdaCollectedConElec, lambdaCollectedResElec, covMatrixElec, i+1, depth = 6)#, suptitle=("Electricity Projection - " + file[12:-3]),dpi=200)
    title = (file[12:-3] + " - Projection Plot Elec (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# --- Heat ---
for i in range(6):
    fig = CombConPlot(eigenValuesHeat, lambdaCollectedConHeat, lambdaCollectedResHeat, covMatrixHeat, i+1, depth = 5)#, suptitle=("Heating Projection - " + file[12:-3]),dpi=200)
    title = (file[12:-3] + " - Projection Plot Heat (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# ------------------- PC1 and PC2 combined plot (Elec + Heat + Transport) -------------------- #
# Path to save combined plots
path = figurePath + "Mismatch\\Combined Plot\\"
# --- Elec ---
fig = PC1and2Plotter(TElec, timeIndex, [1,2], eigenValuesElec, lambdaCollectedConElec, lambdaCollectedResElec, covMatrixElec,PCType="withProjection")#,suptitle=("Electricity Mismatch - " + file[12:-3]),dpi=200,depth=3)
title = (file[12:-3] + " - Combined Plot Elec (lambda 1 & 2)")
SavePlot(fig, path, title)
fig = PC1and2Plotter(TElec, timeIndex, [1,2], eigenValuesElec, lambdaCollectedConElec, lambdaCollectedResElec, covMatrixElec,PCType="onlyProjection")#,suptitle=("Electricity Mismatch - " + file[12:-3]),dpi=200,depth=3)
title = (file[12:-3] + " - Comb Plot Elec only Proj (lambda 1 & 2)")
SavePlot(fig, path, title)
# --- Heat ---
fig = PC1and2Plotter(THeat, timeIndex, [1,2], eigenValuesHeat, lambdaCollectedConHeat, lambdaCollectedResHeat, covMatrixHeat,PCType="withProjection")#,suptitle=("Heating Mismatch - " + file[12:-3]),dpi=200,depth=3)
title = (file[12:-3] + " - Combined Plot Heat (lambda 1 & 2)")
SavePlot(fig, path, title)
fig = PC1and2Plotter(THeat, timeIndex, [1,2], eigenValuesHeat, lambdaCollectedConHeat, lambdaCollectedResHeat, covMatrixHeat,PCType="onlyProjection")#,suptitle=("Heating Mismatch - " + file[12:-3]),dpi=200,depth=3)
title = (file[12:-3] + " - Comb Plot Heat only Proj (lambda 1 & 2)")
SavePlot(fig, path, title)
# --- Transport ---
fig = PC1and2Plotter(TTrans, timeIndex, [1,2], eigenValuesTrans, lambdaCollectedConElec, lambdaCollectedResElec, covMatrixElec,PCType="withoutProjection")#,suptitle=("Transport Mismatch - " + file[12:-3]),dpi=200,depth=3)
title = (file[12:-3] + " - Combined Plot v2g (lambda 1 & 2)")
SavePlot(fig, path, title)
#%%
# ---------------------- Bar plot CO2 constraint --------------------------- #
# Path to save bar plots
path = figurePath + "Mismatch\\Bar\\"
# Name of file (must be in correct folder location)
filename_CO2 = ["postnetwork-elec_heat_v2g50_0.125_0.6.h5",
"postnetwork-elec_heat_v2g50_0.125_0.5.h5",
"postnetwork-elec_heat_v2g50_0.125_0.4.h5",
"postnetwork-elec_heat_v2g50_0.125_0.3.h5",
"postnetwork-elec_heat_v2g50_0.125_0.2.h5",
"postnetwork-elec_heat_v2g50_0.125_0.1.h5",
"postnetwork-elec_heat_v2g50_0.125_0.05.h5"]
# Variable to store mismatch PC components for each network
barMatrixCO2Elec = []
barMatrixCO2Heat = []
barMatrixCO2Trans = []
for file in filename_CO2:
    # --------------------------- Electricity -------------------------------#
    # Network
    network = pypsa.Network(directory + file)
    # Get the names of the data
    dataNames = network.buses.index.str.slice(0,2).unique()
    # Get time stamps
    timeIndex = network.loads_t.p_set.index
    # Electricity load for each country
    loadElec = network.loads_t.p_set[dataNames]
    # Solar PV generation
    generationSolar = network.generators_t.p[dataNames + " solar"]
    generationSolar.columns = generationSolar.columns.str.slice(0,2)
    # Onshore wind generation
    generationOnwind = network.generators_t.p[[country for country in network.generators_t.p.columns if "onwind" in country]].groupby(network.generators.bus.str.slice(0,2),axis=1).sum()
    # Offshore wind generation
    # Because offwind is only for 21 countries, additional methods have to be implemented to make it an 8760 x 30 matrix
    # Create an empty array of 8760 x 30, add the offwind generation and remove 'NaN' values.
    generationOffwind = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
    generationOffwind += network.generators_t.p[[country for country in network.generators_t.p.columns if "offwind" in country]].groupby(network.generators.bus.str.slice(0,2),axis=1).sum()
    generationOffwind = generationOffwind.replace(np.nan,0)
    # RoR generations
    # Because RoR is only for 27 countries, additional methods have to be implemented to make it an 8760 x 30 matrix
    # Create an empty array of 8760 x 30, add the RoR generation and remove 'NaN' values.
    generationRoR = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
    generationRoR += network.generators_t.p[[country for country in network.generators_t.p.columns if "ror" in country]].groupby(network.generators.bus.str.slice(0,2),axis=1).sum()
    generationRoR = generationRoR.replace(np.nan,0)
    # Combined generation for electricity
    generationElec = generationSolar + generationOnwind + generationOffwind + generationRoR
    # Mismatch electricity
    mismatchElec = generationElec - loadElec
    # PCA on mismatch for electricity
    eigenValuesElec, eigenVectorsElec, varianceExplainedElec, normConstElec, TElec = PCA(mismatchElec)
    # Append value to matrix
    barMatrixCO2Elec.append(varianceExplainedElec)
    # --------------------------- Heat -------------------------------#
    # Heat load for each country
    loadHeat = network.loads_t.p_set[[country for country in network.loads_t.p_set.columns if "heat" in country]].groupby(network.loads.bus.str.slice(0,2),axis=1).sum()
    # Heat generators for each country (solar collectors)
    # Because some countries have urban collectors, while others have central collectors,
    # additional methods have to be implemented to make it an 8760 x 30 matrix
    # Create an empty array of 8760 x 30, add the heat generators and remove 'NaN' values.
    generationHeatSolar = network.generators_t.p[dataNames + " solar thermal collector"]
    generationHeatSolar.columns = generationHeatSolar.columns.str.slice(0,2)
    # Urban heat
    generationHeatUrbanSingle = network.generators_t.p[[country for country in network.generators_t.p.columns if "urban" in country]]
    generationHeatUrbanSingle.columns = generationHeatUrbanSingle.columns.str.slice(0,2)
    generationHeatUrban = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
    generationHeatUrban += generationHeatUrbanSingle
    generationHeatUrban = generationHeatUrban.replace(np.nan,0)
    # Central heat
    generationHeatCentralSingle = network.generators_t.p[[country for country in network.generators_t.p.columns if "central" in country]]
    generationHeatCentralSingle.columns = generationHeatCentralSingle.columns.str.slice(0,2)
    generationHeatCentral = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
    generationHeatCentral += generationHeatCentralSingle
    generationHeatCentral = generationHeatCentral.replace(np.nan,0)
    # Combine generation for heat
    generationHeat = generationHeatSolar + generationHeatUrban + generationHeatCentral
    # Mismatch heat
    mismatchHeat = generationHeat - loadHeat
    # PCA on mismatch for heat
    eigenValuesHeat, eigenVectorsHeat, varianceExplainedHeat, normConstHeat, THeat = PCA(mismatchHeat)
    # Append value to matrix
    barMatrixCO2Heat.append(varianceExplainedHeat)
    # ----------------------------- Transport --------------------------------#
    # Transport load for each country
    loadTransport = network.loads_t.p_set[dataNames + ' transport']
    # Generation transport
    generationTransport = pd.DataFrame(data=np.zeros([8760,30]), index=timeIndex, columns=(dataNames + ' transport'))
    # Mismatch transport
    mismatchTransport = generationTransport - loadTransport
    # PCA on mismatch for transport
    eigenValuesTrans, eigenVectorsTrans, varianceExplainedTrans, normConstTrans, TTrans = PCA(mismatchTransport)
    # Append value to matrix
    barMatrixCO2Trans.append(varianceExplainedTrans)
constraints = ["40%", "50%", "60%", "70%", "80%", "90%", "95%"]
title = "Number of PC describing variance of network as a function of $CO_{2}$ constraint"
xlabel = "$CO_{2}$ constraint"
suptitleElec = ("Electricity Mismatch - " + file[12:-14])
fig = BAR(barMatrixCO2Elec, 10, filename_CO2, constraints, title, xlabel, suptitleElec)
titleBarCO2Elec = (file[12:-14] + " - Bar CO2 Elec Mismatch")
SavePlot(fig, path, titleBarCO2Elec)
suptitleHeat = ("Heating Mismatch - " + file[12:-14])
fig = BAR(barMatrixCO2Heat, 10, filename_CO2, constraints, title, xlabel, suptitleHeat)
titleBarCO2Heat = (file[12:-14] + " - Bar CO2 Heat Mismatch")
SavePlot(fig, path, titleBarCO2Heat)
suptitleTrans = ("Transport Mismatch - " + file[12:-14])
fig = BAR(barMatrixCO2Trans, 10, filename_CO2, constraints, title, xlabel, suptitleTrans)
titleBarCO2Trans = (file[12:-14] + " - Bar CO2 v2g Mismatch")
SavePlot(fig, path, titleBarCO2Trans)
# ------------------ Change in contribution and response CO2 ----------------------- #
# Variable to store lambda values
lambdaContributionElec = []
lambdaContributionHeat = []
lambdaResponseElec = []
lambdaResponseHeat = []
lambdaCovarianceElec = []
lambdaCovarianceHeat = []
# Name of file (must be in correct folder location)
filename_CO2 = ["postnetwork-elec_heat_v2g50_0.125_0.6.h5",
"postnetwork-elec_heat_v2g50_0.125_0.5.h5",
"postnetwork-elec_heat_v2g50_0.125_0.4.h5",
"postnetwork-elec_heat_v2g50_0.125_0.3.h5",
"postnetwork-elec_heat_v2g50_0.125_0.2.h5",
"postnetwork-elec_heat_v2g50_0.125_0.1.h5",
"postnetwork-elec_heat_v2g50_0.125_0.05.h5"]
for file in filename_CO2:
    # --------------------------- Electricity -------------------------------#
    # Network
    network = pypsa.Network(directory + file)
    # Get the names of the data
    dataNames = network.buses.index.str.slice(0,2).unique()
    # Get time stamps
    timeIndex = network.loads_t.p_set.index
    # Electricity load for each country
    loadElec = network.loads_t.p_set[dataNames]
    # Solar PV generation
    generationSolar = network.generators_t.p[dataNames + " solar"]
    generationSolar.columns = generationSolar.columns.str.slice(0,2)
    # Onshore wind generation
    generationOnwind = network.generators_t.p[[country for country in network.generators_t.p.columns if "onwind" in country]].groupby(network.generators.bus.str.slice(0,2),axis=1).sum()
    # Offshore wind generation
    # Because offwind is only for 21 countries, additional methods have to be implemented to make it an 8760 x 30 matrix
    # Create an empty array of 8760 x 30, add the offwind generation and remove 'NaN' values.
    generationOffwind = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
    generationOffwind += network.generators_t.p[[country for country in network.generators_t.p.columns if "offwind" in country]].groupby(network.generators.bus.str.slice(0,2),axis=1).sum()
    generationOffwind = generationOffwind.replace(np.nan,0)
    # RoR generations
    # Because RoR is only for 27 countries, additional methods have to be implemented to make it an 8760 x 30 matrix
    # Create an empty array of 8760 x 30, add the RoR generation and remove 'NaN' values.
    generationRoR = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
    generationRoR += network.generators_t.p[[country for country in network.generators_t.p.columns if "ror" in country]].groupby(network.generators.bus.str.slice(0,2),axis=1).sum()
    generationRoR = generationRoR.replace(np.nan,0)
    # Combined generation for electricity
    generationElec = generationSolar + generationOnwind + generationOffwind + generationRoR
    # Mismatch electricity
    mismatchElec = generationElec - loadElec
    # PCA on mismatch for electricity
    eigenValuesElec, eigenVectorsElec, varianceExplainedElec, normConstElec, TElec = PCA(mismatchElec)
    # Contribution Elec
    dircConElec = Contribution(network, "elec")
    lambdaCollected = ConValueGenerator(normConstElec, dircConElec, eigenVectorsElec)
    lambdaContributionElec.append(lambdaCollected)
    # Response Elec
    dircResElec = ElecResponse(network,True)
    lambdaCollected = ConValueGenerator(normConstElec, dircResElec, eigenVectorsElec)
    lambdaResponseElec.append(lambdaCollected)
    # Covariance Elec
    covMatrix = CovValueGenerator(dircConElec, dircResElec , True, normConstElec,eigenVectorsElec)
    lambdaCovarianceElec.append(covMatrix.T)
    # --------------------------- Heat -------------------------------#
    # Heat load for each country
    loadHeat = network.loads_t.p_set[[country for country in network.loads_t.p_set.columns if "heat" in country]].groupby(network.loads.bus.str.slice(0,2),axis=1).sum()
    # Heat generators for each country (solar collectors)
    # Because some countries have urban collectors, while others have central collectors,
    # additional methods have to be implemented to make it an 8760 x 30 matrix
    # Create an empty array of 8760 x 30, add the heat generators and remove 'NaN' values.
    generationHeatSolar = network.generators_t.p[dataNames + " solar thermal collector"]
    generationHeatSolar.columns = generationHeatSolar.columns.str.slice(0,2)
    # Urban heat
    generationHeatUrbanSingle = network.generators_t.p[[country for country in network.generators_t.p.columns if "urban" in country]]
    generationHeatUrbanSingle.columns = generationHeatUrbanSingle.columns.str.slice(0,2)
    generationHeatUrban = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
    generationHeatUrban += generationHeatUrbanSingle
    generationHeatUrban = generationHeatUrban.replace(np.nan,0)
    # Central heat
    generationHeatCentralSingle = network.generators_t.p[[country for country in network.generators_t.p.columns if "central" in country]]
    generationHeatCentralSingle.columns = generationHeatCentralSingle.columns.str.slice(0,2)
    generationHeatCentral = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
    generationHeatCentral += generationHeatCentralSingle
    generationHeatCentral = generationHeatCentral.replace(np.nan,0)
    # Combine generation for heat
    generationHeat = generationHeatSolar + generationHeatUrban + generationHeatCentral
    # Mismatch heat
    mismatchHeat = generationHeat - loadHeat
    # PCA on mismatch for heat
    eigenValuesHeat, eigenVectorsHeat, varianceExplainedHeat, normConstHeat, THeat = PCA(mismatchHeat)
    # Contribution Heat
    dircConHeat = Contribution(network, "heat")
    lambdaCollected = ConValueGenerator(normConstHeat, dircConHeat, eigenVectorsHeat)
    lambdaContributionHeat.append(lambdaCollected)
    # Response Heat
    dircResHeat = HeatResponse(network,True)
    lambdaCollected = ConValueGenerator(normConstHeat, dircResHeat, eigenVectorsHeat)
    lambdaResponseHeat.append(lambdaCollected)
    # Covariance Heat
    covMatrix = CovValueGenerator(dircConHeat, dircResHeat , True, normConstHeat,eigenVectorsHeat)
    lambdaCovarianceHeat.append(covMatrix.T)
# general terms
pathContibution = figurePath + "Mismatch\\Change in Contribution\\"
pathResponse = figurePath + "Mismatch\\Change in Response\\"
pathCovariance = figurePath + "Mismatch\\Change in Covariance\\"
#%%
# Plot change in elec contribution
figtitle = "Change in electricity contribution as a function of CO2 constraint"
fig = ChangeContributionElec(lambdaContributionElec, rotate=True, PC=2) #figtitle
saveTitle = file[12:-14] + " - Change in elec cont (CO2)"
SavePlot(fig, pathContibution, saveTitle)
figtitle = "Change in electricity contribution as a function of CO2 constraint"
fig = ChangeContributionElec(lambdaContributionElec, rotate=False, PC=6) #figtitle
saveTitle = file[12:-14] + " - Change in elec cont app (CO2)"
SavePlot(fig, pathContibution, saveTitle)
# Plot change in heat contribution
figtitle = "Change in heating contribution as a function of CO2 constraint"
fig = ChangeContributionHeat(lambdaContributionHeat, rotate=True, PC=2) #figtitle
saveTitle = file[12:-14] + " - Change in heat cont (CO2)"
SavePlot(fig, pathContibution, saveTitle)
figtitle = "Change in heating contribution as a function of CO2 constraint"
fig = ChangeContributionHeat(lambdaContributionHeat, rotate=False, PC=6) #figtitle
saveTitle = file[12:-14] + " - Change in heat cont app (CO2)"
SavePlot(fig, pathContibution, saveTitle)
# Plot change in elec response
figtitle = "Change in electricity response as a function of CO2 constraint"
fig = ChangeResponseElec(lambdaResponseElec, rotate=True, PC=2) #figtitle
saveTitle = file[12:-14] + " - Change in elec response (CO2)"
SavePlot(fig, pathResponse, saveTitle)
figtitle = "Change in electricity response as a function of CO2 constraint"
fig = ChangeResponseElec(lambdaResponseElec, rotate=False, PC=6) #figtitle
saveTitle = file[12:-14] + " - Change in elec response app (CO2)"
SavePlot(fig, pathResponse, saveTitle)
# Plot change in heat response
figtitle = "Change in heating response as a function of CO2 constraint"
fig = ChangeResponseHeat(lambdaResponseHeat, rotate=True, PC=2) #figtitle
saveTitle = file[12:-14] + " - Change in heat response (CO2)"
SavePlot(fig, pathResponse, saveTitle)
figtitle = "Change in heating response as a function of CO2 constraint"
fig = ChangeResponseHeat(lambdaResponseHeat, rotate=False, PC=6) #figtitle
saveTitle = file[12:-14] + " - Change in heat response app (CO2)"
SavePlot(fig, pathResponse, saveTitle)
# Plot change in elec covariance response
figtitle = "Change in electricity covariance response as a function of CO2 constraint"
fig = ChangeResponseCov(lambdaResponseElec, rotate=True, PC=2) #figtitle
saveTitle = file[12:-14] + " - Change in elec cov response (CO2)"
SavePlot(fig, pathResponse, saveTitle)
figtitle = "Change in electricity covariance response as a function of CO2 constraint"
fig = ChangeResponseCov(lambdaResponseElec, rotate=False, PC=6) #figtitle
saveTitle = file[12:-14] + " - Change in elec cov response app (CO2)"
SavePlot(fig, pathResponse, saveTitle)
# Plot change in heat covariance response
figtitle = "Change in heating covariance response as a function of CO2 constraint"
fig = ChangeResponseCov(lambdaResponseHeat, rotate=True, PC=2) #figtitle
saveTitle = file[12:-14] + " - Change in heat cov response (CO2)"
SavePlot(fig, pathResponse, saveTitle)
figtitle = "Change in heating covariance response as a function of CO2 constraint"
fig = ChangeResponseCov(lambdaResponseHeat, rotate=False, PC=6) #figtitle
saveTitle = file[12:-14] + " - Change in heat cov response app (CO2)"
SavePlot(fig, pathResponse, saveTitle)
# Plot change in elec covariance
figtitle = "Change in electricity covariance between mismatch and response as a function of CO2 constraint"
fig = ChangeCovariance(lambdaCovarianceElec, collectTerms=True, rotate=True, PC=2) #figtitle
saveTitle = file[12:-14] + " - Change in elec covariance (CO2)"
SavePlot(fig, pathCovariance, saveTitle)
figtitle = "Change in electricity covariance between mismatch and response as a function of CO2 constraint"
fig = ChangeCovariance(lambdaCovarianceElec, collectTerms=True, rotate=False, PC=6) #figtitle
saveTitle = file[12:-14] + " - Change in elec covariance app (CO2)"
SavePlot(fig, pathCovariance, saveTitle)
# Plot change in heat covariance
figtitle = "Change in heating covariance between mismatch and response as a function of CO2 constraint"
fig = ChangeCovariance(lambdaCovarianceHeat, collectTerms=True, rotate=True, PC=2) #figtitle
saveTitle = file[12:-14] + " - Change in heat covariance (CO2)"
SavePlot(fig, pathCovariance, saveTitle)
figtitle = "Change in heating covariance between mismatch and response as a function of CO2 constraint"
fig = ChangeCovariance(lambdaCovarianceHeat, collectTerms=True, rotate=False, PC=6) #figtitle
saveTitle = file[12:-14] + " - Change in heat covariance app (CO2)"
SavePlot(fig, pathCovariance, saveTitle)
#%%
##############################################################################
##############################################################################
################################ NODAL PRICE #################################
##############################################################################
##############################################################################
# File name
file = "postnetwork-elec_heat_v2g50_0.125_0.05.h5"
# Import network
network = pypsa.Network(directory+file)
# Get the names of the data
dataNames = network.buses.index.str.slice(0,2).unique()
# ----------------------- Map PC Plot (Elec + Heat + Transport) --------------------#
# Path to save plots
path = figurePath + "Nodal Price\\Map PC\\"
# --- Elec ---
# Prices for electricity for each country (restricted to 1000 €/MWh)
priceElec = FilterPrice(network.buses_t.marginal_price[dataNames],465)
# PCA on nodal prices for electricity
eigenValuesElec, eigenVectorsElec, varianceExplainedElec, normConstElec, TElec = PCA(priceElec)
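# NOTE: PCA() is a project-specific helper; for an (hours x countries) matrix X
# it is assumed to return the eigenvalues/eigenvectors of the covariance of the
# normalised data, the variance explained per component, the normalisation
# constant and the principal-component time series T. A rough NumPy sketch of
# such a decomposition (names hypothetical, not the exact implementation):
# def pca_sketch(X, norm_const):
#     Xn = (X - X.mean(axis=0)) / norm_const          # centre and normalise
#     C = np.cov(Xn.T)                                # countries x countries covariance
#     eig_val, eig_vec = np.linalg.eigh(C)            # symmetric matrix -> eigh
#     order = np.argsort(eig_val)[::-1]               # sort components by eigenvalue
#     eig_val, eig_vec = eig_val[order], eig_vec[:, order]
#     var_explained = 100 * eig_val / eig_val.sum()   # percent variance per PC
#     T = Xn.values @ eig_vec                         # PC time series (hours x PCs)
#     return eig_val, eig_vec, var_explained, T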
# Plot map PC for electricity nodal prices
titlePlotElec = "Nodal price for electricity only"
for i in np.arange(6):
    fig = MAP(eigenVectorsElec, eigenValuesElec, dataNames, (i + 1)) #, titlePlotElec, titleFileName)
    title = (file[12:-3] + " - Map PC Elec NP (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# --- Heat ---
# Prices for heat for each country (restricted to 1000 €/MWh)
priceHeat = network.buses_t.marginal_price[[x for x in network.buses_t.marginal_price.columns if ("heat" in x) or ("cooling" in x)]]
priceHeat = priceHeat.groupby(priceHeat.columns.str.slice(0,2), axis=1).sum()
priceHeat.columns = priceHeat.columns + " heat"
priceHeat = FilterPrice(priceHeat,465)
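# NOTE: the heat-related buses of a country ("heat", "urban heat", "cooling",
# ...) are aggregated by summing their marginal prices per country code before
# filtering; summing (rather than averaging) the bus prices per country is the
# aggregation choice used throughout this script.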
# PCA on nodal prices for heating
eigenValuesHeat, eigenVectorsHeat, varianceExplainedHeat, normConstHeat, THeat = PCA(priceHeat)
# Plot map PC for heating nodal prices
titlePlotHeat = "Nodal price for heating only"
for i in np.arange(6):
    fig = MAP(eigenVectorsHeat, eigenValuesHeat, dataNames, (i + 1)) #, titlePlotHeat, titleFileName)
    title = (file[12:-3] + " - Map PC Heat NP (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# --- Transport ---
# Prices for transport for each country (restricted to 1000 €/MWh)
priceTrans = FilterPrice(network.buses_t.marginal_price[dataNames + " EV battery"],465)
# PCA on nodal prices for transport
eigenValuesTrans, eigenVectorsTrans, varianceExplainedTrans, normConstTrans, TTrans = PCA(priceTrans)
# Plot map PC for transport nodal prices
titlePlotTrans = "Nodal price for transport only"
for i in np.arange(6):
    fig = MAP(eigenVectorsTrans, eigenValuesTrans, dataNames, (i + 1)) #, titlePlotTrans, titleFileName)
    title = (file[12:-3] + " - Map PC v2g NP (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# ------------------------ FFT Plot (Elec + Heat + Transport) -----------------------#
# Path to save FFT plots
path = figurePath + "Nodal Price\\FFT\\"
# --- Elec ---
file_name = "Electricity Nodal Price - " + file
for i in np.arange(6):
    fig = FFTPlot(TElec.T, varianceExplainedElec, title=file_name, PC_NO = (i+1))
    title = (file[12:-3] + " - FFT Elec NP (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# --- Heat ---
file_name = "Heating Nodal Price - " + file
for i in np.arange(6):
    fig = FFTPlot(THeat.T, varianceExplainedHeat, title=file_name, PC_NO = (i+1))
    title = (file[12:-3] + " - FFT Heat NP (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# --- Transport ---
file_name = "Transport Nodal Price - " + file
for i in np.arange(6):
    fig = FFTPlot(TTrans.T, varianceExplainedTrans, title=file_name, PC_NO = (i+1))
    title = (file[12:-3] + " - FFT v2g NP (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# ----------------------- Seasonal Plot (Elec + Heat + Transport) ------------------------#
# Path to save seasonal plots
path = figurePath + "Nodal Price\\Seasonal\\"
# --- Elec ---
file_name = "Electricity Nodal Price - " + file
for i in np.arange(6):
    fig = seasonPlot(TElec, timeIndex, title=file_name, PC_NO=(i+1), PC_amount=6)
    title = (file[12:-3] + " - Seasonal Plot Elec NP (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# --- Heat ---
file_name = "Heating Nodal Price - " + file
for i in np.arange(6):
    fig = seasonPlot(THeat, timeIndex, title=file_name, PC_NO=(i+1), PC_amount=6)
    title = (file[12:-3] + " - Seasonal Plot Heat NP (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# --- Transport ---
file_name = "Transport Nodal Price - " + file
for i in np.arange(6):
    fig = seasonPlot(TTrans, timeIndex, title=file_name, PC_NO=(i+1), PC_amount=6)
    title = (file[12:-3] + " - Seasonal Plot v2g NP (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# -------------------- FFT + Seasonal Plot (Elec) ---------------------#
# Path to save timeseries plots
path = figurePath + "Nodal Price\\Timeseries\\"
# --- Elec ---
file_name = "Electricity Nodal Price - " + file
for i in np.arange(6):
    fig = FFTseasonPlot(TElec, timeIndex, varianceExplainedElec, PC_NO=(i+1), PC_amount=6,dpi=200)
    title = (file[12:-3] + " - Timeseries Plot Elec NP (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# --- Heat ---
file_name = "Heating Nodal Price - " + file
for i in np.arange(6):
    fig = FFTseasonPlot(THeat, timeIndex, varianceExplainedHeat, PC_NO=(i+1), PC_amount=6,dpi=200)
    title = (file[12:-3] + " - Timeseries Plot Heat NP (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# --- Transport ---
file_name = "Transport Nodal Price - " + file
for i in np.arange(6):
    fig = FFTseasonPlot(TTrans, timeIndex, varianceExplainedTrans, PC_NO=(i+1), PC_amount=6,dpi=200)
    title = (file[12:-3] + " - Timeseries Plot v2g NP (lambda " + str(i+1) + ")")
    SavePlot(fig, path, title)
# ------------------- PC1 and PC2 combined plot (Elec + Heat + Transport) -------------------- #
# Path to save combined plots
path = figurePath + "Nodal Price\\Combined Plot\\"
# --- Elec ---
fig = PC1and2Plotter(TElec, timeIndex, [1,2], eigenValuesElec, lambdaCollectedConElec, lambdaCollectedResElec, covMatrixElec,PCType="withoutProjection")#,suptitle=("Electricity Nodal Price - " + file[12:-3]),dpi=200)
title = (file[12:-3] + " - Combined Plot Elec NP (lambda 1 & 2)")
SavePlot(fig, path, title)
# --- Heat ---
fig = PC1and2Plotter(THeat, timeIndex, [1,2], eigenValuesHeat, lambdaCollectedConHeat, lambdaCollectedResHeat, covMatrixHeat,PCType="withoutProjection")#,suptitle=("Heating Nodal Price - " + file[12:-3]),dpi=200)
title = (file[12:-3] + " - Combined Plot Heat (lambda 1 & 2)")
SavePlot(fig, path, title)
# --- Transport ---
fig = PC1and2Plotter(TTrans, timeIndex, [1,2], eigenValuesTrans, lambdaCollectedConElec, lambdaCollectedResElec, covMatrixElec,PCType="withoutProjection")#,suptitle=("Transport Nodal Price - " + file[12:-3]),dpi=200)
title = (file[12:-3] + " - Combined Plot v2g NP (lambda 1 & 2)")
SavePlot(fig, path, title)
# ---------------------- Bar plot CO2 constraint --------------------------- #
# Path to save bar plots
path = figurePath + "Nodal Price\\Bar\\"
# Name of file (must be in correct folder location)
filename_CO2 = ["postnetwork-elec_heat_v2g50_0.125_0.6.h5",
"postnetwork-elec_heat_v2g50_0.125_0.5.h5",
"postnetwork-elec_heat_v2g50_0.125_0.4.h5",
"postnetwork-elec_heat_v2g50_0.125_0.3.h5",
"postnetwork-elec_heat_v2g50_0.125_0.2.h5",
"postnetwork-elec_heat_v2g50_0.125_0.1.h5",
"postnetwork-elec_heat_v2g50_0.125_0.05.h5"]
# Variable to store nodal price PC components for each network
barMatrixCO2Elec = []
barMatrixCO2Heat = []
barMatrixCO2Trans = []
# Variable to store nodal price mean and standard variation
meanPriceElec = []
quantileMeanPriceElec = []
quantileMinPriceElec = []
meanPriceHeat = []
quantileMeanPriceHeat = []
quantileMinPriceHeat = []
meanPriceTrans = []
quantileMeanPriceTrans = []
quantileMinPriceTrans = []
for file in filename_CO2:
    # Network
    network = pypsa.Network(directory + file)
    # Get the names of the data
    dataNames = network.buses.index.str.slice(0,2).unique()
    # --- Elec ---
    # Prices for electricity for each country (restricted to 1000 €/MWh)
    priceElec = FilterPrice(network.buses_t.marginal_price[dataNames],465)
    # PCA on nodal prices for electricity
    eigenValuesElec, eigenVectorsElec, varianceExplainedElec, normConstElec, TElec = PCA(priceElec)
    # Append value to matrix
    barMatrixCO2Elec.append(varianceExplainedElec)
    # --- Heat ---
    # Prices for heat for each country (restricted to 1000 €/MWh)
    priceHeat = network.buses_t.marginal_price[[x for x in network.buses_t.marginal_price.columns if ("heat" in x) or ("cooling" in x)]]
    priceHeat = priceHeat.groupby(priceHeat.columns.str.slice(0,2), axis=1).sum()
    priceHeat.columns = priceHeat.columns + " heat"
    priceHeat = FilterPrice(priceHeat,465)
    # PCA on nodal prices for heating
    eigenValuesHeat, eigenVectorsHeat, varianceExplainedHeat, normConstHeat, THeat = PCA(priceHeat)
    # Append value to matrix
    barMatrixCO2Heat.append(varianceExplainedHeat)
    # --- Transport ---
    # Prices for transport for each country (restricted to 1000 €/MWh)
    priceTrans = FilterPrice(network.buses_t.marginal_price[dataNames + " EV battery"],465)
    # PCA on nodal prices for transport
    eigenValuesTrans, eigenVectorsTrans, varianceExplainedTrans, normConstTrans, TTrans = PCA(priceTrans)
    # Append value to matrix
    barMatrixCO2Trans.append(varianceExplainedTrans)
    # ----------------------- NP Mean (Elec + Heat + Transport) --------------------#
    # --- Elec ---
    # Mean price for country
    minPrice = priceElec.min().mean()
    meanPrice = priceElec.mean().mean()
    # append min and mean to matrix
    meanPriceElec.append([minPrice, meanPrice])
    # --- Heat ---
    # Mean price for country
    minPrice = priceHeat.min().mean()
    meanPrice = priceHeat.mean().mean()
    # append min and mean to matrix
    meanPriceHeat.append([minPrice, meanPrice])
    # --- Transport ---
    # Mean price for country
    minPrice = priceTrans.min().mean()
    meanPrice = priceTrans.mean().mean()
    # append min and mean to matrix
    meanPriceTrans.append([minPrice, meanPrice])
    # ----------------------- NP Quantile (Elec+Heat+Transport) --------------------#
    # --- Elec ---
    # Quantiles of the min and mean price per country
    quantileMinPrice = np.quantile(priceElec.min(),[0.05,0.25,0.75,0.95])
    quantileMeanPrice = np.quantile(priceElec.mean(),[0.05,0.25,0.75,0.95])
    # append quantiles to matrix
    quantileMeanPriceElec.append(quantileMeanPrice)
    quantileMinPriceElec.append(quantileMinPrice)
    # --- Heat ---
    # Quantiles of the min and mean price per country
    quantileMinPrice = np.quantile(priceHeat.min(),[0.05,0.25,0.75,0.95])
    quantileMeanPrice = np.quantile(priceHeat.mean(),[0.05,0.25,0.75,0.95])
    # append quantiles to matrix
    quantileMeanPriceHeat.append(quantileMeanPrice)
    quantileMinPriceHeat.append(quantileMinPrice)
    # --- Transport ---
    # Quantiles of the min and mean price per country
    quantileMinPrice = np.quantile(priceTrans.min(),[0.05,0.25,0.75,0.95])
    quantileMeanPrice = np.quantile(priceTrans.mean(),[0.05,0.25,0.75,0.95])
    # append quantiles to matrix
    quantileMeanPriceTrans.append(quantileMeanPrice)
    quantileMinPriceTrans.append(quantileMinPrice)
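# NOTE: the [0.05, 0.25, 0.75, 0.95] quantiles gathered above are assumed to
# provide the shaded uncertainty bands that PriceEvolution() draws around the
# mean and minimum price curves further below.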
constraints = ["40%", "50%", "60%", "70%", "80%", "90%", "95%"]
title = "Number of PC describing variance of network as a function of $CO_{2}$ constraint"
xlabel = "$CO_{2}$ constraint"
suptitleElec = ("Electricity Nodal Price - " + file[12:-14])
fig = BAR(barMatrixCO2Elec, 10, filename_CO2, constraints, title, xlabel, suptitleElec)
titleBarCO2Elec = (file[12:-14] + " - Bar CO2 Elec NP")
SavePlot(fig, path, titleBarCO2Elec)
suptitleHeat = ("Heating Nodal Price - " + file[12:-14])
fig = BAR(barMatrixCO2Heat, 10, filename_CO2, constraints, title, xlabel, suptitleHeat)
titleBarCO2Heat = (file[12:-14] + " - Bar CO2 Heat NP")
SavePlot(fig, path, titleBarCO2Heat)
suptitleTrans = ("Transport Nodal Price - " + file[12:-14])
fig = BAR(barMatrixCO2Trans, 10, filename_CO2, constraints, title, xlabel, suptitleTrans)
titleBarCO2Trans = (file[12:-14] + " - Bar CO2 v2g NP")
SavePlot(fig, path, titleBarCO2Trans)
# ----------------------- Price evolution (Elec) --------------------#
path = figurePath + "Nodal Price\\Price Evolution\\"
title = ("Electricity Nodal Price Evolution - " + file[12:-14])
fig = PriceEvolution(meanPriceElec,quantileMeanPriceElec,quantileMinPriceElec, networktype="green", figsize=[10,4])#, title=title)
title = (file[12:-14] + " - Elec NP CO2 Evolution")
SavePlot(fig, path, title)
# ----------------------- Price evolution (Heat) --------------------#
path = figurePath + "Nodal Price\\Price Evolution\\"
title = ("Heating Nodal Price Evolution - " + file[12:-14])
fig = PriceEvolution(meanPriceHeat,quantileMeanPriceHeat,quantileMinPriceHeat,networktype="green")#,title=title)
title = (file[12:-14] + " - Heating NP CO2 Evolution")
SavePlot(fig, path, title)
# ----------------------- Price evolution (Transport) --------------------#
path = figurePath + "Nodal Price\\Price Evolution\\"
title = ("Transport Nodal Price Evolution - " + file[12:-14])
fig = PriceEvolution(meanPriceTrans,quantileMeanPriceTrans,quantileMinPriceTrans,networktype="green")#,title=title)
title = (file[12:-14] + " - Transport NP CO2 Evolution")
SavePlot(fig, path, title)
#%%
##############################################################################
##############################################################################
################################# Coherence ##################################
##############################################################################
##############################################################################
# -------------------- Coherence Plot (Elec + Heat + Transport) ---------------------#
# File name
file = "postnetwork-elec_heat_v2g50_0.125_0.05.h5"
# Import network
network = pypsa.Network(directory+file)
# Get the names of the data
dataNames = network.buses.index.str.slice(0,2).unique()
# Get time stamps
timeIndex = network.loads_t.p_set.index
# Path to save coherence plots
path = figurePath + "Coherence\\"
# --- Elec ---
# Electricity load for each country
loadElec = network.loads_t.p_set[dataNames]
# Solar PV generation
generationSolar = network.generators_t.p[dataNames + " solar"]
generationSolar.columns = generationSolar.columns.str.slice(0,2)
# Onshore wind generation
generationOnwind = network.generators_t.p[[country for country in network.generators_t.p.columns if "onwind" in country]].groupby(network.generators.bus.str.slice(0,2),axis=1).sum()
# Offshore wind generation
# Because offwind is only for 21 countries, additional methods have to be implemented to make it an 8760 x 30 matrix
# Create an empty array of 8760 x 30, add the offwind generation and remove 'NaN' values.
generationOffwind = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
generationOffwind += network.generators_t.p[[country for country in network.generators_t.p.columns if "offwind" in country]].groupby(network.generators.bus.str.slice(0,2),axis=1).sum()
generationOffwind = generationOffwind.replace(np.nan,0)
# RoR generations
# Because RoR is only for 27 countries, additional methods have to be implemented to make it an 8760 x 30 matrix
# Create an empty array of 8760 x 30, add the RoR generation and remove 'NaN' values.
generationRoR = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
generationRoR += network.generators_t.p[[country for country in network.generators_t.p.columns if "ror" in country]].groupby(network.generators.bus.str.slice(0,2),axis=1).sum()
generationRoR = generationRoR.replace(np.nan,0)
# Combined generation for electricity
generationElec = generationSolar + generationOnwind + generationOffwind + generationRoR
# Mismatch electricity
mismatchElec = generationElec - loadElec
# Prices for each country (restricted to 1000 €/MWh)
priceElec = FilterPrice(network.buses_t.marginal_price[dataNames],465)
# Coherence between prices and mismatch
c1Elec, c2Elec, c3Elec = Coherence(mismatchElec, priceElec)
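# NOTE: Coherence() is a project-specific helper; judging from the plotting
# calls below, its three return values are matrices displayed over the first
# noX x noY = 6 x 6 principal components of the two fields. As a generic point
# of reference only (not necessarily what Coherence() computes), a classical
# magnitude-squared coherence between two hourly series could be obtained with
# SciPy:
# from scipy.signal import coherence
# f, Cxy = coherence(mismatchElec["DK"], priceElec["DK"], fs=1.0, nperseg=24*14)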
# Plot properties
title1 = "" # "Coherence 1: Electricity mismatch and nodal price"
title2 = "" # "Coherence 2: Electricity mismatch and nodal price"
title3 = "" # "Coherence 3: Electricity mismatch and nodal price"
xlabel = "Electricity Mismatch"
ylabel="Electricity Prices"
noX = 6
noY = 6
fig1 = CoherencePlot(dataMatrix=c1Elec.T, übertitle="", title=title1, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[0,1])
fig2 = CoherencePlot(dataMatrix=c2Elec.T, übertitle="", title=title2, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[0,1])
fig3 = CoherencePlot(dataMatrix=c3Elec.T, übertitle="", title=title3, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[-1,1])
SavePlot(fig1, path, title = (file[12:-3] + " - C1 elec mismatch and ENP"))
SavePlot(fig2, path, title = (file[12:-3] + " - C2 elec mismatch and ENP"))
SavePlot(fig3, path, title = (file[12:-3] + " - C3 elec mismatch and ENP"))
# Combined Plot
fig = CoherencePlotCombined(c1Elec.T, c2Elec.T, c3Elec.T, xlabel=xlabel, ylabel=ylabel)
SavePlot(fig, path, title = (file[12:-3] + " - C123 combined elec mismatch and ENP"))
# --- Heat ---
# Heat load for each country
loadHeat = network.loads_t.p_set[[country for country in network.loads_t.p_set.columns if "heat" in country]].groupby(network.loads.bus.str.slice(0,2),axis=1).sum()
# Heat generators for each country (solar collectors)
# Because some countries have urban collectors, while others have central collectors,
# additional methods have to be implemented to make it an 8760 x 30 matrix
# Create an empty array of 8760 x 30, add the heat generators and remove 'NaN' values.
generationHeatSolar = network.generators_t.p[dataNames + " solar thermal collector"]
generationHeatSolar.columns = generationHeatSolar.columns.str.slice(0,2)
# Urban heat
generationHeatUrbanSingle = network.generators_t.p[[country for country in network.generators_t.p.columns if "urban" in country]]
generationHeatUrbanSingle.columns = generationHeatUrbanSingle.columns.str.slice(0,2)
generationHeatUrban = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
generationHeatUrban += generationHeatUrbanSingle
generationHeatUrban = generationHeatUrban.replace(np.nan,0)
# Central heat
generationHeatCentralSingle = network.generators_t.p[[country for country in network.generators_t.p.columns if "central" in country]]
generationHeatCentralSingle.columns = generationHeatCentralSingle.columns.str.slice(0,2)
generationHeatCentral = pd.DataFrame(np.zeros([8760,30]),index=timeIndex, columns=dataNames)
generationHeatCentral += generationHeatCentralSingle
generationHeatCentral = generationHeatCentral.replace(np.nan,0)
# Combine generation for heat
generationHeat = generationHeatSolar + generationHeatUrban + generationHeatCentral
# Mismatch heat
mismatchHeat = generationHeat - loadHeat
# Prices for heat for each country (restricted to 1000 €/MWh)
priceHeat = network.buses_t.marginal_price[[x for x in network.buses_t.marginal_price.columns if ("heat" in x) or ("cooling" in x)]]
priceHeat = priceHeat.groupby(priceHeat.columns.str.slice(0,2), axis=1).sum()
priceHeat.columns = priceHeat.columns + " heat"
priceHeat = FilterPrice(priceHeat,465)
# Coherence between prices and mismatch
c1Heat, c2Heat, c3Heat = Coherence(mismatchHeat, priceHeat)
# Plot properties
title1 = "" # "Coherence 1: Heating mismatch and nodal price"
title2 = "" # "Coherence 2: Heating mismatch and nodal price"
title3 = "" # "Coherence 3: Heating mismatch and nodal price"
xlabel = "Heat Mismatch"
ylabel="Heating Prices"
noX = 6
noY = 6
fig1 = CoherencePlot(dataMatrix=c1Heat.T, übertitle="", title=title1, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[0,1])
fig2 = CoherencePlot(dataMatrix=c2Heat.T, übertitle="", title=title2, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[0,1])
fig3 = CoherencePlot(dataMatrix=c3Heat.T, übertitle="", title=title3, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[-1,1])
SavePlot(fig1, path, title = (file[12:-3] + " - C1 heat mismatch and HNP"))
SavePlot(fig2, path, title = (file[12:-3] + " - C2 heat mismatch and HNP"))
SavePlot(fig3, path, title = (file[12:-3] + " - C3 heat mismatch and HNP"))
# Combined Plot
fig = CoherencePlotCombined(c1Heat.T, c2Heat.T, c3Heat.T, xlabel=xlabel, ylabel=ylabel)
SavePlot(fig, path, title = (file[12:-3] + " - C123 combined heat mismatch and HNP"))
# --- Transport ---
# Transport load for each country
loadTrans = network.loads_t.p_set[dataNames + ' transport']
# Generation transport
generationTrans = pd.DataFrame(data=np.zeros([8760,30]), index=timeIndex, columns=(dataNames + ' transport'))
# Mismatch transport
mismatchTrans = generationTrans - loadTrans
# Prices for transport for each country (restricted to 1000 €/MWh)
priceTrans = FilterPrice(network.buses_t.marginal_price[dataNames + " EV battery"],465)
# Coherence between prices and mismatch
c1Trans, c2Trans, c3Trans = Coherence(mismatchTrans, priceTrans)
# Plot properties
title1 = "" # "Coherence 1: Transport mismatch and nodal price"
title2 = "" # "Coherence 2: Transport mismatch and nodal price"
title3 = "" # "Coherence 3: Transport mismatch and nodal price"
xlabel = "Transport Mismatch"
ylabel="Transport Prices"
noX = 6
noY = 6
fig1 = CoherencePlot(dataMatrix=c1Trans.T, übertitle="", title=title1, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[0,1])
fig2 = CoherencePlot(dataMatrix=c2Trans.T, übertitle="", title=title2, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[0,1])
fig3 = CoherencePlot(dataMatrix=c3Trans.T, übertitle="", title=title3, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[-1,1])
SavePlot(fig1, path, title = (file[12:-3] + " - C1 v2g mismatch and TNP"))
SavePlot(fig2, path, title = (file[12:-3] + " - C2 v2g mismatch and TNP"))
SavePlot(fig3, path, title = (file[12:-3] + " - C3 v2g mismatch and TNP"))
# Combined Plot
fig = CoherencePlotCombined(c1Trans.T, c2Trans.T, c3Trans.T, xlabel=xlabel, ylabel=ylabel)
SavePlot(fig, path, title = (file[12:-3] + " - C123 combined v2g mismatch and TNP"))
# --- Elec/Heat Prices ---
# Coherence between elec prices and heat prices
c1Price, c2Price, c3Price = Coherence(priceElec, priceHeat)
# Plot properties
title1 = "" # "Coherence 1: Electricity and heating nodal prices"
title2 = "" # "Coherence 2: Electricity and heating nodal prices"
title3 = "" # "Coherence 3: Electricity and heating nodal prices"
xlabel = "Electricity Prices"
ylabel = "Heating Prices"
noX = 6
noY = 6
fig1 = CoherencePlot(dataMatrix=c1Price.T, übertitle="", title=title1, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[0,1])
fig2 = CoherencePlot(dataMatrix=c2Price.T, übertitle="", title=title2, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[0,1])
fig3 = CoherencePlot(dataMatrix=c3Price.T, übertitle="", title=title3, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[-1,1])
SavePlot(fig1, path, title = (file[12:-3] + " - C1 ENP and HNP"))
SavePlot(fig2, path, title = (file[12:-3] + " - C2 ENP and HNP"))
SavePlot(fig3, path, title = (file[12:-3] + " - C3 ENP and HNP"))
# Combined Plot
fig = CoherencePlotCombined(c1Price.T, c2Price.T, c3Price.T, xlabel=xlabel, ylabel=ylabel)
SavePlot(fig, path, title = (file[12:-3] + " - C123 combined ENP and HNP"))
# --- Elec/Transport Prices ---
# Coherence between elec prices and heat prices
c1Price, c2Price, c3Price = Coherence(priceElec, priceTrans)
# Plot properties
title1 = "" # "Coherence 1: Electricity and transport nodal prices"
title2 = "" # "Coherence 2: Electricity and transport nodal prices"
title3 = "" # "Coherence 3: Electricity and transport nodal prices"
xlabel = "Electricity Prices"
ylabel = "Transport Prices"
noX = 6
noY = 6
fig1 = CoherencePlot(dataMatrix=c1Price.T, übertitle="", title=title1, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[0,1])
fig2 = CoherencePlot(dataMatrix=c2Price.T, übertitle="", title=title2, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[0,1])
fig3 = CoherencePlot(dataMatrix=c3Price.T, übertitle="", title=title3, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[-1,1])
SavePlot(fig1, path, title = (file[12:-3] + " - C1 ENP and TNP"))
SavePlot(fig2, path, title = (file[12:-3] + " - C2 ENP and TNP"))
SavePlot(fig3, path, title = (file[12:-3] + " - C3 ENP and TNP"))
# Combined Plot
fig = CoherencePlotCombined(c1Price.T, c2Price.T, c3Price.T, xlabel=xlabel, ylabel=ylabel)
SavePlot(fig, path, title = (file[12:-3] + " - C123 combined ENP and TNP"))
# --- Heat/Transport Prices ---
# Coherence between elec prices and heat prices
c1Price, c2Price, c3Price = Coherence(priceHeat, priceTrans)
# Plot properties
title1 = "" # "Coherence 1: Heating and transport nodal prices"
title2 = "" # "Coherence 2: Heating and transport nodal prices"
title3 = "" # "Coherence 3: Heating and transport nodal prices"
xlabel = "Heating Prices"
ylabel = "Transport Prices"
noX = 6
noY = 6
fig1 = CoherencePlot(dataMatrix=c1Price.T, übertitle="", title=title1, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[0,1])
fig2 = CoherencePlot(dataMatrix=c2Price.T, übertitle="", title=title2, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[0,1])
fig3 = CoherencePlot(dataMatrix=c3Price.T, übertitle="", title=title3, xlabel=xlabel, ylabel=ylabel, noX=noX, noY=noY, dataRange=[-1,1])
SavePlot(fig1, path, title = (file[12:-3] + " - C1 HNP and TNP"))
SavePlot(fig2, path, title = (file[12:-3] + " - C2 HNP and TNP"))
SavePlot(fig3, path, title = (file[12:-3] + " - C3 HNP and TNP"))
# Combined Plot
fig = CoherencePlotCombined(c1Price.T, c2Price.T, c3Price.T, xlabel=xlabel, ylabel=ylabel)
SavePlot(fig, path, title = (file[12:-3] + " - C123 combined HNP and TNP"))
# Finish timer
t1 = time.time() # End timer
total_time = round(t1-t0)
total_time_min = math.floor(total_time/60)
total_time_sec = round(total_time-(total_time_min*60))
print("\n \nThe code is now done running. It took %s min and %s sec." %(total_time_min,total_time_sec))
# 8 min and 40 sec
| 43.607337 | 221 | 0.68411 | 7,731 | 64,190 | 5.634847 | 0.05562 | 0.016528 | 0.012855 | 0.01763 | 0.875491 | 0.85265 | 0.825815 | 0.791038 | 0.763859 | 0.74196 | 0 | 0.037838 | 0.149385 | 64,190 | 1,471 | 222 | 43.636982 | 0.75984 | 0.213538 | 0 | 0.620219 | 0 | 0 | 0.174255 | 0.034427 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.010929 | 0 | 0.010929 | 0.001366 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
11a182d20ab2ba449c48609410f2b80e85f8fe43 | 208 | py | Python | hon/renderers/__init__.py | swquinn/hon | 333332029ee884a8822d38024659d5d7da64ff1a | [
"MIT"
] | null | null | null | hon/renderers/__init__.py | swquinn/hon | 333332029ee884a8822d38024659d5d7da64ff1a | [
"MIT"
] | 14 | 2019-06-23T01:49:55.000Z | 2021-02-22T01:26:51.000Z | hon/renderers/__init__.py | swquinn/hon | 333332029ee884a8822d38024659d5d7da64ff1a | [
"MIT"
] | null | null | null | from .renderer import Renderer
from .render_context import RenderContext
from .ebook import EbookRenderer, EpubRenderer
from .html.html_renderer import HtmlRenderer
from .pdf.pdf_renderer import PdfRenderer
| 29.714286 | 46 | 0.855769 | 26 | 208 | 6.730769 | 0.5 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105769 | 208 | 6 | 47 | 34.666667 | 0.94086 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
11d03b8ec50621f66bae0b3fb7fc924428ef14e9 | 280 | py | Python | py/generate_password.py | christophfranke/continuous-integration-tools | eeb77a624303f5b5971ea7110a8352ff72feb312 | [
"MIT"
] | null | null | null | py/generate_password.py | christophfranke/continuous-integration-tools | eeb77a624303f5b5971ea7110a8352ff72feb312 | [
"MIT"
] | null | null | null | py/generate_password.py | christophfranke/continuous-integration-tools | eeb77a624303f5b5971ea7110a8352ff72feb312 | [
"MIT"
] | null | null | null | from modules import engine
from modules import out
from modules import run
@engine.prepare_and_clean
def execute():
out.log("generating password...")
run.local('openssl rand -base64 15')
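    # NOTE: 15 random bytes encode to exactly 20 base64 characters
    # (4 * 15 / 3 = 20, no padding), matching the "20 characters" in help() below.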
def help():
out.log("Creates a strong random 20 characters password.", 'help') | 23.333333 | 70 | 0.728571 | 40 | 280 | 5.05 | 0.65 | 0.163366 | 0.252475 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025532 | 0.160714 | 280 | 12 | 70 | 23.333333 | 0.834043 | 0 | 0 | 0 | 0 | 0 | 0.341637 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | true | 0.222222 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
eeb2af912a13eb667df46e97c3cddb30790f007e | 5,086 | py | Python | tests/torchvision/model-server/test_serve.py | basisai/bedrock-express | 273b6377f080e1f6125dfd8ec465a8aaf3dee468 | [
"Apache-2.0"
] | 9 | 2020-10-22T06:42:38.000Z | 2020-10-22T08:38:17.000Z | tests/torchvision/model-server/test_serve.py | basisai/bedrock-express | 273b6377f080e1f6125dfd8ec465a8aaf3dee468 | [
"Apache-2.0"
] | 69 | 2020-10-23T02:15:36.000Z | 2022-03-31T00:03:18.000Z | tests/tf-vision/model-server/test_serve.py | basisai/bedrock-express | 273b6377f080e1f6125dfd8ec465a8aaf3dee468 | [
"Apache-2.0"
] | 1 | 2021-09-28T01:36:41.000Z | 2021-09-28T01:36:41.000Z | from io import BytesIO
from unittest import TestCase
from serve import Model
soccer = b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x1c\x00\x00\x00\x1c\x08\x06\x00\x00\x00r\r\xdf\x94\x00\x00\x00\x01sRGB\x00\xae\xce\x1c\xe9\x00\x00\x00\x84eXIfMM\x00*\x00\x00\x00\x08\x00\x05\x01\x12\x00\x03\x00\x00\x00\x01\x00\x01\x00\x00\x01\x1a\x00\x05\x00\x00\x00\x01\x00\x00\x00J\x01\x1b\x00\x05\x00\x00\x00\x01\x00\x00\x00R\x01(\x00\x03\x00\x00\x00\x01\x00\x02\x00\x00\x87i\x00\x04\x00\x00\x00\x01\x00\x00\x00Z\x00\x00\x00\x00\x00\x00\x00H\x00\x00\x00\x01\x00\x00\x00H\x00\x00\x00\x01\x00\x03\xa0\x01\x00\x03\x00\x00\x00\x01\x00\x01\x00\x00\xa0\x02\x00\x04\x00\x00\x00\x01\x00\x00\x00\x1c\xa0\x03\x00\x04\x00\x00\x00\x01\x00\x00\x00\x1c\x00\x00\x00\x00\x97\x87\x99\xa1\x00\x00\x00\tpHYs\x00\x00\x0b\x13\x00\x00\x0b\x13\x01\x00\x9a\x9c\x18\x00\x00\x01YiTXtXML:com.adobe.xmp\x00\x00\x00\x00\x00<x:xmpmeta xmlns:x="adobe:ns:meta/" x:xmptk="XMP Core 5.4.0">\n <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">\n <rdf:Description rdf:about=""\n xmlns:tiff="http://ns.adobe.com/tiff/1.0/">\n <tiff:Orientation>1</tiff:Orientation>\n </rdf:Description>\n </rdf:RDF>\n</x:xmpmeta>\nL\xc2\'Y\x00\x00\x04\xafIDATH\r\x8d\x96\xc9+\xffO\x1c\xc7\xc7\xbeo\x85B!\xd9\x92".J\xe9sQ\xca\xc5A\x1c\x9c\\(\x17\xc9\xc5_\xe0JN\xca\xd1\xc1IJI)%[\xe4 \x07\xfb\x92}/\xd9w\xf3\x9b\xc7\xeb\xf7\x9d\xe9\xf3\xf9X\xfaL\xcdg\xe6=\xaf\xe5\xf9Zg>A\xda\x0c\x15\xe0\xf8\xfa\xfa\xfa\x913((H1\x03\x19\xa1\x810\xc1\x83]\xc1\xc1\xc1\x81\xb2\xff\xca\x170 \x1e\xec\xec\xec\xa8\xd5\xd5Uu\x7f\x7f\xaf>??Ull\xacJMMU%%%*>>\xfeW\x10\x1f\x02!\xfdk|||\x08yee\x85\xd0\xff8\xbb\xbb\xbb\xf5\xe3\xe3\xa3~{{\xd3&\xec\x7f\xa9\xd3AP},\xf8\xf7\xc1\xf1\xfb\xfb\xbb|\x85\x84\x84\xa8\x97\x97\x17\xf9\xbe\xbd\xbdU\xcb\xcb\xcbjqqQy<\x1eUVV&\xde\xc1\x83\xd7\xc6@\x15\x19\x19\xa9\xf8F\x87\x7fn\xbf\x01Z|\xc0\xd8\x87\x85\x85\x05\x94;x\x01D\xee\xf5\xf5U%$$\x08\x98?\xa8O\x15X"\xde\x98\xf0\xa8\xf0\xf0p\'\x04\x8dI\xa52\xd9\x03`\xbf\t\x85\xadV\xe4\x96\x96\x96$\xd7\x9c\xc1k\x87\x03\xe4\x10\xe2\xfe\xfe\xbe\xea\xe8\xe8PGGG\xf2\x8dB\xef\xe1\x1d"\xbb\xf7WJH\x91oii\x110\x1f\xba\x01r\x89>??wE\xb1\xb5\xb5\x05IS4\x06\xd4M[DB\xfc\xf7\xc3\x99\xf1VxMt\xb4\t\xa9\xde\xdc\xdc\x14]\x93\x93\x93\xc2\x05\x9d!\x1e\x9aU\x9c\x18\x1f\x1fw\xce\x10.\x064\xa6\r\x1d\xc5prr\xa2\xba\xba\xba\xd4\xc0\xc0\x80z~~\x96\x02\x81\xdf\xf2Y\x19\xe4\xfb\xfb\xfb%=\xf40\xe7(2\xab\x96\xb2\xae\xaa\xaa\xd2\xe9\xe9\xe9b\x19m\xc0\xa0\xd4M!\xb8(LLL\x08\x1d[\x98555\xe2\r\xbcxF{<==\xe9\x83\x83\x03]ZZ*<\x1b\x1b\x1b\x90%\n$^>\xd6\xd6\xd6\x9c\xa2\xde\xde^\x114\xd6kB\x04 \x8a\xfa\xfa\xfa\x84\'11Qgee\xe9\xbc\xbc<\'\x83!\xf0\x9a\xb6\xd1\xd7\xd7\xd7\xb2\x9a\x88\t}ll\xec;\xe0\xc8\xc8\x88\x10\xdb\xdb\xdb\xf5\xdd\xdd\x9dX\t\x08\x13\xc0\xed\xedm\xa1\xe7\xe7\xe7\xeb\x98\x98\x18\x07\xc47\x9e\xb6\xb6\xb6\n\xc8\xe5\xe5\xa5\xa6\x16\xce\xce\xce\xf4\xc5\xc5\x85nll\xd4===\x0e\xd0UiJJ\x8a\x91S\xaa\xad\xadMrB/\x91\x17&-\x02\xbd\xa9\xa9I\x99bR\x11\x11\x11\xc2\xcb\x0f}\xc7\xa8\xad\xad\x95~E\xce\xe6\x9b\xbc!c\xa2!<\xfc\x98\xb3\xff1\x0b\n\n\xd4\xc2\xc2\x82(\xa6\x0f\x8dIrkps\xa0\x84\xde\xaa\xac\xac\x14\xc1\xb8\xb881\x8a\xf2\xbf\xba\xba\x92\xb3\x9c\x9c\x1cer\xe7z\x91C\xf4\xe4\xe6\xe6\xaa\xf2\xf2r\xe1\xa1=\x9c\x87\x9cdgg\x0b\x88\xf5\xcc\xae\x802\x8b\x8b\x8bE044T\x00M.\xd5\xc3\xc3\x83\xea\xec\xec\x14C\x89\x04\x0e 
\x87r\xf8\x18\x18kG(\x9eX"\xaf@TT\x94\xa5\xc9j\x9b\x16k\xd3\xd2\xd2Taa\xa12U\xe7hl<\x1e\x8f\xbb$\xe0g\x00\x8cn\xc2\x1b\x1d\x1d-g\xfc\xb8\xbb\x94~:==\x95\xfc\xc0h\x87U\xc0\x8a0!\xc4\x13{\xc7\xa2\x98\xa7\x89\xfe\x84\x07\xaf\xac\x13\xa4\x02\xbd\xa6\xb0\x84\x8eN\xf7\x1e\xe2\x19B\x14\x01\x16\x11\x16\xf6\xac(E\x19aMNN\xf6\xb1\x1e%6\xf4\xf0\xa0\x03~\x8c\xe3e\xa1\xd88\xb3FH\x0e\xadGIII\xca\xb4\x84\x1a\x1d\x1dU\xbb\xbb\xbbR\x04\x96\x99\x90\xa2\x98\x95\xe2`\xb5\x13\xe5\x18\x03\x10\x1eQ\xc9\xa6\x97\x95i\rI\x03Fa\x8c\xac\x06L\xe2g-\xe0\xad\xe3\x8dcxLn***T}}\xbd\xa22\x01d \xec\x14\x98=\xb2\xb4\n/\xc4\xd0\xd0\x902=-|\xc7\xc7\xc7*##\xc3y\xc7\xa1\x0b\xa9p\x98\x1fs{\x08\x08-rxx\xa8\xa6\xa6\xa6$,uuu\x12b\x1b.\xcb\x8fw\xfc\xd5\xd8\xdb\xdbS\r\r\r.W\xc3\xc3\xc3\xdf\xc0D\x06\x0f\xed0\x1e\xc8\x96\x97\xc2\x10e\x9a\x1e\xd2&\xbfzffFn\x1b\xae\xc0\xf5\xf5u\x99\xe6\xff\x8d\xe6\x9e4\xaf\xbf\xae\xae\xaev2\x83\x83\x83V\xe5\xb7\x15w}\x86\x055\xe1\xd0\xcd\xcd\xcdN\x89\xb1^@\xb8\xe2xz\x00\xc2\xb0\xf9\xf9y\x9d\x99\x99)|EEEzvv\xd6\xe93\xde\xbb\xbd\xdd\xb8\xb6\xb0!b%L\x84\x8e\x9c\xcd\xcd\xcd\xa9\xe9\xe9iEA\x99\xdb\xdf\xb5\r<LswJ8I\x85ym\xa4E\x8crQg\xf3\xec\xad\xfbG@oP\xcbLO\xdd\xdc\xdcH\x85\x02D\xf5r\x83PL\xde\x7f\x11\xad\xb1V\xce\x7f\xfd\x15\x10F,ezW\xa5\xbf\x02\xfb\rP |\x7f\x02Ze\xac6L\xdegv\xffS\xe8,\xcd\x7f\xfd\x0f\xc4<\xa3\xb7\x1a\xf3\xf2\xe6\x00\x00\x00\x00IEND\xaeB`\x82' # noqa: E501
class TestModelServer(TestCase):
    def test_validate(self):
        m = Model()
        result = m.validate(files={"image": BytesIO(soccer)}, skip_preprocess=True)
        self.assertIn("result", result)
        self.assertIn("prediction_id", result)
        self.assertEqual(len(result["prediction_id"].split("/")), 3)
        self.assertEqual(result["result"], [805])
| 299.176471 | 4,632 | 0.724145 | 1,099 | 5,086 | 3.346679 | 0.367607 | 0.106036 | 0.073409 | 0.032626 | 0.087819 | 0.080479 | 0.080479 | 0.072322 | 0.032626 | 0.032626 | 0 | 0.23664 | 0.028706 | 5,086 | 16 | 4,633 | 317.875 | 0.507895 | 0.001966 | 0 | 0 | 0 | 0.333333 | 0.891801 | 0.836421 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.083333 | false | 0 | 0.25 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
eeea7315cf1d7abd300ccba2bb3a8132583b760e | 105 | py | Python | tests/conftest.py | tatarize/vnoise | de7a60b7599df733b2c449420554ef93dacbd03b | [
"MIT"
] | 9 | 2021-02-25T03:29:02.000Z | 2022-03-16T20:07:58.000Z | tests/conftest.py | tatarize/vnoise | de7a60b7599df733b2c449420554ef93dacbd03b | [
"MIT"
] | null | null | null | tests/conftest.py | tatarize/vnoise | de7a60b7599df733b2c449420554ef93dacbd03b | [
"MIT"
] | 2 | 2021-02-25T14:56:48.000Z | 2021-12-08T02:01:06.000Z | import pytest
from vnoise import Noise
@pytest.fixture(scope="module")
def noise():
    return Noise()
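# NOTE: with scope="module", pytest builds the Noise() instance once per test
# module and injects it into every test that declares a `noise` parameter.
# Hypothetical usage (not part of this file):
# def test_noise1_range(noise):
#     assert -1.0 <= noise.noise1(0.5) <= 1.0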
| 13.125 | 31 | 0.72381 | 14 | 105 | 5.428571 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161905 | 105 | 7 | 32 | 15 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
eeeec07e464a6fab9fd8b008ba93c9dbe39dd134 | 1,183 | py | Python | FuzzingTool_Dialog_InputTestcaseandRun.py | Ryu-Miyaki/Fuzz4B | 8546f165d4dbdd97eb6ab5a6f4c445ee81ec364b | [
"MIT"
] | 16 | 2020-06-25T11:56:59.000Z | 2022-02-05T14:00:12.000Z | FuzzingTool_Dialog_InputTestcaseandRun.py | Ryu-Miyaki/Fuzz4B | 8546f165d4dbdd97eb6ab5a6f4c445ee81ec364b | [
"MIT"
] | null | null | null | FuzzingTool_Dialog_InputTestcaseandRun.py | Ryu-Miyaki/Fuzz4B | 8546f165d4dbdd97eb6ab5a6f4c445ee81ec364b | [
"MIT"
] | null | null | null | """Subclass of Dialog_InputTestcaseandRun, which is generated by wxFormBuilder."""
import wx
import FuzzingTool
# Implementing Dialog_InputTestcaseandRun
class FuzzingTool_Dialog_InputTestcaseandRun( FuzzingTool.Dialog_InputTestcaseandRun ):
    def __init__( self, parent ):
        FuzzingTool.Dialog_InputTestcaseandRun.__init__( self, parent )
    # Handlers for Dialog_InputTestcaseandRun events.
    def Button_CopyOnButtonClick( self, event ):
        # TODO: Implement Button_CopyOnButtonClick
        pass
    def CheckBox_UseGDBorNotOnCheckBox( self, event ):
        # TODO: Implement CheckBox_UseGDBorNotOnCheckBox
        pass
    def RadioBtn_CrashTestcaseOnRadioButton( self, event ):
        # TODO: Implement RadioBtn_CrashTestcaseOnRadioButton
        pass
    def RadioBtn_OriginalTestcaseOnRadioButton( self, event ):
        # TODO: Implement RadioBtn_OriginalTestcaseOnRadioButton
        pass
    def RadioBtn_MinimizedTestcaseOnRadioButton( self, event ):
        # TODO: Implement RadioBtn_MinimizedTestcaseOnRadioButton
        pass
    def Button_RunGDBOnButtonClick( self, event ):
        # TODO: Implement Button_RunGDBOnButtonClick
        pass
    def Button_ExitOnButtonClick( self, event ):
        # TODO: Implement Button_ExitOnButtonClick
        pass
| 28.853659 | 87 | 0.814877 | 108 | 1,183 | 8.657407 | 0.324074 | 0.06738 | 0.097326 | 0.164706 | 0.186096 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129332 | 1,183 | 40 | 88 | 29.575 | 0.907767 | 0.422654 | 0 | 0.368421 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0 | 1 | 0.421053 | false | 0.368421 | 0.105263 | 0 | 0.578947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
0122c03b5d1fc21adc4ba2bd13f1aefe17a54762 | 143 | py | Python | tests/__init__.py | s-ball/MockSelector | c902053a6bcf111503060491777a1af83f92750d | [
"MIT"
] | null | null | null | tests/__init__.py | s-ball/MockSelector | c902053a6bcf111503060491777a1af83f92750d | [
"MIT"
] | null | null | null | tests/__init__.py | s-ball/MockSelector | c902053a6bcf111503060491777a1af83f92750d | [
"MIT"
] | null | null | null | import os
import sys
# Make the tests directory and its parent (the project root) importable.
current_path = os.path.dirname(os.path.abspath(__file__))
sys.path.extend((current_path, os.path.dirname(current_path)))
| 23.833333 | 62 | 0.79021 | 23 | 143 | 4.608696 | 0.391304 | 0.311321 | 0.245283 | 0.320755 | 0.45283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06993 | 143 | 5 | 63 | 28.6 | 0.796992 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
012e4eef1875649a6948591070239660289a5680 | 36 | py | Python | awsbase/awslambda/__init__.py | mjm461/awsbase | 989dd4a4f5a5011c4f5ffd8bf23230849f13acff | [
"BSD-2-Clause"
] | null | null | null | awsbase/awslambda/__init__.py | mjm461/awsbase | 989dd4a4f5a5011c4f5ffd8bf23230849f13acff | [
"BSD-2-Clause"
] | null | null | null | awsbase/awslambda/__init__.py | mjm461/awsbase | 989dd4a4f5a5011c4f5ffd8bf23230849f13acff | [
"BSD-2-Clause"
] | null | null | null | from .base_lambda import BaseLambda
| 18 | 35 | 0.861111 | 5 | 36 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
01501aa0c8da894d9add05532470ebb3226a0c56 | 124 | py | Python | artificery/parsers/upsample_residuals/parse.py | seung-lab/artificery | f2dc93a5ff8b057f6a7c4789d483efe52809da22 | [
"Apache-2.0"
] | 1 | 2020-06-17T21:04:15.000Z | 2020-06-17T21:04:15.000Z | artificery/parsers/upsample_residuals/parse.py | seung-lab/artificery | f2dc93a5ff8b057f6a7c4789d483efe52809da22 | [
"Apache-2.0"
] | null | null | null | artificery/parsers/upsample_residuals/parse.py | seung-lab/artificery | f2dc93a5ff8b057f6a7c4789d483efe52809da22 | [
"Apache-2.0"
] | 1 | 2020-10-09T04:06:41.000Z | 2020-10-09T04:06:41.000Z | from scalenet.upsample_residuals import UpsampleResiduals
def parse(params, create_module):
return UpsampleResiduals()
| 24.8 | 57 | 0.830645 | 13 | 124 | 7.769231 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112903 | 124 | 4 | 58 | 31 | 0.918182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
017ea89c853b9538215b116110099049727ec610 | 35,070 | py | Python | license_protected_downloads/tests/test_views.py | NexellCorp/infrastructure_server_linaro-license-protection | c328a1e023c60b443b2fca6349179104f7637077 | [
"Net-SNMP",
"Xnet",
"Info-ZIP",
"OML"
] | null | null | null | license_protected_downloads/tests/test_views.py | NexellCorp/infrastructure_server_linaro-license-protection | c328a1e023c60b443b2fca6349179104f7637077 | [
"Net-SNMP",
"Xnet",
"Info-ZIP",
"OML"
] | null | null | null | license_protected_downloads/tests/test_views.py | NexellCorp/infrastructure_server_linaro-license-protection | c328a1e023c60b443b2fca6349179104f7637077 | [
"Net-SNMP",
"Xnet",
"Info-ZIP",
"OML"
] | null | null | null | __author__ = 'dooferlad'
import hashlib
import os
import tempfile
import unittest
import urllib2
import urlparse
import mock
from django.conf import settings
from django.test import Client, TestCase
from django.http import HttpResponse
from license_protected_downloads.buildinfo import BuildInfo
from license_protected_downloads.common import _insert_license_into_db
from license_protected_downloads.config import INTERNAL_HOSTS
from license_protected_downloads.tests.helpers import temporary_directory
from license_protected_downloads.tests.helpers import TestHttpServer
from license_protected_downloads.views import _process_include_tags
from license_protected_downloads.views import is_same_parent_dir
from license_protected_downloads import views
THIS_DIRECTORY = os.path.dirname(os.path.abspath(__file__))
TESTSERVER_ROOT = os.path.join(THIS_DIRECTORY, "testserver_root")
class BaseServeViewTest(TestCase):
def setUp(self):
self.client = Client()
self.old_served_paths = settings.SERVED_PATHS
settings.SERVED_PATHS = [os.path.join(THIS_DIRECTORY,
"testserver_root")]
self.old_upload_path = settings.UPLOAD_PATH
settings.UPLOAD_PATH = os.path.join(THIS_DIRECTORY,
"test_upload_root")
if not os.path.isdir(settings.UPLOAD_PATH):
os.makedirs(settings.UPLOAD_PATH)
self.old_master_api_key = settings.MASTER_API_KEY
settings.MASTER_API_KEY = "1234abcd"
def tearDown(self):
settings.SERVED_PATHS = self.old_served_paths
settings.MASTER_API_KEY = self.old_master_api_key
os.rmdir(settings.UPLOAD_PATH)
settings.UPLOAD_PATH = self.old_upload_path
class ViewTests(BaseServeViewTest):
def test_license_directly(self):
response = self.client.get('/licenses/license.html', follow=True)
self.assertEqual(response.status_code, 200)
self.assertContains(response, '/build-info')
def test_licensefile_directly_samsung(self):
response = self.client.get('/licenses/samsung.html', follow=True)
self.assertEqual(response.status_code, 200)
self.assertContains(response, '/build-info')
def test_licensefile_directly_ste(self):
response = self.client.get('/licenses/ste.html', follow=True)
self.assertEqual(response.status_code, 200)
self.assertContains(response, '/build-info')
def test_licensefile_directly_linaro(self):
response = self.client.get('/licenses/linaro.html', follow=True)
self.assertEqual(response.status_code, 200)
self.assertContains(response, '/build-info')
def test_redirect_to_license_samsung(self):
# Get BuildInfo for target file
target_file = "build-info/origen-blob.txt"
file_path = os.path.join(TESTSERVER_ROOT, target_file)
build_info = BuildInfo(file_path)
# Try to fetch file from server - we should be redirected
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
digest = hashlib.md5(build_info.get("license-text")).hexdigest()
self.assertRedirects(response, '/license?lic=%s&url=%s' %
(digest, target_file))
# Make sure that we get the license text in the license page
self.assertContains(response, build_info.get("license-text"))
# Test that we use the "samsung" theme. This contains exynos.png
self.assertContains(response, "exynos.png")
def test_redirect_to_license_ste(self):
# Get BuildInfo for target file
target_file = "build-info/snowball-blob.txt"
file_path = os.path.join(TESTSERVER_ROOT, target_file)
build_info = BuildInfo(file_path)
# Try to fetch file from server - we should be redirected
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
digest = hashlib.md5(build_info.get("license-text")).hexdigest()
self.assertRedirects(response, '/license?lic=%s&url=%s' %
(digest, target_file))
# Make sure that we get the license text in the license page
self.assertContains(response, build_info.get("license-text"))
# Test that we use the "stericsson" theme. This contains igloo.png
self.assertContains(response, "igloo.png")
def test_redirect_to_license_linaro(self):
# Get BuildInfo for target file
target_file = "build-info/linaro-blob.txt"
file_path = os.path.join(TESTSERVER_ROOT, target_file)
build_info = BuildInfo(file_path)
# Try to fetch file from server - we should be redirected
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
digest = hashlib.md5(build_info.get("license-text")).hexdigest()
self.assertRedirects(response, '/license?lic=%s&url=%s' %
(digest, target_file))
# Make sure that we get the license text in the license page
self.assertContains(response, build_info.get("license-text"))
# Test that we use the "linaro" theme. This contains linaro.png
self.assertContains(response, "linaro.png")
@staticmethod
def set_up_license(target_file, index=0):
# Get BuildInfo for target file
file_path = os.path.join(TESTSERVER_ROOT, target_file)
build_info = BuildInfo(file_path)
# Insert license information into database
text = build_info.get("license-text", index)
digest = hashlib.md5(text).hexdigest()
theme = build_info.get("theme", index)
_insert_license_into_db(digest, text, theme)
return digest
def test_redirect_to_file_on_accept_license(self):
target_file = "build-info/linaro-blob.txt"
digest = self.set_up_license(target_file)
# Accept the license for our file...
accept_url = '/accept-license?lic=%s&url=%s' % (digest, target_file)
response = self.client.post(accept_url, {"accept": "accept"})
# We should have a license accept cookie.
accept_cookie_name = "license_accepted_" + digest
self.assertTrue(accept_cookie_name in response.cookies)
# We should get redirected back to the original file location.
self.assertEqual(response.status_code, 302)
url = urlparse.urljoin("http://testserver/", target_file)
listing_url = os.path.dirname(url)
self.assertEqual(response['Location'],
listing_url + "?dl=/" + target_file)
def test_redirect_to_decline_page_on_decline_license(self):
target_file = "build-info/linaro-blob.txt"
digest = self.set_up_license(target_file)
# Reject the license for our file...
accept_url = '/accept-license?lic=%s&url=%s' % (digest, target_file)
response = self.client.post(accept_url, {"reject": "reject"})
# We should get a message saying we don't have access to the file.
self.assertContains(response, "Without accepting the license, you can"
" not download the requested files.")
def test_download_file_accepted_license(self):
target_file = "build-info/linaro-blob.txt"
url = urlparse.urljoin("http://testserver/", target_file)
digest = self.set_up_license(target_file)
# Accept the license for our file...
accept_url = '/accept-license?lic=%s&url=%s' % (digest, target_file)
response = self.client.post(accept_url, {"accept": "accept"})
# We should get redirected back to the original file location.
self.assertEqual(response.status_code, 302)
listing_url = os.path.dirname(url)
self.assertEqual(response['Location'],
listing_url + "?dl=/" + target_file)
# We should have a license accept cookie.
accept_cookie_name = "license_accepted_" + digest
self.assertTrue(accept_cookie_name in response.cookies)
# XXX Workaround for seemingly out of sync cookie handling XXX
# The cookies in client.cookies are instances of
# http://docs.python.org/library/cookie.html once they have been
# returned by a client get/post. Unfortunately for the next query
# client.cookies needs to be a dictionary keyed by cookie name and
# containing a value of whatever is stored in the cookie (or so it
# seems). For this reason we start up a new client, erasing all
# cookies from the current session, and re-introduce them.
client = Client()
client.cookies[accept_cookie_name] = accept_cookie_name
response = client.get(url)
# If we have access to the file, we will get an X-Sendfile response
self.assertEqual(response.status_code, 200)
file_path = os.path.join(TESTSERVER_ROOT, target_file)
self.assertEqual(response['X-Sendfile'], file_path)
def test_OPEN_EULA_txt(self):
target_file = '~linaro-android/staging-vexpress-a9/test.txt'
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
# If we have access to the file, we will get an X-Sendfile response
self.assertEqual(response.status_code, 200)
file_path = os.path.join(TESTSERVER_ROOT, target_file)
self.assertEqual(response['X-Sendfile'], file_path)
def test_never_available_dirs(self):
target_file = '~linaro-android/staging-imx53/test.txt'
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
# If we don't have access we will get a Forbidden response (403)
self.assertEqual(response.status_code, 403)
def test_protected_by_EULA_txt(self):
# Get BuildInfo for target file
target_file = "~linaro-android/staging-origen/test.txt"
# Try to fetch file from server - we should be redirected
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
eula_path = os.path.join(settings.PROJECT_ROOT,
"templates/licenses/samsung.txt")
with open(eula_path) as license_file:
license_text = license_file.read()
digest = hashlib.md5(license_text).hexdigest()
self.assertRedirects(response, "/license?lic=%s&url=%s" %
(digest, target_file))
# Make sure that we get the license text in the license page
self.assertContains(response, license_text)
# Test that we use the "samsung" theme. This contains exynos.png
self.assertContains(response, "exynos.png")
@mock.patch('license_protected_downloads.views.config')
def test_protected_internal_file(self, config):
'''ensure a protected file can be downloaded by an internal host'''
config.INTERNAL_HOSTS = ('127.0.0.1',)
target_file = "~linaro-android/staging-origen/test.txt"
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
self.assertIn('X-Sendfile', response)
@mock.patch('license_protected_downloads.views.config')
def test_protected_internal_listing(self, config):
'''ensure directory listings are browseable for internal hosts'''
config.INTERNAL_HOSTS = ('127.0.0.1',)
response = self.client.get('http://testserver/')
self.assertIn('linaro-license-protection.git/commit', response.content)
def test_per_file_license_samsung(self):
# Get BuildInfo for target file
target_file = "images/origen-blob.txt"
# Try to fetch file from server - we should be redirected
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
eula_path = os.path.join(settings.PROJECT_ROOT,
"templates/licenses/samsung.txt")
with open(eula_path) as license_file:
license_text = license_file.read()
digest = hashlib.md5(license_text).hexdigest()
self.assertRedirects(response, "/license?lic=%s&url=%s" %
(digest, target_file))
# Make sure that we get the license text in the license page
self.assertContains(response, license_text)
# Test that we use the "samsung" theme. This contains exynos.png
self.assertContains(response, "exynos.png")
def test_per_file_non_protected_dirs(self):
target_file = "images/MANIFEST"
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
# If we have access to the file, we will get an X-Sendfile response
self.assertEqual(response.status_code, 200)
file_path = os.path.join(TESTSERVER_ROOT, target_file)
self.assertEqual(response['X-Sendfile'], file_path)
def test_dir_containing_only_dirs(self):
target_file = "~linaro-android"
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
# If we have access to the file, we will get an X-Sendfile response
self.assertContains(
response,
r"<th></th><th>Name</th><th>Last modified</th>"
"<th>Size</th><th>License</th>")
def test_not_found_file(self):
target_file = "12qwaszx"
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
self.assertContains(response, "not found", status_code=404)
def test_unprotected_BUILD_INFO(self):
target_file = 'build-info/panda-open.txt'
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
# If we have access to the file, we will get an X-Sendfile response
self.assertEqual(response.status_code, 200)
file_path = os.path.join(TESTSERVER_ROOT, target_file)
self.assertEqual(response['X-Sendfile'], file_path)
def test_redirect_to_file_on_accept_multi_license(self):
target_file = "build-info/multi-license.txt"
digest = self.set_up_license(target_file)
# Accept the first license for our file...
accept_url = '/accept-license?lic=%s&url=%s' % (digest, target_file)
response = self.client.post(accept_url, {"accept": "accept"})
# We should have a license accept cookie.
accept_cookie_name = "license_accepted_" + digest
self.assertTrue(accept_cookie_name in response.cookies)
# We should get redirected back to the original file location.
self.assertEqual(response.status_code, 302)
url = urlparse.urljoin("http://testserver/", target_file)
listing_url = os.path.dirname(url)
self.assertEqual(
response['Location'], listing_url + "?dl=/" + target_file)
client = Client()
client.cookies[accept_cookie_name] = accept_cookie_name
digest = self.set_up_license(target_file, 1)
# Accept the second license for our file...
accept_url = '/accept-license?lic=%s&url=%s' % (digest, target_file)
response = client.post(accept_url, {"accept": "accept"})
# We should have a license accept cookie.
accept_cookie_name1 = "license_accepted_" + digest
self.assertTrue(accept_cookie_name1 in response.cookies)
# We should get redirected back to the original file location.
self.assertEqual(response.status_code, 302)
url = urlparse.urljoin("http://testserver/", target_file)
listing_url = os.path.dirname(url)
self.assertEqual(
response['Location'], listing_url + "?dl=/" + target_file)
client = Client()
client.cookies[accept_cookie_name] = accept_cookie_name
client.cookies[accept_cookie_name1] = accept_cookie_name1
response = client.get(url)
# If we have access to the file, we will get an X-Sendfile response
self.assertEqual(response.status_code, 200)
file_path = os.path.join(TESTSERVER_ROOT, target_file)
self.assertEqual(response['X-Sendfile'], file_path)
def test_header_html(self):
target_file = "~linaro-android"
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
self.assertContains(
response, r"Welcome to the Linaro releases server")
def test_exception_internal_host_for_lic(self):
internal_host = INTERNAL_HOSTS[0]
target_file = 'build-info/origen-blob.txt'
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(
url, follow=True, REMOTE_ADDR=internal_host)
# If we have access to the file, we will get an X-Sendfile response
self.assertEqual(response.status_code, 200)
file_path = os.path.join(TESTSERVER_ROOT, target_file)
self.assertEqual(response['X-Sendfile'], file_path)
def test_exception_internal_host_for_openid(self):
internal_host = INTERNAL_HOSTS[0]
target_file = 'build-info/openid.txt'
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(
url, follow=True, REMOTE_ADDR=internal_host)
# If we have access to the file, we will get an X-Sendfile response
self.assertEqual(response.status_code, 200)
file_path = os.path.join(TESTSERVER_ROOT, target_file)
self.assertEqual(response['X-Sendfile'], file_path)
def test_exception_internal_host_for_lic_and_openid(self):
internal_host = INTERNAL_HOSTS[0]
target_file = 'build-info/origen-blob-openid.txt'
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(
url, follow=True, REMOTE_ADDR=internal_host)
# If we have access to the file, we will get an X-Sendfile response
self.assertEqual(response.status_code, 200)
file_path = os.path.join(TESTSERVER_ROOT, target_file)
self.assertEqual(response['X-Sendfile'], file_path)
def test_no_exception_ip(self):
internal_host = '10.1.2.3'
target_file = 'build-info/origen-blob.txt'
file_path = os.path.join(TESTSERVER_ROOT, target_file)
build_info = BuildInfo(file_path)
# Try to fetch file from server - we should be redirected
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(
url, follow=True, REMOTE_ADDR=internal_host)
digest = hashlib.md5(build_info.get("license-text")).hexdigest()
self.assertRedirects(response, '/license?lic=%s&url=%s' %
(digest, target_file))
# Make sure that we get the license text in the license page
self.assertContains(response, build_info.get("license-text"))
# Test that we use the "samsung" theme. This contains exynos.png
self.assertContains(response, "exynos.png")
def test_broken_build_info_directory(self):
target_file = "build-info/broken-build-info"
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
# If a build-info file is invalid, we don't allow access
self.assertEqual(response.status_code, 403)
def test_broken_build_info_file(self):
target_file = "build-info/broken-build-info/test.txt"
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
# If a build-info file is invalid, we don't allow access
self.assertEqual(response.status_code, 403)
def test_unable_to_download_hidden_files(self):
target_file = '~linaro-android/staging-vexpress-a9/OPEN-EULA.txt'
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
# This file exists, but isn't listed so we shouldn't be able to
# download it.
self.assertEqual(response.status_code, 404)
def test_partial_build_info_file_open(self):
target_file = ("partial-license-settings/"
"partially-complete-build-info/"
"should_be_open.txt")
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
# If a build-info file specifies that this file is open, it can be downloaded
self.assertEqual(response.status_code, 200)
def test_partial_build_info_file_protected(self):
target_file = ("partial-license-settings/"
"partially-complete-build-info/"
"should_be_protected.txt")
file_path = os.path.join(TESTSERVER_ROOT, target_file)
build_info = BuildInfo(file_path)
# Try to fetch file from server - we should be redirected
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
digest = hashlib.md5(build_info.get("license-text")).hexdigest()
self.assertRedirects(response, '/license?lic=%s&url=%s' %
(digest, target_file))
def test_partial_build_info_file_unspecified(self):
target_file = ("partial-license-settings/"
"partially-complete-build-info/"
"should_be_inaccessible.txt")
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
# If a build-info file has no information about this file, access is forbidden
self.assertEqual(response.status_code, 403)
def test_listings_do_not_contain_double_slash_in_link(self):
target_file = 'images/'
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
# this link should not contain a double slash:
self.assertNotContains(response, "//origen-blob.txt")
def test_directory_with_broken_symlink(self):
target_file = 'broken-symlinks'
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
# this test should not cause an exception. Anything else is a pass.
self.assertEqual(response.status_code, 200)
def test_whitelisted_dirs(self):
target_file = "precise/restricted/whitelisted.txt"
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
# If we have access to the file, we will get an X-Sendfile response
self.assertEqual(response.status_code, 200)
file_path = os.path.join(TESTSERVER_ROOT, target_file)
self.assertEqual(response['X-Sendfile'], file_path)
def make_temporary_file(self, data, root=None):
"""Creates a temporary file and fills it with data.
Returns the file name of the new temporary file.
"""
tmp_file_handle, tmp_filename = tempfile.mkstemp(dir=root)
tmp_file = os.fdopen(tmp_file_handle, "w")
tmp_file.write(data)
tmp_file.close()
self.addCleanup(os.unlink, tmp_filename)
return os.path.basename(tmp_filename)
def test_replace_self_closing_tag(self):
target_file = "readme"
old_cwd = os.getcwd()
file_path = os.path.join(TESTSERVER_ROOT, target_file)
os.chdir(file_path)
ret = _process_include_tags(
'Test <linaro:include file="README" /> html')
self.assertEqual(ret, r"Test Included from README html")
os.chdir(old_cwd)
def test_replace_self_closing_tag1(self):
target_file = "readme"
old_cwd = os.getcwd()
file_path = os.path.join(TESTSERVER_ROOT, target_file)
os.chdir(file_path)
ret = _process_include_tags(
'Test <linaro:include file="README"/> html')
self.assertEqual(ret, r"Test Included from README html")
os.chdir(old_cwd)
def test_replace_with_closing_tag(self):
target_file = "readme"
old_cwd = os.getcwd()
file_path = os.path.join(TESTSERVER_ROOT, target_file)
os.chdir(file_path)
ret = _process_include_tags(
'Test <linaro:include file="README">README is missing'
'</linaro:include> html')
self.assertEqual(ret, r"Test Included from README html")
os.chdir(old_cwd)
def test_replace_non_existent_file(self):
target_file = "readme"
old_cwd = os.getcwd()
file_path = os.path.join(TESTSERVER_ROOT, target_file)
os.chdir(file_path)
ret = _process_include_tags(
'Test <linaro:include file="NON_EXISTENT_FILE" /> html')
self.assertEqual(ret, r"Test html")
os.chdir(old_cwd)
def test_replace_empty_file_property(self):
target_file = "readme"
old_cwd = os.getcwd()
file_path = os.path.join(TESTSERVER_ROOT, target_file)
os.chdir(file_path)
ret = _process_include_tags(
'Test <linaro:include file="" /> html')
self.assertEqual(ret, r"Test html")
os.chdir(old_cwd)
def test_replace_parent_dir(self):
target_file = "readme"
old_cwd = os.getcwd()
file_path = os.path.join(TESTSERVER_ROOT, target_file)
os.chdir(file_path)
ret = _process_include_tags(
'Test <linaro:include file="../README" /> html')
self.assertEqual(ret, r"Test html")
os.chdir(old_cwd)
def test_replace_subdir(self):
target_file = "readme"
old_cwd = os.getcwd()
file_path = os.path.join(TESTSERVER_ROOT, target_file)
os.chdir(file_path)
ret = _process_include_tags(
'Test <linaro:include file="subdir/README" /> html')
self.assertEqual(ret, r"Test html")
os.chdir(old_cwd)
def test_replace_subdir_parent_dir(self):
target_file = "readme"
old_cwd = os.getcwd()
file_path = os.path.join(TESTSERVER_ROOT, target_file)
os.chdir(file_path)
ret = _process_include_tags(
'Test <linaro:include file="subdir/../README" /> html')
self.assertEqual(ret, r"Test Included from README html")
os.chdir(old_cwd)
def test_replace_full_path(self):
target_file = "readme"
old_cwd = os.getcwd()
file_path = os.path.join(TESTSERVER_ROOT, target_file)
os.chdir(file_path)
tmp = self.make_temporary_file("Included from /tmp", root="/tmp")
ret = _process_include_tags(
'Test <linaro:include file="/tmp/%s" /> html' % tmp)
self.assertEqual(ret, r"Test html")
os.chdir(old_cwd)
def test_replace_self_dir(self):
target_file = "readme"
old_cwd = os.getcwd()
file_path = os.path.join(TESTSERVER_ROOT, target_file)
os.chdir(file_path)
ret = _process_include_tags(
'Test <linaro:include file="./README" /> html')
self.assertEqual(ret, r"Test Included from README html")
os.chdir(old_cwd)
def test_replace_self_parent_dir(self):
target_file = "readme"
old_cwd = os.getcwd()
file_path = os.path.join(TESTSERVER_ROOT, target_file)
os.chdir(file_path)
ret = _process_include_tags(
'Test <linaro:include file="./../README" /> html')
self.assertEqual(ret, r"Test html")
os.chdir(old_cwd)
def test_replace_symlink(self):
target_file = "readme"
old_cwd = os.getcwd()
file_path = os.path.join(TESTSERVER_ROOT, target_file)
os.chdir(file_path)
ret = _process_include_tags(
'Test <linaro:include file="READMELINK" /> html')
self.assertEqual(ret, r"Test html")
os.chdir(old_cwd)
def test_process_include_tags(self):
target_file = "readme"
url = urlparse.urljoin("http://testserver/", target_file)
response = self.client.get(url, follow=True)
self.assertContains(response, r"Included from README")
def test_is_same_parent_dir_true(self):
fname = os.path.join(TESTSERVER_ROOT, "subdir/../file")
self.assertTrue(is_same_parent_dir(TESTSERVER_ROOT, fname))
def test_is_same_parent_dir_false(self):
fname = os.path.join(TESTSERVER_ROOT, "../file")
self.assertFalse(is_same_parent_dir(TESTSERVER_ROOT, fname))
def test_get_remote_static_unsupported_file(self):
response = self.client.get('/get-remote-static?name=unsupported.css')
self.assertEqual(response.status_code, 404)
def test_get_remote_static_nonexisting_file(self):
pages = {"/": "index"}
with TestHttpServer(pages) as http_server:
css_url = '%s/init.css' % http_server.base_url
settings.SUPPORTED_REMOTE_STATIC_FILES = {
'init.css': css_url}
self.assertRaises(urllib2.HTTPError, self.client.get,
'/get-remote-static?name=init.css')
def test_get_remote_static(self):
pages = {"/": "index", "/init.css": "test CSS"}
with TestHttpServer(pages) as http_server:
css_url = '%s/init.css' % http_server.base_url
settings.SUPPORTED_REMOTE_STATIC_FILES = {
'init.css': css_url}
response = self.client.get('/get-remote-static?name=init.css')
self.assertEqual(response.status_code, 200)
self.assertContains(response, 'test CSS')
def test_path_to_root(self):
response = self.client.get("http://testserver//", follow=True)
# Shouldn't be able to escape served paths...
self.assertEqual(response.status_code, 404)
def test_path_to_dir_above(self):
response = self.client.get("http://testserver/../", follow=True)
# Shouldn't be able to escape served paths...
self.assertEqual(response.status_code, 404)
def test_path_to_dir_above2(self):
response = self.client.get("http://testserver/..", follow=True)
# Shouldn't be able to escape served paths...
self.assertEqual(response.status_code, 404)
class HowtoViewTests(BaseServeViewTest):
def test_no_howtos(self):
with temporary_directory() as serve_root:
settings.SERVED_PATHS = [serve_root.root]
serve_root.make_file(
"build/9/build.tar.bz2", with_buildinfo=True)
response = self.client.get('/build/9/')
self.assertEqual(response.status_code, 200)
self.assertContains(response, 'build.tar.bz2')
def test_howtos_without_license(self):
with temporary_directory() as serve_root:
settings.SERVED_PATHS = [serve_root.root]
serve_root.make_file(
"build/9/build.tar.bz2", with_buildinfo=True)
serve_root.make_file(
"build/9/howto/HOWTO_test.txt", data=".h1 HowTo Test")
response = self.client.get('/build/9/')
self.assertEqual(response.status_code, 200)
self.assertContains(response, 'build.tar.bz2')
def test_howtos_with_license_in_buildinfo(self):
with temporary_directory() as serve_root:
settings.SERVED_PATHS = [serve_root.root]
serve_root.make_file(
"build/9/build.tar.bz2", with_buildinfo=True)
serve_root.make_file(
"build/9/howto/HOWTO_test.txt", data=".h1 HowTo Test",
with_buildinfo=True)
response = self.client.get('/build/9/')
self.assertEqual(response.status_code, 200)
self.assertContains(response, 'howto')
def test_howtos_with_license_in_openeula(self):
with temporary_directory() as serve_root:
settings.SERVED_PATHS = [serve_root.root]
serve_root.make_file(
"build/9/build.tar.bz2", with_buildinfo=True)
serve_root.make_file(
"build/9/howto/HOWTO_test.txt", data=".h1 HowTo Test",
with_buildinfo=False)
serve_root.make_file(
"build/9/howto/OPEN-EULA.txt", with_buildinfo=False)
response = self.client.get('/build/9/')
self.assertEqual(response.status_code, 200)
self.assertContains(response, 'howto')
def test_howtos_howto_dir(self):
with temporary_directory() as serve_root:
settings.SERVED_PATHS = [serve_root.root]
serve_root.make_file(
"build/9/build.tar.bz2", with_buildinfo=True)
serve_root.make_file(
"build/9/howto/HOWTO_releasenotes.txt", data=".h1 HowTo Test")
response = self.client.get('/build/9/howto/')
self.assertEqual(response.status_code, 200)
self.assertContains(response, 'HowTo Test')
def test_howtos_product_dir(self):
with temporary_directory() as serve_root:
settings.SERVED_PATHS = [serve_root.root]
serve_root.make_file(
"build/9/build.tar.bz2", with_buildinfo=True)
serve_root.make_file(
"build/9/target/product/panda/howto/HOWTO_releasenotes.txt",
data=".h1 HowTo Test")
response = self.client.get('/build/9/target/product/panda/howto/')
self.assertEqual(response.status_code, 200)
self.assertContains(response, 'HowTo Test')
class FileViewTests(BaseServeViewTest):
def test_static_file(self):
with temporary_directory() as serve_root:
settings.SERVED_PATHS = [serve_root.root]
serve_root.make_file("MD5SUM")
serve_root.make_file(
"BUILD-INFO.txt",
data=("Format-Version: 2.0\n\n"
"Files-Pattern: MD5SUM\n"
"License-Type: open\n"))
response = self.client.get('/MD5SUM')
self.assertEqual(response.status_code, 200)
class ViewHelpersTests(BaseServeViewTest):
def test_auth_group_error(self):
groups = ["linaro", "batman", "catwoman", "joker"]
request = mock.Mock()
request.path = "mock_path"
response = views.group_auth_failed_response(request, groups)
self.assertIsNotNone(response)
self.assertTrue(isinstance(response, HttpResponse))
self.assertContains(
response,
"You need to be the member of one of the linaro batman, catwoman "
"or joker groups",
status_code=403)
if __name__ == '__main__':
unittest.main()
| 42.7162 | 79 | 0.652067 | 4,389 | 35,070 | 5.008658 | 0.088859 | 0.058682 | 0.052313 | 0.042032 | 0.806259 | 0.784561 | 0.755447 | 0.7292 | 0.701133 | 0.686985 | 0 | 0.007682 | 0.242829 | 35,070 | 820 | 80 | 42.768293 | 0.820178 | 0.113031 | 0 | 0.633613 | 0 | 0 | 0.155528 | 0.063734 | 0 | 0 | 0 | 0 | 0.184874 | 1 | 0.114286 | false | 0 | 0.030252 | 0 | 0.156303 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6d6e3324f7e5b31d4833af56ea28e374b91f9c27 | 4,154 | py | Python | 70_question/famous_algorithm/topological_sort_test.py | alvinctk/google-tech-dev-guide | 9d7759bea1f44673c2de4f25a94b27368928a59f | [
"Apache-2.0"
] | 26 | 2019-06-07T05:29:47.000Z | 2022-03-19T15:32:27.000Z | 70_question/famous_algorithm/topological_sort_test.py | alvinctk/google-tech-dev-guide | 9d7759bea1f44673c2de4f25a94b27368928a59f | [
"Apache-2.0"
] | null | null | null | 70_question/famous_algorithm/topological_sort_test.py | alvinctk/google-tech-dev-guide | 9d7759bea1f44673c2de4f25a94b27368928a59f | [
"Apache-2.0"
] | 6 | 2019-10-10T06:39:28.000Z | 2020-05-12T19:50:55.000Z | import topological_sort as program
import unittest
class TestProgram(unittest.TestCase):
def test_case_1(self):
jobs = [1, 2, 3, 4, 5, 6, 7, 8]
deps = [[3, 1], [8, 1], [8, 7], [5, 7], [5, 2], [1, 4], [1, 6], [1, 2], [7, 6]]
order = program.topologicalSort(jobs, deps)
self.assertEqual(isValidTopologicalOrder(order, jobs, deps), True)
def test_case_2(self):
jobs = [1, 2, 3, 4, 5, 6, 7, 8]
deps = [[3, 1], [8, 1], [8, 7], [5, 7], [5, 2], [1, 4], [6, 7], [1, 2], [7, 6]]
order = program.topologicalSort(jobs, deps)
self.assertEqual(order, [])
def test_case_3(self):
jobs = [1, 2, 3, 4, 5, 6, 7, 8]
deps = [[3, 1], [8, 1], [8, 7], [5, 7], [5, 2], [1, 4], [1, 6], [1, 2], [7, 6], [4, 6], [6, 2], [2, 3]]
order = program.topologicalSort(jobs, deps)
self.assertEqual(order, [])
def test_case_4(self):
jobs = [1, 2, 3, 4, 5, 6, 7, 8]
deps = [[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7], [7, 8], [8, 1]]
order = program.topologicalSort(jobs, deps)
self.assertEqual(order, [])
def test_case_5(self):
jobs = [1, 2, 3, 4, 5, 6, 7, 8, 9]
deps = [[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [7, 6], [7, 8], [8, 1]]
order = program.topologicalSort(jobs, deps)
self.assertEqual(isValidTopologicalOrder(order, jobs, deps), True)
def test_case_6(self):
jobs = [1, 2, 3, 4, 5, 6, 7, 8]
deps = [[1, 2], [3, 5], [4, 6], [3, 6], [1, 7], [7, 8], [1, 8], [2, 8]]
order = program.topologicalSort(jobs, deps)
self.assertEqual(isValidTopologicalOrder(order, jobs, deps), True)
def test_case_7(self):
jobs = [1, 2, 3, 4, 5, 6, 7, 8]
deps = [
[1, 2], [1, 3], [1, 4], [1, 5], [1, 6], [1, 7],
[2, 8], [3, 8], [4, 8], [5, 8], [6, 8], [7, 8],
]
order = program.topologicalSort(jobs, deps)
self.assertEqual(isValidTopologicalOrder(order, jobs, deps), True)
def test_case_8(self):
jobs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
deps = [
[1, 2], [1, 3], [1, 4], [1, 5], [1, 6], [1, 7],
[2, 8], [3, 8], [4, 8], [5, 8], [6, 8], [7, 8],
[2, 3], [2, 4], [5, 4], [7, 6], [6, 2], [6, 3],
[6, 5], [5, 9], [9, 8], [8, 0], [4, 0], [5, 0],
[9, 0], [2, 0], [3, 9], [3, 10], [10, 11], [11, 12], [2, 12],
]
order = program.topologicalSort(jobs, deps)
self.assertEqual(isValidTopologicalOrder(order, jobs, deps), True)
def test_case_9(self):
jobs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
deps = [
[1, 2], [1, 3], [1, 4], [1, 5], [1, 6], [1, 7],
[2, 8], [3, 8], [4, 8], [5, 8], [6, 8], [7, 8],
[2, 3], [2, 4], [5, 4], [7, 6], [6, 2], [6, 3],
[6, 5], [5, 9], [9, 8], [8, 0], [4, 0], [5, 0],
[9, 0], [2, 0], [3, 9], [3, 10], [10, 11], [11, 12], [12, 2],
]
order = program.topologicalSort(jobs, deps)
self.assertEqual(order, [])
def test_case_10(self):
jobs = [1, 2, 3, 4]
deps = [[1, 2], [1, 3], [3, 2], [4, 2], [4, 3]]
order = program.topologicalSort(jobs, deps)
self.assertEqual(isValidTopologicalOrder(order, jobs, deps), True)
def test_case_11(self):
jobs = [1, 2, 3, 4, 5]
deps = []
order = program.topologicalSort(jobs, deps)
self.assertEqual(isValidTopologicalOrder(order, jobs, deps), True)
def test_case_12(self):
jobs = [1, 2, 3, 4, 5]
deps = [[1, 4], [5, 2]]
order = program.topologicalSort(jobs, deps)
self.assertEqual(isValidTopologicalOrder(order, jobs, deps), True)
def isValidTopologicalOrder(order, jobs, deps):
visited = {}
for candidate in order:
for prereq, job in deps:
if candidate == prereq and job in visited:
return False
visited[candidate] = True
for job in jobs:
if job not in visited:
return False
return len(order) == len(jobs)
if __name__ == "__main__":
unittest.main()
| 37.763636 | 111 | 0.47039 | 638 | 4,154 | 3.010972 | 0.073668 | 0.022905 | 0.020302 | 0.024987 | 0.809995 | 0.80583 | 0.799584 | 0.799584 | 0.779802 | 0.779282 | 0 | 0.139665 | 0.310544 | 4,154 | 109 | 112 | 38.110092 | 0.531075 | 0 | 0 | 0.538462 | 0 | 0 | 0.001926 | 0 | 0 | 0 | 0 | 0 | 0.131868 | 1 | 0.142857 | false | 0 | 0.021978 | 0 | 0.208791 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
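Side note on the topological_sort_test.py record above: the topological_sort module it imports is not part of this record. A minimal Kahn's-algorithm sketch of a topologicalSort(jobs, deps) that satisfies the contract the tests assume (each deps entry is a [prereq, job] pair, and an empty list signals a cycle); this is only an illustrative sketch under those assumptions, not the repository's actual implementation:

from collections import defaultdict, deque

def topologicalSort(jobs, deps):
    graph = defaultdict(list)              # prereq -> dependent jobs
    indegree = {job: 0 for job in jobs}
    for prereq, job in deps:
        graph[prereq].append(job)
        indegree[job] += 1
    queue = deque(job for job in jobs if indegree[job] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    # A cycle leaves some jobs with nonzero in-degree; report it as [].
    return order if len(order) == len(jobs) else []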
6d6e9d7e1e43345dfdcd53a0572d0b310a9a9631 | 38 | py | Python | protocols/protocol_6_1/metrics.py | Lucioric2000/GelReportModels | 1704cdea3242d5b46c8b81ef46553ccae2799435 | [
"Apache-2.0"
] | null | null | null | protocols/protocol_6_1/metrics.py | Lucioric2000/GelReportModels | 1704cdea3242d5b46c8b81ef46553ccae2799435 | [
"Apache-2.0"
] | null | null | null | protocols/protocol_6_1/metrics.py | Lucioric2000/GelReportModels | 1704cdea3242d5b46c8b81ef46553ccae2799435 | [
"Apache-2.0"
] | null | null | null | from protocols.metrics_1_1_0 import *
| 19 | 37 | 0.842105 | 7 | 38 | 4.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 0.105263 | 38 | 1 | 38 | 38 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6da8d21d942afce3e3502783def6179bef406cae | 23,830 | py | Python | yang/connection/binding_connection.py | rvilalta/OFC_SC472 | 6d95f34c058c69918b045b164f87724d61ed29c7 | [
"Apache-2.0"
] | 2 | 2019-11-12T15:15:44.000Z | 2020-02-24T16:46:53.000Z | yang/connection/binding_connection.py | rvilalta/OFC2019_SC472 | c0bcbd05bb6c90eb9d8ab5abdc10b04d65a8a5d3 | [
"Apache-2.0"
] | null | null | null | yang/connection/binding_connection.py | rvilalta/OFC2019_SC472 | c0bcbd05bb6c90eb9d8ab5abdc10b04d65a8a5d3 | [
"Apache-2.0"
] | 2 | 2021-09-28T15:31:03.000Z | 2021-11-16T17:53:59.000Z | # -*- coding: utf-8 -*-
from operator import attrgetter
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType
from pyangbind.lib.yangtypes import RestrictedClassType
from pyangbind.lib.yangtypes import TypedListType
from pyangbind.lib.yangtypes import YANGBool
from pyangbind.lib.yangtypes import YANGListType
from pyangbind.lib.yangtypes import YANGDynClass
from pyangbind.lib.yangtypes import ReferenceType
from pyangbind.lib.base import PybindBase
from collections import OrderedDict
from decimal import Decimal
from bitarray import bitarray
import six
# PY3 support of some PY2 keywords (needs improved)
if six.PY3:
import builtins as __builtin__
long = int
elif six.PY2:
import __builtin__
class yc_connection_connection__connection(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module connection - based on the path /connection. Each member element of
the container is represented as a class variable - with a specific
YANG type.
"""
__slots__ = ('_path_helper', '_extmethods', '__connection_id','__source_node','__target_node','__source_port','__target_port','__bandwidth','__layer_protocol_name',)
_yang_name = 'connection'
_yang_namespace = 'urn:connection'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__target_node = YANGDynClass(base=six.text_type, is_leaf=True, yang_name="target-node", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)
self.__layer_protocol_name = YANGDynClass(base=RestrictedClassType(base_type=six.text_type, restriction_type="dict_key", restriction_arg={u'ETH': {}, u'OPTICAL': {}},), is_leaf=True, yang_name="layer-protocol-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='topology:layer-protocol-name', is_config=True)
self.__connection_id = YANGDynClass(base=six.text_type, is_leaf=True, yang_name="connection-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:connection', defining_module='connection', yang_type='string', is_config=True)
self.__bandwidth = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="bandwidth", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='uint32', is_config=True)
self.__target_port = YANGDynClass(base=six.text_type, is_leaf=True, yang_name="target-port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)
self.__source_node = YANGDynClass(base=six.text_type, is_leaf=True, yang_name="source-node", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)
self.__source_port = YANGDynClass(base=six.text_type, is_leaf=True, yang_name="source-port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'connection']
def _get_connection_id(self):
"""
Getter method for connection_id, mapped from YANG variable /connection/connection_id (string)
"""
return self.__connection_id
def _set_connection_id(self, v, load=False):
"""
Setter method for connection_id, mapped from YANG variable /connection/connection_id (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_connection_id is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_connection_id() directly.
"""
parent = getattr(self, "_parent", None)
if parent is not None and load is False:
raise AttributeError("Cannot set keys directly when" +
" within an instantiated list")
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=six.text_type, is_leaf=True, yang_name="connection-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:connection', defining_module='connection', yang_type='string', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """connection_id must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=six.text_type, is_leaf=True, yang_name="connection-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:connection', defining_module='connection', yang_type='string', is_config=True)""",
})
self.__connection_id = t
if hasattr(self, '_set'):
self._set()
def _unset_connection_id(self):
self.__connection_id = YANGDynClass(base=six.text_type, is_leaf=True, yang_name="connection-id", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:connection', defining_module='connection', yang_type='string', is_config=True)
def _get_source_node(self):
"""
Getter method for source_node, mapped from YANG variable /connection/source_node (leafref)
"""
return self.__source_node
def _set_source_node(self, v, load=False):
"""
Setter method for source_node, mapped from YANG variable /connection/source_node (leafref)
If this variable is read-only (config: false) in the
source YANG file, then _set_source_node is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_source_node() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=six.text_type, is_leaf=True, yang_name="source-node", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """source_node must be of a type compatible with leafref""",
'defined-type': "leafref",
'generated-type': """YANGDynClass(base=six.text_type, is_leaf=True, yang_name="source-node", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)""",
})
self.__source_node = t
if hasattr(self, '_set'):
self._set()
def _unset_source_node(self):
self.__source_node = YANGDynClass(base=six.text_type, is_leaf=True, yang_name="source-node", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)
def _get_target_node(self):
"""
Getter method for target_node, mapped from YANG variable /connection/target_node (leafref)
"""
return self.__target_node
def _set_target_node(self, v, load=False):
"""
Setter method for target_node, mapped from YANG variable /connection/target_node (leafref)
If this variable is read-only (config: false) in the
source YANG file, then _set_target_node is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_target_node() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=six.text_type, is_leaf=True, yang_name="target-node", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """target_node must be of a type compatible with leafref""",
'defined-type': "leafref",
'generated-type': """YANGDynClass(base=six.text_type, is_leaf=True, yang_name="target-node", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)""",
})
self.__target_node = t
if hasattr(self, '_set'):
self._set()
def _unset_target_node(self):
self.__target_node = YANGDynClass(base=six.text_type, is_leaf=True, yang_name="target-node", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)
def _get_source_port(self):
"""
Getter method for source_port, mapped from YANG variable /connection/source_port (leafref)
"""
return self.__source_port
def _set_source_port(self, v, load=False):
"""
Setter method for source_port, mapped from YANG variable /connection/source_port (leafref)
If this variable is read-only (config: false) in the
source YANG file, then _set_source_port is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_source_port() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=six.text_type, is_leaf=True, yang_name="source-port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """source_port must be of a type compatible with leafref""",
'defined-type': "leafref",
'generated-type': """YANGDynClass(base=six.text_type, is_leaf=True, yang_name="source-port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)""",
})
self.__source_port = t
if hasattr(self, '_set'):
self._set()
def _unset_source_port(self):
self.__source_port = YANGDynClass(base=six.text_type, is_leaf=True, yang_name="source-port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)
def _get_target_port(self):
"""
Getter method for target_port, mapped from YANG variable /connection/target_port (leafref)
"""
return self.__target_port
def _set_target_port(self, v, load=False):
"""
Setter method for target_port, mapped from YANG variable /connection/target_port (leafref)
If this variable is read-only (config: false) in the
source YANG file, then _set_target_port is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_target_port() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=six.text_type, is_leaf=True, yang_name="target-port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """target_port must be of a type compatible with leafref""",
'defined-type': "leafref",
'generated-type': """YANGDynClass(base=six.text_type, is_leaf=True, yang_name="target-port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)""",
})
self.__target_port = t
if hasattr(self, '_set'):
self._set()
def _unset_target_port(self):
self.__target_port = YANGDynClass(base=six.text_type, is_leaf=True, yang_name="target-port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='leafref', is_config=True)
def _get_bandwidth(self):
"""
Getter method for bandwidth, mapped from YANG variable /connection/bandwidth (uint32)
"""
return self.__bandwidth
def _set_bandwidth(self, v, load=False):
"""
Setter method for bandwidth, mapped from YANG variable /connection/bandwidth (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_bandwidth is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_bandwidth() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="bandwidth", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """bandwidth must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="bandwidth", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='uint32', is_config=True)""",
})
self.__bandwidth = t
if hasattr(self, '_set'):
self._set()
def _unset_bandwidth(self):
self.__bandwidth = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="bandwidth", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='uint32', is_config=True)
def _get_layer_protocol_name(self):
"""
Getter method for layer_protocol_name, mapped from YANG variable /connection/layer_protocol_name (topology:layer-protocol-name)
"""
return self.__layer_protocol_name
def _set_layer_protocol_name(self, v, load=False):
"""
Setter method for layer_protocol_name, mapped from YANG variable /connection/layer_protocol_name (topology:layer-protocol-name)
If this variable is read-only (config: false) in the
source YANG file, then _set_layer_protocol_name is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_layer_protocol_name() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=six.text_type, restriction_type="dict_key", restriction_arg={u'ETH': {}, u'OPTICAL': {}},), is_leaf=True, yang_name="layer-protocol-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='topology:layer-protocol-name', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """layer_protocol_name must be of a type compatible with topology:layer-protocol-name""",
'defined-type': "topology:layer-protocol-name",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=six.text_type, restriction_type="dict_key", restriction_arg={u'ETH': {}, u'OPTICAL': {}},), is_leaf=True, yang_name="layer-protocol-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='topology:layer-protocol-name', is_config=True)""",
})
self.__layer_protocol_name = t
if hasattr(self, '_set'):
self._set()
def _unset_layer_protocol_name(self):
self.__layer_protocol_name = YANGDynClass(base=RestrictedClassType(base_type=six.text_type, restriction_type="dict_key", restriction_arg={u'ETH': {}, u'OPTICAL': {}},), is_leaf=True, yang_name="layer-protocol-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:connection', defining_module='connection', yang_type='topology:layer-protocol-name', is_config=True)
connection_id = __builtin__.property(_get_connection_id, _set_connection_id)
source_node = __builtin__.property(_get_source_node, _set_source_node)
target_node = __builtin__.property(_get_target_node, _set_target_node)
source_port = __builtin__.property(_get_source_port, _set_source_port)
target_port = __builtin__.property(_get_target_port, _set_target_port)
bandwidth = __builtin__.property(_get_bandwidth, _set_bandwidth)
layer_protocol_name = __builtin__.property(_get_layer_protocol_name, _set_layer_protocol_name)
_pyangbind_elements = OrderedDict([('connection_id', connection_id), ('source_node', source_node), ('target_node', target_node), ('source_port', source_port), ('target_port', target_port), ('bandwidth', bandwidth), ('layer_protocol_name', layer_protocol_name), ])
class connection(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module connection - based on the path /connection. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: Basic example of network topology
"""
__slots__ = ('_path_helper', '_extmethods', '__connection',)
_yang_name = 'connection'
_yang_namespace = 'urn:connection'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__connection = YANGDynClass(base=YANGListType("connection_id",yc_connection_connection__connection, yang_name="connection", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='connection-id', extensions=None), is_container='list', yang_name="connection", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='urn:connection', defining_module='connection', yang_type='list', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return []
def _get_connection(self):
"""
Getter method for connection, mapped from YANG variable /connection (list)
"""
return self.__connection
def _set_connection(self, v, load=False):
"""
Setter method for connection, mapped from YANG variable /connection (list)
If this variable is read-only (config: false) in the
source YANG file, then _set_connection is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_connection() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGListType("connection_id",yc_connection_connection__connection, yang_name="connection", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='connection-id', extensions=None), is_container='list', yang_name="connection", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='urn:connection', defining_module='connection', yang_type='list', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """connection must be of a type compatible with list""",
'defined-type': "list",
'generated-type': """YANGDynClass(base=YANGListType("connection_id",yc_connection_connection__connection, yang_name="connection", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='connection-id', extensions=None), is_container='list', yang_name="connection", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='urn:connection', defining_module='connection', yang_type='list', is_config=True)""",
})
self.__connection = t
if hasattr(self, '_set'):
self._set()
def _unset_connection(self):
self.__connection = YANGDynClass(base=YANGListType("connection_id",yc_connection_connection__connection, yang_name="connection", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='connection-id', extensions=None), is_container='list', yang_name="connection", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='urn:connection', defining_module='connection', yang_type='list', is_config=True)
connection = __builtin__.property(_get_connection, _set_connection)
_pyangbind_elements = OrderedDict([('connection', connection), ])
| 57.839806 | 521 | 0.726479 | 3,098 | 23,830 | 5.294061 | 0.063589 | 0.046339 | 0.059752 | 0.03951 | 0.883178 | 0.854582 | 0.852753 | 0.841107 | 0.832815 | 0.822206 | 0 | 0.00423 | 0.156735 | 23,830 | 411 | 522 | 57.980535 | 0.811943 | 0.165883 | 0 | 0.576 | 0 | 0.032 | 0.280138 | 0.093944 | 0 | 0 | 0 | 0 | 0 | 1 | 0.112 | false | 0 | 0.06 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6dcb9a344281602ddce521bca662ee78056e19df | 111 | py | Python | mawie/events/updator.py | Grisou13/mawie | d6b7a2f09431ab0d34f366062bf3c26d346169f9 | [
"MIT"
] | 1 | 2020-03-03T16:39:30.000Z | 2020-03-03T16:39:30.000Z | mawie/events/updator.py | Grisou13/mawie | d6b7a2f09431ab0d34f366062bf3c26d346169f9 | [
"MIT"
] | 5 | 2021-03-18T20:17:57.000Z | 2022-01-13T00:37:27.000Z | mawie/events/updator.py | Grisou13/mawie | d6b7a2f09431ab0d34f366062bf3c26d346169f9 | [
"MIT"
] | 2 | 2016-09-30T06:25:57.000Z | 2016-11-25T20:23:46.000Z | from . import Request, Event
class UpdatorRequest(Request):
pass
class ForceUpdatorRun(Request):
pass | 15.857143 | 31 | 0.747748 | 12 | 111 | 6.916667 | 0.666667 | 0.26506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18018 | 111 | 7 | 32 | 15.857143 | 0.912088 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.4 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
6dfb5004ba395449e44acca4ac8c76133de2ba18 | 47 | py | Python | src/utils/drive_sync.py | Abdul-Muiz-Iqbal/PostMan | 31c94f751c87ac6f3079b014da7402a4bbf22cc3 | [
"Apache-2.0"
] | null | null | null | src/utils/drive_sync.py | Abdul-Muiz-Iqbal/PostMan | 31c94f751c87ac6f3079b014da7402a4bbf22cc3 | [
"Apache-2.0"
] | 3 | 2021-05-05T16:05:26.000Z | 2021-05-08T10:24:18.000Z | src/utils/drive_sync.py | Abdul-Muiz-Iqbal/PostMan | 31c94f751c87ac6f3079b014da7402a4bbf22cc3 | [
"Apache-2.0"
] | 1 | 2021-05-01T10:14:40.000Z | 2021-05-01T10:14:40.000Z |
class DriveSync:
"""TODO: Stub"""
pass | 11.75 | 20 | 0.553191 | 5 | 47 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.276596 | 47 | 4 | 21 | 11.75 | 0.764706 | 0.212766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
09a6aa4b2f2f96d59f1963a9dc828a97839c0026 | 3,260 | py | Python | tests/evaluation/test_admin.py | rohanshad/pycox | 5483489d21f3441e53f78f9f8898ce607f41c632 | [
"BSD-2-Clause"
] | 449 | 2019-07-01T13:54:06.000Z | 2022-03-31T05:57:14.000Z | tests/evaluation/test_admin.py | Sandy4321/pycox | 6ed3973954789f54453055bbeb85887ded2fb81c | [
"BSD-2-Clause"
] | 96 | 2019-07-01T16:54:53.000Z | 2022-03-22T18:20:01.000Z | tests/evaluation/test_admin.py | Sandy4321/pycox | 6ed3973954789f54453055bbeb85887ded2fb81c | [
"BSD-2-Clause"
] | 110 | 2019-07-01T13:50:44.000Z | 2022-03-29T16:39:54.000Z | import numpy as np
import pandas as pd
from pycox.evaluation import admin
from pycox.evaluation import EvalSurv
def test_brier_score_no_censor():
n = 4
durations = np.ones(n) * 50
durations_c = np.ones_like(durations) * 100
events = durations <= durations_c
m = 5
index_surv = np.array([0, 25., 50., 75., 100.])
surv_ones = np.ones((m, n))
time_grid = np.array([5., 40., 60., 100.])
bs = admin.brier_score(time_grid, durations, durations_c, events, surv_ones, index_surv)
assert (bs == np.array([0., 0., 1., 1.])).all()
surv_zeros = surv_ones * 0
bs = admin.brier_score(time_grid, durations, durations_c, events, surv_zeros, index_surv)
assert (bs == np.array([1., 1., 0., 0.])).all()
surv_05 = surv_ones * 0.5
bs = admin.brier_score(time_grid, durations, durations_c, events, surv_05, index_surv)
assert (bs == np.array([0.25, 0.25, 0.25, 0.25])).all()
time_grid = np.array([110.])
bs = admin.brier_score(time_grid, durations, durations_c, events, surv_05, index_surv)
assert np.isnan(bs).all()
def test_brier_score_censor():
n = 4
durations = np.ones(n) * 50
durations_c = np.array([25, 50, 60, 100])
events = durations <= durations_c
durations[~events] = durations_c[~events]
m = 5
index_surv = np.array([0, 25., 50., 75., 100.])
surv = np.ones((m, n))
surv[:, 0] = 0
time_grid = np.array([5., 25., 40., 60., 100.])
bs = admin.brier_score(time_grid, durations, durations_c, events, surv, index_surv)
assert (bs == np.array([0.25, 0.25, 0., 1., 1.])).all()
def test_brier_score_evalsurv():
n = 4
durations = np.ones(n) * 50
durations_c = np.array([25, 50, 60, 100])
events = durations <= durations_c
durations[~events] = durations_c[~events]
m = 5
index_surv = np.array([0, 25., 50., 75., 100.])
surv = np.ones((m, n))
surv[:, 0] = 0
surv = pd.DataFrame(surv, index_surv)
time_grid = np.array([5., 25., 40., 60., 100.])
ev = EvalSurv(surv, durations, events, censor_durations=durations_c)
bs = ev.brier_score_admin(time_grid)
assert (bs.values == np.array([0.25, 0.25, 0., 1., 1.])).all()
def test_binoial_log_likelihood_no_censor():
n = 4
durations = np.ones(n) * 50
durations_c = np.ones_like(durations) * 100
events = durations <= durations_c
m = 5
index_surv = np.array([0, 25., 50., 75., 100.])
surv_ones = np.ones((m, n))
time_grid = np.array([5., 40., 60., 100.])
bll = admin.binomial_log_likelihood(time_grid, durations, durations_c, events, surv_ones, index_surv)
eps = 1e-7
assert abs(bll - np.log([1-eps, 1-eps, eps, eps])).max() < 1e-7
surv_zeros = surv_ones * 0
bll = admin.binomial_log_likelihood(time_grid, durations, durations_c, events, surv_zeros, index_surv)
assert abs(bll - np.log([eps, eps, 1-eps, 1-eps])).max() < 1e-7
surv_05 = surv_ones * 0.5
bll = admin.binomial_log_likelihood(time_grid, durations, durations_c, events, surv_05, index_surv)
assert abs(bll - np.log([0.5, 0.5, 1-0.5, 1-0.5])).max() < 1e-7
time_grid = np.array([110.])
bll = admin.binomial_log_likelihood(time_grid, durations, durations_c, events, surv_05, index_surv)
assert np.isnan(bll).all()
| 38.809524 | 106 | 0.636503 | 529 | 3,260 | 3.731569 | 0.111531 | 0.101317 | 0.134752 | 0.118541 | 0.836879 | 0.763425 | 0.740122 | 0.720871 | 0.720871 | 0.699088 | 0 | 0.078041 | 0.19816 | 3,260 | 83 | 107 | 39.277108 | 0.677123 | 0 | 0 | 0.630137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136986 | 1 | 0.054795 | false | 0 | 0.054795 | 0 | 0.109589 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
09ef7f5b60c84fac9b8935eb214b9f9b79dfab27 | 32 | py | Python | __init__.py | cmansel/augmentation-generator | 380b9ff401e935134a073bd8b1eba428da1cb5b2 | [
"MIT"
] | null | null | null | __init__.py | cmansel/augmentation-generator | 380b9ff401e935134a073bd8b1eba428da1cb5b2 | [
"MIT"
] | null | null | null | __init__.py | cmansel/augmentation-generator | 380b9ff401e935134a073bd8b1eba428da1cb5b2 | [
"MIT"
] | null | null | null | # __init__.py
from .src import * | 16 | 18 | 0.71875 | 5 | 32 | 3.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 32 | 2 | 18 | 16 | 0.703704 | 0.34375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
61d7044a51f4f9424b51f6382dde74cd949e17f0 | 157 | py | Python | pixelwalker/engine/utils.py | thomMar/pixelwalker | 48c1b202673a948e1ca51fbd44ac4c1a037f4ef7 | [
"MIT"
] | null | null | null | pixelwalker/engine/utils.py | thomMar/pixelwalker | 48c1b202673a948e1ca51fbd44ac4c1a037f4ef7 | [
"MIT"
] | null | null | null | pixelwalker/engine/utils.py | thomMar/pixelwalker | 48c1b202673a948e1ca51fbd44ac4c1a037f4ef7 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import os
import uuid
def get_upload_path(instance, filename):
return os.path.join(filename+"_"+str(uuid.uuid4().hex), filename) | 26.166667 | 69 | 0.700637 | 23 | 157 | 4.652174 | 0.73913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014493 | 0.121019 | 157 | 6 | 69 | 26.166667 | 0.76087 | 0.133758 | 0 | 0 | 0 | 0 | 0.007407 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
febb279c5fbc01fdfe69d8d53febb4f8fc279a47 | 90 | py | Python | fvhexperiments/management/commands/__init__.py | VekotinVerstas/DjangoHttpBroker-FVH-Experiments | 512a7a6be74beb75473860ca34b9abf695820a32 | [
"MIT"
] | null | null | null | fvhexperiments/management/commands/__init__.py | VekotinVerstas/DjangoHttpBroker-FVH-Experiments | 512a7a6be74beb75473860ca34b9abf695820a32 | [
"MIT"
] | null | null | null | fvhexperiments/management/commands/__init__.py | VekotinVerstas/DjangoHttpBroker-FVH-Experiments | 512a7a6be74beb75473860ca34b9abf695820a32 | [
"MIT"
] | 2 | 2020-05-05T12:57:47.000Z | 2020-08-14T13:33:56.000Z | from broker.providers.forward import import_forwards
import_forwards(__file__, __name__)
| 22.5 | 52 | 0.866667 | 11 | 90 | 6.181818 | 0.727273 | 0.411765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.077778 | 90 | 3 | 53 | 30 | 0.819277 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |