hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0e3ecb44e0a5581a9a2bacf1b712289a7fbbe82a | 1,540 | py | Python | tests/test_prefixed_with.py | artisanofcode/python-conjecture | 5a7d57e407a4fb3e09a05d41ffda773136003289 | [
"MIT"
] | null | null | null | tests/test_prefixed_with.py | artisanofcode/python-conjecture | 5a7d57e407a4fb3e09a05d41ffda773136003289 | [
"MIT"
] | null | null | null | tests/test_prefixed_with.py | artisanofcode/python-conjecture | 5a7d57e407a4fb3e09a05d41ffda773136003289 | [
"MIT"
] | null | null | null | """test conjecture.prefixed_with."""
from __future__ import annotations
import hypothesis
import hypothesis.strategies as st
import pytest
import conjecture
@pytest.mark.describe("prefixed_with")
@pytest.mark.it("should match prefixed strings")
@hypothesis.given(
value=st.text(min_size=1),
other=st.text(),
)
def test_should_match_prefixed_strings(value: str, other: str) -> None:
assert conjecture.prefixed_with(value).resolve(value + other)
@pytest.mark.describe("prefixed_with")
@pytest.mark.it("should not match other string prefix")
@hypothesis.given(
value=st.text(min_size=1),
other=st.text(min_size=1),
)
def test_should_not_match_other_strings(value: str, other: str) -> None:
hypothesis.assume(not (other + value).startswith(value))
assert not conjecture.prefixed_with(value).resolve(other + value)
@pytest.mark.describe("prefixed_with")
@pytest.mark.it("should match prefixed bytes")
@hypothesis.given(
value=st.binary(min_size=1),
other=st.binary(),
)
def test_should_match_prefixed_bytes(value: bytes, other: bytes) -> None:
assert conjecture.prefixed_with(value).resolve(value + other)
@pytest.mark.describe("prefixed_with")
@pytest.mark.it("should not match other bytes prefix")
@hypothesis.given(
value=st.binary(min_size=1),
other=st.binary(min_size=1),
)
def test_should_not_match_other_bytes(value: bytes, other: bytes) -> None:
hypothesis.assume(not (other + value).startswith(value))
assert not conjecture.prefixed_with(value).resolve(other + value)
| 29.056604 | 74 | 0.751299 | 215 | 1,540 | 5.209302 | 0.176744 | 0.096429 | 0.042857 | 0.092857 | 0.870536 | 0.808929 | 0.723214 | 0.723214 | 0.723214 | 0.6625 | 0 | 0.004405 | 0.115584 | 1,540 | 52 | 75 | 29.615385 | 0.817915 | 0.019481 | 0 | 0.461538 | 0 | 0 | 0.119016 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 1 | 0.102564 | false | 0 | 0.128205 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7d316b0f09a38c8ec16a21b28386c560c23f1eb8 | 9,009 | py | Python | tests/test_base.py | pavankumarjs/GrootFSM | 29ff50764c8d2bcf4fecb55ef4e8a764b8b3da32 | [
"MIT"
] | null | null | null | tests/test_base.py | pavankumarjs/GrootFSM | 29ff50764c8d2bcf4fecb55ef4e8a764b8b3da32 | [
"MIT"
] | null | null | null | tests/test_base.py | pavankumarjs/GrootFSM | 29ff50764c8d2bcf4fecb55ef4e8a764b8b3da32 | [
"MIT"
] | null | null | null | from unittest import TestCase
import logging
import sys
from mock import Mock
from fsm.base import FSMBuilder, FSMException
def setUpModule():
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
def tearDownModule():
pass
class TestFSM(TestCase):
def setUp(self):
self.builder = FSMBuilder()
def tearDown(self):
pass
def test_fsm_builder_with_random_names(self):
before_exit1, after_entry1 = Mock(), Mock()
        state1 = self.builder.add_state(before_exit=before_exit1, after_entry=after_entry1)
before_exit2, after_entry2 = Mock(), Mock()
state2 = self.builder.add_state(before_exit=before_exit2, after_entry=after_entry2)
before_exit3, after_entry3 = Mock(), Mock()
state3 = self.builder.add_state(before_exit=before_exit3, after_entry=after_entry3)
on_transition11, on_transition12, on_transition23, on_transition31 = Mock(), Mock(), Mock(), Mock()
transition11 = self.builder.add_transition(state1.name, state1.name, on_transition=on_transition11)
transition12 = self.builder.add_transition(state1.name, state2.name, on_transition=on_transition12)
transition23 = self.builder.add_transition(state2.name, state3.name, on_transition=on_transition23)
transition31 = self.builder.add_transition(state3.name, state1.name, on_transition=on_transition31)
self.builder.set_initial_state(state1.name)
fsm = self.builder.build()
self.assertEqual(before_exit1.call_count, 0)
self.assertEqual(after_entry1.call_count, 0)
self.assertEqual(on_transition11.call_count, 0)
fsm.execute_transition_to(state1.name, test_arg=111)
self.assertEqual(fsm.state, state1.name)
self.assertEqual(before_exit1.call_count, 1)
before_exit1.assert_called_with(test_arg=111)
self.assertEqual(after_entry1.call_count, 1)
after_entry1.assert_called_with(test_arg=111)
self.assertEqual(on_transition11.call_count, 1)
on_transition11.assert_called_with(test_arg=111)
self.assertRaises(FSMException, fsm.execute_transition_to, state3.name)
self.assertEqual(after_entry2.call_count, 0)
self.assertEqual(on_transition12.call_count, 0)
fsm.execute_transition_to(state2.name)
self.assertEqual(fsm.state, state2.name)
self.assertEqual(before_exit1.call_count, 2)
self.assertEqual(after_entry2.call_count, 1)
self.assertEqual(on_transition12.call_count, 1)
self.assertRaises(FSMException, fsm.execute_transition, transition31.name, **{'test_arg':111})
before_exit2.assert_not_called()
on_transition31.assert_not_called()
self.assertEqual(before_exit2.call_count, 0)
self.assertEqual(after_entry3.call_count, 0)
self.assertEqual(on_transition23.call_count, 0)
fsm.execute_transition(transition23.name)
self.assertEqual(fsm.state, state3.name)
self.assertEqual(before_exit2.call_count, 1)
self.assertEqual(after_entry3.call_count, 1)
self.assertEqual(on_transition23.call_count, 1)
def test_fsm_builder_with_names(self):
before_exit1, after_entry1 = Mock(), Mock()
state1 = self.builder.add_named_state('state1', before_exit=before_exit1, after_entry=after_entry1)
before_exit2, after_entry2 = Mock(), Mock()
state2 = self.builder.add_named_state('state2', before_exit=before_exit2, after_entry=after_entry2)
before_exit3, after_entry3 = Mock(), Mock()
state3 = self.builder.add_named_state('state3', before_exit=before_exit3, after_entry=after_entry3)
on_transition11, on_transition12, on_transition23, on_transition31 = Mock(), Mock(), Mock(), Mock()
transition11 = self.builder.add_named_transition('transition11', state1.name, state1.name, on_transition=on_transition11)
transition12 = self.builder.add_named_transition('transition12', state1.name, state2.name, on_transition=on_transition12)
transition23 = self.builder.add_named_transition('transition23', state2.name, state3.name, on_transition=on_transition23)
transition31 = self.builder.add_named_transition('transition31', state3.name, state1.name, on_transition=on_transition31)
self.builder.set_initial_state(state1.name)
fsm = self.builder.build()
self.assertEqual(before_exit1.call_count, 0)
self.assertEqual(after_entry1.call_count, 0)
self.assertEqual(on_transition11.call_count, 0)
fsm.execute_transition_to('state1', test_arg=111)
self.assertEqual(fsm.state, 'state1')
self.assertEqual(before_exit1.call_count, 1)
self.assertEqual(after_entry1.call_count, 1)
self.assertEqual(on_transition11.call_count, 1)
self.assertRaises(FSMException, fsm.execute_transition_to, 'state3')
self.assertEqual(after_entry2.call_count, 0)
self.assertEqual(on_transition12.call_count, 0)
fsm.execute_transition_to('state2')
self.assertEqual(fsm.state, 'state2')
self.assertEqual(before_exit1.call_count, 2)
self.assertEqual(after_entry2.call_count, 1)
self.assertEqual(on_transition12.call_count, 1)
self.assertRaises(FSMException, fsm.execute_transition, 'transition31', **{'test_arg':111})
before_exit2.assert_not_called()
on_transition31.assert_not_called()
self.assertEqual(before_exit2.call_count, 0)
self.assertEqual(after_entry3.call_count, 0)
self.assertEqual(on_transition23.call_count, 0)
fsm.execute_transition('transition23')
self.assertEqual(fsm.state, 'state3')
self.assertEqual(before_exit2.call_count, 1)
self.assertEqual(after_entry3.call_count, 1)
self.assertEqual(on_transition23.call_count, 1)
def test_fsm_builder_error(self):
before_exit1, after_entry1 = Mock(), Mock()
state1 = self.builder.add_state(before_exit=before_exit1, after_entry=after_entry1)
before_exit2, after_entry2 = Mock(), Mock()
state2 = self.builder.add_state(before_exit=before_exit2, after_entry=after_entry2)
on_transition12 = Mock()
transition12 = self.builder.add_transition(state1.name, state2.name, on_transition=on_transition12)
self.assertRaises(FSMException, self.builder.build)
def test_fsm_builder_duplicate_transition_error(self):
before_exit1, after_entry1 = Mock(), Mock()
state1 = self.builder.add_state(before_exit=before_exit1, after_entry=after_entry1)
before_exit2, after_entry2 = Mock(), Mock()
state2 = self.builder.add_state(before_exit=before_exit2, after_entry=after_entry2)
on_transition11, on_transition12 = Mock(), Mock()
transition11 = self.builder.add_transition(state1.name, state1.name, on_transition=on_transition11)
transition11_duplicate = self.builder.add_transition(state1.name, state1.name, on_transition=on_transition11)
transition12 = self.builder.add_transition(state1.name, state2.name, on_transition=on_transition12)
self.builder.set_initial_state(state1.name)
self.assertRaises(FSMException, self.builder.build)
def test_fsm_builder_duplicate_transition_name_error(self):
before_exit1, after_entry1 = Mock(), Mock()
state1 = self.builder.add_state(before_exit=before_exit1, after_entry=after_entry1)
before_exit2, after_entry2 = Mock(), Mock()
state2 = self.builder.add_state(before_exit=before_exit2, after_entry=after_entry2)
on_transition11, on_transition12 = Mock(), Mock()
transition11 = self.builder.add_named_transition('transition11', state1.name, state1.name, on_transition=on_transition11)
transition11_duplicate = self.builder.add_named_transition('transition11', state1.name, state1.name, on_transition=on_transition11)
transition12 = self.builder.add_transition(state1.name, state2.name, on_transition=on_transition12)
self.builder.set_initial_state(state1.name)
self.assertRaises(FSMException, self.builder.build)
def test_fsm_builder_duplicate_state_error(self):
before_exit1, after_entry1 = Mock(), Mock()
state1 = self.builder.add_named_state('state1', before_exit=before_exit1, after_entry=after_entry1)
state1_duplicate = self.builder.add_named_state('state1', before_exit=before_exit1, after_entry=after_entry1)
before_exit2, after_entry2 = Mock(), Mock()
state2 = self.builder.add_state(before_exit=before_exit2, after_entry=after_entry2)
on_transition11, on_transition12 = Mock(), Mock()
transition11 = self.builder.add_transition(state1.name, state1.name, on_transition=on_transition11)
transition12 = self.builder.add_transition(state1.name, state2.name, on_transition=on_transition12)
self.builder.set_initial_state(state1.name)
self.assertRaises(FSMException, self.builder.build)
| 50.05 | 139 | 0.735487 | 1,117 | 9,009 | 5.633841 | 0.064458 | 0.076911 | 0.07119 | 0.048625 | 0.933418 | 0.897982 | 0.897982 | 0.862864 | 0.827268 | 0.818052 | 0 | 0.047625 | 0.165612 | 9,009 | 179 | 140 | 50.329609 | 0.789544 | 0 | 0 | 0.65 | 0 | 0 | 0.019758 | 0 | 0 | 0 | 0 | 0 | 0.392857 | 1 | 0.071429 | false | 0.014286 | 0.035714 | 0 | 0.114286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
adae81b9e1df52122d5502739a4a4c151058393e | 103 | py | Python | sacd/memory/__init__.py | Michaelrising/sac-discrete.pytorch | 93ae779f5980726db0302c3471fd143c7d1d35ed | [
"MIT"
] | null | null | null | sacd/memory/__init__.py | Michaelrising/sac-discrete.pytorch | 93ae779f5980726db0302c3471fd143c7d1d35ed | [
"MIT"
] | 1 | 2021-09-03T02:58:12.000Z | 2021-09-03T02:58:12.000Z | sacd/memory/__init__.py | Michaelrising/sac-discrete.pytorch | 93ae779f5980726db0302c3471fd143c7d1d35ed | [
"MIT"
] | null | null | null | from .base import LazyMultiStepMemory, RecurrentMemory
from .per import LazyPrioritizedMultiStepMemory
| 34.333333 | 54 | 0.883495 | 9 | 103 | 10.111111 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087379 | 103 | 2 | 55 | 51.5 | 0.968085 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
adcb57d00bed356ca7a2f7f5708b2efa58c9c30c | 42 | py | Python | scattertext/semioticsquare/__init__.py | shettyprithvi/scattertext | a15613b6feef3ddc56c03aadb8e1e629d28a427d | [
"Apache-2.0"
] | 1,823 | 2016-07-28T00:25:56.000Z | 2022-03-30T12:33:57.000Z | scattertext/semioticsquare/__init__.py | shettyprithvi/scattertext | a15613b6feef3ddc56c03aadb8e1e629d28a427d | [
"Apache-2.0"
] | 92 | 2016-07-28T23:13:20.000Z | 2022-01-24T03:53:38.000Z | scattertext/semioticsquare/__init__.py | shettyprithvi/scattertext | a15613b6feef3ddc56c03aadb8e1e629d28a427d | [
"Apache-2.0"
] | 271 | 2016-12-26T12:56:08.000Z | 2022-03-24T19:35:13.000Z | from .SemioticSquare import SemioticSquare | 42 | 42 | 0.904762 | 4 | 42 | 9.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 42 | 1 | 42 | 42 | 0.974359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
addd4da93696432558e313e4dd5bfd550ef06749 | 12,337 | py | Python | src/Fig_5_Pattern_separation_changing_beta_plotting.py | fmi-basel/gzenke-nonlinear-transient-amplification | f3b0c8c89b42c34f1aad740c7026865cf3164f1d | [
"MIT"
] | null | null | null | src/Fig_5_Pattern_separation_changing_beta_plotting.py | fmi-basel/gzenke-nonlinear-transient-amplification | f3b0c8c89b42c34f1aad740c7026865cf3164f1d | [
"MIT"
] | 3 | 2021-12-16T10:15:10.000Z | 2021-12-16T12:54:24.000Z | src/Fig_5_Pattern_separation_changing_beta_plotting.py | fmi-basel/gzenke-nonlinear-transient-amplification | f3b0c8c89b42c34f1aad740c7026865cf3164f1d | [
"MIT"
] | 1 | 2021-12-16T10:02:43.000Z | 2021-12-16T10:02:43.000Z | import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sympy.solvers import solve
from sympy import Symbol
from matplotlib import patches
import matplotlib.patches as mpatches
import scipy.io as sio
import math
# plotting configuration
ratio = 1.5
figure_len, figure_width = 15*ratio, 12*ratio
font_size_1, font_size_2 = 36*ratio, 36*ratio
legend_size = 18*ratio
line_width, tick_len = 3*ratio, 10*ratio
marker_size = 30*ratio
plot_line_width = 5*ratio
hfont = {'fontname': 'Arial'}
marker_edge_width = 4
pal = sns.color_palette("deep")
U_max = 6
l_beta = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
l_peak_E1_EE_STP, l_peak_E12_EE_STP, l_peak_E2_EE_STP, l_ss_E1_EE_STP, l_ss_E12_EE_STP, l_ss_E2_EE_STP = [], [], [], [], [], []
l_peak_E1_EI_STP, l_peak_E12_EI_STP, l_peak_E2_EI_STP, l_ss_E1_EI_STP, l_ss_E12_EI_STP, l_ss_E2_EI_STP = [], [], [], [], [], []
l_bs_E2_EE_STP, l_bs_E2_EI_STP = [], []
for beta in l_beta:
l_r_e_1_2_EE_STP = sio.loadmat('data/Fig_5_Pattern_separation_activity_EE_STP_E12_beta_' + str(beta) + '.mat')['E12'][0]
l_r_e_2_EE_STP = sio.loadmat('data/Fig_5_Pattern_separation_activity_EE_STP_E2_beta_' + str(beta) + '.mat')['E2'][0]
l_r_e_1_2_EI_STP = sio.loadmat('data/Fig_5_Pattern_separation_activity_EI_STP_E12_beta_' + str(beta) + '_U_max_' + str(U_max) + '.mat')['E12'][0]
l_r_e_2_EI_STP = sio.loadmat('data/Fig_5_Pattern_separation_activity_EI_STP_E2_beta_' + str(beta) + '_U_max_' + str(U_max) + '.mat')['E2'][0]
l_peak_E1_EE_STP.append(np.nanmax(l_r_e_1_2_EE_STP[90000:110000]))
l_ss_E1_EE_STP.append(np.nanmean(l_r_e_1_2_EE_STP[105000:109000]))
l_peak_E12_EE_STP.append(np.nanmax(l_r_e_1_2_EE_STP[50000:70000]))
l_ss_E12_EE_STP.append(np.nanmean(l_r_e_1_2_EE_STP[65000:69000]))
l_peak_E2_EE_STP.append(np.nanmax(l_r_e_2_EE_STP[90000:110000]))
l_ss_E2_EE_STP.append(np.nanmean(l_r_e_2_EE_STP[105000:109000]))
l_peak_E1_EI_STP.append(np.nanmax(l_r_e_1_2_EI_STP[90000:110000]))
l_ss_E1_EI_STP.append(np.nanmean(l_r_e_1_2_EI_STP[105000:109000]))
l_peak_E12_EI_STP.append(np.nanmax(l_r_e_1_2_EI_STP[50000:70000]))
l_ss_E12_EI_STP.append(np.nanmean(l_r_e_1_2_EI_STP[65000:69000]))
l_peak_E2_EI_STP.append(np.nanmax(l_r_e_2_EI_STP[90000:110000]))
l_ss_E2_EI_STP.append(np.nanmean(l_r_e_2_EI_STP[105000:109000]))
l_bs_E2_EE_STP.append(np.nanmean(l_r_e_2_EE_STP[40000:49000]))
l_bs_E2_EI_STP.append(np.nanmean(l_r_e_2_EI_STP[40000:49000]))
l_asso_peak_EE_STP, l_asso_peak_EI_STP, l_asso_ss_EE_STP, l_asso_ss_EI_STP = [], [], [], []
l_sepa_peak_EE_STP, l_sepa_peak_EI_STP, l_sepa_ss_EE_STP, l_sepa_ss_EI_STP = [], [], [], []
l_dis_peak_EE_STP, l_dis_peak_EI_STP, l_dis_ss_EE_STP, l_dis_ss_EI_STP = [], [], [], []
for i in range(len(l_peak_E1_EE_STP)):
l_asso_peak_EE_STP.append(1 + (l_peak_E12_EE_STP[i] - l_peak_E1_EE_STP[i])/(l_peak_E1_EE_STP[i] + l_peak_E12_EE_STP[i]))
l_asso_peak_EI_STP.append(1 + (l_peak_E12_EI_STP[i] - l_peak_E1_EI_STP[i])/(l_peak_E1_EI_STP[i] + l_peak_E12_EI_STP[i]))
l_asso_ss_EE_STP.append(1 + (l_ss_E12_EE_STP[i] - l_ss_E1_EE_STP[i])/(l_ss_E1_EE_STP[i] + l_ss_E12_EE_STP[i]))
l_asso_ss_EI_STP.append(1 + (l_ss_E12_EI_STP[i] - l_ss_E1_EI_STP[i])/(l_ss_E1_EI_STP[i] + l_ss_E12_EI_STP[i]))
l_sepa_peak_EE_STP.append((l_peak_E1_EE_STP[i] - l_peak_E2_EE_STP[i])/(l_peak_E1_EE_STP[i] + l_peak_E2_EE_STP[i]))
l_sepa_peak_EI_STP.append((l_peak_E1_EI_STP[i] - l_peak_E2_EI_STP[i])/(l_peak_E1_EI_STP[i] + l_peak_E2_EI_STP[i]))
l_sepa_ss_EE_STP.append((l_ss_E1_EE_STP[i] - l_ss_E2_EE_STP[i])/(l_ss_E1_EE_STP[i] + l_ss_E2_EE_STP[i]))
l_sepa_ss_EI_STP.append((l_ss_E1_EI_STP[i] - l_ss_E2_EI_STP[i])/(l_ss_E1_EI_STP[i] + l_ss_E2_EI_STP[i]))
l_dis_peak_EE_STP.append(math.sin(math.radians(45 - round(math.degrees(
math.asin(l_peak_E2_EE_STP[i] / np.sqrt(np.power(l_peak_E1_EE_STP[i], 2) + np.power(l_peak_E2_EE_STP[i], 2)))),
2))) * np.sqrt(
np.power(l_peak_E1_EE_STP[i], 2) + np.power(l_peak_E2_EE_STP[i], 2)))
l_dis_peak_EI_STP.append(math.sin(math.radians(45 - round(math.degrees(
math.asin(l_peak_E2_EI_STP[i] / np.sqrt(np.power(l_peak_E1_EI_STP[i], 2) + np.power(l_peak_E2_EI_STP[i], 2)))),
2))) * np.sqrt(
np.power(l_peak_E1_EI_STP[i], 2) + np.power(l_peak_E2_EI_STP[i], 2)))
l_dis_ss_EE_STP.append(math.sin(math.radians(45 - round(math.degrees(
math.asin(l_ss_E2_EE_STP[i] / np.sqrt(np.power(l_ss_E1_EE_STP[i], 2) + np.power(l_ss_E2_EE_STP[i], 2)))),
2))) * np.sqrt(
np.power(l_ss_E1_EE_STP[i], 2) + np.power(l_ss_E2_EE_STP[i], 2)))
l_dis_ss_EI_STP.append(math.sin(math.radians(45 - round(math.degrees(
math.asin(l_ss_E2_EI_STP[i] / np.sqrt(np.power(l_ss_E1_EI_STP[i], 2) + np.power(l_ss_E2_EI_STP[i], 2)))),
2))) * np.sqrt(
np.power(l_ss_E1_EI_STP[i], 2) + np.power(l_ss_E2_EI_STP[i], 2)))
plt.figure(figsize=(figure_len, figure_width))
ax = plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(True)
for axis in ['top', 'bottom', 'left', 'right']:
ax.spines[axis].set_linewidth(line_width)
plt.tick_params(width=line_width, length=tick_len)
plt.plot(l_asso_peak_EE_STP, color='gray', linewidth=plot_line_width)
plt.plot(l_asso_ss_EE_STP, color='gray', linestyle='dashed', linewidth=plot_line_width)
for i in range(len(l_peak_E1_EE_STP)):
plt.plot(i, l_asso_peak_EE_STP[i], linestyle='none', marker='o', fillstyle='full',
markeredgewidth=marker_edge_width, markersize=marker_size,
markeredgecolor='black', markerfacecolor='gray')
plt.plot(i, l_asso_ss_EE_STP[i], linestyle='none', marker='o', fillstyle='full',
markeredgewidth=marker_edge_width, markersize=marker_size,
markeredgecolor='black', markerfacecolor='gray')
plt.xticks([0, 2, 4, 6, 8, 10], [0, 0.2, 0.4, 0.6, 0.8, 1.0], fontsize=font_size_1, **hfont)
plt.yticks([0, 0.5, 1.0], fontsize=font_size_1, **hfont)
plt.xlabel(r'$\beta$', fontsize=font_size_1, **hfont)
plt.ylabel('Association index', fontsize=font_size_1, **hfont)
plt.ylim([-0.05, 1.05])
plt.legend(['E-to-E STD onset transients', 'E-to-E STD fixed point'], prop={"family": "Arial", 'size': font_size_1}, loc='lower right')
plt.savefig('paper_figures/png/Fig_5_asso_index_peak_ss_changing_beta_U_max_' + str(U_max) + '_EE_STD.png')
plt.savefig('paper_figures/pdf/Fig_5_asso_index_peak_ss_changing_beta_U_max_' + str(U_max) + '_EE_STD.pdf')
plt.figure(figsize=(figure_len, figure_width))
ax = plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(True)
for axis in ['top', 'bottom', 'left', 'right']:
ax.spines[axis].set_linewidth(line_width)
plt.tick_params(width=line_width, length=tick_len)
plt.plot(l_asso_peak_EI_STP, color='m', linewidth=plot_line_width)
plt.plot(l_asso_ss_EI_STP, color='m', linestyle='dashed', linewidth=plot_line_width)
for i in range(len(l_peak_E1_EE_STP)):
plt.plot(i, l_asso_peak_EI_STP[i], linestyle='none', marker='o', fillstyle='full',
markeredgewidth=marker_edge_width, markersize=marker_size,
markeredgecolor='black', markerfacecolor='m')
plt.plot(i, l_asso_ss_EI_STP[i], linestyle='none', marker='o', fillstyle='full',
markeredgewidth=marker_edge_width, markersize=marker_size,
markeredgecolor='black', markerfacecolor='m')
plt.xticks([0, 2, 4, 6, 8, 10], [0, 0.2, 0.4, 0.6, 0.8, 1.0], fontsize=font_size_1, **hfont)
plt.yticks([0, 0.5, 1.0], fontsize=font_size_1, **hfont)
plt.xlabel(r'$\beta$', fontsize=font_size_1, **hfont)
plt.ylabel('Association index', fontsize=font_size_1, **hfont)
plt.ylim([-0.05, 1.05])
plt.legend(['E-to-I STF onset transients', 'E-to-I STF fixed point'], prop={"family": "Arial", 'size': font_size_1}, loc='lower right')
plt.savefig('paper_figures/png/Fig_5_asso_index_peak_ss_changing_beta_U_max_' + str(U_max) + '_EI_STF.png')
plt.savefig('paper_figures/pdf/Fig_5_asso_index_peak_ss_changing_beta_U_max_' + str(U_max) + '_EI_STF.pdf')
plt.figure(figsize=(figure_len, figure_width))
ax = plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(True)
for axis in ['top', 'bottom', 'left', 'right']:
ax.spines[axis].set_linewidth(line_width)
plt.tick_params(width=line_width, length=tick_len)
plt.yscale('symlog', linthreshy=0.1)
plt.plot(l_dis_peak_EE_STP, color='gray', linewidth=plot_line_width)
plt.plot(l_dis_ss_EE_STP, color='gray', linestyle='dashed', linewidth=plot_line_width)
for i in range(len(l_peak_E1_EE_STP)):
plt.plot(i, l_dis_peak_EE_STP[i], linestyle='none', marker='o', fillstyle='full',
markeredgewidth=marker_edge_width, markersize=marker_size,
markeredgecolor='black', markerfacecolor='gray')#, alpha=0.3+0.06*i)
plt.plot(i, l_dis_ss_EE_STP[i], linestyle='none', marker='o', fillstyle='full',
markeredgewidth=marker_edge_width, markersize=marker_size,
markeredgecolor='black', markerfacecolor='gray')#, alpha=0.3+0.06*i)
plt.xticks([0, 2, 4, 6, 8, 10], [0, 0.2, 0.4, 0.6, 0.8, 1.0], fontsize=font_size_1, **hfont)
plt.yticks([0, 1, 100, 10000, 1000000], fontsize=font_size_1, **hfont)
plt.xlabel(r'$\beta$', fontsize=font_size_1, **hfont)
plt.ylabel('Distance to the decision boundary', fontsize=font_size_1, **hfont)
plt.ylim([0, 1000000])
plt.legend(['E-to-E STD onset transients', 'E-to-E STD fixed point'], prop={"family": "Arial", 'size': font_size_1}, loc='lower right')
plt.savefig('paper_figures/png/Fig_5_sepa_dis_EE_STP_changing_beta_U_max_' + str(U_max) + '.png')
plt.savefig('paper_figures/pdf/Fig_5_sepa_dis_EE_STP_changing_beta_U_max_' + str(U_max) + '.pdf')
plt.figure(figsize=(figure_len, figure_width))
ax = plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(True)
for axis in ['top', 'bottom', 'left', 'right']:
ax.spines[axis].set_linewidth(line_width)
plt.tick_params(width=line_width, length=tick_len)
plt.yscale('symlog', linthreshy=0.1)
plt.plot(l_dis_peak_EI_STP, color='m', linewidth=plot_line_width)
plt.plot(l_dis_ss_EI_STP, color='m', linestyle='dashed', linewidth=plot_line_width)
for i in range(len(l_peak_E1_EE_STP)):
plt.plot(i, l_dis_peak_EI_STP[i], linestyle='none', marker='o', fillstyle='full',
markeredgewidth=marker_edge_width, markersize=marker_size,
markeredgecolor='black', markerfacecolor='m')#, alpha=0.3+0.06*i)
plt.plot(i, l_dis_ss_EI_STP[i], linestyle='none', marker='o', fillstyle='full',
markeredgewidth=marker_edge_width, markersize=marker_size,
markeredgecolor='black', markerfacecolor='m')#, alpha=0.3+0.06*i)
plt.xticks([0, 2, 4, 6, 8, 10], [0, 0.2, 0.4, 0.6, 0.8, 1.0], fontsize=font_size_1, **hfont)
plt.yticks([0, 1, 100, 10000, 1000000], fontsize=font_size_1, **hfont)
plt.xlabel(r'$\beta$', fontsize=font_size_1, **hfont)
plt.ylabel('Distance to the decision boundary', fontsize=font_size_1, **hfont)
plt.ylim([0, 1000000])
plt.legend(['E-to-I STF onset transients', 'E-to-I STF fixed point'], prop={"family": "Arial", 'size': font_size_1}, loc='lower right')
plt.savefig('paper_figures/png/Fig_5_sepa_dis_EI_STP_changing_beta_U_max_' + str(U_max) + '.png')
plt.savefig('paper_figures/pdf/Fig_5_sepa_dis_EI_STP_changing_beta_U_max_' + str(U_max) + '.pdf')
| 54.588496 | 149 | 0.680149 | 2,219 | 12,337 | 3.380352 | 0.078864 | 0.051993 | 0.02133 | 0.014931 | 0.933342 | 0.878416 | 0.846687 | 0.819891 | 0.813758 | 0.801626 | 0 | 0.055287 | 0.161384 | 12,337 | 225 | 150 | 54.831111 | 0.669727 | 0.007944 | 0 | 0.491525 | 0 | 0 | 0.129752 | 0.058049 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.050847 | 0 | 0.050847 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
70ce3a51253f987d6a5242e442d3dc3521f8e7bf | 244 | py | Python | PhysicsTools/PatAlgos/python/slimming/MiniAODfromMiniAOD_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | PhysicsTools/PatAlgos/python/slimming/MiniAODfromMiniAOD_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | PhysicsTools/PatAlgos/python/slimming/MiniAODfromMiniAOD_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | import FWCore.ParameterSet.Config as cms
from PhysicsTools.PatAlgos.slimming.modifyPrimaryPhysicsObjects_cff import *
from PhysicsTools.PatAlgos.slimming.MicroEventContent_cff import *
EIsequence = cms.Sequence( modifyPrimaryPhysicsObjects )

# src/sound_lib/external/__init__.py
# repo: Oire/TheQube (MIT)
import platform
if platform.system() == 'Windows':
import pybasswma
if platform.system() != 'Darwin':
import pybass_aac
import pybass_alac
import pybassflac
import pybassmidi
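Because the BASS add-on modules above are platform specific, a hard import can raise on the wrong OS. A defensive sketch of the same selection logic (using the same module names, which are simply skipped where unavailable; this is not the package's actual behaviour):

```python
import platform

def load_optional_codecs():
    # Pick add-ons by platform, as in the imports above, but tolerate
    # missing modules instead of raising at package import time.
    wanted = []
    system = platform.system()
    if system == 'Windows':
        wanted.append('pybasswma')
    if system != 'Darwin':
        wanted.extend(['pybass_aac', 'pybass_alac'])
    wanted.extend(['pybassflac', 'pybassmidi'])
    loaded = []
    for name in wanted:
        try:
            __import__(name)
            loaded.append(name)
        except ImportError:
            pass  # codec not installed on this platform
    return loaded
```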

# emat/analysis/__init__.py
# repo: jinsanity07git/tmip-emat (BSD-3-Clause)
try:
from .visual_distribution import display_experiments, contrast_experiments
except ImportError:
pass
from .feature_scoring import feature_scores, threshold_feature_scores
try:
from .explore import Explore
except ImportError:
pass
try:
from .explore_2 import Visualizer, TwoWayFigure
except ImportError:
pass
from .prim import Prim, PrimBox
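The repeated `try`/`except ImportError` pattern above keeps heavy, optional visualization dependencies from breaking the base package import. A minimal stand-alone sketch of the same idiom, using a stdlib module as a stand-in for the optional extra:

```python
# Optional-dependency import: bind the module if present, else None,
# and let callers test availability before using it.
try:
    import sqlite3 as optional_backend  # stand-in for an optional extra
except ImportError:
    optional_backend = None

def backend_available():
    return optional_backend is not None
```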

# pong/__init__.py
# repo: onirei/pong (MIT)
from .core import GameCore

# app/main/views/misc.py
# repo: Jaydonjin/robot_demo (MIT)
from flask import current_app
from flask import render_template
from app.main import main
@main.route("/version", methods=['GET'])
def version():
return render_template('main/version.html', version=current_app.config['VERSION'])
@main.route("/faq.htm")
def faq():
return render_template('main/faq.html')
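The two Flask views above are plain path-to-function mappings. A dependency-free sketch of the dispatch idea behind `@main.route` (names and return values here are illustrative, not Flask's API):

```python
# Map (path, method) pairs to view functions via a decorator, then
# dispatch requests by lookup -- the essence of what the route
# decorators above register with the blueprint.
ROUTES = {}

def route(path, methods=("GET",)):
    def register(func):
        for m in methods:
            ROUTES[(path, m)] = func
        return func
    return register

@route("/version")
def version():
    return "version page"

@route("/faq.htm")
def faq():
    return "faq page"

def dispatch(path, method="GET"):
    view = ROUTES.get((path, method))
    return view() if view else "404"
```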

# tests/tests/__init__.py
# repo: hodlwave/f469-disco (MIT)
from .test_ecc import *

# devel/lib/python2.7/dist-packages/costmap_2d/msg/__init__.py
# repo: Louis-AD-git/racecar_ws (MIT)
from ._VoxelGrid import *

# build/lib/databasemanager-master/databasemanager/classes/annotationextrainfo.py
# repo: jowanpittevils/Databasemanager_Signalplotter (BSD-3-Clause)
#==================================================#
# Authors: Amir H. Ansari <amirans65.ai@gmail.com> #
# License: BSD (3-clause) #
#==================================================#
from databasemanager.classes.extrainfo import ExtraInfo
class AnnotationExtraInfo(ExtraInfo):
    pass

# lib/train/data/processing.py
# repo: SangbumChoi/MixFormer (MIT)
import torch
import torchvision.transforms as transforms
from lib.utils import TensorDict
import lib.train.data.processing_utils as prutils
import torch.nn.functional as F
import random
import numpy as np
def stack_tensors(x):
if isinstance(x, (list, tuple)) and isinstance(x[0], torch.Tensor):
return torch.stack(x)
return x
class BaseProcessing:
""" Base class for Processing. Processing class is used to process the data returned by a dataset, before passing it
through the network. For example, it can be used to crop a search region around the object, apply various data
augmentations, etc."""
def __init__(self, transform=transforms.ToTensor(), template_transform=None, search_transform=None, joint_transform=None):
"""
args:
transform - The set of transformations to be applied on the images. Used only if template_transform or
search_transform is None.
template_transform - The set of transformations to be applied on the template images. If None, the 'transform'
argument is used instead.
search_transform - The set of transformations to be applied on the search images. If None, the 'transform'
argument is used instead.
joint_transform - The set of transformations to be applied 'jointly' on the template and search images. For
example, it can be used to convert both template and search images to grayscale.
"""
self.transform = {'template': transform if template_transform is None else template_transform,
'search': transform if search_transform is None else search_transform,
'joint': joint_transform}
def __call__(self, data: TensorDict):
raise NotImplementedError
class STARKProcessing(BaseProcessing):
""" The processing class used for training LittleBoy. The images are processed in the following way.
First, the target bounding box is jittered by adding some noise. Next, a square region (called search region )
centered at the jittered target center, and of area search_area_factor^2 times the area of the jittered box is
cropped from the image. The reason for jittering the target box is to avoid learning the bias that the target is
always at the center of the search region. The search region is then resized to a fixed size given by the
argument output_sz.
"""
def __init__(self, search_area_factor, output_sz, center_jitter_factor, scale_jitter_factor,
mode='pair', settings=None, *args, **kwargs):
"""
args:
search_area_factor - The size of the search region relative to the target size.
output_sz - An integer, denoting the size to which the search region is resized. The search region is always
square.
center_jitter_factor - A dict containing the amount of jittering to be applied to the target center before
extracting the search region. See _get_jittered_box for how the jittering is done.
scale_jitter_factor - A dict containing the amount of jittering to be applied to the target size before
extracting the search region. See _get_jittered_box for how the jittering is done.
mode - Either 'pair' or 'sequence'. If mode='sequence', then output has an extra dimension for frames
"""
super().__init__(*args, **kwargs)
self.search_area_factor = search_area_factor
self.output_sz = output_sz
self.center_jitter_factor = center_jitter_factor
self.scale_jitter_factor = scale_jitter_factor
self.mode = mode
self.settings = settings
def _get_jittered_box(self, box, mode):
""" Jitter the input box
args:
box - input bounding box
mode - string 'template' or 'search' indicating template or search data
returns:
torch.Tensor - jittered box
"""
jittered_size = box[2:4] * torch.exp(torch.randn(2) * self.scale_jitter_factor[mode])
max_offset = (jittered_size.prod().sqrt() * torch.tensor(self.center_jitter_factor[mode]).float())
jittered_center = box[0:2] + 0.5 * box[2:4] + max_offset * (torch.rand(2) - 0.5)
return torch.cat((jittered_center - 0.5 * jittered_size, jittered_size), dim=0)
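The jitter above scales the box size log-normally and offsets the center in proportion to the jittered box's sqrt-area. A torch-free sketch of the same arithmetic (list-based, illustrative values only):

```python
import math
import random

def jitter_box(box, scale_jitter, center_jitter):
    # box is (x, y, w, h). Size is scaled by exp(N(0,1) * scale_jitter);
    # the center moves by up to sqrt(jittered area) * center_jitter.
    x, y, w, h = box
    jw = w * math.exp(random.gauss(0, 1) * scale_jitter)
    jh = h * math.exp(random.gauss(0, 1) * scale_jitter)
    max_offset = math.sqrt(jw * jh) * center_jitter
    cx = x + 0.5 * w + max_offset * (random.random() - 0.5)
    cy = y + 0.5 * h + max_offset * (random.random() - 0.5)
    return [cx - 0.5 * jw, cy - 0.5 * jh, jw, jh]
```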
def __call__(self, data: TensorDict):
"""
args:
data - The input data, should contain the following fields:
'template_images', search_images', 'template_anno', 'search_anno'
returns:
TensorDict - output data block with following fields:
'template_images', 'search_images', 'template_anno', 'search_anno', 'test_proposals', 'proposal_iou'
"""
# Apply joint transforms
if self.transform['joint'] is not None:
data['template_images'], data['template_anno'], data['template_masks'] = self.transform['joint'](
image=data['template_images'], bbox=data['template_anno'], mask=data['template_masks'])
data['search_images'], data['search_anno'], data['search_masks'] = self.transform['joint'](
image=data['search_images'], bbox=data['search_anno'], mask=data['search_masks'], new_roll=False)
for s in ['template', 'search']:
assert self.mode == 'sequence' or len(data[s + '_images']) == 1, \
"In pair mode, num train/test frames must be 1"
# Add a uniform noise to the center pos
jittered_anno = [self._get_jittered_box(a, s) for a in data[s + '_anno']]
# 2021.1.9 Check whether data is valid. Avoid too small bounding boxes
w, h = torch.stack(jittered_anno, dim=0)[:, 2], torch.stack(jittered_anno, dim=0)[:, 3]
crop_sz = torch.ceil(torch.sqrt(w * h) * self.search_area_factor[s])
if (crop_sz < 1).any():
data['valid'] = False
# print("Too small box is found. Replace it with new data.")
return data
# Crop image region centered at jittered_anno box and get the attention mask
crops, boxes, att_mask, mask_crops = prutils.jittered_center_crop(data[s + '_images'], jittered_anno,
data[s + '_anno'], self.search_area_factor[s],
self.output_sz[s], masks=data[s + '_masks'])
# Apply transforms
data[s + '_images'], data[s + '_anno'], data[s + '_att'], data[s + '_masks'] = self.transform[s](
image=crops, bbox=boxes, att=att_mask, mask=mask_crops, joint=False)
# 2021.1.9 Check whether elements in data[s + '_att'] is all 1
# Note that type of data[s + '_att'] is tuple, type of ele is torch.tensor
for ele in data[s + '_att']:
if (ele == 1).all():
data['valid'] = False
# print("Values of original attention mask are all one. Replace it with new data.")
return data
            # 2021.1.10 Stricter condition: require the downsampled masks not to be all 1
for ele in data[s + '_att']:
feat_size = self.output_sz[s] // 16 # 16 is the backbone stride
# (1,1,128,128) (1,1,256,256) --> (1,1,8,8) (1,1,16,16)
mask_down = F.interpolate(ele[None, None].float(), size=feat_size).to(torch.bool)[0]
if (mask_down == 1).all():
data['valid'] = False
# print("Values of down-sampled attention mask are all one. "
# "Replace it with new data.")
return data
data['valid'] = True
# if we use copy-and-paste augmentation
if data["template_masks"] is None or data["search_masks"] is None:
data["template_masks"] = torch.zeros((1, self.output_sz["template"], self.output_sz["template"]))
data["search_masks"] = torch.zeros((1, self.output_sz["search"], self.output_sz["search"]))
# Prepare output
if self.mode == 'sequence':
data = data.apply(stack_tensors)
else:
data = data.apply(lambda x: x[0] if isinstance(x, list) else x)
return data
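The validity check inside the loop above (discard samples whose jittered box would produce a crop smaller than one pixel) reduces, per box, to a simple scalar test; a torch-free sketch:

```python
import math

def crop_is_valid(w, h, search_area_factor):
    # The square crop side used above is ceil(sqrt(w * h) * factor);
    # a degenerate jittered box yields a side below 1, and the sample
    # is flagged invalid so the loader can draw a replacement.
    return math.ceil(math.sqrt(w * h) * search_area_factor) >= 1
```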
class MixformerProcessing(BaseProcessing):
""" The processing class used for training LittleBoy. The images are processed in the following way.
First, the target bounding box is jittered by adding some noise. Next, a square region (called search region )
centered at the jittered target center, and of area search_area_factor^2 times the area of the jittered box is
cropped from the image. The reason for jittering the target box is to avoid learning the bias that the target is
always at the center of the search region. The search region is then resized to a fixed size given by the
argument output_sz.
"""
def __init__(self, search_area_factor, output_sz, center_jitter_factor, scale_jitter_factor,
mode='pair', settings=None, train_score=False, *args, **kwargs):
"""
args:
search_area_factor - The size of the search region relative to the target size.
output_sz - An integer, denoting the size to which the search region is resized. The search region is always
square.
center_jitter_factor - A dict containing the amount of jittering to be applied to the target center before
extracting the search region. See _get_jittered_box for how the jittering is done.
scale_jitter_factor - A dict containing the amount of jittering to be applied to the target size before
extracting the search region. See _get_jittered_box for how the jittering is done.
mode - Either 'pair' or 'sequence'. If mode='sequence', then output has an extra dimension for frames
"""
super().__init__(*args, **kwargs)
self.search_area_factor = search_area_factor
self.output_sz = output_sz
self.center_jitter_factor = center_jitter_factor
self.scale_jitter_factor = scale_jitter_factor
self.mode = mode
self.settings = settings
self.train_score = train_score
# self.label_function_params = label_function_params
        self.out_feat_sz = 20  # output feature map size
def _get_jittered_box(self, box, mode):
""" Jitter the input box
args:
box - input bounding box
mode - string 'template' or 'search' indicating template or search data
returns:
torch.Tensor - jittered box
"""
jittered_size = box[2:4] * torch.exp(torch.randn(2) * self.scale_jitter_factor[mode])
max_offset = (jittered_size.prod().sqrt() * torch.tensor(self.center_jitter_factor[mode]).float())
jittered_center = box[0:2] + 0.5 * box[2:4] + max_offset * (torch.rand(2) - 0.5)
return torch.cat((jittered_center - 0.5 * jittered_size, jittered_size), dim=0)
def _generate_neg_proposals(self, box, min_iou=0.0, max_iou=0.3, sigma=0.5):
""" Generates proposals by adding noise to the input box
args:
box - input box
returns:
torch.Tensor - Array of shape (num_proposals, 4) containing proposals
torch.Tensor - Array of shape (num_proposals,) containing IoU overlap of each proposal with the input box. The
IoU is mapped to [-1, 1]
"""
# Generate proposals
# num_proposals = self.proposal_params['boxes_per_frame']
# proposal_method = self.proposal_params.get('proposal_method', 'default')
# if proposal_method == 'default':
num_proposals = box.size(0)
proposals = torch.zeros((num_proposals, 4)).to(box.device)
gt_iou = torch.zeros(num_proposals)
for i in range(num_proposals):
proposals[i, :], gt_iou[i] = prutils.perturb_box(box[i], min_iou=min_iou, max_iou=max_iou,
sigma_factor=sigma)
# elif proposal_method == 'gmm':
# proposals, _, _ = prutils.sample_box_gmm(box, self.proposal_params['proposal_sigma'],
# num_samples=num_proposals)
# gt_iou = prutils.iou(box.view(1,4), proposals.view(-1,4))
# # Map to [-1, 1]
# gt_iou = gt_iou * 2 - 1
return proposals
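`prutils.perturb_box` targets an IoU band for each negative proposal; the IoU of two `(x, y, w, h)` boxes that it measures against can be computed as follows (a self-contained sketch, not the prutils implementation):

```python
def iou(a, b):
    # a, b are (x, y, w, h) boxes; returns intersection over union.
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0
```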
def __call__(self, data: TensorDict):
"""
args:
data - The input data, should contain the following fields:
'template_images', search_images', 'template_anno', 'search_anno'
returns:
TensorDict - output data block with following fields:
'template_images', 'search_images', 'template_anno', 'search_anno', 'test_proposals', 'proposal_iou'
"""
# Apply joint transforms
if self.transform['joint'] is not None:
data['template_images'], data['template_anno'], data['template_masks'] = self.transform['joint'](
image=data['template_images'], bbox=data['template_anno'], mask=data['template_masks'])
data['search_images'], data['search_anno'], data['search_masks'] = self.transform['joint'](
image=data['search_images'], bbox=data['search_anno'], mask=data['search_masks'], new_roll=False)
for s in ['template', 'search']:
assert self.mode == 'sequence' or len(data[s + '_images']) == 1, \
"In pair mode, num train/test frames must be 1"
# Add a uniform noise to the center pos
jittered_anno = [self._get_jittered_box(a, s) for a in data[s + '_anno']]
# 2021.1.9 Check whether data is valid. Avoid too small bounding boxes
w, h = torch.stack(jittered_anno, dim=0)[:, 2], torch.stack(jittered_anno, dim=0)[:, 3]
crop_sz = torch.ceil(torch.sqrt(w * h) * self.search_area_factor[s])
if (crop_sz < 1).any():
data['valid'] = False
# print("Too small box is found. Replace it with new data.")
return data
# Crop image region centered at jittered_anno box and get the attention mask
crops, boxes, att_mask, mask_crops = prutils.jittered_center_crop(data[s + '_images'], jittered_anno,
data[s + '_anno'], self.search_area_factor[s],
self.output_sz[s], masks=data[s + '_masks'])
# Apply transforms
data[s + '_images'], data[s + '_anno'], data[s + '_att'], data[s + '_masks'] = self.transform[s](
image=crops, bbox=boxes, att=att_mask, mask=mask_crops, joint=False)
# 2021.1.9 Check whether elements in data[s + '_att'] is all 1
# Note that type of data[s + '_att'] is tuple, type of ele is torch.tensor
for ele in data[s + '_att']:
if (ele == 1).all():
data['valid'] = False
# print("Values of original attention mask are all one. Replace it with new data.")
return data
            # 2021.1.10 Stricter condition: require the downsampled masks not to be all 1
for ele in data[s + '_att']:
feat_size = self.output_sz[s] // 16 # 16 is the backbone stride
# (1,1,128,128) (1,1,256,256) --> (1,1,8,8) (1,1,16,16)
mask_down = F.interpolate(ele[None, None].float(), size=feat_size).to(torch.bool)[0]
if (mask_down == 1).all():
data['valid'] = False
# print("Values of down-sampled attention mask are all one. "
# "Replace it with new data.")
return data
data['valid'] = True
# if we use copy-and-paste augmentation
if data["template_masks"] is None or data["search_masks"] is None:
data["template_masks"] = torch.zeros((1, self.output_sz["template"], self.output_sz["template"]))
data["search_masks"] = torch.zeros((1, self.output_sz["search"], self.output_sz["search"]))
# Prepare output
if self.mode == 'sequence':
data = data.apply(stack_tensors)
else:
data = data.apply(lambda x: x[0] if isinstance(x, list) else x)
# if self.train_score:
# if random.random() < 0.5:
# data['label'] = torch.zeros_like(data['label'])
# data['search_anno'] = self._generate_neg_proposals(data['search_anno'])
# search_anno is with normalized coords. (x,y,w,h)
# search_anno = data['search_anno'].clone()
# wl = wr = search_anno[:, 2] * 0.5
# ht = hb = search_anno[:, 3] * 0.5
# w2h2 = torch.stack((wl, wr, ht, hb), dim=1) # [num_images, 4]
#
# search_anno = (search_anno * self.out_feat_sz).float()
# center_float = search_anno[:, :2] + search_anno[:, 2:] / 2.
# center_int = center_float.int().float()
# ind = center_int[:, 1] * self.out_feat_sz + center_int[:, 0] # [num_images, 1]
#
# data['ind'] = ind.long()
# data['w2h2'] = w2h2
### Generate label functions and regression mask
# if self.settings.script_name == 'tsp_cls_online':
# search_anno = data['search_anno'].clone() * self.output_sz['search']
# data['gt_scores'] = self._generate_label_function(search_anno)
# search_anno = data['search_anno'].clone() * self.out_feat_sz
# target_center = search_anno[:, :2] + search_anno[:, 2:] * 0.5
# # add noise
# target_center[:, 0] = target_center[:, 0] + np.random.randint(0, 2)
# target_center[:, 1] = target_center[:, 1] + np.random.randint(0, 2)
# mask_scale_w = self.settings.mask_scale + np.random.uniform(-0.15, 0.15)
# mask_scale_h = self.settings.mask_scale + np.random.uniform(-0.15, 0.15)
# mask_w, mask_h = search_anno[:, 2] * mask_scale_w, search_anno[:, 3] * mask_scale_h
#
# data['reg_mask'] = self._generate_regression_mask(target_center, mask_w, mask_h, self.out_feat_sz)
return data
def _generate_regression_mask(self, target_center, mask_w, mask_h, mask_size=20):
"""
NHW format
:return:
"""
k0 = torch.arange(mask_size, dtype=torch.float32, device=target_center.device).view(1, 1, -1)
k1 = torch.arange(mask_size, dtype=torch.float32, device=target_center.device).view(1, -1, 1)
d0 = (k0 - target_center[:, 0].view(-1, 1, 1)).abs() # w, (b, 1, w)
d1 = (k1 - target_center[:, 1].view(-1, 1, 1)).abs() # h, (b, h, 1)
# dist = d0.abs() + d1.abs()
mask_w = mask_w.view(-1, 1, 1)
mask_h = mask_h.view(-1, 1, 1)
mask0 = torch.where(d0 <= mask_w*0.5, torch.ones_like(d0), torch.zeros_like(d0)) # (b, 1, w)
mask1 = torch.where(d1 <= mask_h*0.5, torch.ones_like(d1), torch.zeros_like(d1)) # (b, h, 1)
        return mask0 * mask1  # (b, h, w)

# tests/providers/integration_tests.py
# repo: chibiegg/lexicon (MIT)
from builtins import object
import lexicon.client
from lexicon.common.options_handler import SafeOptions, env_auth_options
import pytest
import vcr
import os
# Configure VCR
provider_vcr = vcr.VCR(
cassette_library_dir='tests/fixtures/cassettes',
record_mode='new_episodes',
decode_compressed_response=True
)
"""
https://stackoverflow.com/questions/26266481/pytest-reusable-tests-for-different-implementations-of-the-same-interface
Single, reusable definition of tests for the interface. Authors of
new implementations of the interface merely have to provide the test
data, as class attributes of a class which inherits
unittest.TestCase AND this class.
Required test data:
self.Provider must be set
self.provider_name must be set
self.domain must be set
self._filter_headers can be defined to provide a list of sensitive headers
self._filter_query_parameters can be defined to provide a list of sensitive query parameters
"""
class IntegrationTests(object):
###########################################################################
# Provider.authenticate()
###########################################################################
def test_Provider_authenticate(self):
with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_authenticate.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
provider = self.Provider(self._test_options(), self._test_engine_overrides())
provider.authenticate()
assert provider.domain_id is not None
def test_Provider_authenticate_with_unmanaged_domain_should_fail(self):
with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_authenticate_with_unmanaged_domain_should_fail.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
options = self._test_options()
options['domain'] = 'thisisadomainidonotown.com'
provider = self.Provider(options, self._test_engine_overrides())
with pytest.raises(Exception):
provider.authenticate()
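Every test resolves its fixture through `self._cassette_path(...)`, which the concrete provider test class supplies. A plausible shape for that helper (hypothetical; the real implementation may differ) simply namespaces cassettes by provider so recordings never collide:

```python
import os

def cassette_path(provider_name, fixture):
    # Hypothetical helper: keep each provider's recorded HTTP fixtures
    # under its own subdirectory of the cassette library.
    return os.path.join(provider_name, fixture)
```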
###########################################################################
# Provider.create_record()
###########################################################################
def test_Provider_when_calling_create_record_for_A_with_valid_name_and_content(self):
with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_create_record_for_A_with_valid_name_and_content.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
provider = self.Provider(self._test_options(), self._test_engine_overrides())
provider.authenticate()
assert provider.create_record('A','localhost','127.0.0.1')
def test_Provider_when_calling_create_record_for_CNAME_with_valid_name_and_content(self):
with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_create_record_for_CNAME_with_valid_name_and_content.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
provider = self.Provider(self._test_options(), self._test_engine_overrides())
provider.authenticate()
assert provider.create_record('CNAME','docs','docs.example.com')
def test_Provider_when_calling_create_record_for_TXT_with_valid_name_and_content(self):
with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_create_record_for_TXT_with_valid_name_and_content.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
provider = self.Provider(self._test_options(), self._test_engine_overrides())
provider.authenticate()
assert provider.create_record('TXT','_acme-challenge.test','challengetoken')
def test_Provider_when_calling_create_record_for_TXT_with_full_name_and_content(self):
with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_create_record_for_TXT_with_full_name_and_content.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
provider = self.Provider(self._test_options(), self._test_engine_overrides())
provider.authenticate()
assert provider.create_record('TXT',"_acme-challenge.full.{0}".format(self.domain),'challengetoken')
def test_Provider_when_calling_create_record_for_TXT_with_fqdn_name_and_content(self):
with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_create_record_for_TXT_with_fqdn_name_and_content.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
provider = self.Provider(self._test_options(), self._test_engine_overrides())
provider.authenticate()
assert provider.create_record('TXT',"_acme-challenge.fqdn.{0}.".format(self.domain),'challengetoken')
    ###########################################################################
    # Provider.list_records()
    ###########################################################################

    def test_Provider_when_calling_list_records_with_no_arguments_should_list_all(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_list_records_with_no_arguments_should_list_all.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            assert isinstance(provider.list_records(), list)

    def test_Provider_when_calling_list_records_with_name_filter_should_return_record(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_list_records_with_name_filter_should_return_record.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            provider.create_record('TXT', 'random.test', 'challengetoken')
            records = provider.list_records('TXT', 'random.test')
            assert len(records) == 1
            assert records[0]['content'] == 'challengetoken'
            assert records[0]['type'] == 'TXT'
            assert records[0]['name'] == 'random.test.{0}'.format(self.domain)

    def test_Provider_when_calling_list_records_with_full_name_filter_should_return_record(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_list_records_with_full_name_filter_should_return_record.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            provider.create_record('TXT', 'random.fulltest.{0}'.format(self.domain), 'challengetoken')
            records = provider.list_records('TXT', 'random.fulltest.{0}'.format(self.domain))
            assert len(records) == 1
            assert records[0]['content'] == 'challengetoken'
            assert records[0]['type'] == 'TXT'
            assert records[0]['name'] == 'random.fulltest.{0}'.format(self.domain)

    def test_Provider_when_calling_list_records_with_fqdn_name_filter_should_return_record(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_list_records_with_fqdn_name_filter_should_return_record.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            provider.create_record('TXT', 'random.fqdntest.{0}.'.format(self.domain), 'challengetoken')
            records = provider.list_records('TXT', 'random.fqdntest.{0}.'.format(self.domain))
            assert len(records) == 1
            assert records[0]['content'] == 'challengetoken'
            assert records[0]['type'] == 'TXT'
            assert records[0]['name'] == 'random.fqdntest.{0}'.format(self.domain)

    def test_Provider_when_calling_list_records_after_setting_ttl(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_list_records_after_setting_ttl.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            assert provider.create_record('TXT', "ttl.fqdn.{0}.".format(self.domain), 'ttlshouldbe3600')
            records = provider.list_records('TXT', 'ttl.fqdn.{0}'.format(self.domain))
            assert len(records) == 1
            assert str(records[0]['ttl']) == str(3600)

    @pytest.mark.skip(reason="not sure how to test empty list across multiple providers")
    def test_Provider_when_calling_list_records_should_return_empty_list_if_no_records_found(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_list_records_should_return_empty_list_if_no_records_found.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            assert isinstance(provider.list_records(), list)

    @pytest.mark.skip(reason="not sure how to test filtering across multiple providers")
    def test_Provider_when_calling_list_records_with_arguments_should_filter_list(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_list_records_with_arguments_should_filter_list.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            assert isinstance(provider.list_records(), list)
    ###########################################################################
    # Provider.update_record()
    ###########################################################################

    def test_Provider_when_calling_update_record_should_modify_record(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_update_record_should_modify_record.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            assert provider.create_record('TXT', 'orig.test', 'challengetoken')
            records = provider.list_records('TXT', 'orig.test')
            assert provider.update_record(records[0].get('id', None), 'TXT', 'updated.test', 'challengetoken')

    def test_Provider_when_calling_update_record_should_modify_record_name_specified(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_update_record_should_modify_record_name_specified.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            assert provider.create_record('TXT', 'orig.nameonly.test', 'challengetoken')
            assert provider.update_record(None, 'TXT', 'orig.nameonly.test', 'updated')

    def test_Provider_when_calling_update_record_with_full_name_should_modify_record(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_update_record_with_full_name_should_modify_record.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            assert provider.create_record('TXT', 'orig.testfull.{0}'.format(self.domain), 'challengetoken')
            records = provider.list_records('TXT', 'orig.testfull.{0}'.format(self.domain))
            assert provider.update_record(records[0].get('id', None), 'TXT', 'updated.testfull.{0}'.format(self.domain), 'challengetoken')

    def test_Provider_when_calling_update_record_with_fqdn_name_should_modify_record(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_update_record_with_fqdn_name_should_modify_record.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            assert provider.create_record('TXT', 'orig.testfqdn.{0}.'.format(self.domain), 'challengetoken')
            records = provider.list_records('TXT', 'orig.testfqdn.{0}.'.format(self.domain))
            assert provider.update_record(records[0].get('id', None), 'TXT', 'updated.testfqdn.{0}.'.format(self.domain), 'challengetoken')
    ###########################################################################
    # Provider.delete_record()
    ###########################################################################

    def test_Provider_when_calling_delete_record_by_identifier_should_remove_record(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_delete_record_by_identifier_should_remove_record.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            assert provider.create_record('TXT', 'delete.testid', 'challengetoken')
            records = provider.list_records('TXT', 'delete.testid')
            assert provider.delete_record(records[0]['id'])
            records = provider.list_records('TXT', 'delete.testid')
            assert len(records) == 0

    def test_Provider_when_calling_delete_record_by_filter_should_remove_record(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_delete_record_by_filter_should_remove_record.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            assert provider.create_record('TXT', 'delete.testfilt', 'challengetoken')
            assert provider.delete_record(None, 'TXT', 'delete.testfilt', 'challengetoken')
            records = provider.list_records('TXT', 'delete.testfilt')
            assert len(records) == 0

    def test_Provider_when_calling_delete_record_by_filter_with_full_name_should_remove_record(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_delete_record_by_filter_with_full_name_should_remove_record.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            assert provider.create_record('TXT', 'delete.testfull.{0}'.format(self.domain), 'challengetoken')
            assert provider.delete_record(None, 'TXT', 'delete.testfull.{0}'.format(self.domain), 'challengetoken')
            records = provider.list_records('TXT', 'delete.testfull.{0}'.format(self.domain))
            assert len(records) == 0

    def test_Provider_when_calling_delete_record_by_filter_with_fqdn_name_should_remove_record(self):
        with provider_vcr.use_cassette(self._cassette_path('IntegrationTests/test_Provider_when_calling_delete_record_by_filter_with_fqdn_name_should_remove_record.yaml'), filter_headers=self._filter_headers(), filter_query_parameters=self._filter_query_parameters(), filter_post_data_parameters=self._filter_post_data_parameters()):
            provider = self.Provider(self._test_options(), self._test_engine_overrides())
            provider.authenticate()
            assert provider.create_record('TXT', 'delete.testfqdn.{0}.'.format(self.domain), 'challengetoken')
            assert provider.delete_record(None, 'TXT', 'delete.testfqdn.{0}.'.format(self.domain), 'challengetoken')
            records = provider.list_records('TXT', 'delete.testfqdn.{0}.'.format(self.domain))
            assert len(records) == 0
    # Private helpers, mimicking the auth_* options provided by the Client.
    # http://stackoverflow.com/questions/6229073/how-to-make-a-python-dictionary-that-returns-key-for-keys-missing-from-the-dicti

    """
    This method lets you set the options that are passed into the Provider. See lexicon/providers/base.py
    for a full list of available options. In general you should not need to override this method; just
    override `self.domain`. Any parameters that you expect to be passed to the provider via the CLI,
    like --auth_username and --auth_token, will be present during the tests with a 'placeholder_' prefix:

        options['auth_password'] == 'placeholder_auth_password'
        options['auth_username'] == 'placeholder_auth_username'
        options['unique_provider_option'] == 'placeholder_unique_provider_option'
    """
    def _test_options(self):
        cmd_options = SafeOptions()
        cmd_options['domain'] = self.domain
        cmd_options.update(env_auth_options(self.provider_name))
        return cmd_options
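# The Stack Overflow link above describes the idea behind SafeOptions: a dict
# that synthesizes a value for every missing key. A minimal illustrative sketch
# (the real SafeOptions class lives elsewhere in the test harness; the
# 'placeholder_' prefix here mirrors the engine's fallbackFn below):

```python
class SafeOptionsSketch(dict):
    """Dict that returns a synthesized placeholder for any missing key."""

    def __missing__(self, key):
        # Mirror the engine fallbackFn: prefix missing keys with 'placeholder_'.
        return 'placeholder_' + key


opts = SafeOptionsSketch({'domain': 'example.com'})
print(opts['domain'])         # stored value: example.com
print(opts['auth_username'])  # synthesized: placeholder_auth_username
```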
"""
This method lets you override engine options. You must ensure the `fallbackFn` is defined, so your override might look like:
def _test_engine_overrides(self):
overrides = super(DnsmadeeasyProviderTests, self)._test_engine_overrides()
overrides.update({'api_endpoint': 'http://api.sandbox.dnsmadeeasy.com/V2.0'})
return overrides
In general you should not need to override this method unless you need to override a provider setting only during testing.
Like `api_endpoint`.
"""
def _test_engine_overrides(self):
overrides = {
'fallbackFn': (lambda x: 'placeholder_' + x)
}
return overrides
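# The default overrides ship only `fallbackFn`, which manufactures the
# deterministic 'placeholder_' test credentials mentioned in the docstring
# above. A standalone illustration of that behavior:

```python
# Reproduction of the default overrides dict, for illustration only.
overrides = {
    'fallbackFn': (lambda x: 'placeholder_' + x)
}

# Any auth option a provider asks for resolves to a predictable placeholder.
print(overrides['fallbackFn']('auth_username'))  # placeholder_auth_username
```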
    def _cassette_path(self, fixture_subpath):
        return "{0}/{1}".format(self.provider_name, fixture_subpath)

    def _filter_headers(self):
        return []

    def _filter_query_parameters(self):
        return []

    def _filter_post_data_parameters(self):
        return []
# ---------------------------------------------------------------------------
# utils/__init__.py  (rickgroen/cov-weighting, MIT license)
# ---------------------------------------------------------------------------
from utils.train_utils import *
from utils.reduce_image_set import RestrictedFilePathCreator
# ---------------------------------------------------------------------------
# app/controllers/genre_controller.py  (Juan7655/wfh-movies, MIT license)
# ---------------------------------------------------------------------------
from app.controllers import paths
from app.models import schemas, models
from app.controllers.base_controller import crud
paths['genre'] = crud(schemas.Genre, schemas.Genre, models.Genre, 'id')
# ---------------------------------------------------------------------------
# app/database/handler.py
# (justanotherresearchanddevelopment/MalaysianIncomeTaxCalculator, Apache-2.0 license)
# ---------------------------------------------------------------------------
import sqlite3
class DatabaseHandler:
    def __init__(self):
        pass
# ---------------------------------------------------------------------------
# libsaas/services/mailchimp/__init__.py  (MidtownFellowship/libsaas, MIT license)
# ---------------------------------------------------------------------------
from .service import Mailchimp
# ---------------------------------------------------------------------------
# apps/dashboard/layers_builders/benin_protected_areas.py
# (TechnoServe/Caju-Dashboard-v2, MIT license)
# ---------------------------------------------------------------------------
# WDPA_WDOECM_May2022_Public_BEN_shp-polygons_1.json
import json
import time
import folium
import geojson
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.interval import IntervalTrigger
from celery import shared_task
from django.utils.translation import gettext
heroku = False
# Load the Benin Protected_areas shapefile
with open("staticfiles/WDPA_WDOECM_May2022_Public_BEN_shp-po/WDPA_WDOECM_May2022_Public_BEN_shp-polygons_1.json",
errors="ignore") as f:
protected_area_1 = geojson.load(f)
with open("staticfiles/WDPA_WDOECM_May2022_Public_BEN_shp-po/WDPA_WDOECM_May2022_Public_BEN_shp-polygons_2.json",
errors="ignore") as f:
protected_area_2 = geojson.load(f)
with open("staticfiles/WDPA_WDOECM_May2022_Public_BEN_shp-po/WDPA_WDOECM_May2022_Public_BEN_shp-polygons_3.json",
errors="ignore") as f:
protected_area_3 = geojson.load(f)
# with open("staticfiles/WDPA_WDOECM_May2022_Public_BEN_shp-po/WDPA_WDOECM_May2022_Public_BEN_shp-points_1.json",
# errors="ignore") as f:
# protected_point_1 = geojson.load(f)
# with open("staticfiles/WDPA_WDOECM_May2022_Public_BEN_shp-po/WDPA_WDOECM_May2022_Public_BEN_shp-points_2.json",
# errors="ignore") as f:
# protected_point_2 = geojson.load(f)
# with open("staticfiles/WDPA_WDOECM_May2022_Public_BEN_shp-po/WDPA_WDOECM_May2022_Public_BEN_shp-points_3.json",
# errors="ignore") as f:
# protected_point_3 = geojson.load(f)
temp_geojson_1 = folium.GeoJson(data=protected_area_1, name='Benin Protected Area 1')
temp_geojson_2 = folium.GeoJson(data=protected_area_2, name='Benin Protected Area 2')
temp_geojson_3 = folium.GeoJson(data=protected_area_3, name='Benin Protected Area 3')
geojsons = [temp_geojson_1, temp_geojson_2, temp_geojson_3]
protected_area_features = []
for geo in geojsons:
    for feature in geo.data['features']:
        protected_area_features.append(feature)
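# The loop above flattens the three layers' FeatureCollections into one feature
# list. With plain dicts standing in for the loaded GeoJSON (toy data, not the
# real shapefiles) the same merge looks like this:

```python
# Two toy FeatureCollections standing in for the loaded GeoJSON layers.
layer_a = {'type': 'FeatureCollection',
           'features': [{'type': 'Feature', 'id': 1}]}
layer_b = {'type': 'FeatureCollection',
           'features': [{'type': 'Feature', 'id': 2},
                        {'type': 'Feature', 'id': 3}]}

# Concatenate every feature from every layer into a single flat list.
merged_features = []
for layer in (layer_a, layer_b):
    for feat in layer['features']:
        merged_features.append(feat)

print(len(merged_features))  # 3
```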
with open('staticfiles/protected_area_data.json') as protected_area_data_file:
    protected_area_data_dict = json.load(protected_area_data_file)
def __human_format__(num):
    num = float('{:.3g}'.format(num))
    magnitude = 0
    while abs(num) >= 1000:
        magnitude += 1
        num /= 1000.0
    return '{}{}'.format('{:f}'.format(num).rstrip('0').rstrip('.'), ['', 'K', 'M', 'B', 'T'][magnitude])
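# __human_format__ rounds to three significant figures and appends a
# K/M/B/T magnitude suffix. A standalone copy with sample outputs:

```python
def human_format(num):
    # Same logic as __human_format__ above.
    num = float('{:.3g}'.format(num))  # round to 3 significant figures
    magnitude = 0
    while abs(num) >= 1000:
        magnitude += 1
        num /= 1000.0
    return '{}{}'.format('{:f}'.format(num).rstrip('0').rstrip('.'),
                         ['', 'K', 'M', 'B', 'T'][magnitude])


print(human_format(950))        # 950
print(human_format(1234))       # 1.23K
print(human_format(2_500_000))  # 2.5M
```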
def __style_function__(feature):
    """
    Function to define the layer default style
    """
    return {"color": "#1167B1", "fillColor": "#476930", "weight": 2, "dashArray": "1, 1"}
def __highlight_function__(feature):
    """
    Function to define the layer highlight style
    """
    return {"color": "#476930", "fillColor": "#1167B1", "weight": 2, "dashArray": "1, 1"}
def __build_html_view__(data: object) -> any:
    """
    Return the HTML view of the Benin Republic protected areas layer popup
    """
    # Translated strings used in the popup
    active_trees = gettext("Active Trees")
    sick_trees = gettext("Sick Trees")
    dead_trees = gettext("Dead Trees")
    out_of_production = gettext("Out of Production Trees")
    cashew_trees_status = gettext("Cashew Trees Status in")
    is_ranked = gettext("is ranked")
    satellite_est = gettext("Satellite Estimation")
    tns_survey = gettext("TNS Survey")

    # All 3 shapefiles share these variables
    total_cashew_yield = gettext("Total Cashew Yield (kg)")
    total_area = gettext("Total Area (ha)")
    cashew_tree_cover = gettext("Cashew Tree Cover (ha)")
    yield_hectare = gettext("Yield/Hectare (kg/ha)")
    yield_per_tree = gettext("Yield per Tree (kg/tree)")
    number_of_trees = gettext("Number of Trees")
    source_tns = gettext("Source: TNS/BeninCaju Yield Surveys 2020")
    predicted_cashew_tree_d = gettext("Predicted Cashew Tree Cover Communes Statistics In")
    among_benin_protected_areas = gettext(
        "among Benin protected areas in terms of total cashew yield according to the TNS Yield Survey")
    return f'''
<html>
<head>
<style>
body {{
align-items: center;
background: #F1EEF1;
display: flex;
font-family: sans-serif;
justify-content: center;
height: 100vh;
width: 100vw;
margin: 0;
}}
.me {{
background-image: url(data:image/jpg;base64,
                        /* base64-encoded JPEG payload elided; the source file is truncated at this point */
wram01SKJcFIXkElvPgjcWgmQH02sx7CvfNMg0mW3mMfkmS3dQDsKDdtLH3PXk1b1Cz0q+Q6VqKl4ri18l/NfesqEYkVsjOMH8KcXZkVKfNFpn5Z+Hn1CG7t7Vbp7LXNNcT6UQyiORLgbmQN1YScgDptOPXGfe6zeTa6ZNNX+zLwThykY2eXKrDhUI+WUMOMdemcV7n8fvhfc+HdTxpsRns7EIttPAp8+CBsBY7kA5G1huRxjknOCefAo9A8VeJtQSS4DytIQGvpQTGAnUs6jLMuMYHzE8e9ekk7aI8Oaa0Z7BrGq6r4hi/tTVITYQKyx3cUluqyF8AfunIBUPjndnnOOazNT0zQ9Ks11DxLKzCRD9m09ZPICBeFO2LMmwdQOMkktnPHXR+GfA/w+0OS/1jVtSbWLkRssV7cRRzyIo3KyWpLJGityhkLTYPODXzhq1/Bf3txdo8kklw5d3mJeRifVu/8Aniu2pU5V72rOZRvsWL3xHK6GDTl+yxksDtOWZT/eZvmPH0rnGlduMnFaNvo+pXg3wQOy5xkAnk11un/DjW72PzWjZFxkYG7OfQetebUq9ZM7qGDq1HanFs8+ySApPA7UnNe02HwsuZcK0E0j55GCP5V3dv8ABe7VQ4tm2N3K4yR2GevvXHPG0Y7s9Wlw9jKmqifLmGB6EYIqadzNMzAcE8D6CvpK8+GggOxkyR1GPT0rmIfABhHliB5FyxDsRk5P9O1EcdSa0ZpPhzFxdrHh2D1pn14r2658DsnAgYn+7jtWFP4HlUtgEY5wwxgGtY4im9mctXJsTDeJ5gPWp45yh3MN5AwMk8flWzqGjtZk7lI+o5zWAQVJBrZPqjzJ03F2kadrY3d7zbQGTr8sY3Nx7A5qW40PVLe3F5cWV1DE3SRoJBGeccPt2/rmsyC4ktpPNiOGwRnkdeOxFa8Wu6nbxhI7mdQDuCiWTZ9dpYjP4VouVoz1M1LyW3iMLN+7PI3ZG0+xPTPcdKmhuN7oVwvPLA9Qeo/rV0+I9TJU7x8n3QRuwTyTg5HPetCw1G0OH1O1iZS3LRKFmYHkj0wfXAwOKcdXa4GRYBLbUBE/BEmA2cYAzn26V6X4Y+Jmr+C9Ojs7P7DeQMS5TkXC5JyJCOGBBIC8ED1rzrW5Irm7k1CNw/nkNlBtCnpsZD90gDjBIIrCqW7aDTe6PsX4O/Fi+0LREsbXRUu47WS4f7RbZjuFEzmQxEL5gkC5+XdGRgY61W8Uar4R+KniBlu0i0e4ujaQxy3LbbaCUb2lE6owMamMbPRZGB4HFfJEb7GEi5RweGUlWH0YEEfga2bHVbuO/W788rc5/wBfId5IPBDs2SykcHOeKSSbv1Kc3ax9z/ssarcaV471nwlcRtE0NmrPvkMjyNBKwbzGyQXiVlXcCd6bWr7gkEy75WnHzvtXuB7Ek4/Svy8+FHj+10H4gW2qzRJG97FAk0UCbIkuY5dm+MgkFbmNyGGQEwM9a/T7Nn9nK7fORnb5TjOQAc47kZrkxMbSub0ndWFvVY2yAcYLMCBjGcHAPOea8/8AEl0P7BvUmjAOwkj+E5YDAPqeteh6gAbZVCBOQ2MchTwRx3PvXmniqO3bQb5LclldVxxjA3j0rmvc2jfY4vT5YzZg72CjvnqCSOfXinTr58lq0RDSRuwKqowQVUdT39hVB7O6trRvKKxpGFI5DZOTj09e1abTXEFwizRswB+V3OV6AcYHarSLb7m3dMq6tEjqNxwSC2eFBwfSuZ1KUHVAHl2IWyPKO3HynPPqc81Nf7DrdrM24gw4Qo2QHdTzjPBA9ulZd6z21wpjcSSedHuD9drRnd6881UUZXMG9aNID5Tu7SO2fY7uOfpXNWMLSW6yoXVBv+UcEEN3OcnpXRXdxEbaNIwmFkAB6kFnPc8VmaUJIIWSeVXDylgkYztGeckU2Uk29TqtE09hCxIHz/PkdTkYOc960PDCH/hJdWkbHywup7YwOMf40+O8ZULLHuP
Ktj5QOmD+PSqfhJVbWtVuChIihcgqSAOMHr3zUrUuekbH/9T9GNItL6cSTXEsR/fKE8tCMDAJ/i9zUc8cgguW/elke6ZSBs3bUCAAdMfNTtGeSysDcSKjlruULjuN7KMc8/KB2rPW4aWzeNwRJJDI7OD8hWaXgenAWvGZ2ddDO/s4XM1ra2kmwwNGxAA3HYFUg47cGtGyDxSWCeTFh/tM370shUFsZztPrn6ViQadpw+26glr5kjoAZI0XBC7m7EEjkfjV+wbT4rwF2aIQW8UKqWkyOrHjLA9qm3VmvQ03mufst05tTJudocI4AOSFzknOePSruqXdosctxJA6+Qqr2yGC/X3qjZ63p001nZvKJBI7FizYy4bdg5A5z0Nb2owxSxYCH98+cA5X7wHXkUr6C6mRpcFtdapNKkhGVWLCOOsUYyT35LVr3Wn5siBK7BmOAx5K56dj2rKtUhbUpZ2hAKrKN+0Ocs6dxjj681tX8SRxRiJlHzLvKyMBzgk4wRmtURJ6kVnp92lzboYfNjjikO4PlgAQMfNz37mujNxGb6a1YyQhIVblgQSc5Xmud0eWSOZ3uI2kZkQBkZc/OxJ4+UnpU9jGt7B5sksgkaeSeNWbLZjztPDYyC2MZxxRYTTF1m9t4bZTIS4WIbmLKB8w4zg8cfpXxB44UTX8S2rkzhpQGVd6pDnBAJHTJ6d6+v/ABkkn2S784M4M6YAO0YVV689+9fJmqzrHqrSbWjEKkLkjDEt1PPGK5a71PVwEFa55RqAjklszLbGVoXMZkZgm0IOSvb86gt70hL2JyAfIQIWYEHLEA7QeOe9bmn3GLwrcHBjS5uBuxgSSNgDnGcL0qC3dLlZoykbM7LyMZZQx2rkDsfTpWTZ6K3JUN5JayguNzsyjADBsfKFGQAD681d1MxWWm+bc7xEBGWfA3Lh1Q4xzt7HFZ1uJZAUuIZIYImXbtbDuQSWPpz2I7VbeV72eBFd1WOWA9ATnduVWYnBUjGR3qOt2PpoeNfFDQ/Flr4yudf0O0OpC7WDcjSyQPbbvulQnyurj7xOV9cYzXid14i1TQ9Mvpr942ubeGRnggYSIs8rmOElmxvWPln2Y3MR2xn7O+IPhzR7W5vLifz4IVhmbyba8MMMh2AleFZ1DNgYBAB5r5c+CPwnn/aT8bNpUsU9joHh1o5tVcA4DzE7bdWPCyGMbcZLhWLtjKZ9nD1ZKF2zwsVTXPZbs+ePBfwr8ZfEu8N5p0M00U0pV76cMyu4OGwTy23occA8V9zeAP2OLDT4lvfEDm5nRuQRgDPfH+eK/S7w78L/AAt4VsbXTtF0+G1gsolSCNFCqiAYAX0roLvS7QBdsYdlIILgAgJ/9fivNqwqS1bsethJ4aklyx5pd3/kfEr/AAN8PadbrHJZQw9AhCDcduRj3H4c0Wfwj0qEkJagtGQCSoHJHBb0A7DqOpr601GxjuFWCNVKKCdoHO4dgeorBuLCKGBVJJ+Y8dAR3rxsVhG23c+mwuZvlSsfO9v8PLLTSXTIk77lDAduwGKzLzRrYRNmMAqxHBztPr+New6zdRWzGNRnAIPOOvHXPp+teM3t9KN6R7vL3Y5HUj1r5+vC0rI+kwlSc1zM851zw9BLuaQBSucMoxuzxxXJy+GbFEy0e5cbc5I9sn6e9d/e3TOzjkgYxu71zTXaGRo5sgKMYrKMpLY7na2pyH/CNWyNuByTyuTnArnta8NwsgZuXYHqOQf616GXgc7wfutgYHIzVK78pmIkUAEZ6ZOD0PrmuqFecWrnNOEZXR8heMfD7KXVVIeI8jsR2/8ArV4hfQSW8x3qQCxHPtX2T4w0zeski5dicE46j2r5q8RaaWha5XDHedyg8/XFfVYHE88LM+Az/LVCXtIHn9AyeOcelOYDPH5Uu5P7pFegfJkyYUZxk+/apW27QxPzk8k/0HpVXcD90YP4UjOWIyBjGOO/196q4EjE8gc56/0qPAweMH3qdbdnhaeFtwT
/AFg/iUHvjuvvUIYZGelJgMyeB6VZyoCkj8aJI9uCR15GOhHqKdJH5SxMzK4bnCMCQAeQf7p9M00rAaNk8KTNbS/OkqkxkEgxyDkdOgPQ/nX6v/BnxnH4s8G6Pdyyb5Wtf9YwCu7RcPHIAfvpjhv4hyK/JGQxLMfJLFc5R2GGx7j1r7c/ZO1W1t9WvtHkuyba7tFvEBJCW8zHEsYyDzuGTjA5B6k1nXjeN+xpSk07H3beTvPCmU3E8cMMEAnPPU4Fec+MJki8O3UhyobyUAIwRlhwcf5Nd1cO/mhI4t0YAIf5Vyx9upz17cVwXjRnj0K42ghGkjyHGOAc/jXAt0dlNnFwi6mYNEzYWOM/Lghsjv1xn1q/eM7+Urqys3zYbqoJPH6VT0hbaC6kQwkq0cZB5A46j6j0rVknhN3DsiXaC0fznAbrzjvWnqJ73I2EUmoRGKGNQkSsJOpJwV/LHT1rnLqNmunE0ablk3cEltoXrgc5/lW7PdxrdurCIj7OpCR9Ac4GM9M9cVk3BWOCW8LB2MhBZQPl+XHJqopXJaOK8+S90xTITHtc5CcEBS3J781DZNHFBHyMFQQB15J6/wD16RjIbOYPglGZwcHBB9/xqL7TEwhjjTdwGPI2jbzk+vWh+Q4nYafNJdqxDbQEQAkcjI7qK1PCF0hfXY4mJaMKjOMAbs5xz14/AVz9m6wq6MGPmlBkHBGO4zitXwrBJFLrc0SBIHkVcjr9ck889xUrcud7H//V/Q64aLT9CtS1oZGUhsxyqMEozEnLZ2/rWXdbxpbSRxhHkWJSMKSAAWPf3+lbd1plpthh2gbYwPQEOM9x6Cq2u2dvHp0Uw+88axjJzjPGeO9eI7M746GBayB4fIRthSMoAQAvVVPQ/lXVWWlR+bcvIokAdnyAAw2qABkDqCPauethaPI0MZCuJUhP7wcryzEk8dccda3NNvWeC5IQMssrKpDo46nPKn0FJPQqV+hVjt4jqkEKLIIxC7Ehg/J45BzjGfzp09rAXhCo8gS6VE24QnHLYIKc5FXbGCZ557yAs0RgVEy3zAA44DbuOKxxLIL23ildUbMsqxv+6DAAEncvy5GeM07XC5U0G4e8u9QDyXIjSVlUMd4Iyx/iByRhejVtaqJLVY/sqRyF7kR5CspVVXn7rc9Bg9BU2lwxpeJGYAHkkPzqVII+UZ4Iz6V0N7HbG4QsWGEnmbI3AZZVGc5xWhEnqVLRpIoJfMiErIYwBt3HhScZ68Vr2tlCsVuywM7Ku3auMrkjPYVWtkRoWZY0ZXkcnbwSFwo/lWgsPlFdsUhwVP8ArOOo59e9CJbR5/4wuHs9O1M8hjKzqrk9hgAdeK+Rb+5+xX2oS3UbOrkiNlG7ACruz/ewfpX1R47hSHQ9QLGVpDK8qKz54bA29/yNfFnjSa+Mt1ZwMrRNFtUt8rguQPTp/OuWprI9nBL3LlS1SO4n1Blumzb2g3CVc7yEY9R2yOMYxXG2er3EGnWD2myQNDGu6T5SAQSTwCKJodRiTUnaFBsj8nO1l8xVQ/LkHGeck45o0+yEf2fz496iNd2w7lLBNgwOo5x2qXax12dzrYJY44o4zdjc4fLBt4yueMnoSeM1b064u7G603Tl8ubF0jtyDv2KuCMdMdD71HBBp7x3BkGxoIG2jeArb+SfqBx61m+HJ4rrxXaxyPlLeVJdrkq7xnoRjsDgH0xk9aztqVLREPjzV76fWpdGurfdLcqPJxglZ7lyYYvXcz7Rx68199/AL4P6f8Gfhvp/hSHY+oyF7/WLpBj7TqV2xluZOSeN7FUGTtUADivmr4d+E4PF/wAXNN1G4UXFpY3c2oyE/N++sEMMYbOcgSOGGMfMoOTX6AzOIkJ7gGuyk3Y8rFvaKKMkiRKwY9xj17nNYdxIWXcvzjOTx1yevtz/AI0stwk11tRuhxzzySQfyq3Hbj5lwBsAX05AxmiV5DhFQ3Odmt9371Fwygvgna2M55+hri9UcDfCVwW
3AgHHJHbPHQ16jcCNItzkbscZHUEfpXiXirUvsbsucncAO2Bzk/rXnYy0I3Z7WWqVSdkeXeILoMZIiQcHcWA644/I15fqcrKjOuG2kfL0+9/nBrq9avI2L5ORgjA5OeP1Ned6ldwhGkkCJuxwxycgjAP1r5CtK8rn6HhabUDm72ZY5CmMZ7dRzWPO4ZGZhuJ78HPNNvpf9axBwSchefy+lYwuPOUMmAGHcYPHpUwi3qbN62ZbEUYPTAOO/FMmddmMhcEgZqBWfyzIrrgZ4J5/LvVc3AePoM44x3roUXczutjF1Ygv5U3IIwDweo4rw3xL4cCtI9t+9RmJZFHJX3PtXuN2ikMrLu/HjJ7/AFrFm06N1yq7dp57Z9Qfb+delhqnszzsbh41ouMj471fRpLcthCrA8giuWZHXqDxX1l4j8PWk6PGnDsNy44wR2r5/wBZ0vyJHTA3qSCema+gw9b2iPzzM8tdCemxxeAen5UAHPNOdCGP5UgLDpxW541iZNyEleDj+dNZWzyMU3cG5I5qyrYUMB9D9KpagVf3mO+AfyNTGNpV3RDJxllA5wO/vUoacK6yMcEbXB9M5/8Ar0zLROcHDIeGH8/xp2AI/LZSXBZtp2gdOPWvX/gr4/8A+FfeMbTVLohtOlkFvcHBYqpyVO0c/eIz7cnpXmKIsxa6jZY5oyHaMjAb3X+RHvkUywcrfR28JCC5uIUUdQH81fLHqTuIA9aGlazBN9D9qbHVLXUovtUBBj2K2eoYFc5HHI//AFVw/jWNJdKlQNg+dEu3kLhQTW94P0mbT/DVnp2objcBW88Y3+W+eIycDJGep6Vznj+GRNKiEbBz9qUYBwRgEfyry0rSsd8O5z+lWxW/kuAGkUeUSu7oGAAx60lz5cc6o6lSRLsGCxBZm7/54zU2nGaG/e2ba8HmQg5zkcHsfXtVB7tILuG5ug8a2yThVAH7x2zyuM9PpWiQO2zKkiu0zOYVwiFAQ2GYjoa5+N7hkaFVA8ycggdF7cnuMHrW7cQ3Nz5c0ztkqG4bjIB5NYd1JJHcpHPGShcdHHPzDnjnB71oZ9TJ1GKKKO6jkdiqxyZGemMY/AnpWfbyIJ1MKhkUdQcAcY/WrMypKLqBk3idZCwzkAliBj6Csq2n8tYolRW3HawHHJ4z9al2KvqdxBauw81fuh/ujrtUf1rR8LZXSNQlfLsbhSC38KtnjHv2zVGzllkiWJMx7PMw/T7q9j6Grfh12TRZyCgLTr5mDyT26/r2ogtTWo9ND//W/RS4lVomtll6OkZCLu2B175HAwePas3Wree4tLW2Rw8Z8teFAwFPU49AKHa9giijaORm88ruMjqf3a7ckbuTlazpmm6yRTQtCXKne+zIBY/ez/k14ktz0Io1tGspnkUwsH3GSVhjKgM3y8EEE4HNQxIlnpdu0sNuMu7tsUK2Scn7oB4BNR6NJc28gjZpF2WSkoOdrOOMEYwBn61r6iSLiztFkGPLYHfHwAigc4JPUipLe9iaw8tNKH2WSQhQPliYszYGSNp5ye1Wj9q2/vkUJsIIlwSFOByMqcj8amtkH2gwxSROTkuFJGSMdQe4x1rVnhMktwpPGI0IHOOc9B7D8qtaGMnqebaleajpLvqWnaOupMkY2QJcCAsXkYk/OuM7cZw3TrXQf2lbyTTvcQzWzmC3QxuolVCwLsu5GYHn8D1q9btYw3TB7iM7yfkY7WDAY6HB7+9JPbQ7rsRxlo955Ei5bC4XhgePQ1dyro2dChmttPtlhdXyiLhiFIBG49QMk+/51swrcvIzRojMm44Z9nQ8cgGsoRJDBEDZyNIuFXKxsT05ypHrWnbMYbadmTaAmSdqr8zHkY3ZosZvVnkXjG4mfSEW8jjEzzMZDklMqex4J6V8VeJ7y4S4RFEPls8bOxyMkNuIwRnBPPXivtrxsol0+GHZy6vLyTkHnqQQOK+HPiCglvI5kfKqiuI+AFJbZjcMHHbFck7cx7eD+A5GcXN
xbMnmESSSsw2EsFBlCcEfeymfU5p8V81sZWgmYrI4VSoAO/kbMHoe/pWFC1/pdi8LxiZxKmx3wwGGJwu1vuZ6HqaZcXF1c6rbqjpConBz5WN3lxMffkFs+/Wp3R1pnURK0MclqJcGGONXc8b93UK304IIzU2jaNBqup2kAM4dlaSKRGCSo5UKxjk/u46gjBxzkVTmn2reJHbzJ8zMHTAUZ4J9+K3fD09st4THLKBFAYY33AgLHEc5OMg5PX8Km9gaunc+pP2W9C1GGfXtX1C9kv44ilnbPLHCrKC7ySDdEo3knGSeOMivqbVZHCOV7YGOvP8A9evLvgB4fHhT4ZaUt4SZ7/deSEjb/rfuDHYKmBivR76QXRkiU/Lycnj6dPyFdHOrHmuLdVvojjY7tp79ICQqjkf3i3b269PcV3Sq8dsZ8gtgKR246/lXjum6zbz+Mo9JDKWbOAD0CHDfTk16h4o1i302FYJMb2GTnheT0/8ArVjCvHkcrnZicNNVI00t0cr4t16HT1ZAfnLAbc5KhvX2HpXxp478Z/aNcWzhdsliPvYAyeB+X/1q9Y+IutSNeJeNII7Y4aQnjGcEDB6nivm2y0K81TWrzWnyYk5hLAEFux/KvnMwxDqyt0PuMkwMKFP2ktylrniWWMsqA7nIwAMY/wA+tc1FFqd9cAhceYATu5x9R7e1eg3HheG6lGo3svkdWYMRgAc8+/tWratoVhaqEcNG4LM54Zcc4968tQ/lVz3pV4wR5tcWM8AIlBcr1OOme+P6Vzf2G7kL7MnOcAkDI9fxrute1/QxmeNgxU8DOTjtkcE/hWbY+JNNmV9wUFgRuBAXjkYI6VfsasY81tDNYqjOVubU4uewmiyzZDdSAMcVQjyjK3IUEDGOCDXpDXkF3N5SqhUjAYkE/wD6qz7vRSq7os568dMjtTjXezNKkFujjX2YKmPLc7h7duKoxIXLiNiPTJx+ftWtdW80TlxjOOg7D+tUlcNI3yEsuNx7E4rqpvQ5Z2uYGp6M15A7RYDD5gp+6Qe9fP3ivT9jGZ02sjEPjnp0P0FfUM7lYxGPlJ7dsHk/lXmHivTP7RimW1CsirkspGSSM5/AV6uErNPU8HN8GqtN23Pk69jCsSMYyefassjB610WsWUtlO6OON2Pof8A69c+eCQRkV7l09T86qRcXZiqdpDH8assynG05Bz+HtVXjGVPHcGjI7dqadjMugEgAcnBH4U8jzoosoFZImjbAxu28qx9+1QxsxU4boOme1SK7BipPDDafpWmgivC4Rx5uSuCD6jjj9as21zLbXsV7GokkgliuUU8B2iYPtJ7bsYz2zmqypufD8E8H2NS+WUEM3Ziyn6qef51HSzGfsD4L8Uaf4t8NWWraXJlL22huIy3zNslwcOeGDx8g7ueKo+Po51sYfJDbnnyCB0wn198187fsyeJLF/Dl5okl5HbXmmMJgt1IsSG2lz0JxuBYbcdVwD0Nez+NvFEF3FYx6V/prvI6/uTlI9qKDukICjHU9TXnzhadkdkJLl1ItJl1Ce9BnOM+XjIwWMeQo+metNBeVX+0IqyHzNrBuQpbHf/APXUOjMlvNCbqZZHjDB22kLkZOBk+pwPWtVNQiSNpgjMSSzA4xx8xHf078VS3FKRzl9OytMruGCIwweAAv07GueW4S6uj8pkK7uCDwB0/wDretXru/jkeaZ/uOo6EL9/qvHXr2rK+2iMCKP5T5TSIc5HXjI+lUTzFFD+5VkG4+XgA98nH4VmW8RS6WMjBDk7RwABznnmr0MhCvCNwKrCrDHCjJb9fWszVLlU1GKQKWjztZsdQVPGfSlfQcWeixII7QNHmQPHIQVOchh/nHpUfhHyz4bkjRWVkvAvzAZwQeAe/wCNc7p2rufKtvLZ2a2KoB3C98dB9au+EBqMdldLdRGOOWYSJnk5XPGR7VUDWTuro//X/QlJHuYreSSVlbDOSUAJLPk4/A9KqalLHd3cNozl48s7gOF
LGQgDGf8AZz1rUm020SWBpY0VQUUhghwPm9B9DVOWCzee4kt9isrfKMjB2jaM9MY9q8JydtT0Uuo6FIp9RuzC8gUeRDtypOc88fTHP4VOy3X9ozzpc7vLhMaxyLwGlYcHGc4K0abplteSSXM1urI07j7pYMqjAOeo5Hao9PsRH9oLuqlpQqbCQpKKWJVXJIPPrzUop7s6DTbO5udQkeQKWhhiQlByC43Eeh6+lXbKKWK4luHjJ86bfuXKnCqOuOwzik0W1eFb6cSOWlkIUAZUKqKOg6896v211dRyEFonRd5HUEZO3uPpWxhJ6mS17a3bm0uICw87YfMAbnjkH0I79arX9kjXUQto8RkiNtuQRnGOncYNbDzQJdW0QQBpZJmPPVRj8+TxVKZo57m3mXaEExlddxztUNtAIPXOKYyzNHC3zxBQVB4Zvm4J4z6cCrPmyRaXNK4CsTjkKMbfckfrVeOMQO7y+Y2yBE2sQ3LnJIJ79KvparJpiSeVAy/vGDMNzHGeTx1HpRzO24rI8i8eiEwwKchfsqJ8oyQDg/jxXxl4+1G0hN3HEY2bKRlnYKqkksDjud4X26ivtLx8N8hhcjaixrxwMdcDB68V8IeOYori5uVhAdZrqAupPzbYyQQOnPOTjvXJN+8me7hF7hlE2bWun27W8ZnRUWSQucHYuT17HceccZrHia2hmutU28KXWFVO5RvIQcHncMHB9K1YLjSJrtFl89lWKYoCcEHcApOeOACB7GoJF0+70+ULNHuBjQhCcJgE5yO+fbipudKg7Fa3u7p7KWJ9pjnd2OT84QkZOOTnnBNei+A9MvNV1nTvD/kIyXvmQrJw0jiUgNuwBuCpnmuV0+G1hEzW8iq1tGQd7ZYkAZBJXoc+te6/s9W0uoeLotYuXH2ewsZpuDna8pEag5HBOCeKl7lNcqbPtq4SO2toLG2wkdsqJGB02R8D+VctZasJpp4Rhh84JYnOOpArA8U/FPwnoEn2e/u44ZZfkTJz27ntx7V4Zpfxj0D/AITH+zFuFCzS+V7qSM4K9QORXFiasozXKbYDCe0pycvkcppXiWbSvjRp8d1JsW4upYwhGMbg2M+3rnvXpfx38YHw9c2l7LKRDuTzCDkBGIBI9/6V4p4+0O6svi1pmoxcRm7hmVugILAcY7c10v7VULT+FDJGuXWIYyxGAfpXjwlKVBwv1/U+zqUqLxdKpb7DGeNb+2lij1ScrPbZSVZAc4UrgH/gXQGvm/xP8fNP0Rn8P6RbTXuoSPtjtbNGnmdhx+7SMM74zyQuB3IrzLwv4j8US+AJJvHt5Jo/hmIEW9wyBtQvYlO0GzjP/LIngTSjyycFA+QaqaDpfivXr9PDPhTS5fDsGqRTXSaNo0wXW76CPG651fVZf3lrGVIJRCsgBGOSFHu4TJPe58Rt07s+axvEUppUMEry/L1Kfiv4keMoMQ6+YtGuJiWW0vblZb0qOgFvbl35/wBra3qBXHprXxI1VRJp1jrFyvXctjLbqT7CbZkY71614h8GQfDJ5dG0ZtO069YIb06bDl0vJV3OrXUpaed1XlpG2lmPStbwd4bgubOGS7aa7md3lea6uDIXVP4SXY8e1d844egtInnOhmNadq07eh4JrOvfEDTkjvNY8M6pGkS4eQQStGx/vO0QkA+pIFcrY/FN4ZJJktpSJCN5inyTg++RX3alhBpsSXVnKbaVG3hUkYZJ6/dODx2xzXmXjLwDonxStHsraOz0vxXknTtQ8sQQXkuR/o96EAH74AIk4UtG5BKsCQ2FLFYerLlkrBiMBj6EHVpzvb7zyHQ/jVocUJh1VbmBiciQRb+/+z39xXt/hD4m+HdffydN1CGYgBmj5WXPoUbB/EV8D3cNzaPcWOpW8lrPbvJDPBMAHiliYpIjYJG5HUqcEjI4JGDUNm8zWyzwW9w/2f5WmiRjsbt8y8g4q8Tk+Hlpexy4TijGw+JcyP0l1NbWaHzYmGM53VzrWs0O4EA
5HGO5r5G8FeLdDia6tvFHinxBonyobKXT4heIJBu3LcQTLIxjYlQDGAw5JIFd/qfxD1/QLeC60vxbo3i23nZYmhW1e3vLbKFh5yF8leNu4KBuIGK8/wDsWpHSnJM9qnxbQl/Fg1+J7PMjzMq7c4HzYPr1rnLjSorBLjyt22QDcnOM9MivPtN+N0afJqejlST8z2swb64WQD+ddHN8WfBd+nlO11bq6EHzICShPfK5z+dZvBYmDs46eR3U87wNZaVLPz0PG/GWiKZ5XRccFh/vA9/qK8ZlXY2O/evpPX9Q0C9tG+xajbSB13bzJhs/7QPOa+fNUt1huGIYMpPBXnOe49q9XD35LSWx8lm8KftOem00+xmZPU0EEGgDBFWgpIHA5GD7EV0JXPHIAHHzqc4q4NkrAhWBGNyjv64781EoIBb0H4daUk4DxgArzkD5lNWtBMllhDWyzo3zbx9cHI/QgfnUwQnTQWGNtwMNj++h/TIqnDMqsdy7wc/Ke4br/jWnLctNBDYQMHhLKyjGGBwVAJ9smmrbgj2v9nrVYLD4iro92oeLUbaeOMbUc+fFteNV3g4LDd09K+2/FarJe28ciskio+5AchApA4xheSOTjrXw5+zzpcep/E2GUozpZ2086lRnawZUU+3BOK+2fGlwZZ7VvOw8cUmdqbcgSe/vXDXtz2OmlsUdP2yrGqg5LMzbuoATOTzjrV61tXmzCysf3W8rnPDNgYz/AFrEsoiY7lzIQFaT/VqMDcOcfia3bVbW1imliYu2EQmRjyCc5/DpzUFvXU4O/s4pV1KNbj/VyL5bIAdpGOPpxzisTUr6xjvQhlckQ5U4yWJXoPeujlmht/trghcSGeQ7OAAp/wAPxrjBJ9qvku0RsqisNwAIDRkfrmr2MroyU1WW4uJ4Ujk2xm35xjOcgj3P6VqT20lwC105UbjtDEKeOASRnpVYXlsJp8hmw8ZxgnAQZP69BV9p7cFZViYsQCSeOvP+Rik+5a0OpsIbOysd6s8kiQSHzFXAVj7n1rY8M3Qm8JPPGCG+2Ishxgcqe3PU+nFc9bXxXTrh5E/1cZHzMe3I6duelbWhyyf8IvawbFSNrl5MKuMlVxzzk1UI6lzatc//0P0MeS9jm3iPcIjFj59zA8A+lUYZNkl7eXsYdQ+6Pls8c88Yx+NbypHc6hO80YdzGkkgEu37oJ7j+90NYcMNtPYuZo3j3REAbty5wTnIPODXhSfY9GGxr+G5FjsoUDbS0Pn7N3TzM4/AdvWrzhikXnTqY41kmkL4PMh2jr1wPyqkun20cVyIJghjighAfnA46jr61k6/ok0b2NtbzMxubu2hOCPljiVp2yDnptqVvYqyZ2ekaetjpyRWqIqbjtVBgYJJ6Z9BWpIYRCkbsoYhQS2VPPPfgD8a88i0TxHZz24k1DMc0pK+X+72oidGDEhiWPUYroxqN4VlW8lci3YrudNpJ4GAVyp6fWtjGUfMnCo+rwhZNwhtnZSVOF3PwMgkH7vfnFTrErKsiSRyyBdzp13D7o7479q53T54pdSv1MkcpSOOEN8uBlSwXgjgZ+vrXRSWca2pePyYpCI0JCnHBLDnPtQx2sy9cCMPOdojWM7yeowi8DntntUkcyppUEE0mJGtQDwBkuecY4B5rndVW3Ng8X2hS8zmKMM5z2LdOehNa8siu8aIoI3xx7mJyeM45+lD0QRV3oeRfESYpNNbom4SgnJ+bBQdeK+PPHek2Ul1G7SCJ2l3RlOAwGH+Y44IC8Y5r6f+I01rJdfZAmfNZllcSdFJwQCP6V8pePLrSra5gtZ2eAFHGQS5jI4DnsfQYrjl8dj38LpTOGWVba5laD5AbOPcG3MzSMx4yw6gE1pQxRjToLVljYThkaTygNqx7clm2gDHYk9aninhlvb9bMl4t0asWAJKhVVCQcYOQcn1rpfB2hT6hot1q2tXK6ZounQy3d/fMPlt445DuCg5DNkYGM7mOBVRg5vljuaVKka
ceaT0MHTbDWtWsZLbR7aW6uL7zHVI41kbGQRvJIVF4yGYgcda9Z8AapF4WhutLuNVtJ9Sm2xTQaXHJqMkJj6LMbRGRSpPI3d66L4Z/DS6+Kllpmta8jaP4O1SWOTTfDu4xyXtiyFvtWqMMPNNMgPl2uRHGuWk3EAL758ONC0JL/V30+xtbDSrKVraxsoVCokUB+8VHQnGB0r0KNCnTl72r/A82piqlXSHur8dz8+vi54M1zXJRqI8Q3sVtDIJZF/4RrUJYwRyW3R7mjHuV2j0r5907w14t1nVY4fD2ow6xcJKZlntL2NU+QjJJneN12jqOqjselfrj4s8U6doetXKTbLdUOwSsUgTcBlzuYgnLegIr43+O/gPwR48s7rxj4aWzj1zShHdaglsY3S7tcgPOUA4liyDuA+ZeDyKuksPWnyTjbzX/BM67xmGj7WEr+v/AAA8X/H3wt4S03RhrsMnijX7SRbcjSgDaJPGnmfNcPgzbcYPko/J6dSOG8cfGj4h+K7WJ9Q0jw5qEUxijS3tp5ru2tJbj5oftKjHmttILRkKuRgmvkjwLoXjH44/Fe28C6LNLbGSS4ZUR9n2a3tiEllZ15GMjOD1IUV9B3/wgufg7400+OyujdO0vkBWOTKZFZMH1yTkj1APUVhFYKlVVKFPd9e5blmWIw8sS6tuVPRaaeR5w99q/iy+vfFN1qb3SQXc2maTLMgZZ7i1+S81BlwB+6kBitowAkQXKg7uPq/4Aaxp+k6XrOpXFlJaXl9/ZEPnld0otnlke4DscEPIISzj0JPYV8q2V3/ZfgP4dwQgZbwPbagqk8yyyXIW4PqWLEbj1Ga9u1CPzPBEQUus1/rlqvlqdzF4bJpAo2nP35OB1Nbe1ftW2e/w7hoRpc63seNeO/F1xqmrTXVxmZrqZ75lZgHBvZSyqrcHAXaB1wK7HS/jV4G8O6Fb6T/bkkN1DF+8t7GxV44yx5HmyKzOc9WHBNeR/EnQNRstX1MXUflSWkZZI5FZJVVF2kkHO0DBznGK4rSbzR00XVvC+sMsMtxNbanpszwhl3iJYmUuOV3KCBng9ODmsamHhUbUzlzHH1qNZumj3O9+Mvh28nVrbWbwbCG3T20ZQk84PyKcepzWEniu8mvmntJIb6EN8ptW8uTHX5VJK5/EV7X8BPg9oHiH4YRap4q0aG4kv5p5IJJCdzQ7sAqMDCtjIHQiofGP7MngjTp7jUxr8/hhgqPC42tGSCd4O714wK5ZYahCXKtzLD5riqnT7j5Z+PMNtquvaR4/sUZIfGWnvPdqU2FdV0thaXhYfwtIPJbaOMhm5zmuq0CzsU0GygsPL8pLaPlGHLEZYnHUkk5J5rI+IMGpn4RaOmozG7Gk+Or60t7ryzH51pdadJKrkHlQXUcHqcZ5riNMh1CwjhubdWgDxK2UzhsjqR/OqxceenGzIymp7HFT93/ganqdxoVhqKmGeCN8jDB0Dfr1ridW+G0EQ8y2Elv3/dMdvthTkVes/Ed7C+JTyTzxx+dejaTf/wBoR4kYsTn73Td6fSvMdSpS1TPpHQw2K0cD51vfDGs2GWikS4UdnXa35jiudedoDsu4HhI79Vr7Bm0O2n+dwBxjIGOtYl54Os5lLNGjjO0+5Ptj8DXRRzR7S1PLxPC8Ja0nY+VxLbzgAMr8/wCetNvIUaFnHDDpj2r33VPhHpdwhmgzESAQVODz7CvJtb8EavoiNJFL50QbbgjH0rvhjqdRWPBxWS4nD+81oeeHpmrcEoXrySCpPXg/4VVZjk8Yz1FIM54oTscRqNhrdYYxhnYk/RR8o/E5NV1fIDNyVIB9SDUYmYYDgNjpnqPxp8rmZsQwhNx4C5Zj/WqugIZcByUPAORViPGCy8dSB7it+z8C+Mry3+1Wmj3U0WM5CryPoWBNc55c1vO9tOjRyRsVdHUo6MOzKeQR6GojJN6Gk6U4/Emj6v8A2WbW3+0+IdRKK86R2kILNtwhZnOB1PLflX0
d4sJkmgIJ+W3ZuOeC5z+frXgv7L9jZC31PUDOBM1zHE0BwGdQPk2985z7AGvdPEzu2qQRH5R9lwSM8kN+vWuaqvfNIP3CGARrbMjSyb5Jd2ANvcYHv6Vemmtnt7vEpKKg3KT/AHSTXO2tzcTYD7I0G91HJOFx8x46cVaiKzrdP5UTkeVGxy21w2evvjvSE/Ixb23T7OPNlJdoQG45OUPBHvmuZJFnCQmSUiRRgc4CY7muwuXg8u6hkhhLIwKlGY5Ur0z7Vx9xLZSNItxbtGm4IOCOcc8g0lclWTOeWYtPISCCXAY7zwdvXsAa20uIpCyMQXUZzj5gMdefWqE3h+0ufN+zXZILKWXO7J6/hip4tKmQmQP5hbAwF5Xj345ofkaI6CymL2V04ADG2BUA5JAHbA4NddpJY+HrOQgACRxwc5BUH9K5ux03UUt2OxY42g27n4yTkHgcmtbRLO+07RY4b0KcS/KyHIbI7fStI7hLY//R/RJprKE3l7E0iTS2yJ80UgGOSAf++sYrPt7uKPT3F00ZyGQcMo+cBR1B6enrWvePJb2kgYlNzDGVZgSCeuF449DWJLJduixW8yKHudwEgJ2hM7Tz2O2vn2z04o37W2t5AyJtdWl2sc/MREFA9Cec1OkKza1DO5lEcRmkXcvBIQRqR+DGsS0d7mIT3UcbFwdrxkEHLFsr3roIbZY2lbc8YWMR9D0HPXnk04binoi1dw2NxqunbpVaS0WaVc8ld+EBwDkAjcPpUtjaBbIbYEIuJC3ysVPds/N0PNcvDptwNV1LV42jDtZW9sHkU/N958ErztBk5GK6nzmgWKHMKpEAzmFwMdh8p+npW6sZSslZMzLe6dLXUL2exmgeSeQKHVZFZExtZcZ+UgcZxWhcSxBIBLMVjYbyogAAwFBJ4JyTTNl8mnoqPLmTCA7Fc5Zhzx2xmtK8NwsolEuQHiQKY8EgZJA56Dj3o2QX1KFwdIS5sI/IMrvO+wIncj+e0dulOjuI5SXEbwkz7vm4ACg9OvBFW7zzJLq2fOPs2+XcFwo+UgZx39MVn3exLBfLUg7XOT12kYyOSR3rOT0Lp7o8G8R2c1zqJYqu8hnJ3YACknAPXnuTXyd8RLzTJ9bm0qeKEmNIlQGPeqq4BI3cYPOcV9c+IIGhuvOZisSR5cH5mwc/xE4HbnFfH3iu7g1DxJf2luyyg3Ukrq2OEAVVPHQjGOPXNcjWuh71G/Kc3pFnBqc941mfs9tFcrGyFliLomeF55ycEele5fFVbI+D/hn8ONOUR2Xiu+S81PYMb4dPiaZkYD+Heu3Ht615NoNjbxaT9olVMqfOKKhYkuCQm48E5YfjWtHczS+IPBFnc8mzutW8nJLbBfWbSuF3cgM6lh2HOK7cJO02jkx1JzUL7XPs/wAPePPDngrwl4e1vxVMLG3W1aFJiv7uOWaIlWfsiLEPvHgCvNtJ8d6d4Q0fUPFAke5tkslmzbNu+0G6uAV2sMglz0b+EVva/wDC7S/i98Hrvwtqd9d6atvZQXNrd2RUvHLBEAQyOGR0YZVlI5B4IODXxSPB3jL4deBtRilC3unyjRY0msN7I8XmSeZvg+Zo8bhnHbvxXoRpQlUUXLUzhU5XKVuxal+MXjbx/wCMrGXTPCukRW+tyXMFtulMlzvilxL5hwdhJwFJY7uSAAOfRdF8JeI7Lx3PLrulxWEd9bzWM3ksJEdJoGXB4GBuwaqfs0eHdQ1O+03xNrkFpJ4XgCX2lajuEcscrx4aFogPnLMdwYnK8g19F+KNUGq+NtM0nRgA9xdxp5j8B93Tao69cknGK5a1SFKpFQXU0pRrVeaMtVZ9DxL9ib4Ux+HoPHfxo1KBY5NTuP7I0sFc5trNAbh1DYwZLtnUnoVQV5j+0hqd1o9/YeK4Iw76RdwX0kSZYMIJVkZR3OQuB69K/SjVdI0jwB4D07wfo6LFa6XapCqepA+Zz6lmJYnqSa/N/wCN2datZ4V5MhZdpGTwcjH
9a8LNsUo4uDj3ufR8M5d7bBVYyW6sfJHjO5tr7wxo2vaDLJLBofiTxB4eds5WKx13Gs2AJ5+VFKQryFzwOwr2e48Xrqnge0i06D+z7u1uYZYwjMztE1jsNyr4HzGRACM5XPHrXzb4MtlTXNW+E+vTraWPjG3jsbK5uM+Va6taSG40mZuVATzy1u56kvEOldN4b8S61d+GJfBGsQPDe6NeSJ5c4xd2cw/dzQZHDJvBDDnLAMpxivXrzvFVYnk5HXlh6s8PU3/VCL4Z1jxlrN6Yt97aNOs1zcorySM6RF1Hy8kuc7mOfrk16f4V+Fdv4vtNOuL+GyvUeQiGzmUJMIskuRhlIjJXkH8qqz6Tqug6EPD4hlgvtZs7XU32NJE0EiM+I2APUoCjYPpisLwXoHiCTVGkt2mgwWLPHuyM8fNj1/XvXK68uVtOx7M8FR51zxvfc+y9T8VaxpOlR6JF/ozWcAjjjgRESPHCIrAnKhfTr614/eQ/25di71OSS5mB6yszBcemScD6Uos9dVUtluXc7Ru3A4GOOM9OKrvZanCGdhJs5xgZGQOvvivEqYifNecj3KGFw9KH7qCR5L+0dNFb/Djw5bRnJvPFtxMiDjP2fTJ4+AO25wPrS3miWUenRWrMubeGJS235iVQd/rVP4vQf278TfB3w8t4XVPDOmnU9UiDGQJd6jItwykn+KOOKNSD2lrrBa3t388EO7exBz0z+Nejiq/JThB9jwMto+2xNeslo3ZfI8dfRZEug8OCnB2+v4V6hoOlrFD5vVifnX27Vv2nhl2PmKRtTluPl+nPSugTSlt1MRj2kndwMkgV5GIxikuVM97C4T2cuYpwojkOgOW7ey9M9vrSyAZCuo+VcHb2zVt1ZMcEcYPGD/kVQdjvCt2LZ49K5YNt3O92tqMVA6lQQAR0PQVxvi7QLjVLKRLVeNuQg7sOR+FdorYLAjOTn6ewqxbSLGWGMDcAcnIZuv5CvQpzlF3Rw4ilGpBwkfBGt6Tc6feyQzROhzn5geucHnp1qhb2UkgLDIAIyR2ycc/Svs7XvDdhdrMZUEn2gkAsvO8nO4DHB68DiuAt/Cml2sz+RE2FyBvAO7HTP9MV7McXBxuz4eeSVPaWi9DzaD4a3uoATW9wJFIGTtAGT0HXrXpfhnwDYeGZUu9WRZZuq7SH+bshA6H+dekaVo+g6VPE0CLh1DlcKyqSPm5PbNc7rF7evrtvcJ+8syQka4AGCcH7o6n88VxTxUqjcE9D6LB5RQw6VWUby+8sa7461vw7Jb+bpnl2br8kncc+g44HavPviZaaf4q0EeNdPjVbux2C5dePOt3ODn1MROc9cZFfRvjXQ7TV/h21zGgUwYLKeSp6ZB/nXzF4Lb7Umq6BPgwTQSqwPO0EYJH86jCTg17SGltzfMqcqkHQq63Tt5NHuHwN8LX8fgbTtY0p8NeTs8qlQzLMpKsATyFdQPlHQ8j0r0nxDHfHW43kk8tUtkBYkDcD2x65/Gk8CWN1oHhXS9HhDYhAbcjkhyMkOMdiPXk1FrQln1cyTN5bLEhILcdDyc13yfNK58AtFYhlkliS3kjlDNh0IAA+915A5p9mvl215iU7S8eGOMlcE/nVCaJvssSGRCQ6uUJJ27m5HHTgZqchdtxbxjCtKGfbnIDIRwaCbmRK8RlnjBOHPXB2/dB6+tYGoQwr8wPyiZGPHTjHHvW/ItvHJK752Iq8nuWUZrlb27cSSRRLtUyIBhs5785HpTTEmZcipFIzIxRt7LnG3p0P4V0ena4sYYXe19oHJ5JBPJ9K5i7MhnyVJV85B64z159aS3DSzKowByvPofehotHq17qWnXG9IrkKcJEoZcADjoPXvXS6mvlWun22/cwjLkhduAT0Pv715F4isYzo8+3akhKHzOmCMc+v1rt7W9vL2w025vyWl+yqhOMZCEj8fr3rSmtbkzlof//S/Ra8uobW3ghicr5dwoJcsMKc4xxzwDjmoHnSW0nZP3i
syKMnBBK89eeCaa9pIZFgY3Tx4y7AowJjAAODg89qxkGrS6ghU7YJG3HzITkv65HGMf1r55y1PWjHQ1J7S0R7SzYvHukjXcpIBA65x67ea6oS2+3iRWEqkkbs8MSq8Z9KyktjPqtoUjVxAHchHIwdpH9fwq7JYF7iPy1dfLWNEVirDIySefpWlMzqdESy2qJOVaMJHLcqipyBxzxjH8K85qa8R/LIKIy3TKgD5bC4yc5zwOaoC2unvbTIZYitxOfkByQFQcg/7RxVjV4ZpYUt4JvLMuIxuU5X3IIB7nitTDrqRW9pbrLAWt1X928xKZByuTwQR68emKla0juIorjZzIS6hpCDg8Ack8kDFNFxeRtMRFC4WEoMnGGGBgeuc1c8p0torW6it0xHl2Zxjd34OO5odilzbmFe6fJPpt21iohmdIo4yxaRVLsB0JxwCf61rarbQ2tjePDuUqjgMzFgBtHY8DnsKxbrXPD9nbLDc6tp1tsl3uXnhQBUbIABbHXpxXNal8UPh1CtzHceK9OlLRFCPODjcxGAAgIyAKzknY3ppt2SOM1+ZIXWSd4hHtxKwByAB1HB6gA4r451bTQL6bVLKeWN/wDSzvKAqFZsRg84Gea9/wDFfxO8BPa3UVvrUck7lvJyjeUcDbycY7mvn3U/FGkMpxqRnimQLceRtiBGcscdRjJycViqVRu6R7NNxUbSNqMC38PNBFJGVUQowT5E3FgAMjnkmsy61JYfE2laldRtGbHVrBWByyCKffZnkgbcCQH3rkV8dabpUPkxOZ4IiGIErOWKHIwuMD19c1zGrfEvQrvS9TtJC8Z1hURZ2G4w7G3KVPrGcHmtKdCpGakTiKkXTaP0OvvHsnw2+Feq62LMX/2JhYXELOY9qSyNHvBAPQEHHQivnn4ffFGLxH4VnfSRCb6yDRmG5faC9qyzRlmA6NGeeOor1jSmX4rfBbWQg3Sapp8a3McR+aO/tcJIo4/iKgj1GCPvV+a/hu/u/Ani4Qzxym2luEtpEHKOZDsAYk45VmXPqRnrXqU6KqPma1POp4iMPRn6g/DlbQ+B9E8SraRRHVLBblY1zsjkLuZFX/eznPetf4S+HLnV/itd+ItQBFvpcbXEY/hV3GxOe+MsQPxrY+H2lfZfhV4UtIFEkdvp0aqFByytnaQPp2NezadYW3hDw/PPLiO4usS3DnqBjCr9FHavAnB/WZSv7qZ7jxSWG9lFe9LQ8d+NPitYhJDC+9o24UeijOD7ZNfn3resS6jdyG5x8pbaO4/CvoX4o+IRdzzPEwx5nlkseXLHGfx618tanGizOYRtEZO8njPtnqK+YxFf2+Ic+nQ/RMpwiwuEjTW55X8S/Bdh4h0trmywt7ErMoBK7geSuVIYZHQg5BwRggVj+CdZg+JPibT576eGy8dIIra8+1FYoPEkVsAIpNwwsWrQogV1+5dRruXBBEXot0bedHQfIwGfbjnrXhHjbwTELh9RCfI5DuR/eUhg3HoQCD2IBHNe/gMTaDpT2PlM/wAsftljKC95brufc+pala3Hi0arLC8cn9jTwvBJlZIrhhIgWRGUMpw2VyOPWtPwJpD6dFBcWy5jubdklIPIkQ8DjjPGc+9fG3hr4u/EbTNPg067uLHxXFb/ACwJ4gjZ7mKHbtEcd9GROvPO6Tzj1HHb2fw/8YvHsKqum+ANDyvCmTxHdGIE9yPsuTkZ560VsNK94zXz0Ko5vGcGp0ZX8lc+hI9Onu7zdIqlTIxB4zgnoRjmsL4ieLfDHwl061v/ABFbDU9fvGI0Dw3bAtfardgZQFBlorZDgyysAMcAEkA+by+MfjjrRW3Gr6J4NtnwJDoFibu+VeSQt5eZjXPc+RnHQg4I6fwZ4M8KeFZ7nX44pdQ127INzrGpzvd38rejTyEsFx0VMKOwxXnzeFoP2lR88lslt82af7dil7KlD2cHu3v8l/mcb8Nvhbr2jLqPivx1Kl34u8UXL32qS7spF5rFxCm
c/LHnAGTgKFBIUV61/ZFnZwlI41JDYwB3HU59K0LzWkkUBcAY+cr0A+voP51ztxrKRkSltinIPpjrn8q8DEYirXqOpLdn0WDwNPDUVSgtEOu4LKKMxqoXcSxHUc8Y9fwrm5zArEAgueT2x6Ae3FVL69eUtOCAijlSM5J5BB7GsRr9WIaMbSNoPPQeuT1Jq6VFvU0nZbD7jzjEDNFsKH13FufX0rEmdgRk8k49yT3NXZbhpgjkF9mR8zbOnIwKyXcgecC3z84K8D2yK9OlC2hwzld3IPNBDK+VJ496GvltiqBgzuOFbAyB1z24qk8rxh3lOOeM8fn61W2mS5izucygRkKwyGHJHPt1xXdGK6nJWk7aGxeB7tEuJXwkSLzGQcE4yucdh1HUHpXIPD5k0kURIXllHfH8q65GliEgh2hUQ7SpyXB4PmDHQZ6jnPesazhmkmDSoocnacH5Vwen0q09DihbnOH1rxCui6bI8YJlRtiKD1J68emOa9Q0fxD4R8WaL/aEKLb3oCtLAoG0OoxlR1Ga828deHzb6jbmRR5F6rHaenYH+deYaYZNB1iS3cldrdj/AJ61fsY1Kd4vUr6zUp4hKa93Y+0LKUXfhXU7UEECCQgEZPK9a+TfAMDIdZ1UdFV0UnpubIr2/wAJa6Z1ljVuJYXVx/slT1rwO7vR4b8CyIozLqNw0akegckn8hWWCpNOUO7R0ZtUjFKt0jGT+9WX5n2d4HW9g8HaPaahOTcWliI5t/DbokA5PX2z1HSsrVJMavKCVUmNATyzHI6D6V574b+NPgn+wrOy1Ce8truC22MXgaRWfABO9N2Qxz15rUXxd4V1XWDeWWsW5yq8GQIeB0AbBzXrOEr7H5pzKx0YdniE5G0TbAoHQsOgb6irM7MYZTGpAV2Y55P+r6e/XiqcaPKkbRSxMjtkhHDZyQBjB498VYunubbSpA6EyPM0ZOcYUrj0/wDrilYlu5z1yh8mPzGAMy5b6BRke2K5dwjSJ5uSN4J5BHHA9+K6bUAiww71GVhY7ifUgdcVzF1hXbG9VVgRznBBwTn0qrBF9TM1OEif7TECyk7QD3HXP0FWbSTF2gSM4DKd2cexP0pbu5lZV2BlwzcMQpI9ajVGW4jKgnkFyW57jJ/wot3NEzpL4wpaNeSOAW2qdozk7uMe3rXVQBjaWA5CGAEb8jGWPQdcGvOLmW7k0xo0VGMUgLIA27b7diP5V6XAZWtbESD7lugHGOMmtIIzqPTU/9P9KrqC5tre+lidpxHAqRhWBJZid3HX0rBvdX0Lw6beTV9XstOit1VZRLKofIAYAgkn8MV8YeMvjz4n1yNrMzNb28zHENhmFWK8KRNw7+4JGPTvXzlqniDUY71rqfYBLuSVrZdzbiP+W2/cef7wBz6ivJhhHe8j10klaTP0f1X9oP4W6Ncyyy6k13I+6JBDauFYkg4EjYRvQHNeReJP2v8AQUadNH8OTzqVKs1xdJCFJG3cVjD8YJIIPIr4FvNTvJp5njiO99wAJ3/Kem0noMjkHkHgVx17rDzRATF5Cq9GJVhkjgA8kex9OK3jhkifcPtzV/2zPGwVf7K0rSrRRF5cZffK4RTnjcQOR1zj6V5trn7WfxZvWSX+2VtTI4CGzgSPaDn+8rcnpnpXyde6pM8qxl1yRljGcxyDHygDsw6Z4ya5q5vRFM5GPMIGHJwSPQn9BVezSE6kV8KPoHV/j98UL/zHn8S6k25ydvnMvTk5CkBea4u5+KWvak5a91O7m2Zys07vkd+WJ69K8dbUZjtMeUdM5DDkjrz9KpNclz5sWUYctxnGT0+npRbohe2Z6ZN42ldHzkrJzyoLA5/I1jSeLNRikJjcrkfdUkA57jnH9RXBTXk7Pn5iSdpO3HIqI3MwBAJwRkcgbT7fWhyYnWkztrvxPLcxhJG3hBypPJPsTzwKx08WXdsxjR3RcDYyuenofWuVkllBBYdBjr0xz2qKTEnzEDBAPXoaXM0T7Rs
7KXxTf3OX81kkP3uTtYj29KypNVuZwwaT5T1U8/THoVrB3KW2FwMd8GkUqo5f0OMVNw529z7U/Zl/aFt/h1qD+HfE0ytp2ooLaXzZPLS4TG2IrKSBFdQjCo7YWVAqlg6qa+m/in8D9F8ei41TwnqGlzJeq0qzNexW+1pFzvlRvlB3cl42GTztBzX5IObVfkzw2Q3Iwc+o9MVMuptZ2sun2lzNFaupUwpM6xMpHQoDtx+FbUq/s2Yzptu8Wf0S/AXXdE1HSdN8HrrFnq2p+GdOs49Vawk+0W0Nxt2hDOAqPIWUttXJAwSBkZs/HbxbHploum2sro8gy7r82CQcA+nvXm/7PuqxW/wZ8F3FuFiS40PT2LBQMs0KkFsAcse5rgfjdqV2Hkkkkw7KRn2P1r5LNMeowlTgrXZ91w/lTq4uFao7pI+e9V1mK/BiYh2Dl946bh02jtzWBPCsxBfnIG7OMn0rnbW/iW4kMpXYmSATgcVNdeIbXy1mnvLe1UEYDOFGB3OcGvEpYeW6P0OrVhHdnEeJLSaJvMiRg2cE5+UjHbHH4elchdamLi2NjOCzIBuDdMda+gdEm0vVdPZkSDUomc5aGQOMf44ryXx14RFh5mpWDHyjh1X+JcjJUj2r0MNXXP7OSscOLwzlS9rB3Rw2naNp10PMhOOe3PWvRtJ0W5twvlDdgjHoCRz07fWvHdG1Q22px7sCNypAHUE9a+stBga40tWjOH4wR3B6Z9/6VWPqyp7nFlVGFVNxRhWX22JQZWwV+7noSO3P04rbsr0yxkxjjaS6qSuD2x3wfWtKeOMEmYhioGVGBjI5x75HSsiaaCSczQsFmVQxb+EjuGxwfYda8Wc1PoewqSRPdag2BDbruU5yxyAvT8/x4rDl12SAbJT87HAIHU9sjkVQutQE7SlW2h8gkjsfw4x29a5C+muIZ1U52bfvA4O70zz1/SumhQT0ZzVqnItDf+3zYkO9mXduAznI988VS/tNUmG4sA+emSM+/H/6qx3ugcbX5YZADYPHT8Krx7vOVYyqhTnK8En+Wfx5rvVJHmutJM7D7TFIrPIAQmQGPUd/xweQarNeK5J3EHI+6cqQR6Vmxzbt0RwgTqScfe7flTZz5blkcbQArcBXK+o9varjFXMpydiOe582R41i3JtYnngEc5rOjuW3yzs6xrMflY84bGCvc4J7+tTXGURiMsZGyO4B9vwrOhgXz4rcI75b5hgcAg9u+OtdELI4q029jrAhlSRWQFZV83zUfZgqoO0Nz7bgfTFZnnx6ebnbIsgRd25SNrMVByMcc1qWBtQjLOGlhjUkMvU46DIOP/rVyF1dQW94zSxuYAd7KPmyoPB7/jRF3ujnfuyUjobK2ufHswtS4ja1iHkM/G4j72fr2rz7xp4bm0opNdqFnLFSQOuK7WHxNp89yh0ppQ7E5SKMkjPbI6GsLV08SeLryOBrSSOCE4BYDk9D3NKLcJb6HdNKrBK2pF4TmnsLG51M/dhgdsHvkYH614x47uh9tsNHDf8AHjCPNA/56SfMSR619Aav9hsooPDdu4doALvUHByEji+YJx3Yjp6V8o3WoPqWqXepzHLXEjyD6MeB+WBXbgY3k5nh8R4hxpRoJ/0i7FsyMSqcDv8Ay4q2I+xAY8DGcjmsQOv8Sg/UUEgMDGzL7A5r1rnxdjoImlgO6CR4m7lCUPHuuK6iw8b+KLCLyIdTuCm7ftd/NBP/AAPP8688W5uF4Eu72bpSG5mxho1b6GjQLHsX/C0PEcqBLr7NMqqUGYcMRnPY9ciprXx0WVhfWwckHaYm28n654rxoXmzqjDjtVhbyEnDSYPXkYGafLF9BarY9sPj7TLkkS28qADAKlW59+nFdBp/jPRzLGy3DJuABMqbEXnuTnn9K+ffM3AbHR17bTyD6YNTKG28d+cZ/pSdGDH7WSPpy61/Q7qGFbC5ilkeUZG4EjPfjt65r1bd5jROvzARxqACGAwPY18HqzZzuK4
HUjp+NaFvqt7YkG0vJYz/ANM5GX+tTGik9GTKbasf/9TyHUFuLlWj2i1Z23bMYj8sD+FhkZz/AHcFTyeDXGX/AIhs9FAjJUynIkAkL7lAwCGIwwzwQc1xet+KdS1mZrUSi2tZHAmVm3Qxt/eyBuL/AE4rjrzVLYWsbXAMkkPmASSHG4N0IT1A9aysdsqmuh19z4iuGcS2iqBIHlEa/IpHTg84B54rj9S1CRy0+5EJUFfLGPm64JPXb2/LpXJtrF6VEcBwoxg4xwO309ulZ1x5853yOSTydvFQ5kOdzfn1S3bLuc5bcM+g9cY69/pxWa+sRhFQHDJuwVHUt1OevTgelZQtl4OMnHXrR5eASB/+qs3JsSZMdUlG3G9gi7ACcgL6DPQU2XVbl2Y7SN/3h29vyqsRhjgUhxn0pO40w+2zYKbAQeeTzULzzvn7ozz/APXoJx9cUwkf0qCiMyXH94D8KRvNI/1hwad+PWmt93kVNtB3IipHJY/nTGjB7k/rUh5xTSc81LVyxPKXqBSgKHU4xzRmmZ75qdAP27/Yw1m01b9nrwvayMJl0+2k0mcN2NjI8RXPrwMeldh8UfBV7qFsWVvtEKg7Js5cL6Pjv79DXzR+wLrTS/DbxFoi5I0rxDMyqT1F7DHcZHtuc19ua9eXFxbxxWwIdyF+9nDHgEADB718zmlJOUk99z7fI8TOnyTh6M/NH4i+ENZ03Qru60cA3EQ2o2MkDuxHfHavj+18G6pql0914hZguTullPmOx9gf/wBVfuV41+GnhSawkidxHqVzbG5RWOAipwxx0Kt2B6HpXwD8Tfhsvh/RT4lgnR4pi4ijHDOUGRx7jt1yKMJWnQ/dTW/U9LF0cNmD9vKTsuh8i+GrXxR4B8Tw6t4cl3QCVVmReEngYgMskfTOOQeoPQ9QfoPxf4ltLqD7QkhKyKRjGML6fga8A/tWa4dmV8LkY+Y5J75+lalvbSXxUXLuVJwBnCj/AOtXbiKCnNTlujnyrHU8NCdLD3al0eyG6DpUmoaissRIVicegwa+v/Ckwm0xEX5XiTa3ZjivF/DOnwQbPKALAc5GSMemK9f09msoxdwoXIHzDtz0NeLmlRTsj6LKaHsItvruP1aPawZS5xwSSMAemK5yK8ij+QqzKjHk7eoyRz2qzq87GYyuwCN90Ke/6c965yGUwxkFw+BxjBHPTPrXHTptx1PQqVFfQhvru4a7VVjXy+QNnUA9Sx/yRWZdbJgzBGUAkc5PzjvjuM96vSMVhMi7YyGOQwwSeue5PXoOlNITG9224yVGNqr75HXJrui7WsedUvJtMwrSJ5SxkQCRMFgOCB7dxz61K0jhWeThWIIVAWx3PX09RVyeSGSQyli28BXwduOODg44rOvJGMjcAPIAuR6D1z2xXTe71PPqe6rjWnDIqQS7inBwSDg8rgeueOetULi8Mu6KR+R8xCZH8++elOjkjJaQvtkUjI7ZX3/oah+zeYh5VnYqFTkYPU5PHGPyrWKSOKpJvYlSV0jIaQbVViAepGBnjPX/AD0qZrtkMFwsmSzeWqjh8jnI+nrxisiNQsy7JGUKG3RnBUscjIPU49K2o9tpJFJ5UYbCyIp+YDsx2nsc49q00MOZvY3Lkl9OeKLKl5DyrHPl4+7jGDgjrXDajLdrcvc2qglOQwBXeFwCce+K6ra0AIEmIYwBtl4dB3z6gg5z3rmBMP8AhK7WKVlNtMXVQP4oh8vB75Pp3opaJkVNakUtz0PwZ4isXtfJubaNbljzJtALZ9f8a53xdp2uaYWg0i4fyp3Z9y9VDe9Xo9HOj6hLErfJ9+Nhz8vbNbUl4GRJJCrBsEnH+e1citGblHqe6oSqQUZu1jyPX4k8K+BNRdtzXeoAQ+c5y7PLgdfp+Qr5wiIBIHTGBXqvxV8VrrWpJo9nj7NYsS2P4pSMfoD+deUr8vNe5hYOMLvdn57nWIjVxDjD4Y6FrOetOGO/8qhUjr0pwrsPJJDyRzTgcHP
Xmot2Kepx1/lVJiZYyvofxpwIIxgEDse1QBiPx6U7coPAqiCURRHJC4PTjipPJC8K7rnsDmoCRnjgVYRiPmJyapJBclD3iji4B7AMMU5bq7HzSRJL6Y5/So95PUf/AK/Sno6k4bv/AJ7VVhep/9X8/wDU72CNtsKhEQkLGpOXIGCT3568+tczIklw/mzHPoB0FWSpd2lkyWc885x601tueDjGeOlYvuXdsr4KrgDjPeo3OecYFWGBGOc+uahbnjFSMgI4AJzioSGzx+VTleeOO1QPkZzzQO5GTkZPNRMATjpxUvp/jUcmCOO/FRItFYjmm4x1FTFR2IzUZz9azKTISMCo3Bx+FSkY4qNqRQ0ngE8mozg805him55qGWhD04pnfmlz2pBWQz9Ff2BLmVLT4gWicsLvTbqMEZUF7cRHP/fuv0s0yxGqXsCOvGQzepxwPYZr8vv2BrwJrPj2yBIP2TSJs8dGe4T6/wAH0r9VdO1GHR9NudVlZU+zws4DDPzAcd815WLhzVrM+jwNVwwvu76ni/xV8VCLV9faSENDpkKWNqVJSVgI98vquNzDBbB9Bjk/L/gW+g+J3gTUfCt3NGZbK4nEcbIWaNQxQnd6JnnP1zX1r/wri98a+Arr+1JUTUteuZblvlAKxy5278H5sADB7V5b8Hv2ek+Gtlquu+JdQMcQlm+zxA+WG3g7llycuBzjsM+tTVhGSb2aOihUlFKMddbWPyy17SIdH1S505kWPyZnj+UZDhWIVh2BOMmqVhdyWe9lm8x3bKnvjPIA9RjrX0l8VvANq+qz3mj3YuEWckRxFQrBj8qgDpXhV74VvheOWt3MXVSo2jd3Ix096qniKdSN7l18txFCd0j0TwlrMLqGcEeYSuTwD+Ne16dq9mkKhHDKqhT35PavjOFbjS9zB2G88luFQgYGMe/evXfDmq3MsFq9vO/nru3xYVgWIwW465zkV5+NwSn70We5lucSt7OotT03W2QTFYJNx67FGQP85rz9bpklZLr5XJIRD3x3H0rTe/njdHjAljkO1u43emTyDn1rMvl+03DTK3yDPyjoDx3rnp0+RcrPSeJU3puaAkRkSdpdrdAOfmz0BJ4FU7y4R7YLvUknPJyCVPG0d/U1hXUknyDG4pzwccEfiKorcXTsWVslG544Cj07ZrdUupz1sTb3UjSVseYZCHCsOPu7e545yPSpxerNuAYSGdQCwGMEHjp09q59iI4TLa/Md4ySTzznp9fzqYXLzS4lRQ0hGdoC4yecAdq25Op5862tjYtxsgaVyCsm5cleeDyR6FR+lPuM29rE8bkRB90bbtwlPPzZ/D2NRxReVpxdGJgjl/eLnOA/HPfPB9eO9XJWiaG3tg0LHJ2AdHDdj0CsuOaOoNaENvFZXLLM6yRlt0jEcANjjngZJ5I4p101tc7JIUx5UgT5/mIAHTJwSCeT3qrai9YtFbRiWMx52PjG5MnIbOSNvGcdKfMp8rAmEpBLIyjAwcHOTySPzp2MHK62EnuIJLrN0QwIeJUBOAyrlVyep9Aa81vNYNn4gsBMcxpGSp7rkgnJ9zzXU6h+6kL4DlnIdHUKHAG7G89Pw/CuS0+0j1XUxcFFUEhQASw55HXnNdVOKs29jz6taftIqG9z3R/E+nSaakskisU+4Q3J9fxrxjxh8SESGSw0phJcONruB8sf+Lewrqta06G10/cEVWDDOF7fWvmGZt00rersfzJqcJhoXcjfOs3rwgqUVa61YgYl97kseSSTyT/+ugelNwMjHpSgV6Z8WSKe5pSc9OKj5p46U12AeCQfanjmoxzxT89vzrRAPzmlXjpTB7/pSjBOKpCZOmepqQDLVECe3NPDY4FaRILAc9PWngqBtIyahDHvUq4HJFMTP//W/ORgx9z0qKUkcYIbpUxA5wMmoWG6T2rMpEfT8qhJwTnqKlfI6Dmq5PGCKhsoaSP0qu5I6cipCwHTrnNM3EdeRUtghhIHWoWx+Z4qQkH
pmom71LZoRn0prdOKcR61GetZsaI29qjbGeRUpAqLmkWRnAFMIp+PWmf1rKRoMNNp54/xppyBUAfcH7B0+zx540hz/rND018HuUuroD+dfqZrxtU0Rred1Hn4WLCkhWXHJPTgkdep4r8m/wBhu6EHxf1W33KPtfh9iVLY3fZ7lSMDvgynOPWv0o+LHxI0/wAEaSbvUGDQxspaIckZPy46cnH515mNnyz5lvY97LYOdJU/NmN4s8YR2Wv3ViuoSWsumRQwRxxsFwAo564Oa+YPjT4z8S61NZT21/cPo89uF8tHOFkVsSbsd8/Mcdq9P+FvgLQv2jvE2p/FPWpLmPw40q2EVmj+X9suoD++3SDrFCw2EL1kypOFIr6D1v8AZ6+Esz2tm2kNHZW7tPJH50uG2jADANnaScsB1x6E1z0cFJrml9x9Is0o0moQ1st16H49arqn9nqwS8CFXbcpkHy7Tjsef8KyI/iVf2hEd0HmgUBc7TjH1x6V+qnirwH8PPD0Ulr4a8J2sO13lZxAirukXaF+b7ycng8flXyd4o8K2mqXciPHbQJAHVY0ALYDjaOeowvJ7ZqpTpRfLJFqhi6sfawna/Tc+bW8ceFNXga3uVSAsrA5AwSfrVWw17StIcz2l1GqoQY1Ugj1wR7Gux1D4Yrc3EReOKNSoViVC4DHPzAZ55rKvvhDo8UbSSTAsSQoUbckDoB/nml7TDrTm0OeeGzBvmcFdddi/deJItZjWSylCySKWkC4Zc9sqOM/rXTeGtRtp1bTZMvLsBk3HIL5yCDgEfga8isvAn9m3a3Ns0rITgDdgZ9DXb+H7FjrUUEyLsJ2sM42ZHY9+eoqKtKHJ7rIw+Irqqvaxsy9qieXO1uoLcknjnrjOe4NZ6KY4+eMcfh/+riu3v4rPYzKdzxk53nJUjsT16Vw88qCRkIVN4DK2fuj88c+9Y05cyPSxDs7iQfZpwGfITcNwXg/UDvjvVO6aJ5MQuQ7AKuSGVz36DIGO3UGkjaVCoJ3MrllJxyOpHocipZXmgmYtGTG3zKyg7SG6c44+tbpdjzpTutTZtpls4gJomkjB+Zh1Ud/lOAT0606/ZZcYGN+XhdQCGEfDAHtzxz1rJhvb6Sza3ldZI0kDoHJ3twRtAI/h60NLtYp5RSXbsUg7GRlGQT1BJHUVPJYPbJouWMrwXCmDejqG5Y/IwYdOOgOcVdiAs4phMjLkbucO3HG1Dzz6kjpzWJHfyGSS2tnLER+WhRSMr6ENyM5watrM8EUV4ssSuriKOSUEBGxnY5/uZyDnkHHanyMhzSV0cx4hvrYTx3MM7MGc5i2gmLaOFIPBI9uCOlT+GIWd1DtuGNwxjZnPUdMCuW127e/uo7TyVjmhlKko2VbPXadv4nnkV6N4ZhEY3Bc/KAfauiq+WmkcmCXtK7l0Rq+KkUWTRgHjGTnv1r5FlXbLIvo7D8ia+vvEkgmgCoOSOT1+lfJWoxGG/uYj/DK4/XNVgX7rObiH44tFM9acD+VJ+FLz2Fd582KOuadTaX2q0gHjrmlFN7YpRTAcCQKmU+hqEcmnk/pVITJl47U8Dv2pi0/BGBmtUQPGSeRUinuOMdqZkjjp+tOUgHHpTJbP//X/OQkeuKiPGW/CpN2Bgrg0zCFc+tZMpaFdm469KiYgrz24qcLj3xULqSOR71BRXZAORUbDOT0qZwO9Qcn6VLaKiRnNRHnj/PNSsM8ZqMnv3qShnemHOaUtxmmEgj6VmUhh45pp9DTj6fzpjelBREaYaeevSmEDFZtFpjGpjdKeaYTis2M+mP2P9Uk034+6PGCBFf6bqVnIrcBtwhlXGeCwMZIHXGcd6/Qb4geDrn4k+LRpbrixt5Y7Ob5TtZjgsQ2fvAHg9B7GvyE8GeI38IeMdB8XRuY/wCx9St7t3HBWHJjnxnjPkSSAZ4Gc1+/Hg+WxKNewlZUm2yxSjAXbIBtxjjock+pzXDircykevltRqMoo6XRINN8DaBZ+H9
Agg0+w0+AW9tAiiNI0QdFAwPc+5r538Z/He+0zULu3ebckH7tvLYMS3Urjg7RgZavedctE1WJ7VyORwT0BByD+Br568R/BWx1C8nu442RLgF5GXO4E9QpHXd3z+FeTKfPO0m2vI+jw/LTp3SV/M8d1/4lXuv2In3yg3HzAuemBn5ME5wMDtya8am1O4mv5pdr4AUIz9cPyM/7XPPavp7SPgINEZYQWkjkjLMzKW2E+h6/MDjpx2rlPFfgWwsrchIz+6jf5o1xkjkZJ5wMcE1y1VCm7KL1Pbw2Iq1Iq8l6I8SfUruT7/G0AkepBx+PpViS389Y90YwfmznqDyO/T3qO4eJcs5CquNpPYd+g6Vkw6x/o0oKjOdobr8o6H1Galw091HbGp/MzfvILCdAZVWIAYO0ZOVPOD3+prh9VgW1VpI3zOHyuO644P1FSTaxFDEISGjd+hPQqR2B6fjXK3GuTvdPG7hymXCjJwmOfXAHc9K6aFOaR5uOq0nuV7vUWkjZWIZ3YeY2NpbHP4VQe5YiLBWMAEDI9+gPv6VBeptk/wBIUqwbo6n+IZGfwqjKiyhIPN3gjcVAOF56H2A6ntXfGCseDVryb1NNrqIsEdsBFJJB79gB7nirEc/2ePy54vPiMUkbqWdVYn5lYe6dcdCetYq2y27vDcsu87XicEupiJ+8oHXI7HBPbmtG7tpIXtY5rjzLeQHeqSrkbvu8dV3ADORn3q+ToYe2drstRu080RBkHACqwYu47/OOAV4wWPINbD2R1W/nsAotnQAztI4zGcnZJjIDED7+Oma5uC2lN2lsblkiVo5VZiCAxGNpHQMBx6HFacfiGexu47iKMi1nhO6OQ5UqMqSjsAwB6gdiKOXsCq6WkOhgW3imSWQRzBBj58q8iN83llc5yuMdq5e9u4xbyRldq8tKqsV3AcA4746560SXwSzmEOwxsgRcH50AbPJYc9eq4PrxXOzM8j7vm5zkHk4xjP0q4w1uzCrXXLyo0NEikuplj3LtQtICRjrx/LtXrWmgQwqqjJwD6HHpXA6NDhwAASMFup47ZNeiWYVgCW2knkZ61z4mV5HpZbT5IBqW2SAuc9MDvXyz4ji8nXLtP9vd+YFfV16uLQkcZOK+YvGsXla/LjoyKfqe9a4F7o4s/j7kZHJ08ZzzUdPB4+lekj5YWl60UD0qwHDFOFM6U4c0APXIpw69KTnpQOOapASgZPWpfm9M0xaeM471qkZskznt7U7n8vWowTz3zTxgHvTM2f/Q/OHcRkk+3rTCwC54oKnPQZPX2pjMOTj2rFvQuy3GtySR0NQElec8U7jjnp3qNunHWouMiMnc1GSuM05uMhjUTEbRnvUFoaTgcVGSeh5pWx05FJ3NLyGR57U1uBz36U/r1FNJ7GoGiE0w+tSHHSmN1PtQWRMe4pmRinNTCahlrYbUZp9NPqaykMjZEkjaOVQ6OCrKejKRgg/UV+uH7KHjzUvF3wzsLy6mkuNQ8NTvpF6Sy5mSJf3DuAOphZGOAOvtX5JGvr39inxdcaN8WLjwY15Ja2vi7TpoYtpAjGo2YEkTHphngEibufuqMdKxqQU4uJvhqzpTUj9graL7WEEeDvxk9SM9fbiu7S4tfscdqluHXeFiwMsTn5j7Ad8159ochhsPLZfNe2bYyg4wfvHP51D4t8VppFk1xcgxiQRM0ajIChhkZHv1zxXm0oKknNnuzlKtNQR0WqyvIk8ltGC6fKpjB+ctwMHHQnvXy548i3M9y77UCs7eYN4Lr8uNw4JJPT0r3288SQ2+jrb+W1qLhjK7+ZhySCy7SMkkgZ4HA4FfL2sG91O21OGWUZtZvtKMScJA+WyMnGS3XvgGs8RKnUSS3PSy72lNvSyPDNW0i2NvdzopZIxlhuCtlQSXBbAKjH3RzXnurW6WVqmMOJxvC4A+br9QcetV9X8TpqEkrSXYkngPEfHlpGmePUsWAbpwO+eK5G41671LcXUFgrHAGJNoxk8
cAcjk9BWUaFtjveOSTuNv45g4a5XzNqFgsfOxD157Yx+Fc1CS7SSQymBXRgGJ5kQ4BTIIOD3zx3PFWlkm+XzNwGCWycBx3APvyKzLyGKMlEbYAHYLJ39AR1A5698cV0QhY8zEVXLVk0Ms0snkxDznkZjIrfMHCHcAB1BwCPcdKbdRzPO7W1sbRZCxaLzCcq3OxT1K4Pqfwq2torwl4gN077Y9zD5UADEnGGVj1U45GR1rJXVlhuPLlYF7ZSiKEIX73v8AdLdQQO3PJrdRe6OCVSN7SZTVo7aUyxfu5jncCCdpByv+90zWtdSRpHGL11doSXEqqHZpG5QHkbhgngcDg4yKz72dZo5bvDSxXDMI3OFbcuNxwO3PQ/UVSa6to4/s6AlpCQkZyY1YDAJBxuOc9uKuMWzKc4osXF9HPcNLaZJCDeWP3ivHmKpORkYyCafvjlVrKd22eQC0mzLDyzuJKZ4Bzye/BrCkA8ryPMyFbdtQA5cDGRjkk9PpUFpG0r4wwZSBlTglSDxj+taezW5h7aWx0CIVgMtzlo5mRYyy4+f7vAXnIHUipIIzNPJKpxtxGxP3PbA6gVVECpD9nd5HZ13bV+7tHOOeh9a1rGAyqrW778qNx+7yo6AHBPHfvWVTRHRQV5JG1pq+XLgg4524GAPrXY2eXYE9CTn6dK5a3O0gL17nFbUMmY2DkKoODk5rhnqz3qPupG1IN8bIDwo5+vpXzf8AEGLZq0bf3kI/I19ENKBGq5yMfz/wrwf4iRZlhmHZipP1FbYPSdjgzr3qB5lThTRUgBr1VufIgB3p2KTFLz+NWAmPanLk/jSVIDxQAYzzTh+dN7U9c/nVRQmSDjrUgGBkYqLJ+mKeD3NbIgkA746U9RUXtUgY46fjQQ0f/9H83CxPPUVGzAjHPrTiTt5GcVCzZPFclzQCRjIIGahcnPSlZhnHXP6U0gH5t2D0o6DsRsfU1C3tg08565qM4+lIsYcZzUee+RTiCc005781DAbnIph5NO6YpnApFRGnn+dRk96UmozSZSG55plOzxTDWbZoBqM/nTqYazewDM13Xwu16Twv8T/B3iSMqp03X7B2LyCJRHM5t5CzsCFURysTkHOMdSCOF+lRXEcstvLHbMEnZT5LngJMPmifPONsgVs47VMdwP6PbGePTGvIkLTPG32liEC7oSMjBHJODxkds14P8Qdcm1nRNS8resF1EFjEa73AZsH5yMBuOT0UV1HgPxtb/EL4e+G/iHo8jzf2zY2yyxs4by54wI545SDhZY5I2R0B4bK84rz3xfNdHRbea1kdL2JmKQEFofLk3Mx2cbsN1JPUYQEmvHxkJKXsz6nK6sWlVO7s7pdZ02Btxe4wY0izuaKaNMCQ9AARxjtXzp8Q757Gxh0h50sJL63jWYopdh5sjbmYqOSR/DivdPCE0U2gxW8MGblds1wjc7WlXkeuMAliemO1fLvxVvrsrdabNbsLhpibeZlKrDEMliAuSwLcA9wB3rkpQfPZnqqpaLZ83atFaSTTtZxmbdYSSyHdtSB0b70YAz8oxlH7msiCG4v4Lf7JbhLku+bl5RF+8X5imCwGFXGO596u6sEe/lhbah8uJBNbZkt3+UEsznBUMuST0z+FZ11ZQz2/9sLIbyHMLGOFdqxnJSEueiyMiH5QCRwT1r04rueRVqe97pJDfTyq8GpzBciWNmI3MwZgduQflG7nPGKsW+m6jKHhcBhAoyzfMNsjdAwBzjr7DNXrO0n86LU7IxtO8MssUDgHcqkGM5cbZGKk5xjBXvVSzsp5it+b7damYSZikwxTyy7AcB1JZhkY2kE1SjqZzqWSM1dNhv0zLcPBexMQYXjJSRFYYYSAgEncAR97v2rn3WS2vDfSIoAkEvlI3zhWOE49Aw5/+vXXa+IZbu207TLNnkm3TBoWO9GcE+WSTt3hVDkL1IIxnFUrzW57u2WefyE+eGII0fzFYAcMjqPnibOWU9DmumMHa559Sab
sc1LcNp8xhhniEYuBM7DDSOAc+6gBicr1xSXMoguC93ao7SMskUiSfun6hmDA8/pjFZUmUmZZY9z7SqhDjbnnd05Vs8Z7e9MDFiWfK7mBYgHBHAz+HetGjFO5KT5TKGw7EDBBO5CPXHIJq9aywo8pcqjlTxnaGUjJAJyc1FI8a4ucbFkG3YDkEgdj6fU5p0EqE+VKB5b9CQCQcYz0yVHoKyexrHRmr5H2u5jmnm4ky5+bscBVIzyTj1rU0ZXR2cR+Wz/Ku0Fg2T90k1hWtt5rblMa+WdpYEZO48ZBI5GOMVurJOZmd5Gbn5c5GWIwD9MVz1NrHdhlrzWNqJjGcZJIPOeOa0YWST/WEYb17/lXPqp+9I5J4IH09K0oJC2AenpXLKJ69OetjpdwMXuB24rynxzbebYSsR80ZEn5V6OJnZMc46c+1cxrkAurWdMAAqQM1WHdpkZhDnotHz3TwcU102MUPVSR+VAGa9eJ8UyQY7U71pop3BqhigYp3akFGKAF9qeB700cHrThxWiJbHg880/GRTQcYpwrQkdg/lTgBSLxyD1p6kH60EM//9L82S5C8k4ODj1x0qBnAPQf/rpzSAtjOMVExzk9a4zWwmQSfb3qJsnHoaQdDTWHGBxQUkJ1ycc1EfunrSsew54qEnmlcYpbqPaoy1Bbv2qAtg1Fy0iQsPXmo2bPNMLUzd6UrlWFYntTCzGkzngck0w5HB61nKRVgLUjGmlgO/NRlqzchkm4CmUn0puagAyc80c0gNFAH15+y38fdM+GV/qHg7xzLIvhjXJIpYboklNKvogw89yTlIHXAcqMIw3twWYfobr11dXtkt1cFZjFAjItpt8vyMtKku48ECJgVKnBOCa/DavXPA/xy+I3gKxTRbC+GqaGhZl0jU2d7eMspX9zKh86FeQTGpMZAChVByM6tNVFZnThsTKjK6P0Nt9ch0i5k8P3YurfTtRtgUuJXMci+XJlZJccneG+Yg7T0A61h+ILHT5rU6TeXbafcT2klqJYj5iqkytLuLA4UlQFLjLbvQc187aN+0tpGp6LPovjO0Nk0cVuIL5Yzel/KbdJHJhR5KNgnfyASD/Dg+mG5fTzFrcmkPq1pa27JpghlSWCWKQloxIVOCYgxCkYb+9ziuT6ty6nsxzHnjyo871K710W0/haK0F9cq13Ek7rtSWLCKCu3CtsUlQD1BB4rMsND13SrmxubJYyt+biV2kh3OXCtvAizjcuzjZkEc5xmu6k8aeKLaaXaLU28q+VBY/ZxDJtlALOjAnZkgHeSd2DxzxxVp4luNM/sywFojHT3gS1CySERyn5WGHH3m3ZXDAKGPUVqlbdmMrt3SIZ7WW9hnFzdRzvPEzEIpUyEZdfLGPkXeGyRyCSMYIryebUVkkvLm7ljjkiheGytwpiERB+X5P7nBUgntXbahpfif7BdJrF9PbKdzEmRTgrOw8uTbygBzwnDDGOK5az0uQXItL6SK6kmdYlOfOQeYpZZDyCMnAx1PPcVtBpHNUU5WRiXVxez3aXsEbW0aQJEjMSql143HadpkIIyw5wBnkVVW3u0jLAxqPILbFYsmGOTwM7Se/fvXX3VlbNLpllaEeVPE0sao3ETqHSYSLLtAdexH3gBj0rGvtOudNQW8nloGdZTGsvmPEyrgg4A65zg9OnPFXz6GSpWM3y7eFPOifcrpghj8wfrtx7dj3FUWcBjuJ284BOceuPSrLsuNzxgYGRgc4PYew7VDK4nXYCAg6A4Gf/AK9IproPhRSA8rHyycEJgt+R4pqK4lWRgG2/UH8vSnRxvHhD3P8ADzip4oWZjFHyVPVvl/D1qW+pSRZjhiurdIIwBInJx356HvmutQNFHFHznueWAz71jWkIRG8pdzntjHP49q3g0yxjevfHtXJVlc9XDUrRu9yMsQucckZP1qzbH5dpz161nNtRQDkgDpVq0OWDetZHTTdpHQIW2/Uis6/jWRSCcDFacLLsDEHpxVScGTO
cZA79Kin8R01rONkfP2vWv2a/ZkGEkyw+vesZa9K8T6eLi2aVAPMjcn6//rrzQV61N3SZ8ZXp8s2iXNOGMVGOaf7VqYjqUGkpevFNAGcCpBTBjHNPHtVksePenc445FNGO9KMHFaEkqgDt704HuetMyc809c9ulWkQz//0/zNbcOuDu/rTGyMAjk+lOyrcj9KhLcn+dcZ0DWznAJ4qJmOacX65qEsOuetS2NIazN94dai38nNNdyOKdp0F3rd6ml6JbzajeSOI0trONriYuTjG2MHGD1JwF7kVDkkWkMaQYyelVZ7mC2TzbmVIUPAeRgg/MkCvvj4W/sOeItcSPVfinqDaLbMAw0vT2V7xhzxNcEFY8g/djG5T/HX2v4S/Zw+Dnw/jR9G8NWRuFUK13dp9rum93ml3O34msJ1ktBpo/D2z0zV9TQS6VpepX8ZGRJZ6fc3MZHs8cTIfwNd5oPwa+LfiZl/sTwfrEyMM+ZNALWNR7m4aM/oa/ffw/pVmsb2sMSRxR/cWJQiAewAFYt7G2n6llB+7Zv3in36ECsnWGpX0SPgb4QfsMaNJFDqXxeu5dSvJdrf2Np8j29lB3ImnG2W5YcggbIyP4M8mz49/YU8N3vir7b4Q14eGNA2hbjTvs/2uSIgcNau7hUDd1kV1GOAOc/pFawx22lXGtSfdQbE9PUkfyr5y8Y6/fz3E0NgfMuLhtsSDoD059h1o55PUIK73PjHWf2VfhNa27adpmreIJdTMZSO4edJA8o4DG3CiPGeSAoHp2rw+4/ZF+OEcs62unWF1BET5couzFJOvHzCExOUz6M+Qe561+s/w98B2+gaedZ1slrqQl3lk659F9Kp+JvFKNcfYtN/dxlssyZDN9SKpalPe0T8ioP2XfjncHD+HobL5iub6/ihHB6/IJWx6cfgKvSfsp/GGNTsj0WZwOUj1F+D6ZNuB/Kv1z0Ozk1NlkvIPOUdydpA+p/TvWyLHwxc3f8AZ9vHJGwyrbCD831zUtiZ+LUn7NfxnjYqdDifBxmO8iI/XbVKf9nn4ywEL/wjM0pP/PG4gf8Am61+47+FtCiU5SVgANxYr/KkuPCmmNBELd/IYsOWXPB7ZHrRdEc7Pwlk+B3xiR/LHg/UpG/6Zm3Yf+jav2H7PXxy1PULfS7PwVqJuLlgq75LYRrnq0kgmYIqjljzgdicCv3vtvDmgWCLYwW6vJcD55H4Yj8OgzVGfTrPTpXs9It0hmkwssgJI2k8gZq4pC9oz8gNJ/Ya+OGoXH2e/n0DTTn7yXU1+Rjr8scUI4/36938C/sG+LNBnjkuviVc6QTKHdNEtVgRmJznbO9wpbJJJxz3r9DLexSxLxRbgrfffHLew9M122n6ekg+0yjbHCuS2OABWqt0RlKpJbM+Eb74Z+HLl3XV7QyXejyfY5Jo22GaKFioZ1AwM9TxgZryPxJ8JdPjt2No91JdRzCe3meVTtw2QoUjB/3iMqQOSK+uvENt9m8a69EI/wB3cXInC9QPNjVhwfxrz+9sikcikEqd+M5yQevI6Y9u1fKVMVOnWlBvZn6LQw8KtCM11SZ8KeJPC8hFxNOha4ZfNmlhUEOqN86tyAdx9vp2rj0jtzc20lrayOVmQRicIHyBuG4j5dvy8E9DgdTz9n634XtL6F5LWMgsjdM5O3pz1H4dfSvDfEHgtZYzcRKyADymUgdPTPcV008wjszGeWObvF6nimoahCbr7LfxRtiaW4+z7NwimJAdHLDOJR8wxwDxXMXAjEqtENsZXlM5xn0zn0rt77SLqC5LnJdiPvDI+XoAPSubn0zEzNk5Yktx/ET/ACrtjXT2OCeBnFao58yIp8tclcbQxA+6TnA9KSCFWfdMNxPt+Van2NehGGzxgVaS3jxgja3f1q/aJbGCw0m9TJgtgHLc/KBwOgrcjSaQKGJYDIBx0BOSee9OhgkdsN8nGRjuK0ILecSHB+QDoecVlOoddGhYrw24SQb3MgJ7jHPtjr9
a0ZcLEFwM9RjjjvT/ACkUYAweue9QTMAB3PrWN7s6uXlVkZdwy7fc9u3/AOurdmhPBIqm24tyPp7f/XNXrUEc46Yq3sZR+I6WLb5Jb07VnsVyzn5cKRz6/SrcT7UO7Bzz9KoXDKquyj73XH+JrOG5vPY5w232mGQfe5JyPSvJ9csDZXZZVwj8/Ru4/HrXvOnQ5DKcdwtcj4h0iOYSRuOpJHHSu6jUV7M8PG4ZuHMjxwVIKkvLSWynMMoPHQ+oqFTkV2niWs7E2c9e9GO1NGTxTqaGHengd6B0zRVktjx6U5cEY7U0e1OHpWkV0M2x/XHFPHrTMceopy88da0RJ//U/Md+CQMGq5JHtSljzyeaidsj2rgZ1co139Kgdhj0xSM6gFmIA/z/ADr67+Ef7GfxG+Iy2+r+K3fwjpE+ySNJYt+qTxHksIn+S2BGCplDOc8ohFRKRV0tzxj4M/CXxF8XfGun6LpljcS6RDe27azehSlvb2gcPKjS8DzZUGxVUlxvDYAwT+4w8I+H/BF7Z6L4d02106ytY0gtoLeFI1iiUYCrtAwMV0Xw4+FnhX4beGLDwn4Vsls9Psh8qDJZ3Y5eWVz80krtlndslick1qeO4Gi1uGf+AMh/PisZQbV2ZKsm7IujSTcWnGQw6YHFcj4id7Tyo1HzjhhjjAr2XRYN+neYw6rkE1494i/0+eR05WNiv5UeytZkRqXdjrfBlmstuZByD1xXP69pzT6q0cC7mYhFA7knArt/BUYg0iWXGAqZ/E0zR7ZbnxCZ5eVtwZ/xXp+tCpKyHKp7zscr8QZE0fQYdFhOfJjAfHdscn868v8AAvhJZjL4o1nKQKxwW6n0Vc/xH9BXqfiPQ7nxXrP2VDtjZt0z9kjB5J/oO5rO8azQafZQ6Rpw2W8C7VT2A6n1J6mrdK6HCpbRHk/jXxZdX9yLOzzHbodqIo4wKr+F/B89y63N7+9ZvmC9hn1q3oPh86hdmWRS6g9T03ete3/Y7bwzpDXs2BIw+Xj071Di3ojZ1FFWRxOs3FvodoLaMqkhHIHFedaDdFtRMkQzIxOPYnv+NUdRvbrXtZdN24FjwewzwK9O0Xw/Z+HrE6nfAFyMIO7E9fwFKVNrQE7RuzXggk+zoLlsjGWJ43E1TvriJWiZiWw2Ai+1V3vJNUCLCSPMIO0Douf0rs9G8OxzTG+uslIx34wB2H1ohDWyM5StqytpitZWj67q3yzSA+ShONqdq5mzvZNU1JGTIHJwT19TmoPGPiF7+6a1sjiJPlGOnHHT+VbPgrR55n+07fl9fWtuS2iJbtHmZ0kWnzXt1FbRLhWILtwflHWuq1IiKy/s22GTKVTA7gnFXYIYrVkghH7yU/Ow6KoqndwS3cyiA7Zbj/R4m/ubgd8n/AUyR7kVvGnZHLz3Z8teN9SRPiFdxoh8r7LbpGx481VBzJnsGOcZ7DNc/fWqFN2drMDsB4C7uxP9asfEC7gl+JmtrAP3NtJHZwjsqW0Sp+hzUPnq1t5c0ZZgpAwfvf8A1jXweZ2+sTfmfqmVJrC04vsjkriyC+asICmNsmMHnOOv4+teWeJLS7hjlEkBZQwbd0wD2z05/GvcfJjlHmBR5WDnj5h7flXFeItPE0LQpJtXdkISWBIGOuOOua4PaK6PUpw7Hx9r6lmPBUg9wOh7V59PDKHPHTk//Xr3TxN4euRcOHGVXJXZj8Sec89a8tvbIoMBlO4kfcOB9Se9ezQqK2hx4ik27s4+aMEYfjnIx9aYkJySF4roDaHbg9zxjr+VTR6c8m3aCF9W6V0uSscXsHfQzrezZ2AI7da1I4VghJx14z15rfgsY44yWU/72OfwNVp4tvTgDt35rH2t2dMcPyo5yYbipGVOOprHucorPnJFbdyVAJbj0Fc9cOrkqvQHk10UzirpIrDls5znvW1Zxkx5/KstE+YZ4z+grrdPt8x5JGexIq6krGdGDbIiflx0I7f/AF6yJ2wPmPHNbt35ccvk5Xd
t3YzyR0z9KwLoKG+Xp60oLqOq7aFrT2IwwHOePxq5qdmsuGwBwecdTVXTmAYE9ufpW6ymRSByQeM+hqm7MhR5oWPI9c0QXUZHQ9VPcGvMJoJbWVoZhtZa+l7qzSSLLDJUYrzjXvDwugzAFXByrAc/j6130at9GeFjcE4vmieXCnjOMUs9vPZy+TcLtYfkR7UwEV0Hlkgx0p3sRUWakz3NaJ3JaHinLzmmg9DmpB6VrEyZIoGfepFUc470ihhT1Hrmt0iWz//V/Lo+tV2bn609mONoB59K+zv2aP2RNQ+MtpD408ZzXWmeFZmxYw2/7q51VP4pll+9Da9QrriSUjchCYZ/P9DqlJRV5Dv2FPhUvjz4tSeMda043egeEbOW6WSaItayatI3lwKGI2vJbqsjsnO0spPIFfsd4a2XWo3asMlMMSeeCa1vB/g3w74L8MxeF/ClhDp2m2Nr5NvbwIERFQccDHJx16msnwGu7Tr7U25M9w6/RUOK2jTXU4KlXmTkjsWbynyBx6iua8e2wkeKVOrIhGfbFbNvcrO7IOSeB9KZ4pi82CGPGcQCicdCacrM1NCZv7CZk5Ij6fWvF7sn7VLHHkiRiQPfNex+H5kisrO0f/l63tn/AHBwK8v1C2e210q46u2B9TUyRpFnfaft0zwxLK3Bc4x+GCP1rN8FSPcz6pMoztEcH4tliPyp3im5XTtPtbJjgRxb5OepPJzVr4fWstr4dS6nXbNqEklyQeoRjhM/8BA/ChIHL3TfaCKwiMUZ+9l5X9cD+Q7V4H4lke+umdesjEKD6dvzFe4eIJfJsXRThpBt/D0ryuy0ubVNUjgjXIVuOP5/Spe46bN/wVocUFt5067Y4hub3NcX8QNdfULlreIkQp8oXtXpPiDU4tMtBpFgdzKP3jerV49c2j3EhmdeSdx70KKRcXd3ZX8C6BFG9xrV/hgvAB796TU9QuvEevR6PaEkKQXI6InYflVzW9Yi0TRHij4ZgAo7sx9vatT4X6C8Eb6ndruuLk7iT156VHK1r1ZbktzvNH8PLCEiRQWwBk+3rVbxfr8enWT6bYntt3DuT1NdVrmqRaJZtEmPPkXn/ZB7V4qtlfeJNTFpBk5bLt6KadlHREQvJ80in4b8N3Gt3owMJ/GxGRtr6Bht7bSbRbW2AG0AdP1qC0s7Lw1YJa2wG/bgkdSe9Qw+ZezCXHyZ/An/AOtWsI216mNWpzvyLllHhHuHxycZ68DrVlb200uO+1+/ZYrbS7Z2ZmOFXavmSt+ACr+dWWWOzgeaXlI1aRx7KM/r0r5P/a88aXnhX4Y6B4Ls3Meo+N9Zt7GZgM4tY83d2CPSRIzFn1YU6slGLkysLSdWrGmup4m2qtqGpza1c5Vr6eW5Yt0PmsX/AEBFd0Xj+yiQbiwyQAvHTPXtXkyajFcRfZUIJBJK9GUgY6enY16NpFybnR9zEjKAbl+YKenT+dfnWIk5XlI/XcPFRskYM2rSWshMTkbQMkjnnsfcelV7/VoJYvuDewBLDgN9OueeK5HXrw2lwyu7OoyCynjk8nBrm/7cJjMGf3bHpIuQfoQRiuFxlLY9ONlqR64lqS2wbpJGweApB6liRzx2rxjVNPk+0SmNgYmcgksWwPUc/nXoepa7b3EssLhRIFySc4PHTseK891F5NyskgVWYnCDB6fnXo4aLjozGtZozY7MquIjGAnU45x269Sa1YNO+QFpAT6BcgDPc9jVKxsmvDgFwFwd2Mnn39a9O03w5J9i3uH3KM9MdB0GP610VqvKYU4XONuLeOFMgHB/M1ydzKyGTyiFTkc84PtXd63B5Ue3AUjqAMDp1J65ryy73biGJCe/fr+lOh72osQ+VWMS7nLEoCM+vqPrWTtLHp7/AP160bgKvA/P1qoqsWzwR+n516UdEeHUu5aklrEXl24yO4z1rt7OB4QvJAHqM1zNnCQ6sBg5GfWu/tEGwFe3ODzmsqkjpw0dDBvoI8NI6DOMB8c4PvX
JyKN+zPQ/jXoOoR/IQAAM9B2P4Vwd0qRzt/CP51rSd0c+JjylyziIGVxjPOeTW9DGGB6kHrWHZnGAcjPp6109qoyueOwFKe46a0Fktjs24wdtc/d2YbKEcV3AVpCCeMdsc8+tQ31ikYIz/tHH9KISsyqtHmR4ZrmgreKQy/QjqDXl2o6bdabJsnHy/wAL9j/hX03Pp4k3EjBxznpXN3ugrcKdyhgeQCM9D6GvQpV1szwMTgW9Ybnz1uI6jn3qRTivbb7wZ4I1lQsjr4J1E9LxFabQZ245uLfPmWRPJMkLeVklnHavLfFPhbxB4I1gaF4pszY3bjfbksJLe7j6iW0nH7u4iYcgodwB+dVPFdcWt0ePUhKD5ZGSDx71KM545qurEmpk65FdEGYyRaWpVA6H8qhXGOeKlVcc9a6YmUz/1vhz9nX4WQfGj4u6Z4Lvf3mkWQbVNcC/MPsVqyYt2I4U3UjLHtP3o/MFf0MWVhHpcUcMMCwwxIqhIwAkaqMKigcBVHAArJ8OfD7wp4D8GW3hzwpYQ6Pp2nQJHEII18xI4vm+ZyNzuxyzMxJLEmrfhrxJHrcZguoZLeZM/LIAN6/wuOxDDms6VO0dDjxFZzd+h2NkVdwVOQ3Fc/pOmrpkWo6eg+WK5kkTH92X5h+ua241NrMkqfNHnmtGdIxfKy42XcRj/Ecr/Wh76EQlpY8h0O7K6zJEzcu5wD6Zrvtci3W8UvrGBXnt1AdJ8SxNJlVkfj0ya9G1BlfTY3HPG38jUuOhrfUzNPzFBoc2QQt00DH035xn61lX9pFN4zhDnMcRedx6rEucfniut0+3WTRYmIGRdxSD6q4rk9WmWDxlOhGAdPu3P/AdmaOW7GpdDIS0fxhfu103+iwXDPdH1jXkIP8AePH0zXqdiC8LXHCr0UYwABwAB2wK8+8JW02n+DrBpwRc6qzXsmeoWUkoPwXFejyMLPTI0JwWGfwqGug5PU4/XW8xjzwo6VURB4d04uVH266GTn/lkjc4+p71tKIYY31a+I8uI5RT/G46fgP514T4r8efabiRYzzuO49TwaahdhF3Nm8voMs0jd+TnvXJ6h4hjgylsod2HB7VyD311fFfLzzwBW/YaE77TIhJbknvTbjE1UW9TK0/R9Q8UaqtzqWRbxMCEz19q+mdLt7fQ9MW7lUKduIUPGT/AHv8Kw/C3h+BI/tVyCLeHls8Fm7KD/P2qj4n1szSNIxwinbGnb0GKwcm/eY37z5Ucp4g1Ge+u/JTLzSnIA64J64r1XwxokPh7TDLIB58g3ux6jPauM8F6G15qH9o3YDMuG57HsK9J1q58mPykwWPr7dqmmk/eY6rduRGARJeXGASWZsKPQHqT7V0lvAVuI7ZAGKDrjgZ6msjRYl+e4fcGPOBzgf/AF66OIx2FpJfznDy5C/7o/xreJzS3sRXyLcNHaHlZZRvA7pF87D8SAPxr84v2tNbPiD48+FvC6yhovDOhzajcp12z6lKFiP/AAGOBx+NfonYF7i/hjbqltub2ad8n/x0CvyH8UeI38c/tAfEXxJaskqRamNNssHrb6YiwOv/AH9Eorz80qcuHZ73DlDnxibWyMa01x4/Ewi3AKzZHODleG+uV5x3xX0H4Xd7U3lkXwFPmxODzskGe/HWvlfW9NubHXRe2yk/MTjqQynjHuB+dfQ3w/1U6zcrcSIy/ufLlXqEdfXPY9QPSvkcRFOHMj9HhdTOT8emeIyllAwTh2Xnr0bHT2zXilxeyrM0sBd1xlsdDj29K9s+KskdqWQll3KWV0IZTjsQeoH5ivmKfVPLIkZ9gIG9c8EnuB2rDC07xOurUSOolu4L5lYI4mQ8qTgdMdRz06VcktCm1DGYlkAwAcnP1P6Vz1jdxzYc/PGOmeH/ADHaulstQttyRjcwUj5hyw9sHv710S5lsKHK0dZ4d0cySRxOhOxhuG3JyTnk55r3STThZaYRDH5Z2fNubkE9cj1rF8B6Vb33kyx7gFU
72f72/wBCO2K9p1nTltdMaZFOSnPGe2OvbNeXWqOTNV7skj4m8ayi1mMewsG6jOevevIr2ZC3U8DjnivU/H0wF9J5aqOcMpzkZrxe8utjfKMetezgleCPPx0rSKEz7pCDzzVm1VchTkA+nU1QRg8m49egrat4ejjOR1r0ZaHjrVmtZQAkc4HQ55NdXGXhRNnUDP8A+usrT7YSbWIIyByDxXTxwPt+7gLn65rkmzvorS5k3Z8wHkDPXqRmvP8AVVMcxBbBXrXpE8RjYrj5hgiuJ1q3MgMijg/Nken+Fb0TnxSbTKFi3ORyfQ/4109qzb1bGT1JP8/wrmNNBMu1xz04GeK7e3hO0FgQPyrSZjQ1VjorFHdQSM5wV9TVqW0aXMRG0gbjgc/h+NLZLsjyhB4AB9+9bccZePYB+Pt/+uuVysz0ElY4+Ww80hVXHHJ65FRnSVzgZwMr7f8A1q7qKwyCpAw3Tn+tW49MKtyu5sYOf4feqdayMfYq55VeeHfPUqiHkDtnnufaug8FeKrDwxA/g7x/pcfiTwPdZFxp1xCLn7FnrPaqRlQBzJEuNw+ZAHGG706aSSQhOAPxzXN6no0bqSEwUIJx1/zit6GKaehw4zAQqRs0WviL+xXZX9hH4t+B+qrc2V5GlzBpl7P5kEkUo3K1td/M4BB+USb142jb1r4Q1rw/rnhjV7jQvEdhcaZqNocS21ymyVQejDqHQ9nQsp7Hg1+tP7K3ivW3nu/hNPpsl5punRz39pqCOu2zgmkz9mlRjuIMhYxbAQFBU4wM+w/FH4PeDvidpp07xXpwuGh3fZ7qP5Lq2YfxQyj5gR3U/Kw6jrXvUJqaufE4iM6FRwmfhQrHAzUqZ9eK9/8AjB+zZ42+E0b63Gx1/wAMg8apbxlZYBk8XcKjCYGMyJ8mckhAK+fUYYBByCAQRyCD0OehFdkX3Mm01dH/1/1elhF1ptwsbiRJYXKsnzK3ynBBGQa5ODRIrq1smJeKSNFIZThgwXBB9R7GvNtEkvvD1zEdNupra2WZXltlIaB1z8w2MCFyO64r26FIoVe3jbIzvj9fLf5lP4fkK5MLio1FeG5OYYF4dpN3RFbSSRDyJwOnQenrUtyZIzHMnOwh0HrjnH41cuLZbxUlThgOvQg1VidZEktZRiWI9exB6EV1b+8jzkraHMeONO+02X222X54iJV+g5pmn3v27QUmOMBh78MM104Cz272j43R8jPdT/ga870WNrO51LRWP8PnRA+gPQfTNJLdM2u2j0W0bbptvEvTzF69/mFcL4giDeMAxOGnt5rVfpJIu79BXS6Nei4ktYf+mig/gf8A61ZWowrceMtPyMlZZmJ9AoBqPMa8zoriJZtUt9PgGI4lSMAdAqAf0FXLxjqF8YUO2KPjPYKvU1DZnZ9r1V/vOWji/H7x/AVFcKLbSpJJX8pZhulfuI+yj3br7Dmpt1E9+U8r+JHjC20/Tp7tT/o1p+6t4x1llPAA9eefzrw/Q/Ds81qupakC8svz4PGN3PQ1ciaT4neLn1JVKeG9EkKWwHCzzg4Le4GOPb616tDpj3MwijX5M4Bx2rOVRp2R1RSijmdJ0GW5mTavGck47V6npfh0TSBPuxx4Mkn90f4+lbOl6L9iijhQAzy9PYd2PsKtalewWlubKzbIX77d3buT/QUlD7UyJVG3aJma5qUMcQsrQbLeIY47/wD6+5rzBRJqeobyA0UfA/3qt6zqDSt5EPzO3GK6vwhorsyu6/KMEA8jJrKd5PlNIWguZnbaFZLpenGaT5SRkiubupJb25ZuBuJxz91feuk8Q3sVpGluTkKMlR3x0rK0a2e4d7uVRtY5AHr/AFro5VsjByfxM39LsSEjhXO0YJJ6nHc1hanqf9sasNOs+YLb7zdhXRaxe/2Ros1wCBJKCi+w7muQ8G25Om/bGH7y5lBJ9t3+FD1dhKyTkdC0yafDqOpvxsWVl/3LeP8AxBr8Ifgzqy3usza4oMn9q3N
1ftnjd9vmec/+jK/aT4r60vhz4PeLvErsEFloOpTg9MFlkx/OvxC+Fmnvo0djbgt/o0MUQ7H92gUfnivKzm3s1E+r4Vi+ac/Q+rdY0K2KiVoSMSBsqM8N6+uM/hW94G0xdPupJImZGk4dc+nT8v1HFWLOVbrS1lJ3zQDcoznKD7w9+DVzTFSCX7XbyFQw3EMckAfwn19j6V8dGdk1c++vdnmfxktYzErIpRlYl06YKjg/4H0r491SUncNuQnUEcg+/tX2d8TLiCSMwOS25TtIJPOM9e4I7V8e6jE26QA4Y7l57jNdmBdlqTi1zRVhulXx2jBwAOcc/THrXf6TIHuo4tz/AD/dbsG9P/115XpsEonG3CsvUHp+Fe6+D7L7dd27M3LEAADr6jP4VtibJNozwUm9GfTfw6sr62Mct0jSMfutnjp14GD+NeweIZpP7IYHksM9cZPofpWZoVg1rp9u8qiMtjA9z6it3xPIIfDz7lVVKHr1P09M14HI3ds7ZVE5qx+evxD1BZb+R4UwAWz7/SvDrhy82OgPIH869O8c3pl1ScRN5sYY5boD9K8zZVcjA5719Dg4KNNHmZjLmm0FsoaTfjP1rpoInbAToTwKzrSyBAPJOOldJaIrEQKDlcEnpjmt6kzghA6rRdNadQuwYyOv613z6TLBbqcY7E9etN8L6YVjLyZ46BcNkmvRJ9GdrXjgYyfU8d/evKrVrSPWpQtE8Zu7VjneMjHUVwerWflqFOSCK9gv7M/MoUAA/r71xd9as5bfzj7uB0/z6V3YefYwr07o8vtke3k2MSM9DjqPrXe6Yyugjk4ORjPTArGayB/eEHpgDtnvW9Y2rYUMwQdf89665NNHn0lyux0NhDIWw5JUkn8K6e2h+UYGF5//AFVjWgVCEwWOeOMmugt1kGSpPPJzwM1xVbndB6WN21tYQBnGAegHH/1q1Da7snGGHOSMfhio7KPAV9q5xznrnHYVvxRIY8kkM3fH5VwSbubaWOeNtsQK/wAx64x09PrWTfwbQ8h2h5Oq9QQOtdhdW0uBt2gnGMnniuXnsNW17VrPw9pMPmXl5KsMSDu7dyeyqMsx6ACt6F29DGrJJNs9k/ZW8O6gniHxT4vCGPTntrewQ9nuEZpG2H/YVhuI7nFfY8hkl+ZYorhSeRIAH6Y4Yf1FZvhjwxY+BfCVl4b08Bls49hcDmaZ+ZJD7u5JreSPyYxzyq4J9cdf1r6ihBwikfnWPrqtWlNHmmoac1vPM0EObeYHzbSVQwIPBK9VYHuP0r4M+OH7IFjrcdz4w+EMUdpeuWluNGZvLtp2/iNuTxDIeuz7jH0JLV+lzRJcFvMOVfI5HI9CD2+tcdd27WV2Y3Ox26P/AMspR7/3W/SvQp1L7nn2a1R//9k=);
background-size: cover;
}}
.container {{
align-items: center;
/* background: #F1EEF1;
border: 1px solid #D2D1D4; */
display: flex;
height: 100vh;
justify-content: center;
width: 100vw;
}}
.email {{
background: #DEDBDF;
border-radius: 16px;
height: 32px;
overflow: hidden;
position: relative;
width: 162px;
-webkit-tap-highlight-color: transparent;
transition: width 300ms cubic-bezier(0.4, 0.0, 0.2, 1),
height 300ms cubic-bezier(0.4, 0.0, 0.2, 1),
box-shadow 300ms cubic-bezier(0.4, 0.0, 0.2, 1),
border-radius 300ms cubic-bezier(0.4, 0.0, 0.2, 1);
}}
.email:not(.expand) {{
cursor: pointer;
}}
.email:not(.expand):hover {{
background: #C2C0C2;
}}
.to {{
opacity: 0;
position: absolute;
transition: opacity 100ms cubic-bezier(0.4, 0.0, 1, 1);
}}
.to-contents {{
transform: scale(.55);
transform-origin: 0 0;
transition: transform 300ms cubic-bezier(0.4, 0.0, 0.2, 1);
}}
.name {{
font-size: 14px;
line-height: 32px;
margin-left: 10px;
}}
.top {{
background: #34495E;
display: flex;
flex-direction: row;
height: 70px;
transition: height 300ms cubic-bezier(0.4, 0.0, 0.2, 1);
width: 300px;
}}
.name-large {{
color: #dd5;
font-size: 22px;
line-height: 70px;
margin-left: 20px;
font-weight: normal;
letter-spacing: -1px;
}}
.line1 {{
background: #6422EB;
height: 12px;
position: absolute;
transform: translateX(9px) translateY(4px) rotate(45deg);
width: 2px;
}}
.line2 {{
background: #6422EB;
height: 12px;
position: absolute;
transform: translateX(9px) translateY(4px) rotate(-45deg);
width: 2px;
}}
.bottom {{
background: #FFF;
color: #444247;
font-size: 16px;
height: 150px;
padding-top: 5px;
width: 300px;
}}
.row {{
align-items: center;
display: flex;
flex-direction: row;
height: 30px;
}}
.link {{
margin-left: 16px;
}}
.link a {{
color: #444247;
text-decoration: none;
}}
.link a:hover {{
color: #777579;
}}
.email.expand {{
border-radius: 6px;
box-shadow: 0 10px 20px rgba(0,0,0,0.10), 0 6px 6px rgba(0,0,0,0.16);
height: 150px;
width: 300px;
}}
.expand .from {{
opacity: 0;
transition: opacity 100ms cubic-bezier(0.4, 0.0, 1, 1);
}}
.expand .from-contents {{
transform: scale(1.91);
}}
.expand .to {{
opacity: 1;
transition: opacity 200ms 100ms cubic-bezier(0.0, 0.0, 0.2, 1);
}}
.expand .to-contents {{
transform: scale(1);
}}
table td {{
border: 1px solid #fff;
padding: 4px 8px;
}}
</style>
</head>
<body>
  <div class="container">
    <div class="email expand">
      <div class="to">
        <div class="to-contents">
          <div class="top">
            <div class="name-large">
              {data.name}
            </div>
          </div>
          <div class="bottom">
            <table>
              <tr>
                <td>Area Size:</td>
                <td>{__human_format__(data.area_ha)} ha</td>
              </tr>
              <tr>
                <td>Cashew Tree Cover:</td>
                <td>{__human_format__(data.cashew_tree_cover)} ha</td>
              </tr>
            </table>
          </div>
        </div>
      </div>
    </div>
  </div>
</body>
</html>
'''
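The template above formats hectare values through `__human_format__`, which is defined elsewhere in this module. A minimal stand-in with the usual compact-number behavior (an assumption for illustration, not the project's actual helper) could look like:

```python
# Hypothetical stand-in for __human_format__ (the real helper is defined
# elsewhere in this module); shows typical compact-number formatting.
def human_format(num):
    """Render 950 -> '950', 12300 -> '12.3K', 4500000 -> '4.5M'."""
    magnitude = 0
    while abs(num) >= 1000:
        magnitude += 1
        num /= 1000.0
    # Trim a trailing '.0' so whole numbers stay clean
    return f"{num:.1f}".rstrip("0").rstrip(".") + ["", "K", "M", "B", "T"][magnitude]
```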
def __build_data__(feature):
    """
    Return the data needed to build the popup for a single Benin Republic
    protected-area feature, looked up by the feature's NAME property.
    """
    data = protected_area_data_dict[feature["properties"]["NAME"]]
    return data
@shared_task(bind=True)
def add_benin_protected_area(self):
    """
    Add the Benin Republic protected-area shapefiles, each with an HTML
    popup, to a parent folium layer and return that layer.
    """
    __start_time = time.time()  # kept for optional timing/diagnostics

    class DataObject:
        """Dict-to-attribute adapter so the HTML template can use {data.name}."""
        def __init__(self, **entries):
            self.__dict__.update(entries)

    benin_dept_layer = folium.FeatureGroup(name=gettext('Benin Protected Areas'), show=False, overlay=True)
    temp_geojson_1 = folium.GeoJson(data=protected_area_1,
                                    name='Benin Protected Area 1',
                                    style_function=__style_function__,
                                    highlight_function=__highlight_function__,
                                    )
    temp_geojson_2 = folium.GeoJson(data=protected_area_2,
                                    name='Benin Protected Area 2',
                                    style_function=__style_function__,
                                    highlight_function=__highlight_function__,
                                    )
    temp_geojson_3 = folium.GeoJson(data=protected_area_3,
                                    name='Benin Protected Area 3',
                                    style_function=__style_function__,
                                    highlight_function=__highlight_function__,
                                    )
    geojsons = [temp_geojson_1, temp_geojson_2, temp_geojson_3]
    for geo in geojsons:
        for feature in geo.data['features']:
            layer = folium.GeoJson(feature, zoom_on_click=False, style_function=__highlight_function__)
            data = __build_data__(feature)
            # Render the HTML popup template for this feature
            html_view = __build_html_view__(DataObject(**data))
            # Popup size and frame declaration
            iframe = folium.IFrame(html=html_view, width=300, height=150, ratio='100%')
            folium.Popup(iframe).add_to(layer)
            # Optional hover tooltip showing the area name and size:
            # folium.GeoJsonTooltip(fields=["NAME", "REP_AREA"],
            #                       aliases=["Area name:", "Area(km²):"],
            #                       labels=True,
            #                       sticky=True,
            #                       style=("background-color: white; color: black; "
            #                              "font-family: sans-serif; font-size: 12px; "
            #                              "padding: 4px;")
            #                       ).add_to(layer)
            # Consolidate individual features back into the main layer
            layer.add_to(benin_dept_layer)
    return benin_dept_layer
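The DataObject class used above is a small dict-to-attribute adapter: it lets the f-string template write `{data.name}` instead of `{data['name']}`. A standalone re-implementation (class name and sample values here are illustrative, not from the project):

```python
# Standalone sketch of the dict-to-attribute adapter pattern.
class AttrDict:
    def __init__(self, **entries):
        # Copy keyword entries straight into the instance namespace,
        # so each dict key becomes an attribute.
        self.__dict__.update(entries)

d = AttrDict(name="Pendjari", area_ha=275500)
assert d.name == "Pendjari" and d.area_ha == 275500
```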
current_benin_protected_area_layer = add_benin_protected_area()

scheduler = BackgroundScheduler()


@scheduler.scheduled_job(IntervalTrigger(days=1))
def update_benin_protected_area_layer():
    """Rebuild the protected-areas layer once a day and swap in the result."""
    global current_benin_protected_area_layer
    current_benin_protected_area_layer = add_benin_protected_area()


scheduler.start()
# koapy/backend/kiwoom_open_api_w/core/KiwoomOpenApiWQAxWidgetMixin.py
# from resoliwan/koapy (MIT)
class KiwoomOpenApiWQAxWidgetMixin:
    pass
# build/lib/Normalizer/__init__.py
# from Vyzrala/Data-Preprocessor (MIT)
from .Normalizer import Normalizer
# __init__.py
# from Ayrx/screenshot_ninja (MIT)
from .core import get_active_view_image, get_active_window_image
from . import frontend

__all__ = ["get_active_view_image", "get_active_window_image"]

frontend.register()
# oceanspy/tests/test_oceanspy.py
# from rabernat/oceanspy (MIT)
import pytest
import oceanspy as ospy
# locuszoom_plotting_service/base/util.py
# from abought/locuszoom-hosted (MIT)
import random

from django.db import models


def _generate_slug():
    """Generate a random numeric string of up to 6 digits, for use as "slugs" (external-facing record IDs)"""
    return str(random.randrange(1, 10 ** 6))
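Note that `random.randrange(1, 1e6)` yields 1..999999, so the slug can be shorter than 6 digits (and the float bound is rejected by `randrange` on recent Python versions). A hedged variant (not from the project, name is illustrative) that guarantees a fixed width by zero-padding the draw:

```python
import random

def generate_fixed_width_slug():
    # Draw 0..999999 and left-pad with zeros to exactly 6 characters.
    return f"{random.randrange(0, 10 ** 6):06d}"
```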
# epytope/Data/pssms/smmpmbec/mat/B_42_01_10.py
# from christopher-mohr/epytope (BSD-3-Clause)
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | B_42_01_10 = {0: {'A': 0.044, 'C': -0.015, 'E': -0.074, 'D': -0.055, 'G': 0.004, 'F': -0.133, 'I': 0.309, 'H': -0.09, 'K': 0.015, 'M': 0.117, 'L': 0.165, 'N': 0.127, 'Q': -0.131, 'P': -0.046, 'S': 0.087, 'R': -0.37, 'T': -0.067, 'W': -0.042, 'V': 0.276, 'Y': -0.122}, 1: {'A': 0.646, 'C': -0.075, 'E': -0.124, 'D': -0.255, 'G': -0.046, 'F': 0.049, 'I': -0.011, 'H': 0.096, 'K': 0.303, 'M': 0.185, 'L': 0.151, 'N': -0.106, 'Q': -0.242, 'P': -1.14, 'S': 0.304, 'R': 0.263, 'T': 0.168, 'W': -0.407, 'V': 0.143, 'Y': 0.097}, 2: {'A': 0.011, 'C': -0.004, 'E': -0.007, 'D': -0.006, 'G': -0.008, 'F': -0.004, 'I': -0.005, 'H': 0.006, 'K': 0.014, 'M': 0.002, 'L': -0.002, 'N': -0.007, 'Q': 0.001, 'P': -0.001, 'S': 0.001, 'R': 0.026, 'T': -0.003, 'W': -0.009, 'V': -0.008, 'Y': 0.001}, 3: {'A': -0.007, 'C': 0.001, 'E': -0.0, 'D': -0.002, 'G': -0.001, 'F': 0.01, 'I': 0.001, 'H': 0.004, 'K': 0.006, 'M': 0.005, 'L': 0.004, 'N': -0.003, 'Q': -0.003, 'P': -0.012, 'S': -0.004, 'R': 0.009, 'T': -0.01, 'W': 0.003, 'V': -0.007, 'Y': 0.005}, 4: {'A': 0.256, 'C': -0.203, 'E': 0.192, 'D': -0.012, 'G': -0.312, 'F': -0.269, 'I': -0.283, 'H': 0.079, 'K': 0.426, 'M': -0.138, 'L': -0.03, 'N': -0.335, 'Q': 0.104, 'P': 0.044, 'S': 0.091, 'R': 0.625, 'T': 0.09, 'W': -0.495, 'V': 0.015, 'Y': 0.154}, 5: {'A': 0.019, 'C': 0.004, 'E': 0.002, 'D': 0.007, 'G': 0.011, 'F': 0.013, 'I': -0.02, 'H': 0.011, 'K': 0.011, 'M': 0.006, 'L': -0.022, 'N': -0.0, 'Q': -0.017, 'P': -0.014, 'S': 0.005, 'R': 0.019, 'T': -0.008, 'W': -0.014, 'V': -0.022, 'Y': 0.008}, 6: {'A': 0.037, 'C': 0.037, 'E': 0.043, 'D': 0.03, 'G': 0.023, 'F': 0.074, 'I': 0.032, 'H': -0.048, 'K': -0.085, 'M': 0.001, 'L': 0.033, 'N': -0.007, 'Q': -0.013, 'P': 0.019, 'S': -0.024, 'R': -0.162, 'T': -0.02, 'W': 0.009, 'V': 0.005, 'Y': 0.015}, 7: {'A': 0.004, 'C': 0.001, 'E': 0.002, 'D': -0.008, 'G': 0.001, 'F': 0.001, 'I': 0.008, 'H': -0.003, 'K': -0.004, 'M': 0.003, 'L': 0.005, 'N': -0.001, 'Q': 
0.001, 'P': -0.003, 'S': 0.002, 'R': -0.002, 'T': 0.0, 'W': -0.004, 'V': 0.005, 'Y': -0.009}, 8: {'A': -0.097, 'C': 0.127, 'E': 0.207, 'D': 0.007, 'G': 0.298, 'F': 0.052, 'I': -0.177, 'H': 0.209, 'K': 0.397, 'M': -0.222, 'L': -0.102, 'N': -0.003, 'Q': 0.063, 'P': -0.182, 'S': -0.424, 'R': 0.354, 'T': -0.148, 'W': 0.065, 'V': -0.122, 'Y': -0.303}, 9: {'A': 0.122, 'C': 0.068, 'E': 0.03, 'D': 0.205, 'G': 0.382, 'F': -0.03, 'I': -0.821, 'H': 0.261, 'K': 0.346, 'M': -0.638, 'L': -1.286, 'N': 0.179, 'Q': 0.003, 'P': 0.096, 'S': 0.689, 'R': -0.142, 'T': 0.376, 'W': -0.049, 'V': -0.356, 'Y': 0.566}, -1: {'con': 4.04951}} | 2,558 | 2,558 | 0.395622 | 618 | 2,558 | 1.632686 | 0.231392 | 0.051536 | 0.009911 | 0.011893 | 0.140733 | 0 | 0 | 0 | 0 | 0 | 0 | 0.375466 | 0.161845 | 2,558 | 1 | 2,558 | 2,558 | 0.095149 | 0 | 0 | 0 | 0 | 0 | 0.079328 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2a658d837d525ca568714aa14a0ca258a2f2548d | 96 | py | Python | venv/lib/python3.8/site-packages/poetry/core/_vendor/jsonschema/_types.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/poetry/core/_vendor/jsonschema/_types.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/poetry/core/_vendor/jsonschema/_types.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/5f/b2/aa/a896a5215b364b5c65552d72e9296b4866538d23172fefc5e419a1796d | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.458333 | 0 | 96 | 1 | 96 | 96 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aa60a25a048e2b35a652089aa58fc7a2349d4728 | 98 | py | Python | insomnia/__init__.py | takeru1205/Insomnia | 72f78db5dc7b9c6e494f31408e0a011606275291 | [
"MIT"
] | null | null | null | insomnia/__init__.py | takeru1205/Insomnia | 72f78db5dc7b9c6e494f31408e0a011606275291 | [
"MIT"
] | 3 | 2019-12-02T01:59:09.000Z | 2020-12-15T09:44:33.000Z | insomnia/__init__.py | takeru1205/Insomnia | 72f78db5dc7b9c6e494f31408e0a011606275291 | [
"MIT"
] | null | null | null | from . import models
from . import networks
# from . import agents
from . import replay_buffers
| 14 | 28 | 0.755102 | 13 | 98 | 5.615385 | 0.538462 | 0.547945 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193878 | 98 | 6 | 29 | 16.333333 | 0.924051 | 0.204082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aaa32ecae570d212f73a8a665d43f694a7c5f4ad | 24 | py | Python | evaluation/cider/__init__.py | daveredrum/meshed-memory-transformer | 6dfbc2ba241b7c1c8deac6114d66542190a77619 | [
"BSD-3-Clause"
] | 401 | 2019-12-19T02:44:28.000Z | 2022-03-27T13:36:18.000Z | evaluation/cider/__init__.py | daveredrum/meshed-memory-transformer | 6dfbc2ba241b7c1c8deac6114d66542190a77619 | [
"BSD-3-Clause"
] | 75 | 2019-12-24T11:52:17.000Z | 2022-03-21T09:23:45.000Z | evaluation/cider/__init__.py | daveredrum/meshed-memory-transformer | 6dfbc2ba241b7c1c8deac6114d66542190a77619 | [
"BSD-3-Clause"
] | 115 | 2019-12-19T15:00:11.000Z | 2022-03-19T14:29:40.000Z | from .cider import Cider | 24 | 24 | 0.833333 | 4 | 24 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 24 | 1 | 24 | 24 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aaa820b3b0f81ea52e5d86be2b0fbcf222930874 | 13,138 | py | Python | test/test_command.py | TorkamaniLab/metapipe | 15592e5b0c217afb00ac03503f8d0d7453d4baf4 | [
"MIT"
] | 11 | 2016-01-26T06:47:05.000Z | 2022-02-23T19:12:00.000Z | test/test_command.py | TorkamaniLab/metapipe | 15592e5b0c217afb00ac03503f8d0d7453d4baf4 | [
"MIT"
] | 44 | 2016-01-08T00:46:47.000Z | 2016-04-13T00:46:47.000Z | test/test_command.py | TorkamaniLab/metapipe | 15592e5b0c217afb00ac03503f8d0d7453d4baf4 | [
"MIT"
] | 4 | 2015-10-30T19:24:13.000Z | 2020-01-25T02:56:53.000Z | """ Tests for the command class. """
try:
from unittest.mock import Mock, PropertyMock, patch
except ImportError:
from mock import Mock, PropertyMock, patch
import sure
from .fixtures import *
from metapipe.parser import Parser
from metapipe.models import *
def test_eval_1():
parser = Parser(overall)
cmds = parser.consume()
cmds[0].eval()[0].eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'/usr/bin/python somescript.py -i '
'somefile.1 somefile.2 somefile.3 -o mp.1.1.output '
'-fgh somefile.txt')
def test_eval_2():
parser = Parser(overall)
cmds = parser.consume()
cmds[0].eval()[1].eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'/usr/bin/python somescript.py -i '
'somefile.4 somefile.5 somefile.6 -o mp.1.2.output '
'-fgh somefile.txt')
def test_eval_3():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:1]:
old_commands.extend(cmd.eval())
cmd = cmds[1].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'/usr/bin/bash somescript.sh -i mp.1.1.output'
' -o mp.2.1.output -fgh somefile.txt')
def test_eval_4():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:1]:
old_commands.extend(cmd.eval())
cmd = cmds[1].eval()[1]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'/usr/bin/bash somescript.sh -i mp.1.2.output'
' -o mp.2.2.output -fgh somefile.txt')
def test_eval_5():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:2]:
old_commands.extend(cmd.eval())
cmd = cmds[2].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'/usr/bin/ruby somescript.rb -i mp.2.1.output'
' >> somefile')
def test_eval_6():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:2]:
old_commands.extend(cmd.eval())
cmd = cmds[2].eval()[1]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'/usr/bin/ruby somescript.rb -i mp.2.2.output'
' >> somefile')
def test_eval_7():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:2]:
old_commands.extend(cmd.eval())
cmd = cmds[2].eval()[2]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n/usr/bin/ruby somescript.rb -i '
'mp.1.1.output mp.1.2.output >> somefile')
def test_eval_8():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:3]:
old_commands.extend(cmd.eval())
cmd = cmds[3].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'cut -f *.counts > something.file')
def test_eval_9():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:4]:
old_commands.extend(cmd.eval())
cmd = cmds[4].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'paste *.counts > some.file # some.file')
def test_eval_10():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:5]:
old_commands.extend(cmd.eval())
cmd = cmds[5].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'./somescript somefile.1 somefile.2 '
'somefile.3 somefile.4')
def test_eval_11():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:5]:
old_commands.extend(cmd.eval())
cmd = cmds[5].eval()[1]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'./somescript somefile.1.counts somefile.2.counts '
'somefile.3.counts somefile.4.counts')
def test_eval_12():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:6]:
old_commands.extend(cmd.eval())
cmd = cmds[6].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'/usr/bin/ruby somescript.rb -i somefile.1.counts')
def test_eval_13():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:6]:
old_commands.extend(cmd.eval())
cmd = cmds[6].eval()[1]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'/usr/bin/ruby somescript.rb -i somefile.2.counts')
def test_eval_14():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:6]:
old_commands.extend(cmd.eval())
cmd = cmds[6].eval()[2]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'/usr/bin/ruby somescript.rb -i somefile.3.counts')
def test_eval_14b():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:6]:
old_commands.extend(cmd.eval())
cmd = cmds[6].eval()[3]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'/usr/bin/ruby somescript.rb -i somefile.4.counts')
def test_eval_15():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:7]:
old_commands.extend(cmd.eval())
cmd = cmds[7].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'/usr/bin/python somescript.py -i somefile.1.counts'
' somefile.2.counts somefile.3.counts somefile.4.counts # *.bam')
def test_eval_16():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:8]:
old_commands.extend(cmd.eval())
cmd = cmds[8].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('#PBS_O_WORKDIR=~/someuser\nset -e;'
'\nmodule load python\n# do something\n'
'cat somefile.1.bam somefile.2.bam somefile.bam')
def test_eval_16_deps():
parser = Parser(overall)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:8]:
old_commands.extend(cmd.eval())
cmd = cmds[8].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.depends_on.should.have.length_of(1)
def test_eval_multiple_inputs():
parser = Parser(multiple_inputs)
cmds = parser.consume()
old_commands = []
cmd = cmds[0].eval()[0]
print(cmd)
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('bash somescript somefile.1 --conf somefile.4 > '
'mp.1.1.output')
def test_multiple_outputs1():
parser = Parser(multiple_outputs)
cmds = parser.consume()
old_commands = []
cmd = cmds[0].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('bash somescript somefile.1 --log'
' mp.1.1-1.output -r mp.1.1-2.output')
def test_multiple_outputs2():
parser = Parser(multiple_outputs)
cmds = parser.consume()
old_commands = []
cmd = cmds[1].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('python somescript.py somefile.4 somefile.5 '
'somefile.6 --log mp.2.1-1.output -r mp.2.1-2.output '
'--output mp.2.1-3.output')
def test_another_sample_pipeline():
parser = Parser(another_sample)
cmds = parser.consume()
old_commands = []
cmd = cmds[0].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('# Trimmomatic\n'
'java -jar Trimmomatic-0.35/trimmomatic-0.35.jar '
'PE somefile.1 somefile.2 mp.1.1-1.output mp.1.1-2.output '
'mp.1.1-3.output mp.1.1-4.output '
'ILLUMINACLIP:Trimmomatic-0.35/adapters/TruSeq3-PE.fa:2:30:10:2:true '
'LEADING:3 TRAILING:3')
def test_another_sample_pipeline_1():
parser = Parser(another_sample)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:1]:
old_commands.extend(cmd.eval())
cmd = cmds[1].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('# Unzip the outputs from trimmomatic\n'
'gzip --stdout -d mp.1.1-1.output > '
'mp.2.1.output')
def test_another_sample_pipeline_1_deps():
parser = Parser(another_sample)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:1]:
old_commands.extend(cmd.eval())
cmd = cmds[1].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.depends_on.should.have.length_of(1)
cmd.depends_on[0].should.equal('1.1')
def test_another_sample_pipeline_2():
parser = Parser(another_sample)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:2]:
old_commands.extend(cmd.eval())
cmd = cmds[2].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('# Cutadapt\n# cutadapt needs unzipped fastq '
'files\n~/.local/bin/cutadapt --cut 7 -o '
'mp.3.1.output mp.2.1.output')
def test_another_sample_pipeline_3():
parser = Parser(another_sample)
cmds = parser.consume()
old_commands = []
for cmd in cmds[0:2]:
old_commands.extend(cmd.eval())
cmd = cmds[2].eval()[1]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('# Cutadapt\n# cutadapt needs unzipped fastq '
'files\n~/.local/bin/cutadapt --cut 7 -o '
'mp.3.2.output mp.2.2.output')
def test_long_running_1():
parser = Parser(long_running)
old_commands = []
templates = parser.consume()
cmd = templates[0].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('cat somefile.1 > mp.1.1.output && sleep 1')
def test_long_running_2():
parser = Parser(long_running)
templates = parser.consume()
old_commands = []
for cmd in templates[0:1]:
old_commands.extend(cmd.eval())
cmd = templates[1].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('cat mp.1.1.output && '
'sleep 1')
def test_full_output_file_name():
parser = Parser(full_output_file_name)
templates = parser.consume()
old_commands = []
cmd = templates[0].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('gzip --stdout somefile.1 > mp.1.1.output.gz')
def test_full_output_file_name_2():
parser = Parser(full_output_file_name)
templates = parser.consume()
old_commands = []
for cmd in templates[0:1]:
old_commands.extend(cmd.eval())
cmd = templates[1].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('cat mp.1.1.output.gz > mp.2.1.output.gz')
def test_magical_glob():
parser = Parser(magical_glob)
templates = parser.consume()
old_commands = []
for cmd in templates[0:1]:
old_commands.extend(cmd.eval())
with patch('metapipe.models.Input.files', new_callable=PropertyMock) as mock_files:
mock_files.return_value = ['mp.1.1.output', 'mp.1.2.output']
cmd = templates[1].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('cat mp.1.1.output mp.1.2.output > mp.2.1.output')
def test_magical_glob2():
parser = Parser(magical_glob2)
templates = parser.consume()
old_commands = []
for cmd in templates[0:1]:
old_commands.extend(cmd.eval())
with patch('metapipe.models.Input.files', new_callable=PropertyMock) as mock_files:
mock_files.return_value = ['mp.1.1.output', 'mp.1.2.output']
cmd = templates[1].eval()[0]
cmd.update_dependent_files(old_commands)
cmd.eval().should.equal('cat mp.1.1.output > mp.2.1.output')
| 28.132762 | 87 | 0.641726 | 1,904 | 13,138 | 4.274685 | 0.075105 | 0.113527 | 0.060204 | 0.084777 | 0.879961 | 0.856739 | 0.81902 | 0.804399 | 0.781054 | 0.766925 | 0 | 0.028834 | 0.205435 | 13,138 | 466 | 88 | 28.193133 | 0.750838 | 0.002131 | 0 | 0.692082 | 0 | 0.014663 | 0.263929 | 0.057091 | 0 | 0 | 0 | 0 | 0 | 1 | 0.093842 | false | 0 | 0.020528 | 0 | 0.11437 | 0.002933 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
631637b0f42e61a82ad3cbe197fe649785d61884 | 148 | py | Python | backend/coreapp/util.py | TGEnigma/decomp.me | 7613af64065b58d89235d15c0378ad4911f3b3fc | [
"MIT"
] | null | null | null | backend/coreapp/util.py | TGEnigma/decomp.me | 7613af64065b58d89235d15c0378ad4911f3b3fc | [
"MIT"
] | null | null | null | backend/coreapp/util.py | TGEnigma/decomp.me | 7613af64065b58d89235d15c0378ad4911f3b3fc | [
"MIT"
] | null | null | null | import hashlib
from typing import Tuple
def gen_hash(key: Tuple[str, ...]) -> str:
return hashlib.sha256(str(key).encode('utf-8')).hexdigest()
| 24.666667 | 63 | 0.695946 | 22 | 148 | 4.636364 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031008 | 0.128378 | 148 | 5 | 64 | 29.6 | 0.75969 | 0 | 0 | 0 | 0 | 0 | 0.033784 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
2d547e7dd45cd6a684eb0ba908f91420c0331c85 | 58 | py | Python | R3.py | kevprakash/R3 | 1e897bc03bad0da0aa10c9c0d193d9740ed1504a | [
"MIT"
] | null | null | null | R3.py | kevprakash/R3 | 1e897bc03bad0da0aa10c9c0d193d9740ed1504a | [
"MIT"
] | null | null | null | R3.py | kevprakash/R3 | 1e897bc03bad0da0aa10c9c0d193d9740ed1504a | [
"MIT"
] | null | null | null | import UI #Quick hack to make it run from a file called R3 | 58 | 58 | 0.775862 | 13 | 58 | 3.461538 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021739 | 0.206897 | 58 | 1 | 58 | 58 | 0.956522 | 0.810345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2d6d6a320f2c7ff10d9e2bafed7371ababbe806f | 51 | py | Python | bin/user_login_transfer.py | RobertSchaffer1/lsdc | 010a26f98bec690f8c2cf47b02764c69ce26c2c5 | [
"BSD-3-Clause"
] | null | null | null | bin/user_login_transfer.py | RobertSchaffer1/lsdc | 010a26f98bec690f8c2cf47b02764c69ce26c2c5 | [
"BSD-3-Clause"
] | 147 | 2020-04-10T20:31:49.000Z | 2022-03-22T17:29:52.000Z | bin/user_login_transfer.py | JunAishima/lsdc | 2a68be66642b14a0440182954bcb513c82874ca1 | [
"BSD-3-Clause"
] | 10 | 2020-09-25T20:34:55.000Z | 2021-10-06T19:11:18.000Z | client_id = 'a659c8ba-4645-40c0-ae55-3bba34728c7a'
| 25.5 | 50 | 0.803922 | 7 | 51 | 5.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 0.058824 | 51 | 1 | 51 | 51 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0.705882 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2d70c9e166d10d8d1f65be1db047ef7b8cc8a7a5 | 164 | py | Python | scattertext/viz/__init__.py | tigerneil/scattertext | 23351895ada347fae300bf910c2c77f47ac58a35 | [
"Apache-2.0"
] | 1 | 2020-08-11T03:27:28.000Z | 2020-08-11T03:27:28.000Z | scattertext/viz/__init__.py | tigerneil/scattertext | 23351895ada347fae300bf910c2c77f47ac58a35 | [
"Apache-2.0"
] | null | null | null | scattertext/viz/__init__.py | tigerneil/scattertext | 23351895ada347fae300bf910c2c77f47ac58a35 | [
"Apache-2.0"
] | 1 | 2020-01-08T00:25:31.000Z | 2020-01-08T00:25:31.000Z | from .HTMLVisualizationAssembly import HTMLVisualizationAssembly
from .VizDataAdapter import VizDataAdapter
from .HTMLSemioticSquareViz import HTMLSemioticSquareViz | 54.666667 | 64 | 0.914634 | 12 | 164 | 12.5 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067073 | 164 | 3 | 65 | 54.666667 | 0.980392 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2d89cdc45ef9040996f748c69049debb4b3e79f1 | 15,979 | py | Python | Tests/Plot/test_Lam_Mag_inset_plot.py | IrakozeFD/pyleecan | 5a93bd98755d880176c1ce8ac90f36ca1b907055 | [
"Apache-2.0"
] | null | null | null | Tests/Plot/test_Lam_Mag_inset_plot.py | IrakozeFD/pyleecan | 5a93bd98755d880176c1ce8ac90f36ca1b907055 | [
"Apache-2.0"
] | null | null | null | Tests/Plot/test_Lam_Mag_inset_plot.py | IrakozeFD/pyleecan | 5a93bd98755d880176c1ce8ac90f36ca1b907055 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from os.path import join
import pytest
import matplotlib.pyplot as plt
from numpy import pi
from pyleecan.Classes.Frame import Frame
from pyleecan.Classes.LamSlotMag import LamSlotMag
from pyleecan.Classes.Lamination import Lamination
from pyleecan.Classes.SlotM10 import SlotM10
from pyleecan.Classes.SlotM11 import SlotM11
from pyleecan.Classes.SlotM12 import SlotM12
from pyleecan.Classes.SlotM13 import SlotM13
from pyleecan.Classes.SlotM14 import SlotM14
from pyleecan.Classes.SlotM15 import SlotM15
from pyleecan.Classes.SlotM16 import SlotM16
from pyleecan.Classes.Shaft import Shaft
from pyleecan.Classes.VentilationCirc import VentilationCirc
from pyleecan.Classes.VentilationTrap import VentilationTrap
from pyleecan.Classes.MatMagnetics import MatMagnetics
from Tests import save_plot_path as save_path
@pytest.mark.PLOT
class Test_Lam_Mag_inset_plot(object):
"""pytest for Lamination with inset magnet plot"""
def test_Lam_Mag_10_inset(self):
"""Test machine plot with SlotM10 inset"""
plt.close("all")
rotor = LamSlotMag(
Rint=40e-3,
Rext=100e-3,
is_internal=True,
is_stator=False,
L1=0.45,
Nrvd=1,
Wrvd=0.05,
)
rotor.magnet.Lmag = 0.5
rotor.slot = SlotM10(Zs=4, W0=0.04, H0=0.02, Hmag=0.02, Wmag=0.04)
rotor.mat_type.mag = MatMagnetics(Wlam=0.5e-3)
rotor.axial_vent.append(VentilationCirc(Zh=4, Alpha0=0, D0=2.5e-3, H0=50e-3))
rotor.axial_vent.append(VentilationCirc(Zh=8, Alpha0=0, D0=5e-3, H0=60e-3))
rotor.axial_vent.append(VentilationCirc(Zh=12, Alpha0=0, D0=10e-3, H0=70e-3))
stator = LamSlotMag(
Rint=110e-3,
Rext=200e-3,
is_internal=False,
is_stator=True,
L1=0.45,
Nrvd=1,
Wrvd=0.05,
)
stator.magnet.Lmag = 0.5
stator.slot = SlotM10(Zs=8, W0=0.04, Hmag=0.02, Wmag=0.04, H0=0.02)
stator.mat_type.mag = MatMagnetics(Wlam=0.5e-3)
stator.axial_vent.append(
VentilationTrap(Zh=6, Alpha0=pi / 6, W1=10e-3, W2=20e-3, D0=0.02, H0=0.140)
)
stator.axial_vent.append(
VentilationTrap(Zh=6, Alpha0=pi / 6, W1=20e-3, W2=40e-3, D0=0.02, H0=0.170)
)
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 30
fig.savefig(join(save_path, "test_Lam_Mag_10i_1-Rotor.png"))
stator.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 22
fig.savefig(join(save_path, "test_Lam_Mag_10i_2-Stator.png"))
rotor.slot.Hmag = rotor.slot.Hmag * 1.2
rotor.slot.Wmag = rotor.slot.Wmag * 0.5
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 30
fig.savefig(join(save_path, "test_Lam_Mag_10i_3-Rotor_missmatch.png"))
rotor.magnet = None
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 26
fig.savefig(join(save_path, "test_Lam_Mag_10i_4-Rotor_no_mag.png"))
@pytest.mark.skip(reason="No multi magnet for now")
def test_Lam_Mag_10_inset_2_mag(self):
"""Test machine plot with Magnet 10 inset with two magnet in the slot"""
plt.close("all")
rotor = LamSlotMag(
Rint=40e-3,
Rext=100e-3,
is_internal=True,
is_stator=False,
L1=0.45,
Nrvd=1,
Wrvd=0.05,
)
rotor.slot = SlotMFlat(
Zs=4,
W0=0.03,
H0=0.02,
W3=2 * pi / 60,
magnet=[
SlotM10(Lmag=0.5, Hmag=0.015, Wmag=0.03),
SlotM10(Lmag=0.5, Hmag=0.015, Wmag=0.03),
],
)
rotor.mat_type.mag = MatMagnetics(Wlam=0.5e-3)
rotor.axial_vent.append(VentilationCirc(Zh=4, Alpha0=0, D0=2.5e-3, H0=50e-3))
rotor.axial_vent.append(VentilationCirc(Zh=8, Alpha0=0, D0=5e-3, H0=60e-3))
rotor.axial_vent.append(VentilationCirc(Zh=12, Alpha0=0, D0=10e-3, H0=70e-3))
stator = LamSlotMag(
Rint=110e-3,
Rext=200e-3,
is_internal=False,
is_stator=True,
L1=0.45,
Nrvd=1,
Wrvd=0.05,
)
stator.slot = SlotMFlat(
Zs=8,
W0=0.03,
W3=2 * pi / 64,
H0=0.02,
magnet=[
SlotM10(Lmag=0.5, Hmag=0.025, Wmag=0.03),
SlotM10(Lmag=0.5, Hmag=0.025, Wmag=0.03),
],
)
stator.mat_type.mag = MatMagnetics(Wlam=0.5e-3)
stator.axial_vent.append(
VentilationTrap(Zh=6, Alpha0=pi / 6, W1=10e-3, W2=20e-3, D0=0.02, H0=0.140)
)
stator.axial_vent.append(
VentilationTrap(Zh=6, Alpha0=pi / 6, W1=20e-3, W2=40e-3, D0=0.02, H0=0.170)
)
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 34
fig.savefig(join(save_path, "test_Lam_Mag_10i_2_Mag_2-Rotor.png"))
stator.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 30
fig.savefig(join(save_path, "test_Lam_Mag_10i_3_Mag_2-Stator.png"))
def test_Lam_Mag_11_inset(self):
"""Test machine plot with Magnet 11 inset"""
plt.close("all")
rotor = LamSlotMag(
Rint=40e-3,
Rext=90e-3,
is_internal=True,
is_stator=False,
L1=0.4,
Nrvd=2,
Wrvd=0.05,
)
rotor.mat_type.mag = MatMagnetics(Wlam=0.5e-3)
rotor.magnet.Lmag = 0.5
rotor.slot = SlotM11(Zs=8, W0=pi / 8, H0=0.01, Hmag=0.01, Wmag=pi / 8)
stator = LamSlotMag(
Rint=115e-3,
Rext=200e-3,
is_internal=False,
is_stator=True,
L1=0.4,
Nrvd=2,
Wrvd=0.05,
)
stator.magnet.Lmag = 0.35
stator.slot = SlotM11(
Zs=4,
W0=pi / 4,
Hmag=0.03,
Wmag=pi / 4,
H0=0.02,
)
stator.mat_type.mag = MatMagnetics(Wlam=0.5e-3)
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 10
fig.savefig(join(save_path, "test_Lam_Mag_11i_1-Rotor.png"))
stator.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 6
fig.savefig(join(save_path, "test_Lam_Mag_11i_2-Stator.png"))
rotor.slot.Hmag = rotor.slot.Hmag * 1.2
rotor.slot.Wmag = rotor.slot.Wmag * 0.5
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 10
fig.savefig(join(save_path, "test_Lam_Mag_11i_3-Rotor_missmatch.png"))
rotor.magnet = None
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 2
fig.savefig(join(save_path, "test_Lam_Mag_11i_4-Rotor_no_mag.png"))
@pytest.mark.skip(reason="Only one magnet for now")
def test_Lam_Mag_11_inset_2_mag(self):
"""Test machine plot with Magnet 11 inset with two magnet in the slot"""
plt.close("all")
rotor = LamSlotMag(
Rint=40e-3,
Rext=90e-3,
is_internal=True,
is_stator=False,
L1=0.4,
Nrvd=2,
Wrvd=0.05,
)
rotor.mat_type.mag = MatMagnetics(Wlam=0.5e-3)
rotor.slot = SlotMPolar(
Zs=8,
W0=pi / 12,
H0=0.01,
W3=pi / 18,
magnet=[
SlotM11(Lmag=0.5, Hmag=0.01, Wmag=pi / 12),
SlotM11(Lmag=0.5, Hmag=0.01, Wmag=pi / 12),
],
)
stator = LamSlotMag(
Rint=115e-3,
Rext=200e-3,
is_internal=False,
is_stator=True,
L1=0.4,
Nrvd=2,
Wrvd=0.05,
)
stator.slot = SlotMPolar(
Zs=4,
W0=pi / 10,
H0=0.02,
W3=2 * pi / 50,
magnet=[
SlotM11(Lmag=0.35, Hmag=0.03, Wmag=pi / 10),
SlotM11(Lmag=0.35, Hmag=0.03, Wmag=pi / 10),
],
)
stator.mat_type.mag = MatMagnetics(Wlam=0.5e-3)
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 18
fig.savefig(join(save_path, "test_Lam_Mag_11i_2_Mag_2-Rotor.png"))
stator.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 10
fig.savefig(join(save_path, "test_Lam_Mag_11i_3_Mag_2-Stator.png"))
def test_Lam_Mag_12_inset(self):
"""Test machine plot with Magnet 12 inset"""
plt.close("all")
rotor = LamSlotMag(
Rint=40e-3,
Rext=90e-3,
is_internal=True,
is_stator=False,
L1=0.35,
Nrvd=3,
Wrvd=0.05,
)
rotor.magnet.Lmag = 0.5
rotor.slot = SlotM12(Zs=8, W0=0.04, H0=0.02, Hmag=0.02, Wmag=0.04)
rotor.mat_type.mag = MatMagnetics(Wlam=0.5e-3)
stator = LamSlotMag(
Rint=110e-3,
Rext=200e-3,
is_internal=False,
is_stator=True,
L1=0.35,
Nrvd=3,
Wrvd=0.05,
)
stator.magnet.Lmag = 0.5
stator.slot = SlotM12(Zs=4, W0=0.04, H0=0.02, Hmag=0.03, Wmag=0.04)
stator.mat_type.mag = MatMagnetics(Wlam=0.5e-3)
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 10
fig.savefig(join(save_path, "test_Lam_Mag_12i_1-Rotor.png"))
stator.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 6
fig.savefig(join(save_path, "test_Lam_Mag_12i_2-Stator.png"))
rotor.slot.Hmag = rotor.slot.Hmag * 1.2
rotor.slot.Wmag = rotor.slot.Wmag * 0.5
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 10
fig.savefig(join(save_path, "test_Lam_Mag_12i_3-Rotor_missmatch.png"))
rotor.magnet = None
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 2
fig.savefig(join(save_path, "test_Lam_Mag_12i_4-Rotor_no_mag.png"))
def test_Lam_Mag_13_inset(self):
"""Test machine plot with Magnet 13 inset"""
plt.close("all")
rotor = LamSlotMag(
Rint=40e-3,
Rext=90e-3,
is_internal=True,
is_stator=False,
L1=0.42,
Nrvd=4,
Wrvd=0.02,
)
rotor.magnet.Lmag = 0.5
rotor.slot = SlotM13(Zs=8, W0=0.04, H0=0.02, Hmag=0.02, Wmag=0.04, Rtopm=0.04)
rotor.mat_type.mag = MatMagnetics(Wlam=0.5e-3)
stator = LamSlotMag(
Rint=110e-3,
Rext=200e-3,
is_internal=False,
is_stator=True,
L1=0.42,
Nrvd=4,
Wrvd=0.02,
)
stator.magnet.Lmag = 0.5
stator.slot = SlotM13(Zs=4, W0=0.04, H0=0.025, Hmag=0.02, Wmag=0.04, Rtopm=0.04)
stator.mat_type.mag = MatMagnetics(Wlam=0.5e-3)
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 10
fig.savefig(join(save_path, "test_Lam_Mag_13i_1-Rotor.png"))
stator.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 6
fig.savefig(join(save_path, "test_Lam_Mag_13i_2-Stator.png"))
rotor.slot.Wmag = rotor.slot.Wmag * 0.5
rotor.slot.Hmag = rotor.slot.Hmag * 1.4
rotor.slot.Rtopm = rotor.slot.Rtopm * 0.5
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 10
fig.savefig(join(save_path, "test_Lam_Mag_13i_3-Rotor_missmatch.png"))
rotor.magnet = None
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 2
fig.savefig(join(save_path, "test_Lam_Mag_13i_4-Rotor_No_mag.png"))
def test_Lam_Mag_14_inset(self):
"""Test machine plot with Magnet 14 inset"""
plt.close("all")
rotor = LamSlotMag(
Rint=40e-3,
Rext=90e-3,
is_internal=True,
is_stator=False,
L1=0.4,
Nrvd=5,
Wrvd=0.02,
)
rotor.magnet.Lmag = 0.5
rotor.slot = SlotM14(Zs=4, W0=0.628, H0=0.02, Hmag=0.02, Wmag=0.628, Rtopm=0.04)
rotor.mat_type.mag = MatMagnetics(Wlam=0.5e-3)
stator = Lamination(
Rint=130e-3,
Rext=0.2,
is_internal=False,
is_stator=True,
L1=0.4,
Nrvd=5,
Wrvd=0.02,
)
stator.mat_type.mag = MatMagnetics(Wlam=0.5e-3)
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 6
fig.savefig(join(save_path, "test_Lam_Mag_14i_1-Rotor.png"))
rotor.slot.Wmag = rotor.slot.Wmag * 0.5
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 6
fig.savefig(join(save_path, "test_Lam_Mag_14i_2-Rotor_missmatch.png"))
rotor.magnet = None
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 2
fig.savefig(join(save_path, "test_Lam_Mag_14i_3-Rotor_no_mag.png"))
def test_Lam_Mag_15_inset(self):
"""Test machine plot with Magnet 15 inset"""
plt.close("all")
mm = 1e-3
rotor = LamSlotMag(Rint=40 * mm, Rext=110 * mm, is_internal=True)
rotor.slot = SlotM15(
Zs=4,
W0=80 * pi / 180,
H0=10 * mm,
Hmag=20 * mm,
Wmag=100 * mm,
Rtopm=100 * mm,
)
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 6
fig.savefig(join(save_path, "test_Lam_Mag_15i_1-Rotor.png"))
rotor.slot.Wmag = rotor.slot.Wmag * 0.5
rotor.slot.Rtopm = rotor.slot.Rtopm * 0.5
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 6
fig.savefig(join(save_path, "test_Lam_Mag_15i_2-Rotor_missmatch.png"))
rotor.magnet = None
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 2
fig.savefig(join(save_path, "test_Lam_Mag_15i_3-Rotor_No_mag.png"))
def test_Lam_Mag_16_inset(self):
"""Test machine plot with SlotM10 inset"""
plt.close("all")
rotor = LamSlotMag(
Rint=80e-3,
Rext=200e-3,
is_internal=True,
is_stator=False,
)
rotor.slot = SlotM16(Zs=4, W0=0.02, H0=0.02, H1=0.08, W1=0.04)
stator = LamSlotMag(
Rint=220e-3,
Rext=400e-3,
is_internal=False,
is_stator=True,
)
stator.slot = SlotM16(Zs=8, W0=0.02, H0=0.02, H1=0.08, W1=0.04)
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 6
fig.savefig(join(save_path, "test_Lam_Mag_16i_1-Rotor.png"))
stator.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 10
fig.savefig(join(save_path, "test_Lam_Mag_16i_2-Stator.png"))
rotor.magnet = None
rotor.plot(is_show_fig=False)
fig = plt.gcf()
assert len(fig.axes[0].patches) == 2
fig.savefig(join(save_path, "test_Lam_Mag_16i_3-Rotor_no_mag.png"))
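# The assertions above all rely on len(fig.axes[0].patches) to count how many
# surfaces a lamination plot drew. A minimal, self-contained illustration of
# that counting pattern (plain matplotlib, not pyleecan):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Rectangle

fig, ax = plt.subplots()
ax.add_patch(Rectangle((0.0, 0.0), 1.0, 1.0))  # one lamination-like outline
ax.add_patch(Circle((0.5, 0.5), 0.25))         # one magnet-like surface
n_patches = len(fig.axes[0].patches)
print(n_patches)  # 2: one patch per added surface
plt.close(fig)
```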
# ---- src/spaceone/cost_analysis/error/__init__.py (spaceone-dev/plugin-sse-cost-datasource, Apache-2.0) ----
from spaceone.cost_analysis.error.cost import *
# ---- lambdata/__init__.py (ren-curry/lambdata-ren-curry, MIT) ----
"""Lambdata - a collection of Data Science Helper Functions"""
import pandas as pd
import numpy as np
def df_cleaner(df):
    """Will clean a DataFrame of nulls."""
    # Minimal placeholder implementation (the original left only a TODO);
    # dropping rows containing any nulls is one reasonable default.
    return df.dropna()
# ---- src/persist/__init__.py (diegor2/redditbot, Apache-2.0) ----
from .persist import *
# ---- codewars/arraymadness.py (git-bit-code/competetive_coding, MIT) ----
def array_madness(a, b):
    return sum(x**2 for x in a) > sum(y**3 for y in b)
print(array_madness([4, 5, 6], [1, 2, 3]))  # 77 > 36 -> True
fab1405e104900db9f6ad427156af30d2e808024 | 139 | py | Python | nqs/resources/utils.py | eanorambuena/NQS | 494514d91f97d0f626e2981b5a46e6bdc61eec0d | [
"MIT"
] | null | null | null | nqs/resources/utils.py | eanorambuena/NQS | 494514d91f97d0f626e2981b5a46e6bdc61eec0d | [
"MIT"
] | null | null | null | nqs/resources/utils.py | eanorambuena/NQS | 494514d91f97d0f626e2981b5a46e6bdc61eec0d | [
"MIT"
] | null | null | null | import os, json, math
def dump(structure, file):
json.dump(structure, file, indent=2)
def floor(f: float):
    return math.floor(f)
# ---- retina/models/__init__.py (nunenuh/retinaface.pytorch, MIT) ----
from .retina import *
# ---- test.py (ikn/pyepgdb, BSD-3-Clause) ----
import unittest
from test.integration.core import *
from test.integration.dvbtuk import *
if __name__ == '__main__':
unittest.main()
# ---- oscillationtracking/tests/test_version.py (rob-luke/oscillationtracking, BSD-3-Clause) ----
# Authors: Robert Luke <mail@robertluke.net>
#
# License: BSD (3-clause)
import oscillationtracking
def test_version():
print(oscillationtracking.__version__)
# ---- app/__init__.py (RIT-Election-Security/SAVI-registrar, MIT) ----
from .registrar import app
# ---- cyberfile.py (sayonsom/Canvass, MIT) ----
#!/usr/bin/python
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.util import dumpNodeConnections
from mininet.log import setLogLevel
# ---- torchocr/models/necks/fpn.py (hua1024/OpenOCR, Apache-2.0) ----
# coding=utf-8
# @Time : 2020/12/22 14:48
# @Author : zzf-jeff
'''
DB_FPN output channels = 256
PSE_FPN output channels = 256*4 after the concat (a final 3x3 conv fuses them back to 256)
'''
from torch import nn
import torch
import torch.nn.functional as F
from ..builder import NECKS
from ..utils.conv import ConvBnRelu
@NECKS.register_module()
class DB_FPN(nn.Module):
def __init__(self, in_channels, out_channels=256, **kwargs):
super(DB_FPN, self).__init__()
inner_channels = out_channels // 4
        # the "in" convs project each input's channels to out_channels (256)
self.in5 = ConvBnRelu(in_channels[-1], out_channels, kernel_size=1, stride=1, padding=0)
self.in4 = ConvBnRelu(in_channels[-2], out_channels, kernel_size=1, stride=1, padding=0)
self.in3 = ConvBnRelu(in_channels[-3], out_channels, kernel_size=1, stride=1, padding=0)
self.in2 = ConvBnRelu(in_channels[-4], out_channels, kernel_size=1, stride=1, padding=0)
        # the "out" convs reduce channels to 256//4 so the later concat stays at 256;
        # in generic object detection these also act as smoothing layers
self.out5 = ConvBnRelu(out_channels, inner_channels, kernel_size=3, stride=1, padding=1)
self.out4 = ConvBnRelu(out_channels, inner_channels, kernel_size=3, stride=1, padding=1)
self.out3 = ConvBnRelu(out_channels, inner_channels, kernel_size=3, stride=1, padding=1)
self.out2 = ConvBnRelu(out_channels, inner_channels, kernel_size=3, stride=1, padding=1)
def forward(self, x):
c2, c3, c4, c5 = x
in5 = self.in5(c5)
in4 = self.in4(c4)
in3 = self.in3(c3)
in2 = self.in2(c2)
out4 = self._upsample_add(in5, in4) # 1/16
out3 = self._upsample_add(out4, in3) # 1/8
out2 = self._upsample_add(out3, in2) # 1/4
p5 = self._upsample(self.out5(in5), out2) # 1/4
p4 = self._upsample(self.out4(out4), out2) # 1/4
p3 = self._upsample(self.out3(out3), out2) # 1/4
p2 = self.out2(out2) # 1/4
fuse = torch.cat((p5, p4, p3, p2), 1)
return fuse
def _upsample(self, x, y, scale=1):
_, _, H, W = y.size()
# return F.upsample(x, size=(H // scale, W // scale), mode='nearest')
# trt - change
return F.interpolate(x, size=(H // scale, W // scale), mode='bilinear', align_corners=True)
def _upsample_add(self, x, y):
_, _, H, W = y.size()
# return F.upsample(x, size=(H, W), mode='nearest') + y
return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y
def init_weights(self, pretrained=None):
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight.data)
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1.)
m.bias.data.zero_()
@NECKS.register_module()
class PSE_FPN(nn.Module):
def __init__(self, in_channels, out_channels=256, **kwargs):
super(PSE_FPN, self).__init__()
# inner_channels = out_channels // 4
        # the "in" convs project each input's channels to out_channels (256)
self.in5 = ConvBnRelu(in_channels[-1], out_channels, kernel_size=1, stride=1, padding=0)
self.in4 = ConvBnRelu(in_channels[-2], out_channels, kernel_size=1, stride=1, padding=0)
self.in3 = ConvBnRelu(in_channels[-3], out_channels, kernel_size=1, stride=1, padding=0)
self.in2 = ConvBnRelu(in_channels[-4], out_channels, kernel_size=1, stride=1, padding=0)
#
self.out5 = ConvBnRelu(out_channels, out_channels, kernel_size=3, stride=1, padding=1)
self.out4 = ConvBnRelu(out_channels, out_channels, kernel_size=3, stride=1, padding=1)
self.out3 = ConvBnRelu(out_channels, out_channels, kernel_size=3, stride=1, padding=1)
self.out2 = ConvBnRelu(out_channels, out_channels, kernel_size=3, stride=1, padding=1)
self.out_conv = ConvBnRelu(out_channels * 4, out_channels, kernel_size=3, stride=1, padding=1)
def forward(self, x):
c2, c3, c4, c5 = x
in5 = self.in5(c5)
in4 = self.in4(c4)
in3 = self.in3(c3)
in2 = self.in2(c2)
out4 = self._upsample_add(in5, in4) # 1/16
out3 = self._upsample_add(out4, in3) # 1/8
out2 = self._upsample_add(out3, in2) # 1/4
p5 = self._upsample(self.out5(in5), out2)
p4 = self._upsample(self.out4(out4), out2)
p3 = self._upsample(self.out3(out3), out2)
p2 = self.out2(out2)
fuse = torch.cat((p5, p4, p3, p2), 1)
fuse = self.out_conv(fuse)
return fuse
def _upsample(self, x, y, scale=1):
_, _, H, W = y.size()
# return F.upsample(x, size=(H // scale, W // scale), mode='nearest')
return F.interpolate(x, size=(H // scale, W // scale), mode='nearest')
def _upsample_add(self, x, y):
_, _, H, W = y.size()
# return F.upsample(x, size=(H, W), mode='nearest') + y
return F.interpolate(x, size=(H, W), mode='nearest') + y
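# The _upsample_add helpers above implement the core FPN merge: resize the
# coarser map to the finer map's grid, then add elementwise. A dependency-free
# sketch of that idea with nearest-neighbour resizing (illustrative toy code,
# not the torch implementation):

```python
def upsample_nearest(x, out_h, out_w):
    """Nearest-neighbour resize of a 2-D list `x` to (out_h, out_w)."""
    in_h, in_w = len(x), len(x[0])
    return [[x[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
            for r in range(out_h)]

def upsample_add(coarse, fine):
    """Resize `coarse` to `fine`'s shape, then add elementwise."""
    up = upsample_nearest(coarse, len(fine), len(fine[0]))
    return [[u + f for u, f in zip(ur, fr)] for ur, fr in zip(up, fine)]

coarse = [[0.0, 1.0], [2.0, 3.0]]        # e.g. the 1/16-scale map
fine = [[1.0] * 4 for _ in range(4)]     # the 1/8-scale map it merges into
merged = upsample_add(coarse, fine)
print(merged[0][0], merged[3][3])  # 1.0 4.0
```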
# ---- mqtt_as/main.py (XuBovey/micropython-esp32-aliyun, Apache-2.0) ----
import utime
utime.sleep(4)
import clean
# ---- jsl/gym_envs/agents/__init__.py (apoorvagnihotri/JSL, MIT) ----
from . import kalman_filter
# ---- zookeeper/core/__init__.py (sib1/zookeeper, Apache-2.0) ----
from zookeeper.core.cli import cli
from zookeeper.core.component import component, configure
from zookeeper.core.task import task
__all__ = ["component", "configure", "cli", "task"]
# ---- {{cookiecutter.package_name}}/{{cookiecutter.package_name}}/user/__init__.py (cmeadows/fbone-marrow, BSD-3-Clause) ----
from .views import user
# ---- tests/unit/plugins/test_fade_transform_set_select_merge_plugin.py (zerofox-oss/deepstar, BSD-3-Clause-Clear) ----
import mock
import os
import unittest
import cv2
import numpy as np
from deepstar.command_line_route_handlers \
.frame_set_command_line_route_handler import \
FrameSetCommandLineRouteHandler
from deepstar.command_line_route_handlers \
.video_command_line_route_handler import \
VideoCommandLineRouteHandler
from deepstar.filesystem.transform_file import TransformFile
from deepstar.filesystem.transform_set_sub_dir import TransformSetSubDir
from deepstar.models.transform_model import TransformModel
from deepstar.models.transform_set_model import TransformSetModel
from deepstar.plugins.fade_transform_set_select_merge_plugin import \
FadeTransformSetSelectMergePlugin
from .. import deepstar_path
class TestFadeTransformSetSelectMergePlugin(unittest.TestCase):
"""
This class tests the FadeTransformSetSelectMergePlugin class.
"""
def test_transform_set_select_merge_fade(self):
with deepstar_path():
with mock.patch.dict(os.environ, {'DEBUG_LEVEL': '0'}):
route_handler = VideoCommandLineRouteHandler()
video_0001 = os.path.dirname(os.path.realpath(__file__)) + '/../../support/video_0001.mp4' # noqa
route_handler.insert_file(video_0001)
route_handler.select_extract([1])
route_handler = FrameSetCommandLineRouteHandler()
route_handler.select_extract([1], 'transform_set', {})
route_handler.select_extract([1], 'transform_set', {})
FadeTransformSetSelectMergePlugin().transform_set_select_merge([1, 2], {'frame-count': '2'}) # noqa
# db
result = TransformSetModel().select(3)
self.assertEqual(result, (3, 'fade', None, None))
result = TransformModel().list(3)
self.assertEqual(len(result), 8)
self.assertEqual(result[0], (11, 3, 1, None, 0))
self.assertEqual(result[1], (12, 3, 2, None, 0))
self.assertEqual(result[2], (13, 3, 3, None, 0))
self.assertEqual(result[3], (14, 3, None, None, 0))
self.assertEqual(result[4], (15, 3, None, None, 0))
self.assertEqual(result[5], (16, 3, 3, None, 0))
self.assertEqual(result[6], (17, 3, 4, None, 0))
self.assertEqual(result[7], (18, 3, 5, None, 0))
# files
p1 = TransformSetSubDir.path(3)
# transforms
self.assertIsInstance(cv2.imread(TransformFile.path(p1, 11, 'jpg')), np.ndarray) # noqa
self.assertIsInstance(cv2.imread(TransformFile.path(p1, 12, 'jpg')), np.ndarray) # noqa
self.assertIsInstance(cv2.imread(TransformFile.path(p1, 13, 'jpg')), np.ndarray) # noqa
self.assertIsInstance(cv2.imread(TransformFile.path(p1, 14, 'jpg')), np.ndarray) # noqa
self.assertIsInstance(cv2.imread(TransformFile.path(p1, 15, 'jpg')), np.ndarray) # noqa
self.assertIsInstance(cv2.imread(TransformFile.path(p1, 16, 'jpg')), np.ndarray) # noqa
self.assertIsInstance(cv2.imread(TransformFile.path(p1, 17, 'jpg')), np.ndarray) # noqa
self.assertIsInstance(cv2.imread(TransformFile.path(p1, 18, 'jpg')), np.ndarray) # noqa
def test_transform_set_select_merge_fade_rejected(self):
with deepstar_path():
with mock.patch.dict(os.environ, {'DEBUG_LEVEL': '0'}):
route_handler = VideoCommandLineRouteHandler()
video_0001 = os.path.dirname(os.path.realpath(__file__)) + '/../../support/video_0001.mp4' # noqa
route_handler.insert_file(video_0001)
route_handler.select_extract([1])
route_handler = FrameSetCommandLineRouteHandler()
route_handler.select_extract([1], 'transform_set', {})
route_handler.select_extract([1], 'transform_set', {})
transform_model = TransformModel()
transform_model.update(1, rejected=1)
transform_model.update(10, rejected=1)
FadeTransformSetSelectMergePlugin().transform_set_select_merge([1, 2], {'frame-count': '2'}) # noqa
# db
result = TransformSetModel().select(3)
self.assertEqual(result, (3, 'fade', None, None))
result = TransformModel().list(3)
self.assertEqual(len(result), 6)
self.assertEqual(result[0], (11, 3, 2, None, 0))
self.assertEqual(result[1], (12, 3, 3, None, 0))
self.assertEqual(result[2], (13, 3, None, None, 0))
self.assertEqual(result[3], (14, 3, None, None, 0))
self.assertEqual(result[4], (15, 3, 3, None, 0))
self.assertEqual(result[5], (16, 3, 4, None, 0))
# files
p1 = TransformSetSubDir.path(3)
# transforms
self.assertIsInstance(cv2.imread(TransformFile.path(p1, 11, 'jpg')), np.ndarray) # noqa
self.assertIsInstance(cv2.imread(TransformFile.path(p1, 12, 'jpg')), np.ndarray) # noqa
self.assertIsInstance(cv2.imread(TransformFile.path(p1, 13, 'jpg')), np.ndarray) # noqa
self.assertIsInstance(cv2.imread(TransformFile.path(p1, 14, 'jpg')), np.ndarray) # noqa
self.assertIsInstance(cv2.imread(TransformFile.path(p1, 15, 'jpg')), np.ndarray) # noqa
self.assertIsInstance(cv2.imread(TransformFile.path(p1, 16, 'jpg')), np.ndarray) # noqa
def test_transform_set_select_merge_fade_fails_due_to_transform_set_id_count(self): # noqa
with deepstar_path():
with mock.patch.dict(os.environ, {'DEBUG_LEVEL': '0'}):
route_handler = VideoCommandLineRouteHandler()
video_0001 = os.path.dirname(os.path.realpath(__file__)) + '/../../support/video_0001.mp4' # noqa
route_handler.insert_file(video_0001)
route_handler.select_extract([1])
route_handler = FrameSetCommandLineRouteHandler()
route_handler.select_extract([1], 'transform_set', {})
route_handler.select_extract([1], 'transform_set', {})
with self.assertRaises(ValueError):
try:
FadeTransformSetSelectMergePlugin().transform_set_select_merge([1, 2, 3], {}) # noqa
except ValueError as e:
self.assertEqual(str(e), 'Exactly two transform set IDs must be supplied') # noqa
raise e
def test_transform_set_select_merge_fade_fails_due_to_missing_required_option(self): # noqa
with deepstar_path():
with mock.patch.dict(os.environ, {'DEBUG_LEVEL': '0'}):
route_handler = VideoCommandLineRouteHandler()
video_0001 = os.path.dirname(os.path.realpath(__file__)) + '/../../support/video_0001.mp4' # noqa
route_handler.insert_file(video_0001)
route_handler.select_extract([1])
route_handler = FrameSetCommandLineRouteHandler()
route_handler.select_extract([1], 'transform_set', {})
route_handler.select_extract([1], 'transform_set', {})
with self.assertRaises(ValueError):
try:
FadeTransformSetSelectMergePlugin().transform_set_select_merge([1, 2], {}) # noqa
except ValueError as e:
self.assertEqual(str(e), 'The frame-count option is required but was not supplied') # noqa
raise e
def test_transform_set_select_merge_fade_fails_due_to_frame_count_less_than_one(self): # noqa
with deepstar_path():
with mock.patch.dict(os.environ, {'DEBUG_LEVEL': '0'}):
route_handler = VideoCommandLineRouteHandler()
video_0001 = os.path.dirname(os.path.realpath(__file__)) + '/../../support/video_0001.mp4' # noqa
route_handler.insert_file(video_0001)
route_handler.select_extract([1])
route_handler = FrameSetCommandLineRouteHandler()
route_handler.select_extract([1], 'transform_set', {})
route_handler.select_extract([1], 'transform_set', {})
with self.assertRaises(ValueError):
try:
FadeTransformSetSelectMergePlugin().transform_set_select_merge([1, 2], {'frame-count': '0'}) # noqa
except ValueError as e:
self.assertEqual(str(e), 'Frame count must be 1 or greater') # noqa
raise e
def test_transform_set_select_merge_fade_fails_due_to_transform_set_count_less_than_frame_count(self): # noqa
with deepstar_path():
with mock.patch.dict(os.environ, {'DEBUG_LEVEL': '0'}):
route_handler = VideoCommandLineRouteHandler()
video_0001 = os.path.dirname(os.path.realpath(__file__)) + '/../../support/video_0001.mp4' # noqa
route_handler.insert_file(video_0001)
route_handler.select_extract([1])
route_handler = FrameSetCommandLineRouteHandler()
route_handler.select_extract([1], 'transform_set', {})
route_handler.select_extract([1], 'transform_set', {})
with self.assertRaises(ValueError):
try:
FadeTransformSetSelectMergePlugin().transform_set_select_merge([1, 2], {'frame-count': '6'}) # noqa
except ValueError as e:
self.assertEqual(str(e), 'Both transform sets must be greater than frame count') # noqa
raise e
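# The fade merge exercised by these tests inserts `frame-count` blended frames
# between the two transform sets. The blend presumably follows a simple linear
# ramp; a hypothetical sketch of such weights (illustrative only, not the
# plugin's actual code):

```python
def fade_weights(frame_count):
    """Linear cross-fade weights for the inserted intermediate frames."""
    return [(i + 1) / (frame_count + 1) for i in range(frame_count)]

def blend(a, b, w):
    """Blend two pixel values, weight `w` toward `b`."""
    return (1.0 - w) * a + w * b

weights = fade_weights(2)  # two intermediate frames at 1/3 and 2/3
print(weights)
print(blend(0.0, 255.0, weights[0]))
```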
# ---- connectivity/Fig2D_SFig6A_ob_vs_RandomClawModel.py (bocklab/pn_kc, MIT) ----
# Fig2D, SFig6A (200326, PNKC2019_v9_fig_200313DB-ZZfixedSuppl6B.pptx)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# need ana_all_rd from analysis.py
##----------------------------------------------------
## observed vs. random claw model (precise): maintain the exact number of claws per PN
# Fig2D
ana = ana_all_rd
conn_data = ana.conn_data['glom_kc_in_claw_unit']
ob_conn, glom_prob, glom_idx_ids = get_conn_prob_idx(conn_data)
stat = [get_raw_inputs(i) for i in shuffle_glom_kc_iterate(ob_conn, 1000)]
stat = np.array(stat)
sd = np.nanstd(stat, axis=0)
avg = np.nanmean(stat, axis=0)
ob_ci = get_raw_inputs(ob_conn)
comm_zscore = np.divide(np.subtract(ob_ci, avg), sd)
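# This z-score step is the heart of the comparison: each observed pairwise
# input count is scored against the mean and SD of 1000 shuffled null
# matrices. A self-contained toy version of the same computation (the numbers
# here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
observed = np.array([5.0, 2.0, 9.0])                 # toy "observed" statistics
null = rng.normal([4.0, 2.0, 6.0], 1.0, (1000, 3))   # toy shuffle distribution
z = (observed - np.nanmean(null, axis=0)) / np.nanstd(null, axis=0)
print(z.shape)  # (3,)
```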
# clustering
cm_zs = PairMatrix('', comm_zscore, glom_idx_ids)
reorder_idx = km_cluster(cm_zs.conn)
# reorder_idx = reorder(ClusterOrder0707, glom_idx_ids)
t1_zs = cm_zs.reorder(reorder_idx, return_new=True)
# plotting z score matrix
fig, ax1 = plt.subplots()
t1 = t1_zs
gloms = df_lookup('glom_id',t1.col_ids,'short_glom_name',glom_btn_table)
sns.heatmap(t1.conn, xticklabels=gloms, yticklabels=gloms, ax=ax1, vmin=-8.53, vmax=8.53, cmap="RdBu_r")
ax1.tick_params(bottom=False,labeltop=True, top=True, labelbottom=False)
ax1.tick_params(axis='x',labelrotation=90)
col_list = t1.col_ids
col_colors = df_lookup('short_glom_name', gloms, 'color', tbl)
for x in [ax1.get_xticklabels(), ax1.get_yticklabels()]:
for idx, tick in enumerate(x):
tick.set_color(col_colors[idx])
if col_list[idx] in comm_ids:
tick.set_weight("extra bold")
ax1.set_aspect("equal")
fig.set_size_inches(16,12)
plt.show()
# fig.savefig(save_path + '200228-compare_random_claw_PreciseClawCount_recluster.png', bbox_inches='tight')
# SFig6A
##------------------------------------------
# a single randomized draw of the connectivity (random claw model), tested against the same null model (random claw model)
sfl_conn = shuffle_glom_kc_iterate(ob_conn, 1)[0].copy()
stat = [get_raw_inputs(i) for i in shuffle_glom_kc_iterate(sfl_conn, 1000)]
stat = np.array(stat)
sd = np.nanstd(stat, axis=0)
avg = np.nanmean(stat, axis=0)
ob_ci = get_raw_inputs(sfl_conn)
comm_zscore = np.divide(np.subtract(ob_ci, avg), sd)
# clustering
cm_zs = PairMatrix('', comm_zscore, glom_idx_ids)
reorder_idx = km_cluster(cm_zs.conn)
# reorder_idx = reorder(ClusterOrder0707, glom_idx_ids)
t1_zs = cm_zs.reorder(reorder_idx, return_new=True)
# plotting z score matrix
fig, ax1 = plt.subplots()
t1 = t1_zs
gloms = df_lookup('glom_id', t1.col_ids, 'short_glom_name', glom_btn_table)
sns.heatmap(t1.conn, xticklabels=gloms, yticklabels=gloms, ax=ax1, vmin=-8.53, vmax=8.53, cmap="RdBu_r")
ax1.tick_params(bottom=False,labeltop=True, top=True, labelbottom=False)
ax1.tick_params(axis='x',labelrotation=90)
col_list = t1.col_ids
col_colors = df_lookup('short_glom_name', gloms, 'color', tbl)
for x in [ax1.get_xticklabels(), ax1.get_yticklabels()]:
for idx, tick in enumerate(x):
tick.set_color(col_colors[idx])
if col_list[idx] in comm_ids:
tick.set_weight("extra bold")
ax1.set_aspect("equal")
fig.set_size_inches(16,12)
plt.show()
# fig.savefig(save_path + '200228-compare_random_claw_PreciseClawCount_recluster_RandomClawAgainstRandomClaw.png', bbox_inches='tight')
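The z-score computation above (observed pairwise common inputs against a shuffled null distribution) can be sketched self-contained; `common_inputs` and `shuffle_rows` below are simplified stand-ins for `get_raw_inputs` and `shuffle_glom_kc_iterate` (the real shuffle draws glomeruli weighted by `glom_prob`, while this sketch only preserves per-KC claw counts):

```python
import numpy as np

rng = np.random.default_rng(0)

def common_inputs(conn):
    # For each glomerulus pair, count KCs receiving input from both.
    return conn.T @ conn

def shuffle_rows(conn, rng):
    # Shuffle each KC row independently, preserving its claw count.
    out = conn.copy()
    for row in out:
        rng.shuffle(row)
    return out

ob = rng.integers(0, 2, size=(200, 10))   # KCs x glomeruli, binary connectivity
null = np.array([common_inputs(shuffle_rows(ob, rng)) for _ in range(1000)])
zscore = (common_inputs(ob) - np.nanmean(null, axis=0)) / np.nanstd(null, axis=0)
```

The resulting matrix is glomerulus-by-glomerulus, symmetric, and directly comparable to the `comm_zscore` fed to `PairMatrix` above.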
# old comments
#-----------------------------------------------------------
# copy from connectivity/200224-compare_PreciseOrRatioOutdegree_RandomClawModel.py
| 31.722222 | 135 | 0.721249 | 540 | 3,426 | 4.303704 | 0.294444 | 0.010327 | 0.021515 | 0.025818 | 0.76506 | 0.76506 | 0.753012 | 0.753012 | 0.753012 | 0.753012 | 0 | 0.036854 | 0.11296 | 3,426 | 107 | 136 | 32.018692 | 0.727871 | 0.276708 | 0 | 0.807018 | 0 | 0 | 0.06031 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
577f746437aba539404230eb4ae2c858ffbf0812 | 108 | py | Python | commands/upgrader/commands/say.py | Red-Teapot/mc-commandblock-1.13-update | 64106e1ecb5adca2aff1eeb3a1fcc11486940000 | [
"MIT"
] | 1 | 2020-07-27T16:53:26.000Z | 2020-07-27T16:53:26.000Z | commands/upgrader/commands/say.py | Red-Teapot/mc-commandblock-1.13-update | 64106e1ecb5adca2aff1eeb3a1fcc11486940000 | [
"MIT"
] | 5 | 2019-01-02T14:21:32.000Z | 2019-07-07T05:39:39.000Z | commands/upgrader/commands/say.py | Red-Teapot/mc-commandblock-1.13-update | 64106e1ecb5adca2aff1eeb3a1fcc11486940000 | [
"MIT"
] | null | null | null | # Nothing to do
# TODO Maybe find and upgrade selectors
def upgrade(command: str) -> str:
return command | 27 | 39 | 0.731481 | 16 | 108 | 4.9375 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194444 | 108 | 4 | 40 | 27 | 0.908046 | 0.472222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
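A hedged sketch of the selector upgrade the TODO hints at (`upgrade_selector_args` is a hypothetical helper; the `m=`/`r=`/`rm=` to `gamemode=`/`distance=` mapping follows the 1.13 "flattening", but this ignores quoting and nested NBT):

```python
_GAMEMODES = {'0': 'survival', '1': 'creative', '2': 'adventure', '3': 'spectator'}

def upgrade_selector_args(args: str) -> str:
    # Rewrite pre-1.13 selector arguments,
    # e.g. 'r=10,m=0' -> 'gamemode=survival,distance=..10'.
    out, rmin, rmax = [], '', ''
    for part in filter(None, args.split(',')):
        key, _, val = part.partition('=')
        if key == 'm':
            out.append(f'gamemode={_GAMEMODES.get(val, val)}')
        elif key == 'r':      # old max radius
            rmax = val
        elif key == 'rm':     # old min radius
            rmin = val
        else:
            out.append(part)  # pass unknown arguments through unchanged
    if rmin or rmax:
        out.append(f'distance={rmin}..{rmax}')
    return ','.join(out)
```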
578c756664de9a16f5c0701e3d7e989d30fb9008 | 27 | py | Python | graph/__init__.py | Uason-Chen/SGP-JCA | 4ea9d4c7b049fe729ea98c86263ba208871beaf1 | [
"MIT"
] | 3 | 2020-12-28T05:49:14.000Z | 2021-07-28T07:41:51.000Z | graph/__init__.py | Uason-Chen/SGP-JCA | 4ea9d4c7b049fe729ea98c86263ba208871beaf1 | [
"MIT"
] | null | null | null | graph/__init__.py | Uason-Chen/SGP-JCA | 4ea9d4c7b049fe729ea98c86263ba208871beaf1 | [
"MIT"
] | 1 | 2022-02-22T10:03:17.000Z | 2022-02-22T10:03:17.000Z | from . import ntu_rgb_d_sgp | 27 | 27 | 0.851852 | 6 | 27 | 3.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 27 | 1 | 27 | 27 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
57904c07319961ab0773746f4cdf24f28bb026ca | 4,075 | py | Python | backend/app/tests/test_spaceView.py | ExZos/Mound | 5d1e9ab1149ce7892f0f2d303f22db7d4af0b46e | [
"MIT"
] | null | null | null | backend/app/tests/test_spaceView.py | ExZos/Mound | 5d1e9ab1149ce7892f0f2d303f22db7d4af0b46e | [
"MIT"
] | 3 | 2021-06-09T18:09:07.000Z | 2021-09-30T14:34:52.000Z | backend/app/tests/test_spaceView.py | ExZos/Mound | 5d1e9ab1149ce7892f0f2d303f22db7d4af0b46e | [
"MIT"
] | null | null | null | from django.test import TestCase
from rest_framework import status
from rest_framework.test import APIClient
class getSpaceByNameTests(TestCase):
client = APIClient()
@classmethod
def setUpTestData(self):
self.client.post('/api/spaces/', {'name': 'Home'}, format='json')
def test_get_matching_name(self):
response = self.client.get('/space/getByName/Home/')
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_fail_get_missing_name(self):
response = self.client.get('/space/getByName/Work/')
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
def test_fail_get_matching_name_w_blank(self):
response = self.client.get('/space/getByName/Home /')
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
def test_fail_get_case_sensitive_name(self):
response = self.client.get('/space/getByName/HOme/')
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
def test_fail_get_containing_name(self):
response = self.client.get('/space/getByName/Homestay/')
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
class getUserCountInSpaceForUserTests(TestCase):
client = APIClient()
@classmethod
def setUpTestData(self):
self.client.post('/api/spaces/', {'name': 'Home'}, format='json')
self.client.post('/api/spaces/', {'name': 'Work'}, format='json')
self.client.post('/api/spaces/', {'name': 'School'}, format='json')
self.client.post('/api/users/', {'name': 'Alex', 'space': 1}, format='json')
self.client.post('/api/users/', {'name': 'Bob', 'space': 1}, format='json')
self.client.post('/api/users/', {'name': 'Celine', 'space': 1}, format='json')
self.client.post('/api/users/', {'name': 'Alex', 'space': 2}, format='json')
self.client.post('/api/users/', {'name': 'Alex', 'space': 3}, format='json')
self.client.post('/api/users/', {'name': 'Bob', 'space': 3}, format='json')
def test_get_in_space_w_3_users_for_user(self):
response = self.client.get('/space/getUserCountForUser/1/1/')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertIn('userCount', response.data)
self.assertEqual(response.data['userCount'], 3)
self.assertIn('user', response.data)
self.assertIn('id', response.data['user'])
self.assertEqual(response.data['user']['id'], 1)
def test_get_in_space_w_1_user_for_user(self):
response = self.client.get('/space/getUserCountForUser/2/4/')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertIn('userCount', response.data)
self.assertEqual(response.data['userCount'], 1)
self.assertNotIn('user', response.data)
def test_get_in_space_w_2_users_for_user(self):
response = self.client.get('/space/getUserCountForUser/3/5/')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertIn('userCount', response.data)
self.assertEqual(response.data['userCount'], 2)
self.assertNotIn('user', response.data)
def test_get_in_missing_space_for_user(self):
response = self.client.get('/space/getUserCountForUser/4/1/')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertIn('userCount', response.data)
self.assertEqual(response.data['userCount'], 0)
self.assertNotIn('user', response.data)
def test_get_in_space_for_missing_user(self):
response = self.client.get('/space/getUserCountForUser/1/7/')
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
def test_get_in_missing_space_for_missing_user(self):
response = self.client.get('/space/getUserCountForUser/4/7/')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertIn('userCount', response.data)
self.assertEqual(response.data['userCount'], 0)
self.assertNotIn('user', response.data)
| 46.83908 | 86 | 0.683926 | 514 | 4,075 | 5.217899 | 0.13035 | 0.0783 | 0.145787 | 0.090231 | 0.865772 | 0.858315 | 0.844519 | 0.83557 | 0.778896 | 0.717748 | 0 | 0.017606 | 0.163681 | 4,075 | 86 | 87 | 47.383721 | 0.769366 | 0 | 0 | 0.450704 | 0 | 0 | 0.16908 | 0.073374 | 0 | 0 | 0 | 0 | 0.394366 | 1 | 0.183099 | false | 0 | 0.042254 | 0 | 0.28169 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
57a1de064e3ebc47e08d56effa229105852d47e9 | 30,166 | py | Python | sdss_catl_utils/mocks_manager/tests/test_catl_utils.py | vcalderon2009/sdss_catl_utils | 9bfa3ae062112535aca18967fb5896c29173e3b0 | [
"BSD-3-Clause"
] | null | null | null | sdss_catl_utils/mocks_manager/tests/test_catl_utils.py | vcalderon2009/sdss_catl_utils | 9bfa3ae062112535aca18967fb5896c29173e3b0 | [
"BSD-3-Clause"
] | null | null | null | sdss_catl_utils/mocks_manager/tests/test_catl_utils.py | vcalderon2009/sdss_catl_utils | 9bfa3ae062112535aca18967fb5896c29173e3b0 | [
"BSD-3-Clause"
] | null | null | null | #! /usr/bin/env python
# -*- coding: utf-8 -*-
# Victor Calderon
# Created : 2018-12-24
# Last Modified: 2018-12-24
# Vanderbilt University
from __future__ import absolute_import, division, print_function
__author__ = ['Victor Calderon']
__copyright__ = ["Copyright 2018 Victor Calderon, 2018"]
__email__ = ['victor.calderon@vanderbilt.edu']
__maintainer__ = ['Victor Calderon']
"""
Set of test functions for the `catl_utils` functions
"""
import numpy as np
import pytest
from sdss_catl_utils.mocks_manager import catl_utils
from sdss_catl_utils.custom_exceptions import SDSSCatlUtils_Error
## Functions
#### ----------------- Test `catl_keys` function - Types --------------------##
catl_keys_types_arr = [ ('data' , 'list', 3, list),
('data' , 'dict', 3, dict),
('mocks', 'list', 3, list),
('mocks', 'dict', 3, dict) ]
@pytest.mark.parametrize('catl_kind, return_type, nelem, expected',
catl_keys_types_arr)
def test_catl_keys_types_nelem(catl_kind, return_type, nelem, expected):
"""
Tests the function:
cosmo_utils.mock_catalogues.catl_utils.catl_keys` for input and
output variables.
It verifies the `type` of the output returned by the function.
Parameters
-----------
catl_kind : {'data', 'mocks'} `str`
Type of catalogue to use. This variable is set to `data` by default.
Options:
- `data` : catalogues come from SDSS `real` catalogue
- `mocks` : catalogue come from SDSS `mock` catalogues
return_type : {'list', 'dict'} `str`
Type of output to the be returned. This variable is set to `list`
by default.
Options:
- 'list' : Returns the values as part of a list
- 'dict' : Returns the values as part of a python dictionary
nelem : `int`
Expected number of elements inside the object returned by the function.
expected : `str`
Expected type of element from the `catl_keys` function
"""
## Constants
perf_opt = False
## Running element
output = catl_utils.catl_keys(catl_kind, return_type=return_type,
perf_opt=perf_opt)
## Comparing against `expected` value - Type
assert(isinstance(output, expected))
## Checking number of elements returned
if isinstance(output, list):
assert(len(output) == nelem)
elif isinstance(output, dict):
assert(len(output.keys()) == nelem)
#### ----------------- Test `catl_keys` function - Outputs ------------------##
catl_keys_return_arr = [ 'list' , 'dict']
catl_keys_output_arr = [('data' , False, ['M_h', 'groupid', 'galtype']),
('data' , False, ['M_h', 'groupid', 'galtype']),
('mocks', False, ['M_group', 'groupid', 'g_galtype']),
('mocks', True, ['M_h', 'haloid', 'galtype'])]
@pytest.mark.parametrize('return_type', catl_keys_return_arr)
@pytest.mark.parametrize('catl_kind, perf_opt, expected', catl_keys_output_arr)
def test_catl_keys_outputs(catl_kind, perf_opt, return_type, expected):
"""
Tests the function:
cosmo_utils.mock_catalogues.catl_utils.catl_keys` for input and
output variables.
It verifies the output returned by the function.
Parameters
-----------
catl_kind : {'data', 'mocks'} str
Type of catalogue to use. This variable is set to `data` by default.
Options:
- `data` : catalogues come from SDSS `real` catalogue
- `mocks` : catalogue come from SDSS `mock` catalogues
perf_opt : `bool`, optional
Option for using a `perfect` mock catalogue.
return_type : {'list', 'dict'} str
Type of output to the be returned. This variable is set to `list`
by default.
Options:
- 'list' : Returns the values as part of a list
- 'dict' : Returns the values as part of a python dictionary
expected : str
Expected type of element from the `catl_keys` function
"""
## Running element
output = catl_utils.catl_keys(catl_kind, perf_opt=perf_opt,
return_type=return_type)
## Comparing against `expected` value - Output
if isinstance(output, list):
np.testing.assert_equal(output, expected)
elif isinstance(output, dict):
out_keys = ['gm_key', 'id_key', 'galtype_key']
out_vals = [output[xx] for xx in out_keys]
np.testing.assert_equal(out_vals, expected)
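The parametrized expectations above fully determine the key mapping; a minimal sketch reconstructing `catl_keys` from that table (illustrative only — the real function also validates its inputs):

```python
def catl_keys_sketch(catl_kind, perf_opt=False, return_type='list'):
    # Key table taken from the parametrized expectations above.
    if catl_kind == 'data':
        keys = ['M_h', 'groupid', 'galtype']
    elif perf_opt:                       # perfect mock catalogues
        keys = ['M_h', 'haloid', 'galtype']
    else:                                # regular mock catalogues
        keys = ['M_group', 'groupid', 'g_galtype']
    if return_type == 'dict':
        return dict(zip(['gm_key', 'id_key', 'galtype_key'], keys))
    return keys
```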
#### ----------- Test `catl_keys` function - Errors - `catl_kind` -----------##
catl_keys_catl_kind_arr = [ 'data1', 'mocks1', 'NoMethod']
catl_keys_catl_perf_arr = [ True, False]
catl_keys_return_arr = [ 'list' , 'dict']
@pytest.mark.parametrize('catl_kind', catl_keys_catl_kind_arr)
@pytest.mark.parametrize('return_type', catl_keys_return_arr)
@pytest.mark.parametrize('perf_opt', catl_keys_catl_perf_arr)
def test_catl_keys_catl_kind_errors_1(catl_kind, perf_opt, return_type):
"""
Tests the function:
cosmo_utils.mock_catalogues.catl_utils.catl_keys` for input and
output variables.
It verifies if errors are raised when `catl_kind` is incorrect
Parameters
-----------
catl_kind : {'data', 'mocks'} str
Type of catalogue to use. This variable is set to `data` by default.
Options:
- `data` : catalogues come from SDSS `real` catalogue
- `mocks` : catalogue come from SDSS `mock` catalogues
"""
## Running function
with pytest.raises(SDSSCatlUtils_Error):
output = catl_utils.catl_keys(catl_kind, perf_opt=perf_opt,
return_type=return_type)
#### --------- Test `catl_keys` function - Errors - `return_type` -----------##
catl_keys_catl_kind_arr = ['data', 'mocks']
catl_keys_catl_perf_arr = [True, False]
catl_keys_return_arr = [ 'list_no' , 'dict1', 'NoMethod']
@pytest.mark.parametrize('catl_kind', catl_keys_catl_kind_arr)
@pytest.mark.parametrize('return_type', catl_keys_return_arr)
@pytest.mark.parametrize('perf_opt', catl_keys_catl_perf_arr)
def test_catl_keys_catl_kind_errors_2(catl_kind, perf_opt, return_type):
"""
Tests the function:
cosmo_utils.mock_catalogues.catl_utils.catl_keys` for input and
output variables.
    It verifies if errors are raised when `return_type` is incorrect
Parameters
-----------
catl_kind : {'data', 'mocks'} str
Type of catalogue to use. This variable is set to `data` by default.
Options:
- `data` : catalogues come from SDSS `real` catalogue
- `mocks` : catalogue come from SDSS `mock` catalogues
"""
## Running function
with pytest.raises(SDSSCatlUtils_Error):
output = catl_utils.catl_keys(catl_kind, perf_opt=perf_opt,
return_type=return_type)
#### ----------- Test `catl_keys` function - Errors - `perf_opt` ------------##
catl_keys_catl_kind_arr = ['data', 'mocks']
catl_keys_catl_perf_arr = [ 'NotBoolean', 1, 'mark', 1.2]
catl_keys_return_arr = [ 'list' , 'dict']
@pytest.mark.parametrize('catl_kind', catl_keys_catl_kind_arr)
@pytest.mark.parametrize('return_type', catl_keys_return_arr)
@pytest.mark.parametrize('perf_opt', catl_keys_catl_perf_arr)
def test_catl_keys_catl_kind_errors_3(catl_kind, perf_opt, return_type):
"""
Tests the function:
cosmo_utils.mock_catalogues.catl_utils.catl_keys` for input and
output variables.
    It verifies if errors are raised when `perf_opt` is of the wrong type
Parameters
-----------
catl_kind : {'data', 'mocks'} str
Type of catalogue to use. This variable is set to `data` by default.
Options:
- `data` : catalogues come from SDSS `real` catalogue
- `mocks` : catalogue come from SDSS `mock` catalogues
"""
## Running function
with pytest.raises(TypeError):
output = catl_utils.catl_keys(catl_kind, perf_opt=perf_opt,
return_type=return_type)
#########-------------------------------------------------------------#########
#########-------------------------------------------------------------#########
#### ----------------- Test `catl_keys_prop` function - Types ---------------##
catl_keys_prop_info_arr = ['memb', 'groups']
catl_keys_prop_types_arr = [('data' , 'list', 2, list),
('data' , 'dict', 2, dict),
('mocks', 'list', 2, list),
('mocks', 'dict', 2, dict) ]
@pytest.mark.parametrize('catl_info', catl_keys_prop_info_arr)
@pytest.mark.parametrize('catl_kind, return_type, nelem, expected',
catl_keys_prop_types_arr)
def test_catl_keys_prop_types_nelem(catl_kind, catl_info, return_type, nelem,
expected):
"""
Tests the function:
cosmo_utils.mock_catalogues.catl_utils.catl_keys_prop` for input and
output variables.
It verifies the `type` of the output returned by the function.
Parameters
-----------
catl_kind : {'data', 'mocks'} str
Type of catalogue to use. This variable is set to `data` by default.
Options:
- `data` : catalogues come from SDSS `real` catalogue
- `mocks` : catalogue come from SDSS `mock` catalogues
catl_info : {'memb', 'groups'} str, optional
Option for which kind of catalogues to use.
return_type : {'list', 'dict'} str
Type of output to the be returned. This variable is set to `list`
by default.
Options:
- 'list' : Returns the values as part of a list
- 'dict' : Returns the values as part of a python dictionary
nelem : int
Expected number of elements inside the object returned by the function.
expected : str
Expected type of element from the `catl_keys_prop` function
"""
## Running element
output = catl_utils.catl_keys_prop(catl_kind, catl_info=catl_info,
return_type=return_type)
## Comparing against `expected` value - Type
assert(isinstance(output, expected))
## Checking number of elements returned
if isinstance(output, list):
assert(len(output) == nelem)
elif isinstance(output, dict):
assert(len(output.keys()) == nelem)
#### ----------------- Test `catl_keys_prop` function - Output --------------##
catl_keys_prop_return_arr = [ 'list' , 'dict']
catl_keys_prop_output_arr = [('data' , 'memb', ['logssfr' , 'logMstar_JHU']),
('data' , 'groups' , ['logssfr_tot', 'logMstar_tot']),
('mocks', 'memb', ['logssfr' , 'logMstar']),
('mocks', 'groups' , ['logssfr' , 'logMstar'])]
@pytest.mark.parametrize('return_type', catl_keys_prop_return_arr)
@pytest.mark.parametrize('catl_kind, catl_info, expected', catl_keys_prop_output_arr)
def test_catl_keys_prop_outputs(catl_kind, catl_info, return_type, expected):
"""
Tests the function:
cosmo_utils.mock_catalogues.catl_utils.catl_keys_prop` for input and
output variables.
It verifies the output returned by the function.
Parameters
-----------
catl_kind : {'data', 'mocks'} str
Type of catalogue to use. This variable is set to `data` by default.
Options:
- `data` : catalogues come from SDSS `real` catalogue
- `mocks` : catalogue come from SDSS `mock` catalogues
catl_info : {'memb', 'groups'} str, optional
Option for which kind of catalogues to use.
return_type : {'list', 'dict'} str
Type of output to the be returned. This variable is set to `list`
by default.
Options:
- 'list' : Returns the values as part of a list
- 'dict' : Returns the values as part of a python dictionary
expected : str
Expected type of element from the `catl_keys_prop` function
"""
## Running element
output = catl_utils.catl_keys_prop(catl_kind, catl_info=catl_info,
return_type=return_type)
## Comparing against `expected` value - Output
if isinstance(output, list):
np.testing.assert_equal(output, expected)
elif isinstance(output, dict):
out_keys = ['logssfr_key', 'logmstar_key']
out_vals = [output[xx] for xx in out_keys]
np.testing.assert_equal(out_vals, expected)
#### -------- Test `catl_keys_prop` function - Errors - `catl_kind` ---------##
catl_keys_prop_catl_kind_arr = [ 'data1', 'mocks1', 'NoMethod']
catl_keys_prop_return_arr = [ 'list' , 'dict']
catl_keys_prop_catl_info_arr = [ 'memb', 'groups']
@pytest.mark.parametrize('catl_kind', catl_keys_prop_catl_kind_arr)
@pytest.mark.parametrize('return_type', catl_keys_prop_return_arr)
@pytest.mark.parametrize('catl_info', catl_keys_prop_catl_info_arr)
def test_catl_keys_prop_catl_kind_errors(catl_kind, catl_info, return_type):
"""
Tests the function:
cosmo_utils.mock_catalogues.catl_utils.catl_keys_prop` for input and
output variables.
It verifies if errors are raised when `catl_kind` is incorrect
Parameters
-----------
catl_kind : {'data', 'mocks'} str
Type of catalogue to use. This variable is set to `data` by default.
Options:
- `data` : catalogues come from SDSS `real` catalogue
- `mocks` : catalogue come from SDSS `mock` catalogues
"""
## Running function
with pytest.raises(SDSSCatlUtils_Error):
output = catl_utils.catl_keys_prop(catl_kind, catl_info=catl_info,
return_type=return_type)
#### -------- Test `catl_keys_prop` function - Errors - `catl_info` ---------##
catl_keys_prop_catl_kind_arr = [ 'data', 'mocks']
catl_keys_prop_return_arr = [ 'list' , 'dict']
catl_keys_prop_catl_info_arr = [ 'members_no', 'groups_Invalid', 1, 1.2]
@pytest.mark.parametrize('catl_kind', catl_keys_prop_catl_kind_arr)
@pytest.mark.parametrize('return_type', catl_keys_prop_return_arr)
@pytest.mark.parametrize('catl_info', catl_keys_prop_catl_info_arr)
def test_catl_keys_prop_catl_info_errors(catl_kind, catl_info, return_type):
"""
Tests the function:
cosmo_utils.mock_catalogues.catl_utils.catl_keys_prop` for input and
output variables.
It verifies if errors are raised when `catl_info` is incorrect
Parameters
-----------
catl_kind : {'data', 'mocks'} str
Type of catalogue to use. This variable is set to `data` by default.
Options:
- `data` : catalogues come from SDSS `real` catalogue
- `mocks` : catalogue come from SDSS `mock` catalogues
"""
## Running function
with pytest.raises(SDSSCatlUtils_Error):
output = catl_utils.catl_keys_prop(catl_kind, catl_info=catl_info,
return_type=return_type)
#### ------- Test `catl_keys_prop` function - Errors - `return_type` --------##
catl_keys_prop_catl_kind_arr = [ 'data', 'mocks']
catl_keys_prop_return_arr = [ 'list_no' , 'dict1', 'NoMethod']
catl_keys_prop_catl_info_arr = [ 'memb', 'groups']
@pytest.mark.parametrize('catl_kind', catl_keys_prop_catl_kind_arr)
@pytest.mark.parametrize('return_type', catl_keys_prop_return_arr)
@pytest.mark.parametrize('catl_info', catl_keys_prop_catl_info_arr)
def test_catl_keys_prop_return_type_errors(catl_kind, catl_info, return_type):
"""
Tests the function:
cosmo_utils.mock_catalogues.catl_utils.catl_keys_prop` for input and
output variables.
It verifies if errors are raised when `return_type` is incorrect
Parameters
-----------
catl_kind : {'data', 'mocks'} str
Type of catalogue to use. This variable is set to `data` by default.
Options:
- `data` : catalogues come from SDSS `real` catalogue
- `mocks` : catalogue come from SDSS `mock` catalogues
"""
## Running function
with pytest.raises(SDSSCatlUtils_Error):
output = catl_utils.catl_keys_prop(catl_kind, catl_info=catl_info,
return_type=return_type)
#########-------------------------------------------------------------#########
#########-------------------------------------------------------------#########
#### --------------- Test `check_input_params` function - Types -------------##
input_arr = [ ('catl_kind', 'data'),
('hod_n', 1),
('halotype', 'fof'),
('clf_method', 1),
('clf_seed', 1234),
('dv', 1.),
('sample', '19'),
('type_am', 'mstar'),
('cosmo_choice', 'LasDamas'),
('perf_opt', True),
('remove_files', True),
('environ_name', 'test_name')]
@pytest.mark.parametrize('var_name, input_var', input_arr)
def test_check_input_params_types(input_var, var_name):
"""
Checks the function `~sdss_catl_utils.mocks_manager.catl_utils.check_input_params`
for input parameters.
Parameters
------------
input_var : `int`, `float`, `bool`, `str`
Input variable to be evaluated.
var_name : `str`
Name of the input parameter being evaluated. This variable name
must correspond to one of the keys in the `type` or `vals`
dictionaries.
"""
check_type = 'type'
# Running function
catl_utils.check_input_params(input_var, var_name, check_type=check_type)
#### --------------- Test `check_input_params` function - Values ------------##
input_arr = [ ('catl_kind', 'data'),
('catl_kind', 'mocks'),
('hod_n', 1),
('hod_n', 6),
('hod_n', 9),
('halotype', 'fof'),
('halotype', 'so'),
('clf_method', 1),
('clf_method', 2),
('clf_method', 3),
('sample', '19'),
('sample', '20'),
('sample', '21'),
('type_am', 'mstar'),
('type_am', 'mr'),
('cosmo_choice', 'LasDamas'),
('cosmo_choice', 'Planck')]
@pytest.mark.parametrize('var_name, input_var', input_arr)
def test_check_input_params_vals(input_var, var_name):
"""
Checks the function `~sdss_catl_utils.mocks_manager.catl_utils.check_input_params`
for input parameters.
Parameters
------------
input_var : `int`, `float`, `bool`, `str`
Input variable to be evaluated.
var_name : `str`
Name of the input parameter being evaluated. This variable name
must correspond to one of the keys in the `type` or `vals`
dictionaries.
"""
check_type = 'vals'
# Running function
catl_utils.check_input_params(input_var, var_name, check_type=check_type)
#### ---------- Test `check_input_params` function - Error - Type -----------##
input_arr = [ ('catl_kind', 1),
('hod_n', 'test'),
('halotype', None),
('clf_method', 'test'),
('clf_seed', '10'),
('dv', '1000'),
('sample', 19),
('type_am', 10),
('cosmo_choice', 123),
('perf_opt', 'None'),
('remove_files', 'True'),
('environ_name', 1)]
@pytest.mark.parametrize('var_name, input_var', input_arr)
def test_check_input_params_err_type(input_var, var_name):
"""
Checks the function `~sdss_catl_utils.mocks_manager.catl_utils.check_input_params`
for input parameters.
Parameters
------------
input_var : `int`, `float`, `bool`, `str`
Input variable to be evaluated.
var_name : `str`
Name of the input parameter being evaluated. This variable name
must correspond to one of the keys in the `type` or `vals`
dictionaries.
"""
check_type = 'type'
# Running function
with pytest.raises(TypeError):
catl_utils.check_input_params(input_var, var_name,
check_type=check_type)
#### ---------- Test `check_input_params` function - Errors - Values --------##
input_arr = [ ('catl_kind', 'data_no'),
('catl_kind', 'mocks_test'),
('hod_n', 11),
('hod_n', 63),
('hod_n', 103),
('halotype', 'fof_alt'),
('halotype', 'sos'),
('clf_method', 12),
('clf_method', 23),
('clf_method', 43),
('sample', '22'),
('sample', '34'),
('sample', '10'),
('type_am', '1_mstar'),
('type_am', '2_mr'),
('cosmo_choice', 'LasDamas_old'),
('cosmo_choice', 'Planck_new')]
@pytest.mark.parametrize('var_name, input_var', input_arr)
def test_check_input_params_err_vals(input_var, var_name):
"""
Checks the function `~sdss_catl_utils.mocks_manager.catl_utils.check_input_params`
for input parameters.
Parameters
------------
input_var : `int`, `float`, `bool`, `str`
Input variable to be evaluated.
var_name : `str`
Name of the input parameter being evaluated. This variable name
must correspond to one of the keys in the `type` or `vals`
dictionaries.
"""
check_type = 'vals'
# Running function
with pytest.raises(ValueError):
catl_utils.check_input_params(input_var, var_name,
check_type=check_type)
#### ---------- Test `check_input_params` function - Errors - KeyError --------##
input_arr = [ ('catl_kind_1', 'data_no'),
('hod_n_test', 103),
('_test_halotype', 'sos'),
('1123_clf_method', 43),
('_test_sample', '34'),
('type_type_am', '2_mr'),
('cosmo_choice_other_test', 'Planck_new')]
@pytest.mark.parametrize('var_name, input_var', input_arr)
def test_check_input_params_err_key(input_var, var_name):
"""
Checks the function `~sdss_catl_utils.mocks_manager.catl_utils.check_input_params`
for input parameters.
Parameters
------------
input_var : `int`, `float`, `bool`, `str`
Input variable to be evaluated.
var_name : `str`
Name of the input parameter being evaluated. This variable name
must correspond to one of the keys in the `type` or `vals`
dictionaries.
"""
check_type = 'vals'
# Running function
with pytest.raises(KeyError):
catl_utils.check_input_params(input_var, var_name,
check_type=check_type)
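The three error tests above exercise a type table, a values table, and a key lookup; a hedged sketch of that validator pattern (the dictionary entries are illustrative subsets — the real tables live in `catl_utils.check_input_params`):

```python
def check_input_params_sketch(input_var, var_name, check_type='type'):
    # Illustrative subsets of the parameter tables exercised above.
    types = {'catl_kind': str, 'hod_n': int, 'clf_method': int,
             'sample': str, 'perf_opt': bool}
    vals = {'catl_kind': ('data', 'mocks'), 'hod_n': tuple(range(10)),
            'clf_method': (1, 2, 3), 'sample': ('19', '20', '21')}
    table = types if check_type == 'type' else vals
    if var_name not in table:
        raise KeyError(f'`{var_name}` is not a recognized parameter!')
    if check_type == 'type':
        if not isinstance(input_var, table[var_name]):
            raise TypeError(f'`{var_name}` must be of type {table[var_name]}!')
    elif input_var not in table[var_name]:
        raise ValueError(f'Invalid value for `{var_name}`: `{input_var}`!')
```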
#########-------------------------------------------------------------#########
#########-------------------------------------------------------------#########
#### --------------- Test `catl_prefix_path` function - Types -------------##
input_arr = [
('data', 0, 'fof', 1, 1235, '19', 'mr', False, 'data/mr/Mr19'),
('mocks', 0, 'so', 1, 0, '20', 'mr', True, 'mocks/halos_so/dv_1.0/hod_model_0/clf_seed_0/clf_method_1/sigma_c_0.1417/mr/Mr20'),
('mocks', 6, 'so', 3, 10, '19', 'mr', False, 'mocks/halos_so/dv_1.0/hod_model_6/clf_seed_10/clf_method_3/sigma_c_0.1417/mr/Mr19')]
pytest_str = 'catl_kind, hod_n, halotype, clf_method, clf_seed, sample, '
pytest_str += 'type_am, perf_opt, expected'
@pytest.mark.parametrize(pytest_str, input_arr)
def test_catl_prefix_path_inputs(catl_kind, hod_n, halotype, clf_method,
clf_seed, sample, type_am, perf_opt, expected):
"""
Checks the function
`~sdss_catl_utils.mocks_manager.catl_utils.catl_prefix_path` for input
parameters.
Parameters
-------------
catl_kind : {``data``, ``mocks``} `str`
Kind of catalogues to download. This variable is set to
``mocks`` by default.
Options:
- ``data``: Downloads the SDSS DR7 real catalogues.
- ``mocks``: Downloads the synthetic catalogues of SDSS DR7.
hod_n : `int`
Number of the HOD model to use.
halotype : {'so', 'fof'}, `str`
Type of dark matter definition to use.
Options:
- ``so``: Spherical Overdensity halo definition.
- ``fof``: Friends-of-Friends halo definition.
clf_method : {1, 2, 3}, `int`
Method for assigning galaxy properties to mock galaxies.
This variable dictates how galaxies are assigned
luminosities or stellar masses based on their galaxy type
and host halo's mass.
Options:
- ``1``: Independent assignment of (g-r) colour, sersic, and specific star formation rate (`logssfr`)
- ``2``: (g-r) colour dictates active/passive designation and draws values independently.
- ``3``: (g-r) colour dictates active/passive designation, and assigns other galaxy properties for that given galaxy.
clf_seed : `int`
Value of the random seed used for the conditional luminosity function.
sample : {'19', '20', '21'}, `str`
Luminosity of the SDSS volume-limited sample to analyze.
Options:
- ``'19'``: :math:`M_r = 19` volume-limited sample
- ``'20'``: :math:`M_r = 20` volume-limited sample
- ``'21'``: :math:`M_r = 21` volume-limited sample
type_am : {'mr', 'mstar'}, `str`
Type of Abundance matching used in the catalogue. This
Options:
- ``'mr'``: Luminosity-based abundance matching used
- ``'mstar'``: Stellar-mass-based abundance matching used.
perf_opt : `bool`
If `True`, it chooses to analyze the ``perfect`` version of
the synthetic galaxy/group galaxy catalogues. Otherwise,
it downloads the catalogues with group-finding errors
included.
expected : `str`
Expected `path` to the set of catalogues
"""
# Output path from function `catl_prefix_path`
output_path = catl_utils.catl_prefix_path( catl_kind=catl_kind,
hod_n=hod_n,
halotype=halotype,
clf_method=clf_method,
clf_seed=clf_seed,
sample=sample,
type_am=type_am,
perf_opt=perf_opt)
# Comparing expected with output
assert output_path == expected
#### --------------- Test `catl_prefix_str` function - Types -------------##
input_arr = [
('data', 0, 'fof', 1, 1235, '19', 'mr', False, 'data_Mr19_am_mr'),
('mocks', 0, 'so', 1, 0, '20', 'mr', True, 'Mr20_halo_so_dv_1.0_hn_0_clfs_0_clfm_1_sigclf_0.1417_am_mr_pf_True'),
('mocks', 6, 'so', 3, 10, '19', 'mr', False, 'Mr19_halo_so_dv_1.0_hn_6_clfs_10_clfm_3_sigclf_0.1417_am_mr_pf_False')]
pytest_str = 'catl_kind, hod_n, halotype, clf_method, clf_seed, sample, '
pytest_str += 'type_am, perf_opt, expected'
@pytest.mark.parametrize(pytest_str, input_arr)
def test_catl_prefix_path_inputs(catl_kind, hod_n, halotype, clf_method,
clf_seed, sample, type_am, perf_opt, expected):
"""
Checks the function
`~sdss_catl_utils.mocks_manager.catl_utils.catl_prefix_str` for input
parameters.
Parameters
-------------
catl_kind : {``data``, ``mocks``} `str`
Kind of catalogues to download. This variable is set to
``mocks`` by default.
Options:
- ``data``: Downloads the SDSS DR7 real catalogues.
- ``mocks``: Downloads the synthetic catalogues of SDSS DR7.
hod_n : `int`
Number of the HOD model to use.
halotype : {'so', 'fof'}, `str`
Type of dark matter definition to use.
Options:
- ``so``: Spherical Overdensity halo definition.
- ``fof``: Friends-of-Friends halo definition.
clf_method : {1, 2, 3}, `int`
Method for assigning galaxy properties to mock galaxies.
This variable dictates how galaxies are assigned
luminosities or stellar masses based on their galaxy type
and host halo's mass.
Options:
- ``1``: Independent assignment of (g-r) colour, sersic, and specific star formation rate (`logssfr`)
- ``2``: (g-r) colour dictates active/passive designation and draws values independently.
- ``3``: (g-r) colour dictates active/passive designation, and assigns other galaxy properties for that given galaxy.
clf_seed : `int`
Value of the random seed used for the conditional luminosity function.
sample : {'19', '20', '21'}, `str`
Luminosity of the SDSS volume-limited sample to analyze.
Options:
- ``'19'``: :math:`M_r = 19` volume-limited sample
- ``'20'``: :math:`M_r = 20` volume-limited sample
- ``'21'``: :math:`M_r = 21` volume-limited sample
type_am : {'mr', 'mstar'}, `str`
Type of abundance matching used in the catalogue.
Options:
- ``'mr'``: Luminosity-based abundance matching used
- ``'mstar'``: Stellar-mass-based abundance matching used.
perf_opt : `bool`
If `True`, it chooses to analyze the ``perfect`` version of
the synthetic galaxy/group catalogues. Otherwise,
it downloads the catalogues with group-finding errors
included.
expected : `str`
Expected `path` to the set of catalogues
"""
# Output path from function `catl_prefix_str`
output_path = catl_utils.catl_prefix_str( catl_kind=catl_kind,
hod_n=hod_n,
halotype=halotype,
clf_method=clf_method,
clf_seed=clf_seed,
sample=sample,
type_am=type_am,
perf_opt=perf_opt)
# Comparing expected with output
assert output_path == expected
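For reference, the expected strings in `input_arr` above follow a fixed naming scheme. A rough sketch of how such a mock-catalogue prefix is assembled (hypothetical helper, not the real `catl_prefix_str`):

```python
# Hypothetical sketch of the mock-catalogue naming scheme seen in
# `input_arr`; NOT the real `catl_prefix_str` implementation.
def mock_prefix(sample, halotype, hod_n, clf_seed, clf_method, type_am, perf_opt):
    return ('Mr{s}_halo_{h}_dv_1.0_hn_{n}_clfs_{cs}_clfm_{cm}'
            '_sigclf_0.1417_am_{am}_pf_{pf}').format(
                s=sample, h=halotype, n=hod_n, cs=clf_seed,
                cm=clf_method, am=type_am, pf=perf_opt)

print(mock_prefix('20', 'so', 0, 0, 1, 'mr', True))
# Mr20_halo_so_dv_1.0_hn_0_clfs_0_clfm_1_sigclf_0.1417_am_mr_pf_True
```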
| 37.754693 | 134 | 0.598754 | 3,663 | 30,166 | 4.677314 | 0.081354 | 0.044359 | 0.03432 | 0.019845 | 0.895757 | 0.879589 | 0.849647 | 0.846962 | 0.829977 | 0.825833 | 0 | 0.011697 | 0.257475 | 30,166 | 798 | 135 | 37.802005 | 0.753203 | 0.490718 | 0 | 0.546154 | 0 | 0.007692 | 0.176169 | 0.025502 | 0 | 0 | 0 | 0 | 0.046154 | 1 | 0.065385 | false | 0 | 0.019231 | 0 | 0.084615 | 0.003846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a4bc871cfd7cbd93c5b4270cb06ca6e0badc9a2e | 96 | py | Python | venv/lib/python3.8/site-packages/jeepney/tests/test_bus.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/jeepney/tests/test_bus.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/jeepney/tests/test_bus.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/02/93/b1/77701c610075e06d57b22146058b50e3148ac39db2f58be63f3ef4d207 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.479167 | 0 | 96 | 1 | 96 | 96 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a4d5074b4bb5e19634af6baa20c1870313abe927 | 2,601 | py | Python | tf_utils.py | BarracudaPff/code-golf-data-pythpn | 42e8858c2ebc6a061012bcadb167d29cebb85c5e | [
"MIT"
] | null | null | null | tf_utils.py | BarracudaPff/code-golf-data-pythpn | 42e8858c2ebc6a061012bcadb167d29cebb85c5e | [
"MIT"
] | null | null | null | tf_utils.py | BarracudaPff/code-golf-data-pythpn | 42e8858c2ebc6a061012bcadb167d29cebb85c5e | [
"MIT"
] | null | null | null | import tensorflow as tf
def dense_layer(inputs, output_units, bias=True, activation=None, batch_norm=None, dropout=None, scope="dense-layer", reuse=False):
"""
Applies a dense layer to a 2D tensor of shape [batch_size, input_units]
to produce a tensor of shape [batch_size, output_units].
Args:
inputs: Tensor of shape [batch size, input_units].
output_units: Number of output units.
activation: activation function.
dropout: dropout keep prob.
Returns:
Tensor of shape [batch size, output_units].
"""
with tf.variable_scope(scope, reuse=reuse):
W = tf.get_variable(name="weights", initializer=tf.contrib.layers.variance_scaling_initializer(), shape=[shape(inputs, -1), output_units])
z = tf.matmul(inputs, W)
if bias:
b = tf.get_variable(name="biases", initializer=tf.constant_initializer(), shape=[output_units])
z = z + b
if batch_norm is not None:
z = tf.layers.batch_normalization(z, training=batch_norm, reuse=reuse)
z = activation(z) if activation else z
z = tf.nn.dropout(z, dropout) if dropout is not None else z
return z
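Stripped of the TensorFlow plumbing, `dense_layer` computes `activation(inputs @ W + b)`, optionally followed by batch norm and dropout. A NumPy stand-in of just that core computation (ReLU chosen here as an example activation):

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.standard_normal((4, 3))   # [batch_size, input_units]
W = rng.standard_normal((3, 5))        # [input_units, output_units]
b = np.zeros(5)                        # bias, as in the `bias=True` branch

z = np.maximum(inputs @ W + b, 0.0)    # ReLU in place of `activation`
print(z.shape)                         # (4, 5)
```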
def time_distributed_dense_layer(inputs, output_units, bias=True, activation=None, batch_norm=None, dropout=None, scope="time-distributed-dense-layer", reuse=False):
"""
Applies a shared dense layer to each timestep of a tensor of shape
[batch_size, max_seq_len, input_units] to produce a tensor of shape
[batch_size, max_seq_len, output_units].
Args:
inputs: Tensor of shape [batch size, max sequence length, ...].
output_units: Number of output units.
activation: activation function.
dropout: dropout keep prob.
Returns:
Tensor of shape [batch size, max sequence length, output_units].
"""
with tf.variable_scope(scope, reuse=reuse):
W = tf.get_variable(name="weights", initializer=tf.contrib.layers.variance_scaling_initializer(), shape=[shape(inputs, -1), output_units])
z = tf.einsum("ijk,kl->ijl", inputs, W)
if bias:
b = tf.get_variable(name="biases", initializer=tf.constant_initializer(), shape=[output_units])
z = z + b
if batch_norm is not None:
z = tf.layers.batch_normalization(z, training=batch_norm, reuse=reuse)
z = activation(z) if activation else z
z = tf.nn.dropout(z, dropout) if dropout is not None else z
return z
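The `'ijk,kl->ijl'` einsum applies one shared weight matrix to every timestep, which is equivalent to a broadcast matrix product. A small NumPy check of that assumed equivalence:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((2, 7, 3))   # [batch, max_seq_len, input_units]
W = rng.standard_normal((3, 5))      # weights shared across timesteps

z = np.einsum('ijk,kl->ijl', x, W)
print(z.shape)                       # (2, 7, 5)
print(np.allclose(z, x @ W))         # True: same as broadcasting matmul
```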
def shape(tensor, dim=None):
"""Get tensor shape/dimension as list/int"""
if dim is None:
return tensor.shape.as_list()
else:
return tensor.shape.as_list()[dim]
def rank(tensor):
"""Get tensor rank as python list"""
return len(tensor.shape.as_list()) | 45.631579 | 165 | 0.711649 | 391 | 2,601 | 4.606138 | 0.207161 | 0.085508 | 0.057746 | 0.079956 | 0.860078 | 0.834536 | 0.803443 | 0.785675 | 0.785675 | 0.727374 | 0 | 0.001397 | 0.174164 | 2,601 | 57 | 166 | 45.631579 | 0.837058 | 0.332949 | 0 | 0.625 | 0 | 0 | 0.046712 | 0.01721 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.03125 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a4d8779a3ad2642f3e0a2d0457febe355dcdb37b | 45 | py | Python | projects/thesis/continuous/custom/modeling/backbone/custom_model/resnet/__init__.py | cpark90/rrrcnn | ba66cc391265be76fa3896b66459ff7241b47972 | [
"Apache-2.0"
] | null | null | null | projects/thesis/continuous/custom/modeling/backbone/custom_model/resnet/__init__.py | cpark90/rrrcnn | ba66cc391265be76fa3896b66459ff7241b47972 | [
"Apache-2.0"
] | null | null | null | projects/thesis/continuous/custom/modeling/backbone/custom_model/resnet/__init__.py | cpark90/rrrcnn | ba66cc391265be76fa3896b66459ff7241b47972 | [
"Apache-2.0"
] | null | null | null | from .stem import *
from .bottleneck import * | 22.5 | 25 | 0.755556 | 6 | 45 | 5.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155556 | 45 | 2 | 25 | 22.5 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a4f2f9cd6fa9a1d76dea512e0b7cab5a5f88c36d | 27 | py | Python | src/thornfield/__init__.py | drorvinkler/thornfield | 3c5bb8afaa96097bc71cccb119394a0f351d828f | [
"MIT"
] | 2 | 2020-11-24T13:27:14.000Z | 2020-11-24T13:29:40.000Z | src/thornfield/__init__.py | drorvinkler/thornfield | 3c5bb8afaa96097bc71cccb119394a0f351d828f | [
"MIT"
] | 1 | 2020-11-24T13:33:45.000Z | 2020-11-24T15:10:41.000Z | src/thornfield/__init__.py | drorvinkler/thornfield | 3c5bb8afaa96097bc71cccb119394a0f351d828f | [
"MIT"
] | null | null | null | from .cacher import Cacher
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a4fd2192768e9e458c6986528bcfe50a234f421a | 9,637 | py | Python | pytistory/api/post.py | JeongUkJae/pytistory | 27097b24bcea93240914c0dd23716a69f9ae77bc | [
"MIT"
] | 9 | 2018-02-08T14:31:53.000Z | 2018-10-29T14:07:16.000Z | pytistory/api/post.py | jeongukjae/pytistory | 27097b24bcea93240914c0dd23716a69f9ae77bc | [
"MIT"
] | 3 | 2019-08-21T15:38:37.000Z | 2019-08-30T00:27:36.000Z | pytistory/api/post.py | JeongUkJae/pytistory | 27097b24bcea93240914c0dd23716a69f9ae77bc | [
"MIT"
] | 2 | 2019-06-19T07:20:52.000Z | 2022-02-05T15:41:39.000Z | # -*- coding: utf8 -*-
"""Post 관련 API Client 구현입니다.
"""
import datetime
from .base_api import BaseAPI
class Post(BaseAPI):
"""Post 관련 API Client 구현입니다.
다음과 같은 API Client가 구현되어 있습니다.
- post/list
최근 게시물 목록을 가져올 수 있는 API입니다.
- post/write
게시글을 작성할 수 있는 API입니다.
- post/modify
작성된 게시글을 수정할 수 있는 API입니다.
- post/read
단일 게시글을 읽을 수 있는 API입니다.
- post/attach
파일을 첨부 할 수 있는 API입니다.
- post/delete
단일 게시글을 삭제할 수 있는 API입니다.
"""
# pylint: disable=too-many-arguments
kind = 'post'
def list(self, blog_name=None, target_url=None):
"""post/list API 구현입니다.
최근 게시물 목록을 가져올 수 있는 API입니다. 해당 API에 관한 정보는
`링크 <http://www.tistory.com/guide/api/post.php#post-list>`_ 를 통해
살펴보실 수 있습니다.
:param blog_name: 블로그 명입니다., defaults to None
:type blog_name: str, optional
:param target_url: 블로그의 url입니다. deprecated된 옵션입니다., defaults to None
:type target_url: str, optional
:raise NoSpecifiedBlog: 블로그 정보를 설정할 수 없을 때 일어납니다.
:raise TypeError: 인자의 타입이 잘못되었을 때 일어납니다.
:return:
`최근 게시글 목록 API <http://www.tistory.com/guide/api/post.php#post-list>`_ 링크에서
어떤 데이터가 넘어오는 지 알 수 있습니다.
:rtype: dict
"""
url = self._get_url(self.kind, 'list')
params = self._get_default_params()
self._set_blog_name(params, blog_name, target_url)
response = self._perform('GET', url, params=params)
return response
def write(self, title, blog_name=None, target_url=None, visibility=0,
published=None, category=0, content=None, slogan=None, tag=None):
"""post/list API 구현입니다.
게시글을 작성할 수 있는 API입니다. 해당 API에 관한 정보는
`링크 <http://www.tistory.com/guide/api/post.php#post-write>`_ 를 통해
살펴보실 수 있습니다.
:param title: 포스트 제목입니다.
:type title: str
:param blog_name: 블로그 명입니다., defaults to None
:type blog_name: str, optional
:param target_url: 블로그의 url입니다. deprecated된 옵션입니다., defaults to None
:type target_url: str, optional
:param visibility:
- 0: 비공개
- 1: 보호
- 2: 공개
- 3: 발행
defaults to 0
:type visibility: int, optional
:param published: 발행 시간. 만약 설정시 예약 발행이 됨., defaults to None
:type published: :class:`datetime.datetime`, optional
:param category: 0은 분류없음. 값 설정시 카테고리 설정, defaults to 0
:type category: int, optional
:param content: 글 내용, defaults to None
:type content: str, optional
:param slogan: 문자 주소. 이는 아마 블로그 주소 형식을 문자로 설정했을 때의 값인 듯 함., defaults to None
:type slogan: str, optional
:param tag: 게시글에 태그를 설정합니다, defaults to None
:type tag: list, optional
:raise NoSpecifiedBlog: 블로그 정보를 설정할 수 없을 때 일어납니다.
:raise TypeError: 인자의 타입이 잘못되었을 때 일어납니다.
:return:
`최근 게시글 목록 API <http://www.tistory.com/guide/api/post.php#post-write>`_ 링크에서
어떤 데이터가 넘어오는 지 알 수 있습니다.
:rtype: dict
"""
url = self._get_url(self.kind, 'write')
params = self._get_default_params()
self._set_blog_name(params, blog_name, target_url)
if isinstance(visibility, int) and visibility >= 0 and visibility <= 3:
params['visibility'] = visibility
else:
raise TypeError('A visibility must be 0, 1, 2, or 3.')
if published:
if isinstance(published, datetime.datetime):
params['published'] = published.timestamp()
else:
raise TypeError('A published must be a datetime object')
# dangerous-default-value
if tag is None:
tag = []
if isinstance(tag, list):
params['tag'] = ','.join(tag)
else:
raise TypeError('A tag must be a list.')
params['title'] = title
params['category'] = category
params['content'] = content
params['slogan'] = slogan
response = self._perform('POST', url, data=params)
return response
def modify(self, title, post_id, blog_name=None, target_url=None, visibility=0,
category=0, content=None, slogan=None, tag=None):
"""post/modify API 구현입니다.
작성된 게시글을 수정할 수 있는 API입니다. 해당 API에 관한 정보는
`링크 <http://www.tistory.com/guide/api/post.php#post-modify>`_ 를 통해
살펴보실 수 있습니다.
:param title: 포스트 제목입니다.
:type title: str
:param post_id: 포스트 고유번호입니다.
:type title: int
:param blog_name: 블로그 명입니다., defaults to None
:type blog_name: str, optional
:param target_url: 블로그의 url입니다. deprecated된 옵션입니다., defaults to None
:type target_url: str, optional
:param visibility:
- 0: 비공개
- 1: 보호
- 2: 공개
- 3: 발행
defaults to 0
:type visibility: int, optional
:param category: 0은 분류없음. 값 설정시 카테고리 설정, defaults to 0
:type category: int, optional
:param content: 글 내용, defaults to None
:type content: str, optional
:param slogan: 문자 주소. 이는 아마 블로그 주소 형식을 문자로 설정했을 때의 값인 듯 함., defaults to None
:type slogan: str, optional
:param tag: 게시글에 태그를 설정합니다, defaults to None
:type tag: list, optional
:raise NoSpecifiedBlog: 블로그 정보를 설정할 수 없을 때 일어납니다.
:raise TypeError: 인자의 타입이 잘못되었을 때 일어납니다.
:return:
`최근 게시글 목록 API <http://www.tistory.com/guide/api/post.php#post-modify>`_ 링크에서
어떤 데이터가 넘어오는 지 알 수 있습니다.
:rtype: dict
"""
url = self._get_url(self.kind, 'modify')
params = self._get_default_params()
self._set_blog_name(params, blog_name, target_url)
if isinstance(visibility, int) and visibility >= 0 and visibility <= 3:
params['visibility'] = visibility
else:
raise TypeError('A visibility must be 0, 1, 2, or 3.')
if tag is None:
tag = []
if isinstance(tag, list):
params['tag'] = ','.join(tag)
else:
raise TypeError('A tag must be a list.')
params['title'] = title
params['postId'] = post_id
params['category'] = category
params['content'] = content
params['slogan'] = slogan
response = self._perform('POST', url, data=params)
return response
def read(self, post_id, blog_name=None, target_url=None):
"""post/read API 구현입니다.
단일 게시글을 읽을 수 있는 API입니다. 해당 API에 관한 정보는
`링크 <http://www.tistory.com/guide/api/post.php#post-read>`_ 를 통해
살펴보실 수 있습니다.
:param post_id: 게시글 번호
:type post_id: int
:param blog_name: 블로그 명입니다., defaults to None
:type blog_name: str, optional
:param target_url: 블로그의 url입니다. deprecated된 옵션입니다., defaults to None
:type target_url: str, optional
:raises NoSpecifiedBlogError: 해당하는 블로그가 존재하지 않을 때 일어나는 에러입니다.
:return:
`글 읽기 API <http://www.tistory.com/guide/api/post.php#post-read>`_ 링크에서
어떤 데이터가 넘어오는 지 알 수 있습니다.
:rtype: dict
"""
url = self._get_url(self.kind, 'read')
params = self._get_default_params()
self._set_blog_name(params, blog_name, target_url)
params['postId'] = post_id
response = self._perform('GET', url, params=params)
return response
def attach(self, uploaded_file, blog_name=None, target_url=None):
"""post/attach API 구현입니다.
파일을 첨부 할 수 있는 API입니다. 해당 API에 관한 정보는
`링크 <http://www.tistory.com/guide/api/post.php#post-attach>`_ 를 통해
살펴보실 수 있습니다.
:param uploaded_file: 업로드할 파일의 경로입니다.
:type uploaded_file: str
:param blog_name: 블로그 명입니다., defaults to None
:type blog_name: str, optional
:param target_url: 블로그의 url입니다. deprecated된 옵션입니다., defaults to None
:type target_url: str, optional
:raises NoSpecifiedBlogError: 해당하는 블로그가 존재하지 않을 때 일어나는 에러입니다.
:return:
`파일 첨부 API <http://www.tistory.com/guide/api/post.php#post-attach>`_ 링크에서
어떤 데이터가 넘어오는 지 알 수 있습니다.
:rtype: dict
"""
url = self._get_url(self.kind, 'attach')
params = self._get_default_params()
self._set_blog_name(params, blog_name, target_url)
with open(uploaded_file, 'rb') as f:
files = {'uploadedfile': f}
response = self._perform('POST', url, data=params, files=files)
return response
def delete(self, post_id, blog_name=None, target_url=None):
"""post/delete API 구현입니다.
단일 게시글을 삭제할 수 있는 API입니다. 해당 API에 관한 정보는
`링크 <http://www.tistory.com/guide/api/post.php#post-delete>`_ 를 통해
살펴보실 수 있습니다.
:param post_id: 삭제할 게시글 번호입니다.
:type post_id: int
:param blog_name: 블로그 명입니다., defaults to None
:type blog_name: str, optional
:param target_url: 블로그의 url입니다. deprecated된 옵션입니다., defaults to None
:type target_url: str, optional
:raises NoSpecifiedBlogError: 해당하는 블로그가 존재하지 않을 때 일어나는 에러입니다.
:return:
`글 삭제 API <http://www.tistory.com/guide/api/post.php#post-delete>`_ 링크에서
어떤 데이터가 넘어오는 지 알 수 있습니다.
:rtype: dict
"""
url = self._get_url(self.kind, 'delete')
params = self._get_default_params()
self._set_blog_name(params, blog_name, target_url)
params['postId'] = post_id
response = self._perform('POST', url, data=params)
return response
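The argument validation shared by `write` and `modify` can be exercised in isolation. A standalone sketch that mirrors those checks (hypothetical helper, not part of pytistory):

```python
import datetime

# Hypothetical stand-in that reproduces the visibility/published/tag
# handling of `write` and `modify` above.
def build_post_params(visibility=0, published=None, tag=None):
    params = {}
    if isinstance(visibility, int) and 0 <= visibility <= 3:
        params['visibility'] = visibility
    else:
        raise TypeError('A visibility must be 0, 1, 2, or 3.')
    if published is not None:
        if isinstance(published, datetime.datetime):
            params['published'] = published.timestamp()
        else:
            raise TypeError('A published must be a datetime object')
    tag = tag if tag is not None else []
    if isinstance(tag, list):
        params['tag'] = ','.join(tag)  # tags travel as one comma-separated string
    else:
        raise TypeError('A tag must be a list.')
    return params

print(build_post_params(visibility=2, tag=['python', 'tistory'])['tag'])
# python,tistory
```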
| 34.665468 | 89 | 0.58815 | 1,308 | 9,637 | 4.233945 | 0.149847 | 0.043337 | 0.048032 | 0.061755 | 0.866197 | 0.850488 | 0.826833 | 0.804261 | 0.781871 | 0.761466 | 0 | 0.004668 | 0.310885 | 9,637 | 277 | 90 | 34.790614 | 0.829243 | 0.505759 | 0 | 0.6875 | 0 | 0 | 0.089442 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075 | false | 0 | 0.025 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
351d710dd93bdaaa8aacc2d78108fc2fbf358873 | 969 | py | Python | sdk/python/pulumi_google_native/dialogflow/v2beta1/__init__.py | AaronFriel/pulumi-google-native | 75d1cda425e33d4610348972cd70bddf35f1770d | [
"Apache-2.0"
] | 44 | 2021-04-18T23:00:48.000Z | 2022-02-14T17:43:15.000Z | sdk/python/pulumi_google_native/dialogflow/v2beta1/__init__.py | AaronFriel/pulumi-google-native | 75d1cda425e33d4610348972cd70bddf35f1770d | [
"Apache-2.0"
] | 354 | 2021-04-16T16:48:39.000Z | 2022-03-31T17:16:39.000Z | sdk/python/pulumi_google_native/dialogflow/v2beta1/__init__.py | AaronFriel/pulumi-google-native | 75d1cda425e33d4610348972cd70bddf35f1770d | [
"Apache-2.0"
] | 8 | 2021-04-24T17:46:51.000Z | 2022-01-05T10:40:21.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
from ... import _utilities
import typing
# Export this package's modules as members:
from ._enums import *
from .context import *
from .conversation import *
from .conversation_profile import *
from .document import *
from .entity_type import *
from .environment import *
from .get_context import *
from .get_conversation import *
from .get_conversation_profile import *
from .get_document import *
from .get_entity_type import *
from .get_environment import *
from .get_intent import *
from .get_knowledge_base import *
from .get_participant import *
from .get_session_entity_type import *
from .get_version import *
from .intent import *
from .knowledge_base import *
from .participant import *
from .session_entity_type import *
from .version import *
from ._inputs import *
from . import outputs
| 29.363636 | 80 | 0.770898 | 136 | 969 | 5.316176 | 0.382353 | 0.33195 | 0.197787 | 0.11065 | 0.11065 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001215 | 0.150671 | 969 | 32 | 81 | 30.28125 | 0.877278 | 0.209494 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
35483095c344c8d329b17047f97e0cd24ab169e4 | 9,230 | py | Python | paradigm/instruction_text.py | luc-vermeylen/TS_Conditioning | 68a334e52778c04b00150ab9b240f3fc319429ea | [
"MIT"
] | null | null | null | paradigm/instruction_text.py | luc-vermeylen/TS_Conditioning | 68a334e52778c04b00150ab9b240f3fc319429ea | [
"MIT"
] | null | null | null | paradigm/instruction_text.py | luc-vermeylen/TS_Conditioning | 68a334e52778c04b00150ab9b240f3fc319429ea | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu Nov 7 12:37:54 2019
@author: luc
"""
from psychopy import core, visual, event
def introduction(win, size, animacy, free_keys):
def show_text(text, win):
instr = visual.TextStim(win = win, text = '', height = .05, wrapWidth = 1.5, font = 'monospace')
instr.text = text;
instr.draw(); win.flip();
text_resp = event.waitKeys()
return text_resp
start = """Welkom en alvast bedankt voor je deelname aan dit experiment!
Alvorens je begint willen we je eerst even aan twee belangrijke
regels van uw experimentdeelname herinneren: \n\n
Deze experimentsafname gebeurt in groep. Probeer hier rekening mee te houden:
Indien u eventuele vragen, onzekerheden of opmerkingen hebt over het experiment, vraag dit
dan eerst aan de proefleider en indien mogelijk zonder de andere deelnemers te storen. \n\n
Dit experiment is een reactietijden-experiment.
In reactietijden-experimenten is het steeds de bedoeling zo snel en accuraat mogelijk te
reageren! Om genoeg data te kunnen verzamelen bieden we daarbij veel opeenvolgende
beurten aan. Dit kan soms repetitief en eentonig overkomen, dus vragen wij er uw aandacht
zo goed mogelijk bij te houden.\n\n
Druk op spatie om verder te gaan..."""
prac_instr = """Dit is het experiment, let op, de procedure is een beetje complex, dus lees aandachtig:
Je zal straks steeds een letter en een woord zien verschijnen. Bijvoorbeeld: \n\n
A\n
koe\n\n
Jouw taak bestaat er uit om eerst te bepalen of de letter een klinker of een medeklinker is,
en vervolgens de taak uit te voeren afhankelijk van het type letter. \n\n
Namelijk, als de letter een {} is, moet je in deze taak: \n
op de letter G drukken wanneer het woord kleiner is, en de letter H het woord groter is dan een basketbal.\n\n
Echter, wanneer de letter een {} is, moet je in deze taak: \n
op de letter G drukken wanneer het woord niet levend is, en de letter H wanneer wel levend.
Met levend bedoelen we hier elk soort levend organisme: dier, boom, plant, fruit, of groente. \n\n
Druk op spatie om verder te gaan...""".format(size,animacy)
prac_instr2 = """Je zal soms ook meerdere beurten na elkaar het # tekentje en een cijfer zien verschijnen
in plaats van een letter en een woord. \n
Jouw taak bestaat er uit om ofwel te beoordelen of het cijfer even of oneven is,
ofwel te beoordelen of het cijfer kleiner of groter dan 5 is.\n\n
Op deze beurten mag je zelf kiezen welke taak je uitvoert. Echter, probeer dit zo willekeurig
mogelijk te doen! Alsof een dobbelsteen de keuze zou bepalen van welke taak je uitvoert!\n\n
Let op! De toetsen die je moet gebruiken hangen nu af van je keuze.\n\n
Namelijk, als de je de cijfers wilt beoordelen als kleiner/groter dan 5 moet je \n
op de letter {} drukken wanneer het cijfer kleiner is,
en de letter {} het cijfer groter is dan 5.\n\n
Echter, wanneer je de cijfers wilt beoordelen als even/oneven moet je \n
op de letter {} drukken wanneer het cijfer oneven is,
en de letter {} wanneer het cijfer even is.\n\n
Druk op spatie om verder te gaan...""".format(free_keys['nsize']['left'].upper(),free_keys['nsize']['right'].upper(),free_keys['parity']['left'].upper(),free_keys['parity']['right'].upper())
prac_instr3 = """!!Je kan met dit experiment ook een FNAC-BON van 50 euro winnen!!\n\n
Op elke beurt kan je punten winnen als je correct antwoord. Soms is dit maar 1 punt, maar soms
kunnen dit ook 10 punten zijn. Dit is volledig willekeurig bepaald.\n\n
Je weet dus niet op voorhand hoeveel punten te verdienen zijn voor elke beurt:
Probeer daarom op elke beurt correct en snel genoeg te antwoorden!\n\n
Enkel op de beurten waar je vrij kan kiezen welke taak je doet kan je geen punten verdienen.
Echter, deelnemers die daar te veel fouten maken of niet willekeurig taken kiezen
tijdens deze fase, worden uitgesloten voor de competitie om de FNAC bon.\n\n
Druk op spatie om nog een keer de instructies te zien..."""
show_text(start, win)
show_text(prac_instr, win)
show_text(prac_instr2, win)
show_text(prac_instr3, win)
#%%
def cued_prac_instructions(win, size, animacy, free_keys):
def show_text(text, win):
instr = visual.TextStim(win = win, text = '', height = .05, wrapWidth = 1.5, font = 'monospace')
instr.text = text;
instr.draw(); win.flip();
text_resp = event.waitKeys()
return text_resp
prac_instr4 = """Jouw taak bestaat er uit om eerst te bepalen of de letter een klinker of een
medeklinker is, en vervolgens de taak uit te voeren afhankelijk van het type letter. \n\n
Namelijk, als de letter een {} is, moet je in deze taak: \n
op de letter G drukken wanneer het woord kleiner is, en de letter H het woord groter is dan een basketbal.\n\n
Echter, wanneer de letter een {} is, moet je in deze taak: \n
op de letter G drukken wanneer het woord niet levend is, en de letter H wanneer wel levend.
Met levend bedoelen we hier elk soort levend organisme: dier, boom, plant, fruit, of groente. \n\n
Druk op spatie om eens enkele oefenbeurten te proberen (nog niet voor punten)...""".format(size,animacy)
show_text(prac_instr4,win)
#%%
def free_prac_instructions(win, size, animacy, free_keys):
def show_text(text, win):
instr = visual.TextStim(win = win, text = '', height = .05, wrapWidth = 1.5, font = 'monospace')
instr.text = text;
instr.draw(); win.flip();
text_resp = event.waitKeys()
return text_resp
prac_instr5 = """Nu zal je meerdere beurten na elkaar het # tekentje en een cijfer zien verschijnen
in plaats van een letter en een woord.\n
Jouw taak bestaat er uit om ofwel te beoordelen of het cijfer even of oneven is,
ofwel te beoordelen of het cijfer kleiner of groter dan 5 is.\n\n
Op deze beurten mag je zelf kiezen welke taak je uitvoert. Echter, probeer dit zo willekeurig
mogelijk te doen! Alsof een dobbelsteen de keuze zou bepalen van welke taak je uitvoert!\n\n
Let op! De toetsen die je moet gebruiken hangen nu af van je keuze.\n\n
Namelijk, als de je de cijfers wilt beoordelen als kleiner/groter dan 5 moet je \n
op de letter {} drukken wanneer het cijfer kleiner is,
en de letter {} het cijfer groter is dan 5.\n\n
Echter, wanneer je de cijfers wilt beoordelen als even/oneven moet je \n
op de letter {} drukken wanneer het cijfer oneven is,
en de letter {} wanneer het cijfer even is.\n\n
Druk op spatie om eens enkele oefenbeurten te proberen ...""".format(free_keys['nsize']['left'].upper(),free_keys['nsize']['right'].upper(),free_keys['parity']['left'].upper(),free_keys['parity']['right'].upper())
show_text(prac_instr5,win)
#%%
def review_instructions(win, size, animacy, free_keys):
def show_text(text, win):
instr = visual.TextStim(win = win, text = '', height = .05, wrapWidth = 1.5, font = 'monospace')
instr.text = text;
instr.draw(); win.flip();
text_resp = event.waitKeys()
return text_resp
prac_instr6 = """Duidelijk? Zoniet, laat zeker nog eens weten aan de proefleider.\n\n
Nu begint het eigenlijke experiment voor punten!\n\n
Veel succes!\n\n
Druk op 'spatie' om aan het eigenlijke experiment te beginnen.\n\n"""
show_text(prac_instr6,win)[0]
#%%
def cued_instructions(win, size, animacy, free_keys):
def show_text(text, win):
instr = visual.TextStim(win = win, text = '', height = .05, wrapWidth = 1.5, font = 'monospace')
instr.text = text;
instr.draw(); win.flip();
text_resp = event.waitKeys()
return text_resp
prac_instr4 = """In het volgende blok, moet je op basis van de letter het woord beoordelen. \n\n
Namelijk, als de letter een {} is, moet je in deze taak: \n
op de letter G drukken wanneer het woord kleiner is, en de letter H het woord groter is dan een basketbal.\n\n
Echter, wanneer de letter een {} is, moet je in deze taak: \n
op de letter G drukken wanneer het woord niet levend is, en de letter H wanneer wel levend.
Met levend bedoelen we hier elk soort levend organisme: dier, boom, plant, fruit, of groente. \n\n
Druk op spatie om te starten...""".format(size,animacy)
show_text(prac_instr4,win)
#%%
def free_instructions(win, size, animacy, free_keys):
def show_text(text, win):
instr = visual.TextStim(win = win, text = '', height = .05, wrapWidth = 1.5, font = 'monospace')
instr.text = text;
instr.draw(); win.flip();
text_resp = event.waitKeys()
return text_resp
prac_instr5 = """In het volgende blok kies je zelf hoe je de cijfers zal beoordelen!
Namelijk, als de je de cijfers wilt beoordelen als kleiner/groter dan 5 moet je \n
op de letter {} drukken wanneer het cijfer kleiner is,
en de letter {} het cijfer groter is dan 5.\n\n
Echter, wanneer je de cijfers wilt beoordelen als even/oneven moet je \n
op de letter {} drukken wanneer het cijfer oneven is,
en de letter {} wanneer het cijfer even is.\n\n
Druk op spatie om te starten...""".format(free_keys['nsize']['left'].upper(),free_keys['nsize']['right'].upper(),free_keys['parity']['left'].upper(),free_keys['parity']['right'].upper())
show_text(prac_instr5,win) | 49.623656 | 213 | 0.713651 | 1,522 | 9,230 | 4.28318 | 0.187911 | 0.010431 | 0.009204 | 0.020249 | 0.714834 | 0.711919 | 0.70701 | 0.706857 | 0.706857 | 0.699801 | 0 | 0.008615 | 0.195125 | 9,230 | 186 | 214 | 49.623656 | 0.868892 | 0.009101 | 0 | 0.607143 | 0 | 0.05 | 0.68776 | 0.005584 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0 | 0.007143 | 0 | 0.135714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
101eb7789419f949a9bbc43a8192eb92390164c5 | 30 | py | Python | scrubadub_stanford/detectors/utils/__init__.py | LeapBeyond/scrubadub_stanford | 18fe57158380fec2ef4ab2e35736cfa6046c4faf | [
"Apache-2.0"
] | null | null | null | scrubadub_stanford/detectors/utils/__init__.py | LeapBeyond/scrubadub_stanford | 18fe57158380fec2ef4ab2e35736cfa6046c4faf | [
"Apache-2.0"
] | null | null | null | scrubadub_stanford/detectors/utils/__init__.py | LeapBeyond/scrubadub_stanford | 18fe57158380fec2ef4ab2e35736cfa6046c4faf | [
"Apache-2.0"
] | null | null | null | from .utils import tag_helper
| 15 | 29 | 0.833333 | 5 | 30 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
10575f295452b6652802a79f4acee5c5c74cbc4d | 7,780 | py | Python | mstrio/api/reports.py | LLejoly/mstrio-py | 497fb041318d0def12cf72917ede2c02c1808067 | [
"Apache-2.0"
] | null | null | null | mstrio/api/reports.py | LLejoly/mstrio-py | 497fb041318d0def12cf72917ede2c02c1808067 | [
"Apache-2.0"
] | null | null | null | mstrio/api/reports.py | LLejoly/mstrio-py | 497fb041318d0def12cf72917ede2c02c1808067 | [
"Apache-2.0"
] | null | null | null | from packaging import version
from mstrio.utils.helper import response_handler
def report_definition(connection, report_id):
"""Get the definition of a specific report, including attributes and
metrics. This in-memory report definition provides information about all
available objects without actually running any data query/report. The
results can be used by other requests to help filter large datasets and
retrieve values dynamically, helping with performance and scalability.
Args:
connection: MicroStrategy REST API connection object
report_id (str): Unique ID of the report you wish to extract information from.
Returns:
Complete HTTP response object.
"""
connection._validate_project_selected()
response = connection.session.get(url=connection.base_url + '/api/v2/reports/' + report_id)
if not response.ok:
response_handler(response, "Error getting report definition. Check report ID.")
return response
def report_instance(connection, report_id, body={}, offset=0, limit=5000):
"""Get the results of a newly created report instance. This in-memory
report instance can be used by other requests.
Args:
connection: MicroStrategy REST API connection object.
report_id (str): Unique ID of the report you wish to extract information
from.
offset (int, optional): Starting point within the collection of returned
results. Default is 0.
limit (int, optional): Used to control data extract behavior on datasets
            which have a large number of rows. The default is 5000. As
            an example, if the dataset has 50,000 rows, this function will
            incrementally extract all 50,000 rows in 5,000-row chunks. Depending
on system resources, using a higher limit setting (e.g. 10,000) may
reduce the total time required to extract the entire dataset.
Returns:
Complete HTTP response object.
"""
params = {'offset': offset, 'limit': limit}
if version.parse(connection.iserver_version) >= version.parse("11.2.0200"):
params['fields'] = '-data.metricValues.extras,-data.metricValues.formatted'
response = connection.session.post(url=connection.base_url + '/api/v2/reports/' + report_id + '/instances/',
json=body,
params=params)
if not response.ok:
response_handler(response, "Error getting report contents.")
return response
def report_instance_id(connection, report_id, instance_id, offset=0, limit=5000):
"""Get the results of a previously created report instance, using the in-
memory report instance created by a POST /api/reports/{reportId}/instances
request.
Args:
connection: MicroStrategy REST API connection object
report_id (str): Unique ID of the report you wish to extract information
from.
instance_id (str): Unique ID of the in-memory instance of a published
report.
offset (int): Optional. Starting point within the collection of returned
results. Default is 0.
limit (int, optional): Used to control data extract behavior on datasets
            which have a large number of rows. The default is 5000. As
            an example, if the dataset has 50,000 rows, this function will
            incrementally extract all 50,000 rows in 5,000-row chunks. Depending
on system resources, using a higher limit setting (e.g. 10,000) may
reduce the total time required to extract the entire dataset.
Returns:
Complete HTTP response object.
"""
params = {'offset': offset, 'limit': limit}
if version.parse(connection.iserver_version) >= version.parse("11.2.0200"):
params['fields'] = '-data.metricValues.extras,-data.metricValues.formatted'
response = connection.session.get(url=connection.base_url + '/api/v2/reports/' + report_id + '/instances/' +
instance_id,
params=params)
if not response.ok:
        response_handler(response, "Error getting report contents.")
return response
def report_instance_id_coroutine(future_session, connection, report_id, instance_id, offset=0, limit=5000):
"""Get the future of a previously created instance for a specific report
    asynchronously, using the in-memory instance created by report_instance().
Returns:
Complete Future object.
"""
params = {'offset': offset, 'limit': limit}
if version.parse(connection.iserver_version) >= version.parse("11.2.0200"):
params['fields'] = '-data.metricValues.extras,-data.metricValues.formatted'
url = connection.base_url + '/api/v2/reports/' + report_id + '/instances/' + instance_id
future = future_session.get(url, params=params)
return future
def report_single_attribute_elements(connection, report_id, attribute_id, offset=0, limit=200000):
"""Get elements of a specific attribute of a specific report.
Args:
connection: MicroStrategy REST API connection object.
report_id (str): Unique ID of the report you wish to extract information
from.
attribute_id (str): Unique ID of the attribute in the report.
offset (int): Optional. Starting point within the collection of returned
results. Default is 0.
        limit (int, optional): Used to control data extract behavior on
            attributes which have a large number of elements. The default is
            200,000. Elements beyond the limit are extracted incrementally in
            chunks of the given size. Depending on system resources, using a
            higher limit setting may reduce the total time required to extract
            all elements.
Returns:
Complete HTTP response object
"""
response = connection.session.get(url=connection.base_url + '/api/reports/' + report_id + '/attributes/' +
attribute_id + '/elements',
params={'offset': offset,
'limit': limit})
if not response.ok:
response_handler(response, "Error retrieving attribute " +
attribute_id + " elements")
return response
def report_single_attribute_elements_coroutine(future_session, connection, report_id, attribute_id, offset=0, limit=200000):
"""Get elements of a specific attribute of a specific report.
Args:
connection: MicroStrategy REST API connection object.
report_id (str): Unique ID of the report you wish to extract information
from.
attribute_id (str): Unique ID of the attribute in the report.
offset (int): Optional. Starting point within the collection of returned
results. Default is 0.
        limit (int, optional): Used to control data extract behavior on
            attributes which have a large number of elements. The default is
            200,000. Elements beyond the limit are extracted incrementally in
            chunks of the given size. Depending on system resources, using a
            higher limit setting may reduce the total time required to extract
            all elements.
Returns:
Complete Future object
"""
url = connection.base_url + '/api/reports/' + report_id + '/attributes/' + attribute_id + '/elements'
future = future_session.get(url, params={'offset': offset,
'limit': limit})
return future
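The offset/limit chunking described in the docstrings above can be sketched without a live Intelligence Server; `chunk_offsets` below is a hypothetical helper (not part of mstrio) that computes the offsets a caller would request when paging through a large dataset:

```python
def chunk_offsets(total_rows, limit=5000):
    """Starting offsets for extracting `total_rows` rows in `limit`-row chunks."""
    return list(range(0, total_rows, limit))

# A 50,000-row dataset with the default limit of 5,000 needs ten GET requests.
offsets = chunk_offsets(50000)
print(len(offsets), offsets[0], offsets[-1])  # 10 0 45000
```

Each offset would be passed in turn to report_instance_id (or its coroutine variant) until the response contains fewer than `limit` rows.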
| 47.730061 | 124 | 0.662725 | 978 | 7,780 | 5.205521 | 0.169734 | 0.028285 | 0.017285 | 0.020428 | 0.823807 | 0.794343 | 0.755058 | 0.738951 | 0.730505 | 0.716755 | 0 | 0.026151 | 0.262725 | 7,780 | 162 | 125 | 48.024691 | 0.861402 | 0.554242 | 0 | 0.469388 | 0 | 0 | 0.183457 | 0.05214 | 0 | 0 | 0 | 0 | 0 | 1 | 0.122449 | false | 0 | 0.040816 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
106fd443112c0ff7b0f6fe94e403a7c38fa1da6e | 32 | py | Python | test/new.py | tokyodrift1993/verify-changed-files | ec6ed9637374de934d468b342368c2d9cd2892d6 | [
"MIT"
] | null | null | null | test/new.py | tokyodrift1993/verify-changed-files | ec6ed9637374de934d468b342368c2d9cd2892d6 | [
"MIT"
] | null | null | null | test/new.py | tokyodrift1993/verify-changed-files | ec6ed9637374de934d468b342368c2d9cd2892d6 | [
"MIT"
] | null | null | null | print("Test 1")
print("Test 2")
| 10.666667 | 15 | 0.625 | 6 | 32 | 3.333333 | 0.666667 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0.125 | 32 | 2 | 16 | 16 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
108200e7a579652bf40cb9f3e7ab793710610427 | 8,318 | py | Python | tests/testflows/rbac/tests/privileges/show/show_columns.py | pdv-ru/ClickHouse | 0ff975bcf3008fa6c6373cbdfed16328e3863ec5 | [
"Apache-2.0"
] | 15,577 | 2019-09-23T11:57:53.000Z | 2022-03-31T18:21:48.000Z | tests/testflows/rbac/tests/privileges/show/show_columns.py | pdv-ru/ClickHouse | 0ff975bcf3008fa6c6373cbdfed16328e3863ec5 | [
"Apache-2.0"
] | 16,476 | 2019-09-23T11:47:00.000Z | 2022-03-31T23:06:01.000Z | tests/testflows/rbac/tests/privileges/show/show_columns.py | pdv-ru/ClickHouse | 0ff975bcf3008fa6c6373cbdfed16328e3863ec5 | [
"Apache-2.0"
] | 3,633 | 2019-09-23T12:18:28.000Z | 2022-03-31T15:55:48.000Z | from testflows.core import *
from testflows.asserts import error
from rbac.requirements import *
from rbac.helper.common import *
import rbac.helper.errors as errors
@TestSuite
def describe_with_privilege_granted_directly(self, node=None):
"""Check that user is able to execute DESCRIBE on a table if and only if
they have SHOW COLUMNS privilege for that table granted directly.
"""
user_name = f"user_{getuid()}"
if node is None:
node = self.context.node
with user(node, f"{user_name}"):
table_name = f"table_name_{getuid()}"
Suite(test=describe)(grant_target_name=user_name, user_name=user_name, table_name=table_name)
@TestSuite
def describe_with_privilege_granted_via_role(self, node=None):
"""Check that user is able to execute DESCRIBE on a table if and only if
they have SHOW COLUMNS privilege for that table granted through a role.
"""
user_name = f"user_{getuid()}"
role_name = f"role_{getuid()}"
if node is None:
node = self.context.node
with user(node, f"{user_name}"), role(node, f"{role_name}"):
table_name = f"table_name_{getuid()}"
with When("I grant the role to the user"):
node.query(f"GRANT {role_name} TO {user_name}")
Suite(test=describe)(grant_target_name=role_name, user_name=user_name, table_name=table_name)
@TestSuite
@Requirements(
RQ_SRS_006_RBAC_DescribeTable_RequiredPrivilege("1.0"),
)
def describe(self, grant_target_name, user_name, table_name, node=None):
"""Check that user is able to execute DESCRIBE only when they have SHOW COLUMNS privilege.
"""
exitcode, message = errors.not_enough_privileges(name=user_name)
if node is None:
node = self.context.node
with table(node, table_name):
with Scenario("DESCRIBE table without privilege"):
with When("I grant the user NONE privilege"):
node.query(f"GRANT NONE TO {grant_target_name}")
with And("I grant the user USAGE privilege"):
node.query(f"GRANT USAGE ON *.* TO {grant_target_name}")
with Then(f"I attempt to DESCRIBE {table_name}"):
node.query(f"DESCRIBE {table_name}", settings=[("user",user_name)],
exitcode=exitcode, message=message)
with Scenario("DESCRIBE with privilege"):
with When(f"I grant SHOW COLUMNS on the table"):
node.query(f"GRANT SHOW COLUMNS ON {table_name} TO {grant_target_name}")
with Then(f"I attempt to DESCRIBE {table_name}"):
node.query(f"DESCRIBE TABLE {table_name}", settings=[("user",user_name)])
with Scenario("DESCRIBE with revoked privilege"):
with When(f"I grant SHOW COLUMNS on the table"):
node.query(f"GRANT SHOW COLUMNS ON {table_name} TO {grant_target_name}")
with And(f"I revoke SHOW COLUMNS on the table"):
node.query(f"REVOKE SHOW COLUMNS ON {table_name} FROM {grant_target_name}")
with Then(f"I attempt to DESCRIBE {table_name}"):
node.query(f"DESCRIBE {table_name}", settings=[("user",user_name)],
exitcode=exitcode, message=message)
with Scenario("DESCRIBE with revoked ALL privilege"):
with When(f"I grant SHOW COLUMNS on the table"):
node.query(f"GRANT SHOW COLUMNS ON {table_name} TO {grant_target_name}")
with And("I revoke ALL privilege"):
node.query(f"REVOKE ALL ON *.* FROM {grant_target_name}")
with Then(f"I attempt to DESCRIBE {table_name}"):
node.query(f"DESCRIBE {table_name}", settings=[("user",user_name)],
exitcode=exitcode, message=message)
with Scenario("DESCRIBE with ALL privilege"):
with When(f"I grant SHOW COLUMNS on the table"):
node.query(f"GRANT ALL ON *.* TO {grant_target_name}")
with Then(f"I attempt to DESCRIBE {table_name}"):
node.query(f"DESCRIBE TABLE {table_name}", settings=[("user",user_name)])
@TestSuite
def show_create_with_privilege_granted_directly(self, node=None):
"""Check that user is able to execute SHOW CREATE on a table if and only if
they have SHOW COLUMNS privilege for that table granted directly.
"""
user_name = f"user_{getuid()}"
if node is None:
node = self.context.node
with user(node, f"{user_name}"):
table_name = f"table_name_{getuid()}"
Suite(test=show_create)(grant_target_name=user_name, user_name=user_name, table_name=table_name)
@TestSuite
def show_create_with_privilege_granted_via_role(self, node=None):
"""Check that user is able to execute SHOW CREATE on a table if and only if
    they have SHOW COLUMNS privilege for that table granted through a role.
"""
user_name = f"user_{getuid()}"
role_name = f"role_{getuid()}"
if node is None:
node = self.context.node
with user(node, f"{user_name}"), role(node, f"{role_name}"):
table_name = f"table_name_{getuid()}"
with When("I grant the role to the user"):
node.query(f"GRANT {role_name} TO {user_name}")
Suite(test=show_create)(grant_target_name=role_name, user_name=user_name, table_name=table_name)
@TestSuite
@Requirements(
RQ_SRS_006_RBAC_ShowCreateTable_RequiredPrivilege("1.0"),
)
def show_create(self, grant_target_name, user_name, table_name, node=None):
"""Check that user is able to execute SHOW CREATE on a table only when they have SHOW COLUMNS privilege.
"""
exitcode, message = errors.not_enough_privileges(name=user_name)
if node is None:
node = self.context.node
with table(node, table_name):
with Scenario("SHOW CREATE without privilege"):
with When("I grant the user NONE privilege"):
node.query(f"GRANT NONE TO {grant_target_name}")
with And("I grant the user USAGE privilege"):
node.query(f"GRANT USAGE ON *.* TO {grant_target_name}")
with Then(f"I attempt to SHOW CREATE {table_name}"):
node.query(f"SHOW CREATE TABLE {table_name}", settings=[("user",user_name)],
exitcode=exitcode, message=message)
with Scenario("SHOW CREATE with privilege"):
with When(f"I grant SHOW COLUMNS on the table"):
node.query(f"GRANT SHOW COLUMNS ON {table_name} TO {grant_target_name}")
with Then(f"I attempt to SHOW CREATE {table_name}"):
node.query(f"SHOW CREATE TABLE {table_name}", settings=[("user",user_name)])
with Scenario("SHOW CREATE with revoked privilege"):
with When(f"I grant SHOW COLUMNS on the table"):
node.query(f"GRANT SHOW COLUMNS ON {table_name} TO {grant_target_name}")
with And(f"I revoke SHOW COLUMNS on the table"):
node.query(f"REVOKE SHOW COLUMNS ON {table_name} FROM {grant_target_name}")
with Then(f"I attempt to SHOW CREATE {table_name}"):
node.query(f"SHOW CREATE TABLE {table_name}", settings=[("user",user_name)],
exitcode=exitcode, message=message)
with Scenario("SHOW CREATE with ALL privilege"):
with When(f"I grant SHOW COLUMNS on the table"):
node.query(f"GRANT ALL ON *.* TO {grant_target_name}")
with Then(f"I attempt to SHOW CREATE {table_name}"):
node.query(f"SHOW CREATE TABLE {table_name}", settings=[("user",user_name)])
@TestFeature
@Name("show columns")
@Requirements(
RQ_SRS_006_RBAC_ShowColumns_Privilege("1.0"),
RQ_SRS_006_RBAC_Privileges_All("1.0"),
RQ_SRS_006_RBAC_Privileges_None("1.0")
)
def feature(self, node="clickhouse1"):
"""Check the RBAC functionality of SHOW COLUMNS.
"""
self.context.node = self.context.cluster.node(node)
Suite(run=describe_with_privilege_granted_directly, setup=instrument_clickhouse_server_log)
Suite(run=describe_with_privilege_granted_via_role, setup=instrument_clickhouse_server_log)
Suite(run=show_create_with_privilege_granted_directly, setup=instrument_clickhouse_server_log)
Suite(run=show_create_with_privilege_granted_via_role, setup=instrument_clickhouse_server_log)
| 39.235849 | 108 | 0.662178 | 1,166 | 8,318 | 4.535163 | 0.078045 | 0.076589 | 0.047277 | 0.050303 | 0.914334 | 0.903177 | 0.890318 | 0.877269 | 0.871218 | 0.866679 | 0 | 0.004071 | 0.232147 | 8,318 | 211 | 109 | 39.421801 | 0.823861 | 0.096898 | 0 | 0.686567 | 0 | 0 | 0.319355 | 0.01129 | 0 | 0 | 0 | 0 | 0.007463 | 1 | 0.052239 | false | 0 | 0.037313 | 0 | 0.089552 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
108a8c25ad66d41f70e0f116d038a374476e38d1 | 102 | py | Python | tcr_embedding/evaluation/__init__.py | SchubertLab/mvTCR | d815749e24650f69ef68054e0078d490af91b71d | [
"MIT"
] | 16 | 2021-06-28T20:30:50.000Z | 2022-03-05T12:40:26.000Z | tcr_embedding/evaluation/__init__.py | SchubertLab/mvTCR | d815749e24650f69ef68054e0078d490af91b71d | [
"MIT"
] | 2 | 2021-06-29T07:42:10.000Z | 2022-01-11T08:16:42.000Z | tcr_embedding/evaluation/__init__.py | SchubertLab/mvTCR | d815749e24650f69ef68054e0078d490af91b71d | [
"MIT"
] | 1 | 2021-07-23T18:59:56.000Z | 2021-07-23T18:59:56.000Z | from . import Imputation
from . import Metrics
from . import Clustering
from . import WrapperFunctions | 25.5 | 30 | 0.813725 | 12 | 102 | 6.916667 | 0.5 | 0.481928 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 102 | 4 | 30 | 25.5 | 0.954023 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
52a7543430c3ca4781b7396cfb0956c8cbbd8fc9 | 39 | py | Python | pyziabm/__init__.py | blakelucey/pyziabm | b4e62aa036233e58d7b44b654c375baf57ffc2d3 | [
"BSD-3-Clause"
] | 35 | 2017-11-27T13:10:42.000Z | 2021-09-13T13:39:55.000Z | pyziabm/__init__.py | blakelucey/pyziabm | b4e62aa036233e58d7b44b654c375baf57ffc2d3 | [
"BSD-3-Clause"
] | 2 | 2017-10-10T20:28:49.000Z | 2021-09-06T14:59:13.000Z | pyziabm/__init__.py | blakelucey/pyziabm | b4e62aa036233e58d7b44b654c375baf57ffc2d3 | [
"BSD-3-Clause"
] | 23 | 2017-08-28T18:29:09.000Z | 2022-03-20T01:59:26.000Z | from pyziabm.runner2017mpi_r4 import *
| 19.5 | 38 | 0.846154 | 5 | 39 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0.102564 | 39 | 1 | 39 | 39 | 0.771429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5e1720798eaee29bf70189ad40db383daebdb4e5 | 1,545 | py | Python | testing_rpy2.py | amloewi/css-blockmodels | f4c0b907632c1cd9dea2930e3efc25125cd18e66 | [
"MIT"
] | 2 | 2015-11-20T14:22:19.000Z | 2016-10-12T21:03:49.000Z | testing_rpy2.py | amloewi/css-blockmodels | f4c0b907632c1cd9dea2930e3efc25125cd18e66 | [
"MIT"
] | null | null | null | testing_rpy2.py | amloewi/css-blockmodels | f4c0b907632c1cd9dea2930e3efc25125cd18e66 | [
"MIT"
] | null | null | null |
# import numpy as np
# import pandas as pd
#
# # base = library('base') -- import packages from R
# from rpy2.robjects.packages import importr as library
# # R.R('x <- 1') AND R.Array('...') etc -- the core interface
# import rpy2.robjects as R
# # Not clear what this does yet, but allows numpy->R easily?
# import rpy2.robjects.numpy2ri
# # Guess if I want to use formulas, I do really need pandas though --
# # rdf = pd2r.convert_to_r_dataframe(pdf)
# import pandas.rpy.common as pd2r
#
#
# def to_rdf(df, name):
# converted = pd2r.convert_to_r_dataframe(df)
# R.globalenv[name] = converted
# return converted
from r import *
if __name__ == '__main__':
base = library('base')
stats = library('stats')
gam = library('gam')
kernlab = library('kernlab')
x = np.random.randn(100)
df = pd.DataFrame({'y':2*x+1, 'x':x})
rdf = dataframe(df, 'rdf')
xgam = R.r("gam(y ~ x, family=gaussian, data=rdf)")
    print(base.summary(xgam))
| 27.105263 | 74 | 0.638188 | 231 | 1,545 | 4.186147 | 0.311688 | 0.074457 | 0.074457 | 0.043433 | 0.733195 | 0.709411 | 0.709411 | 0.709411 | 0.709411 | 0.709411 | 0 | 0.016694 | 0.224595 | 1,545 | 56 | 75 | 27.589286 | 0.790484 | 0.712621 | 0 | 0 | 0 | 0 | 0.174242 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.090909 | null | null | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5e31299e4248b6db116be45a6554970caa8cd815 | 240 | py | Python | sourcelyzer/httpapi/v1/resources/scm_commit.py | sourcelyzer/sourcelyzer | bbb5d9cce9d79986d905f7484989d97a78b1f5aa | [
"MIT"
] | 1 | 2017-07-25T21:06:09.000Z | 2017-07-25T21:06:09.000Z | sourcelyzer/httpapi/v1/resources/scm_commit.py | sourcelyzer/sourcelyzer | bbb5d9cce9d79986d905f7484989d97a78b1f5aa | [
"MIT"
] | null | null | null | sourcelyzer/httpapi/v1/resources/scm_commit.py | sourcelyzer/sourcelyzer | bbb5d9cce9d79986d905f7484989d97a78b1f5aa | [
"MIT"
] | null | null | null | from sourcelyzer.dao import ScmCommit
from sourcelyzer.httpapi.v1.resources.base import DBResource
from sourcelyzer.httpapi.tools import RequireAuthentication
import cherrypy
class ScmCommitResource(DBResource):
resource = ScmCommit
| 26.666667 | 60 | 0.845833 | 26 | 240 | 7.807692 | 0.615385 | 0.221675 | 0.216749 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004673 | 0.108333 | 240 | 8 | 61 | 30 | 0.943925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eaa7013e2267037177fcd035acfb97cbdbffd47e | 80 | py | Python | scattering_compositional_learner/__init__.py | mikomel/scattering-compositional-learner | d91f35e56fff62c1968a2819451ce922caa26863 | [
"MIT"
] | null | null | null | scattering_compositional_learner/__init__.py | mikomel/scattering-compositional-learner | d91f35e56fff62c1968a2819451ce922caa26863 | [
"MIT"
] | null | null | null | scattering_compositional_learner/__init__.py | mikomel/scattering-compositional-learner | d91f35e56fff62c1968a2819451ce922caa26863 | [
"MIT"
] | null | null | null | from scattering_compositional_learner.scl import ScatteringCompositionalLearner
| 40 | 79 | 0.9375 | 7 | 80 | 10.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 80 | 1 | 80 | 80 | 0.960526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d80deebf92de8a894311ac5fc449a2fb27b01d2e | 106 | py | Python | usage_demo/loader_1.py | aroberge/ideas | f0c8a49f7030276f629101480be77138db07d881 | [
"MIT"
] | 36 | 2020-02-23T19:06:24.000Z | 2022-02-20T22:53:02.000Z | usage_demo/loader_1.py | aroberge/ideas | f0c8a49f7030276f629101480be77138db07d881 | [
"MIT"
] | 13 | 2020-02-21T15:25:40.000Z | 2021-07-01T09:56:35.000Z | usage_demo/loader_1.py | aroberge/ideas | f0c8a49f7030276f629101480be77138db07d881 | [
"MIT"
] | 1 | 2020-11-05T13:12:07.000Z | 2020-11-05T13:12:07.000Z | # loader_1.py
from ideas.examples import function_keyword
function_keyword.add_hook()
import my_program
| 15.142857 | 43 | 0.839623 | 16 | 106 | 5.25 | 0.8125 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010526 | 0.103774 | 106 | 6 | 44 | 17.666667 | 0.873684 | 0.103774 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d820b599f8af28b489923c6cffc4c6b8829fc6aa | 32 | py | Python | vbench/graphs.py | DataDog/vbench | a4e4497bed2778989fb714c2537cff03438e9ae6 | [
"MIT"
] | 48 | 2015-01-11T23:50:01.000Z | 2016-04-13T03:41:45.000Z | vbench/graphs.py | vene/vbench | 77989fa0d3c45e63f576968d206021ffee72a24c | [
"MIT"
] | 3 | 2017-10-12T19:28:33.000Z | 2022-03-07T13:53:32.000Z | vbench/graphs.py | vene/vbench | 77989fa0d3c45e63f576968d206021ffee72a24c | [
"MIT"
] | 7 | 2015-03-15T19:21:44.000Z | 2016-03-14T11:35:18.000Z | import matplotlib.pyplot as plt
| 16 | 31 | 0.84375 | 5 | 32 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dc1c6f056bdf82a518fc0430a7c5d8ecfed7f3d2 | 894 | py | Python | sure_tosca-client_python_stubs/test/test_tosca_template.py | QCDIS/CONF | 6ddb37b691754bbba97c85228d266ac050c4baa4 | [
"Apache-2.0"
] | null | null | null | sure_tosca-client_python_stubs/test/test_tosca_template.py | QCDIS/CONF | 6ddb37b691754bbba97c85228d266ac050c4baa4 | [
"Apache-2.0"
] | 41 | 2017-01-23T16:20:55.000Z | 2019-10-07T12:45:21.000Z | sure_tosca-client_python_stubs/test/test_tosca_template.py | skoulouzis/CONF | 8c0596810f7ef5fec001148dd67192b25abbe3c8 | [
"Apache-2.0"
] | 2 | 2020-05-26T12:53:14.000Z | 2020-10-08T05:59:46.000Z | # coding: utf-8
"""
tosca-sure
TOSCA Simple qUeRy sErvice (SURE). # noqa: E501
OpenAPI spec version: 1.0.0
Contact: S.Koulouzis@uva.nl
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import sure_tosca_client
from sure_tosca_client.models.tosca_template import ToscaTemplate # noqa: E501
from sure_tosca_client.rest import ApiException
class TestToscaTemplate(unittest.TestCase):
"""ToscaTemplate unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testToscaTemplate(self):
"""Test ToscaTemplate"""
# FIXME: construct object with mandatory attributes with example values
# model = swagger_client.models.tosca_template.ToscaTemplate() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| 21.804878 | 84 | 0.700224 | 107 | 894 | 5.64486 | 0.588785 | 0.059603 | 0.074503 | 0.062914 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018388 | 0.209172 | 894 | 40 | 85 | 22.35 | 0.835926 | 0.449664 | 0 | 0.214286 | 1 | 0 | 0.017778 | 0 | 0 | 0 | 0 | 0.025 | 0 | 1 | 0.214286 | false | 0.214286 | 0.357143 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
dc691243dc4f154cc113bd53b42d809f0cc61d4b | 16,440 | py | Python | models/networks/ContextualLoss.py | jiye-ML/CoCosNet | c4b3f44393462c8353c6c6952d7b05496298df1c | [
"MIT"
] | 319 | 2020-06-19T09:09:06.000Z | 2022-03-30T15:40:25.000Z | models/networks/ContextualLoss.py | jiye-ML/CoCosNet | c4b3f44393462c8353c6c6952d7b05496298df1c | [
"MIT"
] | 36 | 2020-06-19T18:04:52.000Z | 2021-08-11T07:44:02.000Z | models/networks/ContextualLoss.py | jiye-ML/CoCosNet | c4b3f44393462c8353c6c6952d7b05496298df1c | [
"MIT"
] | 45 | 2020-06-19T09:06:20.000Z | 2022-03-17T05:04:20.000Z | # Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import sys
from collections import OrderedDict, namedtuple
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from util.util import feature_normalize, mse_loss
import matplotlib.pyplot as plt
import torchvision
import numpy as np
postpa = torchvision.transforms.Compose([
torchvision.transforms.Lambda(lambda x: x.mul_(1. / 255)),
torchvision.transforms.Normalize(
mean=[-0.40760392, -0.45795686, -0.48501961], #add imagenet mean
std=[1, 1, 1]),
torchvision.transforms.Lambda(lambda x: x[torch.LongTensor([2, 1, 0])]), #turn to RGB
])
postpb = torchvision.transforms.Compose([torchvision.transforms.ToPILImage()])
def post_processing(tensor):
t = postpa(tensor) # denormalize the image since the optimized tensor is the normalized one
t[t > 1] = 1
t[t < 0] = 0
img = postpb(t)
img = np.array(img)
return img
class ContextualLoss(nn.Module):
'''
input is Al, Bl, channel = 1, range ~ [0, 255]
'''
def __init__(self):
super(ContextualLoss, self).__init__()
return None
def forward(self, X_features, Y_features, h=0.1, feature_centering=True):
'''
        X_features & Y_features are feature vectors or 2d feature arrays
h: bandwidth
return the per-sample loss
'''
batch_size = X_features.shape[0]
feature_depth = X_features.shape[1]
feature_size = X_features.shape[2]
# center the feature vector???
# to normalized feature vectors
if feature_centering:
X_features = X_features - Y_features.view(batch_size, feature_depth, -1).mean(dim=-1).unsqueeze(dim=-1).unsqueeze(dim=-1)
Y_features = Y_features - Y_features.view(batch_size, feature_depth, -1).mean(dim=-1).unsqueeze(dim=-1).unsqueeze(dim=-1)
X_features = feature_normalize(X_features).view(batch_size, feature_depth, -1) # batch_size * feature_depth * feature_size^2
Y_features = feature_normalize(Y_features).view(batch_size, feature_depth, -1) # batch_size * feature_depth * feature_size^2
        # cosine distance = 1 - similarity
X_features_permute = X_features.permute(0, 2, 1) # batch_size * feature_size^2 * feature_depth
d = 1 - torch.matmul(X_features_permute, Y_features) # batch_size * feature_size^2 * feature_size^2
# normalized distance: dij_bar
d_norm = d / (torch.min(d, dim=-1, keepdim=True)[0] + 1e-5) # batch_size * feature_size^2 * feature_size^2
# pairwise affinity
w = torch.exp((1 - d_norm) / h)
A_ij = w / torch.sum(w, dim=-1, keepdim=True)
# contextual loss per sample
CX = torch.mean(torch.max(A_ij, dim=1)[0], dim=-1)
loss = -torch.log(CX)
# contextual loss per batch
# loss = torch.mean(loss)
return loss
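As a sanity check, the forward pass above can be transcribed into a few lines of NumPy for a single sample (an illustrative sketch, not part of this repository): cosine distance, the row-wise normalized distance dij_bar, the exponential affinity A_ij, then the negative log of the mean best match CX.

```python
import numpy as np

def contextual_loss_np(X, Y, h=0.1, eps=1e-5):
    """Contextual loss for one sample; X and Y are (depth, n) centered features."""
    Xn = X / (np.linalg.norm(X, axis=0, keepdims=True) + eps)
    Yn = Y / (np.linalg.norm(Y, axis=0, keepdims=True) + eps)
    d = 1.0 - Xn.T @ Yn                                # cosine distance, (n, n)
    d_norm = d / (d.min(axis=1, keepdims=True) + eps)  # normalized distance dij_bar
    w = np.exp((1.0 - d_norm) / h)                     # pairwise affinity
    A = w / w.sum(axis=1, keepdims=True)
    return -np.log(A.max(axis=0).mean())               # -log(CX)

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 16))
# Matching feature sets score near zero; unrelated random features score higher.
print(contextual_loss_np(X, X), contextual_loss_np(X, rng.standard_normal((64, 16))))
```

This mirrors the first ContextualLoss class above (max over the X axis, mean over the Y axis); ContextualLoss_forward swaps the two reduction axes.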
class ContextualLoss_forward(nn.Module):
'''
input is Al, Bl, channel = 1, range ~ [0, 255]
'''
def __init__(self, opt):
super(ContextualLoss_forward, self).__init__()
self.opt = opt
return None
def forward(self, X_features, Y_features, h=0.1, feature_centering=True):
'''
        X_features & Y_features are feature vectors or 2d feature arrays
h: bandwidth
return the per-sample loss
'''
batch_size = X_features.shape[0]
feature_depth = X_features.shape[1]
feature_size = X_features.shape[2]
# to normalized feature vectors
if feature_centering:
if self.opt.PONO:
X_features = X_features - Y_features.mean(dim=1).unsqueeze(dim=1)
Y_features = Y_features - Y_features.mean(dim=1).unsqueeze(dim=1)
else:
X_features = X_features - Y_features.view(batch_size, feature_depth, -1).mean(dim=-1).unsqueeze(dim=-1).unsqueeze(dim=-1)
Y_features = Y_features - Y_features.view(batch_size, feature_depth, -1).mean(dim=-1).unsqueeze(dim=-1).unsqueeze(dim=-1)
X_features = feature_normalize(X_features).view(batch_size, feature_depth, -1) # batch_size * feature_depth * feature_size * feature_size
Y_features = feature_normalize(Y_features).view(batch_size, feature_depth, -1) # batch_size * feature_depth * feature_size * feature_size
# X_features = F.unfold(
# X_features, kernel_size=self.opt.match_kernel, stride=1, padding=int(self.opt.match_kernel // 2)) # batch_size * feature_depth_new * feature_size^2
# Y_features = F.unfold(
# Y_features, kernel_size=self.opt.match_kernel, stride=1, padding=int(self.opt.match_kernel // 2)) # batch_size * feature_depth_new * feature_size^2
        # cosine distance = 1 - similarity
X_features_permute = X_features.permute(0, 2, 1) # batch_size * feature_size^2 * feature_depth
d = 1 - torch.matmul(X_features_permute, Y_features) # batch_size * feature_size^2 * feature_size^2
# normalized distance: dij_bar
# d_norm = d
d_norm = d / (torch.min(d, dim=-1, keepdim=True)[0] + 1e-3) # batch_size * feature_size^2 * feature_size^2
# pairwise affinity
w = torch.exp((1 - d_norm) / h)
A_ij = w / torch.sum(w, dim=-1, keepdim=True)
# contextual loss per sample
CX = torch.mean(torch.max(A_ij, dim=-1)[0], dim=1)
loss = -torch.log(CX)
# contextual loss per batch
# loss = torch.mean(loss)
return loss


class ContextualLoss_complex(nn.Module):
    '''
    input is Al, Bl, channel = 1, range ~ [0, 255]
    '''

    def __init__(self):
        super(ContextualLoss_complex, self).__init__()

    def forward(self, X_features, Y_features, h=0.1, patch_size=1, direction='forward'):
        '''
        X_features & Y_features are feature vectors or 2-d feature maps
        h: bandwidth
        return the per-sample loss
        '''
        batch_size = X_features.shape[0]
        feature_depth = X_features.shape[1]
        feature_size = X_features.shape[2]

        # center by the mean of Y_features, then L2-normalize
        X_features = X_features - Y_features.view(batch_size, feature_depth, -1).mean(dim=-1).unsqueeze(dim=-1).unsqueeze(dim=-1)
        Y_features = Y_features - Y_features.view(batch_size, feature_depth, -1).mean(dim=-1).unsqueeze(dim=-1).unsqueeze(dim=-1)
        X_features = feature_normalize(X_features)  # batch_size * feature_depth * feature_size * feature_size
        Y_features = feature_normalize(Y_features)  # batch_size * feature_depth * feature_size * feature_size

        # extract a patch around every spatial location
        X_features = F.unfold(
            X_features, kernel_size=(patch_size, patch_size), stride=(1, 1),
            padding=(patch_size // 2, patch_size // 2))  # batch_size * feature_depth_new * feature_size^2
        Y_features = F.unfold(
            Y_features, kernel_size=(patch_size, patch_size), stride=(1, 1),
            padding=(patch_size // 2, patch_size // 2))  # batch_size * feature_depth_new * feature_size^2

        # cosine distance = 1 - similarity
        X_features_permute = X_features.permute(0, 2, 1)  # batch_size * feature_size^2 * feature_depth
        d = 1 - torch.matmul(X_features_permute, Y_features)  # batch_size * feature_size^2 * feature_size^2
        # normalized distance: dij_bar
        d_norm = d / (torch.min(d, dim=-1, keepdim=True)[0] + 1e-5)  # batch_size * feature_size^2 * feature_size^2
        # pairwise affinity
        w = torch.exp((1 - d_norm) / h)
        A_ij = w / torch.sum(w, dim=-1, keepdim=True)
        # contextual loss per sample
        if direction == 'forward':
            CX = torch.mean(torch.max(A_ij, dim=-1)[0], dim=1)
        else:
            CX = torch.mean(torch.max(A_ij, dim=1)[0], dim=-1)
        loss = -torch.log(CX)
        return loss


class ChamferDistance_patch_loss(nn.Module):
    '''
    input is Al, Bl, channel = 1, range ~ [0, 255]
    '''

    def __init__(self):
        super(ChamferDistance_patch_loss, self).__init__()

    def forward(self, X_features, Y_features, patch_size=3, image_x=None, image_y=None, h=0.1, Y_features_in=None):
        '''
        X_features & Y_features are feature vectors or 2-d feature maps
        h: bandwidth
        return the per-sample loss
        '''
        batch_size = X_features.shape[0]
        feature_depth = X_features.shape[1]
        feature_size = X_features.shape[2]

        # extract a patch around every spatial location
        X_features = F.unfold(
            X_features, kernel_size=(patch_size, patch_size), stride=(1, 1),
            padding=(patch_size // 2, patch_size // 2))  # batch_size * feature_depth_new * feature_size^2
        Y_features = F.unfold(
            Y_features, kernel_size=(patch_size, patch_size), stride=(1, 1),
            padding=(patch_size // 2, patch_size // 2))  # batch_size * feature_depth_new * feature_size^2

        if image_x is not None and image_y is not None:
            image_x = torch.nn.functional.interpolate(image_x, size=(feature_size, feature_size), mode='bilinear').view(batch_size, 3, -1)
            image_y = torch.nn.functional.interpolate(image_y, size=(feature_size, feature_size), mode='bilinear').view(batch_size, 3, -1)

        # nearest neighbor in Y for every patch of X, by feature similarity
        X_features_permute = X_features.permute(0, 2, 1)  # batch_size * feature_size^2 * feature_depth
        similarity_matrix = torch.matmul(X_features_permute, Y_features)  # batch_size * feature_size^2 * feature_size^2
        NN_index = similarity_matrix.max(dim=-1, keepdim=True)[1].squeeze()

        if Y_features_in is not None:
            loss = torch.mean((X_features - Y_features_in.detach()) ** 2)
            Y_features_in = Y_features_in.detach()
        else:
            loss = torch.mean((X_features - Y_features[:, :, NN_index].detach()) ** 2)
            Y_features_in = Y_features[:, :, NN_index].detach()

        # re-arrange image
        if image_x is not None and image_y is not None:
            image_y_rearrange = image_y[:, :, NN_index]
            image_y_rearrange = image_y_rearrange.view(batch_size, 3, feature_size, feature_size)
            image_x = image_x.view(batch_size, 3, feature_size, feature_size)
            image_y = image_y.view(batch_size, 3, feature_size, feature_size)
            # plt.figure()
            # plt.imshow(post_processing(image_x[0].detach().cpu()))
            # plt.title('image x')
            # plt.figure()
            # plt.imshow(image_y[0].permute(1, 2, 0).cpu().numpy())
            # plt.title('image y')
            # plt.figure()
            # plt.imshow(image_y_rearrange[0].permute(1, 2, 0).cpu().numpy())
            # plt.title('corresponded image y')
            # plt.show()
        return loss


class ChamferDistance_loss(nn.Module):
    '''
    input is Al, Bl, channel = 1, range ~ [0, 255]
    '''

    def __init__(self):
        super(ChamferDistance_loss, self).__init__()

    def forward(self, X_features, Y_features, image_x, image_y, h=0.1, Y_features_in=None):
        '''
        X_features & Y_features are feature vectors or 2-d feature maps
        h: bandwidth
        return the per-sample loss
        '''
        batch_size = X_features.shape[0]
        feature_depth = X_features.shape[1]
        feature_size = X_features.shape[2]

        # to normalized feature vectors
        X_features = feature_normalize(X_features).view(batch_size, feature_depth, -1)  # batch_size * feature_depth * feature_size^2
        Y_features = feature_normalize(Y_features).view(batch_size, feature_depth, -1)  # batch_size * feature_depth * feature_size^2
        image_x = torch.nn.functional.interpolate(image_x, size=(feature_size, feature_size), mode='bilinear').view(batch_size, 3, -1)
        image_y = torch.nn.functional.interpolate(image_y, size=(feature_size, feature_size), mode='bilinear').view(batch_size, 3, -1)

        # nearest neighbor in Y for every feature of X, by cosine similarity
        X_features_permute = X_features.permute(0, 2, 1)  # batch_size * feature_size^2 * feature_depth
        similarity_matrix = torch.matmul(X_features_permute, Y_features)  # batch_size * feature_size^2 * feature_size^2
        NN_index = similarity_matrix.max(dim=-1, keepdim=True)[1].squeeze()

        if Y_features_in is not None:
            loss = torch.mean((X_features - Y_features_in.detach()) ** 2)
            Y_features_in = Y_features_in.detach()
        else:
            loss = torch.mean((X_features - Y_features[:, :, NN_index].detach()) ** 2)
            Y_features_in = Y_features[:, :, NN_index].detach()

        # re-arrange image
        image_y_rearrange = image_y[:, :, NN_index]
        image_y_rearrange = image_y_rearrange.view(batch_size, 3, feature_size, feature_size)
        image_x = image_x.view(batch_size, 3, feature_size, feature_size)
        image_y = image_y.view(batch_size, 3, feature_size, feature_size)
        # plt.figure()
        # plt.imshow(post_processing(image_x[0].detach().cpu()))
        # plt.title('image x')
        # plt.figure()
        # plt.imshow(image_y[0].permute(1, 2, 0).cpu().numpy())
        # plt.title('image y')
        # plt.figure()
        # plt.imshow(image_y_rearrange[0].permute(1, 2, 0).cpu().numpy())
        # plt.title('corresponded image y')
        # plt.show()
        return loss, Y_features_in, X_features

if __name__ == "__main__":
    contextual_loss = ContextualLoss()
    batch_size = 32
    feature_depth = 8
    feature_size = 16
    X_features = torch.zeros(batch_size, feature_depth, feature_size, feature_size)
    Y_features = torch.zeros(batch_size, feature_depth, feature_size, feature_size)
    cx_loss = contextual_loss(X_features, Y_features, 1)
    print(cx_loss)
 | 44.432432 | 162 | 0.630109 | 2,245 | 16,440 | 4.346548 | 0.077506 | 0.093564 | 0.077065 | 0.068867 | 0.898852 | 0.890961 | 0.882763 | 0.876409 | 0.871388 | 0.867083 | 0 | 0.025949 | 0.249878 | 16,440 | 370 | 163 | 44.432432 | 0.765326 | 0.364903 | 0 | 0.660714 | 0 | 0 | 0.005394 | 0 | 0 | 0 | 0 | 0.002703 | 0 | 1 | 0.065476 | false | 0 | 0.059524 | 0 | 0.220238 | 0.005952 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
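To make the affinity pipeline in the file above easy to check without torch, here is a pure-Python restatement of the same steps on tiny feature lists (cosine distance, relative-distance normalization, row-wise softmax affinity, CX, then negative log). The helper names `l2_normalize` and `contextual_loss` are illustrative, not part of the source file.

```python
import math

def l2_normalize(v):
    # unit-length vector, with a small epsilon to avoid division by zero
    n = math.sqrt(sum(x * x for x in v)) + 1e-8
    return [x / n for x in v]

def contextual_loss(X, Y, h=0.1, eps=1e-5):
    X = [l2_normalize(x) for x in X]
    Y = [l2_normalize(y) for y in Y]
    # cosine distance = 1 - similarity
    d = [[1 - sum(a * b for a, b in zip(x, y)) for y in Y] for x in X]
    # relative distance: d_ij / (min_j d_ij + eps)
    d_norm = [[dij / (min(row) + eps) for dij in row] for row in d]
    # pairwise affinity: row-wise softmax of (1 - d_norm) / h
    w = [[math.exp((1 - dn) / h) for dn in row] for row in d_norm]
    A = [[wij / sum(row) for wij in row] for row in w]
    # contextual similarity: average best match per row, then -log
    CX = sum(max(row) for row in A) / len(A)
    return -math.log(CX)
```

Identical feature sets give a loss near zero; dissimilar sets give a larger value, mirroring the behaviour of the torch version.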
dc83e091bc3a16b03275ab2034e202717b2abefd | 157 | py | Python | katas/kyu_6/balance_the_arrays.py | the-zebulan/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | 40 | 2016-03-09T12:26:20.000Z | 2022-03-23T08:44:51.000Z | katas/kyu_6/balance_the_arrays.py | akalynych/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | null | null | null | katas/kyu_6/balance_the_arrays.py | akalynych/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | 36 | 2016-11-07T19:59:58.000Z | 2022-03-31T11:18:27.000Z | from collections import Counter
def balance(arr1, arr2):
    return (sorted(Counter(arr1).values()) ==
            sorted(Counter(arr2).values()))
| 22.428571 | 49 | 0.675159 | 17 | 157 | 6.235294 | 0.647059 | 0.245283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.184713 | 157 | 6 | 50 | 26.166667 | 0.796875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
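The kata above compares frequency multisets: two arrays "balance" when the sorted lists of element counts match, regardless of which elements carry those counts. A self-contained Python 3 restatement (using `Counter.values()` in place of the Python 2 `itervalues()`) with a quick demonstration:

```python
from collections import Counter

def balance(arr1, arr2):
    # compare the sorted multiset of element frequencies
    return sorted(Counter(arr1).values()) == sorted(Counter(arr2).values())
```

For example, `"aabc"` and `"xxyz"` both have frequency profile `[1, 1, 2]`, so they balance even though they share no elements.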
f4db98a5e0e2f1b6e8723aa0ce3df1811e9aa836 | 32 | py | Python | pysecm/ric/index/__init__.py | bostonrwalker/pysecm | 76fa1d537c6f222214d7582d723ea9b9b67c87b9 | [
"MIT"
] | null | null | null | pysecm/ric/index/__init__.py | bostonrwalker/pysecm | 76fa1d537c6f222214d7582d723ea9b9b67c87b9 | [
"MIT"
] | null | null | null | pysecm/ric/index/__init__.py | bostonrwalker/pysecm | 76fa1d537c6f222214d7582d723ea9b9b67c87b9 | [
"MIT"
] | null | null | null | from .index_ric import IndexRIC
| 16 | 31 | 0.84375 | 5 | 32 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f4f645fee41402ff948ef3b6ef8d136057e8a7f8 | 46 | py | Python | wfm/__init__.py | dsluo-archive/wfm.py | fa2c2721fdae4ffd829411653201bb7a455da5b5 | [
"MIT"
] | null | null | null | wfm/__init__.py | dsluo-archive/wfm.py | fa2c2721fdae4ffd829411653201bb7a455da5b5 | [
"MIT"
] | null | null | null | wfm/__init__.py | dsluo-archive/wfm.py | fa2c2721fdae4ffd829411653201bb7a455da5b5 | [
"MIT"
] | null | null | null | from .client import *
from .resources import * | 23 | 24 | 0.76087 | 6 | 46 | 5.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152174 | 46 | 2 | 24 | 23 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
760759e3d7e016164501f11d2404b390d5e8aeaa | 30 | py | Python | api/tests/routes/indicators/__init__.py | enermaps/Hotmaps-toolbox-service | a9a5616e3c6fad081134aadf5ce96b3dcc416bf9 | [
"Apache-2.0"
] | null | null | null | api/tests/routes/indicators/__init__.py | enermaps/Hotmaps-toolbox-service | a9a5616e3c6fad081134aadf5ce96b3dcc416bf9 | [
"Apache-2.0"
] | 1 | 2020-10-09T14:09:57.000Z | 2020-10-27T09:27:53.000Z | api/tests/routes/indicators/__init__.py | enermaps/Hotmaps-toolbox-service | a9a5616e3c6fad081134aadf5ce96b3dcc416bf9 | [
"Apache-2.0"
] | null | null | null | from . import test_indicators
| 15 | 29 | 0.833333 | 4 | 30 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
76107cbbe3567b1cb775144e2cc3f77d92c36b1e | 125 | py | Python | news_app/core/admin.py | nijatrajab/NewsApi | a359a3c62dc8abd84c22a995981f085f0fae6670 | [
"MIT"
] | null | null | null | news_app/core/admin.py | nijatrajab/NewsApi | a359a3c62dc8abd84c22a995981f085f0fae6670 | [
"MIT"
] | null | null | null | news_app/core/admin.py | nijatrajab/NewsApi | a359a3c62dc8abd84c22a995981f085f0fae6670 | [
"MIT"
] | null | null | null | from django.contrib import admin
from . import models
admin.site.register(models.News)
admin.site.register(models.Comment)
| 17.857143 | 35 | 0.808 | 18 | 125 | 5.611111 | 0.555556 | 0.178218 | 0.336634 | 0.455446 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096 | 125 | 6 | 36 | 20.833333 | 0.893805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5216c19bed8b64f5a76cae3362cac74ff240827e | 26 | py | Python | tests/test_viz.py | gchhablani/vformer | c7dc7d14e33aa5b2974667d281e7910e17538b34 | [
"MIT"
] | null | null | null | tests/test_viz.py | gchhablani/vformer | c7dc7d14e33aa5b2974667d281e7910e17538b34 | [
"MIT"
] | null | null | null | tests/test_viz.py | gchhablani/vformer | c7dc7d14e33aa5b2974667d281e7910e17538b34 | [
"MIT"
] | null | null | null | import vformer.viz as viz
| 13 | 25 | 0.807692 | 5 | 26 | 4.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5221e0ec65f446962850aa27100b7994c1853444 | 20 | py | Python | exercises/bob/bob.py | RJTK/python | f9678d629735f75354bbd543eb7f10220a498dae | [
"MIT"
] | 1 | 2021-05-15T19:59:04.000Z | 2021-05-15T19:59:04.000Z | exercises/bob/bob.py | RJTK/python | f9678d629735f75354bbd543eb7f10220a498dae | [
"MIT"
] | null | null | null | exercises/bob/bob.py | RJTK/python | f9678d629735f75354bbd543eb7f10220a498dae | [
"MIT"
] | 2 | 2018-03-03T08:32:12.000Z | 2019-08-22T11:55:53.000Z | def hey():
    pass
| 6.666667 | 10 | 0.5 | 3 | 20 | 3.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.35 | 20 | 2 | 11 | 10 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
8755cca913ada38e5a9e340cdc4782432cc61107 | 15,970 | py | Python | tests/rewrite/test_sql_rewriter_engine.py | hongfuli/sharding-py | a26a64aa9d9196c830e7e2fa4095a58bef608a40 | [
"Apache-2.0"
] | 1 | 2021-01-29T13:29:29.000Z | 2021-01-29T13:29:29.000Z | tests/rewrite/test_sql_rewriter_engine.py | hongfuli/sharding-py | a26a64aa9d9196c830e7e2fa4095a58bef608a40 | [
"Apache-2.0"
] | null | null | null | tests/rewrite/test_sql_rewriter_engine.py | hongfuli/sharding-py | a26a64aa9d9196c830e7e2fa4095a58bef608a40 | [
"Apache-2.0"
] | null | null | null | import unittest
from shardingpy.api.config.base import load_sharding_rule_config_from_dict
from shardingpy.constant import DatabaseType, OrderDirection
from shardingpy.optimizer.condition import ShardingConditions
from shardingpy.optimizer.insert_optimizer import InsertShardingCondition
from shardingpy.parsing.parser.context.limit import Limit, LimitValue
from shardingpy.parsing.parser.context.others import OrderItem
from shardingpy.parsing.parser.context.table import Table
from shardingpy.parsing.parser.sql.dml.insert import InsertStatement
from shardingpy.parsing.parser.sql.dql.select import SelectStatement
from shardingpy.parsing.parser.token import TableToken, ItemsToken, InsertValuesToken, InsertColumnToken, OffsetToken, \
    RowCountToken, OrderByToken
from shardingpy.rewrite.rewrite_engine import SQLRewriteEngine
from shardingpy.routing.types.base import TableUnit, RoutingTable
from shardingpy.rule.base import ShardingRule, DataNode
from . import rewrite_rule

class SQLRewriteEngineTest(unittest.TestCase):
    def setUp(self):
        sharding_rule_config = load_sharding_rule_config_from_dict(rewrite_rule.sharding_rule_config['sharding_rule'])
        self.sharding_rule = ShardingRule(sharding_rule_config,
                                          rewrite_rule.sharding_rule_config['data_sources'].keys())
        self.select_statement = SelectStatement()
        self.insert_statement = InsertStatement()
        self.table_tokens = {'table_x': 'table_1'}

    def test_rewrite_without_change(self):
        rewrite_engine = SQLRewriteEngine(self.sharding_rule, 'SELECT table_y.id FROM table_y WHERE table_y.id=?',
                                          DatabaseType.MySQL, self.select_statement, None, [1])
        self.assertEqual(rewrite_engine.rewrite(True).to_sql(None, self.table_tokens, None).sql,
                         'SELECT table_y.id FROM table_y WHERE table_y.id=?')

    def test_rewrite_for_table_name(self):
        self.select_statement.sql_tokens.append(TableToken(7, 0, 'table_x'))
        self.select_statement.sql_tokens.append(TableToken(31, 0, 'table_x'))
        self.select_statement.sql_tokens.append(TableToken(47, 0, 'table_x'))
        sql = 'SELECT table_x.id, x.name FROM table_x x WHERE table_x.id=? AND x.name=?'
        rewrite_engine = SQLRewriteEngine(self.sharding_rule, sql, DatabaseType.MySQL, self.select_statement, None,
                                          [1, 'x'])
        rewrite_sql = 'SELECT table_1.id, x.name FROM table_1 x WHERE table_1.id=? AND x.name=?'
        self.assertEqual(rewrite_engine.rewrite(True).to_sql(None, self.table_tokens, None).sql, rewrite_sql)

    def test_rewrite_for_order_by_and_group_by_by_derived_columns(self):
        self.select_statement.sql_tokens.append(TableToken(18, 0, 'table_x'))
        items_token = ItemsToken(12)
        items_token.items.extend(['x.id as GROUP_BY_DERIVED_0', 'x.name as ORDER_BY_DERIVED_0'])
        self.select_statement.sql_tokens.append(items_token)
        sql = 'SELECT x.age FROM table_x x GROUP BY x.id ORDER BY x.name'
        rewrite_engine = SQLRewriteEngine(self.sharding_rule, sql, DatabaseType.MySQL, self.select_statement, None,
                                          [])
        rewrite_sql = 'SELECT x.age, x.id as GROUP_BY_DERIVED_0, x.name as ORDER_BY_DERIVED_0 FROM table_1 x GROUP BY x.id ORDER BY x.name'
        self.assertEqual(rewrite_engine.rewrite(True).to_sql(None, self.table_tokens, None).sql, rewrite_sql)

    def test_rewrite_for_aggregation_derived_columns(self):
        self.select_statement.sql_tokens.append(TableToken(23, 0, 'table_x'))
        items_token = ItemsToken(17)
        items_token.items.extend(['COUNT(x.age) as AVG_DERIVED_COUNT_0', 'SUM(x.age) as AVG_DERIVED_SUM_0'])
        self.select_statement.sql_tokens.append(items_token)
        sql = 'SELECT AVG(x.age) FROM table_x x'
        rewrite_engine = SQLRewriteEngine(self.sharding_rule, sql, DatabaseType.MySQL, self.select_statement, None,
                                          [])
        rewrite_sql = 'SELECT AVG(x.age), COUNT(x.age) as AVG_DERIVED_COUNT_0, SUM(x.age) as AVG_DERIVED_SUM_0 FROM table_1 x'
        self.assertEqual(rewrite_engine.rewrite(True).to_sql(None, self.table_tokens, None).sql, rewrite_sql)

    def test_rewrite_auto_generated_key_column(self):
        parameters = ['x', 1]
        self.insert_statement.parameters_index = 2
        self.insert_statement.insert_values_list_last_position = 45
        self.insert_statement.sql_tokens.append(TableToken(12, 0, 'table_x'))
        items_token = ItemsToken(30)
        items_token.items.append('id')
        self.insert_statement.sql_tokens.append(items_token)
        self.insert_statement.sql_tokens.append(InsertValuesToken(39, 'table_x'))
        sharding_condition = InsertShardingCondition('(?, ?, ?)', parameters)
        sharding_condition.data_nodes.append(DataNode('db0.table_1'))
        table_unit = TableUnit('db0')
        table_unit.routing_tables.append(RoutingTable('table_x', 'table_1'))
        sql = 'INSERT INTO table_x (name, age) VALUES (?, ?)'
        rewrite_engine = SQLRewriteEngine(self.sharding_rule, sql, DatabaseType.MySQL, self.insert_statement,
                                          ShardingConditions([sharding_condition]), parameters)
        rewrite_sql = 'INSERT INTO table_1 (name, age, id) VALUES (?, ?, ?)'
        self.assertEqual(rewrite_engine.rewrite(True).to_sql(table_unit, self.table_tokens, None).sql, rewrite_sql)

    def test_rewrite_for_auto_generated_key_column_without_columns_with_parameter(self):
        parameters = ['Bill']
        self.insert_statement.parameters_index = 1
        self.insert_statement.insert_values_list_last_position = 32
        self.insert_statement.sql_tokens.append(TableToken(12, 0, '`table_x`'))
        self.insert_statement.generate_key_column_index = 0
        self.insert_statement.sql_tokens.append(InsertColumnToken(21, '('))
        items_token = ItemsToken(21)
        items_token.is_first_of_items_special = True
        items_token.items.append('name')
        items_token.items.append('id')
        self.insert_statement.sql_tokens.append(items_token)
        self.insert_statement.sql_tokens.append(InsertColumnToken(21, ')'))
        self.insert_statement.sql_tokens.append(InsertValuesToken(29, 'table_x'))
        sharding_condition = InsertShardingCondition('(?, ?)', parameters)
        sharding_condition.data_nodes.append(DataNode('db0.table_1'))
        table_unit = TableUnit('db0')
        table_unit.routing_tables.append(RoutingTable('table_x', 'table_1'))
        sql = 'INSERT INTO `table_x` VALUES (?)'
        rewrite_engine = SQLRewriteEngine(self.sharding_rule, sql, DatabaseType.MySQL, self.insert_statement,
                                          ShardingConditions([sharding_condition]), parameters)
        rewrite_sql = 'INSERT INTO table_1(name, id) VALUES (?, ?)'
        self.assertEqual(rewrite_engine.rewrite(True).to_sql(table_unit, self.table_tokens, None).sql, rewrite_sql)

    def test_rewrite_for_auto_generated_key_column_without_columns_without_parameter(self):
        self.insert_statement.insert_values_list_last_position = 33
        self.insert_statement.sql_tokens.append(TableToken(12, 0, '`table_x`'))
        self.insert_statement.generate_key_column_index = 0
        self.insert_statement.sql_tokens.append(InsertColumnToken(21, '('))
        items_token = ItemsToken(21)
        items_token.is_first_of_items_special = True
        items_token.items.append('name')
        items_token.items.append('id')
        self.insert_statement.sql_tokens.append(items_token)
        self.insert_statement.sql_tokens.append(InsertColumnToken(21, ')'))
        self.insert_statement.sql_tokens.append(InsertValuesToken(29, 'table_x'))
        sharding_condition = InsertShardingCondition('(10, 1)', [])
        sharding_condition.data_nodes.append(DataNode('db0.table_1'))
        table_unit = TableUnit('db0')
        table_unit.routing_tables.append(RoutingTable('table_x', 'table_1'))
        sql = 'INSERT INTO `table_x` VALUES (10)'
        rewrite_engine = SQLRewriteEngine(self.sharding_rule, sql, DatabaseType.MySQL, self.insert_statement,
                                          ShardingConditions([sharding_condition]), [])
        rewrite_sql = 'INSERT INTO table_1(name, id) VALUES (10, 1)'
        self.assertEqual(rewrite_engine.rewrite(True).to_sql(table_unit, self.table_tokens, None).sql, rewrite_sql)

    def test_rewrite_column_without_columns_without_parameters(self):
        self.insert_statement.insert_values_list_last_position = 36
        self.insert_statement.sql_tokens.append(TableToken(12, 0, '`table_x`'))
        self.insert_statement.generate_key_column_index = 0
        self.insert_statement.sql_tokens.append(InsertColumnToken(21, '('))
        items_token = ItemsToken(21)
        items_token.is_first_of_items_special = True
        items_token.items.append('name')
        items_token.items.append('id')
        self.insert_statement.sql_tokens.append(items_token)
        self.insert_statement.sql_tokens.append(InsertColumnToken(21, ')'))
        self.insert_statement.sql_tokens.append(InsertValuesToken(29, 'table_x'))
        sharding_condition = InsertShardingCondition('(10, 1)', [])
        sharding_condition.data_nodes.append(DataNode('db0.table_1'))
        table_unit = TableUnit('db0')
        table_unit.routing_tables.append(RoutingTable('table_x', 'table_1'))
        sql = 'INSERT INTO `table_x` VALUES (10, 1)'
        rewrite_engine = SQLRewriteEngine(self.sharding_rule, sql, DatabaseType.MySQL, self.insert_statement,
                                          ShardingConditions([sharding_condition]), [])
        rewrite_sql = 'INSERT INTO table_1(name, id) VALUES (10, 1)'
        self.assertEqual(rewrite_engine.rewrite(True).to_sql(table_unit, self.table_tokens, None).sql, rewrite_sql)

    def test_rewrite_column_without_columns_with_parameters(self):
        parameters = ['x', 1]
        self.insert_statement.insert_values_list_last_position = 35
        self.insert_statement.sql_tokens.append(TableToken(12, 0, '`table_x`'))
        self.insert_statement.generate_key_column_index = 0
        self.insert_statement.sql_tokens.append(InsertColumnToken(21, '('))
        items_token = ItemsToken(21)
        items_token.is_first_of_items_special = True
        items_token.items.append('name')
        items_token.items.append('id')
        self.insert_statement.sql_tokens.append(items_token)
        self.insert_statement.sql_tokens.append(InsertColumnToken(21, ')'))
        self.insert_statement.sql_tokens.append(InsertValuesToken(29, 'table_x'))
        sharding_condition = InsertShardingCondition('(?, ?)', parameters)
        sharding_condition.data_nodes.append(DataNode('db0.table_1'))
        table_unit = TableUnit('db0')
        table_unit.routing_tables.append(RoutingTable('table_x', 'table_1'))
        sql = 'INSERT INTO `table_x` VALUES (?, ?)'
        rewrite_engine = SQLRewriteEngine(self.sharding_rule, sql, DatabaseType.MySQL, self.insert_statement,
                                          ShardingConditions([sharding_condition]), [])
        rewrite_sql = 'INSERT INTO table_1(name, id) VALUES (?, ?)'
        self.assertEqual(rewrite_engine.rewrite(True).to_sql(table_unit, self.table_tokens, None).sql, rewrite_sql)

    def test_rewrite_for_limit(self):
        self.select_statement.limit = Limit(DatabaseType.MySQL, LimitValue(2, -1, True), LimitValue(2, -1, True))
        self.select_statement.sql_tokens.append(TableToken(17, 0, 'table_x'))
        self.select_statement.sql_tokens.append(OffsetToken(33, 2))
        self.select_statement.sql_tokens.append(RowCountToken(36, 2))
        sql = 'SELECT x.id FROM table_x x LIMIT 2, 2'
        rewrite_engine = SQLRewriteEngine(self.sharding_rule, sql, DatabaseType.MySQL, self.select_statement,
                                          None, [])
        rewrite_sql = 'SELECT x.id FROM table_1 x LIMIT 0, 4'
        self.assertEqual(rewrite_engine.rewrite(True).to_sql(None, self.table_tokens, None).sql, rewrite_sql)

    def test_rewrite_for_limit_for_memory_group_by(self):
        self.select_statement.limit = Limit(DatabaseType.MySQL, LimitValue(2, -1, True), LimitValue(2, -1, True))
        self.select_statement.order_by_items.append(OrderItem('x', 'id', OrderDirection.ASC, OrderDirection.ASC, None))
        self.select_statement.group_by_items.append(OrderItem('x', 'id', OrderDirection.DESC, OrderDirection.ASC, None))
        self.select_statement.sql_tokens.append(TableToken(17, 0, 'table_x'))
        self.select_statement.sql_tokens.append(OffsetToken(33, 2))
        self.select_statement.sql_tokens.append(RowCountToken(36, 2))
        sql = 'SELECT x.id FROM table_x x LIMIT 2, 2'
        rewrite_engine = SQLRewriteEngine(self.sharding_rule, sql, DatabaseType.MySQL, self.select_statement,
                                          None, [])
        rewrite_sql = 'SELECT x.id FROM table_1 x LIMIT 0, 2147483647'
        self.assertEqual(rewrite_engine.rewrite(True).to_sql(None, self.table_tokens, None).sql, rewrite_sql)

    def test_rewrite_for_limit_for_no_rewrite_limit(self):
        self.select_statement.limit = Limit(DatabaseType.MySQL, LimitValue(2, -1, True), LimitValue(2, -1, True))
        self.select_statement.sql_tokens.append(TableToken(17, 0, 'table_x'))
        self.select_statement.sql_tokens.append(OffsetToken(33, 2))
        self.select_statement.sql_tokens.append(RowCountToken(36, 2))
        sql = 'SELECT x.id FROM table_x x LIMIT 2, 2'
        rewrite_engine = SQLRewriteEngine(self.sharding_rule, sql, DatabaseType.MySQL, self.select_statement,
                                          None, [])
        rewrite_sql = 'SELECT x.id FROM table_1 x LIMIT 2, 2'
        self.assertEqual(rewrite_engine.rewrite(False).to_sql(None, self.table_tokens, None).sql, rewrite_sql)

    def test_rewrite_for_derived_order_by(self):
        self.select_statement.group_by_last_position = 61
        self.select_statement.order_by_items.append(OrderItem('x', 'id', OrderDirection.ASC, OrderDirection.ASC, None))
        self.select_statement.order_by_items.append(
            OrderItem('x', 'name', OrderDirection.DESC, OrderDirection.ASC, None))
        self.select_statement.sql_tokens.append(TableToken(25, 0, 'table_x'))
        self.select_statement.sql_tokens.append(OrderByToken(61))
        sql = 'SELECT x.id, x.name FROM table_x x GROUP BY x.id, x.name DESC'
        rewrite_engine = SQLRewriteEngine(self.sharding_rule, sql, DatabaseType.MySQL, self.select_statement,
                                          None, [])
        rewrite_sql = 'SELECT x.id, x.name FROM table_1 x GROUP BY x.id, x.name DESC ORDER BY id ASC,name DESC '
        self.assertEqual(rewrite_engine.rewrite(False).to_sql(None, self.table_tokens, None).sql, rewrite_sql)

    def test_generate_sql(self):
        parameters = [1, 'x']
        self.select_statement.sql_tokens.append(TableToken(7, 0, 'table_x'))
        self.select_statement.sql_tokens.append(TableToken(31, 0, 'table_x'))
        self.select_statement.sql_tokens.append(TableToken(58, 0, 'table_x'))
        self.select_statement.tables.add(Table('table_x', 'x'))
        self.select_statement.tables.add(Table('table_y', 'y'))
        sql = 'SELECT table_x.id, x.name FROM table_x x, table_y y WHERE table_x.id=? AND x.name=?'
        rewrite_engine = SQLRewriteEngine(self.sharding_rule, sql, DatabaseType.MySQL, self.select_statement,
                                          None, parameters)
        rewrite_sql = 'SELECT table_x.id, x.name FROM table_x x, table_y y WHERE table_x.id=? AND x.name=?'
        table_unit = TableUnit('db0')
        table_unit.routing_tables.append(RoutingTable('table_x', 'table_x'))
        self.assertEqual(rewrite_engine.generate_sql(table_unit, rewrite_engine.rewrite(True)).sql, rewrite_sql)
8757f03602ffdf39db5681f0792e50deb3411a1f | 16,004 | py | Python | ninjarmmpy/queries.py | StuffbyYuki/ninjarmmpy | b2d5205a1075024164e7007526605bca0f398a2c | [
"MIT"
] | 2 | 2021-06-10T02:34:39.000Z | 2021-07-13T12:19:24.000Z | ninjarmmpy/queries.py | StuffbyYuki/ninjarmmpy | b2d5205a1075024164e7007526605bca0f398a2c | [
"MIT"
] | 1 | 2021-03-28T20:21:09.000Z | 2021-03-28T20:21:09.000Z | ninjarmmpy/queries.py | StuffbyYuki/ninjarmmpy | b2d5205a1075024164e7007526605bca0f398a2c | [
"MIT"
] | 2 | 2021-01-28T22:23:01.000Z | 2021-01-30T21:22:37.000Z | from .utils import return_response, api_get_request # noqa, flake8 issue
class QueriesMixin():
# Queries
NINJA_API_QUERIES = '/v2/queries'
NINJA_API_QUERIES_ANTIVIRUS_THREATS = NINJA_API_QUERIES + '/antivirus-threats'
NINJA_API_QUERIES_OPERATING_SYSTEMS = NINJA_API_QUERIES + '/operating-systems'
NINJA_API_QUERIES_PROCESSORS = NINJA_API_QUERIES + '/processors'
NINJA_API_QUERIES_VOLUMES = NINJA_API_QUERIES + '/volumes'
NINJA_API_QUERIES_DISKS = NINJA_API_QUERIES + '/disks'
NINJA_API_QUERIES_COMPUTER_SYSTEMS = NINJA_API_QUERIES + '/computer-systems'
NINJA_API_QUERIES_DEVICE_HEALTH = NINJA_API_QUERIES + '/device-health'
NINJA_API_QUERIES_SOFTWARE = NINJA_API_QUERIES + '/software'
NINJA_API_QUERIES_OS_PATCHES = NINJA_API_QUERIES + '/os-patches'
NINJA_API_QUERIES_OS_PATCH_INSTALLS = NINJA_API_QUERIES + '/os-patch-installs'
NINJA_API_QUERIES_SOFTWARE_PATCHES = NINJA_API_QUERIES + '/software-patches'
NINJA_API_QUERIES_SOFTWARE_PATCH_INSTALLS = NINJA_API_QUERIES + '/software-patch-installs'
NINJA_API_QUERIES_RAID_CONTROLLERS = NINJA_API_QUERIES + '/raid-controllers'
NINJA_API_QUERIES_RAID_DRIVES = NINJA_API_QUERIES + '/raid-drives'
NINJA_API_QUERIES_WINDOWS_SERVICES = NINJA_API_QUERIES + '/windows-services'
NINJA_API_QUERIES_LOGGED_ON_USERS = NINJA_API_QUERIES + '/logged-on-users'
NINJA_API_QUERIES_ANTIVIRUS_STATUS = NINJA_API_QUERIES + '/antivirus-status'
def __init__(self):
pass
@return_response
def getAntivirusThreats(self, df: str = None, ts: str = None, cursor: str = None, pageSize: int = None):
"""Returns list of antivirus threats
Keyword arguments:
df: str -- Device filter
ts: str -- Monitoring timestamp filter
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
"""
params = {
'df': df,
'ts': ts,
'cursor': cursor,
'pageSize': pageSize
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_ANTIVIRUS_THREATS}', params=params)
@return_response
def getOperatingSystems(self, df: str = None, ts: str = None, cursor: str = None, pageSize: int = None):
"""Returns operating systems for devices
Keyword arguments:
df: str -- Device filter
ts: str -- Monitoring timestamp filter
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
"""
params = {
'df': df,
'ts': ts,
'cursor': cursor,
'pageSize': pageSize
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_OPERATING_SYSTEMS}', params=params)
@return_response
def getProcessors(self, df: str = None, ts: str = None, cursor: str = None, pageSize: int = None):
"""Returns list of processors
Keyword arguments:
df: str -- Device filter
ts: str -- Monitoring timestamp filter
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
"""
params = {
'df': df,
'ts': ts,
'cursor': cursor,
'pageSize': pageSize
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_PROCESSORS}', params=params)
@return_response
def getVolumes(self, df: str = None, ts: str = None, cursor: str = None, pageSize: int = None):
"""Returns list of disk volumes
Keyword arguments:
df: str -- Device filter
ts: str -- Monitoring timestamp filter
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
"""
params = {
'df': df,
'ts': ts,
'cursor': cursor,
'pageSize': pageSize
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_VOLUMES}', params=params)
@return_response
def getDiskDrives(self, df: str = None, ts: str = None, cursor: str = None, pageSize: int = None):
"""Returns list of physical disks
Keyword arguments:
df: str -- Device filter
ts: str -- Monitoring timestamp filter
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
"""
params = {
'df': df,
'ts': ts,
'cursor': cursor,
'pageSize': pageSize
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_DISKS}', params=params)
@return_response
def getComputerSystems(self, df: str = None, ts: str = None, cursor: str = None, pageSize: int = None):
"""Returns computer systems information for devices
Keyword arguments:
df: str -- Device filter
ts: str -- Monitoring timestamp filter
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
"""
params = {
'df': df,
'ts': ts,
'cursor': cursor,
'pageSize': pageSize
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_COMPUTER_SYSTEMS}', params=params)
@return_response
def getDeviceHealthReport(self, df: str = None, ts: str = None, cursor: str = None, pageSize: int = None):
"""Returns list of device health summary records
Keyword arguments:
df: str -- Device filter
ts: str -- Monitoring timestamp filter
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
"""
params = {
'df': df,
'ts': ts,
'cursor': cursor,
'pageSize': pageSize
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_DEVICE_HEALTH}', params=params)
@return_response
def getSoftware(self, df: str = None, cursor: str = None, pageSize: int = None, installedBefore: str = None, installedAfter: str = None):
"""Returns list of software installed on devices
Keyword arguments:
df: str -- Device filter
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
installedBefore: str -- Include software installed before specified date
installedAfter: str -- Include software installed after specified date
"""
params = {
'df': df,
'cursor': cursor,
'pageSize': pageSize,
'installedBefore': installedBefore,
'installedAfter': installedAfter
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_SOFTWARE}', params=params)
@return_response
def getPendingFailedRejectedOSPatches(self, df: str = None, ts: str = None, status: str = None,
patch_type: str = None, severity: str = None, cursor: str = None,
pageSize: int = None):
"""Returns list of OS patches for which there were no installation attempts
Keyword arguments:
df: str -- Device filter
ts: str -- Monitoring timestamp filter
status: str -- Patch Status filter
        patch_type: str -- Patch Type filter
severity: str -- Patch Severity filter
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
"""
params = {
'df': df,
'ts': ts,
'status': status,
'type': patch_type,
'severity': severity,
'cursor': cursor,
'pageSize': pageSize
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_OS_PATCHES}', params=params)
@return_response
def getInstalledOSPatches(self, df: str = None, status: str = None, cursor: str = None, pageSize: int = None, installedBefore: str = None, installedAfter: str = None):
        """Returns patch installation history records, successful and failed
Keyword arguments:
df: str -- Device filter
status: str -- Patch Status filter (FAILED, INSTALLED)
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
        installedBefore: str -- Include patches installed before specified date
        installedAfter: str -- Include patches installed after specified date
"""
params = {
'df': df,
'status': status,
'cursor': cursor,
'pageSize': pageSize,
'installedBefore': installedBefore,
'installedAfter': installedAfter
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_OS_PATCH_INSTALLS}', params=params)
@return_response
def getPendingFailedRejectedSoftwarePatches(self, df: str = None, ts: str = None, status: str = None,
productIdentifier: str = None, patch_type: str = None, impact: str = None,
cursor: str = None, pageSize: int = None):
"""Returns list of 3rd party Software patches for which there were no installation attempts
Keyword arguments:
df: str -- Device filter
ts: str -- Monitoring timestamp filter
status: str -- Patch Status filter
productIdentifier: str -- Product identifier
patch_type: str -- Patch Type filter
impact: str -- Patch Impact filter
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
"""
params = {
'df': df,
'ts': ts,
'status': status,
'type': patch_type,
'productIdentifier': productIdentifier,
'impact': impact,
'cursor': cursor,
'pageSize': pageSize
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_SOFTWARE_PATCHES}', params=params)
@return_response
def getInstalledSoftwarePatches(self, df: str = None, ts: str = None, status: str = None,
productIdentifier: str = None, patch_type: str = None, impact: str = None,
cursor: str = None, pageSize: int = None,
installedBefore: str = None, installedAfter: str = None):
"""Returns 3rd party software patch installation history records (successful and failed)
Keyword arguments:
df: str -- Device filter
ts: str -- Monitoring timestamp filter
status: str -- Patch Status filter
productIdentifier: str -- Product identifier
patch_type: str -- Patch Type filter
impact: str -- Patch Impact filter
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
installedBefore: str -- Include patches installed before specified date
installedAfter: str -- Include patches installed after specified date
"""
params = {
'df': df,
'ts': ts,
'status': status,
'type': patch_type,
'productIdentifier': productIdentifier,
'impact': impact,
'cursor': cursor,
'pageSize': pageSize,
'installedBefore': installedBefore,
'installedAfter': installedAfter
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_SOFTWARE_PATCH_INSTALLS}', params=params)
@return_response
def getRAIDControllerReport(self, df: str = None, ts: str = None, cursor: str = None, pageSize: int = None):
"""Returns list of RAID controllers
Keyword arguments:
df: str -- Device filter
ts: str -- Monitoring timestamp filter
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
"""
params = {
'df': df,
'ts': ts,
'cursor': cursor,
'pageSize': pageSize
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_RAID_CONTROLLERS}', params=params)
@return_response
def getRAIDDriveReport(self, df: str = None, ts: str = None, cursor: str = None, pageSize: int = None):
"""Returns list of drives connected to RAID controllers
Keyword arguments:
df: str -- Device filter
ts: str -- Monitoring timestamp filter
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
"""
params = {
'df': df,
'ts': ts,
'cursor': cursor,
'pageSize': pageSize
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_RAID_DRIVES}', params=params)
@return_response
def getWindowsServicesReport(self, df: str = None, name: str = None, state: str = None, cursor: str = None, pageSize: int = None):
"""Returns list of Windows Services and their statuses
Keyword arguments:
df: str -- Device filter
name: str -- Service name
state: str -- Service state, available values: UNKNOWN, STOPPED, START_PENDING, RUNNING, STOP_PENDING, PAUSE_PENDING, PAUSED, CONTINUE_PENDING
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page
"""
params = {
'df': df,
'name': name,
'state': state,
'cursor': cursor,
'pageSize': pageSize
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_WINDOWS_SERVICES}', params=params)
@return_response
def getLastLoggedOnUsersReport(self, df: str = None, cursor: str = None, pageSize: int = 1000):
"""Returns usernames and logon times
Keyword arguments:
df: str -- Device filter
cursor: str -- Cursor name
pageSize: int -- Limit number of records per page, default value: 1000
"""
params = {
'df': df,
'cursor': cursor,
'pageSize': pageSize
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_LOGGED_ON_USERS}', params=params)
@return_response
def getAntivirusStatusReport(self, df: str = None, ts: str = None, productState: str = None,
productName: str = None, cursor: str = None, pageSize: int = None):
        """Returns list of statuses of antivirus software installed on devices
Keyword arguments:
df: str -- Device filter
ts: str -- Monitoring timestamp filter
productState: str -- Product State filter
productName: str -- Product Name filter
cursor: str -- Cursor name
        pageSize: int -- Limit number of records per page
"""
params = {
'df': df,
'ts': ts,
'productState': productState,
'productName': productName,
'cursor': cursor,
'pageSize': pageSize
}
return self.api_get_request(f'{self.NINJA_API_QUERIES_ANTIVIRUS_STATUS}', params=params)
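Every method in `QueriesMixin` follows the same shape: build a dict of possibly-None keyword parameters, hand it to `api_get_request`, and let `@return_response` unwrap the result. A self-contained sketch of that pattern (the decorator and client bodies here are assumptions; the real implementations live in `ninjarmmpy.utils` and perform actual HTTP calls):

```python
import functools


def return_response(func):
    """Decorator that unwraps a (status, body) pair into just the body."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        status, body = func(*args, **kwargs)
        return body
    return wrapper


class FakeClient:
    """Stand-in for the client class that QueriesMixin is mixed into."""

    def api_get_request(self, path, params=None):
        # Drop parameters left at None so they never reach the query string,
        # mirroring how the params dicts above carry optional values.
        sent = {k: v for k, v in (params or {}).items() if v is not None}
        return 200, {"path": path, "params": sent}

    @return_response
    def getSoftware(self, df=None, cursor=None, pageSize=None):
        params = {"df": df, "cursor": cursor, "pageSize": pageSize}
        return self.api_get_request("/v2/queries/software", params=params)


client = FakeClient()
print(client.getSoftware(df="org=1", pageSize=25))
# {'path': '/v2/queries/software', 'params': {'df': 'org=1', 'pageSize': 25}}
```

Filtering out None values in one place keeps every query method down to two statements, which is why the methods above can stay so uniform.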
# ---- bbpyp/interpreter_state_machine/__init__.py (repo: BloggerBust/bbpyp, license: Apache-2.0) ----
from bbpyp.__nspkg_meta__ import __version__
from bbpyp.interpreter_state_machine.interpreter_state_machine_ioc_container import InterpreterStateMachineIocContainer
# ---- Searcher/__init__.py (repo: ronhashjr/FinanceDatabase, license: MIT) ----
# Modules
from .json_picker import select_cryptocurrencies
from .json_picker import select_currencies
from .json_picker import select_etfs
from .json_picker import select_equities
from .json_picker import select_funds
from .json_picker import select_indices
from .json_picker import select_other
from .json_options import show_options
from .json_options import search_products
# ---- infrastructor/api/ResourceBase.py (repo: muhammetbolat/pythondataintegrator, license: MIT) ----
from flask_restplus import Resource
class ResourceBase(Resource):
pass
# ---- S2D_models/__init__.py (repo: JamesPerlman/Dain-App, license: MIT) ----
from .S2DF import *
# ---- wok/data/mongo/__init__.py (repos: globusgenomics/galaxy, chris-zen/phd-thesis) ----
from mongo import MongoProvider
# ---- web.wsgi (repo: scott0228/epub_convert, license: MIT) ----
from web import app as application
# ---- src/datadog_api_client/v2/api/logs_archives_api.py (repo: MichaelTROEHLER/datadog-api-client-python, license: Apache-2.0) ----
# coding: utf-8
# Unless explicitly stated otherwise all files in this repository are licensed under the Apache-2.0 License.
# This product includes software developed at Datadog (https://www.datadoghq.com/).
# Copyright 2019-Present Datadog, Inc.
import re # noqa: F401
import sys # noqa: F401
from datadog_api_client.v2.api_client import ApiClient, Endpoint
from datadog_api_client.v2.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from datadog_api_client.v2.model.api_error_response import APIErrorResponse
from datadog_api_client.v2.model.logs_archive import LogsArchive
from datadog_api_client.v2.model.logs_archive_create_request import LogsArchiveCreateRequest
from datadog_api_client.v2.model.logs_archive_order import LogsArchiveOrder
from datadog_api_client.v2.model.logs_archives import LogsArchives
from datadog_api_client.v2.model.relationship_to_role import RelationshipToRole
from datadog_api_client.v2.model.roles_response import RolesResponse
class LogsArchivesApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def __add_read_role_to_archive(
self,
archive_id,
**kwargs
):
"""Grant role to an archive # noqa: E501
Adds a read role to an archive. ([Roles API](https://docs.datadoghq.com/api/v2/roles/)) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.add_read_role_to_archive(archive_id, async_req=True)
>>> result = thread.get()
Args:
archive_id (str): The ID of the archive.
Keyword Args:
body (RelationshipToRole): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
                _check_input_type (bool): specifies if type checking
                    should be done on the data sent to the server.
                    Default is True.
                _check_return_type (bool): specifies if type checking
                    should be done on the data received from the server.
                    Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['archive_id'] = \
archive_id
return self.call_with_http_info(**kwargs)
self.add_read_role_to_archive = Endpoint(
settings={
'response_type': None,
'auth': [
'apiKeyAuth',
'appKeyAuth'
],
'endpoint_path': '/api/v2/logs/config/archives/{archive_id}/readers',
'operation_id': 'add_read_role_to_archive',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'archive_id',
'body',
],
'required': [
'archive_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'archive_id':
(str,),
'body':
(RelationshipToRole,),
},
'attribute_map': {
'archive_id': 'archive_id',
},
'location_map': {
'archive_id': 'path',
'body': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client,
callable=__add_read_role_to_archive
)
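Each `Endpoint` above is a declarative description of one route: path template, required parameters, and where each parameter goes (`location_map`). A stripped-down sketch of that idea under hypothetical names (`MiniEndpoint` is not part of `datadog_api_client`, whose real `Endpoint` additionally handles auth, type checking, and serialization):

```python
class MiniEndpoint:
    """Declarative route description: path template + parameter locations."""

    def __init__(self, path, required, location_map):
        self.path = path
        self.required = required
        self.location_map = location_map

    def build(self, **params):
        missing = [p for p in self.required if p not in params]
        if missing:
            raise ValueError(f"missing required parameters: {missing}")
        # Substitute path parameters; anything mapped to 'query' is collected
        # into the query dict, mirroring params_map/location_map above.
        path = self.path
        query = {}
        for name, value in params.items():
            if self.location_map.get(name) == "path":
                path = path.replace("{" + name + "}", str(value))
            elif self.location_map.get(name) == "query":
                query[name] = value
        return path, query


ep = MiniEndpoint(
    path="/api/v2/logs/config/archives/{archive_id}",
    required=["archive_id"],
    location_map={"archive_id": "path"},
)
print(ep.build(archive_id="abc"))
# ('/api/v2/logs/config/archives/abc', {})
```

Keeping the route description as data rather than code is what lets the OpenAPI generator emit these methods mechanically: only the maps change from endpoint to endpoint.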
def __create_logs_archive(
self,
body,
**kwargs
):
"""Create an archive # noqa: E501
Create an archive in your organization. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_logs_archive(body, async_req=True)
>>> result = thread.get()
Args:
body (LogsArchiveCreateRequest): The definition of the new archive.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
                _check_input_type (bool): specifies if type checking
                    should be done on the data sent to the server.
                    Default is True.
                _check_return_type (bool): specifies if type checking
                    should be done on the data received from the server.
                    Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
LogsArchive
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['body'] = \
body
return self.call_with_http_info(**kwargs)
self.create_logs_archive = Endpoint(
settings={
'response_type': (LogsArchive,),
'auth': [
'apiKeyAuth',
'appKeyAuth'
],
'endpoint_path': '/api/v2/logs/config/archives',
'operation_id': 'create_logs_archive',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'body',
],
'required': [
'body',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'body':
(LogsArchiveCreateRequest,),
},
'attribute_map': {
},
'location_map': {
'body': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client,
callable=__create_logs_archive
)
def __delete_logs_archive(
self,
archive_id,
**kwargs
):
"""Delete an archive # noqa: E501
Delete a given archive from your organization. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_logs_archive(archive_id, async_req=True)
>>> result = thread.get()
Args:
archive_id (str): The ID of the archive.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
                _check_input_type (bool): specifies if type checking
                    should be done on the data sent to the server.
                    Default is True.
                _check_return_type (bool): specifies if type checking
                    should be done on the data received from the server.
                    Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['archive_id'] = \
archive_id
return self.call_with_http_info(**kwargs)
self.delete_logs_archive = Endpoint(
settings={
'response_type': None,
'auth': [
'apiKeyAuth',
'appKeyAuth'
],
'endpoint_path': '/api/v2/logs/config/archives/{archive_id}',
'operation_id': 'delete_logs_archive',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'archive_id',
],
'required': [
'archive_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'archive_id':
(str,),
},
'attribute_map': {
'archive_id': 'archive_id',
},
'location_map': {
'archive_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__delete_logs_archive
)
def __get_logs_archive(
self,
archive_id,
**kwargs
):
"""Get an archive # noqa: E501
Get a specific archive from your organization. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_logs_archive(archive_id, async_req=True)
>>> result = thread.get()
Args:
archive_id (str): The ID of the archive.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
                _check_input_type (bool): specifies if type checking
                    should be done on the data sent to the server.
                    Default is True.
                _check_return_type (bool): specifies if type checking
                    should be done on the data received from the server.
                    Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
LogsArchive
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['archive_id'] = \
archive_id
return self.call_with_http_info(**kwargs)
self.get_logs_archive = Endpoint(
settings={
'response_type': (LogsArchive,),
'auth': [
'apiKeyAuth',
'appKeyAuth'
],
'endpoint_path': '/api/v2/logs/config/archives/{archive_id}',
'operation_id': 'get_logs_archive',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'archive_id',
],
'required': [
'archive_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'archive_id':
(str,),
},
'attribute_map': {
'archive_id': 'archive_id',
},
'location_map': {
'archive_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_logs_archive
)
def __get_logs_archive_order(
self,
**kwargs
):
"""Get archive order # noqa: E501
Get the current order of your archives. This endpoint takes no JSON arguments. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_logs_archive_order(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it is used as the total request timeout.
It can also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
LogsArchiveOrder
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.get_logs_archive_order = Endpoint(
settings={
'response_type': (LogsArchiveOrder,),
'auth': [
'apiKeyAuth',
'appKeyAuth'
],
'endpoint_path': '/api/v2/logs/config/archive-order',
'operation_id': 'get_logs_archive_order',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_logs_archive_order
)
def __list_archive_read_roles(
self,
archive_id,
**kwargs
):
"""List read roles for an archive # noqa: E501
Returns all read roles a given archive is restricted to. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_archive_read_roles(archive_id, async_req=True)
>>> result = thread.get()
Args:
archive_id (str): The ID of the archive.
Keyword Args:
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it is used as the total request timeout.
It can also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
RolesResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['archive_id'] = \
archive_id
return self.call_with_http_info(**kwargs)
self.list_archive_read_roles = Endpoint(
settings={
'response_type': (RolesResponse,),
'auth': [
'apiKeyAuth',
'appKeyAuth'
],
'endpoint_path': '/api/v2/logs/config/archives/{archive_id}/readers',
'operation_id': 'list_archive_read_roles',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'archive_id',
],
'required': [
'archive_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'archive_id':
(str,),
},
'attribute_map': {
'archive_id': 'archive_id',
},
'location_map': {
'archive_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__list_archive_read_roles
)
def __list_logs_archives(
self,
**kwargs
):
"""Get all archives # noqa: E501
Get the list of configured logs archives with their definitions. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_logs_archives(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it is used as the total request timeout.
It can also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
LogsArchives
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.list_logs_archives = Endpoint(
settings={
'response_type': (LogsArchives,),
'auth': [
'apiKeyAuth',
'appKeyAuth'
],
'endpoint_path': '/api/v2/logs/config/archives',
'operation_id': 'list_logs_archives',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__list_logs_archives
)
def __remove_role_from_archive(
self,
archive_id,
**kwargs
):
"""Revoke role from an archive # noqa: E501
Removes a role from an archive. ([Roles API](https://docs.datadoghq.com/api/v2/roles/)) # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.remove_role_from_archive(archive_id, async_req=True)
>>> result = thread.get()
Args:
archive_id (str): The ID of the archive.
Keyword Args:
body (RelationshipToRole): [optional]
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it is used as the total request timeout.
It can also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['archive_id'] = \
archive_id
return self.call_with_http_info(**kwargs)
self.remove_role_from_archive = Endpoint(
settings={
'response_type': None,
'auth': [
'apiKeyAuth',
'appKeyAuth'
],
'endpoint_path': '/api/v2/logs/config/archives/{archive_id}/readers',
'operation_id': 'remove_role_from_archive',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'archive_id',
'body',
],
'required': [
'archive_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'archive_id':
(str,),
'body':
(RelationshipToRole,),
},
'attribute_map': {
'archive_id': 'archive_id',
},
'location_map': {
'archive_id': 'path',
'body': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client,
callable=__remove_role_from_archive
)
def __update_logs_archive(
self,
archive_id,
body,
**kwargs
):
"""Update an archive # noqa: E501
Update a given archive configuration. **Note**: Using this method updates your archive configuration by **replacing** your current configuration with the new one sent to your Datadog organization. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_logs_archive(archive_id, body, async_req=True)
>>> result = thread.get()
Args:
archive_id (str): The ID of the archive.
body (LogsArchiveCreateRequest): New definition of the archive.
Keyword Args:
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it is used as the total request timeout.
It can also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
LogsArchive
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['archive_id'] = \
archive_id
kwargs['body'] = \
body
return self.call_with_http_info(**kwargs)
self.update_logs_archive = Endpoint(
settings={
'response_type': (LogsArchive,),
'auth': [
'apiKeyAuth',
'appKeyAuth'
],
'endpoint_path': '/api/v2/logs/config/archives/{archive_id}',
'operation_id': 'update_logs_archive',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'archive_id',
'body',
],
'required': [
'archive_id',
'body',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'archive_id':
(str,),
'body':
(LogsArchiveCreateRequest,),
},
'attribute_map': {
'archive_id': 'archive_id',
},
'location_map': {
'archive_id': 'path',
'body': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client,
callable=__update_logs_archive
)
def __update_logs_archive_order(
self,
body,
**kwargs
):
"""Update archive order # noqa: E501
Update the order of your archives. Since logs are processed sequentially, reordering an archive may change the structure and content of the data processed by other archives. **Note**: Using the `PUT` method updates your archive's order by replacing the current order with the new one. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_logs_archive_order(body, async_req=True)
>>> result = thread.get()
Args:
body (LogsArchiveOrder): An object containing the new ordered list of archive IDs.
Keyword Args:
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it is used as the total request timeout.
It can also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
LogsArchiveOrder
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['body'] = \
body
return self.call_with_http_info(**kwargs)
self.update_logs_archive_order = Endpoint(
settings={
'response_type': (LogsArchiveOrder,),
'auth': [
'apiKeyAuth',
'appKeyAuth'
],
'endpoint_path': '/api/v2/logs/config/archive-order',
'operation_id': 'update_logs_archive_order',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'body',
],
'required': [
'body',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'body':
(LogsArchiveOrder,),
},
'attribute_map': {
},
'location_map': {
'body': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client,
callable=__update_logs_archive_order
)
| 36.881833 | 311 | 0.466293 | 3,997 | 45,881 | 5.099575 | 0.062047 | 0.032233 | 0.025511 | 0.026493 | 0.88044 | 0.866801 | 0.856694 | 0.855173 | 0.848109 | 0.841829 | 0 | 0.004271 | 0.453935 | 45,881 | 1,243 | 312 | 36.911504 | 0.809292 | 0.342015 | 0 | 0.695652 | 0 | 0 | 0.219097 | 0.044147 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013285 | false | 0 | 0.013285 | 0 | 0.039855 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
df3075c3bb47505ab1c73f32571ad7d290f68e1a | 47 | py | Python | Class_04.py | chandrakant1991/pythonnew | cff0dc90d7d57f6de4aa4b7aff69740a355d8b27 | [
"MIT"
] | null | null | null | Class_04.py | chandrakant1991/pythonnew | cff0dc90d7d57f6de4aa4b7aff69740a355d8b27 | [
"MIT"
] | null | null | null | Class_04.py | chandrakant1991/pythonnew | cff0dc90d7d57f6de4aa4b7aff69740a355d8b27 | [
"MIT"
] | null | null | null | print('hello word')
print('hello chandrakant')
| 15.666667 | 26 | 0.744681 | 6 | 47 | 5.833333 | 0.666667 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 2 | 27 | 23.5 | 0.813953 | 0 | 0 | 0 | 0 | 0 | 0.574468 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
df4fd0ef50da193806c886d0e73110d5ab31beea | 18,110 | py | Python | IDEAS/Resources/src/fluid/heatpumps/calibration/PythonModel/compressors.py | JavierArroyoBastida/IDEAS | d8df09206d90451f8a5910aa5780f363573ecd8c | [
"BSD-3-Clause"
] | 87 | 2015-01-13T10:48:28.000Z | 2022-02-07T12:46:06.000Z | IDEAS/Resources/src/fluid/heatpumps/calibration/PythonModel/compressors.py | JavierArroyoBastida/IDEAS | d8df09206d90451f8a5910aa5780f363573ecd8c | [
"BSD-3-Clause"
] | 871 | 2015-01-02T09:14:43.000Z | 2022-03-28T20:22:25.000Z | IDEAS/Resources/src/fluid/heatpumps/calibration/PythonModel/compressors.py | JavierArroyoBastida/IDEAS | d8df09206d90451f8a5910aa5780f363573ecd8c | [
"BSD-3-Clause"
] | 45 | 2015-01-12T13:51:45.000Z | 2022-03-14T08:01:40.000Z | from __future__ import division, print_function, absolute_import
class ReciprocatingCompressor(object):
""" Object for reciprocating compressor model based on Jin (2002):
H. Jin. Parameter estimation based models of water source heat pumps.
PhD Thesis. Oklahoma State University. Stillwater, Oklahoma, USA. 2012.
:param pisDis: Piston displacement (m3/s).
:param cleFac: Clearance factor (-).
:param etaEle: Electro-mechanical efficiency (-).
:param PLos: Constant part of the power losses (W).
:param pDro: Pressure drop at compressor suction and discharge (Pa).
:param dTSup: Degree of superheating (K).
"""
def __init__(self, parameters):
self.pisDis = parameters[0]
self.cleFac = parameters[1]
self.etaEle = parameters[2]
self.PLos = parameters[3]
self.pDro = parameters[4]
self.dTSup = parameters[5]
self.NPar = 6
return
def get_SuctionTemperature(self, TEva):
""" Evaluate the suction temperature.
:param TEva: Evaporating temperature (K).
:return: Suction temperature (K).
Usage: Type
>>> com = ReciprocatingCompressor([0.00162, 0.069, 0.696, 100.0, 99.29e3, 9.82])
>>> '%.2f' % com.get_SuctionTemperature(283.15)
'292.97'
"""
# Apply superheating to evaporating temperature
TSuc = TEva + self.dTSup
return TSuc
def get_SuctionPressure(self, pEva):
""" Evaluate the suction pressure.
:param pEva: Evaporating pressure (Pa).
:return: Suction pressure (Pa).
Usage: Type
>>> com = ReciprocatingCompressor([0.00162, 0.069, 0.696, 100.0, 99.29e3, 9.82])
>>> '%.1f' % com.get_SuctionPressure(1.083e6)
'983710.0'
"""
# Apply pressure drop at compressor suction
pSuc = pEva - self.pDro
return pSuc
def get_DischargePressure(self, pCon):
"""Evaluate the discharge pressure (Pa).
:param pCon: Condensing pressure (Pa).
:return: Discharge pressure (Pa).
Usage: Type
>>> com = ReciprocatingCompressor([0.00162, 0.069, 0.696, 100.0, 99.29e3, 9.82])
>>> '%.1f' % com.get_DischargePressure(1.879e6)
'1978290.0'
"""
# Apply pressure drop at compressor discharge
pDis = pCon + self.pDro
return pDis
def get_RefrigerantMassFlowRate(self, vSuc, ref, pDis, pSuc, TSuc,
**kargs):
"""Evaluate the refrigerant mass flow rate.
:param vSuc: Suction specific volume (m3/kg).
:param ref: Refrigerant model.
:param pDis: Discharge pressure (Pa).
:param pSuc: Suction pressure (Pa).
:param TSuc: Suction temperature (K).
:return: Refrigerant mass flow rate (kg/s).
Usage: Type
>>> import refrigerants
>>> ref = refrigerants.R410A()
>>> com = ReciprocatingCompressor([0.00162, 0.069, 0.696, 100.0, 99.29e3, 9.82])
>>> '%.8f' % com.get_RefrigerantMassFlowRate(0.0288, ref, 1978290.0, 983710.0, 292.97)
'0.05358166'
"""
# Evaluate refrigerant mass flow rate
k = ref.get_IsentropicExponent_vT(v=vSuc, T=TSuc)
PR = max(0.0, pDis/pSuc)
m_flow = self.pisDis/vSuc * (1.0 + self.cleFac
- self.cleFac * (PR)**(1.0/k))
return m_flow
def get_Power(self, vSuc, ref, pDis, pSuc, TSuc, **kargs):
""" Evaluate the power input to the compressor.
:param vSuc: Suction specific volume (m3/kg).
:param ref: Refrigerant model.
:param pDis: Discharge pressure (Pa).
:param pSuc: Suction pressure (Pa).
:param TSuc: Suction temperature (K).
:return: Power input to the compressor (W).
Usage: Type
>>> import refrigerants
>>> ref = refrigerants.R410A()
>>> com = ReciprocatingCompressor([0.00162, 0.069, 0.696, 100.0, 99.29e3, 9.82])
>>> '%.2f' % com.get_Power(0.0288, ref, 1978290.0, 983710.0, 292.97)
'1765.63'
"""
# Evaluate compressor power consumption
k = ref.get_IsentropicExponent_vT(v=vSuc, T=TSuc)
PR = max(0.0, pDis/pSuc)
m_flow = self.get_RefrigerantMassFlowRate(vSuc=vSuc, ref=ref,
pDis=pDis, pSuc=pSuc,
TSuc=TSuc)
PThe = k/(k - 1.0) * m_flow * pSuc * vSuc * ((PR)**((k - 1.0)/k) - 1.0)
P = PThe / self.etaEle + self.PLos
return P
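The two formulas implemented above — the clearance-volume mass flow rate and the isentropic compression power of Jin (2002) — can be exercised in isolation. The sketch below uses the doctest pressures and specific volume from this file, but the isentropic exponent `k = 1.3` is an illustrative round number, not a validated R410A property value:

```python
def recip_mass_flow(pis_dis, v_suc, cle_fac, p_dis, p_suc, k):
    # m_flow = (V_dis / v_suc) * (1 + c - c * PR**(1/k))
    pr = max(0.0, p_dis / p_suc)
    return pis_dis / v_suc * (1.0 + cle_fac - cle_fac * pr ** (1.0 / k))


def recip_theoretical_power(m_flow, p_suc, v_suc, k, p_dis):
    # P_the = k/(k-1) * m_flow * p_suc * v_suc * (PR**((k-1)/k) - 1)
    pr = max(0.0, p_dis / p_suc)
    return k / (k - 1.0) * m_flow * p_suc * v_suc * (pr ** ((k - 1.0) / k) - 1.0)
```

As the pressure ratio rises, re-expansion of the clearance gas reduces the delivered mass flow, so `recip_mass_flow` is decreasing in `p_dis` for fixed suction conditions.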
def initialGuessParameters(self, Q_nominal, P_nominal, TSou_nominal,
TLoa_nominal, ref, CoolingMode):
""" Initialize guess parameters for calibration of the heat pump model.
:param Q_nominal: Nominal heat pump capacity (W).
:param P_nominal: Nominal power input (W).
:param TSou_nominal: Source-side water temperature at
nominal conditions (K).
:param TLoa_nominal: Load-side water temperature at
nominal conditions (K).
:param ref: Refrigerant model.
:param CoolingMode: Boolean, True if heat pump is in cooling mode.
:return: A list of parameters to the compressor model, a list of tuples
of the bounds of the parameters (min, max) for the calibration
routine.
"""
# Initialize guess parameters for the reciprocating compressor
# Temperature difference between EWT and evaporating temperature
dTEva = 5.0
# Temperature difference between EWT and condensing temperature
dTCon = 5.0
if CoolingMode:
TEva = TLoa_nominal - dTEva
TCon = TSou_nominal + dTCon
QEva = - Q_nominal
else:
TEva = TSou_nominal - dTEva
TCon = TLoa_nominal + dTCon
QEva = P_nominal - Q_nominal
pEva = ref.get_SaturatedVaporPressure(TEva)
pCon = ref.get_SaturatedVaporPressure(TCon)
hA = ref.get_SaturatedVaporEnthalpy(TEva)
hB = ref.get_SaturatedLiquidEnthalpy(TEva)
cleFac = 0.05
etaEle = 0.8
PLos = 0.05 * P_nominal
pDro = 100.0e3
dTSup = 8.0
pDis = pCon + pDro
pSuc = pEva - pDro
TSuc = TEva + dTSup
vSuc = ref.get_VaporSpecificVolume(pSuc, TSuc)
kSuc = ref.get_IsentropicExponent_vT(vSuc, TSuc)
m_flow = -QEva / (hA - hB)
pisDis = m_flow * vSuc / (1.0 + cleFac
- cleFac * (pDis/pSuc)**(1.0/kSuc))
# Note: the property-based estimates above are superseded by the
# following simple heuristic guesses.
pisDis = 1.5e-7 * Q_nominal
cleFac = 0.05
etaEle = 0.8
PLos = 0.1 * P_nominal
pDro = 100.0e3
dTSup = 5.0
bounds = [(0., None), (0., 1.), (0., 1.),
(0., 0.2*P_nominal), (0., None), (0., 10.)]
return [pisDis, cleFac, etaEle, PLos, pDro, dTSup], bounds
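The guesses and `(min, max)` bound tuples returned above are meant to seed a constrained optimizer during calibration. A quick sanity check (an illustrative helper, not part of this model) that each guess respects its bound, with `None` meaning unbounded on that side:

```python
def check_within_bounds(guesses, bounds):
    # Return True if every initial guess lies within its (min, max)
    # bound; None on either side means that side is unbounded.
    for x, (lo, hi) in zip(guesses, bounds):
        if lo is not None and x < lo:
            return False
        if hi is not None and x > hi:
            return False
    return True
```

Running this on the parameter set used in the doctests above (with, say, a hypothetical `P_nominal` of 2 kW so the power-loss bound is 400 W) confirms the guesses are feasible starting points.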
def modelicaModelPath(self):
""" Returns the full path to the compressor model in the Buildings
library.
:return: Full path to the compressor model in the IBPSA library.
Usage: Type
>>> com = ReciprocatingCompressor([0.00162, 0.069, 0.696, 100.0, 99.29e3, 9.82])
>>> com.modelicaModelPath()
'IBPSA.Fluid.HeatPumps.Compressors.ReciprocatingCompressor'
"""
return 'IBPSA.Fluid.HeatPumps.Compressors.ReciprocatingCompressor'
def printParameters(self):
""" Prints the value of the model parameters.
"""
print('Piston displacement : ' + str(self.pisDis) + ' m3/s')
print('Clearance factor : ' + str(self.cleFac) + ' ')
print('Electro-mechanical efficiency : ' + str(self.etaEle) + ' ')
print('Constant part of power losses : ' + str(self.PLos) + ' W')
print('Suction and discharge pressure drop : ' + str(self.pDro) + ' Pa')
print('Amplitude of superheating : ' + str(self.dTSup) + ' K\n')
return
def reinitializeParameters(self, parameters):
""" Reinitializes the compressor using new parameters.
:param pisDis: Piston displacement (m3/s).
:param cleFac: Clearance factor (-).
:param etaEle: Electro-mechanical efficiency (-).
:param PLos: Constant part of the power losses (W).
:param pDro: Pressure drop at compressor suction and discharge (Pa).
:param dTSup: Degree of superheating (K).
"""
self.pisDis = parameters[0]
self.cleFac = parameters[1]
self.etaEle = parameters[2]
self.PLos = parameters[3]
self.pDro = parameters[4]
self.dTSup = parameters[5]
return
class ScrollCompressor(object):
""" Object for scroll compressor model based on Jin (2002):
H. Jin. Parameter estimation based models of water source heat pumps.
PhD Thesis. Oklahoma State University. Stillwater, Oklahoma, USA. 2002.
:param volRat: Volume ratio (-).
:param v_flow: Nominal Volume flow rate (m3/s).
:param leaCoe: Leakage coefficient (kg/s).
:param etaEle: Electro-mechanical efficiency (-).
:param PLos: Constant part of the power losses (W).
:param dTSup: Degree of superheating (K).
"""
def __init__(self, parameters):
self.volRat = parameters[0]
self.v_flow = parameters[1]
self.leaCoe = parameters[2]
self.etaEle = parameters[3]
self.PLos = parameters[4]
self.dTSup = parameters[5]
self.NPar = 6
return
def get_SuctionTemperature(self, TEva):
""" Evaluate the suction temperature.
:param TEva: Evaporating temperature (K).
:return: Suction temperature (K).
Usage: Type
>>> com = ScrollCompressor([2.362, 0.00287, 0.0041, 0.922, 398.7, 6.49])
>>> '%.2f' % com.get_SuctionTemperature(283.15)
'289.64'
"""
# Apply superheating to evaporating temperature
TSuc = TEva + self.dTSup
return TSuc
def get_SuctionPressure(self, pEva):
""" Evaluate the suction pressure.
:param pEva: Evaporating pressure (Pa).
:return: Suction pressure (Pa).
Usage: Type
>>> com = ScrollCompressor([2.362, 0.00287, 0.0041, 0.922, 398.7, 6.49])
>>> '%.1f' % com.get_SuctionPressure(1.083e6)
'1083000.0'
"""
# No pressure drop at compressor suction
pSuc = pEva
return pSuc
def get_DischargePressure(self, pCon):
"""Evaluate the discharge pressure (Pa).
:param pCon: Condensing pressure (Pa).
:return: Discharge pressure (Pa).
Usage: Type
>>> com = ScrollCompressor([2.362, 0.00287, 0.0041, 0.922, 398.7, 6.49])
>>> '%.1f' % com.get_DischargePressure(1.879e6)
'1879000.0'
"""
# No pressure drop at compressor discharge
pDis = pCon
return pDis
def get_RefrigerantMassFlowRate(self, vSuc, pDis, pSuc, **kargs):
"""Evaluate the refrigerant mass flow rate.
:param vSuc: Suction specific volume (m3/kg).
:param pDis: Discharge pressure (Pa).
:param pSuc: Suction pressure (Pa).
:param TSuc: Suction temperature (K).
:return: Refrigerant mass flow rate (kg/s).
Usage: Type
>>> com = ScrollCompressor([2.362, 0.00287, 0.0041, 0.922, 398.7, 6.49])
>>> '%.6f' % com.get_RefrigerantMassFlowRate(0.025, 1.879e6, 1.083e6)
'0.107687'
"""
# Evaluate refrigerant mass flow rate
m_leak = self._leakageMassFlowRate(pDis, pSuc)
m_flow = self.v_flow/vSuc - m_leak
return m_flow
def get_Power(self, vSuc, ref, pDis, pSuc, TSuc):
""" Evaluate the power input to the compressor.
:param vSuc: Suction specific volume (m3/kg).
:param ref: Refrigerant model.
:param pDis: Discharge pressure (Pa).
:param pSuc: Suction pressure (Pa).
:param TSuc: Suction temperature (K).
:return: Power input to the compressor (W).
Usage: Type
>>> import refrigerants
>>> ref = refrigerants.R410A()
>>> com = ScrollCompressor([2.362, 0.00287, 0.0041, 0.922, 398.7, 6.49])
>>> '%.2f' % com.get_Power(0.025, ref, 1.879e6, 1.083e6, 289.64)
'2940.26'
"""
# Evaluate compressor power consumption
k = ref.get_IsentropicExponent_vT(v=vSuc, T=TSuc)
PR = max(0.0, pDis/pSuc) # External pressure ratio
PRInt = self.volRat**k # Built-in pressure ratio
PThe = k/(k - 1.0) * pSuc * self.v_flow \
* (((k - 1.0)/k) * PR/self.volRat
+ 1.0/k * PRInt**((k - 1.0)/k) - 1.0)
P = PThe / self.etaEle + self.PLos
return P
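Unlike the reciprocating model, the scroll compressor's power law combines the external pressure ratio with a built-in pressure ratio `volRat**k` fixed by the scroll geometry. A standalone sketch of the expression above, again with an illustrative isentropic exponent `k = 1.3` rather than a computed refrigerant property:

```python
def scroll_theoretical_power(p_suc, v_flow, vol_rat, p_dis, k):
    # P_the = k/(k-1) * p_suc * V_flow *
    #         ( (k-1)/k * PR/volRat + (1/k) * PRInt**((k-1)/k) - 1 )
    pr = max(0.0, p_dis / p_suc)       # external pressure ratio
    pr_int = vol_rat ** k              # built-in pressure ratio
    return (k / (k - 1.0) * p_suc * v_flow
            * (((k - 1.0) / k) * pr / vol_rat
               + (1.0 / k) * pr_int ** ((k - 1.0) / k) - 1.0))
```

Only the first term depends on the discharge pressure, so the theoretical power grows linearly with `p_dis` once suction conditions and geometry are fixed.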
def set_ModelicaParameters(self, simulator, suffix=''):
""" Set parameter values for simulation in dymola.
:param simulator: Simulator object (BuildinsPy)
:param suffix: String to add at the end of parameter names.
:return: Simulator object (BuildingsPy)
"""
parameters = {'volRat'+suffix: self.volRat,
'V_flow_nominal'+suffix: self.v_flow,
'leaCoe'+suffix: self.leaCoe,
'etaEle'+suffix: self.etaEle,
'PLos'+suffix: self.PLos,
'dTSup'+suffix: self.dTSup}
simulator.addParameters(parameters)
return simulator
def initialGuessParameters(self, Q_nominal, P_nominal, TSou_nominal,
TLoa_nominal, ref, CoolingMode):
""" Initialize guess parameters for calibration of the heat pump model.
:param Q_nominal: Nominal heat pump capacity (W).
:param P_nominal: Nominal power input (W).
:param TSou_nominal: Source-side water temperature at
nominal conditions (K).
:param TLoa_nominal: Load-side water temperature at
nominal conditions (K).
:param ref: Refrigerant model.
:param CoolingMode: Boolean, True if heat pump is in cooling mode.
:return: A list of parameters to the compressor model, a list of tuples
of the bounds of the parameters (min, max) for the calibration
routine.
"""
# Initialize guess parameters for the scroll compressor
dTEva = 5.0 # Temp. difference between EWT and evaporating temp.
dTCon = 5.0 # Temp. difference between EWT and condensing temp.
dTSup = 4.0
if CoolingMode:
TEva = TLoa_nominal - dTEva
TCon = TSou_nominal + dTCon
QEva = -Q_nominal
else:
TEva = TSou_nominal - dTEva
TCon = TLoa_nominal + dTCon
QEva = (P_nominal - Q_nominal)
pEva = ref.get_SaturatedVaporPressure(TEva)
pCon = ref.get_SaturatedVaporPressure(TCon)
hA = ref.get_SaturatedVaporEnthalpy(TEva)
hB = ref.get_SaturatedLiquidEnthalpy(TEva)
TSuc = TEva + dTSup
vSuc = ref.get_VaporSpecificVolume(pEva, TSuc)
kSuc = ref.get_IsentropicExponent_vT(vSuc, TSuc)
volRat = (pCon/pEva)**(1.0/kSuc)
m_flow = -QEva / (hA - hB)
m_leak = 0.01*m_flow
v_flow = (m_flow + m_leak) * vSuc
PThe = kSuc/(kSuc - 1.0) * pEva * v_flow \
* ((pCon/pEva)**((kSuc - 1.0)/kSuc) - 1.0)
etaEle = 0.95
PLos = max(etaEle * P_nominal - PThe, 0.0)
leaCoe = m_leak / (pCon/pEva)
# bounds = [(1., None), (0., None), (0., 1.),
# (0., 1.), (0., None), (0., None)]
bounds = [(1.5, 3.5), (0., None), (1.0e-4, 1.),
(0., 1.), (0., 0.25*P_nominal), (0., 10.)]
return [volRat, v_flow, leaCoe, etaEle, PLos, dTSup], bounds
def modelicaModelPath(self):
""" Returns the full path to the compressor model in the Buildings
library.
:return: Full path to the compressor model in the IBPSA library.
Usage: Type
>>> com = ScrollCompressor([2.362, 0.00287, 0.0041, 0.922, 398.7, 6.49])
>>> com.modelicaModelPath()
'IBPSA.Fluid.HeatPumps.Compressors.ScrollCompressor'
"""
return 'IBPSA.Fluid.HeatPumps.Compressors.ScrollCompressor'
def printParameters(self):
""" Prints the value of the model parameters.
"""
print('Volume ratio : ' + str(self.volRat) + ' ')
print('Volume flow rate : ' + str(self.v_flow) + ' m3/s')
print('Leakage coefficient : ' + str(self.leaCoe) + ' kg/s')
print('Electro-mechanical efficiency : ' + str(self.etaEle) + ' ')
print('Constant part of power losses : ' + str(self.PLos) + ' W')
print('Amplitude of superheating : ' + str(self.dTSup) + ' K\n')
return
def reinitializeParameters(self, parameters):
""" Reinitializes the compressor using new parameters.
:param volRat: Volume ratio (-).
:param v_flow: Nominal Volume flow rate (m3/s).
:param leaCoe: Leakage coefficient (kg/s).
:param etaEle: Electro-mechanical efficiency (-).
:param PLos: Constant part of the power losses (W).
:param dTSup: Degree of superheating (K).
"""
self.volRat = parameters[0]
self.v_flow = parameters[1]
self.leaCoe = parameters[2]
self.etaEle = parameters[3]
self.PLos = parameters[4]
self.dTSup = parameters[5]
return
def _leakageMassFlowRate(self, pDis, pSuc):
    """ Evaluates the leakage mass flow rate.

    :param pDis: Discharge pressure (Pa).
    :param pSuc: Suction pressure (Pa).
    :return: Leakage mass flow rate (kg/s).
    """
    m_leak = self.leaCoe*pDis/pSuc
    return m_leak
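As a standalone usage sketch of the leakage relation `m_leak = leaCoe * pDis / pSuc`: the snippet below uses a minimal stub in place of the full compressor class, with an illustrative leakage coefficient (not a calibrated value):

```python
class _LeakageDemo:
    """Minimal stub exposing only the leakage coefficient (illustrative)."""

    def __init__(self, leaCoe):
        self.leaCoe = leaCoe  # leakage coefficient (kg/s)

    def _leakageMassFlowRate(self, pDis, pSuc):
        # Leakage mass flow rate scales linearly with the pressure ratio.
        return self.leaCoe * pDis / pSuc


com = _LeakageDemo(leaCoe=0.0041)  # kg/s, illustrative
m_leak = com._leakageMassFlowRate(pDis=2.0e6, pSuc=0.5e6)
print(m_leak)  # 0.0041 * 4 = 0.0164 kg/s at pressure ratio 4
```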

# --- tatau_core/node/estimator/__init__.py (makar21/core, Apache-2.0) ---
from .worker_estimator_node import WorkerEstimator
from .verifier_estimator_node import VerifierEstimator

# --- pysemcor/__init__.py (letuananh/pysemcor, MIT) ---
# -*- coding: utf-8 -*-
from .semcorxml import SemcorXML
from .semcorxml import TokenInfo, FileSet
__all__ = ["SemcorXML", "TokenInfo", "FileSet"]

# --- conditional_independence/suffstats/__init__.py (uhlerlab/conditional_independence, BSD-3-Clause) ---
from .ci_suffstats import *
from .invariance_suffstats import *
| 21.333333 | 35 | 0.8125 | 8 | 64 | 6.25 | 0.625 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 64 | 2 | 36 | 32 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
33a3e162cb611263587c45a970d7c095521aceef | 37 | py | Python | di_baseline/my_submission/policy/__init__.py | lichuminglcm/GoBigger-Challenge-2021 | db9e4c0e555b103d41d3bd843dbed55bcc3945e6 | [
"Apache-2.0"
] | null | null | null | di_baseline/my_submission/policy/__init__.py | lichuminglcm/GoBigger-Challenge-2021 | db9e4c0e555b103d41d3bd843dbed55bcc3945e6 | [
"Apache-2.0"
] | null | null | null | di_baseline/my_submission/policy/__init__.py | lichuminglcm/GoBigger-Challenge-2021 | db9e4c0e555b103d41d3bd843dbed55bcc3945e6 | [
"Apache-2.0"
] | null | null | null | from .gobigger import GoBiggerPolicy
| 18.5 | 36 | 0.864865 | 4 | 37 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
33c83d2ade35f79e9bb72d28044a2fa48eda0e78 | 128 | py | Python | backend/chess_skill/constants.py | mosure/assistant-skill-chess | 9f5094905625b562ec5aba114a6fb8a8dc094c37 | [
"MIT"
] | 1 | 2021-05-31T20:44:28.000Z | 2021-05-31T20:44:28.000Z | backend/chess_skill/constants.py | mosure/chess-assistant-skill | 9f5094905625b562ec5aba114a6fb8a8dc094c37 | [
"MIT"
] | null | null | null | backend/chess_skill/constants.py | mosure/chess-assistant-skill | 9f5094905625b562ec5aba114a6fb8a8dc094c37 | [
"MIT"
] | null | null | null | import os
FRONTEND_URL = os.environ.get('FRONTEND_URL')


def frontend_url_with_hash(hash):
    return f'{FRONTEND_URL}#{hash}'
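A usage sketch of the f-string URL builder above. Because the module reads `FRONTEND_URL` at import time, the environment variable must be set beforehand; the URL and hash below are illustrative placeholders, not values from the project:

```python
import os

# Illustrative placeholder; the real deployment would set this externally.
os.environ['FRONTEND_URL'] = 'https://example.com/board'

FRONTEND_URL = os.environ.get('FRONTEND_URL')


def frontend_url_with_hash(hash):
    # Append the state hash as a URL fragment after '#'.
    return f'{FRONTEND_URL}#{hash}'


print(frontend_url_with_hash('abc123'))  # https://example.com/board#abc123
```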
| 18.285714 | 45 | 0.75 | 20 | 128 | 4.5 | 0.55 | 0.488889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117188 | 128 | 6 | 46 | 21.333333 | 0.79646 | 0 | 0 | 0 | 0 | 0 | 0.257813 | 0.164063 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
1d4cfacb0023e58519ad8c28d4e6521a48f5964b | 58 | py | Python | django_sorcery/formsets/__init__.py | shosca/django-sorcery | 1d16c7affe7b8cc8185b7c2ff312ee13efe8f23a | [
"MIT"
] | 73 | 2018-05-04T12:44:49.000Z | 2022-02-16T23:32:04.000Z | django_sorcery/formsets/__init__.py | shosca/django-sorcery | 1d16c7affe7b8cc8185b7c2ff312ee13efe8f23a | [
"MIT"
] | 119 | 2018-05-07T14:15:59.000Z | 2022-03-27T02:29:03.000Z | django_sorcery/formsets/__init__.py | shosca/django-sorcery | 1d16c7affe7b8cc8185b7c2ff312ee13efe8f23a | [
"MIT"
] | 9 | 2018-08-06T18:50:09.000Z | 2021-07-30T08:01:25.000Z | from .base import * # noqa
from .inline import * # noqa